AlexaDev Tuesday: Managing The Alexa Interaction Model

Keeping it short and practical today: two tips for managing the Alexa interaction model. When the user could say literally anything to Alexa while interacting with your skill, what can you do to avoid dead ends in that conversation?

 

[Image: Bad Conversation]

1. Start from the assumption that virtually everything the user says to Alexa during your skill’s session will NOT align with your mapped utterances.

It’s typical to begin an Alexa interaction model design by listing the actions/intents included in the code, then trying to figure out all the things the user might say to get that action/intent to fire (e.g., “Help,” “Help me,” “I need help,” and so on for the Help intent). There are two challenges with this approach. First, it’s impossible to anticipate every possible way every potential user might attempt to request the same action. Second, dealing with user speech that’s not included in the interaction model becomes a game of whack-a-mole during beta testing.

Consider the opposite approach. Instead of thinking in terms of, “If the user speaks a valid utterance, fire this intent,” think in terms of, “Unless the user speaks a valid utterance, trap the error.” In other words, think of your error trap as the default interaction—not as an exception handler.

Start with your error responses, and make them just as conversational as the rest of your interaction model. Don’t just inform the user an error has occurred and kill the session. Give the user the opportunity to escape the error loop by keeping the session open and including one or more acceptable utterances in your error response.
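To make that concrete, here’s a minimal sketch of an error trap written with the ASK SDK for Python and the built-in AMAZON.FallbackIntent (a custom catch-all intent works the same way). The suggested phrases (“give me a tip”) are made up; substitute your own valid utterances.

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class CatchAllErrorTrapHandler(AbstractRequestHandler):
    """The default interaction: fires whenever the user's speech doesn't
    map to one of the skill's real intents."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speech = ("Sorry, I didn't quite get that. You can say things like "
                  "'give me a tip', or say 'help' to hear your options. "
                  "What would you like to do?")
        reprompt = "Try saying 'give me a tip', or say 'help' to hear your options."

        # .ask() sets a reprompt and keeps the session open, so the user
        # gets another turn instead of hitting a dead end.
        return (handler_input.response_builder
                .speak(speech)
                .ask(reprompt)
                .response)
```

Because the fallback only fires when nothing else matches, your real intents still take priority; the error response just stops being a conversation killer.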

Then test, and test some more. Test with real words and made-up words. Test with foreign words, if you know any. Ensure your custom error loops fire every time. Verify the user is never exposed to a canned Alexa error response from Amazon (or even worse, the dreaded error tone and blue light flash) so long as they’re attempting to interact with the skill.

Finally, move on to your utterances and intents. You don’t have to worry about unanticipated utterances now.
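Once the error trap is in place, each mapped intent gets a handler of its own. Here’s a sketch for the built-in AMAZON.HelpIntent mentioned earlier (the response wording is invented), registered with the SkillBuilder alongside the error trap from step 1:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_intent_name


class HelpIntentHandler(AbstractRequestHandler):
    """Fires when the user's speech maps to the built-in AMAZON.HelpIntent."""

    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.HelpIntent")(handler_input)

    def handle(self, handler_input):
        speech = ("This skill gives you a daily tip. Say 'give me a tip' to "
                  "hear one, or say 'stop' to exit.")
        # Keep the session open here too, so even help never becomes a dead end.
        return (handler_input.response_builder
                .speak(speech)
                .ask("Say 'give me a tip', or say 'stop' to exit.")
                .response)


sb = SkillBuilder()
sb.add_request_handler(HelpIntentHandler())
# ...register your other intent handlers here, plus the error trap from step 1.
handler = sb.lambda_handler()
```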

 

2. Anticipate utterance near misses, and include them in your interaction model.

Remember this hilarious Saturday Night Live sketch, advertising the Echo Silver edition created specifically for seniors?

Brainstorm all the different ways users who are trying to interact with your skill could get the intended utterance wrong. Ask non-techie friends and family how they might ask the skill to do this or that thing, without any advance knowledge of the interaction model you’ve designed. Add utterances for all the most likely near misses, and link them to the associated intents in your interaction model.
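For illustration, the near misses simply become extra sample utterances on the same intent in your interaction model. The intent name, invocation name, and phrases below are all hypothetical; the Python just prints the JSON schema used by the Alexa developer console and ASK CLI.

```python
import json

# Canonical utterances for a hypothetical GetTipIntent, plus the near misses
# gathered from non-techie testers. Every one of them maps to the same intent.
get_tip_intent = {
    "name": "GetTipIntent",
    "slots": [],
    "samples": [
        # canonical phrasing
        "give me a tip",
        "tell me a tip",
        # near misses heard during testing
        "give me some tips",
        "what's the tip",
        "i want a tip",
        "can you give me a tip please",
    ],
}

# Drop the intent into the skill's interaction model.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "tip of the day",
            "intents": [get_tip_intent],
        }
    }
}

print(json.dumps(interaction_model, indent=2))
```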

A problem prevented is a problem solved.

 


 
