My Alexa Display Template Skill Design & Coding Deep Dive series continues with what many of you have been waiting for: the Visual Tarot Alexa skill code.
What’s Included In The Visual Tarot Skill
All of the following techniques are employed in, and illustrated by, the skill's heavily commented code.
Use of sessionAttributes to persist skill data in-session
Select random, non-consecutive items from an array (virtual coin toss)
Display.RenderTemplate directive, used with ListTemplate2 and BodyTemplate2
ElementSelected to capture user touch selections from Display Template screens
selectNumberIntent to capture spoken list number selections from Display Template screens
Use of SSML tags for outputSpeech
Play welcome/instruction message only on first screen load
Drill-down through up to three menu/list screen levels
Control user navigation back and forth through screens
SessionEndedRequest cleanup to address Lambda latency issues
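To illustrate the first item, here is a minimal sketch of persisting data in-session via `sessionAttributes`: whatever the skill puts in the response's `sessionAttributes` object is echoed back by Alexa on the next request in the same session. The helper and handler names below are mine, not the skill's actual code, and the example also shows the SSML `outputSpeech` shape mentioned later in the list.

```javascript
// Sketch: persisting in-session state via sessionAttributes.
// Hypothetical helper names; not the Visual Tarot skill's actual code.
function buildResponse(speech, sessionAttributes, shouldEndSession = false) {
  return {
    version: '1.0',
    sessionAttributes, // Alexa echoes this object back on the next request
    response: {
      outputSpeech: { type: 'SSML', ssml: `<speak>${speech}</speak>` },
      shouldEndSession
    }
  };
}

// Read attributes from the incoming request, update them, and echo them back.
function handleDrawCard(request) {
  const attrs = (request.session && request.session.attributes) || {};
  attrs.cardsDrawn = (attrs.cardsDrawn || 0) + 1;
  return buildResponse(`You have drawn ${attrs.cardsDrawn}.`, attrs);
}
```

Each turn reads the attributes Alexa sends back, so the counter survives across requests without any external storage for the life of the session.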
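The "virtual coin toss" item can be sketched as a draw that re-rolls whenever it lands on the previously used index, so the same array item is never chosen twice in a row (function name and approach are my illustration, not necessarily the skill's exact implementation):

```javascript
// Sketch: pick a random index from an array, never the same one twice in a row.
// Pass the previously used index (or -1/undefined on the first draw).
function pickNonConsecutive(items, lastIndex) {
  if (items.length < 2) return 0; // only one choice possible
  let idx;
  do {
    idx = Math.floor(Math.random() * items.length);
  } while (idx === lastIndex); // re-roll until it differs from the last pick
  return idx;
}
```

Storing the returned index in `sessionAttributes` is what makes the "non-consecutive" guarantee hold across turns.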
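A `Display.RenderTemplate` directive is just JSON attached to the response. The field shapes below follow Amazon's Display Interface reference for `ListTemplate2`; the card data and builder function are made-up placeholders:

```javascript
// Sketch: building a Display.RenderTemplate directive for ListTemplate2.
// Structure follows the Display Interface reference; card data is illustrative.
function buildListTemplate2(title, cards) {
  return {
    type: 'Display.RenderTemplate',
    template: {
      type: 'ListTemplate2',
      token: 'cardList',
      backButton: 'VISIBLE',
      title,
      listItems: cards.map((card, i) => ({
        token: `card_${i}`, // returned to the skill in Display.ElementSelected
        image: { sources: [{ url: card.imageUrl }] },
        textContent: {
          primaryText: { type: 'PlainText', text: card.name }
        }
      }))
    }
  };
}
```

The per-item `token` is the key detail: it is what comes back when the user touches that item on screen.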
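Touch and voice selection can funnel into one code path: an `ElementSelected` request carries the item's token, while a spoken ordinal arrives as a number slot on an intent. A hypothetical sketch, assuming tokens of the form `card_N` and a slot named `number` on `selectNumberIntent`:

```javascript
// Sketch: resolve either a touch (Display.ElementSelected) or a spoken
// list number (selectNumberIntent) to the same zero-based list index.
// Token format and slot name are assumptions for illustration.
function resolveSelection(request, listLength) {
  if (request.type === 'Display.ElementSelected') {
    // Token was set to e.g. "card_2" when the list template was built.
    return parseInt(request.token.split('_')[1], 10);
  }
  if (request.type === 'IntentRequest' &&
      request.intent.name === 'selectNumberIntent') {
    const spoken = parseInt(request.intent.slots.number.value, 10); // 1-based
    if (spoken >= 1 && spoken <= listLength) return spoken - 1;
  }
  return null; // no valid selection in this request
}
```

Whatever the input modality, the rest of the skill only ever sees an index (or `null`), which keeps the drill-down logic in one place.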
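Playing the welcome message only on first load can hinge on the `session.new` flag Alexa sends with the first request of a session. The message text here is a placeholder, not the skill's actual copy:

```javascript
// Sketch: speak the welcome/instructions only on the session's first request,
// using the session.new flag. Message wording is a placeholder.
function openingSpeech(request) {
  if (request.session && request.session.new) {
    return 'Welcome. Touch a card, or say its item number.';
  }
  return null; // mid-session: skip the instructions and go straight to content
}
```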
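On the last item: a warm Lambda container reuses module scope between invocations, so per-session state held outside the handler can leak into the next session. One plausible shape for the cleanup is resetting that state in the `SessionEndedRequest` handler; the state variable here is hypothetical:

```javascript
// Sketch: reset module-level state on SessionEndedRequest, because a reused
// (warm) Lambda container keeps module scope between sessions.
// `cachedDeck` is a hypothetical example of state that could leak.
let cachedDeck = null;

function handleSessionEnded() {
  cachedDeck = null; // start the next session in a reused container clean
  // A SessionEndedRequest response may not include speech or directives.
  return { version: '1.0', response: {} };
}
```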
Get The Skill, Then Check Out The Code
This series of posts will continue with a detailed walkthrough of the skill’s design and coding, but I’m sure many who are reading this just want to jump directly to looking at the skill and its code. You’ll need an Alexa device with a screen to use the Visual Tarot skill. As of this writing, on 1/1/18, that means an Echo Show or Echo Spot.
Ask Alexa to “enable Visual Tarot”. As of this writing there are no other skills with the same name, but if you enable it by voice and find the skill doesn’t seem to match the code I’m sharing, you’ll need to go into the skill store on the Amazon site or in the Alexa app and enable it from there. Click here to jump to Visual Tarot’s product page in Amazon’s US skill store (here for the UK skill store). The skill is available in all regions where English is the supported language for Alexa and Alexa devices with screens are available.
The code and supporting documentation are available in the Visual_Tarot Github repository.
I suggest trying out the skill to see what it can do and how it works, then looking at the repository for the underlying code and speech assets.
That’s it for today. I’ll be back here next week to continue the deep dive.