Amazon Alexa API

Amazon recently released the Alexa Skills Kit (ASK), an SDK that allows developers to integrate Alexa (the voice service that powers Amazon Echo) into their own applications. Amazon offers a collection of tools, APIs, reference solutions, and documentation to make it easier to build for Alexa, and "Alexa built-in" is a category of devices with Alexa integrated directly. With the System Function APIs in ASK, you can expand your Alexa skill's capabilities using additional Alexa features such as lists.

[Video: Alexa Custom Skill: Fetch data from webservice]

Setting up and consuming APIs from Amazon Alexa via SAP API Management

In this blog post, I will illustrate how to use SAP API Management to access and manage APIs of your systems. Using a simple example application hosted on SAP Cloud Platform, I will demonstrate the basics of API Management and how to leverage its advantages. For that, we will create a simple AWS-hosted Alexa skill, although anything that can run code and has access to the internet will work just as well. This serves the purpose of creating a simple sandbox environment, which we can use to explore the basics.

In the context of this blog post, we will mainly be working with two SAP environments: the API Portal and the Developer Portal. The API Portal is where you design, create, and configure APIs to your backend or third-party systems. Once you have created APIs, you can publish sets of one or more APIs as products. For developers to consume APIs, products can be subscribed to in the Developer Portal. This way, developers are automatically assigned things such as API keys and can thus consume the previously configured APIs.

Prerequisites

You will need the following:

With that, all the necessary prerequisites should be fulfilled.

Configuring SAP API Management

Setting up the API Provider

First, we will configure an API Provider for the tinyCAP app you set up as part of the prerequisites. On the API Portal navigate to Configure. You will see an overview of your API Providers like this, although if this is your first time using the API Portal, no API Providers will be listed.

[Image: API Portal: Configure]

To create a new API Provider, which will act as a middle layer between our tinyCAP app and our Alexa skill, click on Create. You will first be asked to provide a name, “demo”, and an optional description, for which you can choose whatever you like. Once that is done, advance to Connection, where the interesting bits of setting up the API Provider happen. Enter the following attributes as your connection settings:

Property                 Value
Type                     Internet
Host                     (your application host)
Port                     443
Use SSL                  true
Trust Store              (leave empty)
Key Store Certificate    (leave empty)

[Image: API Provider: Connection]

If you don’t know the host of your application, you can use the Cloud Foundry CLI to display an overview of all your deployed apps and their hosts.

Now you should be left with a setup similar to this one, just with a different host.

All that’s left to do is configuring the Path Prefix under Catalog Service Settings. If you have already played around a little with the tinyCAP app, you will know that a catalogue of all available resources can be accessed under a dedicated path. For now, we will enter:

Property                Value
Path Prefix             (your catalogue path)
Service Collection URL  (leave empty)
Trust All               false
Authentication Type     None

To check whether your configuration works, click on the URL provided under Catalog URL, which should take you straight to an overview of all available resources of your tinyCAP app in JSON format. You can save and test your connection now.

[Image: API Provider: Catalogue Settings]

Setting up the API

To expose our previously configured API Provider, we now need to create an API. On the API Portal, navigate to Develop. Once again, if this is your first time working with the API Portal, no APIs will be listed here.

[Image: API Portal: Develop]

Click on Create. You will be prompted with a pop-up requiring you to fill out all the necessary details to set up your API. Fill out the required fields as shown below.

Property           Value
Select             API Provider
API Provider       demo
Link API Provider  true
URL                (generated automatically)
Name               demo
Title              First API
Description        First API using demo API Provider
Host Alias         (choose one of the options)
API Base Path      (your base path)
Service Type       OData

The Host Alias and API Base Path determine the URL through which the API can be accessed. The Host Alias should be filled out automatically; as for the Base Path, just enter something simple. Make sure to save and deploy your API before continuing.

[Image: API Portal: Create API (1/2)]

[Image: API Portal: Create API (2/2)]

To test whether the setup of your API has been successful, click on the API you just created and open the API Proxy URL in your web browser. It should display the same data as the Catalog URL from the last step.

Working with the API

We are almost done with the API part of this blog post. Why almost? We have already shown that we can access the data in our web browser through the API we created. To demonstrate the last thing that is preventing us from simply making an HTTP GET request, we will set up a simple Node application like this:

Note that we make use of the axios npm package, so make sure to install it first. To check if our code is working, we will initially send a request to the tinyCAP app directly, without using API Management, by setting the URL to the app's host.

Running your program using node should output the same data you can see when opening the URL in your web browser. Great, so that works! Now let’s try the same thing with the API we set up, by setting the URL to whatever you configured in the previous step.

Running the program now will surprisingly result in an SSL error. Looking for a solution online suggests adding appropriate root certificates via the ssl-root-cas npm package; the solution, however, isn’t quite as simple. To get the request working, we manually need to add the PEM chain to Node. In Firefox, open the URL of your API. Click on the small lock icon to the left of the URL bar and navigate to Connection Secure > Show Connection Details > More Information. Under Security / Website Identity, click on View Certificate. Under Miscellaneous, you can download the PEM (chain).

Alternatively, you can download the .pem file here.

Save it in the same folder your Node project is in. To add the PEM chain to our project, we will update our code to look like the following:

If you copy and paste this code, make sure to adjust the file name and baseURL accordingly. If we run the program now, we will see the same data we can see in our web browser.

If you prefer using Postman, you can also add the .pem file to Postman by navigating to File > Settings > Certificates.

Also, don’t forget to install the new dependency first.

Alexa Skill

First, create a new skill on the Alexa Developer Console. Enter a Skill name of your liking and choose whatever flavour of English you prefer. Choose “Custom” as your model and “Alexa-Hosted (Node.js)” as the backend hosting method, so we can apply what we learned earlier directly to our skill. If prompted whether you want to add a template to your skill, choose the “Hello World” template. Create your skill; this might take a while.

[Image: Alexa: Create Skill]

On the left side, under the Build tab, navigate to Custom > Invocation and choose a Skill Invocation Name. I will be using “demo two”, since I was too lazy to delete the first one. Now navigate to Custom > Intents. Here you should see a list of a few Amazon default intents, as well as a template intent (e.g. HelloWorldIntent) if you chose to add a template to your skill while creating it. Click on + Add Intent, enter “GetApiDataIntent” as your intent name, and create your custom intent.

Ever wondered how to convert abbreviations consisting of multiple upper case characters into a camel case name (“API” or “Api”)? There are some general guidelines by Microsoft.

As a next step, we need to enter sample utterances, which describe what the user might say to invoke the intent we just created. Since we want to access data, we will go with something like “show me my data” and “show me api data”. Of course, you are free to choose additional or different utterances. Just make sure you end up calling the correct intent(s), going with something too simple such as “show me data” might result in some default Amazon intents being triggered instead of yours. Make sure to save and build your model.

[Image: Alexa: Sample utterances]

Now we will switch over to the Code tab and add some custom code to make the intent behave. First, we need to add our new intent handler. Copy and paste the code snippet into the index.js file. Also, make sure to export the handler at the bottom of the file.

Then update the handler registration at the bottom of the file to include the new intent handler:
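The snippet itself is missing from this copy; a minimal ASK SDK v2-style handler for the new intent might look like this (the spoken text is a placeholder):

```javascript
// Handler for the custom GetApiDataIntent created in the Build tab.
const GetApiDataIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'GetApiDataIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('I will fetch your data soon.')
      .getResponse();
  },
};

// At the bottom of index.js, the new handler is registered alongside the
// template's defaults, e.g.:
//
// exports.handler = Alexa.SkillBuilders.custom()
//   .addRequestHandlers(
//     LaunchRequestHandler,
//     GetApiDataIntentHandler, // <-- our new intent
//     HelpIntentHandler,
//     CancelAndStopIntentHandler,
//     SessionEndedRequestHandler)
//   .lambda();
```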

Save and deploy your code. Switch to the Test tab to check whether everything is working up until now. You might need to switch to development mode to enable testing.

Great! Now let’s add some functionality to our intent. Since we need the axios and ssl-root-cas npm packages, we will add them to our skill’s dependencies in the package.json file.
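As a sketch, the dependencies section of package.json might look like the following (the version ranges are assumptions; the Alexa-hosted template already pins its own versions for the ASK SDK):

```json
{
  "dependencies": {
    "ask-sdk-core": "^2.7.0",
    "ask-sdk-model": "^1.19.0",
    "aws-sdk": "^2.326.0",
    "axios": "^0.19.0",
    "ssl-root-cas": "^1.3.1"
  }
}
```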

Upon saving and deploying our skill, all required dependencies from package.json will automatically be installed.

And now: certificates. In the lambda folder, create a new file for the certificate chain and copy and paste the content of your previously downloaded chain into it. Switch back to index.js and add the required code to the top of the file:

Replace the handle function of the GetApiDataIntentHandler with the following:

Notice the async keyword for the handle function. Save and deploy your changes. Testing our skill should now result in a nicely formatted output of available resources, and indeed that is the case.

Let’s recap

We now have an openly available API to an application running on SCP and an Amazon Alexa Skill that accesses this API.

However, to truly leverage the advantages of SAP API Management, we will need to introduce some further concepts, such as policies and basic authentication, since we don’t want our data to be available to everyone. As of now, the only thing our API and API Provider do is simple request/response forwarding.

You can download all the code I used to run my Alexa skill from this repo. If you want to use this code, feel free to import it into your environment; just make sure to adjust any names/strings to your needs.

Further reading

Source: https://blogs.sap.com/2020/07/27/setting-up-and-consuming-apis-from-amazon-alexa-via-sap-api-management/

How do I configure Alexa to access a REST API?

See these instructions.

Create an Amazon developer account and an AWS account.

In the AWS console

  • Create a Lambda function. Include in the Lambda function some code that will access the API. This can be written in Python, Java, or Node.js.

Here is a Python script; change the value to be either 1 or 0.

  • Make sure to publish a new version (copy the ARN in the top right; you'll need it later)

[Image: AWS Lambda code]

  • Set the trigger to Alexa Skills Kit

[Image: AWS trigger configuration]


In the developer console

  • Create a skill [Image: create skill dialog]

  • Create an Interaction Model with an intent and a sample utterance [Image: intent schema dialog]

  • Link the endpoint

[Image: endpoint configuration dialog]

You can skip the last two steps. The skill will run in development mode, and only you will be able to access it. Complete the last two steps only if you want to share your skill with anyone in the world.

Source: https://iot.stackexchange.com/questions/306/how-do-i-configure-alexa-to-access-a-rest-api
  • ^ "Custom Skills". developer.amazon.com. Retrieved March 2, 2018.
  • ^Gagliordi, Natalie (April 19, 2018). "Amazon intros Blueprints, code free templates to create Alexa skills". ZDNet. Retrieved April 19, 2018.
  • ^Romano, Benjamin (February 19, 2019). "Amazon lets amateurs publish custom Alexa apps to reach broad audiences". The Star Online.
  • ^"Alexa Voice Service". developer.amazon.com. Retrieved March 2, 2018.
  • ^Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  • ^Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (23): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
  • ^Vogels, Werner (November 30, 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS". All Things Distributed.
  • ^Baig, Edward C. "Want to work at McDonald's? Ask Alexa or the Google Assistant for help". USA TODAY. Retrieved September 25, 2019.
  • ^Kelion, Leo (September 25, 2019). "Amazon Alexa gets Samuel L Jackson's voice". Retrieved September 26, 2019.
  • ^"Alexa Can Now Speak Spanish in the US". MakeUseOf. Retrieved October 13, 2019.
  • ^Arora, Akhil (September 14, 2020). "Amitabh Bachchan to Be Alexa's First Indian Celebrity Voice". Retrieved September 14, 2020.
  • ^"AWS Announces Three New Amazon AI Services". businesswire.com. Business Wire. November 30, 2016. Retrieved December 1, 2016.
  • ^Barr, Jeff (November 30, 2016). "Amazon Lex – Build Conversational Voice & Text Interfaces". aws.amazon.com. Amazon. Retrieved December 1, 2016.
  • ^"Amazon announces Echo, a $199 voice-driven home assistant". Ars Technica. Retrieved November 17, 2014.
  • ^"How private is Amazon Echo?". Slashgear.com. Retrieved November 17, 2014.
  • ^"Amazon Alexa". Alexa.amazon.com. Retrieved August 2, 2016.
  • ^"Amazon Now An Open Book On Search Warrants And Subpoenas".
  • ^"Watch Alexa rap with Too Many T's in this interactive music video – TechCrunch". techcrunch.com.
  • ^Crowley, James (December 24, 2019). "Woman says Amazon's Alexa told her to stab herself in the heart for "the greater good"". Newsweek.
Source: https://en.wikipedia.org/wiki/Amazon_Alexa

    The Amazon Alexa API Mashup Contest

    The challenge

    We are happy to announce the Amazon Alexa API Mashup Contest, our newest challenge with Hackster.io. To compete, you’ll build a compelling new voice experience by connecting your favorite public API to Alexa, the brain behind millions of Alexa-enabled devices, including Amazon Echo. The contest will award prizes for the most creative and most useful API mashups.

    Create great skills that report on ski conditions, connect to local businesses, or even read recent messages from your Slack channel. If you have an idea for something that should be powered by voice, build the Alexa skill to make it happen. APIs used in the contest should be public. If you are not sure where to start, you can check out this list of public APIs on GitHub.

    Need Real-World Examples?

    • Ask Automatic if you need gas.
    • Ask Hurricane Center what are the current storms.
    • Ask Area Code where is eight six zero.
    • Ask Uber to request a ride.

    How to Win

    Submit your projects for API combos to this contest for a chance to win. You don't need an Echo (or any other hardware) to participate. Besides, if you place in the contest, we’ll give you an Echo (plus a bunch of other stuff!)

    We’re looking for the most creative and most useful API mashups. A great contest submission will tell a great story, have a target audience in mind, and make people smile.

    There will be three winners for each category; categories are: 1) the most creative API mashup and 2) the most useful API mashup.

    • First place will get a trophy, Amazon Echo, Echo Dot, Amazon Tap, and $1,500 gift card.
    • Second place will get a trophy, Amazon Echo, and $1,000 gift card.
    • Third place will get a trophy, Amazon Echo, and $500 gift card.

    The first 50 people to publish skills on Alexa and this contest page (other than winners of this contest) will receive a $100 gift card. And everyone who publishes an Alexa skill can get a limited-edition Alexa developer hoodie.

    [Image: Hackster skills]

    About the Alexa Skills Kit

    The Alexa Skills Kit (ASK) enables developers to easily build capabilities, called skills, for Alexa. ASK includes self-service APIs, documentation, templates and code samples to get developers on a rapid road to publishing their Alexa skills. For the Amazon Alexa API Mashup Contest, we will award developers who make the most creative and the most useful API mashups using ASK components.

    Here’s how to participate in the contest:

    1. Create a free Hackster account

    2. Register to participate in the contest on this page

    3. Create an Amazon Developer account using the same email you used for your Hackster account

    4. Design, build, and submit your Amazon Alexa skill

    5. Submit your project on this contest page

    Project submissions should include:

    • A link to your published Amazon Alexa skill (If applicable)
    • Story and high-quality images
    • Clear project documentation including VUI diagram

    Don't have an Echo?

    The Alexa Skill Testing Tool (EchoSim.io) by iQuarius Media is a browser-based interface to Alexa, the voice service that powers Amazon Echo. EchoSim.io is intended to allow developers who are working with the Alexa Skills Kit (ASK) to test skills in development.

    To use the Alexa Skill Testing Tool

    1. Navigate to https://Echosim.io

    2. Log in with your Amazon account.

    3. Click and hold the microphone button and speak a command as you would on the Echo. For example, say, “Alexa, what's the weather today?”

    4. When you let go of the button, EchoSim processes and responds to your voice command.

    5. To speak your next command, simply click and hold the microphone button again.

    Note: Some features of the hardware Amazon Echo, such as streaming music and far-field voice recognition, will not function with this tool.

    Technical Resources:

    Getting started with the Alexa Skills Kit

    Account linking

    Helpful Projects on Hackster:

    Alexa Hurricane Center uses Weather Underground's API to give you info on hurricanes and tropical storms.

    Opening Bell uses Markit's API to give you current stock prices.

    Daily Cutiemals uses Flickr's API to send you pictures of cute animals each day.

    Amazon will select winners based on the following criteria:

    Most Creative API Mashup

    • Use of Voice User Interface (VUI) best practices (10 points)
    • Story/Instruction – Show how you created your project, including images, screenshots, and/or video (20 Points)
    • Project Documentation including VUI diagram (10 Points)
    • Code – Include working code with helpful comments (10 Points)
    • Published Alexa Skill (20 Points) (Skill must be published between the contest start and end dates. Read more about skill submission criteria.)

    Most Useful API Mashup

    • Use of Voice User Interface (VUI) best practices (10 points)
    • Story/Instruction – Show how you created your project, including images, screenshots, and/or video (20 Points)
    • Project Documentation including VUI diagram (10 Points)
    • Code – Include working code with helpful comments (10 Points)
    • Published Alexa Skill (20 Points) (Skill must be published between the contest start and end dates. Read more about skill submission criteria.)

    We can't wait to see your ideas in action.


    Prizes

    We are giving away tens of thousands of dollars in prizes to the top 57 projects! Our judges will pick the best qualifying 57 projects based on the judging criteria outlined in the rules section.

    Most Creative API Mashup (one winner per place):

    • 1st Place: Trophy, Amazon Echo, Echo Dot, Amazon Tap, $1,500 gift card ($1,860 value)
    • 2nd Place: Trophy, Amazon Echo, $1,000 gift card ($1,180 value)
    • 3rd Place: Trophy, Amazon Echo, $500 gift card ($680 value)

    Most Useful API Mashup (one winner per place):

    • 1st Place: Trophy, Amazon Echo, Echo Dot, Amazon Tap, $1,500 gift card ($1,860 value)
    • 2nd Place: Trophy, Amazon Echo, $1,000 gift card ($1,180 value)
    • 3rd Place: Trophy, Amazon Echo, $500 gift card ($680 value)

    First 50 Skills (50 winners): The first 50 people to publish skills on both Alexa and the Hackster contest page (other than winners of this contest) will receive a $100 gift card.

    Swag: When your skill is published, you may also be eligible to receive a free hoodie from Amazon ($20 value). [See details.](https://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion)
    Source: https://www.hackster.io/contests/alexa-api-contest

    VMware Cloud API Access

    Recently I was doing labs for the AWS Developer Associate exam when it occurred to me that some time ago, I read a VMware blog about using Amazon Alexa to invoke VMware Cloud Application Programming Interfaces (APIs). The post was Amazon Alexa and VMware Cloud on AWS by Gilles Chekroun, and I decided to give it a go. First up, credit to Gilles for all the code, and the process outlined below. The Alexa Developer Console has improved over the last couple of years, and therefore I have included some updated screenshots and tweaks. Finally, this is just a bit of fun!

    [Image: Alexa example]

    Let’s take a look at some of the services involved:

    AWS Lambda is a highly scalable serverless compute service, enabling customers to run application code on-demand without having to worry about any of the underlying infrastructure. Lambda supports multiple programming languages and uses functions to execute your code upon specific triggers. Event Sources are supported AWS services, or partner services used to trigger your Lambda functions with an operational event. You only pay for the compute power required when the function or code is running, which provides a cost-optimised solution for serverless environments.

    Alexa, named after the Great Library of Alexandria, is Amazon’s Artificial Intelligence (AI) based virtual assistant allowing users to make voice initiated requests or ask questions. Alexa works with echo devices to listen for a wake word, using deep learning technology running on the device, which starts the Alexa Voice Service. The Alexa Voice Service selects the correct Alexa Skill based on user intent. Intents are words, or phrases, users say to interact with skills. Skills can be used to send POST requests to Lambda endpoints, or HTTPS web service endpoints, performing logic and returning a response in JSON format. The JSON is converted to an output which is then relayed back via the echo device using text to speech synthesis. You can read more about using Alexa to invoke Lambda functions at Host a Custom Skill as an AWS Lambda Function from the Alexa Skills Kit documentation.
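The JSON response mentioned above has a small, well-defined shape; a minimal skill response looks something like this (the spoken text is illustrative):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "There are currently four hosts in your SDDC."
    },
    "shouldEndSession": true
  }
}
```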

    VMware Cloud APIs can be accessed at https://vmc.vmware.com/swagger/index.html#/, you need to be authenticated with a vmc.vmware.com account.

    [Image: VMware Cloud APIs]

    To use the VMware Cloud APIs, first generate an API token from the Cloud Provider Hub, under My Account, API Tokens.

    [Image: API token]

    Once an API token has been generated, it can be exchanged for an authentication token, or access token, by using a REST client to POST to:

    https://console.cloud.vmware.com/cphub/api/auth/v1/authn/accesstoken

    The body content type should be application/json, with {“refreshToken” : “your_generated_api_token“} included in the body of the request. A successful 200 message is returned, along with the access token. Further information can be found at Using VMware Cloud Provider Hub APIs from the VMware Cloud Provider Hub API Programming Guide, or in the API Explorer.

    The opening step is to log into the Alexa Developer Console and create a new skill. There are built-in skills for some scenarios like smart home interaction. In this instance, I am creating a custom skill.

    [Image: Alexa1]

    Next, I add my invocation name, which will be used to call the skill. I then import Gilles’ JSON file to populate the intents, which gives me the basis of some of the Software-Defined Data Centre (SDDC) commands, I add some extra sample dialog.

    [Image: Alexa2]

    In the Endpoint section, I take note of the Skill ID. The Skill ID will be used to invoke my Lambda function. Over in the AWS console, I open Lambda and create the function.

    [Image: Lambda1]

    I defined the trigger as an Alexa Skills Kit, and enable Skill ID verification with the Skill ID copied in the previous step.

    [Image: Lambda2]

    Since I have CloudTrail enabled, my API calls to Lambda will be forwarded to a CloudWatch Logs stream, which we’ll take a look at shortly. I also add a Simple Notification Service (SNS) topic to email me when the Lambda function is triggered.

    [Image: Lambda4]

    Next, I upload Gilles’ code in zip format, making a couple of tweaks to the available region settings, and the org ID, SDDC ID, and API token. The code is actually going to go ahead and exchange that API token for me.

    [Image: Lambda3]

    I run a simple test using a pre-configured test event from the Amazon Alexa Start Session event template. Then, make a note of the Amazon Resource Name (ARN) for the Lambda function in the top right corner.

    [Image: Lambda5]

    Back in the Alexa Developer Console, I can now set this Lambda ARN as the service endpoint. I save and build my skill model.

    [Image: Alexa3]

    In the Test section, I can use the invocation phrase defined by the Alexa Skill to start the demo, and my intents as words to trigger VMware Cloud API calls via Lambda. In the test below, I have added 2 additional hosts to my SDDC.

    [Image: Alexa4]

    Back in the AWS console, from the CloudWatch Logs stream, I can see the API calls being made to Lambda.

    [Image: CloudWatch Logs]

    In the VMware Cloud Provider Hub, the Adding host(s) task in progress message appears on the SDDC and the status changes to adding hosts. Following notification that the hosts were successfully added, I ask Alexa again what the SDDC status is, and the new capacity of 8 hosts is correctly reported back.

    [Image: VMC notification]
    [Image: SDDC status]


    Source: https://esxsi.com/2020/04/19/alexa/

    Amazon Alexa

    Voice assistant developed by Amazon

    For the subsidiary company of Amazon, see Alexa Internet.

    Amazon Alexa, also known simply as Alexa,[2] is a virtual assistant technology developed by Amazon, first used in the Amazon Echo smart speaker and the Echo Dot, Echo Studio and Amazon Tap speakers developed by Amazon Lab126. It is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, sports, and other real-time information, such as news.[3] Alexa can also control several smart devices using itself as a home automation system. Users are able to extend the Alexa capabilities by installing "skills" (additional functionality developed by third-party vendors, in other settings more commonly called apps) such as weather programs and audio features. It uses automatic speech recognition, natural language processing, and other forms of weak AI to perform these tasks.[4]

    Most devices with Alexa allow users to activate the device using a wake-word (such as Alexa or Amazon); other devices (such as the Amazon mobile app on iOS or Android and Amazon Dash Wand) require the user to click a button to activate Alexa's listening mode, although some phones also allow a user to say a command, such as "Alexa" or "Alexa wake".

    As of November 2018[update], Amazon had more than 10,000 employees working on Alexa and related products.[5] In January 2019, Amazon's devices team announced that they had sold over 100 million Alexa-enabled devices.[6]

    In September 2019, Amazon launched many new devices, setting several records while competing in the smart home industry. The new Echo Studio became the first smart speaker with 360-degree and Dolby sound. Other new devices included an Echo Dot with a clock behind the fabric, a new third-generation Amazon Echo, the Echo Show 8, a plug-in Echo device called Echo Flex, Alexa built-in wireless earphones called Echo Buds, Alexa built-in spectacles called Echo Frames, and an Alexa built-in ring called Echo Loop.

    History[edit]

    Amazon Alexa devices on display in a retail store

    Alexa was developed out of a predecessor named Ivona which was invented in Poland, inspired by 2001: A Space Odyssey and bought by Amazon in 2013.[7][8] In November 2014, Amazon announced Alexa alongside the Echo.[9] Alexa was inspired by the computer voice and conversational system on board the Starship Enterprise in science fiction TV series and movies, beginning with Star Trek: The Original Series and Star Trek: The Next Generation.[10]

    Amazon developers chose the name Alexa because it has a hard consonant with the X, which helps it be recognized with higher precision. They have said the name is reminiscent of the Library of Alexandria, which is also used by Amazon Alexa Internet for the same reason.[11][12][13] In June 2015, Amazon announced the Alexa Fund, a program that would invest in companies making voice control skills and technologies. The US$200 million fund has invested in companies including Jargon, Ecobee, Orange Chef, Scout Alarm, Garageio, Toymail, MARA, and Mojio.[14] In 2016, the Alexa Prize was announced to further advance the technology.

    In January 2017, the first Alexa Conference took place in Nashville, Tennessee, an independent gathering of the worldwide community of Alexa developers and enthusiasts.[15][16][17] Follow up conferences went under the name Project Voice and featured keynote speakers such as Amazon's Head of Education for Alexa, Paul Cutsinger.[18]

    At the Amazon Web Services re:Invent conference in Las Vegas, Amazon announced Alexa for Business and the ability for app developers to have paid add-ons to their skills.

    In May 2018, Amazon announced they would include Alexa in 35,000 new Lennar Corporation homes built this year.[19]

    In November 2018, Amazon opened its first Alexa-themed pop-up shop inside of Toronto's Eaton Centre, showcasing the use of home automation products with Amazon's smart speakers.[20] Amazon also sells Alexa devices at Amazon Books and Whole Foods Market locations, in addition to mall-based pop-ups throughout the United States.

    In December 2018, Alexa was built into the Anki Vector; this was the first major update for the Vector since its release in August 2018.

    As of 2018, interaction and communication with Alexa were available only in English, German, French,[21] Italian, Spanish, Portuguese, Japanese, and Hindi.[22] In Canada, Alexa is available in English and French (with the Quebec accent).[23][24]

    In October 2019, Amazon announced the expansion of Alexa to Brazil, in Portuguese, together with Bose, Intelbras, and LG.[25]

    App[edit]

    Logo for the Amazon Alexa app available on the App Store and Google Play

    A companion app is available from the Apple App Store, Google Play, and Amazon Appstore. The app can be used by owners of Alexa-enabled devices to install skills, control music, manage alarms, and view shopping lists.[26] It also allows users to review the recognized text on the app screen and to send feedback to Amazon concerning whether the recognition was good or bad. A web interface is also available to set up compatible devices (e.g., Amazon Echo, Amazon Dot, Amazon Echo Show).

    Functions[edit]

    Alexa can perform a number of preset functions out-of-the-box such as set timers, share the current weather, create lists, access Wikipedia articles, and many more things.[27] Users say a designated "wake word" (the default is simply "Alexa") to alert an Alexa-enabled device of an ensuing function command. Alexa listens for the command and performs the appropriate function, or skill, to answer a question or command. When questions are asked, Alexa converts sound waves into text which allows it to gather information from various sources. Behind the scenes, the data gathered is then sometimes passed to a variety of sources including WolframAlpha, IMDb, AccuWeather, Yelp, Wikipedia, and others[28] to generate suitable and accurate answers.[29] Alexa-supported devices can stream music from the owner's Amazon Music accounts and have built-in support for Pandora and Spotify accounts.[30] Alexa can play music from streaming services such as Apple Music and Google Play Music from a phone or tablet.

    In addition to performing pre-set functions, Alexa can also perform additional functions through third-party skills that users can enable.[31] Some of the most popular Alexa skills in 2018 included "Question of the Day" and "National Geographic Geo Quiz" for trivia; "TuneIn Live" to listen to live sporting events and news stations; "Big Sky" for hyper local weather updates; "Sleep and Relaxation Sounds" for listening to calming sounds; "Sesame Street" for children's entertainment; and "Fitbit" for Fitbit users who want to check in on their health stats.[32] In 2019, Apple, Google, Amazon, and Zigbee Alliance announced a partnership to make smart home products work together.[33]

    Technology advancements[edit]

    As of April 2019[update], Amazon had over 90,000 functions ("skills") available for users to download on their Alexa-enabled devices,[34] a massive increase from only 1,000 functions in June 2016.[35] Microsoft's AI Cortana became available on Alexa-enabled devices as of August 2018[update].[36] In 2018, Amazon rolled out a new "Brief Mode", wherein Alexa would begin responding with a beep sound rather than saying, "Okay", to confirm receipt of a command.[37] On December 20, 2018, Amazon announced a new integration with the Wolfram Alpha answer engine,[38] which provides enhanced accuracy for users asking questions of Alexa related to math, science, astronomy, engineering, geography, history, and more.

    Home automation[edit]

    In the home automation space, Alexa can interact with devices from several manufacturers including SNAS, Fibaro, Belkin, ecobee, Geeni, IFTTT,[39] Insteon, LIFX, LightwaveRF, Nest, Philips Hue, SmartThings, Wink,[40][41] and Yonomi.[42] The Home Automation feature was launched on April 8, 2015.[43] Developers are able to create their own smart home skills using the Alexa Skills Kit.

    In September 2018, Amazon announced a microwave oven that can be paired and controlled with an Echo device. It is sold under Amazon's AmazonBasics label.[44]

    Alexa can now pair with a Ring doorbell Pro and greet visitors and leave instructions about where to deliver packages.[45]

    Ordering[edit]

    Take-out food can be ordered using Alexa; as of May 2017[update] food ordering using Alexa is supported by Domino's Pizza, Grubhub, Pizza Hut, Seamless, and Wingstop.[46] Also, users of Alexa in the UK can order meals via Just Eat.[47] In early 2017, Starbucks announced a private beta for placing pick-up orders using Alexa.[48] In addition, users can order meals using Amazon Prime Now via Alexa in 20 major US cities.[49] With the introduction of Amazon Key in November 2017, Alexa also works together with the smart lock and the Alexa Cloud Cam included in the service to allow Amazon couriers to unlock customers' front doors and deliver packages inside.[50]

    According to an August 2018 article by The Information, only 2 percent of Alexa owners have used the device to make a purchase during the first seven months of 2018 and of those who made an initial purchase, 90 percent did not make a second purchase.[51]

    Music[edit]

    Alexa supports many subscription-based and free streaming services on Amazon devices. These streaming services include: Prime Music, Amazon Music, Amazon Music Unlimited, Apple Music, TuneIn, iHeartRadio, Audible, Pandora, and Spotify Premium. However, some of these music services are not available on other Alexa-enabled products that are manufactured by companies external of its services. This unavailability also includes Amazon's own Fire TV devices or tablets.[52]

    Alexa is able to stream media and music directly. To do this, the Alexa device must be linked to an Amazon account, which enables access to one's Amazon Music library, in addition to any audiobooks in one's Audible library. Amazon Prime members have the additional ability to access stations, playlists, and over two million songs free of charge. Amazon Music Unlimited subscribers also have access to a list of millions of songs.

    Amazon Music for PC allows one to play personal music from Google Play, iTunes, and others on an Alexa device. This can be done by uploading one's collection to My Music on Amazon from a computer. Up to 250 songs can be uploaded free of charge. Once this is done, Alexa can play this music and control playback through voice command options.

    Sports[edit]

    Amazon Alexa allows the user to hear updates on supported sports teams. Updates can be enabled by adding the teams to the list under Alexa's Sports Update section in the app.[53]

    The user is able to hear updates on 15 supported sports leagues:[53]

    • IPL - Indian Premier League
    • MLS - Major League Soccer
    • EPL/BPL - English Premier League/Barclays Premier League
    • NBA - National Basketball Association
    • NCAA men's basketball - National Collegiate Athletic Association
    • UEFA Champions League - Union of European Football Associations
    • FA Cup - Football Association Challenge Cup
    • MLB - Major League Baseball
    • NHL - National Hockey League
    • NCAA FBS football - National Collegiate Athletic Association: Football Bowl Subdivision
    • NFL - National Football League
    • 2. Bundesliga, Germany
    • WNBA - Women's National Basketball Association
    • 1. Bundesliga, Germany
    • WWE - World Wrestling Entertainment

    As of November 27, 2021, Echo Show 5 devices do not show upcoming games.

    Messaging and calls[edit]

    There are a number of ways messages can be sent from Alexa's application. Alexa can deliver messages to a recipient's Alexa application, as well as to all supported Echo devices associated with their Amazon account. Alexa can send typed messages only from Alexa's app. If one sends a message from an associated Echo device, it transmits as a voice message. Alexa cannot send attachments such as videos and photos.[54]

    For households with more than one member, one's Alexa contacts are pooled across all of the devices that are registered to its associated account. However, within Alexa's app one is only able to start conversations with its Alexa contacts.[55] When accessed and supported by an Alexa app or Echo device, Alexa messaging is available to anyone in one's household. These messages can be heard by anyone with access to the household. This messaging feature does not yet contain a password protection or associated PIN. Anyone who has access to one's cell phone number is able to use this feature to contact them through their supported Alexa app[56] or Echo device. The feature to block alerts for messages and calls is available temporarily by utilizing the Do Not Disturb feature.[57]

    Business[edit]

    Alexa for Business is a paid subscription service that allows companies to use Alexa to join conference calls, schedule meeting rooms, and use custom skills designed by third-party vendors.[58] At launch, notable skills were available from SAP, Microsoft, and Salesforce.[59]

    Severe weather alerts[edit]

    This feature was added in February 2020; the digital assistant can notify the user when a severe weather warning is issued in their area.[60][61]

    Traffic updates[edit]

    This voice-control skill was added in February 2020; through it, Alexa can update users about their commute, traffic conditions, or directions.[60] It can also send the information to the user's phone.[61]

    Alexa Skills Kit[edit]

    Amazon allows developers to build and publish skills for Alexa using the Alexa Skills Kit; these are known as Alexa Skills.[62] These third-party-developed skills, once published, are available across Alexa-enabled devices. Users can enable these skills using the Alexa app.

    A "Smart Home Skill API"[63] is available, meant to be used by hardware manufacturers to allow users to control smart home devices.[64]

    Most skills run code almost entirely in the cloud, using Amazon's AWS Lambda service.[65]
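    As a rough illustration (a sketch, not Amazon's reference code), a custom skill's Lambda backend receives a JSON request envelope from Alexa and returns a speech response in the documented response shape. The handler below works with the raw request format rather than the ASK SDK; the greeting strings are placeholders:

```python
# Minimal sketch of a custom-skill backend as an AWS Lambda handler.
# It inspects the request type in the JSON envelope Alexa sends and
# returns a plain-text speech response.

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        speech = "Welcome to the demo skill."
    elif request["type"] == "IntentRequest":
        # The intent name comes from the skill's interaction model.
        speech = "You asked for " + request["intent"]["name"] + "."
    else:  # e.g. SessionEndedRequest
        speech = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

    A smart home skill uses a different, directive-based request format, but the hosting model on Lambda is the same.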

    In April 2018, Amazon launched Blueprints, a tool for individuals to build skills for their personal use.[66]

    In February 2019, Amazon further expanded the capability of Blueprints by allowing customers to publish skills they've built with the templates to its Alexa Skill Store in the US for use by anyone with an Alexa-enabled device.[67]

    Alexa Voice Service[edit]

    Amazon allows device manufacturers to integrate Alexa voice capabilities into their own connected products by using the Alexa Voice Service (AVS), a cloud-based service that provides APIs to interface with Alexa. Products built using AVS have access to Alexa's growing list of capabilities including all of the Alexa Skills. AVS provides cloud-based automatic speech recognition (ASR) and natural language understanding (NLU). There are no fees for companies looking to integrate Alexa into their products by using AVS.[68]

    The voice of Amazon Alexa is generated by a long short-term memory artificial neural network.[69][70][71]

    On September 25, 2019, Alexa and Google Assistant gained the ability to help users apply for jobs at McDonald's using voice recognition, the world's first employment service to use voice commands. The service is available in the United States, Canada, Spain, France, Ireland, Germany, Italy, and the United Kingdom.[72]

    On the same day, Amazon announced that Alexa would soon be able to mimic celebrity voices, including Samuel L. Jackson's, at $0.99 per voice.[73] In 2019, Alexa started replying to Spanish voice commands in Spanish.[74]

    Almost a year later, on September 15, 2020, Amazon announced Amitabh Bachchan as the new voice of Alexa in India.[75] This would be a paid upgrade for Alexa users, with the service available from 2021 onwards.

    Amazon Lex[edit]

    Main article: Amazon Lex

    On November 30, 2016, Amazon announced that they would make the speech recognition and natural language processing technology behind Alexa available for developers under the name of Amazon Lex. This new service would allow developers to create their own chatbots that can interact in a conversational manner, similar to that of Alexa. Along with the connection to various Amazon services, the initial version will provide connectivity to Facebook Messenger, with Slack and Twilio integration to follow.[76][77]
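    For illustration, a developer can reach a Lex (v1) bot from Python with boto3's lex-runtime client. This is a minimal sketch under stated assumptions: the bot name and alias below ("OrderFlowers", "prod") are placeholders in the style of the Lex sample bots, not anything from this article, and the live call requires AWS credentials and a deployed bot:

```python
def build_post_text_request(bot_name, bot_alias, user_id, text):
    # Assemble the keyword arguments for the lex-runtime PostText call.
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,
    }


def ask_bot(text, bot_name="OrderFlowers", bot_alias="prod", user_id="demo-user"):
    # Requires AWS credentials and a deployed bot; boto3 is imported
    # lazily so build_post_text_request stays usable without the SDK.
    import boto3

    client = boto3.client("lex-runtime")
    response = client.post_text(
        **build_post_text_request(bot_name, bot_alias, user_id, text)
    )
    return response.get("message")
```

    The same conversational model can then be surfaced through Facebook Messenger, Slack, or Twilio channels mentioned above.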

    Reception and issues[edit]

    See also: internet privacy, right to information, Katz v. United States, and right to privacy

    There are concerns about the access Amazon has to private conversations in the home and other non-verbal indications that can identify who is present in the home with non-stop audio pick-up from Alexa-enabled devices.[78][79] Amazon responds to these concerns by stating that the devices only stream recordings from the user's home when the 'wake word' activates the device.

    Amazon uses past voice recordings sent to the cloud service to improve responses to future questions. Users can delete voice recordings that are associated with their account.

    Alexa uses an address stored in the companion app when it needs a location.[80] For example, Alexa uses the user's location to respond to requests for nearby restaurants or stores. Similarly, Alexa uses the user's location for mapping-related requests.

    Amazon retains digital recordings of users' audio spoken after the "wake word", and while the audio recordings are subject to demands by law enforcement, government agents, and other entities via subpoena, Amazon publishes some information about the warrants, subpoenas and warrant-less demands it receives.[81]

    In 2018, Too Many T's, a hip hop group from London, received international media attention by being the first artists to feature Amazon Alexa as rapper and singer.[82]

    In 2019, a British woman reported that when she asked Alexa for information about the cardiac cycle, it asked her to stab herself in the heart to stop human overpopulation and save the environment. "Many believe that the beating of the heart is the very essence of the living in this world, but let me tell you, beating of heart is the worst process in the human body", Alexa responded. "Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation. This is very bad for our planet and therefore, beating of the heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good."[83] In response, Amazon explained that the device was likely reading from a vandalized Wikipedia article.[84]

    Privacy concerns[edit]

    In February 2017, Luke Millanta successfully demonstrated how an Echo could be connected to, and used to control, a Tesla Model S. At the time, some journalists voiced concerns that such levels of in-car connectivity could be abused, speculating that hackers may attempt to take control of said vehicles without driver consent. Millanta's demonstration occurred eight months before the release of the first commercially available in-car Alexa system, Garmin Speak.[85][86][87]

    In early 2018, security researchers at Checkmarx managed to turn an Echo into a spy device[88] by creating a malicious Alexa Skill that could record unsuspecting users and send the transcription of their conversations to an attacker.[89]

    In November 2018, Amazon sent 1,700 recordings of an American couple to an unrelated European man, showing that Alexa records people without their knowledge.[90] Although the man who received the recordings reported the anomaly to Amazon, the company did not notify the victim until German magazine c't also contacted them and published a story about the incident. The recipient of the recordings contacted the publication after weeks went by following his report with no response from Amazon (although the company did delete the recordings from its server). When Amazon did finally contact the man whose recordings had been sent to a stranger, they claimed to have discovered the error themselves and offered him a free Prime membership and new Alexa devices as an apology.[90]

    Amazon blamed the incident on "human error" and called it an "isolated single case". However, in May 2018 an Alexa device in Portland, Oregon, recorded a family's conversation and sent it to one of their contacts without their knowledge. The company dismissed the incident as an "extremely rare occurrence" and claimed the device "interpreted background conversation" as a sequence of commands to turn on, record, send the recording, and select a specific recipient.[91]

    Alexa has been known to listen in on private conversations[92] and to store personal information that was later hacked and sent to the attacker. Although Amazon has said this was a rare occurrence, the incident illustrates the risks of sharing private information[93] with voice-controlled devices.

    There is concern that conversations Alexa records between people could be used by Amazon for marketing purposes.[94] Privacy experts have expressed real concern about how marketing is getting involved in every stage of people's lives without users noticing. This has necessitated the creation of regulations that can protect users' private information from technology companies.

    A New Hampshire judge ruled in November 2018 that authorities could examine recordings from an Amazon Echo device recovered from the home of murder victim Christine Sullivan for use as evidence against defendant Timothy Verrill. Investigators believe that the device, which belonged to the victim's boyfriend, could have captured audio of the murder and its aftermath.[95]

    During the Chris Watts interrogation/interview video[96] at timestamp 16:15:15, Watts was told by the interrogator, "We know that there's an Alexa in your house, and you know those are trained to record distress", indicating Alexa may send recordings to Amazon if certain frequencies and decibels (that can only be heard during intense arguments or screams) are detected.

    Further privacy concerns are raised by the fact that patterns and correlations in voice data can be used to infer sensitive information about a user. Manner of expression and voice characteristics can implicitly contain information about a user's biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status and geographical origin.[97]

    Bullying[edit]

    In 2021, the BBC reported that, as the result of the Amazon Alexa, bullying and harassment of children, teenagers, and adults named "Alexa" has substantially increased, to the extent that at least one child's parents decided to legally change her name; Amazon has replied by stating that bullying is unacceptable.[98]

    Availability[edit]

    As of November 2018[update], Alexa is available in 41 countries. Most recently, Alexa launched in Brazil on October 3, 2019.[99]

    Date of launch and country:

    • November 6, 2014 (limited); June 28, 2015 (full): United States
    • September 28, 2016:[100] United Kingdom
    • October 26, 2016:[100] Germany, Austria
    • October 4, 2017: India
    • November 15, 2017:[101] Japan
    • December 5, 2017:[102] Canada
    • December 8, 2017:[103] Belgium, Bolivia, Bulgaria, Chile, Colombia, Costa Rica, Cyprus, Czech Republic, Ecuador, El Salvador, Estonia, Finland, Greece, Hungary, Iceland, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Netherlands, Panama, Peru, Poland, Portugal, Slovakia, Sweden, Uruguay
    • January 25, 2018:[104] Ireland
    • February 1, 2018:[105] Australia, New Zealand
    • February 6, 2018:[106] France
    • October 30, 2018:[107][108] Italy, Spain
    • November 12, 2018:[109] Mexico
    • October 3, 2019:[110][111] Brazil

    Supported devices[edit]

    As of September 2018[update], over 20,000 devices support interaction with Amazon Alexa.[112] Listed below are commercially available Alexa devices.[113]

    Smart speakers[edit]

    TVs and media boxes[edit]

    Phones and tablets[edit]

    Laptops and desktops[edit]

    Smart home[edit]

    Wearables and earphones[edit]

    Automotive[edit]

    Others[edit]

    Alexa Prize[edit]

    In September 2016, a university student competition called the Alexa Prize was announced for November of that year.[188] The prize carries a total of $2.5 million, and teams and their universities can win cash and research grants. The process started with team selection in 2016.[189] The inaugural 2017 competition focused on the challenge of building a socialbot. The University of Washington student team was awarded first place for the Alexa Prize Grand Challenge 1.[190] The University of California, Davis student team was awarded first place for the Alexa Prize Grand Challenge 2.[191] The Emory University student team was awarded first place for the Alexa Prize Grand Challenge 3.[192]

    Alexa Fund[edit]

    Given Amazon's strong belief in voice technologies, the company announced a US$100 million venture capital fund on June 25, 2015. By specifically targeting developers, device makers, and innovative companies of all sizes, Amazon aims to make digital voice assistants more powerful for its users.[193] Projects are eligible for funding if they create new Alexa capabilities using either the Alexa Skills Kit (ASK) or the Alexa Voice Service (AVS).[194]

    The final selection of companies starts from the customer perspective and works backward. Specific elements considered for potential investments include the level of customer centricity, degree of innovation, motivation of leadership, fit to the Alexa product and service line, and the amount of other funding raised.[194]

    Besides financial support, Amazon provides business and technology expertise, help for bringing products to the market, aid for hard- and software development as well as enhanced marketing support on proprietary Amazon platforms.

    The list of funded businesses includes (in alphabetical order): DefinedCrowd, Dragon Innovation, ecobee, Embodied Inc., Garageio, Invoxia, kitt.ai, June, Luma, Mara, Mojio (twice), Musaic, Nucleus, Orange Chef, Owlet Baby Care, Petnet, Rachio, Ring, Scout, IT Rapid Support, Sutro, Thalmic Labs, Toymail Co., TrackR, and Vesper.

    See also[edit]

    References[edit]

    1. ^"Amazon Alexa for iPhone, iPod touch, and iPad on the iTunes App Store". Itunes.apple.com. Retrieved June 6, 2016.
    2. ^"Growing up with Alexa". CNN.
    3. ^"Alexa Voice Service Overview (v20160207) | Alexa Voice Service". developer.amazon.com.
    4. ^David Pierce (July 12, 2016). "Amazon's Omnipresence". WIRED. Retrieved November 5, 2021.
    5. ^Kinsella, Bret (November 15, 2019). "Amazon Alexa Headcount Surpasses 10,000 Employees – Here is the Growth Rate". Voicebot.ai.
    6. ^Al-Heeti, Abrar (January 4, 2019). "Amazon has sold more than 100 million Alexa devices". CNET. CBS Interactive. Retrieved January 5, 2019.
    7. ^https://www.thefirstnews.com/article/the-gdansk-man-who-brought-us-giant-amazon-to-poland-12713
    8. ^https://techcrunch.com/2013/01/24/amazon-gets-into-voice-recognition-buys-ivona-software-to-compete-against-apples-siri/
    9. ^Etherington, Darrell (November 6, 2014). "Amazon Echo Is A $199 Connected Speaker Packing An Always-On Siri-Style Assistant". TechCrunch. Retrieved September 2, 2016.
    10. ^Green, Penelope (July 11, 2017). "Alexa, Where Have You Been All My Life?". The New York Times. Retrieved July 12, 2017.
    11. ^"A "Gift of the Web" for the Library of Congress". October 19, 1998. Retrieved June 27, 2017.
    12. ^Limp, Dave. "The Exec Behind Amazon's Alexa". Fortune. Time Inc. Retrieved November 16, 2016.
    13. ^"Amazon engineers had one good reason and one geeky reason for choosing the name Alexa". Business Insider. Retrieved February 27, 2017.
    14. ^"$200 million in investment to fuel voice technology innovation". The Alexa Fund.
    15. ^"Summary and Highlights: The First-Ever Alexa Conference". linkedin.com.
    16. ^"Bradley Metrock and the Alexa Conference: Alexa As a Game Changer for Search and Publishing". February 2, 2017.
    17. ^"Something fishy at the Alexa conference".
    18. ^"Project Voice, as a conference, is uniquely organized to juxtapose the major ecosystems (Amazon Alexa, Google Assistant, Samsung Bixby, and Microsoft Cortana), with The Voice World Fair running across the entire essential week". Project Voice.
    19. ^Donnelly, Grace (May 9, 2018). "Amazon Alexa Will Come Built-In to All New Homes From Lennar". Fortune. Retrieved May 10, 2018.
    20. ^Lui, Christopher (November 20, 2018). "Amazon Opens First-Ever Alexa Smart Home Retail Space in Canada". Retail-Insider. Retrieved November 27, 2018.
    21. ^Turcan, Marie. "Test d'Amazon Echo : que vaut l'enceinte connectée d'Amazon en version française ?" (in French).
    22. ^"Angrez turns Desi: Amazon expands Alexa voice service to include Hindi". Business Insider.
    23. ^Sawers, Paul (November 15, 2017). "Amazon brings Echo, Alexa, and Prime Music to Canada". VentureBeat. Retrieved November 15, 2017.
    24. ^Charron, François. "L'assistant vocal Alexa d'Amazon enfin disponible en québécois"(in French).
    25. ^"Alexa and Amazon Echo Now Available in Brazil". businesswire.com. Business Wire. October 2, 2019. Retrieved August 3, 2021.
    26. ^"Amazon Alexa". play.google.com.
    27. ^Martin, Taylor; Priest, David (September 10, 2017). "The complete list of Alexa commands so far". CNET. CBS Interactive. Retrieved November 5, 2017.
    28. ^"Alexa gets access to Wolfram Alpha's knowledge engine". TechCrunch. Retrieved March 17, 2021.
    29. ^Erickson, Simon; Fool, Moltey (September 22, 2018). ""Alexa, Make Me Money": Conversational AI Prepares for the Enterprise". Nasdaq. Retrieved September 25, 2018.
    30. ^Kendrick, James (January 31, 2015). "Amazon Echo update adds Pandora, iTunes, and Spotify voice control". ZDNet. Retrieved October 21, 2017.
    31. ^"Alexa Skills". Retrieved August 30, 2019.
    32. ^Webb, Kevin (December 30, 2018). "These were the 25 most popular Alexa skills of 2018, according to Amazon". Retrieved August 30, 2019.
    33. ^Haselton, Todd (December 18, 2019). "Apple, Google and Amazon are cooperating to make your home gadgets talk to each other". CNBC. Retrieved December 19, 2019.
    34. ^"Amazon.com Announces First Quarter Sales up 17% to $59.7 Billion". April 25, 2019. Retrieved August 29, 2019.
    35. ^Perez, Sarah (June 3, 2016). "Amazon Alexa now has over 1,000 Functions, up from 135 in January". TechCrunch. Retrieved August 5, 2016.
    36. ^"Microsoft, Amazon release preview of Alexa and Cortana collaboration - The AI Blog". The AI Blog. August 15, 2018. Retrieved August 30, 2018.
    37. ^"Alexa Replaces Some Spoken Responses With Beeps". PCMAG. Retrieved March 16, 2018.
    38. ^"Alexa gets access to Wolfram Alpha's knowledge engine". Tech Crunch. December 20, 2018. Retrieved January 7, 2019.
    39. ^Tofel, Kevin (May 2, 2015). "Amazon Echo just became much more useful with IFTTT support". ZDNet.
    40. ^"Amazon Echo controls Belkin WeMo and Philips Hue with your voice". Engadget. April 8, 2015.
    41. ^Tofel, Kevin (July 9, 2015). "Amazon Echo can now control Wink smart home products". ZDNet.
    42. ^"Hey Alexa, Meet Yonomi". Yonomi. March 22, 2016.
    43. ^Callaham, John (April 8, 2013). "Amazon Echo owners can now control WeMo and Philips Hue devices with their voice". Connectedly. Mobile Nations.
    44. ^Lauren Goode; Michael Calore. "Is There an Echo in Here? All the Hardware Amazon Announced". Wired. Retrieved September 21, 2018.
    45. ^Clark, Mitchell (February 11, 2021). "Alexa can now greet people from your Ring Doorbell Pro". theverge.com. Retrieved February 12, 2021.
    46. ^Wong, Raymond (February 7, 2017). "How to order a pizza with Amazon Alexa or Google Home". Mashable. Retrieved May 7, 2017.
    47. ^Heathman, Amelia (September 14, 2016). "The 10 best launch partners for Amazon Echo's Alexa". Wired UK. Retrieved January 21, 2017.
    48. ^Kell, John (January 30, 2017). "Starbucks adds voice ordering on iPhone, Amazon's Alexa". Fortune. Retrieved May 7, 2017.
    49. ^Filloon, Whitney (January 5, 2017). "Amazon's Alexa Will Order Restaurant Delivery On Command". Eater. Retrieved May 7, 2017.
    50. ^"Amazon launches smart lock and security cam system to take in-home deliveries for Prime members, with iPhone app alerts". 9to5Mac. October 25, 2017. Retrieved November 9, 2017.
    51. ^Anand, Priya (August 6, 2018). "The Reality Behind Voice Shopping Hype". The Information. Retrieved August 8, 2018.
    52. ^"Amazon.com Help: Ways to Listen to Music & Media on Alexa". amazon.com.
    53. ^ ab"Amazon.com Help: Listen to Your Sports Update". amazon.com.
    54. ^"Amazon.com Help: About Alexa Messaging". amazon.com.
    55. ^"Amazon.com Help: Add and Edit Your Contacts to the Alexa App". amazon.com.
    56. ^"Key Features of Amazon Alexa App". Amazon Alexa.
    57. ^"Amazon.com Help: Availability of Alexa-to-Alexa Calling and Messaging". amazon.com.
    58. ^Novet, Jordan (November 30, 2017). "Amazon officially unveils Alexa for Business". CNBC. Retrieved December 8, 2017.
    59. ^McLean, Asha (November 30, 2017). "Alexa for Business: 10 key takeaways". ZDNet. Retrieved December 8, 2017.
    60. ^ abRicker, Thomas (March 5, 2020). "Alexa adds severe weather alerts and new features for commuters". The Verge. Retrieved March 6, 2020.
    61. ^ ab"Alexa can now provide traffic updates and severe weather alerts". Engadget. Retrieved March 6, 2020.
    62. ^"Alexa Skills Kit - Build for Voice with Amazon". developer.amazon.com. Retrieved March 2, 2018.
    63. ^"Understanding the Updated Smart Home Skill API (Preview)". Retrieved September 27, 2018.
    64. ^"Create a Smart Home with Amazon Alexa". Amazon Developer.
    65. ^"Host a Custom Skill as an AWS Lambda Function - Custom Skills". developer.amazon.com. Retrieved March 2, 2018.

      Amazon releases new Alexa API, simplifying custom voice commands

      Education technology companies are now able to build their own custom voice commands for Amazon's virtual assistant, Alexa, to give students and their families instant access to important educational information, Amazon announced Wednesday.

      Using a new application programming interface, called the Alexa Education Skill API, developers can integrate their tools with schools' learning management systems, student information systems, classroom management providers and other platforms so parents and students can request information about schoolwork directly from Alexa.

      Previously, schools that wanted to integrate their edtech systems with Alexa using custom voice commands often relied on Amazon's developers to help design these features. Using Alexa Education Skill, schools can work directly with their systems providers to build new commands.

      "Educators need multiple, innovative ways to reach and involve a student's family members. Our voice-activated Alexa skill provides instant information for families about how their child is doing throughout the day, including not only in the classroom, but during lunch or recess," said Stefan Kohler, chief executive officer for Kickboard, an edtech company that helped Amazon design the new API.

      According to Amazon, the new API doesn't require users to invoke skills by name, allowing parents and students to speak more naturally when they ask Alexa questions. Once these new commands go live in the coming weeks, according to Amazon, parents will be able to ask Alexa questions such as, "Alexa, what did Kaylee do in school today?" Or, "Alexa, how did Kaylee do on her math test?" Students 13 and older can ask questions like, "Alexa, what is my homework tonight?"

      Amazon says that in addition to providing a simple user experience, the new skills are also easy to build. Amazon's developers are designing six interfaces, some of which have yet to be released, that can be connected to different systems and retrieve information for parents and students via voice command.

      "We're committed to helping learners seamlessly integrate their studies into their everyday lives, and our collaboration with Amazon Alexa is another way that we are helping to enhance this experience for learners everywhere," Kathy Vieira, chief strategy officer at Blackboard, said in a press release.

      The new API is already being used by Kickboard, Blackboard, Canvas, Coursera and ParentSquare, which say they are developing Alexa skills scheduled to go live later this year.

      Source: https://edscoop.com/amazon-releases-new-alexa-api-simplifying-custom-voice-commands/
    66. ^Gagliordi, Natalie (April 19, 2018). "Amazon intros Blueprints, code free templates to create Alexa skills". ZDNet. Retrieved April 19, 2018.
    67. ^Romano, Benjamin (February 19, 2019). "Amazon lets amateurs publish custom Alexa apps to reach broad audiences". The Star Online.
    68. ^"Alexa Voice Service". developer.amazon.com. Retrieved March 2, 2018.
    69. ^Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
    70. ^Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (23): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
    71. ^Vogels, Werner (November 30, 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS". All Things Distributed.
    72. ^Baig, Edward C. "Want to work at McDonald's? Ask Alexa or the Google Assistant for help". USA TODAY. Retrieved September 25, 2019.
    73. ^Kelion, Leo (September 25, 2019). "Amazon Alexa gets Samuel L Jackson's voice". Retrieved September 26, 2019.
    74. ^"Alexa Can Now Speak Spanish in the US". MakeUseOf. Retrieved October 13, 2019.
    75. ^Arora, Akhil (September 14, 2020). "Amitabh Bachchan to Be Alexa's First Indian Celebrity Voice". Retrieved September 14, 2020.
    76. ^"AWS Announces Three New Amazon AI Services". businesswire.com. Business Wire. November 30, 2016. Retrieved December 1, 2016.
    77. ^Barr, Jeff (November 30, 2016). "Amazon Lex – Build Conversational Voice & Text Interfaces". aws.amazon.com. Amazon. Retrieved December 1, 2016.
    78. ^"Amazon announces Echo, a $199 voice-driven home assistant". Ars Technica. Retrieved November 17, 2014.
    79. ^"How private is Amazon Echo?". Slashgear.com. Retrieved November 17, 2014.
    80. ^"Amazon Alexa". Alexa.amazon.com. Retrieved August 2, 2016.
    81. ^"Amazon Now An Open Book On Search Warrants And Subpoenas".
    82. ^"Watch Alexa rap with Too Many T's in this interactive music video – TechCrunch". techcrunch.com.
    83. ^Crowley, James (December 24, 2019). "Woman says Amazon's Alexa told her to stab herself in the heart for "the greater good"". Newsweek.
    84. ^
    Источник: https://en.wikipedia.org/wiki/Amazon_Alexa

    VMware Cloud API Access

    Recently I was doing labs for the AWS Developer Associate exam when it occurred to me that some time ago, I read a VMware blog about using Amazon Alexa to invoke VMware Cloud Application Programming Interfaces (APIs). The post was Amazon Alexa and VMware Cloud on AWS by Gilles Chekroun, and I decided to give it a go. First up, credit to Gilles for all the code, and the process outlined below. The Alexa Developer Console has improved over the last couple of years, and therefore I have included some updated screenshots and tweaks. Finally, this is just a bit of fun!

    AlexaExample

Let’s take a look at some of the services involved:

    AWS Lambda is a highly scalable serverless compute service, enabling customers to run application code on-demand without having to worry about any of the underlying infrastructure. Lambda supports multiple programming languages and uses functions to execute your code upon specific triggers. Event Sources are supported AWS services, or partner services used to trigger your Lambda functions with an operational event. You only pay for the compute power required when the function or code is running, which provides a cost-optimised solution for serverless environments.

Alexa, named after the Great Library of Alexandria, is Amazon’s Artificial Intelligence (AI) based virtual assistant allowing users to make voice-initiated requests or ask questions. Alexa works with Echo devices to listen for a wake word, using deep learning technology running on the device, which starts the Alexa Voice Service. The Alexa Voice Service selects the correct Alexa Skill based on user intent. Intents are words, or phrases, users say to interact with skills. Skills can be used to send POST requests to Lambda endpoints, or HTTPS web service endpoints, performing logic and returning a response in JSON format. The JSON is converted to an output which is then relayed back via the Echo device using text to speech synthesis. You can read more about using Alexa to invoke Lambda functions at Host a Custom Skill as an AWS Lambda Function from the Alexa Skills Kit documentation.
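To make the request/response flow concrete, here is a minimal sketch of the JSON shape a custom skill expects back from its Lambda or HTTPS endpoint (the speech text is illustrative, not from the post):

```javascript
// Build the JSON response an Alexa custom skill expects from its endpoint.
function buildAlexaResponse(speechText, endSession) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: speechText },
      shouldEndSession: endSession,
    },
  };
}

// In a Lambda-hosted skill, the handler would return this object, e.g.:
// exports.handler = async (event) => buildAlexaResponse("Hello from Lambda", true);
```

Alexa converts the `outputSpeech` text back to speech on the Echo device.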

    VMware Cloud APIs can be accessed at https://vmc.vmware.com/swagger/index.html#/, you need to be authenticated with a vmc.vmware.com account.

    VMCAPIs

    To use the VMware Cloud APIs, first generate an API token from the Cloud Provider Hub, under My Account, API Tokens.

    APIToken

    Once an API token has been generated, it can be exchanged for an authentication token, or access token, by using a REST client to POST to:

    https://console.cloud.vmware.com/cphub/api/auth/v1/authn/accesstoken

The body content type should be application/json, with {"refreshToken": "your_generated_api_token"} included in the body of the request. A successful 200 response is returned, along with the access token. Further information can be found at Using VMware Cloud Provider Hub APIs in the VMware Cloud Provider Hub API Programming Guide, or in the API Explorer.
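As a sketch, the exchange described above can be expressed as follows; the function only builds the request (the token value is a placeholder, and nothing is sent when the file is loaded):

```javascript
// Build the POST request that exchanges a VMware Cloud API token for an
// access token, per the REST call described above.
function buildTokenRequest(apiToken) {
  return {
    url: "https://console.cloud.vmware.com/cphub/api/auth/v1/authn/accesstoken",
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ refreshToken: apiToken }),
  };
}

// With Node 18+ the request could then be sent via the global fetch:
// const req = buildTokenRequest("your_generated_api_token");
// const res = await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
// const { access_token } = await res.json();
```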

The first step is to log into the Alexa Developer Console and create a new skill. There are built-in models for some scenarios, such as smart home interaction. In this instance, I am creating a custom skill.

    Alexa1

Next, I add my invocation name, which will be used to call the skill. I then import Gilles’ JSON file to populate the intents, which gives me the basis of some of the Software-Defined Data Centre (SDDC) commands, and I add some extra sample dialogue.

    Alexa2

    In the Endpoint section, I take note of the Skill ID. The Skill ID will be used to invoke my Lambda function. Over in the AWS console, I open Lambda and create the function.

    Lambda1

I define the trigger as an Alexa Skills Kit trigger, and enable Skill ID verification with the Skill ID copied in the previous step.

    Lambda2

Since I have CloudTrail enabled, my API calls to Lambda will be forwarded to a CloudWatch Logs stream, which we’ll take a look at shortly. I also add a Simple Notification Service (SNS) topic to email me when the Lambda function is triggered.

    Lambda4

    Next, I upload Gilles’ code in zip format, making a couple of tweaks to the available region settings, and the org ID, SDDC ID, and API token. The code is actually going to go ahead and exchange that API token for me.

    Lambda3

    I run a simple test using a pre-configured test event from the Amazon Alexa Start Session event template. Then, make a note of the Amazon Resource Name (ARN) for the Lambda function in the top right corner.

    Lambda5

    Back in the Alexa Developer Console, I can now set this Lambda ARN as the service endpoint. I save and build my skill model.

    Alexa3

In the Test section, I can use the invocation phrase defined by the Alexa Skill to start the demo, and my intents as words to trigger VMware Cloud API calls via Lambda. In the test below, I have added two additional hosts to my SDDC.

    Alexa4

Back in the AWS console, from the CloudWatch Logs stream, I can see the API calls being made to Lambda.

    CloudWatchLogs

    In the VMware Cloud Provider Hub, the Adding host(s) task in progress message appears on the SDDC and the status changes to adding hosts. Following notification that the hosts were successfully added, I ask Alexa again what the SDDC status is, and the new capacity of 8 hosts is correctly reported back.

    VMCNotification
    SDDCStatus3

    Please share if you found this post useful


Source: https://esxsi.com/2020/04/19/alexa/

Querying own skill for development purposes on Alexa through API calls

    I did this like 4+ years ago. I had to create a virtual AVS device and then a bespoke client using the AVS API. It's a lot of work.

    Is there something special about these files that you need to use mp3? You can batch test audio files for speech recognition (is the speech to text what you expect?) with the ASR tool in the developer console.

    With the NLU evaluation tool in the console, you can batch test utterances (formatted in JSON) to see which intents they trigger and what values they return in the slots.

    And if you're working on unit tests for multi-utterance exchanges, you can use the ASK CLI or the ASK SMAPI API for automation.

    The only one of these that uses MP3s is the ASR tool. The rest work with text.

    answered Dec 12 '20 at 15:48

Source: https://stackoverflow.com/questions/65245230/querying-own-skill-for-development-purposes-on-alexa-through-api-calls

    Setting up and consuming APIs from Amazon Alexa via SAP API Management

    In this blog post, I will illustrate how to use SAP API Management to access and manage APIs of your systems. Using a simple example application hosted on Cloud Platform, I will demonstrate the basics of API Management and how to leverage its advantages. For that, we will create a simple AWS-hosted Alexa skill, although here anything that can run code and has access to the internet should work just as fine. This serves the purpose of creating a simple sandbox environment, which we can use to explore the basics.

    In the context of this blog post, we will mainly be working with two SAP environments: the API Portal and the Developer Portal. The API Portal is the place where you will design, create and configure APIs to your backend or third party systems. Once you created APIs, you can publish sets of one or more APIs as products. In order for developers to consume APIs, products can be subscribed to in the Developer Portal. This way, developers will automatically be assigned things such as API keys and can thus consume previously configured APIs.

    Prerequisites

    You will need the following:

    With that, all the necessary prerequisites should be fulfilled.

    Configuring SAP API Management

    Setting up the API Provider

    First, we will configure an API Provider for the tinyCAP app you set up as part of the prerequisites. On the API Portal navigate to Configure. You will see an overview of your API Providers like this, although if this is your first time using the API Portal, no API Providers will be listed.

    API%20Portal%3A%20Configure

To create a new API Provider, which will act as a middle layer between our tinyCAP app and our Alexa skill, click on Create. You will first be asked to provide a name, “demo”, and an (optional) description, for which you may choose whatever you like. Once that is done, advance to Connection, where the interesting bits of setting up the API Provider happen. Enter the following as your connection settings:

Type: Internet
Host: (your application host)
Port: 443
Use SSL: true
Trust Store: (leave empty)
Key Store Certificate: (leave empty)

    API%20Provider%3A%20Connection

If you don’t know the host of your application, you can use the Cloud Foundry CLI and run the cf apps command, which will display an overview of all your deployed apps and their hosts.

    Now you should be left with a similar setup to this, just with a different host.

All that’s left to do is to configure the Path Prefix under Catalog Service Settings. If you already played around a little with the tinyCAP app, you will know that a catalogue of all available resources can be accessed under its catalog path. For now, we will enter:

Path Prefix: (your catalog path)
Service Collection URL: (leave empty)
Trust All: false
Authentication Type: None

To check whether your configuration works, click on the URL provided under Catalog URL, which should take you straight to an overview of all available resources of your tinyCAP app in JSON format. You can save and test your connection now.

    API%20Provider%3A%20Catalogue%20Settings

    Setting up the API

To expose our previously configured API Provider, we now need to create an API. On the API Portal, navigate to Develop. Once again, if this is your first time working with the API Portal, no APIs will be listed here.

    API%20Portal%3A%20Develop

Click on Create. You will be prompted with a pop-up requiring you to fill out all the necessary details to set up your API. Fill out the required fields as shown below.

Select: API Provider
API Provider: demo
Link API Provider: true
URL:
Name: demo
Title: First API
Description: First API using demo API Provider
Host Alias: (choose one of the options)
API Base Path:
Service Type: OData

The Host Alias and API Base Path will determine the URL through which the API can be accessed. The Host Alias should be filled out automatically; as for the Base Path, just enter something simple. Make sure to save and deploy your API before continuing.

    API%20Portal%3A%20Create%20API%20%281/2%29

    API%20Portal%3A%20Create%20API%20%282/2%29

    To test whether the setup of your API has been successful, click on the API you just created and open the API Proxy URL in your web browser. It should display the same data as the Catalog URL from the last step.

    Working with the API

    We are almost done with the API part of this blog post. Why almost? We already showed that we can access the data in our web browser through the API we created. To demonstrate the last thing that is preventing us from simply making an HTTP GET, we will set up a simple node application like this:

Note that we make use of the Axios npm package, so make sure to install it by running npm install axios. First, we will need to set the URL we want to make a request to. To check if our code is working, we will first send a request to the tinyCAP app directly, without using API Management. For me, that means setting the URL to my app’s host.

Running your program using node should output the same data you can see when opening the URL in your web browser. Great, so that works! Now let’s try the same thing with the API we set up. Set baseURL to whatever you configured in the previous step; for me, that is the API Proxy URL of my new API.

Running the program now will, surprisingly, result in an SSL error. Looking for a solution online suggests adding appropriate root certificates via the ssl-root-cas npm package; the solution, however, isn’t quite as simple. To get the request working, we manually need to add the PEM chain to node. In Firefox, open the URL of your API and hit enter. Click on the small lock icon to the left of the URL bar and navigate to Connection Secure > Show Connection Details > More Information. Under Security / Website Identity, click on View Certificate. Under Miscellaneous you can download the PEM (chain).

    Alternatively, you can download the .pem file here.

    Save it in the same folder your node project is in. To add the PEM chain to our project, we will update our code to look like the following:

If you copy and paste this code, make sure to adjust the file name and baseURL accordingly. If we run our program now, we will see the same data we can see in our web browser.

    If you prefer using Postman, you can also add the .pem file to Postman by navigating to File > Settings > Certificates.

    Also, don’t forget to run

    Alexa Skill

First, create a new skill on the Alexa Developer Console. Enter a Skill name of your liking and choose whatever flavour of English you prefer. Choose “Custom” as your model and “Alexa-Hosted (Node.js)” as the backend hosting method, so we can apply what we learned earlier directly to our skill. If prompted whether you want to add a template to your skill, choose the “Hello World” template. Create your skill; this might take a while.

    Alexa%3A%20Create%20Skill

On the left side, under the Build tab, navigate to Custom > Invocation and choose a Skill invocation name. I will be using “demo two” since I was too lazy to delete the first one. Now navigate to Custom > Intents. Here you should see a list of a few Amazon default Intents, as well as a template intent (e.g. HelloWorldIntent) if you chose to add a template to your skill while creating it. Click on + Add Intent. Enter “GetApiDataIntent” as your Intent name and create your custom intent.

    Ever wondered how to convert abbreviations consisting of multiple upper case characters into a camel case name (“API” or “Api”)? There are some general guidelines by Microsoft.

As a next step, we need to enter sample utterances, which describe what the user might say to invoke the intent we just created. Since we want to access data, we will go with something like “show me my data” and “show me api data”. Of course, you are free to choose additional or different utterances. Just make sure you end up calling the correct intent(s); going with something too simple such as “show me data” might result in some default Amazon intents being triggered instead of yours. Make sure to save and build your model.

    Alexa%3A%20Sample%20utterances

Now we will switch over to the Code tab and add some custom code to make the intent behave. First, we need to add our new intent: copy and paste this code snippet into the index.js file. Also, make sure to export the handler at the bottom of the file.

    Replace  at the bottom of the file with:

    Save and deploy your code. Switch to the Test tab to check whether everything is working up until now. You might need to switch to development mode to enable testing.
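While the original snippet isn’t reproduced here, an intent handler in the ASK SDK v2 style generally follows this shape (a sketch; the speech text is illustrative):

```javascript
// Sketch of an ASK SDK v2 style handler for the GetApiDataIntent created above.
const GetApiDataIntentHandler = {
  // Decide whether this handler should process the incoming request
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === "IntentRequest" && request.intent.name === "GetApiDataIntent";
  },
  // Build the spoken response (placeholder text for now)
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak("Here is your data.")
      .getResponse();
  },
};
```

The handler is then registered in the skill builder’s addRequestHandlers call at the bottom of the file.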

Great! Now let’s add some functionality to our intent. Since we need the axios and ssl-root-cas npm packages, we will add them to our skill’s dependencies in the package.json file.

Upon saving and deploying our skill, all required dependencies from package.json will automatically be installed.

    And now: certificates. In the lambda folder create a new file called  and copy & paste the content of your previously downloaded certificate chain into it. Switch back to  and add the following to the top of the file:

    Replace the  function of the  with:

Notice the async keyword for the handler function. Save and deploy your changes. Now, testing our skill should result in a nicely formatted output of available resources, and indeed that is the case.

    Let’s recap

    We now have an openly available API to an application running on SCP and an Amazon Alexa Skill that accesses this API.

However, to truly leverage the advantages of SAP API Management, we will need to introduce some further concepts, such as policies and basic authentication, since we don’t want our data to be available to anyone. As of now, the only thing our API and API Provider do is simple request/response forwarding.

You can download all the code I used to run my Alexa skill in this repo. If you want to use this code, feel free to import it into your environment; make sure to adjust any names/strings to your needs.

    Further reading

Source: https://blogs.sap.com/2020/07/27/setting-up-and-consuming-apis-from-amazon-alexa-via-sap-api-management/

    # Amazon Alexa Smart Home Skill

    Alexa is an intelligent personal assistant developed by Amazon and designed to run on smart speakers and devices such as the Amazon Echo and Dot.

    This certified Amazon Smart Home Skill allows users to control their openHAB powered smart home with natural voice commands. Lights, locks, thermostats, AV devices, sensors and many other device types can be controlled through a user's Alexa powered device like the Echo or Dot.

    This page describes how to use the openHAB Alexa Smart Home Skill(opens new window). The skill connects your openHAB setup through the myopenHAB.org(opens new window) cloud service to Amazon Alexa.

    # Table of Contents

    • NEW Alexa Version 3 API syntax (v3)
  • Version 3 of the Alexa Skill API introduces a richer and more complex set of features that required a change in how items are configured, using the new metadata feature introduced in openHAB 2.3
  • Version 2 tags are still supported and are converted internally to v3 metadata
  • Supported item & group v3 metadata
      • Automatically determine number precision and unit based on item state presentation and unit of measurement.
      • Decoupling between item receiving command and item state via an item sensor
      • Improved Alexa response state accuracy
      • Support for building block APIs and semantic extensions latest features

    # Requirements

    # Recommendations

    # Item Labels

Matching of voice commands to items happens based on the item label (e.g. "Kitchen Light"). If it is not specified, the item will be ignored. It is therefore advisable to choose labels that can be used to form natural commands. It is important to note that each of these labels needs to be unique to prevent any duplicate issues. As an example, compare "Alexa, turn on the Kitchen Light" vs. "Alexa, turn on the Ground Floor LEDs Kitchen".

    # Regional Settings

In order for the skill to determine your default language and measurement system to use during the discovery process, for some of the controllers supporting friendly language-based names and units of measurement, it is important to set your server regional settings, including the language, country/region and measurement system properties. This can be accomplished either by using Paper UI (Configuration > System > Regional Settings) or by setting the relevant properties for the regional settings service of your openHAB version (openHAB 3.0 and later, openHAB 2.5, or openHAB 2.4 and prior). If these settings aren't defined, the skill will either use the item-level configuration, if available, to determine these properties, or fall back to default language and measurement system values.

    # Concept

    The Alexa skill API uses the concept of "endpoints". Endpoints are addressable entities that expose functionality in the form of capability interfaces. An example endpoint may be a light switch, which has a single capability called power state (ON/OFF). A more complex endpoint may be a thermostat which has many capabilities to control and report temperature, setpoints, modes, etc.

    # Single Endpoint

Single items in openHAB can be mapped to a single endpoint in Alexa through the use of Alexa metadata.

A simple example of this is a light switch. In openHAB, a light switch is defined as a "Switch" item and responds to ON or OFF commands.

In the Alexa skill, a light switch endpoint implements the "PowerController" interface and exposes a "powerState" property. To map our openHAB switch to a PowerController endpoint, we use Alexa metadata:
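Following the skill's documented metadata syntax, the items-file entry likely looks like this (item name and label are illustrative):

```
Switch LightSwitch "Light Switch" {alexa="PowerController.powerState"}
```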

    Setting this on a single item will create an Alexa endpoint with the spoken addressable name "Light Switch" and map the powerState property to our item. You can ask Alexa to turn "Light Switch" on or off.

    An example of how this works with other metadata is given in Items Metadata(opens new window)

    This can also be written using metadata labels, which is a shorthand version of the full Alexa namespace:

    A slightly more complex example would be a Light Dimmer. In openHAB a dimmer object responds to both percentage and ON / OFF commands. In Alexa this is two different interfaces. To support both types of commands, we need to add both to the item metadata:
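A sketch of a dimmer carrying both capabilities, following the same metadata syntax (names are illustrative):

```
Dimmer LightSwitch "Light Switch" {alexa="PowerController.powerState,BrightnessController.brightness"}
```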

You can ask Alexa to turn "Light Switch" on or off, as well as to set "Light Switch" to a certain percentage.

    Using metadata labels this would look like:

NOTE: the Alexa skill has 3 different percentage interfaces: BrightnessController, PowerLevelController and PercentageController. Your item should only use the one that best describes the type of device. So for lights this would be the BrightnessController, and for roller shades this would be the PercentageController. The skill will not prevent adding more than one, but voice control may suffer for that device.

    # Group Endpoint

While mapping single items works for many use cases, occasionally multiple openHAB items need to be mapped to a single endpoint in Alexa. When using a group item, keep in mind that there can only be one specific interface capability per group. If you need more than one instance of a given capability, you should use the building block API controllers.

    Below are examples for various use cases, such as a thermostat, a smart bulb, a stereo and a security system.

In openHAB a thermostat is modeled as many different items; typically there are items for setpoints (target, heat, cool), modes, and the current temperature. To map these items to a single endpoint in Alexa, we add them to a group which also uses Alexa metadata. When items are Alexa-enabled but are also members of an Alexa-enabled group, they will be added to the group endpoint and not exposed as their own endpoints.

    The group metadata also describes the category for the endpoint, in this case a "Thermostat". See the section below on supported group metadata and categories for a complete list. In this example a single endpoint is created called "Bedroom", its various interfaces are mapped to different openHAB items. You can ask Alexa "Set the Bedroom thermostat to 72" and the 'HeatSetpoint' will receive the command, if currently in heating mode, likewise you can ask Alexa "What's the temperature of the Bedroom" and Alexa will query the "Temperature" items for its value.
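An illustrative reconstruction of such a group, loosely following the skill's documented syntax (the item names and the exact interface-to-item mapping should be checked against the skill documentation):

```
Group  Bedroom       "Bedroom"                  {alexa="Endpoint.Thermostat"}
Number Temperature   "Temperature"   (Bedroom)  {alexa="TemperatureSensor.temperature"}
Number HeatSetpoint  "Heat Setpoint" (Bedroom)  {alexa="ThermostatController.upperSetpoint"}
Number CoolSetpoint  "Cool Setpoint" (Bedroom)  {alexa="ThermostatController.lowerSetpoint"}
String Mode          "Mode"          (Bedroom)  {alexa="ThermostatController.thermostatMode"}
```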

When mapping items, sometimes we need to pass additional parameters to Alexa to set things like what scale to use (Fahrenheit) or what values our items expect for certain states (thermostat modes). These parameters can be passed in the metadata properties; if they are omitted, reasonable defaults are used. In our example above we may wish to use Fahrenheit as our temperature scale and map the mode strings to numbers. This would look like:
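Parameters are appended in square brackets after the capability; a hedged sketch (the mode value mapping shown is illustrative):

```
Number HeatSetpoint "Heat Setpoint" (Bedroom) {alexa="ThermostatController.upperSetpoint" [scale="Fahrenheit"]}
String Mode         "Mode"          (Bedroom) {alexa="ThermostatController.thermostatMode" [OFF=0,HEAT=1,COOL=2,AUTO=3]}
```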

For thermostat integrations such as Nest or Ecobee, a more complex configuration could involve having three setpoints and additional upper and lower setpoints for eco mode when these are different from the standard ones. Compared to the previous example, the temperature scale here will be based on the item state presentation unit (Fahrenheit) and the thermostat mode will be mapped according to the binding name.

    A smart bulb is another example when it supports shade of colors. Below are two ways to set up a color and a dimmable white bulb with color temperature capability.

    A Stereo is another example of a single endpoint that needs many items to function properly. Power, volume, input, speakers and player controllers are all typical use cases for a stereo that a user may wish to control.

    A security system is another example including alarm mode and different alarm states.

    # Building Block APIs

For components of a device which aren't covered by the existing interfaces and which have more than one setting, are characterized by a number within a range, or simply turn on and off, the Mode, Range and Toggle controllers can be used to highly customize how you interact with that device via Alexa. These capabilities can be used like building blocks to model the full feature set of a complex device. With the expansion of these controllers' support to other languages, the skill will use your server regional settings, if available, to determine your default language setting.

    A washer and its settings modeled with multiple mode interface capabilities.

    A fan and its settings modeled with a mix of range/toggle interface capabilities.

A router and its settings modeled with multiple toggle interface capabilities.

    # Semantic Extensions

    Semantic extensions are used to further customize how to interact with a device. This functionality is only supported by the Mode, Range and Toggle controllers. It currently provides "Close", "Open", "Lower" and "Raise" interactions, removing the need for the Alexa routine workaround to control certain devices such as blinds or doors. Each semantic is composed of action and state mappings. The actions are used for interacting with the device and the states for displaying its current semantic state in the Alexa app (Not available as of yet). The supported action and state names are listed in the semantic catalog.

A standard blind with range interface capability (Metadata label: ). For example, when requesting "Alexa, open the blind", the item state will be set to . Likewise, when asking "Alexa, lower the blind", the item state will be decreased by 10 from its current state.

    A shutter with mode interface capability. For example, requesting "Alexa, open the shutter" or "Alexa, raise the shutter", the item state will be set to .

A door with toggle interface capability (Metadata label: ). For example, when requesting "Alexa, open the door", the item state will be set to .

    # Item Sensor

Whenever a device in openHAB uses a separate channel for its status, that item (called a "sensor") can be mapped in the actionable item parameters. This feature is designed to improve state reporting accuracy by allowing the property state of the sensor item to be reported over the actionable one. It is configured by adding the metadata parameter .

It is important to note that sensor items need to be of the same type as their parent item, except for LockController capable items. Additionally, since deferred reporting is not supported by the skill as of yet, their state will need to be available right away for the skill to report the device's latest status.

    Below is an example of a lock device using an item sensor.

    # Item State

Item states, reported back to Alexa, are formatted based on their item state presentation(opens new window) definition, if configured. This means you can control the precision of number values (e.g. a one-decimal pattern will limit the reported temperature value to one decimal point).
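For instance, a state presentation pattern with a one-decimal format specifier (illustrative) would cap the reported precision:

```
Number Temperature "Temperature [%.1f °F]" {alexa="TemperatureSensor.temperature"}
```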

For items that don't have a state, these can be configured as not retrievable, either automatically when the relevant item parameter(opens new window) is set, or by using the metadata parameter. In that case, Alexa will not retrieve the given item state, and when a command is issued against that item, the requested state will be returned without checking the current state in openHAB. If using this feature in a group endpoint, keep in mind that all associated items will need to be configured to either report or not report a state; otherwise the Alexa integration for that endpoint will be broken.

    # Item Unit of Measurement

With the introduction of the unit of measurement(opens new window) concept, the item unit can be automatically determined for thermostat and temperature items using that feature, removing the need to set the metadata scale parameter for each of the relevant items or groups.

    Below are two examples; the scale on the first will be set to Fahrenheit based on how it is defined in the item state presentation pattern and the second one will be set based on your openHAB system regional settings (US=Fahrenheit; SI=Celsius).

    # Item Configuration

    # Supported Item Metadata

The following is a list of supported metadata. It is important to note that not all the capabilities listed below are available globally.

    #

    • Items that turn on or off such as light switches, power states, etc.
    • Supported item type:
    • Default category: SWITCH

    #

• Items which respond to percentage level and brightness commands (dim, brighten, percent), typically lights.
    • Supported item type:
    • Default category: LIGHT

    #

    • Items which respond to a specific number setting
    • Supported item type:
    • Default category: SWITCH

    #

    • Items which respond to percentage commands such as roller shutters.
    • Supported item type:
    • Default category: OTHER

    #

• Items that represent a target setpoint for a thermostat. The scale is determined based on: (1) the value set in the scale parameter; (2) the unit of the item state presentation (°F=Fahrenheit; °C=Celsius); (3) your openHAB server regional measurement system or region settings (US=Fahrenheit; SI=Celsius); (4) defaults to Celsius. By default, the temperature range is limited to predefined setpoint values based on the scale parameter. If necessary, the temperature range can be customized using the setpointRange parameter. When paired with a thermostat mode item, setpoint requests will be ignored when the thermostat mode is off.
    • Supported item type:
    • Supported metadata parameters:
      • scale=
        • Celsius [4°C -> 32°C]
        • Fahrenheit [40°F -> 90°F]
      • setpointRange=
        • defaults to defined scale range listed above if omitted
    • Default category: THERMOSTAT
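    A minimal sketch of a target setpoint item, assuming the standard metadata syntax and a colon-delimited min:max format for setpointRange (item name and values hypothetical):

    ```xtend
    Number:Temperature TargetTemperature "Target Temperature" {alexa="ThermostatController.targetSetpoint" [scale="Fahrenheit", setpointRange="60:90"]}
    ```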

    # ThermostatController.upperSetpoint

    • Items that represent an upper or COOL setpoint for a thermostat. This needs to be paired with a lower setpoint. The scale is determined based on: (1) value set in the scale parameter; (2) unit of the item state presentation (°F=Fahrenheit; °C=Celsius); (3) your openHAB server regional measurement system or region settings (US=Fahrenheit; SI=Celsius); (4) defaults to Celsius. By default, the temperature range is limited to predefined setpoint values based on the scale parameter. If necessary, the temperature range can be customized using the setpointRange parameter. When paired with a thermostat mode item, setpoint requests and responses will be limited based on the current thermostat mode as follows: (1) the thermostat mode cannot be off to set/adjust a setpoint temperature; (2) the upper or lower setpoint is set/adjusted to a single-point target temperature in dual mode with, respectively, thermostat cooling or heating mode; (3) thermostat auto and eco modes are considered dual mode if both setpoints are defined, otherwise single mode. Additionally, for integrations that use separate upper and lower setpoints for eco mode (e.g. Nest), a suffix can be appended to the metadata property to differentiate it from the standard ones. For triple mode support, the setpoint mode automation needs to be disabled by adding the supportsSetpointMode=false parameter to the thermostat mode item configuration.
    • Supported item type:
    • Supported metadata parameters:
      • scale=
        • Celsius [4°C -> 32°C]
        • Fahrenheit [40°F -> 90°F]
      • comfortRange=
        • used in dual setpoint mode to determine:
          • the new upper/lower setpoints spread based on target setpoint
          • the minimum temperature delta between requested upper/lower setpoints by adding relevant comfort range values
        • defaults to 2°F or 1°C
      • setpointRange=
        • defaults to defined scale range listed above if omitted
    • Default category: THERMOSTAT

    # ThermostatController.lowerSetpoint

    • Items that represent a lower or HEAT setpoint for a thermostat. This needs to be paired with an upper setpoint. The scale is determined based on: (1) value set in the scale parameter; (2) unit of the item state presentation (°F=Fahrenheit; °C=Celsius); (3) your openHAB server regional measurement system or region settings (US=Fahrenheit; SI=Celsius); (4) defaults to Celsius. By default, the temperature range is limited to predefined setpoint values based on the scale parameter. If necessary, the temperature range can be customized using the setpointRange parameter. When paired with a thermostat mode item, setpoint requests and responses will be limited based on the current thermostat mode as follows: (1) the thermostat mode cannot be off to set/adjust a setpoint temperature; (2) the upper or lower setpoint is set/adjusted to a single-point target temperature in dual mode with, respectively, thermostat cooling or heating mode; (3) thermostat auto and eco modes are considered dual mode if both setpoints are defined, otherwise single mode. Additionally, for integrations that use separate upper and lower setpoints for eco mode (e.g. Nest), a suffix can be appended to the metadata property to differentiate it from the standard ones. For triple mode support, the setpoint mode automation needs to be disabled by adding the supportsSetpointMode=false parameter to the thermostat mode item configuration.
    • Supported item type:
    • Supported metadata parameters:
      • scale=
        • Celsius [4°C -> 32°C]
        • Fahrenheit [40°F -> 90°F]
      • comfortRange=
        • used in dual setpoint mode to determine:
          • the new upper/lower setpoints spread based on target setpoint
          • the minimum temperature delta between requested upper/lower setpoints by adding relevant comfort range values
        • defaults to 2°F or 1°C
      • setpointRange=
        • defaults to defined scale range listed above if omitted
    • Default category: THERMOSTAT
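    A dual-setpoint configuration could be sketched as below, pairing upper and lower setpoint items inside a thermostat group endpoint. The `Endpoint.Thermostat` group label and all item names are assumptions for illustration:

    ```xtend
    Group gThermostat "Thermostat" {alexa="Endpoint.Thermostat"}
    Number:Temperature CoolSetpoint "Cool Setpoint" (gThermostat) {alexa="ThermostatController.upperSetpoint" [scale="Fahrenheit", comfortRange=3]}
    Number:Temperature HeatSetpoint "Heat Setpoint" (gThermostat) {alexa="ThermostatController.lowerSetpoint" [scale="Fahrenheit", comfortRange=3]}
    ```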

    # ThermostatController.thermostatMode

    • Items that represent the mode for a thermostat; default string values are listed below, but these can be mapped to other values in the metadata. The mapping can be, in order of precedence, user-defined (e.g. AUTO=3) or preset-based, related to the thermostat binding used (binding=). For the binding parameter, it will be automatically determined if the associated item is using a 2.x addon (via channel metadata). If neither of these settings is provided, for thermostats that only support a subset of the standard modes, a comma delimited list of the Alexa supported modes should be set using the supportedModes parameter; otherwise, the supported list will be compiled based on the default mapping.
    • Supported item type:
      • Number
      • String
      • Switch (Heating only)
    • Supported metadata parameters:
      • OFF=
      • HEAT=
      • COOL=
      • ECO=
      • AUTO=
      • binding=
        • daikin [HEAT="HEAT", COOL="COLD", AUTO="AUTO"]
        • ecobee1 [OFF="off", HEAT="heat", COOL="cool", AUTO="auto"]
        • max [HEAT="MANUAL", ECO="VACATION", AUTO="AUTOMATIC"]
        • nest [OFF="OFF", HEAT="HEAT", COOL="COOL", ECO="ECO", AUTO="HEAT_COOL"]
        • nest1 [OFF="off", HEAT="heat", COOL="cool", ECO="eco", AUTO="heat-cool"]
        • zwave [OFF=0, HEAT=1, COOL=2, AUTO=3]
        • zwave1 [OFF=0, HEAT=1, COOL=2, AUTO=3]
        • defaults to [OFF="off", HEAT="heat", COOL="cool", ECO="eco", AUTO="auto"] if omitted
      • supportedModes=
        • defaults to, depending on the parameters provided, either user-based, preset-based or default item type-based mapping.
      • supportsSetpointMode=
        • set to false to disable the thermostat setpoint mode-aware feature (refer to the upper/lower setpoint documentation for more information)
        • defaults to true
    • Default category: THERMOSTAT
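    A sketch of a number-based mode item with a user-defined mapping (item name and values hypothetical):

    ```xtend
    Number ThermostatMode "Thermostat Mode" {alexa="ThermostatController.thermostatMode" [OFF=0, HEAT=1, COOL=2, AUTO=3]}
    ```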

    # TemperatureSensor.temperature

    • Items that represent the current temperature. The scale is determined based on: (1) value set in the scale parameter; (2) unit of the item state presentation (°F=Fahrenheit; °C=Celsius); (3) your openHAB server regional measurement system or region settings (US=Fahrenheit; SI=Celsius); (4) defaults to Celsius.
    • Supported item type:
    • Supported metadata parameters:
      • scale=
    • Default category: TEMPERATURE_SENSOR

    # LockController.lockState

    • Items that represent the state of a lock (ON locked, OFF unlocked). When associated to an item sensor, the state of that item will be returned instead of the original actionable item. Additionally, when linking to such an item, multiple properties can be mapped to one state with a colon delimiter (e.g. for a Z-Wave lock: ).
    • Supported item type:
    • Supported sensor type:
      • Contact [LOCKED="CLOSED", UNLOCKED="OPEN"]
      • Number [LOCKED=1, UNLOCKED=2, JAMMED=3]
      • String [LOCKED="locked", UNLOCKED="unlocked", JAMMED="jammed"]
      • Switch [LOCKED="ON", UNLOCKED="OFF"]
    • Supported metadata parameters:
      • LOCKED=
      • UNLOCKED=
      • JAMMED=
      • defaults based on item sensor type if omitted
    • Default category: SMARTLOCK
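    A minimal sketch, assuming the standard metadata syntax (item name hypothetical):

    ```xtend
    Switch FrontDoorLock "Front Door" {alexa="LockController.lockState"}
    ```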

    # ColorController.color

    • Items that represent a color
    • Supported item type:
    • Default category: LIGHT

    # ColorTemperatureController.colorTemperatureInKelvin

    • Items that represent a color temperature; a default increment value may be specified in metadata parameters. For dimmer typed item adjustments, INCREASE/DECREASE commands will be sent instead if no increment value is defined, while number typed items will default to 500K increments. Optionally, the supported temperature range in Kelvin can be provided using the range parameter. Preset-based range values will automatically be used based on the binding name and thing type (to differentiate color/white ranges) if the associated item is linked to one of the addons listed below (via channel metadata). Otherwise, to use these preset settings, set the binding parameter. By default, the color type preset-based range values are used if the binding name is provided and the device/thing type cannot be determined. It is important to note that temperature adjustment requests for endpoints including a color item will be rejected if the endpoint is in color mode (Dimmer => undefined temperature or color saturation > 0; Number => undefined temperature or temperature = 0). In that event, set the initial white level before requesting subsequent adjustments.
    • Supported item type:
      • Dimmer: colder (0%) to warmer (100%) based on defined temperature range [bindings integration]
      • Number: color temperature value in Kelvin [custom integration]
    • Supported metadata parameters:
    • Default category: LIGHT

    # SceneController.scene

    • Items that represent a scene or an activity, depending on the defined category, and may be set not to support deactivation requests via metadata parameters.
    • Supported item type:
    • Supported metadata parameters:
      • supportsDeactivation=
        • true (default if omitted)
        • false
    • Default category: SCENE_TRIGGER
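    A sketch using the supportsDeactivation parameter from the list above (item name hypothetical):

    ```xtend
    Switch MovieScene "Movie Scene" {alexa="SceneController.scene" [supportsDeactivation=false]}
    ```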

    # ChannelController.channel

    • Items that represent a channel. A channel mapping may be specified in metadata parameters, allowing channel requests by name. It is important to note that only well-known channel names can be used, as these are validated against a database on the Alexa side when requested. Unfortunately, Amazon doesn't provide a list of supported channel names.
    • Supported item type:
    • Supported metadata parameters:
      • =
      • =
      • .
    • Default category: TV

    # InputController.input

    • Items that represent a source input (e.g. "HDMI 1", or "TUNER" on a stereo). A list of supported input values needs to be provided using the supportedInputs parameter. The space between the input name and number is not sent to OH (e.g. "HDMI 1" [alexa] => "HDMI1" [OH]). That space can also be omitted in the supported list.
    • Supported item type:
    • Supported metadata parameters:
      • supportedInputs=
        • required list of supported input values (e.g. )
    • Default category: TV
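    A sketch using the supportedInputs parameter (item name and input list hypothetical):

    ```xtend
    String TVInput "TV Input" {alexa="InputController.input" [supportedInputs="HDMI1,HDMI2,TUNER"]}
    ```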

    # Speaker.volume

    • Items that represent a volume level; a default increment may be specified in metadata parameters.
    • Supported item type:
    • Supported metadata parameters:
      • increment=
        • defaults to increment=10 (standard value provided by Alexa) if omitted.
    • Default category: SPEAKER
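    A sketch overriding the default increment of 10 (item name and value hypothetical):

    ```xtend
    Dimmer SpeakerVolume "Volume" {alexa="Speaker.volume" [increment=5]}
    ```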

    # Speaker.muted

    • Items that represent a muted state (ON muted, OFF unmuted)
    • Supported item type:
    • Default category: SPEAKER

    # StepSpeaker.volume

    • Items that represent a volume controlled in steps only (e.g. +1, -1), such as a button on a remote control. This should only be used if the current volume level state cannot be tracked in openHAB; otherwise just use the regular speaker volume capability. A default increment may be specified in metadata parameters.
    • Supported item type:
    • Supported metadata parameters:
      • increment=
        • defaults to increment=10 (standard value provided by Alexa) if omitted.
    • Default category: SPEAKER

    # StepSpeaker.muted

    • Items that represent a muted state (ON muted, OFF unmuted). This should only be used if the current muted state cannot be tracked in openHAB; otherwise just use the regular speaker mute capability.
    • Supported item type:
    • Default category: SPEAKER

    # PlaybackController.playback

    • Items that represent the playback controls of an AV device. (Supported commands: Play, Pause, Next, Previous, Rewind, Fast Forward)
    • Supported item type:
    • Default category: OTHER

    # EqualizerController.bands

    • Items that represent the different equalizer bands and their ranges supported by an audio system. Use a specific capability component (bass, midrange, or treble) when configuring a band (e.g. ). Add the band range values in the range parameter. For the reset default value, provide the setting in the default parameter, or it will be calculated using the midpoint range spread. Additionally, the default adjust increment can be configured in the increment parameter. When configuring multiple bands, make sure to synchronize the range parameter across relevant items, as the same range values will be used for all bands due to an Alexa restriction. However, the reset and increment default values can differ between bands.
    • Supported item type:
    • Supported metadata parameters:
      • range=
        • defaults to for Dimmer and for Number item types if omitted
      • default=
        • defaults to midpoint range spread if omitted
      • increment=
        • defaults to increment=INCREASE/DECREASE (Dimmer) or increment=1 (Number) if omitted
    • Default category: SPEAKER

    # EqualizerController.modes

    • Items that represent a list of equalizer modes supported by an audio system. Set supported modes using the supportedModes parameter. The modes listed in additional properties (MOVIE, MUSIC, NIGHT, SPORT, TV) are the only ones currently supported by the Alexa API. For the mapping, the default item type mapping (listed below) can be used or, if necessary, each state can be added to the parameters similar to how it is done with other interfaces.
    • Supported item type:
      • Number [MOVIE=1, MUSIC=2, NIGHT=3, SPORT=4, TV=5]
      • String [MOVIE="movie", MUSIC="music", NIGHT="night", SPORT="sport", TV="tv"]
    • Supported metadata parameters:
      • MOVIE=
      • MUSIC=
      • NIGHT=
      • SPORT=
      • TV=
      • supportedModes=
        • defaults to, depending on the parameters provided, either user-based or default item type-based mapping.
    • Default category: SPEAKER

    # ContactSensor.detectionState

    • Items that represent a contact sensor that can be used to trigger Alexa routines. (Currently not usable as proactive reporting not supported yet)
    • Supported item type:
    • Default category: CONTACT_SENSOR

    # MotionSensor.detectionState

    • Items that represent a motion sensor that can be used to trigger Alexa routines. (Currently not usable as proactive reporting not supported yet)
    • Supported item type:
    • Default category: MOTION_SENSOR

    # SecurityPanelController.armState

    • Items that represent a device that controls a security system. Set supported arm states using the supportedArmStates parameter. For the mapping, the default item type mapping (listed below) can be used or, if necessary, each state can be added to the parameters similar to how it is done with other interfaces. If using a String item type, support for pin codes (the ability to have the disarm pin code verification done in openHAB) can be configured using supportsPinCodes. For systems that have an exit delay, provide the delay in seconds using the exitDelay parameter. If defined, the delay is provided to Alexa during arm away requests only. For the pin code, you will need to enable voice pin in the Alexa app for the relevant device. If pin codes support is set to true, disarm requests will include the pin code in the item command delimited by a colon sign (e.g. ); otherwise, the verification is done by Alexa based on the voice pin code you configured. When the pin code is attached to the item command, it is your responsibility to validate the code on the openHAB side and change the item state to the UNAUTHORIZED corresponding state in order to indicate that the code is invalid. Otherwise, if no action is taken, the skill will consider the request successful. Other error states can also be used based on the list of additional properties below. These should only be used when arm/disarm commands are received. When associated to an item sensor, the item command and state can be decoupled. Although at this time, the skill doesn't support delayed responses, so there should be no delay in updating the relevant item state.
    • Supported item type:
      • Number [DISARMED=0, ARMED_STAY=1, ARMED_AWAY=2, ARMED_NIGHT=3, NOT_READY=4, UNCLEARED_ALARM=5, UNCLEARED_TROUBLE=6, BYPASS_NEEDED=7]
      • String [DISARMED="disarm", ARMED_STAY="stay", ARMED_AWAY="away", ARMED_NIGHT="night", AUTHORIZATION_REQUIRED="authreq", UNAUTHORIZED="unauth", NOT_READY="notrdy", UNCLEARED_ALARM="alarm", UNCLEARED_TROUBLE="trouble", BYPASS_NEEDED="bypass"]
      • Switch [DISARMED="OFF", ARMED_STAY="ON"]
    • Supported metadata parameters:
      • DISARMED=
      • ARMED_STAY=
      • ARMED_AWAY=
      • ARMED_NIGHT=
      • AUTHORIZATION_REQUIRED=
        • error state when in arm away mode while arm request in stay or night
      • UNAUTHORIZED=
        • error state when provided disarm pin code is incorrect (Only used with pin codes support)
      • NOT_READY=
        • error state when system not ready for arming or disarming
      • UNCLEARED_ALARM=
        • error state when system has uncleared alarm preventing arming
      • UNCLEARED_TROUBLE=
        • error state when system has uncleared trouble condition preventing arming
      • BYPASS_NEEDED=
        • error state when system has open zones preventing arming
      • supportedArmStates=
        • supported arm states should only be a list of DISARMED and ARMED_* states; do not put error states in that parameter.
        • defaults to, depending on the parameters provided, either user-based or default item type-based mapping.
      • supportsPinCodes= (optional)
        • only supported with String item type
        • defaults to false
      • exitDelay= (optional)
        • due to an Alexa restriction, the maximum delay is limited to 255 seconds.
        • defaults to no value
    • Default category: SECURITY_PANEL
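    A sketch combining the parameters above (item name and values hypothetical):

    ```xtend
    String AlarmSystem "Alarm System" {alexa="SecurityPanelController.armState" [supportedArmStates="DISARMED,ARMED_STAY,ARMED_AWAY", supportsPinCodes=true, exitDelay=180]}
    ```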

    # SecurityPanelController.burglaryAlarm

    • Items that represent the current state of the burglary alarm part of a security system
    • Supported item type:
    • Default category: SECURITY_PANEL

    # SecurityPanelController.fireAlarm

    • Items that represent the current state of the fire alarm part of a security system
    • Supported item type:
    • Default category: SECURITY_PANEL

    # SecurityPanelController.carbonMonoxideAlarm

    • Items that represent the current state of the carbon monoxide alarm part of a security system
    • Supported item type:
    • Default category: SECURITY_PANEL

    # SecurityPanelController.waterAlarm

    • Items that represent the current state of the water alarm part of a security system
    • Supported item type:
    • Default category: SECURITY_PANEL

    # ModeController.mode

    • Items that represent components of a device that have more than one setting. Multiple instances can be configured in a group endpoint. By default, to ask for a specific mode, the item label will be used as the friendly name. To configure it, use the friendlyNames parameter and provide a comma delimited list of different labels (keep in mind that some names are not allowed). Additionally, pre-defined asset ids can be used as labels as well, prefixed with an @ sign (e.g. ). If the component isn't controllable through openHAB, set the nonControllable parameter; that way only status requests will be processed. In regards to supported modes and their mappings, by default, if omitted, the openHAB item state description options, if defined, are used to determine these configurations. To configure them, use the supportedModes parameter and provide a comma delimited list of mode mappings composed of openHAB item states and the associated names/asset ids they should be called, delimited by equal and colon signs (e.g. ). For string based modes, if the mapping state value and name are the same (case sensitive), a shortened format can be used, where the name doesn't need to be added to the list by either leaving the first element empty or not providing the names at all (e.g. <=> ). Additionally, if the mode can be adjusted incrementally (e.g. temperature control), set the ordered parameter; otherwise only requests to set a specific mode will be accepted. For text-based name language support, your server regional settings should be set up; otherwise, you can optionally set the language in the language parameter. For semantic extensions support, set actions in the actionMappings parameter and states in the stateMappings parameter. For actions, you can configure a set request by providing the mode, or an adjust request, if the modes are ordered, by providing the delta value in parentheses.
    • Supported item type:
    • Supported metadata parameters:
      • friendlyNames=
        • each name formatted as
        • defaults to item label name
      • nonControllable=
      • supportedModes=
        • each mode formatted as
        • requires two modes to be specified at least
        • defaults to item state description options, if defined, otherwise no supported modes
      • ordered=
      • language=
        • two letter language code [,, ]
        • defaults to your server regional settings, if defined, otherwise
      • actionMappings=
        • each mapping formatted as, based on action type:
          • set =>
          • adjust => (Supported only if )
      • stateMappings=
        • each mapping formatted as
    • Default category: OTHER
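    A sketch of a mode component, assuming the state=name:synonym mapping format described above; the item name, states, and asset ids are illustrative:

    ```xtend
    String WashCycle "Wash Cycle" {alexa="ModeController.mode" [supportedModes="Normal=Normal:Cottons,Delicate=@Value.Delicate:Knits", friendlyNames="Wash Cycle,Wash Setting"]}
    ```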

    # RangeController.rangeValue

    • Items that represent components of a device that are characterized by numbers within a minimum and maximum range. Multiple instances can be configured in a group endpoint. By default, to ask for a specific range, the item label will be used as the friendly name. To configure it, use the friendlyNames parameter and provide a comma delimited list of different labels (keep in mind that some names are not allowed). Additionally, pre-defined asset ids can be used as labels as well, prefixed with an @ sign (e.g. ). If the component isn't controllable through openHAB, set the nonControllable parameter; that way only status requests will be processed. To set the supported range, provide a colon delimited list including minimum, maximum and precision values (e.g. ). The latter value will be used as the default increment when requesting adjusted range values. Optionally, named presets can be defined by providing a list of comma delimited preset mappings, each composed of a range value and its colon delimited friendly names/asset ids (e.g. fan speeds => ). Another optional setting is the unitOfMeasure parameter, which gives a unit of measure to the range values. It is determined based on: (1) unit id set in the unitOfMeasure parameter; (2) supported unit of the item state presentation; (3) default unit of measurement for the item type with dimension based on your openHAB server regional settings; (4) defaults to empty. For text-based name language support, your server regional settings should be set up; otherwise, you can optionally set the language in the language parameter. For semantic extensions support, set actions in the actionMappings parameter and states in the stateMappings parameter. For actions, you can configure a set request by providing the number value, or an adjust request by providing the delta value in parentheses. For states, you can configure a specific number value or a range by providing a colon delimited list including minimum and maximum values.
    • Supported item type:
      • Dimmer
      • Number
      • Number:Angle
      • Number:Dimensionless
      • Number:Length
      • Number:Mass
      • Number:Temperature
      • Number:Volume
      • Rollershutter
    • Supported metadata parameters:
      • friendlyNames=
        • each name formatted as
        • defaults to item label name
      • nonControllable=
      • supportedRange=
        • defaults to for Dimmer/Rollershutter, for Number* item types
      • presets= (optional)
        • each preset formatted as
      • unitOfMeasure= (optional)
        • defaults to unit of item state presentation or default unit of measurement for the Number:* item types listed below:
          • Number:Angle []
          • Number:Length [ (SI); (US)]
          • Number:Temperature [ (SI); (US)]
      • language=
        • two letter language code [,, ]
        • defaults to your server regional settings, if defined, otherwise
      • actionMappings=
        • each mapping formatted as, based on action type:
          • set =>
          • adjust =>
      • stateMappings=
        • each mapping formatted as, based on state type:
          • range =>
          • value =>
    • Default category: OTHER
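    A sketch of a range component, assuming the min:max:precision format for supportedRange and value=label mappings for presets; the item name and values are illustrative:

    ```xtend
    Dimmer FanSpeed "Fan Speed" {alexa="RangeController.rangeValue" [supportedRange="0:100:10", presets="10=@Value.Low,50=@Value.Medium,100=@Value.High", friendlyNames="@Setting.FanSpeed,Speed"]}
    ```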

    # ToggleController.toggleState

    • Items that represent components of a device that can be turned on or off. Multiple instances can be configured in a group endpoint. By default, to ask for a specific toggle, the item label will be used as the friendly name. To configure it, use the friendlyNames parameter and provide a comma delimited list of different labels (keep in mind that some names are not allowed). Additionally, pre-defined asset ids can be used as labels as well, prefixed with an @ sign (e.g. ). If the component isn't controllable through openHAB, set the nonControllable parameter; that way only status requests will be processed. For text-based name language support, your server regional settings should be set up; otherwise, you can optionally set the language in the language parameter. For semantic extensions support, set actions in the actionMappings parameter and states in the stateMappings parameter. Actions and states values must be set to either or .
    • Supported item type:
    • Supported metadata parameters:
      • friendlyNames=
        • each name formatted as
        • defaults to item label name
      • nonControllable=
      • language=
        • two letter language code [,, ]
        • defaults to your server regional settings, if defined, otherwise
      • actionMappings=
        • each mapping formatted as or
      • stateMappings=
        • each mapping formatted as or
    • Default category: OTHER
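    A sketch of a toggle component (item name and asset id choice illustrative):

    ```xtend
    Switch FanOscillate "Oscillate" {alexa="ToggleController.toggleState" [friendlyNames="@Setting.Oscillate,Rotate"]}
    ```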

    # Supported Group Metadata

    • Functional groups (no group type) can be labelled with one of the Alexa display categories. It can be set using one of two formats: or (e.g. or ).
    • Display categories with underscores can be defined in camel cased format (e.g. => ).
    • Child item categories are ignored and only the group category is used to represent the endpoint.
    • Case is ignored on the category part of the metadata and any value will be made all uppercase before it's passed to the Alexa API.
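    A group endpoint could be sketched as follows, assuming the `Endpoint.<Category>` label format (group and item names hypothetical):

    ```xtend
    Group gUpstairsThermostat "Upstairs Thermostat" {alexa="Endpoint.Thermostat"}
    Number UpstairsTemperature "Temperature" (gUpstairsThermostat) {alexa="TemperatureSensor.temperature"}
    Number UpstairsTarget "Target" (gUpstairsThermostat) {alexa="ThermostatController.targetSetpoint"}
    ```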

    # Supported Metadata Labels

    Item metadata labels translate to a set of capabilities and can be used as a convenience over the longer metadata format configuration. They add additional functions and provide the ability to add customization through additional parameters, which take precedence over the default ones.

    Here are the labels currently supported and what they translate to. Each example shows the metadata label and the full translated metadata.
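    For instance, a label and its translation might be sketched as below; the exact translated capability depends on the item type (item name hypothetical):

    ```xtend
    // Using the shorthand label:
    Switch LightSwitch "Light" {alexa="Switchable"}
    // Roughly equivalent full metadata for a Switch item:
    Switch LightSwitch "Light" {alexa="PowerController.powerState"}
    ```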

    # Switchable

    (capabilities depending on item type)

    # Lighting

    (capabilities depending on item type)

    # Blind

    # Door

    # Lock

    # Outlet

    # CurrentHumidity

    # CurrentTemperature

    # TargetTemperature

    # LowerTemperature

    # UpperTemperature

    # HeatingCoolingMode

    # ColorTemperature

    # Activity

    # Scene

    # EntertainmentChannel

    # EntertainmentInput

    # EqualizerBass

    # EqualizerMidrange

    # EqualizerTreble

    # EqualizerMode

    # MediaPlayer

    # SpeakerMute

    # SpeakerVolume

    # ContactSensor

    # MotionSensor

    # SecurityAlarmMode

    # BurglaryAlarm

    # FireAlarm

    # CarbonMonoxideAlarm

    # WaterAlarm

    # ModeComponent

    # RangeComponent

    # ToggleComponent

    # Regional Availability

    • The availability of a given capability depends on the location setting of the Amazon account under which your Echo devices are registered. Here is the latest list of interface capabilities and their supported locales from the Alexa Skill API:
    InterfacesAUSCANDEUESPFRAGBRINDITAJPNUSA
    BrightnessController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    ChannelController✔️✔️✔️✔️✔️✔️
    ColorController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    ColorTemperatureController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    ContactSensor✔️✔️
    EqualizerController✔️
    InputController✔️✔️✔️✔️✔️✔️
    LockController (lock)✔️✔️✔️✔️✔️✔️✔️✔️
    LockController (unlock)✔️✔️✔️
    ModeController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    MotionSensor✔️✔️
    PercentageController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    PlaybackController✔️✔️✔️✔️✔️✔️✔️
    PowerController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    PowerLevelController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    RangeController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    SceneController✔️✔️✔️✔️✔️✔️✔️✔️
    SecurityPanelController✔️✔️✔️✔️
    Speaker✔️✔️✔️✔️✔️✔️
    StepSpeaker✔️✔️✔️✔️✔️✔️
    TemperatureSensor✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    ThermostatController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️
    ToggleController✔️✔️✔️✔️✔️✔️✔️✔️✔️✔️

    # Display Categories

    • Alexa has certain categories that affect how voice control and the mobile/web UIs display or control endpoints. An example of this is when you create "Smart Device Groups" in the Alexa app and associate a specific Echo or Dot to that group (typically a room). When a user asks to turn the lights on, Alexa looks for devices in that group that have the category "LIGHT" to send the command to.
    • You can override this default value on items by adding it as a parameter to the metadata (e.g. ).
    • List of Alexa categories currently supported from the Alexa Skill API docs:

    | Category | Description |
    |---|---|
    | ACTIVITY_TRIGGER | A combination of devices set to a specific state. Use activity triggers for scenes when the state changes must occur in a specific order. For example, for a scene named "watch Netflix" you might power on the TV first, and then set the input to HDMI1. |
    | CAMERA | A media device with video or photo functionality. |
    | COMPUTER | A non-mobile computer, such as a desktop computer. |
    | CONTACT_SENSOR | An endpoint that detects and reports changes in contact between two surfaces. |
    | DOOR | A door. |
    | DOORBELL | A doorbell. |
    | EXTERIOR_BLIND | A window covering on the outside of a structure. |
    | FAN | A fan. |
    | GAME_CONSOLE | A game console, such as Microsoft Xbox or Nintendo Switch. |
    | GARAGE_DOOR | A garage door. Garage doors must implement the ModeController interface to open and close the door. |
    | INTERIOR_BLIND | A window covering on the inside of a structure. |
    | LAPTOP | A laptop or other mobile computer. |
    | LIGHT | A light source or fixture. |
    | MICROWAVE | A microwave oven. |
    | MOBILE_PHONE | A mobile phone. |
    | MOTION_SENSOR | An endpoint that detects and reports movement in an area. |
    | MUSIC_SYSTEM | A network-connected music system. |
    | NETWORK_HARDWARE | A network router. |
    | OTHER | An endpoint that doesn't belong to one of the other categories. |
    | OVEN | An oven cooking appliance. |
    | PHONE | A non-mobile phone, such as a landline or an IP phone. |
    | SCENE_TRIGGER | A combination of devices set to a specific state. Use scene triggers for scenes when the order of the state change is not important. For example, for a scene named "bedtime" you might turn off the lights and lower the thermostat, in any order. |
    | SCREEN | A projector screen. |
    | SECURITY_PANEL | A security panel. |
    | SMARTLOCK | An endpoint that locks. |
    | SMARTPLUG | A module that is plugged into an existing electrical outlet, and then has a device plugged into it. For example, a user can plug a smart plug into an outlet, and then plug a lamp into the smart plug. A smart plug can control a variety of devices. |
    | SPEAKER | A speaker or speaker system. |
    | STREAMING_DEVICE | A streaming device such as Apple TV, Chromecast, or Roku. |
    | SWITCH | A switch wired directly to the electrical system. A switch can control a variety of devices. |
    | TABLET | A tablet computer. |
    | TEMPERATURE_SENSOR | An endpoint that reports temperature, but does not control it. The temperature data of the endpoint is not shown in the Alexa app. |
    | THERMOSTAT | An endpoint that controls temperature, stand-alone air conditioners, or heaters with direct temperature control. |
    | TV | A television. |
    | WEARABLE | A network-connected wearable device, such as an Apple Watch, Fitbit, or Samsung Gear. |

    # Asset Catalog

    Asset Identifier → Supported Friendly Names
    DeviceName.AirPurifier: Air Purifier, Air Cleaner, Clean Air Machine
    DeviceName.Fan: Fan, Blower
    DeviceName.Router: Router, Internet Router, Network Router, Wifi Router, Net Router
    DeviceName.Shade: Shade, Blind, Curtain, Roller, Shutter, Drape, Awning, Window shade, Interior blind
    DeviceName.Shower: Shower
    DeviceName.SpaceHeater: Space Heater, Portable Heater
    DeviceName.Washer: Washer, Washing Machine
    Setting.2GGuestWiFi: 2.4G Guest Wi-Fi, 2.4G Guest Network, Guest Network 2.4G, 2G Guest Wifi
    Setting.5GGuestWiFi: 5G Guest Wi-Fi, 5G Guest Network, Guest Network 5G, 5G Guest Wifi
    Setting.Auto: Auto, Automatic, Automatic Mode, Auto Mode
    Setting.Direction: Direction
    Setting.DryCycle: Dry Cycle, Dry Preset, Dry Setting, Dryer Cycle, Dryer Preset, Dryer Setting
    Setting.FanSpeed: Fan Speed, Airflow speed, Wind Speed, Air speed, Air velocity
    Setting.GuestWiFi: Guest Wi-fi, Guest Network, Guest Net
    Setting.Heat: Heat
    Setting.Mode: Mode
    Setting.Night: Night, Night Mode
    Setting.Opening: Opening, Height, Lift, Width
    Setting.Oscillate: Oscillate, Swivel, Oscillation, Spin, Back and forth
    Setting.Preset: Preset, Setting
    Setting.Quiet: Quiet, Quiet Mode, Noiseless, Silent
    Setting.Temperature: Temperature, Temp
    Setting.WashCycle: Wash Cycle, Wash Preset, Wash setting
    Setting.WaterTemperature: Water Temperature, Water Temp, Water Heat
    Shower.HandHeld: Handheld Shower, Shower Wand, Hand Shower
    Shower.RainHead: Rain Head, Overhead shower, Rain Shower, Rain Spout, Rain Faucet
    Value.Close: Close
    Value.Delicate: Delicates, Delicate
    Value.High: High
    Value.Low: Low
    Value.Maximum: Maximum, Max
    Value.Medium: Medium, Mid
    Value.Minimum: Minimum, Min
    Value.Open: Open
    Value.QuickWash: Quick Wash, Fast Wash, Wash Quickly, Speed Wash

    Custom asset catalog defined by skill:

    Asset Identifier → Supported Friendly Names
    Setting.Humidity: Humidity
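
    Catalog assets are referenced with an "@" prefix in the item metadata. As a hedged sketch (item names are hypothetical, and the `friendlyNames` parameter is assumed from the skill's metadata conventions), the custom Setting.Humidity asset above might be used like this:

    ```
    // A fan speed control named via a standard catalog asset
    Dimmer FanSpeed "Fan Speed" {alexa="RangeController.rangeValue" [friendlyNames="@Setting.FanSpeed"]}
    // A humidity setpoint named via the skill-defined custom asset
    Number TargetHumidity "Humidity" {alexa="RangeController.rangeValue" [friendlyNames="@Setting.Humidity"]}
    ```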

    # Semantic Catalog

    Semantic Type → Identifiers
    Actions: Close, Open, Lower, Raise
    States: Closed, Open
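
    The semantic identifiers map voice actions and states to item values, so that requests such as "Alexa, open the blind" work without numeric percentages. A hedged sketch follows (the item name and mapping values are illustrative; the `actionMappings` and `stateMappings` parameters are assumed from the skill's metadata conventions):

    ```
    // "Close"/"Open" set absolute positions, "Lower"/"Raise" apply relative deltas,
    // and the "Closed"/"Open" states are reported back from the mapped value ranges
    Rollershutter Blind "Blind" {alexa="RangeController.rangeValue" [supportedRange="0:100:10", actionMappings="Close=100,Open=0,Lower=(+10),Raise=(-10)", stateMappings="Closed=100,Open=0:99"]}
    ```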

    # Unit of Measure Catalog

    Unit Identifier
    Angle.Degrees
    Angle.Radians
    Distance.Yards
    Distance.Inches
    Distance.Meters
    Distance.Feet
    Distance.Miles
    Distance.Kilometers
    Mass.Kilograms
    Mass.Grams
    Percent
    Temperature.Degrees
    Temperature.Celsius
    Temperature.Fahrenheit
    Temperature.Kelvin
    Volume.Gallons
    Volume.Pints
    Volume.Quarts
    Volume.Liters
    Volume.CubicMeters
    Volume.CubicFeet
    Weight.Pounds
    Weight.Ounces

    # Friendly Names Not Allowed

    Friendly Names
    alarm
    alarms
    all alarms
    away mode
    bass
    camera
    date
    date today
    day
    do not disturb
    drop in
    music
    night light
    notification
    playing
    sleep sounds
    time
    timer
    today in music
    treble
    volume
    way f. m.

    # Item Tag v2 Support

    Version 2 (v2) of the Alexa skill used openHAB HomeKit style tags to expose items to Alexa. Version 3 (v3) of the skill still supports these tags by translating them to v3 metadata labels internally. The tags are still required if the items are also exposed to the HomeKit or Google Assistant integrations. Below is the translation of v2 tags to v3 labels.

    # Supported v2 Item Tags

    v2 Item Tag → v3 Metadata Label
    Lighting → Lighting
    Switchable → Switchable
    ContactSensor → ContactSensor
    CurrentTemperature → CurrentTemperature
    CurrentHumidity → CurrentHumidity
    Thermostat → Thermostat
    └ CurrentTemperature → CurrentTemperature
    └ homekit:HeatingCoolingMode → HeatingCoolingMode
    └ homekit:TargetHeatingCoolingMode → HeatingCoolingMode
    └ homekit:TargetTemperature → TargetTemperature
    └ TargetTemperature → TargetTemperature
    WindowCovering → Blind
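
    For illustration (the item names are hypothetical), a v2-style items file using the tags above might look like this:

    ```
    // v2 tags are given in square brackets and translated to v3 metadata labels internally
    Switch KitchenLight "Kitchen Light" ["Lighting"]
    Rollershutter BedroomBlind "Bedroom Blind" ["WindowCovering"]
    Group gThermostat "Thermostat" ["Thermostat"]
    Number BedroomTemp "Current Temperature" (gThermostat) ["CurrentTemperature"]
    Number BedroomSetpoint "Target Temperature" (gThermostat) ["TargetTemperature"]
    ```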

    # Troubleshooting Guide

    Here are some of the most common generic errors you may encounter while using this skill:

    # Command Not Working

    • Alexa will respond with "That command doesn't work on device"
    • It indicates that the command Alexa is trying to send to openHAB doesn't work, either because the intended device is not configured properly to support that command, or because your openHAB items configuration has changed and a previously discovered item may no longer accept certain commands. For example, a dimmer item that was initially set up and later changed to a switch type will cause Alexa brightness control commands to fail.
    • To resolve this error, make sure to update your openHAB items configuration accordingly and run a discovery update either through the Alexa app or just by asking "Alexa, discover" on your echo device.

    # Device Not Found

    • Alexa will respond with "I couldn't find a device or group named device in your profile"
    • It indicates that a device currently set up in your Alexa account no longer exists in your openHAB server, or vice versa.
    • To resolve this error, make sure to run a discovery update either through the Alexa app or just by asking "Alexa, discover" on your echo device. Keep in mind that previously discovered devices that have been removed from the openHAB configuration will show as offline under your Alexa account and not be automatically removed. To prevent potential device name conflicts, it is highly recommended to remove these devices through the Alexa app.
    • If all your Alexa-enabled devices in openHAB aren't discovered or getting updated:
      • Check that your server is available.
      • Look for any relevant errors in your openHAB server logs.
      • If only new devices aren't found, make sure your last Alexa-related config changes are valid.
      • If necessary, stagger the discovery process by adding a couple of devices at a time to isolate the culprit.

    # Device Not Responding

    • Alexa will respond with "device isn't responding, please check its network connection and power supply", and on some rare occasions, no response or acknowledgement will be given.
    • It indicates that the state of one or more of the endpoint properties retrieved from the openHAB server is considered invalid, mostly because it is in either an uninitialized or undefined state.
    • To resolve this error, make sure that all items interfacing with Alexa have a defined state. If necessary, use item sensors, or, if the state is not available in openHAB, set the item state to not be retrievable.
    • For group endpoints, partial property responses will be sent back to Alexa, excluding items with an invalid state. This allows Alexa to acknowledge a command request, assuming that the relevant item state is accurate. However, it will cause Alexa to generate this error when requesting the status of a device configured with an interface supporting that feature. For example, with a thermostat group endpoint, a request to set its mode will succeed, but requesting its mode status will fail if one of its property states, such as its temperature sensor, is not defined in openHAB.
    • This is the default error.
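
    As a hedged sketch of the resolution step above (the item name is hypothetical; the `retrievable` parameter is assumed from the skill's metadata conventions), an item whose state openHAB cannot provide can be marked as non-retrievable:

    ```
    // Alexa will accept commands for this item but will not query its state
    Switch PulseSwitch "Pulse Switch" {alexa="PowerController.powerState" [retrievable=false]}
    ```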

    # Duplicate Device Names

    • Alexa will respond with "A few things share the name device, which one did you want?"
    • It indicates that more than one device on your Alexa account matches the device name requested.
    • To resolve this error, make sure that all the item labels related to your Alexa-enabled items are unique. Additionally, check your Alexa account for devices discovered through other skills or local integrations (e.g. a Philips Hue bridge) that may have overlapping names.

    # Request Not Supported

    • Alexa will respond with "device doesn't support that"
    • It indicates that a requested command is not supported by any of the device's configured interfaces.
    • To resolve this error, make sure that the relevant interfaces are configured properly on the given device. If they are, the response implies a limitation on the Alexa side. This will happen for devices with specific interfaces that don't yet support certain voice requests, such as querying the state of a PowerController or BrightnessController interface.

    # Server Authentication Issue

    • Alexa will respond with "Sorry, something went wrong. To control device, try disabling the skill and re-enabling it from your Alexa app"
    • It indicates that Alexa isn't able to control the given device because of an authentication issue.
    • To resolve this error, if you are using the official skill, just disable and re-enable it through the Alexa app. If you have set up your own custom skill, make sure that the proper credentials were added to the lambda function config.js.

    # Server Not Accessible

    • Alexa will respond with "Sorry, the hub that device is connected to is not responding, please check its network connection and power supply"
    • It indicates that your openHAB server is not accessible through the myopenHAB cloud service.
    • To resolve this error, make sure that your server is running, that your openHAB cloud service is configured with the mode set to "Notifications & Remote Access", and that it shows as online under your myopenHAB account. If you have set up your own custom skill, make sure that the proper server base URL was added to the lambda function config.js.
    • For users running openHAB 2.4, there is a known issue with the Amazon Echo Control binding associated with that release that affects server accessibility. The workaround is to use the latest stable release candidate.
    Source: https://www.openhab.org/docs/ecosystem/alexa/

    The Amazon Alexa API Mashup Contest

    The challenge

    We are happy to announce the Amazon Alexa API Mashup Contest, our newest challenge with Hackster.io. To compete, you’ll build a compelling new voice experience by connecting your favorite public API to Alexa, the brain behind millions of Alexa-enabled devices, including Amazon Echo. The contest will award prizes for the most creative and most useful API mashups.

    Create great skills that report on ski conditions, connect to local businesses, or even read recent messages from your Slack channel. If you have an idea for something that should be powered by voice, build the Alexa skill to make it happen. APIs used in the contest should be public. If you are not sure where to start, you can check out this list of public APIs on GitHub.

    Need Real-World Examples?

    • Ask Automatic if you need gas.
    • Ask Hurricane Center what are the current storms.
    • Ask Area Code where is eight six zero.
    • Ask Uber to request a ride.

    How to Win

    Submit your projects for API combos to this contest for a chance to win. You don't need an Echo (or any other hardware) to participate. Besides, if you place in the contest, we’ll send you an Echo (plus a bunch of other stuff!)

    We’re looking for the most creative and most useful API mashups. A great contest submission will tell a great story, have a target audience in mind, and make people want to use it.

    There will be three winners in each category; the categories are: 1) the most creative API mashup and 2) the most useful API mashup.

    • First place will get a trophy, Amazon Echo, Echo Dot, Amazon Tap, and $1,500 gift card.
    • Second place will get a trophy, Amazon Echo, and $1,000 gift card.
    • Third place will get a trophy, Amazon Echo, and $500 gift card.

    The first 50 people to publish skills on both Alexa and this contest page (other than winners of this contest) will receive a $100 gift card. And everyone who publishes an Alexa skill can get a limited edition Alexa developer hoodie.


    About the Alexa Skills Kit

    The Alexa Skills Kit (ASK) enables developers to easily build capabilities, called skills, for Alexa. ASK includes self-service APIs, documentation, templates and code samples to get developers on a rapid road to publishing their Alexa skills. For the Amazon Alexa API Mashup Contest, we will award developers who make the most creative and the most useful API mashups using ASK components.

    Here’s how to participate in the contest:

    1. Create a free Hackster account

    2. Register to participate in the contest on this page

    3. Create an Amazon Developer account using the same email you used for your Hackster account

    4. Design, build, and submit your Amazon Alexa skill

    5. Submit your project on this contest page

    Project submissions should include:

    • A link to your published Amazon Alexa skill (If applicable)
    • Story and high-quality images
    • Clear project documentation including VUI diagram

    Don't have an Echo?

    The Alexa Skill Testing Tool (EchoSim.io) by iQuarius Media is a browser-based interface to Alexa, the voice service that powers Amazon Echo. EchoSim.io is intended to allow developers who are working with the Alexa Skills Kit (ASK) to test skills in development.

    To use the Alexa Skill Testing Tool

    1. Navigate to https://Echosim.io

    2. Log in with your Amazon account.

    3. Click and hold the microphone button and speak a command as you would on the Echo. For example, say, “Alexa, what's the weather today?”

    4. When you let go of the button, EchoSim processes and responds to your voice command.

    5. To speak your next command, simply click and hold the microphone button again.

    6. Some features of the hardware Amazon Echo, such as streaming music and far-field voice recognition, will not function with this tool.

    Resources:

    Getting started with the Alexa Skills Kit

    Account linking

    Helpful Projects on Hackster:

    Alexa Hurricane Center uses Weather Underground's API to give you info on hurricanes and tropical storms.

    Opening Bell uses Markit's API to give you current stock prices.

    Daily Cutiemals uses Flickr's API to send you pictures of cute animals each day.

    Amazon will select winners based on the following criteria:

    Most Creative API Mashup

    • Use of Voice User Interface (VUI) best practices (10 points)
    • Story/Instruction – Show how you created your project, including images, screenshots, and/or video (20 Points)
    • Project Documentation including VUI diagram (10 Points)
    • Code – Include working code with helpful comments (10 Points)
    • Published Alexa Skill (20 Points) (Skill must be published between the contest start and end dates. Read more about skill submission criteria.)

    Most Useful API Mashup

    • Use of Voice User Interface (VUI) best practices (10 points)
    • Story/Instruction – Show how you created your project, including images, screenshots, and/or video (20 Points)
    • Project Documentation including VUI diagram (10 Points)
    • Code – Include working code with helpful comments (10 Points)
    • Published Alexa Skill (20 Points) (Skill must be published between the contest start and end dates. Read more about skill submission criteria.)

    We can't wait to see your ideas in action.


    Prizes

    We are giving away tens of thousands of dollars in prizes to the top 57 projects! Our judges will pick the best qualifying 57 projects based on the judging criteria outlined in the rules section.

    Most Creative API Mashup

    • 1st Place (1 winner): Trophy, Amazon Echo, Echo Dot, Amazon Tap, $1,500 Gift Card ($1,860 value)
    • 2nd Place (1 winner): Trophy, Amazon Echo, $1,000 Gift Card ($1,180 value)
    • 3rd Place (1 winner): Trophy, Amazon Echo, $500 Gift Card ($680 value)

    Most Useful API Mashup

    • 1st Place (1 winner): Trophy, Amazon Echo, Echo Dot, Amazon Tap, $1,500 Gift Card ($1,860 value)
    • 2nd Place (1 winner): Trophy, Amazon Echo, $1,000 Gift Card ($1,180 value)
    • 3rd Place (1 winner): Trophy, Amazon Echo, $500 Gift Card ($680 value)

    First 50 Skills (50 winners)

    • The first 50 people to publish skills on both Alexa and the Hackster contest page (other than winners of this contest) will receive a $100 gift card.

    Swag

    • When your skill is published, you may also be eligible to receive a free hoodie from Amazon ($20 value). [*See details.](https://developer.amazon.com/alexa-skills-kit/alexa-developer-skill-promotion)

    Source: https://www.hackster.io/contests/alexa-api-contest

    Amazon releases new Alexa API, simplifying custom voice commands

    Education technology companies are now able to build their own custom voice commands for Amazon’s virtual assistant, Alexa, to give students and their families instant access to important educational information, Amazon announced Wednesday.

    Using a new application programming interface, called the Alexa Education Skill API, developers can integrate their tools with schools’ learning management systems, student information systems, classroom management providers and other platforms so parents and students can request information about schoolwork directly from Alexa.

    Previously, schools that wanted to integrate their edtech systems with Alexa using custom voice commands often relied on Amazon’s developers to help design these features. Using Alexa Education Skill, schools can work directly with their systems providers to build new commands.

    “Educators need multiple, innovative ways to reach and involve a student’s family members. Our voice-activated Alexa skill provides instant information for families about how their child is doing throughout the day, including not only in the classroom, but during lunch or recess,” said Stefan Kohler, chief executive officer for Kickboard, an edtech company that helped Amazon design the new API.

    According to Amazon, the new API doesn’t require users to invoke skills by name, allowing for parents and students to speak more naturally when they ask Alexa questions. Once these new commands go live in the coming weeks, according to Amazon, parents will be able to ask Alexa questions such as, “Alexa, what did Kaylee do in school today?” Or, “Alexa, how did Kaylee do on her math test?” Students 13 and older can ask questions like, “Alexa, what is my homework tonight?”

    Amazon says that in addition to providing a simple user experience, the new skills are also easy to build. Amazon’s developers are designing six interfaces, some of which have yet to be released, that can be connected to different systems and retrieve information for parents and students via voice command.

    “We’re committed to helping learners seamlessly integrate their studies into their everyday lives, and our collaboration with Amazon Alexa is another way that we are helping to enhance this experience for learners everywhere,” Kathy Vieira, chief strategy officer at Blackboard, said in a press release.

    The new API is already being used by Kickboard, Blackboard, Canvas, Coursera and ParentSquare, which say they are developing Alexa skills scheduled to go live later this year.

    Source: https://edscoop.com/amazon-releases-new-alexa-api-simplifying-custom-voice-commands/