Powering your Apps with Cognitive Services

April Dunnam


What are Cognitive Services?

Cognitive Services are a set of APIs that Microsoft has made available to us, allowing us to utilise a wide range of AI and machine-learning functionality within our applications.

This infographic showcases some of the possibilities:

In this article we are going to focus on a few of these Cognitive Services capabilities:

  • Computer Vision API
      • Describe / analyze an image
      • OCR text from an image
  • Text Analytics API
      • Sentiment analysis (positive, negative)
      • Keyword detection
      • Language detection
  • Face API
      • Face detection (gender, age, etc.)
      • Emotion identification (happy, sad, etc.)
  • PowerApps Spell Checker with Bing and LUIS
      • Using Flow, Bing, and LUIS to carry out a spell check and suggest a corrected sentence

Other useful resources

For further information, take a look at the Microsoft site, where there are some great live demos of each of these pieces of functionality and how they work – https://azure.microsoft.com/en-gb/services/cognitive-services/

For example, go and try the Computer Vision section here – https://azure.microsoft.com/en-gb/services/cognitive-services/computer-vision/

You can click on the various sample images and the service will detect a number of different properties and pieces of information about each image.

You can see in the example below that it has detected a number of different objects; where it is not certain, it gives a confidence rating, e.g. it is 99.7% certain there is a train in the picture.

So your first step to understanding more about what is on offer and how you might use it in your App is to have a click around these various demos.

Getting started in Azure

Signing Up for an Azure Subscription

Cognitive Services run on the Azure platform, so to use them you will require an Azure subscription. If you don’t have one already, you can sign up for a free account at https://azure.microsoft.com/en-gb/free/.

Setting Up Resources in Azure

We need to create a resource for each of the Cognitive Services we want to use. These are configured in the Azure Portal.

Step 1 – You should see a section under ‘Azure Services’ for ‘Cognitive Services’. Click on that as per below:

Step 2 – In the next screen click the ‘+ Add’ button to add a new service.

Step 3 – You will then have a number of options on the screen. If you scroll down to the Cognitive Services section and click ‘More’ on the right-hand side, you will see a full list.

For the purpose of demonstration, let’s add the ‘Speaker Recognition’ cognitive service API. However, you will need to add the relevant one for each of the services you wish to use in your PowerApps.

On selecting it you will be presented with more information about the API, what it is used for, and a “Useful Links” section to show where to get more information on it.

Click ‘Create’ at the bottom of the information screen.

Step 4 – Fill in the details for the service, including Name, details on your subscription, pricing tier, and Resource Group.

NB: If you don’t already have a Resource group setup you can just click ‘Create new’ underneath that field.

Then once you have all the details click the ‘Create’ button at the bottom of the screen.

Your request will be validated, created and deployed, which normally takes 2-5 minutes. Once complete, you can then go directly to your resource.

Step 5 – Now your resource is ready to use. To use it within your PowerApp you will need to grab the ‘Key’. You can do this by clicking ‘Keys’ as shown on the screen below:

Download Postman

The next thing we need to do before making use of these cognitive services in our PowerApps is to download a tool called Postman. This is an API testing tool that allows you to test your API calls and even get the JSON format. The tool is free to download from here – https://www.getpostman.com/downloads/

Once you have Postman installed it should look something like this:

You select the type of request you want to make, enter your API URL along with any parameters for that request, and click the ‘Send’ button; the JSON object that is returned is then shown in the main window.

This is a great way to test and make sure that your calls are working both when building an App or when trying to debug and understand problems.
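For example, to test the Computer Vision ‘describe’ operation directly, you could send a request like the sketch below. The exact host and API version come from your resource’s Overview page in Azure, and the image URL is just a placeholder:

POST https://{region}.api.cognitive.microsoft.com/vision/v2.0/describe
Header: Ocp-Apim-Subscription-Key: {your Key from the Azure portal}
Body (raw JSON): { "url": "https://example.com/photo.jpg" }

If the key and URL are right, the JSON response includes a ‘description’ object with suggested captions and confidence scores.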

So, there we have it: our Azure account is set up and we have the keys for this service (we will need to add any other services we want to use later). Next we need to look at the connectors that let us use these services in our App.

API Connectors

Many of the Cognitive Services APIs have pre-built connectors that we can use in our PowerApp, but not all of them do.

One of them is the ‘Computer Vision API’. It is also worth noting that, at the time of writing, all of these connectors are still in ‘preview’.

Take a read of the reference documentation to understand in more detail how to use this connector and any limits or constraints – https://docs.microsoft.com/en-us/connectors/cognitiveservicescomputervision/.

Demo 1: Image description

So now we have covered all the basics we need to use the Computer Vision API in PowerApps, let’s get on with our PowerApp demo.

This demo will use the ‘Computer Vision API’ for image analysis. The PowerApp will allow the user to upload an image and, once it is uploaded, click a ‘Scan’ button for the Cognitive Service to analyze the image and return a description of it. It should look something like this:

In other examples, the service may be able to tell more details about the image. So let’s look at how this is built.

NB: You will need to ensure you have set up the ‘Computer Vision’ service in Azure to get the key. Once it is added, simply click the ‘Keys’ link in Azure and copy and paste the key when prompted in PowerApps.

Step 1 – In your PowerApp, go into your data sources and add the ‘Computer Vision API’ connector.

Click the ‘View’ menu, then ‘Data sources’, and in the panel that appears click ‘+ New connection’. You can then search for the ‘Computer Vision API’; it should be the first one in the list.

Select it from the list, then you will be prompted to enter your ‘Key’. This is the key we mentioned earlier from your Azure dashboard.

Step 2 – Next we are going to add an ‘Add picture’ control onto our screen to receive the image to be scanned.

Click on ‘Insert’ and from the ‘Media’ menu click ‘Add picture’ control. Position the control as per the example.

You will see in the navigation on the left-hand side that this control actually adds a group of controls, called ‘AddMediaWithButton1’ by default. If you expand the group by clicking on it, you will see it is made up of an image and a button.

Step 3 – Then we need to add and set up our buttons: one to initiate the scan of the image and one to clear out the image.

From the ‘Insert’ then ‘Controls’ menu, add two buttons. Set the text to ‘Scan’ and ‘Clear’, and position below the Picture control as per the example above.

Select the ‘Scan’ button and set the ‘OnSelect’ property as per the screenshot below:

This is where we are going to call the computer vision API with the image we have uploaded, and add the result of the call to a collection called ‘colImageDesc’.

We are using the ‘DescribeImageContent’ method of the API for this, and from that call returning the ‘Description’.
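The screenshot is not reproduced here, but as a rough sketch the formula might look like the following, assuming the image part of the group is called ‘UploadedImage1’ (the exact shape of the response can vary slightly between connector versions):

// OnSelect of the 'Scan' button (sketch - control name assumed)
ClearCollect(
    colImageDesc,
    ComputerVisionAPI.DescribeImageContent(UploadedImage1.Image).description.captions
)

This leaves a collection of caption records, each with a ‘text’ and ‘confidence’ field.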

Select the ‘Clear’ button and set the ‘OnSelect’ property as per the screenshot below:

This code clears out what we have in the picture control, by calling Reset on the ‘AddMediaButton1’ part of that control, and also clears out anything we have in the ‘colImageDesc’ collection.
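As a sketch, that formula could be:

// OnSelect of the 'Clear' button (sketch)
Reset(AddMediaButton1);
Clear(colImageDesc)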

Step 4 – Now we need to add a label to display the description that has come back from our API call.

From the ‘Insert’ menu add a ‘Label’ control and position it to the right of the Picture control. Select the ‘Text’ property of the label and set it as shown in the screenshot below:

This will set the text to be the first item of the ‘colImageDesc’ collection.
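Assuming the collection holds caption records as sketched above, the label formula could be as simple as:

// Text property of the description label (sketch)
First(colImageDesc).text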

Now when you run your PowerApp and upload any image on clicking the ‘Scan’ button, you will get back a description from the call to the Computer Vision API. Give it a go with some different images.

Demo 2: Text Analytics

As the name suggests, this will analyze a body of text and return various pieces of information about it. There are a number of possible use cases that are really quite powerful; here are just a few of them:

  • Employee Feedback App
      • Score suggestions based on positive or negative feedback
      • Maybe tie in with a Flow to automatically alert if an item is posted with a sentiment score of less than 50%
  • Peer Evaluations
  • Content Tagging
      • Automatically tag list items with relevant keywords

In our example, the user will tick checkboxes to decide which of the following pieces of information we want Cognitive Services to analyze and return:

  • Language
  • Sentiment (How positive the comment is)
  • Key Phrases and Keywords

NB: Remember, as with the other demos in this post, you will need to ensure you have set up the service in Azure to get the key, and then added the relevant data source, in this case, Text Analytics.

For this demo we will build something that looks like this:

Step 1 – On the screen, we will need the following controls laid out as you can see above.

  • A ‘Text input’ control.
  • Three separate ‘Checkbox’ controls with related labels to show what each one does.
  • Two ‘Button’ controls.
  • Two Label controls to display the Language and percentage positive rating.
  • A ‘Blank Vertical’ Gallery control for the returned Keywords to be displayed in.

Step 2 – Select your ‘Analyse’ button and set the ‘OnSelect’ property as per the screenshot below:

This checks whether each related checkbox on screen is ticked and, if it is, calls the related Text Analytics method, passing in the text from the text box. Each answer is placed into its own collection so that we can use it in the other controls on the screen to display the results.
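As a rough sketch, assuming the text input is called ‘TextInput1’, the checkboxes are called chkLanguage, chkSentiment, and chkKeyPhrases, and the connector exposes the DetectLanguage, DetectSentiment, and KeyPhrases operations:

// OnSelect of the 'Analyse' button (sketch - control and operation names assumed)
If(chkLanguage.Value,
    ClearCollect(languageCollect,
        {Value: First(TextAnalytics.DetectLanguage({text: TextInput1.Text}).detectedLanguages).name})
);
If(chkSentiment.Value,
    ClearCollect(sentimentCollect,
        {score: TextAnalytics.DetectSentiment({text: TextInput1.Text}).score})
);
If(chkKeyPhrases.Value,
    ClearCollect(phrasesCollect,
        TextAnalytics.KeyPhrases({text: TextInput1.Text}).keyPhrases)
)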

Step 3 – Now to show the data that is returned in the related labels on our screen, starting with the percentage positive and the language.

Select the percentage positive label and set the ‘Text’ property as per the screenshot below:

This rounds the sentiment value returned in the collection and multiplies it by 100 to get a percentage, then concatenates text before and after it to give a meaningful label, e.g. “The text is 90% positive”.
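For example, with the ‘sentimentCollect’ collection from the sketch above:

// Text property of the sentiment label (sketch)
"The text is " & Round(First(sentimentCollect).score * 100, 0) & "% positive"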

Now select the language label and, using a similar approach, set it to – “The Language detected is: ” & First(languageCollect).Value .

Step 4 – Next we need to create the keywords list. This is done using the Gallery, so it is slightly different from the others.

Select the Gallery you inserted earlier and set the ‘Items’ property to be the collection ‘phrasesCollect’ using the right-hand menu as shown in the next picture:

Step 5 – All that is left to do is set the ‘OnSelect’ property of the ‘Clear’ button to clear the screen.

You should be able to work this one out for yourself now. As a hint: you will need to clear the three collections and the text input box, as well as return each of the checkboxes to unticked.
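If you get stuck, a sketch along these lines should do it (Reset returns each control to its default value; control names are the ones assumed earlier):

// OnSelect of the 'Clear' button (sketch)
Clear(languageCollect);
Clear(sentimentCollect);
Clear(phrasesCollect);
Reset(TextInput1);
Reset(chkLanguage);
Reset(chkSentiment);
Reset(chkKeyPhrases)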

Then give it a go and see how it works.

Demo 3: Face API

The Face API can detect one or more human faces in an image and return face rectangles for where in the image the faces are, along with face attributes that contain machine learning-based predictions of facial features. The face attribute features available are Age, Emotion, Gender, Pose, Smile, and Facial Hair, along with 27 landmarks for each face in the image.

For this example, we are going to create a gallery of face images and use the Face API to return the gender and age of the person in each photo. It will look something like this:

Step 1 – Firstly you will need some photos to add to the gallery. For this example, we have them in a SharePoint list and are using the fields ‘Image’ and ‘Title’; you may have other information from your SharePoint list that you want to show in the Gallery.

You will also need to ensure you have set up the service in Azure to get the key, and then added the relevant data source to the PowerApp, in this case, the ‘Face API’. Refer back to the previous sections on how to do that.

Step 2 – From the ‘Insert’ menu select ‘Gallery’ and choose a ‘Vertical Gallery’. Position it on the screen as per the screenshot above.

With the Gallery selected, from the right-hand menu panel set the data source of the Gallery to be your SharePoint list and choose the ‘Image and Title’ layout.

Then add a second ‘Blank Vertical’ Gallery that will show our results data, and position it on the right-hand side of the screen.

Step 3 – Select the first Gallery control with the images and names in, click on the top arrow and choose the ‘OnSelect’ property.

Then enter the formula as per the below screenshot:

This code calls the Face API and adds the results to the ‘faceAPICollection’ collection. The call uses the API’s ‘Detect’ method to return the age and gender attributes.

This code also decides which image to pass to the API, based on the title from the Gallery, so that it passes the picture of the correct person, e.g. a photo of Dennis Bottjer when he is selected from the list. In this example, the photo links are hardcoded.
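As a sketch, with placeholder photo URLs standing in for the hardcoded links, and assuming the connector’s Detect operation accepts an image URL plus an options record:

// OnSelect of the image gallery (sketch - photo URLs are placeholders)
ClearCollect(
    faceAPICollection,
    FaceAPI.Detect(
        Switch(ThisItem.Title,
            "Dennis Bottjer", "https://contoso.com/photos/dennis.jpg",
            "https://contoso.com/photos/default.jpg"
        ),
        {returnFaceAttributes: "age,gender"}
    )
)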

At this point hold the ‘Alt’ key down and select one of the arrows in this image gallery. This will populate the new collection with some data. You can check this by selecting the ‘View’ menu and choosing ‘Collections’.

Step 4 – Now to show that data in the second Gallery control. Choose the second gallery control, and from the right-hand panel set the ‘Items’ to be the name of our collection, in this case, ‘faceAPICollection’. You can also set this using the property formula bar and typing in the name of the collection in the ‘Items’ property.

Next, add a label to the gallery and set the ‘text’ property as shown below:

This is concatenating the ‘faceAttributes’ gender and age returned values together with a label for age, to show it in the label on the control.
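Assuming the results come back as sketched above, the label formula might be:

// Text property of the label inside the results gallery (sketch)
ThisItem.faceAttributes.gender & ", Age: " & ThisItem.faceAttributes.age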

Now run your app and give it a go.

Azure Cognitive Services APIs without Connectors

All the demonstrations so far have used their own connectors in PowerApps, making it pretty easy to consume those services. However, you will notice there are a number of other Cognitive Services APIs that can be used, such as Speech, Spell Check, and QnA Maker.

So how do we use those APIs when there are no connectors to use yet?… There are two main ways of doing this:

  • You can create a custom connector from PowerApps to the Cognitive Services API you want to use.
  • The option that is sometimes preferred is to call these APIs from a Flow. Flow has an HTTP action, so you can make an HTTP GET or POST natively from a flow, then call that Flow, and get data back from it, in PowerApps.

Demo 4: PowerApps Spell Checker

We’re going to build the PowerApps spell checker with Bing Spell Check and LUIS. So first of all, what is LUIS (Language Understanding)?

LUIS can be used in combination with other Cognitive Services APIs. For example, you can use the Bing Spell Check API to check the text you send for spelling errors, then send those errors to LUIS, which can understand them and send you back a suggested, grammatically correct sentence:

Take a look at the documentation and examples of the BING spell check API here – https://azure.microsoft.com/en-gb/services/cognitive-services/spell-check/.

There are a number of examples to work through that show you the power of the API, with corrections and a sample of the JSON returned each time.

Azure Setup

As there is no out-of-the-box connector for this, there are a few extra steps we need to go through, starting in Azure.

Step 1 – In your Azure subscription, go back into the ‘Cognitive Services’ section and select ‘Add’.

Step 2 – Search for ‘Bing Spell Check’ and select it to add the service.

Step 3 – Go to your ‘Keys’ and store ‘Key 1’ somewhere you can retrieve it later. (NB: We will have a number of keys and details to save as we go, so it is best to keep them in Notepad, OneNote, or similar.)

LUIS Setup

Next, we need to set up LUIS and get a Key for that also.

Step 1 – To get started with LUIS you need to go to https://www.luis.ai/ and create a free account.

Step 2 – Once you have created this, it will enable you to get your LUIS key; however, you also need to create a LUIS app so you can pass an app ID into the API call.

In LUIS go to ‘My Apps’ and click the ‘Create new app’ option, giving it a name and description.

Step 3 – There isn’t much you need to do to get your app working with Bing Spell Check, other than clicking the ‘Train’ button to train it. Once this is done, the ‘Publish’ button will be enabled.

Publishing the App gives us the Application ID we need to pass into our API call. To get this, click the ‘Manage’ tab and save the ‘Application ID’ somewhere you can retrieve it later:

We also need the LUIS key. You can get that by clicking on your name in the top right of the screen and clicking ‘Settings’.

From within Settings, copy the ‘Authoring Key’ and the ‘Authoring Region’ and save them for use later.

Understanding the Endpoint URL

We now have all of the Keys and setup we need to get this API working.

There is a great tutorial on the Microsoft site to help walk you through the setup we have just done, as well as some of the further steps – https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-tutorial-bing-spellcheck

Step 1 – That documentation gives the details of the endpoint URL format. The endpoint URL has several values that need to be passed correctly: you must set the spellCheck parameter to true, and you must set the value of bing-spell-check-subscription-key to your Bing key:

https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appID}?subscription-key={luisKey}&spellCheck=true&bing-spell-check-subscription-key={bingKey}&verbose=true&timezoneOffset=0&q={utterance}

This URL is where we will substitute in the values we saved earlier. It needs updating as follows:

  • {region} = LUIS ‘Authoring Region’
  • {appID} = LUIS ‘Application ID’
  • {luisKey} = LUIS ‘Authoring Key’
  • {bingKey} = Bing Spell Check ‘Key 1’
  • {utterance} = the text we want to pass in

Step 2 – Now we want to test this. We mentioned earlier in this post that Postman is a great way to test things like this; however, you can also use your browser.

To use the browser, simply prepare the URL manually as described above (it will be built more dynamically when we build the Flow) and paste it into your browser. If it is all working, you should get something like this:

If there is an issue with your URL, you will get an error, e.g. if a key were wrong you would get an ‘Invalid Key’ error.

Step 3 – From the same documentation link above, take a copy of the JSON format that LUIS returns and save it somewhere you can retrieve it later. For ease of reference, here it is:

{
  "query": "How far Is the mountain?",
  "alteredQuery": "How far Is the mountain?",
  "topScoringIntent": {
    "intent": "Concierge",
    "score": 0.183866
  },
  "entities": []
}

We should now be ready to build the Flow.


Build the Flow

We are going to build the Flow so it is triggered from text added in a PowerApp. It will run the text through LUIS and return the output back to the PowerApp. The finished Flow will look like this:

Step 1 – Create a new blank Flow and set the trigger to be PowerApps. Click the ‘+ New step’ button below the PowerApps trigger and add three separate ‘Compose’ actions. These Compose actions will store the different keys to pass into the API call (shown on the screenshot above).

Name them using the ‘…’ menu on the right-hand side, and in each ‘Inputs’ field paste in the corresponding key you saved previously, as shown:

  • Key – for the Bing subscription key: ‘Key 1’
  • AppID – for the LUIS App ID: ‘Application ID’
  • LuisKey – for the LUIS key: ‘Authoring Key’

Step 2 – Add a new ‘HTTP’ action to the Flow to build our endpoint URL.

This is a ‘GET’ method, as shown below, using the endpoint URL we discussed earlier with the relevant sections replaced or completed using the ‘Compose’ actions:

You will notice that when you edit the URL you have the option to type, or to add items from the ‘Dynamic content’ section; this is how we use the Compose actions we just created in the Flow, as you can see here:

The original endpoint URL is updated as follows:

  • {region} = manually enter the LUIS ‘Authoring Region’.
  • {appID} = LUIS ‘Application ID’, from the ‘AppID’ Compose action we just created.
  • {luisKey} = LUIS ‘Authoring Key’, from the ‘LuisKey’ Compose action we just created.
  • {bingKey} = Bing Spell Check ‘Key 1’, from the ‘Key’ Compose action we just created.
  • {utterance} = the text we want to pass in. We replace this with ‘Ask in PowerApps’ dynamic content, from the bottom of the Dynamic content menu, so the text comes from the PowerApp.

Step 3 – Add a new ‘Parse JSON’ action to the Flow.

This reads the response and parses it. Set the ‘Content’ to be the ‘Body’ from our HTTP call, as shown below:

To set the ‘Schema’ we are going to use the sample JSON we saved earlier, in the ‘Understanding the Endpoint URL’ section.

Simply click the ‘Use sample payload to generate schema’ option at the bottom of the Schema box and in the pop-up window paste in the sample JSON then click ‘Done’:

This generates our sample schema for us.

Step 4 – Add a new ‘Respond to PowerApps’ action to the Flow.

Add an output, call it ‘CorrectedText’, and pass in the ‘alteredQuery’ value from our Parse JSON action, as shown:

This is the spell-checked and correctly formatted text returned from the API call to LUIS.

Step 5 – Ensure you have named your Flow something recognisable, such as ‘SpellCheck’, so you know which one to call from the PowerApp, and then save it.

Build the PowerApp

Now we are ready to build our PowerApp. This PowerApp will have the following controls:

  • Text Input 1 – This is to enter the text you want to send to the Flow to be checked.
  • Text Input 2 – This is to show the returned text from the Flow call.
  • Button – To make the call to the Flow.

Step 1 – Add the components and position them to look something like the below screenshot:

Step 2 – Select the button and choose the ‘OnSelect’ property to enter the code that is going to call the flow.

Click the ‘Action’ menu and select ‘Flows’. The Flows you have created should show up in the list; select your ‘SpellCheck’ Flow, and this will give you the starting formula to call it.

Update the code as per the screenshot below. This will call the Flow and add the returned corrected text to a variable called ‘varCorrectedText’:
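As a sketch, assuming the Flow was saved as ‘SpellCheck’ and the input text box is ‘TextInput1’ (note that the output you named ‘CorrectedText’ may appear lower-cased in the formula bar; use whatever IntelliSense suggests):

// OnSelect of the button (sketch)
Set(varCorrectedText, SpellCheck.Run(TextInput1.Text).correctedtext)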

Step 3 – Select the second text input box and set its ‘Default’ property to varCorrectedText.

This means when the button is clicked, it updates the variable and the text is shown in the second text input.

Step 4 – All that remains is to reset the text box and variable when the screen is shown. To do this, select the screen in the left-hand panel, choose the ‘OnVisible’ property, and set it as shown below:
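As a sketch:

// OnVisible of the screen (sketch)
Set(varCorrectedText, "");
Reset(TextInput1)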

Step 5 – Give it a go!

The PowerApp is very simple, as Flow is doing all the hard work and processing. Run the PowerApp and give it a go by entering some sample text containing errors into the left-hand text input and clicking the button.

This PowerApp could be used for various use cases; here are some ideas:

  • Education – a spelling App
      • Help teach children how to spell words
  • Auto-proofing any App with input controls
      • Mistakes happen – you can’t rely on end users to have perfect grammar or spelling
      • Use these features to automatically fix errors in the background prior to submission


