In the video below from Deconstructing Chatbots, Priyanka Vergadia shows how to integrate Dialogflow with Google Cloud ML APIs. You will also learn how to use the Dialogflow API in your own custom app.
A full transcript of the video is below. Enjoy!
PRIYANKA VERGADIA: Welcome to Deconstructing Chatbots. I am Priyanka Vergadia, and in this episode, we will learn how to integrate Dialogflow agents with ML APIs.
Depending on your conversational AI use case, your users may want to upload an image to the chat application and expect a response that guides them further in the conversation. This is exactly what we will learn today: how to create a Dialogflow agent that fulfills an image request by calling the Google Vision API. We will use the same Django front-end application that we created in the previous episode and add image-upload functionality to it. If you have not watched the previous episode, I highly recommend you start there.
All right. So here's what we are building. The user lands on our chat application and interacts with it. The chatbot responds with a greeting describing the two things it can do: setting up appointments and understanding invoices. We've already seen setting up appointments in the previous episodes, so we'll skip that for now and ask for help understanding our bill. The user is prompted to upload an image of their invoice. Once they upload that image, we display it on our front end and return the text detected in the image by processing it through the Vision API.
Now, from an architectural standpoint, here's how this works. The chat app is served on App Engine. The app is built in Python using Django, a free and open-source Python-based web framework. App Engine interacts with Cloud SQL for any database requests. When users upload files, we save them to Google Cloud Storage. And the app interacts with our Dialogflow agent via the Dialogflow API.
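As a rough sketch of that last hop, here is how the Django front end might forward one user message to the agent. This is a minimal illustration, not the episode's actual code: the helper name is an assumption, and the session client (shaped like google-cloud-dialogflow's `SessionsClient`) is injected so the sketch stays self-contained.

```python
def detect_intent_text(session_client, project_id, session_id, text,
                       language_code="en"):
    """Send one text turn to a Dialogflow agent and return its reply.

    `session_client` is assumed to behave like
    google.cloud.dialogflow.SessionsClient; it is passed in (rather than
    constructed here) so the function is easy to stub out in tests.
    """
    # Dialogflow sessions are addressed by a resource path of this shape.
    session = f"projects/{project_id}/agent/sessions/{session_id}"
    response = session_client.detect_intent(
        request={
            "session": session,
            "query_input": {
                "text": {"text": text, "language_code": language_code}
            },
        }
    )
    # The agent's configured response (or fulfillment result) comes back here.
    return response.query_result.fulfillment_text
```

A Django view would call this once per chat turn and render the returned text back into the conversation window.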
The example uses the appointment scheduler that we created in the third episode, and enhanced in the eighth episode, to integrate with Google Calendar.
Today, we are enhancing it further by adding an integration with Vision API. Now, with that background, let's dive in and see how to make it work. Check out the link in the description below to download the source code for this application. Make sure to follow the setup steps.
I'm going to show you the main part that makes the integration with the Vision API possible, but you can read the detailed setup instructions in the readme file. First, let's look at the Dialogflow console. I'm still using the appointment scheduler agent created in episode three. We will start from the welcome intent and modify the response so that users clearly know our chatbot can handle appointment scheduling and invoice-related questions. Next, create an intent to capture users' input if they are contacting us about their invoice, and provide a static response telling the user to upload a file. Then create an intent that takes the file input. Once the user has provided the file, we extract the file name using a custom entity. If you don't know what custom entities are, check out episode six. And for the response, enable fulfillment. Check out the front-end Django code to see how the plumbing from the front end to Dialogflow is handled.
Now, let's see the fulfillment.js file. Scroll down to the intent map section and define a new intent map entry for file upload. In the applyML function, pass the agent and declare a variable to get hold of the file name the user has uploaded. We can do that with agent.parameters.filename. Then call a function to handle the Vision API request. You pass the bucket name and the file name to create the GCS bucket path. Then you create the client to call the Vision API. On this client, we call the text detection method and pass the GCS bucket path from which the file will be pulled. At that point, we have a resulting JSON with all the detected text.
Since, for this example, we are only interested in the text descriptions, I created a list of all of them and returned it as is.
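The episode's fulfillment is Node.js, but the same two steps (build the `gs://` path, then keep only each annotation's description) can be sketched in Python. The function name and the injected client (shaped like google-cloud-vision's `ImageAnnotatorClient`) are assumptions, not the actual fulfillment code.

```python
def detect_invoice_text(vision_client, bucket_name, file_name):
    """Run text detection on a file in Cloud Storage and return the
    detected text descriptions.

    `vision_client` is assumed to behave like
    google.cloud.vision.ImageAnnotatorClient; it is injected so the
    function can be exercised without real credentials.
    """
    # Vision can read the image directly from Cloud Storage via a gs:// URI.
    gcs_uri = f"gs://{bucket_name}/{file_name}"
    response = vision_client.text_detection(
        image={"source": {"image_uri": gcs_uri}}
    )
    # Keep only the text of each annotation; the first entry is typically
    # the full detected block, followed by the individual words.
    return [annotation.description for annotation in response.text_annotations]
```

The list this returns is what the fulfillment hands back to the agent as the chatbot's response.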
Back in our intent map, we are taking the return value and sending it as a response to the user. As you can imagine, the possibilities are endless with an integration like this. Once you have it working, you can change the Vision API methods to identify labels, landmarks, faces, and more. You can also use the same concept to integrate with other ML APIs, like Natural Language, Translation, Jobs, and even custom ML APIs that you built through AutoML.
I cannot wait to hear how you are applying this integration. Tell us all about your use case in the comments below. Today we learned how to create a custom Dialogflow front end on App Engine using the Django framework and integrate it with the Vision API on the back end.
I hope to see you in the next episode of Deconstructing Chatbots. If you liked this video and would like to see more such content, don't forget to like and subscribe to our channel.