OPNET built a custom image model on Google Cloud Platform using TensorFlow; they are on the left-hand side of this image. But increasingly, you don't have to do that. There are a variety of domains where Google exposes machine learning services trained with our own data. For example, if you want to transcribe speech, you can use the Speech API instead of having to collect audio data, train a model, and run predictions yourself. There are many such pre-trained models, and they are excellent ways to replace user input with machine learning.

Here's an example of using a pre-trained model. Ocado is the world's largest online-only grocery retailer; it's based in the UK. The way it used to work, a customer would send an e-mail, and each e-mail would get read and then routed to the appropriate department, where it would get read again. That doesn't scale. So Ocado turned to natural language processing, which let them extract the sentiment of the e-mail, the entities (the things being talked about in the e-mail), and even the syntax of the e-mail. This technology helps Ocado parse the body of each e-mail, then tag and route it so that contact center reps can determine its priority and context very efficiently.

But increasingly, customers do not want to go to your website and click on a button. They do not want to send you an e-mail. They want to talk to you interactively to get their questions and concerns answered. Manually answering each call doesn't scale, and so Gartner estimates that in a few years, we will be spending more on conversational interfaces than even on mobile apps.

So does this mean using the Speech API, transcribing the speech, and then trying to make sense of it? No. What I'm showing you here is a high-level conversational agent tool called Dialogflow. Look at this screen: the agent says, "How may I help you today?" and the customer says, "I want to get a pizza with bacon and cheese." Based on that, Dialogflow builds a JSON message that says the customer wants to order a pizza, and adds the toppings corresponding to bacon and cheese. Then the agent asks, "What size do you want?" because that's another requirement. Based on the reply, the size, large, gets added, along with the standard crust. The customer then says, "Add olives," and olives gets added to the toppings. Notice that this is a very conversational interface, and from these conversations a structured JSON message gets built. It is this JSON message, this very structured message, that goes to the rest of the application, which works the same way as before, except that the user input has not come from the customer pointing and clicking their way through a web form but instead has come through a conversational interface.
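To make that concrete, here is a minimal sketch in Python of the kind of structured message that gets built up from the pizza conversation and handed to the rest of the application. The field names and the `fulfill` helper are illustrative assumptions, not the exact Dialogflow schema.

```python
# Hypothetical structured message assembled over the conversation.
# Field names are illustrative, not the exact Dialogflow schema.
order_request = {
    "intent": "order.pizza",
    "parameters": {
        "toppings": ["bacon", "cheese", "olives"],
        "size": "large",
        "crust": "standard",
    },
}

def fulfill(request: dict) -> str:
    # The rest of the application consumes the structured message
    # exactly as if it had come from a web form.
    p = request["parameters"]
    return (f"Ordering a {p['size']} {p['crust']}-crust pizza "
            f"with {', '.join(p['toppings'])}.")

print(fulfill(order_request))
# Ordering a large standard-crust pizza with bacon, cheese, olives.
```

The point is that, because the message is structured, everything downstream of the conversational interface stays unchanged.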
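Going back to the Ocado example, the sentiment, entity, and syntax extraction described above maps onto something like the Cloud Natural Language API. Here is a minimal sketch using its Python client library; the sample e-mail text is made up for illustration.

```python
# Minimal sketch: sentiment and entity analysis on an e-mail body,
# in the spirit of the Ocado example. The sample text is hypothetical.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="My delivery arrived late and two items were missing.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Sentiment: how positive or negative the e-mail is overall.
sentiment = client.analyze_sentiment(document=document).document_sentiment
print(f"sentiment score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")

# Entities: the things being talked about, useful for tagging and routing.
for entity in client.analyze_entities(document=document).entities:
    print(entity.name, entity.type_.name)
```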
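And for the Speech API mentioned at the start, here is a minimal sketch of transcribing an audio file with the pre-trained model, assuming Google Cloud credentials are already configured. The file name, encoding, and sample rate are assumptions for illustration.

```python
# Minimal sketch: transcription with the pre-trained Speech API.
# Assumes a 16 kHz LINEAR16 WAV file named "call.wav" (hypothetical).
from google.cloud import speech

client = speech.SpeechClient()

with open("call.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# No audio collection, no training: the pre-trained model does the work.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```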