Cloud Vision is an API that lets you perform complex image detection with a single REST API request. Before we get into the details, I want to show you an example of a company that's using Cloud Vision in production. Let's talk about Giphy. Giphy is an app that lets you search for GIFs across the web and share them on various social channels. As many of you know, GIFs often have text in them, and Giphy uses the Vision API to improve their search to account for that text. They used the Vision API's optical character recognition feature, or OCR, to extract text from GIFs and surface it in their search results, and they found that this greatly improved their search experience. You can learn more about how they did this by following the link on the slide.

Now let's talk about all the different things you can do with the Vision API. At its core, the Vision API provides label detection, which tells you what an image is a picture of. For the image here, it might return elephant or animal. Then we have web detection, which goes a step further: it searches for similar images across the web and extracts content from the pages where those images are found to return additional details about your image. Then we have OCR, which is the Giphy use case I talked about on the previous slide. Using OCR, or optical character recognition, the Vision API will extract text from images. It'll tell you where that text was found, and it can even tell you what language that text is in. Then we have logo detection, which identifies company logos in an image. Landmark detection can tell you whether an image contains a common landmark, and it will also give you the latitude and longitude coordinates of that landmark. Crop hints can help you crop your photos to focus on a particular subject. Finally, the Vision API provides explicit content detection, which is really useful for any website or app that has user-generated content. Instead of having somebody manually review whether an image is appropriate, you can automate this with an API call to the Vision API, so you only have to review a subset of your images.

You can try out all of our machine learning APIs directly in the browser before you start writing any code. In this example, you can upload your images on the Vision API product page and see the response you get back from the Vision API. Let's try this out in a demo. So if we go to the product page for the Cloud Vision API, here we can upload an image and see how the Vision API responds. I'm going to click on this to select my image, and I'm going to choose a selfie that I took a couple of months ago on a trip to Japan. Here we can see everything the Vision API is able to find in our image. It's actually able to identify the exact landmark that I'm standing in front of with 71 percent confidence. The face detection feature of the Vision API is able to identify where my face is in the image, and it's also able to detect an emotion: it detects that joy is likely. We can also see the labels returned for this image, along with the additional entities returned from the web detection endpoint of the Vision API. We also get the dominant colors in the image. And with safe search, the API tells us whether the image is appropriate or not, broken down into different categories: adult looks for pornographic content, spoof looks for meme-type content, medical looks for graphic surgical content, and violence looks for bloody content.
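By the way, if you'd rather request these same annotations from your own code instead of the browser, here's a minimal sketch using the Vision API's Python client library. It assumes you've installed the google-cloud-vision package and set up application default credentials, and the file name selfie.jpg is just a placeholder:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

# "selfie.jpg" is a placeholder; use any local image file.
with open("selfie.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: what is this a picture of?
for label in client.label_detection(image=image).label_annotations:
    print(label.description, label.score)

# Landmark detection: the landmark's name plus its lat/long coordinates.
for landmark in client.landmark_detection(image=image).landmark_annotations:
    print(landmark.description, landmark.locations[0].lat_lng)

# Face detection: where faces are, and likelihoods for emotions like joy.
for face in client.face_detection(image=image).face_annotations:
    print(face.bounding_poly, face.joy_likelihood)

# Safe search: likelihood of adult, spoof, medical, and violent content.
safe = client.safe_search_detection(image=image).safe_search_annotation
print(safe.adult, safe.spoof, safe.medical, safe.violence)
```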
Back in the demo, inappropriate content in each of these categories is, unsurprisingly, very unlikely for this image. Finally, we can see the full JSON response from the API; if we look here, we can scroll through the entire response. So I encourage you to try this out with your own images by going to cloud.google.com/vision.
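And because everything here really is just a single REST API request, here's a rough sketch of what that raw request could look like. The endpoint and request shape follow the standard v1 images:annotate format, but YOUR_API_KEY and selfie.jpg are placeholders you'd replace with your own values:

```python
import base64
import json

import requests  # third-party HTTP library

API_KEY = "YOUR_API_KEY"  # placeholder; use your own API key
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

# The REST API expects the image bytes base64-encoded in the request body.
with open("selfie.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [
            {"type": "LABEL_DETECTION", "maxResults": 10},
            {"type": "LANDMARK_DETECTION"},
            {"type": "FACE_DETECTION"},
            {"type": "SAFE_SEARCH_DETECTION"},
        ],
    }]
}

response = requests.post(ENDPOINT, json=body)
# Print the full JSON response, like the one shown at the end of the demo.
print(json.dumps(response.json(), indent=2))
```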