The next section of the exam guide is about analyzing and modeling. Analyzing is looking for patterns and gaining insight from data. Modeling is about identifying patterns that can be used to categorize or recognize new data, or to predict values or states in advance. In the first section, we'll look at analyzing data and enabling machine learning.

The way these two are related is that a lot of businesses begin by analyzing data. After they get value from the analysis, they start to look deeper and often realize that they have unstructured data that could provide business insights. That leads them to want to enable machine learning, so this is a natural adoption path.

Make sure you're familiar with each pre-trained ML model. For example, what are the three modes of the Natural Language API? The answer is sentiment analysis, entities, and syntax. Would the Natural Language API be an appropriate tool for identifying all of the locations mentioned in a document? Yes, it might be useful for that purpose, because entity analysis tags location entities.

Pre-trained models can turn apparently meaningless data into meaningful data. I think the Translation API is a good example. If you have text in a language you don't understand, the meaning contained in the data is not available to you. Use the Translation API to convert it to a language you do understand, and suddenly there's meaning and value in it. Pre-trained models create value from the spoken word, from text, and from images, which are common sources of unstructured data.

If none of the pre-trained models will work for you, you can use TensorFlow and Cloud Machine Learning Engine to create your own models.
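To make the locations question concrete, here's a minimal sketch of pulling location entities out of a Natural Language API entity-analysis result. The `sample_response` dict below is hand-written for illustration in the shape the REST `documents:analyzeEntities` method returns; in practice you'd get it back from a POST to the API with your document.

```python
def extract_locations(response):
    """Return the names of all entities the API tagged as LOCATION."""
    return [
        entity["name"]
        for entity in response.get("entities", [])
        if entity["type"] == "LOCATION"
    ]

# Illustrative response for a document mentioning an organization and two places.
# A real response also carries fields like "mentions" and "metadata" per entity.
sample_response = {
    "entities": [
        {"name": "Google", "type": "ORGANIZATION", "salience": 0.6},
        {"name": "Mountain View", "type": "LOCATION", "salience": 0.3},
        {"name": "California", "type": "LOCATION", "salience": 0.1},
    ]
}

print(extract_locations(sample_response))  # ['Mountain View', 'California']
```

The point for the exam is simply that entity analysis classifies each entity by type, so filtering on the LOCATION type answers "what places does this document mention?" without training any model of your own.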