There's the Google Translation API that's also enabled already.
And the Natural language API,
there it is, that's enabled as well.
And the Speech API, let's just make sure it's also enabled.
And it is.
So great. So all of the APIs are enabled.
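(As an aside, you can enable the same APIs from the command line instead of clicking through the console. The service names below are my assumption for the current gcloud tooling; you can verify them with `gcloud services list --available`.)

```python
# Run in a Datalab/Jupyter cell; the '!' prefix executes a shell command.
# Enables the Translate, Vision, Natural Language, and Speech APIs.
!gcloud services enable translate.googleapis.com vision.googleapis.com language.googleapis.com speech.googleapis.com
```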
So, let's go ahead and get the Credentials.
So, we'll go down to the APIs and Services,
and get the Credentials.
So, we already have an API key,
and I went ahead and used it.
Or we can go ahead and say,
Create credentials, choose API key,
create a brand new key,
copy that, and there we go.
So, that's our API key.
And now, we're ready to go into the ML APIs.
And in our notebook, where it says API key,
I'll replace it with the new API key that we have, and run it.
So, I can either click the Run button,
or press Shift+Enter.
So, let's go ahead and
install the Python client.
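Installing the client is a one-line notebook cell; it looks roughly like this:

```python
# Run in a Datalab/Jupyter cell: install the Google API Python client,
# which we'll use to call all four ML APIs.
!pip install --upgrade google-api-python-client
```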
So having done that,
let's go ahead, and run the Translate API.
And you'll notice the inputs; one of them is
"is it really this easy?"
And you see the translation in French because we asked for the target to be French.
Let's change the target to be 'es',
that's Español (Spanish), and run it.
And now, what we get back is Spanish.
So, how does this work?
We went ahead and specified the inputs as an array of strings,
and asked the service to translate from English
to whichever language we want, passing in those inputs.
And what we got back is the outputs, the translated strings.
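Putting that together, the Translate cell looks roughly like this. The API key value and the extra input strings are placeholders for whatever is in your notebook:

```python
from googleapiclient.discovery import build

APIKEY = 'AIza...'  # placeholder: paste your own API key here

# Build a client for version 2 of the Translate API, authenticated by API key.
service = build('translate', 'v2', developerKey=APIKEY)

# The inputs are an array of strings to translate.
inputs = ['is it really this easy?', 'amazing technology', 'wow']

# Ask for a translation from English ('en') to French ('fr');
# change target to 'es' to get Spanish instead.
outputs = service.translations().list(source='en', target='fr', q=inputs).execute()

for inp, outp in zip(inputs, outputs['translations']):
    print(u'{0} -> {1}'.format(inp, outp['translatedText']))
```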
Similarly, we can go ahead and invoke the Vision API.
And to invoke the Vision API,
we need an image.
And in this case, the image is of a street sign.
I don't know Chinese, so I don't know exactly what it says.
Let's see what it says. So, we'll go ahead and put this on Cloud Storage.
This image has actually been made public, so we don't have to change anything here.
We can go ahead and read,
we can ask the Vision API to read that image,
and tell us what text is in it.
So we can go ahead and run that.
And at this point, we get back the JSON output.
So again, what we're doing here is invoking version one of the Vision API,
passing in the GCS image URI.
GCS meaning, again, Google Cloud Storage.
We have this image on Cloud Storage.
We could also pass the image as part of
our request, but having it on Cloud Storage makes it faster,
because we don't have to upload all of that image data along with our request.
And we are asking it to do text detection,
and what comes back is all of the text in this image,
along with the language ZH meaning Chinese,
and a bounding polygon of each of those pieces of text.
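In code, the Vision request is a single annotate call. A sketch of what the cell might look like, with the image path as a stand-in for the lab's actual file:

```python
from googleapiclient.discovery import build

APIKEY = 'AIza...'  # placeholder: your API key

# Hypothetical path: a publicly readable street-sign image on Cloud Storage.
IMAGE = 'gs://my-bucket/sign.jpg'

vservice = build('vision', 'v1', developerKey=APIKEY)

# Ask for TEXT_DETECTION on the image, referencing it by its GCS URI
# instead of uploading the image bytes along with the request.
request = vservice.images().annotate(body={
    'requests': [{
        'image': {'source': {'gcsImageUri': IMAGE}},
        'features': [{'type': 'TEXT_DETECTION', 'maxResults': 3}]
    }]
})
responses = request.execute(num_retries=3)
print(responses)  # JSON output: detected text, locale, bounding polygons
```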
We could, of course, go ahead and take the first piece of it,
take the text annotation,
and get the language from the locale field, which we saw was zh.
And then, we could go ahead and print out what we got:
the foreign language, zh,
and the foreign text, which is all of this.
So now, what we can do is go ahead and run it.
Of course, the result has already been drawn
here, so I can click on this cell and clear it.
And now, you can run it again,
and you can make sure that the output you're seeing is yours,
and we see that the Chinese text has now been translated into English.
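The cell doing that work pulls the locale and the detected text out of the Vision response and hands them to the Translate service from earlier; a sketch, assuming the `responses` and `service` objects from the previous cells:

```python
# First text annotation: the full detected text plus its locale.
foreignlang = responses['responses'][0]['textAnnotations'][0]['locale']
foreigntext = responses['responses'][0]['textAnnotations'][0]['description']
print(foreignlang, foreigntext)  # e.g. 'zh' and the Chinese sign text

# Translate the detected text into English using the same Translate client.
result = service.translations().list(
    source=foreignlang, target='en', q=[foreigntext]).execute()
print(result['translations'][0]['translatedText'])
```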
The other thing that we can do is use the Language API.
So here, we have a set of quotes.
And what we want to do is to look at the sentiment associated with these quotes.
So again as before,
let's go ahead and clear the cell and run it.
So in this case,
we are printing out the polarity and the magnitude
associated with each of these quotes.
The polarity is positive
if the sentiment is positive,
and negative if the sentiment is negative.
And that makes sense.
If you say, "to succeed, you must have tremendous perseverance,"
that's a very positive thing.
But if you say, for example,
"when someone you love dies,"
well, that's a pretty negative thing.
So the polarity is negative.
And the magnitude is an indicator of how
often strongly worded language occurs in the text.
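The sentiment cell looks roughly like this. Note that the lab's notebook used the v1beta1 endpoint, which reports 'polarity'; in the current v1 API the same field is called 'score'. The quotes here are illustrative:

```python
from googleapiclient.discovery import build

APIKEY = 'AIza...'  # placeholder: your API key

lservice = build('language', 'v1beta1', developerKey=APIKEY)

quotes = [
    'To succeed, you must have tremendous perseverance.',
    'When someone you love dies, it is a terrible loss.'
]
for quote in quotes:
    response = lservice.documents().analyzeSentiment(
        body={'document': {'type': 'PLAIN_TEXT', 'content': quote}}).execute()
    sentiment = response['documentSentiment']
    # polarity: sign of the sentiment; magnitude: strength of the wording.
    print('POLARITY=%s MAGNITUDE=%s for %s' % (
        sentiment['polarity'], sentiment['magnitude'], quote))
```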
The final piece that we're showing here is the Speech API.
And as before, we have an audio file loaded into Cloud Storage, and we
are asking for the speech in it to be transcribed into text.
So, we can go ahead and run that,
and we get back a JSON response.
And what the JSON response says, with very high confidence, is that the speech in that audio file is,
"How old is the Brooklyn Bridge?"
So what we have done in this lab is use
Datalab and the Python client to invoke machine learning models.
So remember that these are not machine learning models that we had to build.
These are machine learning models that we could just go ahead and use.
We could incorporate these machine learning models into our own applications.
The thing to recognize here is that
not every ML task you need to do has to be done from scratch.
If what you want to do is recognize text in images,
you might just use the Vision API.