Whenever you're working with events in real time, it helps to have a messaging service, and that's what Cloud Pub/Sub is. It's meant to serve as a simple, reliable, scalable foundation for stream analytics. You can use it to let independent applications you build send and receive messages; that way they're decoupled, so they scale independently. The "Pub" in Pub/Sub is short for publishers, and the "Sub" is short for subscribers. Applications publish messages to Pub/Sub, and one or more subscribers receive them. Receiving messages doesn't have to be synchronous, and that's what makes Pub/Sub great for decoupling systems.

Pub/Sub is designed to provide "at least once" delivery at low latency. When we say "at least once" delivery, we mean there is a small chance some messages might be delivered more than once, so keep this in mind when you write your application: your message handling should be idempotent. Cloud Pub/Sub offers on-demand scalability to one million messages per second and beyond; you just choose the quota you want. It builds on the same technology Google uses internally, and it's an important building block for applications where data arrives at high and unpredictable rates, like Internet of Things systems. If you're analyzing streaming data, Cloud Dataflow is a natural pairing with Pub/Sub. Pub/Sub also works well with applications built on GCP's compute platforms. You can configure your subscribers to receive messages on a push or a pull basis; in other words, subscribers can be notified when new messages arrive for them, or they can check for new messages at intervals. A short code sketch of the publish-and-pull flow appears at the end of this section.

Scientists have long used lab notebooks to organize their thoughts and explore their data. For data science, the lab-notebook metaphor works really well, because it feels natural to intersperse data analysis with comments about the results. A popular environment for hosting those notebooks is Project Jupyter. It lets you create and maintain web-based notebooks containing Python code, and you can run that code interactively and view the results.

Cloud Datalab takes the management work out of this natural technique. It runs in a Compute Engine virtual machine. To get started, you specify the virtual machine type you want and which GCP region it should run in. When it launches, it presents an interactive Python environment that's ready to use, and it orchestrates multiple GCP services automatically, so you can focus on exploring your data. You pay only for the resources you use; there is no additional charge for Datalab itself. It's integrated with BigQuery, Compute Engine, and Cloud Storage, so accessing your data doesn't run into authentication hassles. When you're up and running, you can visualize your data with Google Charts or matplotlib, and because there's a vibrant interactive Python community, you can learn from published notebooks. There are many existing packages for statistics, machine learning, and so on.
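To make the publish-and-pull flow concrete, here is a minimal sketch using the google-cloud-pubsub Python client library. The project, topic, and subscription names are placeholders, and it assumes the topic and a pull subscription already exist; treat it as an illustration of the pattern rather than a complete application.

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

# Placeholder names -- substitute your own project, topic, and subscription.
PROJECT_ID = "my-project"
TOPIC_ID = "my-topic"
SUBSCRIPTION_ID = "my-subscription"

# --- Publisher side: send a message to the topic. ---
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# publish() is asynchronous; result() blocks until the service accepts
# the message and returns its server-assigned message ID.
future = publisher.publish(topic_path, b"sensor reading: 42", origin="device-7")
print("Published message ID:", future.result())

# --- Subscriber side: pull messages and acknowledge them. ---
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def callback(message):
    # Delivery is "at least once," so the same message can arrive twice;
    # real handlers should be idempotent (for example, dedupe on
    # message.message_id before applying side effects).
    print("Received:", message.data, dict(message.attributes))
    message.ack()  # Acknowledge so Pub/Sub stops redelivering this message.

# subscribe() starts a background streaming pull; block briefly for the demo.
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=10)
    except TimeoutError:
        streaming_pull_future.cancel()
```

With a push subscription, by contrast, Pub/Sub delivers each message as an HTTPS request to an endpoint you configure on the subscription, so no pulling loop like the one above is needed.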
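And to show the kind of interactive exploration Datalab is built for, here is a minimal sketch of a notebook cell that queries BigQuery and charts the result with matplotlib. It uses the standard google-cloud-bigquery client library against a BigQuery public dataset; the query is purely illustrative, and because Datalab runs with the VM's credentials, no explicit authentication step appears.

```python
from google.cloud import bigquery
import matplotlib.pyplot as plt

# Inside a Datalab VM, credentials are picked up from the environment.
client = bigquery.Client()

# Illustrative query: the ten most common US baby names,
# drawn from a BigQuery public dataset.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

# to_dataframe() waits for the query to finish and returns a pandas
# DataFrame (pandas is included in typical notebook environments).
df = client.query(query).to_dataframe()

# Chart the result inline in the notebook.
df.plot(kind="bar", x="name", y="total", legend=False)
plt.ylabel("total births")
plt.show()
```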