[MUSIC] Welcome back to the next live show, I'm Stephanie Wong. Joining us now is Jennifer Lin, product management director. Thank you so much for joining us. >> Thanks for having me. >> So for those viewers who are just learning about Kubernetes and Borg, can you explain what each of those is, how we're making Kubernetes easier to consume, and the significance of how each has evolved at Google? >> Sure, Kubernetes is the open source externalization of our internally developed container orchestration tool called Borg. It's something that we've evolved over the last decade, and we have said publicly that we launch four billion containers a week across the Google environment. Managing that at scale, we've learned a lot about running large, global-scale systems and doing container orchestration at scale in a resilient, reliable way. Kubernetes was open sourced a few years ago, and we've had our own managed version of Kubernetes, Google Kubernetes Engine, for over four years now. So a lot has happened in a very short period of time, but there are lots of lessons learned under the covers. >> We continue to talk about API services and open source. Why is Google differentiating in this space? >> Yeah, I think this industry is changing so fast, with developers writing new software, so it's really important that we get the interfaces right and enable the lifecycle management of services as a system. And I think that's why Kubernetes has gained a lot of traction very quickly. It makes the notion of a service a first-class citizen. We've been very clear about how APIs interact, how we essentially define services, and how we do the lifecycle management of services. And that's all coming with the maturity of things like Kubernetes and Istio. >> Yeah, and this is all enabling our developers. A lot of developers want to work the way that Google does, dive in, and get experience with the way that Google develops software.
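The point about Kubernetes making a service a first-class citizen shows up directly in its API: a Service is a named, declarative object in its own right, decoupled from the containers that implement it. A minimal sketch (the names and ports here are hypothetical, for illustration only):

```yaml
# A Kubernetes Service: the service itself is a first-class API object,
# decoupled from the pods that happen to implement it at any moment.
apiVersion: v1
kind: Service
metadata:
  name: hello-web        # hypothetical service name
spec:
  selector:
    app: hello-web       # routes traffic to any pod carrying this label
  ports:
    - port: 80           # port that clients of the service call
      targetPort: 8080   # port the backing containers actually listen on
```

Applying this with `kubectl apply -f` registers the service once; Kubernetes then keeps its endpoints up to date as pods come and go, which is the lifecycle management "as a system" described above.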
How does open source enable and empower the community? >> Yeah, I think open source has been the reason why a lot of this has moved so quickly. I mean, it feels like overnight Kubernetes has become the container orchestration system of choice. Just a few years ago there was a lot of fragmentation; a lot of folks understood the value proposition of containerization with Docker, but for managing that at scale at the system level, there were various options in open source. I think we've allowed a lot of people to, number one, understand what's going on, and number two, contribute in a way that's meaningful for them and get a lot of the use cases out there in the open so we can iterate on best practices, etc. And yes, a lot of this has been very focused on developer agility, but now, as we move into the maturity of it, the practices for operational administrators and SREs, and making this essentially a production-level system, are a lot of what the community is excited about as well. >> That's great, yeah, open source is really focused on our users, and we also want to build our user experience into our products. Can you talk a little bit more about that? And given that our Cloud users are different and they have different needs, how do we build that into our products as well? >> Yeah, on the product side within Google we do a lot with the user research team to make sure we understand who the personas are, whether it's the developer, the security administrator, the network administrator, the IT professional, or the end user. So we're taking a use-case-driven approach and understanding who the user of the product is, what problem they're trying to solve, how we hide the complexity and make it a lot easier, and how we make sure that they're doing their job well.
This is a software stack where we've talked a lot about decoupling and separation of concerns. You can have a unified stack, but you have to recognize that there are lots of folks who are consumers of that stack. >> So what's your advice for those who want to move to the Cloud with acceleration, not have to do a rip and replace, and still manage their existing lifecycle? How do they bring in Cloud Native? >> Yeah, I think Cloud Native has really been about the openness and the flexibility that we talked about today. There's an architectural set of principles, which I think Kubernetes and things like Istio have put out there in the open source community; there's a framework, and then there's an implementation. So number one, I think the industry is really rallying around the fact that for container orchestration, there's a clear de facto standard. Obviously we have our managed version of that, because there's a lot beyond just the software bits in terms of how you actually do lifecycle management and keep automatic upgrades, patching, security updates, etc., as part of the system, so that developers can move quickly but you can keep the security and stability of the system. >> Right, and one big part of the acceleration is GKE On-Prem, which was announced last August. This is the first time we've actually brought our technology into the data center at this scale. So why now, and why not sooner? >> Yeah, what we found is that everybody understands the benefits of Cloud, but in many cases there are technical and non-technical reasons why they can't move all their workloads into the Cloud overnight. So this was a lot about bringing the best of the Kubernetes stack and what we're doing with Google Cloud to the on-prem environment, and helping people move at their own pace while modernizing in place.
So it's not just about renting compute cycles; it's actually about application modernization, developing new services, adding more business value, etc. So we're pretty excited about that. Now it's not just on our GCP resources; we can essentially run this on third-party servers in a private data center for the enterprise, and, as we talked about today, in other environments as well. >> So speaking of modernization, we loved your demo during the day one keynote. >> Thank you. >> We heard a lot about modernizing in place this year. Can you talk about how Anthos enables you to modernize your applications no matter where they are, as you mentioned, on premises or in the Cloud? >> Yeah, I think that's the nice thing about Kubernetes. With Anthos we're really thinking about open APIs as the interfaces between those environments. Number one, we can hide the complexity but still keep a consistent management environment. For the developers, independent of where those workloads run, they can learn one set of tools and evolve as the industry evolves without picking exactly where that's going to run or what production environment it's going to run in, because many developers don't know. They want to write once, run anywhere. Their consumers may be running in many Clouds. And similarly, the platform administrators don't have to learn a bunch of different vendor tools. This framework is here to stay; we believe it's going to be the standard for many years to come. So even if that's not running in Google's Cloud per se, that skill set is very portable, and many enterprises are trying to hire people who understand Kubernetes. So people are willing to invest in it now, because it's going to be here to stay for many years. >> And on that theme, we're hearing that developers and operators want to be managing at higher levels of the stack, but they still want visibility and control over policy management at a service level.
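The "write once, run anywhere" point rests on Kubernetes exposing the same declarative API on every conformant cluster. As a hedged sketch (the image path and names are hypothetical), the manifest below would apply unchanged to GKE, to GKE On-Prem, or to another conformant cluster:

```yaml
# A Kubernetes Deployment: the same declarative spec works on any
# conformant cluster, so the developer never has to pick the
# production environment up front.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                  # desired state; the cluster converges to it
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: gcr.io/example-project/hello-web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

In practice the same file can be applied to each environment simply by switching `kubectl` contexts, which is what lets platform administrators avoid learning a different vendor tool per environment.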
How is Anthos providing a unified programming model, monitoring, and policy across on-premises and multiple Clouds? >> Yeah, great question. With Kubernetes, obviously, we've really thought about container orchestration and cluster administration at scale. With Istio we're thinking about, assuming most of your microservices are now containerized, how you look after the lifecycle and health of those services and the interactions between them. How do you make sure that those services are authenticated, and that you can put policies around those services without having to manage the underlying infrastructure complexity? And then we showed how we're doing config management and automating configurations at scale in a way that is declarative. So you can define the policies once and push them down to different environments, and you don't have to rewrite them with a lot of toil for each cloud environment. And that's really important for customers that are thinking about, let's say, PCI compliance or governance. The rules around having those security controls in place don't change Cloud to Cloud, but today they're spending a lot of time just to make sure they can ensure audit and compliance in each different environment. So we believe that extra overhead doesn't need to be there, and that was a key push with Anthos. Many of our enterprise customers want to embrace new technologies, but it's not easy to figure out: how do I ensure that I'm still reducing my costs, keeping the efficiency, and addressing what the auditors and compliance regulatory environments are looking for? >> Right, and we actually know that not everybody has the luxury of moving to the Cloud immediately, for whatever reason. How can we help them get there? >> Yeah, I think a lot of it is, I mean, this has been a great week so far; I think the partner community is very engaged.
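The "define policies once, push them down" pattern is itself declarative. As one hedged example (the namespace name is hypothetical), an Istio `PeerAuthentication` resource requiring mutual TLS between services could be committed to a config repository once and synced to every cluster, on-prem or in any cloud:

```yaml
# An Istio policy object: service-to-service authentication declared
# once, independent of the underlying infrastructure.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments        # hypothetical namespace under PCI scope
spec:
  mtls:
    mode: STRICT             # reject any non-mTLS traffic to these services
```

Because the resource is plain declarative config, a config-sync tool can apply the identical policy to each environment, so the audit and compliance story does not have to be rebuilt Cloud by Cloud.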
I think a lot of the use cases and the best practices that are being put out there are based on folks trying things out and sharing best practices; we're trying to take more of a use-case-driven approach. With Anthos, a lot of what we're trying to share is the operational domain knowledge and the best practices from us having run these systems at scale, but also tapping into our hundreds of ecosystem partners who are getting excited and contributing back as this matures, and I think this has happened very quickly in the last couple of years. But it really is about solution delivery, not just technology delivery. >> With Anthos, seeing all these customers moving into production at scale is very exciting. And obviously, as you said, there are security concerns. How are we making that easier for our customers? >> Yeah, security is a major differentiator for us, and for GCP more broadly. And because containerization is a relatively new space, we've been doing a lot with both our customers and partners to do, number one, education and awareness around container security, secure software supply chains, things like binary authorization and CI/CD, and how to make sure that security is baked in way upstream. So that when a developer checks in code, it essentially starts the governance process of security, and it's not a bolt-on afterthought. I think that's a lot of the way Google developers work: when they check in code, they don't have to be experts in security, because the platform takes care of a lot of the complexity of security and governance. >> Thank you so much for your time, Jennifer. >> Thanks for having me. [MUSIC]