You have an awesome idea and you've built a cool application. Like any entrepreneur or developer, you really hope that your app will go viral and lots of users will want to use it. Or maybe your enterprise has a business-critical application that must be highly available and handle millions of transactions reliably. In the module Best Practices for Application Development, you learn design and development techniques for building applications that are secure, scalable, resilient, and highly available.

Here are some key areas we'll discuss. Implement microservices-based architectures with loosely coupled services that can be monitored and can fail gracefully on error. Consider issues such as compliance with local laws and user latency when deciding where to deploy your infrastructure. Be sure to implement build and release systems that enable continuous integration and delivery. While it's crucial that you have repeatable deployments, it's also important that you have the ability to roll back to a previous version of the app within a few minutes if you catch a bug in production. Finally, your journey to the Cloud is not an all-or-nothing scenario. Depending on your organization's maturity and comfort level with the Cloud, you can re-architect and migrate legacy applications incrementally. In the module, we'll discuss all these areas and more.

Applications that run in the Cloud must be built for global reach, scalability, high availability, and security. Your application should be responsive and accessible to users across the world. It should be able to handle high traffic volumes reliably. The application architecture should leverage the capabilities of the underlying Cloud platform to scale elastically in response to changes in load. Your application and the underlying infrastructure should implement security best practices.
Depending on the use case, you might be required to isolate your user data in a specific region for security and compliance. In this presentation, you'll learn best practices related to code and environment management, design and development, scalability and reliability, and migration.

Let's start with managing your application's code and the environment. Store your application's code in a code repository such as Git or Subversion. This will enable you to track changes to your source code and set up systems for continuous integration and delivery. Don't store external dependencies such as JAR files or external packages in your code repository. Instead, depending on your application platform, explicitly declare your dependencies with their versions and install them using a dependency manager. For example, for a Node.js application you can declare your application dependencies in a package.json file and later install them using the npm install command. Separate your application's configuration settings from your code. Don't store configuration settings as constants in your source code. Instead, specify configuration settings as environment variables. This enables you to easily modify settings between development, test, and production environments.

Instead of implementing a monolithic application, consider implementing or refactoring your application as a set of microservices. In a monolithic application, the codebase becomes bloated over time. It can be difficult to determine where code needs to be changed. Packages or components of the application can have tangled dependencies. For example, in this monolithic application, the UI, order, payment, shipping, and other components are all part of a single large codebase. The entire application needs to be deployed and tested even if a change is made to a small part of the codebase. This increases the effort and risk when making feature changes and bug fixes.
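The environment-variable approach described above can be sketched in Python. The setting names (`DB_HOST`, `DB_PORT`, `DEBUG`) and their fallback defaults are hypothetical, for illustration only:

```python
import os

# Read configuration from environment variables instead of hard-coding
# constants, so the same code runs unchanged in development, test, and
# production. These setting names and defaults are hypothetical.
DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

def database_url():
    """Build a connection string from environment-supplied settings."""
    return f"postgresql://{DB_HOST}:{DB_PORT}/appdb"
```

Because the values come from the environment, switching from a test database to a production one is a deployment-time change, not a code change.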
Microservices enable you to structure your application components in relation to your business boundaries. In this example, the UI, payment, shipping, and order services are all broken up into individual microservices. The codebase for each service is modular. It's easy to determine where code needs to be changed. Each service can be updated and deployed independently without requiring the consumers to change simultaneously. Each service can be scaled independently depending on load. Make sure to evaluate the costs and benefits of optimizing and converting a monolithic application into one that uses a microservices architecture.

Remote operations can have unpredictable response times and can make your application seem slow. Keep the operations in the user thread to a minimum. Perform backend operations asynchronously. Use event-driven processing where possible. For example, if your application processes images that are uploaded by a user, you can use a Cloud Storage bucket to store the uploaded images. You can then implement Cloud Functions that are triggered whenever a new image is uploaded. The Cloud Functions can process the image and upload the results to a different Cloud Storage location.

Design application components so that they are loosely coupled at runtime. Tightly coupled components can make an application less resilient to failures, spikes in traffic, and changes to services. An intermediate component such as a message queue can be used to implement loose coupling, perform asynchronous processing, and buffer requests in case of spikes in traffic. You can use a Cloud Pub/Sub topic as a message queue. Publishers can publish messages to the topic, and subscribers can subscribe to messages from this topic. In the context of HTTP API payloads, consumers of HTTP APIs should bind loosely with the publishers of the API. In the example, the email service retrieves information about each customer from the customer service.
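The message-queue decoupling described above can be illustrated with an in-process queue standing in for a Pub/Sub topic. This is a minimal sketch of the pattern, not the Pub/Sub client API: the publisher and subscriber share only the queue, never direct references to each other.

```python
import queue
import threading

# An in-process stand-in for a message queue such as a Pub/Sub topic.
topic = queue.Queue()

def publisher(n):
    # Publish n messages; the publisher has no knowledge of who (if
    # anyone) will consume them, or when.
    for i in range(n):
        topic.put({"order_id": i})
    topic.put(None)  # sentinel to signal end of stream

processed = []

def subscriber():
    # Pull messages until the sentinel arrives. A burst of traffic simply
    # accumulates in the queue (buffering) instead of overwhelming the
    # worker, and the worker can be scaled or restarted independently.
    while True:
        msg = topic.get()
        if msg is None:
            break
        processed.append(msg["order_id"])

worker = threading.Thread(target=subscriber)
worker.start()
publisher(5)
worker.join()
```

With a real Pub/Sub topic, the publisher and subscriber would additionally run in separate processes or services, so either side can fail, scale, or be redeployed without the other noticing.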
The customer service returns the customer's name, age, and email address in its payload. To send an email, the email service should only reference the name and email fields in the payload. It should not attempt to bind with all the fields in the payload. This method of loosely binding fields will enable the publisher to evolve the API and add fields to the payload in a backwards-compatible manner.

Implement application components so that they don't store state internally or access a shared state. Accessing a shared state is a common bottleneck for scalability. Design each application component so that it focuses on compute tasks only. This approach enables you to use a worker pattern to add or remove instances of the component for scalability. Application components should start up quickly to enable efficient scaling and shut down gracefully when they receive a termination signal. For example, if your application needs to process streaming data from IoT devices, you can use a Cloud Pub/Sub topic to receive the data. You can then implement Cloud Functions that are triggered whenever a new piece of data comes in. The Cloud Functions can process, transform, and store the data. Alternatively, your application can subscribe to the Pub/Sub topic that receives the streaming data. Multiple instances of your application can spin up, process the messages in the topic, and split the workload. These instances can automatically be shut down when there are very few messages to process. To enable elastic scaling, you can use any compute environment such as Compute Engine with Cloud Load Balancing, Google Kubernetes Engine, or App Engine. With any approach, you don't have to develop code to manage concurrency or scaling. Your application scales automatically depending on the workload.

So you're performing asynchronous operations and your database queries are doing well, but your application still seems a bit slow. What can you do?
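The loose payload binding described earlier (the email service reading only the name and email fields from the customer service's response) can be sketched as follows; the payload shape and field values here are made up for illustration:

```python
# The consumer binds only to the two fields it actually needs, so the
# publisher can add fields to the payload later without breaking it.
def extract_email_fields(payload):
    return {"name": payload["name"], "email": payload["email"]}

# Payload as returned by a hypothetical customer service today.
customer = {"name": "Ada", "age": 36, "email": "ada@example.com"}
fields = extract_email_fields(customer)

# The publisher later evolves the API and adds a field; this consumer
# is unaffected because it never bound to the full payload.
customer_v2 = {**customer, "loyalty_tier": "gold"}
assert extract_email_fields(customer_v2) == fields
```

The same principle applies to typed clients: deserialize only the fields you use, and treat unknown fields as ignorable rather than as errors.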
Caching content can improve application performance and lower network latency. Cache application data that is frequently accessed or that is computationally intensive to calculate each time. When a user requests data, the application component should check the cache first. If the data exists in the cache, meaning the TTL has not expired, the application should return the previously cached data. If the data does not exist in the cache or has expired, the application should retrieve the data from backend data sources and recompute results as needed. The application should also update the cache with the new value. In addition to caching application data in a cache such as Memcached or Redis, you can also use a content delivery network to cache web pages. Cloud CDN can cache load-balanced frontend content that comes from Compute Engine VM instance groups, or static content that is served from Cloud Storage. For more information about using Cloud CDN, see the downloads and resources page.

Implement API gateways to make backend functionality available to consumer applications. Here's an example of an order API deployed on Cloud Endpoints. You can use Cloud Endpoints to develop, deploy, protect, and monitor APIs based on the OpenAPI specification or gRPC. The API for your application can run on backends such as App Engine, GKE, or Compute Engine. If you have legacy applications that cannot be refactored and moved to the Cloud, consider implementing APIs as a facade or adapter layer. Each consumer can then invoke these modern APIs to retrieve information from the backend, instead of implementing functionality to communicate using outdated protocols and disparate interfaces. Here's an example of a payment API deployed on Apigee. Using the Apigee API platform, you can design, secure, analyze, and scale your APIs for legacy applications. For more information about the Apigee API platform, see the downloads and resources page.