Hello and welcome to Secure JavaScript Programming. I am Vladimir [inaudible]. In this video, we will be talking about security best practices when writing serverless Node.js applications, serverless JavaScript applications. There are a lot of code-level best practices linked to vulnerabilities, but we will see those in another video, which is dedicated to Node.js security. In this one, we will focus on things that are specific to serverless and cloud deployments.

There are actually two very important things you need to do. First, even though all the cloud deployment platforms and cloud function platforms offer an online IDE in the browser that you can use to write your code, never use it. Always keep the project in a version control system, whether it's Git, Mercurial, or whatever you have to use, but never keep your only copy in unsafe storage inside a browser editor on your cloud provider's side. Second, deploy through the CLI. That's not the whole story; there are two halves to it. Use infrastructure as code, meaning build your cloud setup as a code base, with tools like Terraform, and deploy everything through CLIs. Never drag and drop files or zips into a web interface. That's actually very important, because those two first items are there to save you if something breaks, if something disappears, or if a hacker manages to get into your cloud instance. Thanks to them, you will be able to rebuild the system quickly, rather than losing everything if someone gets access to your cloud infrastructure and breaks it all. That's why everything must be stored in a VCS and backed up.

The next item regarding production is to keep secrets in environment variables. Most of the time that's okay. Or, if you can, and it may get expensive, use the cloud provider's secret store. In our case, we won't use the cloud provider's secret store, because they are expensive, but they exist and they are services to manage the secrets in your applications. Last but not least, regarding the lifecycle of the project: test locally and in CI. Make sure that your deploy flow refuses to run if there hasn't been a test session. And when you test, use code coverage and things like that.

Let's build a simple function, first of all. We use a module named fastest-levenshtein. What we want to do is deploy a Cloud Function on Google Cloud that will compute the Levenshtein distance between two words. Let's get the first word: word1 = req.query.word1. We are going to call it with a URL like GET /?word1=hello&word2=bar. We do the same for word2.

One thing that's often overlooked in Node.js programming is data validation. The first thing we need to do here is make sure the types of the values we receive are the ones we expect. You can use libraries like a JSON Schema validator or joi. Joi is my favorite library for that, but right now it would be overkill to use joi just to check the types of two strings. Be aware, though, that data validation can grow in complexity very quickly, and in that case you might want to reach for it.

Next item: what do we do when there's an error? We set res.status(400), and then we do return res.end('Arguments must be strings'). I always return when I call res.end, to make sure there is no line of code that can run afterwards, even if I could use an else instead. It's called defensive programming, and it's a genuine best practice. In the success case, we res.end the distance between word1 and word2. That sounds good.
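Putting those pieces together, here is a minimal sketch of what the finished handler might look like. The file name, the exported name distance, and the import alias are my assumptions, and the String() conversion is the fix the video only arrives at after the debugging session later on:

```js
// index.js: a minimal sketch of the handler described above.
// fastest-levenshtein is imported under an alias to avoid a name
// clash with the exported handler.
const { distance: levenshtein } = require('fastest-levenshtein');

// HTTP Cloud Function, called as GET /?word1=hello&word2=bar
exports.distance = (req, res) => {
  const word1 = req.query.word1;
  const word2 = req.query.word2;

  // Data validation: make sure the types are the ones we expect.
  if (typeof word1 !== 'string' || typeof word2 !== 'string') {
    res.status(400);
    // Always return on res.end() so nothing can run afterwards
    // (defensive programming).
    return res.end('Arguments must be strings');
  }

  // levenshtein() returns a number, but res.end() expects a string,
  // hence String(): this conversion is the fix for the bug we debug
  // later in the video.
  return res.end(String(levenshtein(word1, word2)));
};
```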
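And since joi came up as my favorite validation library, here is roughly what the same check could look like with it. This is the overkill version, purely illustrative, with a hypothetical schema:

```js
// A hypothetical joi variant of the same validation. Overkill for
// two strings, but it scales much better as the input shape grows.
const Joi = require('joi');

const schema = Joi.object({
  word1: Joi.string().required(),
  word2: Joi.string().required(),
}).unknown(true); // tolerate extra query parameters

exports.distance = (req, res) => {
  const { error, value } = schema.validate(req.query);
  if (error) {
    res.status(400);
    return res.end(error.message);
  }
  // ...compute the distance with value.word1 and value.word2 as before
};
```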
Now, how do we deploy that? If you remember, I told you that we don't want to deploy through the interface. We could totally go into Google Cloud Functions, create a function, and copy-paste everything, but we don't want to do that, because we want to be able to rebuild everything. If you use the Serverless Framework, you get pre-made scripts for that. Otherwise, I recommend that you keep your scripts inside the repository, as part of its lifecycle. That's because sometimes you will need to update them, and you want developers to know about that.

However, in certain situations, for instance when a project depends on whether you target staging or production, you will need to make part of the arguments dynamic. npm commands actually accept arguments. To do that, you need to use the double dash. I can't find the documentation right now; let's quickly check Stack Overflow. Yeah, that's the one: the double dash. You do npm run, then the double dash, and then the parameters, which get appended to your script. In our case, we would probably want the project not to be a fixed parameter, but you know me, I'm usually very lazy about a lot of things.

Now let me clear the screen, so I don't spoil what I already did, and just run npm run deploy. Another advantage of having the deployment behind an npm script is that it's an abstraction in front of your deployment: if you change cloud providers, npm run deploy will still work. Also, you may notice that there is no authentication step in this. That's because the Google Cloud CLI already knows where to find my tokens; they are on my disk, but I won't show them to you. When you get to CI, you will have to do secret management and provide them as environment variables if you want to deploy from CI.

I've also created a second script called describe. If we run it now, it tells us the deploy is in progress, but it also gives us the URL of the function. Here, we can see that it already works. If we run describe again, it tells us the function is active, so the first command is done.
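The scripts themselves never appear on screen, so here is a hedged reconstruction of what the deploy and describe npm scripts could look like with the gcloud CLI. The function name, runtime, and flags are assumptions on my part:

```json
{
  "name": "levenshtein-function",
  "scripts": {
    "deploy": "gcloud functions deploy distance --runtime=nodejs18 --trigger-http --allow-unauthenticated",
    "describe": "gcloud functions describe distance"
  },
  "dependencies": {
    "fastest-levenshtein": "^1.0.0"
  }
}
```

With npm's double-dash separator, extra flags reach the underlying command, for example npm run deploy -- --project=my-staging-project (the project ID here is made up). And rather than ever committing secrets, they can be passed at deploy time with gcloud's --set-env-vars KEY=VALUE flag.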
Opening the function URL, we've got our error. That's totally fair, because we did not pass word1 and word2, so let me add word1 and word2. And that's not good: that's an error, we get a 500 here. If I were good at acting, I would pretend to be terrified, but that's actually a bug we planted in the code, and it's a typical Node.js issue. Since we don't understand what the bug is yet, we can refresh the interface and click on the function that was created. Damn it, I have spent a lot of money on this. What we want to check is the logs, because we don't know why it failed. Once the logs load, we realize that I made a type error. Here, we actually have a readable stack trace: we see that the chunk argument must be a string. We don't get all the details, but we see that it received a number. The right way to debug this is to replay the inputs, and as you can see, they are not in the logs. You will have to log inputs yourself if you want to be able to replay them, provided, of course, that they don't contain sensitive data. In our case, the mistake is that res.end actually expects a string, and I made the terrible mistake of providing a number. Now we can redeploy the function. As you can see, the previous version stays active until the new one replaces it.

We just need to wait; hopefully soon we will be able to compute our Levenshtein distance. Let me check my notes. While it deploys, I will show you where to put the variables. As you can see, Google supports environment variables for both build time and runtime, but the console doesn't really invite you to put them there: either you go into edit and dig for the right panel, or you deploy them through CLI arguments. And you never want to commit your environment variables. The code must be available and committed in a VCS, but the environment variables must never be checked in. It's actually okay to have dedicated jobs that fetch the environment variables from a secret store, or, if you don't have the resources or it adds too much complexity, to just set them manually. At scale, use a secret store if you can.

Now we can see it has deployed, with a new version number. If we refresh this page, well, it worked. All of this has been demoed on Google Cloud Functions, but it translates really well to AWS Lambda. At this point, the only difference between AWS Lambda and Google Cloud is about testing. If you want to test on Google Cloud, since the two handler objects are supposed to follow the Express/Node core request and response interfaces, you can actually just install any test runner. I really like tape, to be honest. I have to check its documentation most of the time, but it's the simplest and most scalable test runner for JavaScript that I know of.

Let's write the test. I want the tests to be pretty good, of course. And of course, I have sticky fingers: const { distance } = require('./index'). We don't want timers, we don't want promises. What we want is t.equal on the distance between foo and fooo being 1, and then we declare the number of assertions. We run this file and it complains, because we actually called the handler with the wrong arguments. What we want to do is mock the request object: req = { query: ... }. That's what happens when you don't think. Word1 is foo, word2 is fooo. Now it's also important to create a response object with an end function that does the t.equal on the response. Instead of the raw call, we just do distance(req, res), and it should work, and we have local tests that actually run. But I failed on something, and I want to know why. Yes, of course. You know why? It's again a type error, and this time I was not aware of it: we respond with a string, not with a number, so the test must expect the string '1'. A cleaned-up sketch of this test is included at the end of this transcript.

One of the main issues with AWS Lambda is that its handlers don't comply with the Express interface. There, you will have to use the AWS Lambda Docker images; they actually provide Docker images for you to test your functions locally if you don't want to run your unit tests against the cloud. A rough example is also included at the end.

That's it for this video about serverless best practices. In the next video, we will go in depth into a few aspects of Node secure coding: the must-knows of secure programming with JavaScript on the server side, even when it's serverless. Thanks so much for watching this video. I hope you enjoyed it, and see you soon in another video.
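For reference, here is the cleaned-up sketch of the tape test assembled in the video, assuming the handler is exported as distance from index.js as in the earlier sketch; the mocked req and res are plain objects:

```js
// test.js: local unit test with tape, mocking the Express-style
// req/res objects that Google Cloud Functions hands to the handler.
const test = require('tape');
const { distance } = require('./index');

test('levenshtein distance of foo and fooo is 1', (t) => {
  t.plan(1); // declare the expected number of assertions

  const req = { query: { word1: 'foo', word2: 'fooo' } };
  const res = {
    // The handler responds with a string, not a number; that's the
    // type error this very test caught in the video.
    end: (body) => t.equal(body, '1'),
  };

  distance(req, res);
});
```

Running it is just node test.js; tape needs no dedicated runner binary.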
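And since the AWS Lambda Docker images only come up in passing, here is a rough sketch of what local testing with them could look like. The image tag, the handler name, and the event shape are all assumptions, based on the official Lambda base images and their bundled runtime interface emulator:

```sh
# Run the function locally in the official Lambda Node.js base image
# (the image bundles the runtime interface emulator).
docker run -p 9000:8080 -v "$PWD":/var/task \
  public.ecr.aws/lambda/nodejs:18 index.handler

# Invoke it with a test event from another terminal.
curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"queryStringParameters": {"word1": "foo", "word2": "fooo"}}'
```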