So now we're actually going to type the SQL commands that work with the data in the tables. We start with INSERT. Again, it's a little wordy: INSERT INTO is the keyword, then the name of the table, then a parenthesized list of column names from the schema, the word VALUES, and then a parenthesized list of values. There's a one-to-one correspondence: if there are two column names, there have to be two values. You put a semicolon at the end of each statement, type these into psql, and that inserts records.

The DELETE command, again talking to psql, says DELETE FROM, then users, the name of the table, then a WHERE clause. You're going to see WHERE clauses on a lot of SQL commands, and the reason is that SQL is not a procedural language; I've already mentioned that. There is no real concept of a loop in SQL. Instead, DELETE FROM implies a loop. If you take the WHERE clause off, it means delete all the rows from users. You could imagine it as a loop: for each row in users, do something. But you don't write that, because DELETE has an implied loop around it. So if you don't say WHERE, you just emptied out your entire users table, literally. And you're like, oh wait, I didn't mean to do all of them; all I meant was to delete the rows from users where the email equals this. So DELETE FROM users with a WHERE clause is like a loop plus an if statement: loop through all the rows in users, and if email equals email@example.com, delete that row; otherwise, skip it. Now, it would not be efficient to literally write that loop, reading the whole table just to delete one row.
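A sketch of what these two commands look like, assuming the `users` table with `name` and `email` columns from the earlier schema (the inserted names and addresses are made-up sample values):

```sql
-- Insert rows; the column list and the VALUES list
-- must correspond one-to-one.
INSERT INTO users (name, email) VALUES ('Charles', 'email@example.com');
INSERT INTO users (name, email) VALUES ('Someone', 'firstname.lastname@example.org');

-- Delete only the matching rows. Without the WHERE clause,
-- DELETE FROM users would empty the entire table.
DELETE FROM users WHERE email = 'email@example.com';
```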
This is an abstraction, because that's not how it's done in a database. The database has some complex structure on disk: a little data here, a little data there, and an index over here that points to a particular bit of data. It just goes and marks that row as deleted. It doesn't read tons of data; there might be a million records in users, and it doesn't read them. If we were doing a sequential master-file update, a single delete could literally take four hours, because we'd be copying every record except the deleted one over to a new file. That's not how it works in a relational database. In a relational database, there's some complex structure on disk that you know nothing about, a little mark happens, and that's that; maybe a week later it cleans something up. You don't even know. Did I mention that I love SQL? I do love SQL, because you don't have to write loops and you don't have to write if statements. But you do have to write WHERE clauses, or you delete all your rows.

UPDATE. So create, delete, update, read; I'm doing them out of order, CRUD is the right order, and we're getting through CRUD right now. So UPDATE, then the name of the table, then SET, which is a keyword, then the column and its new value. You can actually have a comma-separated list of those: name equals Charles, email equals something, and so on. And you've got to be careful, because UPDATE, like DELETE, is an implied loop, so you need a WHERE clause. You can literally update every row in a table if you want. It's rare, but you might do it to set a column to null or something like that. So you have a WHERE clause: the implied loop goes through all the rows, and an implied if statement says, when this condition is true, do the update.
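The UPDATE shape, again assuming the lecture's `users` table; the values are illustrative:

```sql
-- Change one column for every row the WHERE clause matches.
UPDATE users SET name = 'Charles' WHERE email = 'email@example.com';

-- SET takes a comma-separated list, so several columns
-- can change in one statement.
UPDATE users
   SET name = 'Charles', email = 'email@example.com'
 WHERE email = 'firstname.lastname@example.org';
```

Without the WHERE clause, the implied loop touches every row, so every row in the table gets the new values.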
Now, the interesting thing is, if there were two records with email equal to firstname.lastname@example.org, it would update both of them. Okay? This doesn't just do one record; it does every record where the condition is true. If I said WHERE the email has an at sign in it, it would update all of them, because they would all match. So it's not just one, and if you run it, it'll report something like one row updated, or 11 rows updated. Again, you're asking for something to be true: when this is done, all the users with an email of email@example.com have their name set to Charles. Please magically do that for us, and don't make us tell you how.

Retrieving is SELECT. Now that you know all of these, you'll probably use SELECT first. After SELECT you give it a list of columns, which could be name comma email, or star, which says give me all of them, then FROM and the table name, then a WHERE clause. SELECT and FROM are keywords. Because again, SELECT has an implicit loop on it that runs over all the rows. So SELECT star FROM users just says get me all the users. Right? SELECT star FROM users WHERE email equals firstname.lastname@example.org: there's an implied loop around the SELECT, and the WHERE clause reduces it. Now, again, it doesn't actually read all the data. It's got clever indexes, it finds things fast, maybe with very little disk access, or maybe it's already in memory. You don't know. We don't care. We love SQL.

Sorting is an important part of these things, and for that you can use ORDER BY. SELECT star FROM users ORDER BY email basically says I would like this ordered by email. It's ascending order by default, and you can add DESC toward the end for descending order. You can also do wildcards.
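The SELECT variations described above, as a sketch against the same assumed `users` table:

```sql
-- All rows, all columns.
SELECT * FROM users;

-- Name a column list instead of star.
SELECT name, email FROM users;

-- The WHERE clause trims the implied loop to matching rows.
SELECT * FROM users WHERE email = 'firstname.lastname@example.org';

-- Ascending by default; DESC flips the order.
SELECT * FROM users ORDER BY email;
SELECT * FROM users ORDER BY email DESC;
```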
This is basically looking for rows where the name has the letter e in it anywhere, because the percent signs function as wildcards. The way I've written this wildcard, unless you put some very special indexes on the column, it will probably force what's called a full table scan, which means the database really has to retrieve the records, because it can't use an index. The other WHERE clauses we've seen could use an index to short-circuit the work. I keep talking about indexes, and I'll cover them properly later, but briefly: go back to that sequential picture, where you've got a terabyte of data. An index is basically a small set of shortcuts into that data. Because the index is smaller than the data, you can jump straight to roughly the right place and then look at just a little bit of data. That's roughly how an index works. The problem with this LIKE is that, unless you make a very special kind of index, there's no good way to take advantage of an index, so it has to do a full table scan: pull every record to check the WHERE clause, reading through the data from beginning to end.

Later, we're going to talk about query optimization and query planning, because at the end of the day Postgres can look at this and say, do you realize that because you asked for that, I've got to do a full table scan? It will tell you that. It can look at some of these other queries and say, oh, that doesn't take a table scan at all; that uses the index. This LIKE query will take a full table scan, but for a different one, like WHERE email equals email@example.com, the query planner will say, I've got an index for that and this is very efficient. So when your application starts to slow down, you can ask the database for help: hey, what are we doing?
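The wildcard query, plus Postgres's EXPLAIN command, which shows the query plan rather than running the query; whether you actually see a sequential scan depends on the indexes and statistics of your particular table:

```sql
-- Percent matches any string, so this finds every name
-- containing the letter e; with no usable index, it forces
-- a full table scan.
SELECT * FROM users WHERE name LIKE '%e%';

-- Ask the query planner how it would execute the query.
EXPLAIN SELECT * FROM users WHERE name LIKE '%e%';
EXPLAIN SELECT * FROM users WHERE email = 'email@example.com';
```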
So the LIKE clause is actually really useful for some of the data-mining things, because it's a wildcard; you can go look around. It's sometimes slower than a WHERE clause with an exact match. Although, for some indexes, and yes, I'm obsessed, if the pattern is a prefix lookup, with the percent sign only at the end, like 'blahblah%', then the database can actually use indexes to improve its performance. Part of why we even use databases in the first place is to go fast, and that's why I'm always talking about performance, performance, performance. Again, go back to the 50 terabytes behind a login that somehow responds in under a quarter of a second.

The LIMIT and OFFSET clauses are how we do paging. If you're going through a long list of things, it shows 1-25, Next, then 26-50, Next. This is well supported in databases, so you don't have to take your server and your database and somehow retrieve all the rows and then only show the first 25. You ask for just the first 25: do your magic thing that you're very good at, using indexes, etc., and give me only 25 rows back, as compared to your application code throwing the rest away. Then you can use OFFSET, which says skip ahead a certain amount and give me the next 25. Now, I'm making these short because all my examples are short, about four or five rows. The only other thing to make sure to mention is that offsets start from row zero, which is like Python lists, so that's not all bad; you just have to remember it. OFFSET 1 is really the second row, not the first row. But these are super-efficient and really necessary for looking through long lists.

Counting is another common thing. Counting is more efficient than retrieving all the rows and then looping through them and counting in your code. You can say, look, I just want to know the number. SELECT COUNT(*) says just count me the rows in users.
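Paging and counting, sketched against the same assumed table; the ORDER BY keeps the pages stable, since without it the row order is not guaranteed:

```sql
-- First page: rows 1-25 (OFFSET counts from zero).
SELECT * FROM users ORDER BY email LIMIT 25 OFFSET 0;

-- Second page: rows 26-50.
SELECT * FROM users ORDER BY email LIMIT 25 OFFSET 25;

-- Count rows in the database instead of shipping them all
-- back to the application and counting there.
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM users WHERE email = 'email@example.com';
```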
Now, the interesting thing is, it might actually read all of the users to do this, but chances are good that it already knows, somewhere in its own internal structures, how many rows there are, so it just tells you. The WHERE clause might actually require a little bit of data to be scanned, but that's not so critical. So you can count rows, which is a lot cheaper than actually retrieving them and sending them back to you.

So this is the stuff I've shown you so far. You're like, I went to college and I'm taking a database class, and that's it? The answer is: yep, sort of. This is the core of SQL, and the core of SQL is beautiful. It's simple and really easy to understand; in a sense it's easier than learning how to write programs, far easier. People look at SQL and they're scared of it. No, you shouldn't be scared of it, because it's just like this: a little syntax that allows you to reach in and grab things, poke things, pull things out, and delete them. It's a really beautiful abstraction, which goes back to the 1970s, when they defined SQL. They said, look, let's give the programmers something simple to use that hides the complexity. And that's when this abstraction is most powerful. Now, we have not yet begun to do complex things. We've only done a single table, and it gets a little trickier and more interesting, I think fascinating, when we start exploiting relationships between two tables, because that's really where things work the best. So up next, we're going to talk about datatypes in Postgres, the different kinds of things that you can express when you're building a schema for your table.