Now that you've learned about threads, let's study a key concept in concurrent programming that goes hand in hand with threads: locks. In fact, we'll start by studying a special case of locks called structured locks. Here's the motivation. Suppose we have a very simple method, increment, that takes an object A. In its body, it reads and writes some field of A, like A.count, incrementing it. That's fine for sequential programming, but if we have two threads, T0 and T1, that both want to increment A, we have a problem: the two threads could be scheduled on two different cores and could be reading and writing A.count at the same time, and the result would be unpredictable. That kind of error is called a data race. So, how do we avoid the situation where two threads access the same object at the same time? That's exactly where locks come in. With structured locks, we have a construct like "synchronized", where we pass the object A as a parameter. What that says is that the runtime system guarantees a lock is acquired on object A when a thread enters the synchronized block. The whole idea behind a lock is that it ensures only one thread can hold the lock at a time. So, if T0 were the first to enter synchronized, it would acquire the lock on A before T1, and T1 would have to wait. Then, after T0's work in the synchronized block is done, the lock on A is released, and T1 can go ahead and acquire the lock when doing its increment. What this ensures is that the operations that read and write fields of object A are carried out in what is called mutual exclusion, which means they are executed by only one thread at a time because of the synchronized construct. These are called structured locks because you have the block structure of the synchronized statement, and the acquire and release are implicit.
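The increment example above can be sketched in Java roughly as follows; the class name Counter and the driver code are illustrative choices, not from the lecture:

```java
// Minimal sketch of the increment example with a structured lock.
class Counter {
    int count = 0;
}

class IncrementDemo {
    // Without synchronized, two threads calling this could race on a.count.
    static void increment(Counter a) {
        synchronized (a) {          // acquire the lock on object a (implicit acquire)
            a.count = a.count + 1;  // read and write happen under mutual exclusion
        }                           // lock released implicitly at the end of the block
    }

    public static void main(String[] args) throws InterruptedException {
        Counter a = new Counter();
        Thread t0 = new Thread(() -> { for (int i = 0; i < 100000; i++) increment(a); });
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100000; i++) increment(a); });
        t0.start(); t1.start();
        t0.join(); t1.join();
        System.out.println(a.count);  // 200000 with the lock; unpredictable without it
    }
}
```

Note that the acquire and release never appear as explicit calls; they come for free with the block structure.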
For example, if an exception is thrown in the middle of the synchronized block, the runtime ensures that the lock is still released, so that T1 is not blocked forever. Let's look at another example: a buffer object. Say we have a buffer object X with an array BUF and IN and OUT indices. The typical operations one performs on such a buffer are insert and remove. For insert, say we're inserting an item: we store it at the IN index, assigning it to BUF[IN], and then increment IN so that the next slot is available for the next insert. Then we could have a REMOVE operation. What remove does is read the value at the current OUT index, BUF[OUT], increment OUT so that it is ready to read the next item the next time remove is called, and return the item. Again, you can see how this works in a sequential program: you can call multiple inserts and removes, and that's fine. What we need to think about is what can happen when inserts and removes are called by multiple threads in parallel. We will need some synchronization, so we have to put something like synchronized (X) around the bodies of insert and remove. Now only one thread at a time can be inside either insert or remove: if a thread, say T0, has acquired the lock on X, no other thread can do an insert or a remove until T0 releases the lock at the end of its synchronized block. So the idea of locking can apply to multiple statements protected by the same lock. Now, there's one further subtlety. This was effectively an unbounded buffer: we kept incrementing the IN and OUT indices and assumed we had as much space as we needed in the array. Suppose for some reason we want to ensure that there are never more than some number K of entries in the buffer at any time. How do we do that?
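The buffer with synchronized insert and remove might look like the sketch below; the names buf, in, and out follow the transcript, while the class name and the fixed array size (standing in for "as much space as we need") are my assumptions:

```java
// Sketch of the (effectively unbounded) buffer protected by one lock.
class Buffer {
    final Object[] buf = new Object[1024]; // assume enough space for this sketch
    int in = 0, out = 0;

    void insert(Object item) {
        synchronized (this) {   // lock the buffer object, like "synchronized (X)"
            buf[in] = item;     // store at the IN index
            in++;               // advance IN for the next insert
        }
    }

    Object remove() {
        synchronized (this) {   // same lock, so insert and remove exclude each other
            Object item = buf[out]; // read at the OUT index
            out++;                  // advance OUT for the next remove
            return item;
        }
    }
}
```

Because both methods synchronize on the same object, a thread inside insert blocks out any thread trying to remove, and vice versa.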
We can check every time an insert is performed whether the buffer is full, but we have to do more: we have to do a wait operation, "wait if X is full", at that point, because there isn't space to insert the next item. In the case of remove, there's a symmetric issue: what if the buffer is empty and there's no item to remove? There we would "wait if X is empty". Structured locks take care of this: if you add a call to wait inside a synchronized block, the runtime ensures that the thread waits at that point, releasing the lock while it waits, until a certain condition is satisfied. Then the question is, if a thread is waiting at a wait point, how does it get released? Some other thread has to do a notify. For example, if thread T0 is waiting because the buffer is full and thread T1 does a remove, T1 can do a notify when the remove is complete, and then T0 can go ahead with its insert. In this example, it's symmetric: you would also need a notify at the end of insert, so that if another thread T2 was blocked in remove because the buffer was empty, the insert would notify it. What you've seen is that once you have threads, we have to be really careful about how they access shared resources; that's a key challenge of concurrent programming. We need to avoid data races with proper synchronization, and we have to use wait and notify when threads must wait for certain conditions. But with this, you have the key primitives to get started with concurrent programming: with threads and structured locks, you can do a lot of multicore programming right away.
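A bounded version with wait and notify could be sketched as below. The capacity parameter k, the size counter, the while-loops around wait (the idiomatic guard in Java, since a woken thread must re-check its condition), and the use of notifyAll rather than a single notify are my choices for this sketch, not details from the lecture:

```java
// Sketch of a bounded buffer: at most k entries at any time.
class BoundedBuffer {
    final Object[] buf;
    int in = 0, out = 0, size = 0; // size tracks how many entries are live

    BoundedBuffer(int k) { buf = new Object[k]; }

    synchronized void insert(Object item) throws InterruptedException {
        while (size == buf.length)  // "wait if X is full"
            wait();                 // releases the lock until notified
        buf[in] = item;
        in = (in + 1) % buf.length; // wrap around so the array is reused
        size++;
        notifyAll();                // wake any thread blocked in remove()
    }

    synchronized Object remove() throws InterruptedException {
        while (size == 0)           // "wait if X is empty"
            wait();
        Object item = buf[out];
        out = (out + 1) % buf.length;
        size--;
        notifyAll();                // wake any thread blocked in insert()
        return item;
    }
}
```

Marking the whole method synchronized is equivalent to wrapping its body in synchronized (this), so this is still structured locking: the acquire, release, and the release-on-wait all remain implicit.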