We have a really interesting sorting algorithm in quicksort, where we use

a pivot and recursively sort different parts of the array after

partitioning it into the elements smaller than the pivot and those larger than the pivot.

And so, what we can do now is recognize that sorting was useful for

solving our new problem.

And so maybe we can use a sorting algorithm and modify it in some way to

make a new algorithm that's useful for the current problem.

The problem of finding the k'th smallest element.

So what if we picked a pivot in our array of elements and

thought through what happens when we don't sort, but instead

partition the array into the elements that are smaller than the pivot and

those that are bigger than the pivot?

Now, we're not looking for a fully sorted array at the end of this procedure;

what we want is the k'th smallest element.

And what's really useful here is that if we partition the array into those

that are smaller than the pivot and those bigger than the pivot,

then if we only have, say, three elements that are smaller than the pivot,

the third smallest element overall is going to be the maximum of those three

elements, because the pivot is going to be bigger, and all of the other elements

in the array are going to be bigger than those bottom three elements.

And so they're not going to be the third smallest. And the beauty of this

approach is going to be that we only need to recursively look at roughly half

of the elements each time, if we're lucky with our choice of pivot.
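To make that observation concrete, here is a tiny Python check (the array values are made up purely for illustration):

```python
# With pivot 50, exactly three elements of this example array are
# smaller than the pivot, so the 3rd smallest element overall must be
# the maximum of those three. (Illustrative values, distinct elements.)
arr = [72, 12, 50, 93, 7, 31, 88]
pivot = 50

smaller = [x for x in arr if x < pivot]   # [12, 7, 31]
assert len(smaller) == 3
assert max(smaller) == sorted(arr)[2]     # 3rd smallest overall

print(max(smaller))  # 31
```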

And so what we can do is at each stage pick some random element of the array.

Partition the array into those elements that are smaller than the pivot,

those elements that are bigger than the pivot.

And then look at the size of the set of elements that are smaller than the pivot,

and compare that size to k.

Because the size of that set is going to tell us whether the k'th smallest element

belongs in that set of elements that are smaller than the pivot, or

maybe that k'th smallest element is the pivot.

Or maybe that k'th smallest element is bigger than the pivot and so

we need to look in the rest of the elements in the array.
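That strategy can be sketched in Python along the following lines. The function names, the use of a random pivot, and the 1-based convention for k are assumptions made for this sketch, not details fixed by the discussion:

```python
import random

def kth_smallest(arr, k):
    """Return the k'th smallest element of arr (k is 1-based).

    A sketch of the quickselect-style strategy described above;
    names and conventions here are illustrative assumptions.
    """
    # Input validation in the public method, as discussed.
    if not arr or not 1 <= k <= len(arr):
        raise ValueError("k must be between 1 and len(arr)")
    return _select(arr, k)

def _select(arr, k):
    # Private recursive helper: assumes input was already validated.
    pivot = random.choice(arr)                # pick a random pivot
    smaller = [x for x in arr if x < pivot]   # elements below the pivot
    bigger = [x for x in arr if x > pivot]    # elements above the pivot
    if k <= len(smaller):
        # The answer lives among the smaller elements.
        return _select(smaller, k)
    if k > len(arr) - len(bigger):
        # The answer lives among the bigger elements; shift k past
        # everything that is less than or equal to the pivot.
        return _select(bigger, k - (len(arr) - len(bigger)))
    # Otherwise the pivot itself is the k'th smallest element.
    return pivot

print(kth_smallest([72, 12, 50, 93, 7, 31, 88], 3))  # 31
```

Each call discards the pivot and one of the two sides, so the subproblem strictly shrinks and the recursion terminates.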

And so we can code up this strategy and

notice that this is going to be a recursive method.

That's great: by coming up with a recursive solution

to our problem, we're demonstrating yet

another skill in programming and algorithmic development in our interview.

We're demonstrating our breadth of knowledge, and

we still want to be careful about the code that we write.

We're doing the input validation.

We have a helper method, and so we're demonstrating that we know the difference

between the private helper methods and the public method call that we have.

And then as we go through this recursive function development,

it's going to be very important to test the code,

which is why we had that base example that we can keep tracing through.
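That testing step might look something like this sketch: a small harness that checks any candidate selection function against a sort-based baseline (the names and sizes here are illustrative, not from the lecture):

```python
import random

def check_selection(select_fn, trials=100):
    # Compare a candidate k'th-smallest function against sorting,
    # on small arrays of distinct values (matching the assumption
    # of no repeated elements).
    for _ in range(trials):
        arr = random.sample(range(1000), random.randint(1, 20))
        k = random.randint(1, len(arr))
        expected = sorted(arr)[k - 1]
        assert select_fn(list(arr), k) == expected, (arr, k)

# Sanity check the harness itself with the trivial sort-based baseline.
check_selection(lambda arr, k: sorted(arr)[k - 1])
print("all checks passed")
```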

Now as we develop this new strategy,

we still want to think about performance, and the performance is going to hinge on

what happens when we compare to the pivot at each recursive call.

We hope that we're going to divide our array of elements into

the smaller-thans and the bigger-thans, and those are hopefully going to be

roughly the same size as one another, and so we get to reduce our

problem size exponentially by cutting the array size in half each time.

And so what we're hoping for, on average at least, is linear time.

And a careful analysis of the recursive performance

of that function would really get us to its expected O(n) time.
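The lucky-pivot intuition behind that expected bound can be written as a back-of-envelope recurrence (a sketch assuming the pivot splits the array roughly in half each time, with cn covering the partitioning work at each level):

```latex
T(n) \approx T(n/2) + cn
\quad\Longrightarrow\quad
T(n) \le cn + \frac{cn}{2} + \frac{cn}{4} + \cdots \le 2cn = O(n)
```

Unlike quicksort, we recurse into only one side, so the geometric series sums to a constant times n rather than n log n.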

Now in an interview situation this might seem a little daunting to

come up with such an elaborate algorithm and to do its performance analysis.

But here's the punchline.

What we want to make sure to do in the interview is always keep going.

We never want to be content with the solution that we have at hand.

And so when we do have a solution, it's really important that we think about,

does it match the assumptions that we made at the beginning?

How would we change it if we had different assumptions? If, for example, we allowed

repeated elements in the array, what would we have to do differently in our methods?

We want to consider performance every time we come up with a new problem

solving strategy.

Have we made progress or maybe have we gone backwards?

Are we coming up with better solutions or

are we coming up with just different solutions?

And then are there tradeoffs that we can consider there for time versus memory?

And keep going, keep going.

Think of ways that we can bring in our tools.

What we want to do is through our practice,

have a wide array of tools that we can apply to new problems.

It may well happen that you've done so much practice that

in the interview situation you're asked a question that you've already solved.

Now, it's really important in this situation to mention that you've actually

worked through that solution.

And then the interviewer might ask you to still write out the solution, but then

they might ask you to go further, like we did, with those different algorithms and

take different assumptions or take different paths so

you can demonstrate your novel problem solving skills.

And when we're approaching a problem that's very new,

what's useful, when we've done all that practice, is that we can go

through a mental checklist of all the useful data structures that we have and

all of the known algorithms that we've worked through.

And that checklist really helps us solve the new problem using what we've done before.