All right, good morning team. >> Morning. >> Nice to see you. >> Thank you for being here today. Nice to see you as well. So our design spreads with the techs have given us a good core narrative to work with. I think we're tracking pretty well, and I think we're in great shape for the implementation sprint we've got coming up. So, two outputs for the sprint planning meeting today. Output number one is just our sprint goal, which is a working version of the parts availability epic. And then output number two is going to be a prioritized backlog. So we're going to be using Scrum, and as far as our discretionary items go, for sprint duration we'll be using one-week iterations. I'll be taking the product owner role, and then Sri, you'll be taking the Scrum master role, so is everybody cool with that? >> Yep. >> Yeah. >> Okay, great. Now, here's the Trello board; I pulled what I think are the right story candidates. Sri, in terms of time frame, do you have estimates yet? >> Yeah, I think I'm going to do the stories from an implementation standpoint, and I've got some estimates already. For the four child stories that we've got so far, I'm going to say they're all on the order of hours, so they'll take roughly a day each. >> Okay. >> And then, I was just reading about Scrum a little bit. There's this idea of story points. Is that how we do our estimates? Is the hours thing a story point somehow? >> We could've done it that way, but we're actually using a minutes, hours, days, months scale. We could have been more detailed and used numeric story points, which is what you're talking about, but we don't want to use number values because they might be confusing for people who aren't on our team, or for upper management. >> Yeah, probably a good idea. Is that, I mean, I'm not really an expert on these story points, is that sufficient? What do we need these things for exactly? >> Yeah, there are really two purposes. Number one, to prioritize value relative to cost. And Sri did the estimates last week, so we're ahead of the game there. I've taken a look and I feel comfortable with them. And then the second purpose is to establish backlog size. Sri, how do you feel about the four stories, is that too few, too many? >> For the stories that we've got, I'd say we can get it done in this one-week sprint. >> Okay. >> But we'll really have to see. >> Okay, so if we can't get them all done, could we do a partial version, even if some components are only half-baked? I know that Marla would love to see something by the end of the week. >> I'm going to have to say no to that, just because the goal of the iterative process is to produce a potentially shippable, working piece of software, and we want to maintain that basic principle. >> Yeah, no, that's a good point, forget I even mentioned that. >> But what about the priority on these stories that we have? >> Yeah, that's a good question. So in terms of sequence, ideally we'd get through the entire arc of the epic in a week, and I think if we did all of the search functions we might be running the risk of that not happening. So I think we ought to do the first story on search, where the tech has the part number, and then go ahead and skip to the price and availability stories; that way we'll make it through the arc of the epic, as I mentioned. And then we could circle back to the other two search functions if we have time over the course of the week. >> Okay, that sounds good. >> Have you had a chance to break down the tasks yet, or is that something you'd like to hold off on?
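For reference, a minimal sketch of the arc described above: searching by part number and then reporting price and availability. The `Part` and `PartsCatalog` names, fields, and the in-memory catalog are illustrative assumptions, not the team's actual design.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Part:
    """Illustrative part record; field names are assumptions."""
    part_number: str
    description: str
    price: float          # assumed unit price in dollars
    quantity_on_hand: int

    @property
    def available(self) -> bool:
        return self.quantity_on_hand > 0


class PartsCatalog:
    """Toy in-memory stand-in for whatever the real parts data source is."""

    def __init__(self, parts: list[Part]):
        # Normalize part numbers so lookups are case- and whitespace-insensitive.
        self._by_number = {p.part_number.strip().upper(): p for p in parts}

    def find_by_part_number(self, part_number: str) -> Optional[Part]:
        """First story: the tech already has the part number in hand."""
        return self._by_number.get(part_number.strip().upper())


# Covers the epic's arc: search by part number, then report price and availability.
catalog = PartsCatalog([Part("AX-100", "Alternator", 129.99, 3)])
part = catalog.find_by_part_number("ax-100")
if part is not None:
    print(part.price, part.available)
```

Keeping the lookup behind a single `find_by_part_number` entry point is the sort of choice that would let the other two search stories slot in later without touching the price and availability pieces.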
>> I actually think we should hold off on that. Let's just break down the tasks as we go. I mean, I do want to take a look at some of the more detailed test cases right now, just so I know that I'm on the right track before I head off into coding. >> And the test cases are how we know something's done, is that right? >> Yeah, the test cases are supposed to let us know if it's working as expected, but we can't necessarily define done as one thing. >> Well, what are the other things we would define it as? >> Well, it really depends on whether it's readily usable. And even if it is usable, do we know that the user will actually want to use it, are they motivated to use it? And we also want to know whether it drives better outcomes for our company. The other kinds of done will basically be tested after we have the full working software, so that's after the iterations. >> Mm-hm, okay. >> So Sri, now that you mention tracking whether users are utilizing the functionality that we're adding, is the code going to have that kind of tracking capability? >> Yeah. There are some logs that are generated about the user input. I'm going to have to write some manual scripts, though, to collect that data. But if we find that we're using it a lot more frequently, then I can make it a lot nicer to get at that. >> Okay, great. Well then, let's get to those test cases. >> Yeah. >> Sounds good. >> Cool.
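Since the next step is walking through those test cases, here is one hedged example of what a "working as expected" check for the part-number search story might look like. It assumes pytest and reuses the illustrative `Part`/`PartsCatalog` sketch above; neither is confirmed by the discussion.

```python
import pytest  # assumed test runner; the team's actual tooling isn't stated

# Illustrative module from the sketch above, not the team's real code.
from parts_catalog import Part, PartsCatalog


def make_catalog() -> PartsCatalog:
    return PartsCatalog([
        Part("AX-100", "Alternator", 129.99, 3),
        Part("BR-220", "Brake rotor", 54.50, 0),
    ])


def test_known_part_number_returns_price_and_availability():
    part = make_catalog().find_by_part_number("AX-100")
    assert part is not None
    assert part.price == pytest.approx(129.99)
    assert part.available is True


def test_out_of_stock_part_is_reported_unavailable():
    part = make_catalog().find_by_part_number("BR-220")
    assert part is not None
    assert part.available is False


def test_unknown_part_number_returns_none():
    assert make_catalog().find_by_part_number("ZZ-999") is None
```

These only cover the "working as expected" sense of done; the usability, motivation, and business-outcome checks mentioned above would be evaluated after the iterations, not by unit tests.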