We're here with Bill Wake, a veteran of 15-plus years and a major thought leader in this space. Thanks for joining us, Bill.

>> Thanks, Alex.

>> Let's talk about the notion of acceptance and how we know software is really done. There's a traditional view of acceptance testing, just making sure that stuff isn't broken or out of whack with our stories, and then there are these larger questions of how we're iterating toward a valuable result. What are the right discussions about acceptance testing as we go through the process of building a really great product?

>> Yeah, so I think there really are different levels going on. The top one is about the outcomes we're trying to achieve and how we're going to measure those, what success means for the program as a whole, versus "here's a feature, it's supposed to work this way, does it really work that way? If it does, it's good," and so on. So I hope we can get teams to focus on how they're going to understand whether they succeeded or not. As an example, we had a company doing data brokering: they would take all kinds of information coming in, merge the records that were related, and sell the resulting package. For them, the outcome could be measured as a percentage: of the information they were after, what percentage have we actually made available? And if adding a new feed of information doesn't really increase that, then even though you think you want that feed, there's no reason to do it. It has to tie back to the overall success of things. Sometimes that success is pretty easy to understand; other times we really have to spend some effort and money to measure and understand it. And if we don't have that understanding, we need to get it somewhere, because otherwise we're just putting things out there without knowing whether they made a difference. At the more functional level of acceptance, things tend to be more self-contained: we're defining a feature, it's got these inputs and these outputs and these effects, are they doing what we want, can we automate that, and so on.

>> And automating the functional tests, or system tests, whatever you're going to call them, is a pretty big job. It's one of several places where successful interdisciplinary collaboration determines whether it's a good investment or an investment that doesn't look like it's paying off. How do some of the practices you work with teams on come into play there, and what are some patterns you see of that whole thing coming together versus not being a very good investment?

>> Yeah, it definitely is the case that if we're talking about functional-level tests and system-level tests, acceptance tests and so on, it can be pretty expensive to automate a lot of those. And even worse than being expensive to automate, they get hard to change. If the system changes or the user interface changes, all of a sudden all of these tests are broken, not because the system is broken but because we just changed how we presented things. So there definitely is this tension in automating those. For us, I work for Industrial Logic.
At our company we produce e-learning, and we constantly have this balance: if you can test something at the unit level, it's far better to test it down there. If we can't test it enough there, then we test at a higher level, and we know that test is going to be probably five or ten times as expensive to create and maintain, and almost always five to ten or more times slower to run. So it's an expensive test all around, and if we find that one of those tests just seems to break for no reason all the time, we'll take it out and find ways to test those things at a different level, because you've got to pay attention to that balance. The counter case is teams that have invested a lot; it's sort of a sunk-cost problem. They've invested a lot in creating those tests and they want to preserve that investment, but the tests may not be paying off, and they may be slowing them down quite a bit. And they may not realize there's a lot of duplication: if the team is really doing the unit testing and so on, a lot of the functionality those higher-level tests cover is already covered, so they're creating extra tests that don't add any extra assurance or coverage. Moving away from that means realizing there are different levels we can test through. It may not be that we have to test every aspect of the system through the web interface; maybe there's an API layer or a SOAP layer or some kind of intermediate layer that we can test a lot of things through, and that may be a lot cheaper than the higher-level tests.

>> It's definitely a classic trap for teams. A lot of the classic user-testing tools let you just run and play, use the system, and capture what happens. Those often are very UI-focused and very susceptible to breaking, so having that range of tests and understanding which test fits where really helps.

>> And are there particular patterns, practices, or habits that you think help teams delineate the right place to test the right thing, avoid duplication, and create focus in the right places?

>> Yeah, the big thing is collaboration, so if I can make sure that my testers and programmers are collaborating on things early on, that helps a lot. On one of the teams I worked with, when a programmer sat down to work on something they'd pull a QA person over and say, "Here's the kind of test I'm going to be writing for this thing. What do you think?" And the QA person might say, "That sounds great. I won't even have to create this test over here because you've done that, but you haven't considered this and this and this. Maybe your tests ought to cover those too. I can cover these two easily, but I can't really do that one; can you do it?" Then you get that trade-off between them to hit the balance. If you just leave them working independently, or even the worst case I see, teams where the developers work on this branch and the testers can't test anything until the next branch, it's much more sequential, and it's hard to get that shared sense of what we're trying to do and get the benefit together. So moving toward people working together can really help, and then periodically just paying attention to what's working and what's not, and which tests seem to break for no reason, and all those things.
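To make the level-of-test trade-off Bill describes concrete, here is a minimal sketch in Python, assuming pytest as the test runner. The MergeService class and its merge behaviour are hypothetical, loosely echoing the data-brokering example above, and are not part of the interview; the point is only that a check written against an intermediate layer is fast and self-contained, while the equivalent browser-driven check would be far slower and coupled to the UI.

# Hypothetical example: the same "related records get merged" behaviour checked
# at an intermediate (service/API) layer instead of through the web interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    key: str            # identity used to decide which records are related
    fields: tuple       # the pieces of information this record contributes


class MergeService:
    """Intermediate layer: pure merge logic, no web UI or browser involved."""

    def merge(self, records):
        # Group related records by key and combine their fields.
        merged = {}
        for record in records:
            merged.setdefault(record.key, set()).update(record.fields)
        return {key: frozenset(values) for key, values in merged.items()}


def test_related_records_are_merged():
    # Runs in milliseconds and does not break when the page layout changes.
    service = MergeService()
    result = service.merge([
        Record("acct-1", ("name",)),
        Record("acct-1", ("address",)),
        Record("acct-2", ("name",)),
    ])
    assert result["acct-1"] == frozenset({"name", "address"})
    assert result["acct-2"] == frozenset({"name"})

# The equivalent record-and-playback or browser-driven test would start a
# server, script clicks, and assert on rendered HTML: typically an order of
# magnitude slower, and broken by presentation changes the logic never made.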
>> And why do you see that pattern happening, where we can't test until the very end or the next cycle? Is that a work-in-progress management problem, a system-construction problem, or where do you think that comes from?

>> I think mostly it tends to be a process problem, at least for a fair number of groups. I visited one a few weeks ago, actually, that had this mindset that QA should be independent. They sort of think, "we're acting like an independent organization," but the reality is they're not really that independent. There are certainly places where a truly independent group might be significant, but for the kind of business systems we're looking at, I don't think the value is really there. And a lot of times the process has been structured on the assumption, we say "mini waterfall" sometimes, that people will do some specification, and design, and programming, and testing, and that it'll happen more or less in that order. It doesn't have to. I can get people more involved together, and let the team know that the story's done only when the development and the testing, and really, hopefully, the deployment and everything, is done with it. My friend Tim Ottinger calls it plate emptying: we want to get this stuff off our plates and out of our hands. So for the programmer, "finish our stuff, throw it over to the tester and let them do it then" isn't the same kind of mindset as saying, "we're in this together, how can we both work together to get this thing finished?" It's certainly a shift in attitude for the development and QA organizations to work that way. People want to think that if they wait until it's all done, it'll be faster for them, but really, the influence you can have along the way makes a lot more difference.

>> I don't want to put words in your mouth, but is siloed QA just pretty much a legacy notion at this point? Can we almost categorically say that interdisciplinary teams perform better, or are there circumstances where that's not true?

>> I'm sure there are circumstances where you'd say I really do need some silos, maybe life-critical things or something like that, things critical enough that we really do have to have that independence. But a lot of times the success I see comes much more from working together on things, avoiding the misunderstandings in the first place, and guiding things as they go. That seems to have a lot more benefit from what I see. Some teams, you know, feel like they've put QA out of business even; there are a number of teams out there running with no real separate QA person, where the quality they're producing is high enough that they can live without that. Lots of other teams still have QA people, but they're working in that shared way to make it go.

>> Because while testing is having its own renaissance in a way, with DevOps and automation, whether it should exist as a separate department is a different question. Would you agree with that?

>> Yeah, you know, if I were setting up a company I would not make separate departments like that; I'd put them together. Maybe there's some reason you could convince me that that's not quite the best way to go, but I tend to prefer to see teams that really are whole teams. Because I want them delivering on a short cycle, they need to be able to work together pretty closely. It's not like I'm going to finish development and then give you three months to test it.
We're doing a page here: we're going to develop it this morning and ship it this afternoon. And if it's going to need QA testing, then we've got to get that into that cycle and get it going as well. That's not going to happen if I'm putting the QA people in a separate building and they're just talking to each other once a week or something. It works much better if they're on a shared team.

>> That's some great advice on the practicalities of working on tests and creating successful teams together. Thanks, Bill.