Hi, I'm Trish Greenhalgh. I'm Professor of Primary Care Health Sciences at the University of Oxford. For much of my career, I was a GP. I also have a degree in Social and Political Sciences and an MBA, so what I do now is research at the interface between medicine, social sciences, and business and management. I'm particularly interested in IT programs: in other words, programs in which someone is using a technology, or perhaps several technologies, to try and effect change in a complex system. One of the things I did a few years ago was contribute to the evaluation of the National Programme for IT. I think we have to unlearn quite a lot. In medicine we are often told that everything is rational and logical, but when you're working in a complex system, the thought processes and patterns of behavior that work in a lot of places just don't work. For example, it's very hard to predict what's going to happen. In particular, it's very hard to predict, when you do something in your part of the system, what the knock-on effects are going to be somewhere else in the system. So we have to unlearn those logical, rational approaches to change and implementation, and start to learn how to do this in a complex system. The only way you're going to find out how a complex system works is to observe it. The way to make progress is to gather data, think about the data, look at it, reflect, make a change, and then gather some more data. You can't plan the whole thing logically in advance over months or years. It is much more iterative, flying by the [inaudible], that kind of thing. It feels not very logical, but that's because you're working in a complex system. One of the key things that you need to do is identify what data you need and keep gathering that data. Keep thinking about what the data means, gather a bit more data, and adjust your course of action accordingly.
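The cycle described above (gather data, reflect on what it means, adjust course, then gather more data) can be pictured as a small feedback loop. This is a minimal sketch, not anything from the talk itself: the function names are illustrative assumptions, and in practice "gather" might mean interviews, usage logs, or audit data, while "adjust" would be a change to the implementation plan.

```python
def run_adaptive_cycle(gather, reflect, adjust, iterations):
    """Repeat gather -> reflect -> adjust, carrying learning forward."""
    plan, observations = [], []
    for _ in range(iterations):
        data = gather()                  # observe the system as it is now
        observations.append(data)
        insight = reflect(observations)  # what does the new data mean?
        plan = adjust(plan, insight)     # change course accordingly
    return plan, observations

# Toy usage: three rounds of "data", with a reflect step that just
# reports whether the latest reading went up or down.
samples = iter([5, 9, 4])
plan, obs = run_adaptive_cycle(
    gather=lambda: next(samples),
    reflect=lambda o: "up" if len(o) < 2 or o[-1] > o[-2] else "down",
    adjust=lambda p, insight: p + [insight],
    iterations=3,
)
print(plan)  # → ['up', 'up', 'down']
```

The point of the shape, rather than the toy content, is that the plan is never fixed up front: each pass through the loop revises it in the light of the newest observation.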
Recently my team did a big systematic review of the literature and also did some empirical work to try to develop a framework for looking at technology-supported innovation and change in health and care. Now, one of the things we found was that every aspect of that problem could be divided into simple, complicated, or complex. Simple means everything is very predictable and there are very few components; the example I use is making a sandwich. You just open the fridge, get the stuff out, make the sandwich. Complicated means things are still predictable, still very logical. The example we use when we're presenting the NASSS framework is building a rocket. So long as you follow the instructions, like a Haynes manual, eventually you'll get your rocket every single time. So simple and complicated call for the same approach; philosophically they're the same thing. Complex is different. Complex means things are unpredictable, they're dynamic, they're forever changing, and you simply can't predict them anymore. You can't predict what the weather is going to be in two months' time. The example we use for complexity is raising a child. You can have the manual for raising a child. You could have raised one child previously, but the next child may not react in the same way to the same whatever it is you're doing. Now, because health systems are complex and different bits of health systems exhibit complexity, we need a different framework for looking at change. The framework we developed, which we called the NASSS framework, N-A-S-S-S, stands for non-adoption, abandonment, and barriers to the scale-up, spread, and sustainability of health and care technologies. In the NASSS framework, we developed seven different domains, and each of these domains can be simple, complicated, or complex. So the first domain is the condition: what's wrong with the patient or the client? What's the thing that they want you to help them with?
A simple condition might be a broken ankle, or it might be a heart attack; it might kill you, but it's still pretty straightforward and predictable, and you know what to do about it. A complex condition might be something like dementia, or a mental health problem with drug dependency: you can treat the same patient with the same evidence-based guideline and you won't get the same result, because of all sorts of things that are going to influence that condition. Time and again, the textbook condition, which is always simple, doesn't map to what the patient in front of you has actually got. So that's the first domain. The second domain is the technology, and the technology can be simple, meaning it already exists, is dependable, and is already installed, plugged in, and bought, such as the telephone. Or it can be complex, and the obvious element of complexity in the technology is that it hasn't actually been developed yet. It's something that someone is promising you, but we don't know whether it's going to be interoperable, and we don't know what might happen if the supplier pulls out, for example. There are all sorts of material, commercial, and social complexities around the technology. The third domain, which can be simple or complex, with complicated in the middle, is the value proposition. In other words, does this technology bring value? Can it bring value to the patients? Is it desirable? Is it going to be useful? Is it effective? Is it cost-effective? Then there's also the supply-side value to the developer: is it going to be a good proposition for a venture capitalist to put some money behind? Or is it a technology where people say, well, that's very, very risky? Most health technologies are extremely risky, so the supply-side value proposition is often quite weak. The fourth domain is what might be called the adopter system, or the people that you want to use the technology.
Now, there's plenty of evidence that shows that the reason why technologies don't work in the healthcare system is because clinicians don't use them, for all sorts of reasons. Similarly with patients, it's quite hard to persuade a patient to use a technology if there's no really obvious benefit to them. So there's a whole area of simple versus complex in relation to the users of the technology. Not just are they clever enough, do they have the skills, but also is it a threat to their identity? Is a member of staff being asked to become what they would call a data entry clerk instead of a professional? Those questions around staff issues are another area of complexity that you have to factor in. The fifth domain in the NASSS framework is organizational, and there's an awful lot of different dimensions of the organization that can be either simple or complex, but let me give you one example. I'm going to supply a full lecture on the NASSS framework if you want to go into more detail. But just as an example of the organizational dimension, a lot of technologies now are introduced to pursue a dream of integrated care: organization A and organization B are going to work together through this technology, and care is somehow going to be integrated. Now, the problem with that is, if they are two separate organizations, they don't have an established contractual relationship; they don't even have an established partnership. So there are two boards, each of them trying to make a decision as to whether this technology project is a goer, and people in one organization want it to happen because people in the other organization are going to do a bit of work that's going to make their life easier. Now, how do you operationalize the savings in organization 1 when the people in organization 2 start using the technology? That's surprisingly common.
The system might make savings overall, although that's not guaranteed, but the organization that's signing on the dotted line might not itself make any savings, and so there have to be some quite complex cross-system financial flows and things like that. That's just one example of an organizational element. The sixth domain is: what's going on outside the organization? What's going on in relation to policy? What's going on in relation to regulation of technologies? Some technologies have already ticked the boxes for, say, information governance regulations, but very often when a bright entrepreneur develops a technology, it has to go through approvals, particularly demonstrating that the information governance of the platform is up to scratch. But there are also other aspects. There are questions about whether the public want it, whether the professional bodies are behind it, whether there's a political and policy push, or whether the policy wind is blowing in the other direction. So all of that external environment is very, very important. The last domain in the NASSS framework is time. Now, all complex systems change over time. The problem with implementing a technology is that very often the plan is simply to implement it now, this year. But what's going to happen in three or four years' time when, for example, patterns of disease might have changed, your organization might have merged with another organization, or the person who is pushing this might have left to work somewhere else? How resilient is your organization to external changes and also internal changes? Is the organization able to reflect, learn, adapt, and change? Is the technology tweakable, customizable, adaptable? Because if it's a technology that you really can't do anything with apart from use it as it was initially designed, it's probably not going to last very long. So: complexity in seven domains. Now, what do we do with the NASSS framework?
What we say is, "It's not going to work if there is complexity in every domain." There are some aspects of complexity, for example in the condition that you're trying to treat, that you can't change, but there are other things that you could make more simple. So the idea is that with every domain we try and pull it more into the simple or complicated field than the complex field, and by reducing complexity in the technology, but also in the organization and the way it's being introduced, we believe we'll increase the success rate of technology-supported innovation. So the obvious question, having introduced the NASSS framework, is how do we use it? Is it going to be any use to a Chief Clinical Information Officer? I guess the NASSS framework is a little bit experimental; we've only just published it. But the other thing I would say is, "You can't use it mechanistically." It's not something you can go through and tick all the boxes and then expect things to fall into place. It's a framework that might prompt you to think a bit more, explore certain areas and certain domains a bit more, and have conversations. I'm in the process at the moment of turning it into a self-assessment document using the questions that we've put in the paper. For each of the domains, there may be qualitative information that you can get, there may be things that you know but that can be surfaced by asking the question, and there may be quantitative information: information about disease prevalence, comorbidities, waiting times, that kind of thing, and of course the business planning and value proposition data. So it's not all soft data. I think once you've identified an area of high complexity, the game is to try and reduce that complexity, and that isn't going to be easy, but sometimes you can chunk things down, you can break things up, you can get rid of certain things that are contributing to the complexity.
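One way to picture the self-assessment she describes is a simple tally over the seven domains, flagging those rated complex as the candidates for simplification effort. This is a hypothetical sketch: the domain names come from the talk, but the ratings, the `Complexity` enum, and the `assess` helper are invented here for illustration, not part of the published framework.

```python
from enum import Enum

class Complexity(Enum):
    SIMPLE = 1
    COMPLICATED = 2
    COMPLEX = 3

# The seven NASSS domains, as described in the talk.
DOMAINS = [
    "condition",
    "technology",
    "value proposition",
    "adopter system",
    "organization",
    "wider context",
    "embedding and adaptation over time",
]

def assess(ratings):
    """Return the domains rated COMPLEX: the candidates for
    simplification (chunking down, removing sources of complexity)."""
    return [d for d in DOMAINS if ratings.get(d) == Complexity.COMPLEX]

# Invented example ratings for one hypothetical technology project.
ratings = {
    "condition": Complexity.SIMPLE,
    "technology": Complexity.COMPLEX,
    "value proposition": Complexity.COMPLICATED,
    "adopter system": Complexity.COMPLEX,
    "organization": Complexity.COMPLICATED,
    "wider context": Complexity.SIMPLE,
    "embedding and adaptation over time": Complexity.SIMPLE,
}
print(assess(ratings))  # → ['technology', 'adopter system']
```

As the talk stresses, a tick-box tally like this is only a prompt for conversations and further exploration, not a mechanistic scoring tool.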
One obvious example is: does this technology absolutely have to be interoperable with three or four other technologies? Perhaps if it was a freestanding technology, it might not be so technically elegant, but it might be easier to implement because it's more freestanding, and that probably means it will be more dependable. That's just an example off the top of my head. But working through the NASSS framework, thinking about which of these domains are salient to the problem I'm trying to address, and whether there's any way I can bring a particular domain or subdomain into a more simple zone: that is the way we've been beginning to use it. I think it's very important to understand that there isn't a single solution to complexity in any domain or subdomain. It's much more creative than that, and that fits with the general principles of a complex adaptive system: there isn't a fixed solution, there isn't a technical fix for this. The trouble with people who are into technologies is that they're often looking for a technical fix. It's much more: can you use your imagination to come up with something that is going to make this particular thing work? Actually, there's a lot in the literature about what we call articulations or, in the vernacular, workarounds. A workaround is a very interesting thing to look at from a research perspective, because very often, when a workaround is needed, it's because there is too much complexity in the system, and what the workaround does is short-circuit either a technical or a social challenge. Now, the problem with workarounds is that they're sometimes against the rules, and so it's quite a good way of getting in there and surfacing the complexities. Why are these people doing it this way when the standard operating procedure suggests doing it that way? Is there any way that we can change the standard operating procedure to reflect what people are actually doing? Because what they've done is come up with a creative solution. That's one example.
But of course, if the problem is, for example, that there isn't enough drawdown money in the organization (what we call organizational slack) to support new training, or to support trials of something, then the solution has got to be somehow to generate some drawdown money so that people at the coalface can actually start experimenting, trying things out, and taking a few risks without getting it in the neck. So how do you apply the NASSS framework? The answer is not mechanically. The answer is with a lot of imagination.