Let's talk about a case study for EDF in terms of how they actually did this for real. Joanna, if you could just give me a bit of background: where you were with your planning process and how Anaplan came into the mix.

Yeah, okay, thanks David. EDF Energy is the UK's largest producer of low-carbon electricity. We generate electricity through nuclear energy and then sell that on to businesses and households throughout the UK. We've got about 13,000 people in the UK, but we're part of a larger group that's headquartered in France. As part of our user story gathering, we looked at what we needed to build this consolidation piece. We used the Anaplan facilitation there and talked to our subject matter experts. As part of that, we'd naturally gathered all of the master data that we needed, and the requirements for all of that master data. When we were looking at how to organise these user stories into a project, we took all those data stories and put them together into one sprint. That was the beginning of our Data Hub. We didn't know at that point that it was a standalone model, but that was our concept of "this is the stuff that we need in order to even begin building the consolidation." Our very first build was a three-week sprint that created this Data Hub. Across our team, probably only about half of their time was spent building in Anaplan. The other half was spent sourcing the data that we needed and making sure that it was in the right format, so that we had something that was sustainable and could be managed going forward. The idea was that we had all of this master data coming across from SAP, and we needed to be able to update that on a daily basis so that everyone using our models was using live data. We very quickly ended up with a structure where we had broadcast reports and queries coming out from SAP, being put into our shared area and then imported into Anaplan.
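The daily flow described here (SAP exports dropped into a shared area, then each file loaded into Anaplan) can be sketched in outline. This is a hypothetical Python illustration of the logic only; EDF drove this with an RPA robot, and the folder path, file pattern, and upload step are all assumptions, not their actual configuration.

```python
from pathlib import Path

SHARED_AREA = Path("/shared/sap_exports")  # assumed drop location for SAP files

def collect_daily_exports(shared_area: Path) -> list[Path]:
    """Find the SAP export files dropped into the shared area."""
    return sorted(shared_area.glob("*.csv"))

def upload_to_anaplan(export_file: Path) -> None:
    """Placeholder for the Anaplan import step; the real call depends on
    the integration used (e.g. Anaplan Connect or the bulk API)."""
    print(f"uploading {export_file.name}")

def run_nightly_import(shared_area: Path) -> int:
    """One pass of the nightly job: pick up every export and load it."""
    files = collect_daily_exports(shared_area)
    for f in files:
        upload_to_anaplan(f)
    return len(files)
```

The point of the sketch is the shape of the job: a single scheduled pass over whatever SAP dropped off overnight, so the models are refreshed before anyone logs in.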
Then some of those queries weren't necessarily in the right format that we would want. We really wanted to think about having a nice data set that we would be able to upload into Anaplan, with all of the fields that we needed and in the format that we needed them. We also knew that we would have to replicate this every single day. Now, luckily, a great person on our team came up with the idea of using robotic process automation to bring that data in every day. This was something that we were very much at the start of looking at within the finance team anyway, so this gave us a good chance to pilot that approach. We got the robot to not only import those files into Anaplan every day, but also to reformat those queries. I don't know how many of you in here are working with SAP; I'm sure you've come across the problem of trying to change queries in SAP and it taking forever. We said, well actually, the robot can do this really quickly and easily. It means we're not messing about with queries that other people need to be in particular formats for their work. The robot can just open the query every morning, reformat it to what we want and then upload it into Anaplan. We then ended up with a nice, sustainable set of lists and modules that had all of the data that we needed; it was being updated overnight, every night, by the robot, and it was coming in in the format that we needed. But it was very much just coming in as text fields. What we could do in the Data Hub was very quickly format that data, so those text fields were becoming period fields and list fields. It was coming in a more usable format. What we could then also do was tag that data to show what part of the business it belongs to, because we wanted to send this data out to our regional models. We don't want, for example, our generation model getting data that is actually appropriate for our customers business.
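The formatting-and-tagging step described above can be sketched as follows: a raw SAP row arrives as plain text, gets turned into typed fields (a period, a numeric amount), and is tagged with the business area its company belongs to. This is an illustrative sketch only; the company codes, area names, and date format are invented examples, not EDF's actual master data.

```python
from datetime import datetime

# Assumed company-to-business-area mapping maintained in the Data Hub.
COMPANY_TO_AREA = {
    "GEN01": "Generation",
    "CUS01": "Customers",
}

def format_row(raw: dict[str, str]) -> dict:
    """Convert one text-only SAP row into typed, tagged Data Hub data."""
    # Text date (assumed SAP-style "dd.mm.yyyy") -> a period label.
    period = datetime.strptime(raw["period"], "%d.%m.%Y").strftime("%b %y")
    return {
        "company": raw["company"],
        "period": period,                # text field -> period field
        "amount": float(raw["amount"]),  # text field -> number
        # Tag the row with the part of the business it belongs to.
        "area": COMPANY_TO_AREA.get(raw["company"], "Unallocated"),
    }
```

The tag is what later lets each regional model see only its own slice of the data.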
They only want to see their generation data, so we could tag it using the company that the data belonged to and say, well, that's a generation company, therefore that is generation data. By the end of it, then, in our Data Hub we had validated and well-formatted data that was all saved down into appropriate views depending on where it was going: generation data tagged as generation and saved in the generation view; consolidation, which is looking at everything, saved in the consolidation view. That's how it all worked, which was wonderful. The robot imports that data into the Data Hub every day, and then goes on to distribute it out to the regional models every day. It literally opens up the model, presses the import button to refresh the whole model, and that then runs all of the actions that you need. It can pull data back from those models as well. For example, we want to be able to pull the plans that have been created in those regional models back into the consolidation to consolidate them. It's doing all those things, and that was our first implementation case, which was great.

Brilliant. I love hearing about the robot; I think it's fantastic. Again, pushing the boundaries of what they can do with innovation in the field is just amazing for me. [inaudible]

It would have taken a human about 2.5 hours a day just to sit through the upload of those processes, so it's saving us about 50 hours a month.

Brilliant. Let's fast forward to today. That was obviously the initial use case, and as we can see, you've expanded the footprint. Could you talk us through how you're now joining the dots?

As we begin any new project, any new use case, and we're gathering the user stories and requirements for that project, we think again about what data is required for it. For example, our call centre planning and our workforce planning are very much looking at people data: how many people do I need to resource my call centres, depending on demand?
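The distribution step described above amounts to grouping tagged rows into one view per business area, with the consolidation view seeing everything. A minimal sketch, assuming a simple dict-per-row shape (the row fields and area names are illustrative, not the real module structure):

```python
from collections import defaultdict

def build_views(rows: list[dict]) -> dict[str, list[dict]]:
    """Group tagged Data Hub rows into per-area views; consolidation
    gets every row regardless of tag."""
    views: dict[str, list[dict]] = defaultdict(list)
    for row in rows:
        views[row["area"]].append(row)   # e.g. the Generation view
    views["Consolidation"] = list(rows)  # consolidation looks at everything
    return dict(views)
```

Each regional model then only ever imports its own view, while the consolidation model pulls the full set.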
They need to know who the people are in the organisation, and when we think about using that data, we ask: is this data that we're likely to use in any other build in the future? If the answer is yes, then it goes into our Data Hub. If it's no, and it's very specific to that individual use case, then it can go straight into that model. We've grown our Data Hub as we've built everything, using exactly the same technique that we used in that initial implementation. We have all of our HR data going out to our call centre planning and workforce planning. We also have it coming into our regional models because, of course, we're looking at our cost base and we're trying to look at our people costs. We then find it very quick to build new models, because we've got everything sitting there in our Data Hub. For example, when it came to looking at our legal consolidation: our initial use case was looking at our internal management reporting and our headquarters reporting, but when we report to our external shareholders using statutory codes, we're using a different company hierarchy. We need that to happen in a different place, using the same data but with a different end result. We could very quickly build that model because we've got everything that we need sitting in the Data Hub. To start a new model with all of the master data structures and so on that we need, and to get all those imports and processes set up, really only takes us about half a day. That makes it fantastically speedy. Our initial consolidation took us about three months to build; our legal consolidation only took us two weeks. That's because we had all that stuff either in the Data Hub or in the consolidation model. When we built the initial consolidation, we actually put the rules of our consolidation into our Data Hub. For example, what's an intercompany code? All that stuff was tagged in the Data Hub.
As soon as we brought it across to the legal consolidation, our work was practically done. Real speed in developing future models by having all that stuff there in the Data Hub.