Now that we have seen what big data is and how it is transforming industries, I want to double-click on one of the things we mentioned at the end of the last video: how important trust is to what companies can do with big data. And I'm very happy to have with us today John Rose. John Rose is a senior partner in our New York office. He has been working on the trust topic for the last five years, starting with his work with the World Economic Forum and continuing with his fellowship at the Henderson Institute. John, thank you for being here.

Thank you very much for inviting me.

So John, let me start with a very broad question, but I guess an important one to get right. How do you define trust?

Well, trust in a data and analytics sense, as it applies to privacy and personally identifiable information (PII), really has two definitions. And one of the issues is that the corporate world understands the first and is focused on the first, but not necessarily the second. The first definition is the one everybody looks at and spends time on: what are the legal and regulatory rules that define how an organization collects data, stores that data, stewards that data, forgets that data, and uses that data in new ways, and what rules does it have to follow along those dimensions. The second definition of trust is the consumer-sentiment definition: do I trust you to steward my data in ways that I find safe, effective, and careful? That is a much more subtle and nuanced definition of trust because it's contextual. It has to do with consumer understanding and perceptions, and it has absolutely nothing to do with regulation.

And we know that a lot of businesses are still very bad at managing that second part of trust. Why is that? Why do they tend to underestimate what it means for them and how important it is? What is the hardest thing to get right?

Well, I think it's a couple of things. Historically, when data has been stolen or when companies have used data in new ways, there have been no negative implications. More fundamentally, it's only in the last few years that the combination of storage, data and analytics tools, and the cloud has enabled data that was formerly stuck and captured in legacy systems to be extracted and used in powerful ways. Just based on your name, when we finish this, I could within about five minutes let you know what your social security number is, where you live, and a whole bunch of other things you'd be stunned that I could learn about you. And that's because data now flows, and in the past it didn't. So the historic context in which companies could put data to new uses was highly limited. All of the power in what you might call big data, data analytics, or performance marketing comes from taking data that was collected in one context the consumer understands, combining it with other data that was collected in other contexts the consumer understands, and creating a stew that allows you to do something fundamentally different with that data than the purposes for which it was originally collected. So this issue of repurposing data for fundamentally new uses is a new issue, not a historic one.

So we know why this is difficult and why the old way of doing things is not working. But how should companies think about it moving forward?
How should a leader, a manager with a customer database that is of course sensitive, think about maintaining that trust while using the data for the desired outcome?

So the biggest issue that needs to be addressed here is the understanding that data misuse, in addition to its definition of crossing a legal or regulatory boundary, has a second definition, which is unpleasantly surprising consumers about either what you know or what you're doing with it. In the industry we call it the creep-out factor. If consumers are surprised or creeped out, the second part is that they actually act in ways that are brand-destroying and revenue-harming. The survey and analytic work that we've done on people who've experienced a data misuse suggests that in the first year after a data misuse, at today's level of understanding of what a data misuse is, meaning the percentage of the population who perceives it as one, there can be a 7 to 8 percent drop in revenue, which then decays a bit in year two to 3 to 4 percent. First of all, that by itself is meaningful. But today, when there is a data misuse, only about 20 percent of the population perceives it as one. So as those perceptions increase, and our research suggests they are increasing, if I go from 20 percent to 40 percent, my 7 to 8 percent is now 14 to 16 percent. So there is a very significant brand and revenue risk in triggering a perceived data misuse. And secondly, if I trust you more, if in fact you have not triggered a data misuse from my perspective, and I understand how you are stewarding my data and I trust it, I am seven to ten times more likely to let you do new things with my data than if I'm not. So there is both a sustainable competitive advantage in being a trusted steward of data, and a serious brand and revenue risk in inadvertently crossing the line.

Can you give us one example, maybe to conclude, of the wrong way to maintain trust with customers, and one example of the right way of doing it?

Yeah. So let me start with where the issue comes from. Companies are in general focused on the legal and regulatory issues. And they believe they're making good and safe decisions by being conservative about data usage, and doing things that they are certain are not close to a trigger line in terms of what consumers would be afraid of. The reality is that consumers have a much wider willingness to allow companies to do things with their data than companies believe, so companies are being very conservative. The reckless part of what we've termed recklessly conservative behavior is that they do not believe it is important to actively engage their stakeholder community on what they're doing. They believe that if the actions themselves are benign, the reactions will be benign. And that's where the mistake really comes from. So what companies are missing is the transition they have to make away from merely creating the ability for consumers to have access. There are many, many companies where, if you log onto the website and identify yourself, you can see the data they have on you, you can see the uses to which that data will be put, and in some advanced cases you can even open up or restrict the ability to use that data.
However, if you think of yourself as a consumer and you think of the number of digital organizations you have touchpoints with, including your credit card company, your phone company, and your cable company, not just Facebook, but all these places that have data on you, it is an impossible task to monitor 30, 40, or 50 companies' worth of different activities, databases, and policies. So it's wonderful that they've created that ability to pull the information, but the reality is that only a very small, single-digit percentage of consumers do that with any single company, much less with all of them. So the challenge and requirement is to make the shift from a pull-based model, where I as a consumer am pulling the information in, to a push-based model, where I as a company am ensuring that you understand the data I have and the way I'm using it today, and, when I do something materially different, the way I'm going to use it in the future, giving you some opportunity to engage in some manner with the new use. That doesn't mean that everything has to be an opt-in permission; there are lots of different ways of doing it. But I need to allow for some degree of engagement. And that's where the reckless part of recklessly conservative comes in, and the consequences of not doing that are horrendous.

So for example, because you wanted an example: when O2 launched a totally benign retail service, in which they took all of the wireless location data and all of the underlying attributes of age, sex, and earnings and created an anonymized database for retailers in Europe, to be able to do things like plan store promotions and figure out what to put in their display windows, all based on the fact that at different hours of the day and on different days of the week different types of people are walking by, there was a furor. Because they hadn't communicated that the data was anonymized. They hadn't communicated, in advance of launching the service, what the service was. They hadn't communicated that it was going to be protected and how it was going to be protected. And they didn't give consumers any ability to engage on the question of "am I comfortable with this or not." As a consequence, the press reports were "O2 sells its consumer data to retailers." The Germans forced them to pull that service from the market. And the German regulators went on a one-year war to convince consumers to stop using O2 as a wireless provider and switch to wireless providers A, B, and C.

So the big shift is understanding that this issue has real consequences, and making the pivot from making information available to ensuring that all of the consumers, and I use the term stakeholders, all the relevant stakeholders, understand what you're doing, understand how you're doing it, and are able to engage with new use cases, and then moving forward on that. That engagement and transparency is where the rubber will meet the road on: am I developing a brand as a data steward that people trust, or am I going to continuously run into adverse consumer reactions when I do new things?

And the key to that is indeed the pivot [inaudible].

The key is that mental pivot from making information available to assuming the responsibility to ensure that all of your relevant stakeholders, A, understand, and B, have some mechanism to engage.

Thank you, John. It was a fascinating conversation.

It's been a pleasure.