Our research into the world of platforms and data started between 2014 and 2015. Those years may seem recent, but people's perception of how digital platforms use data was completely different back then. I clearly remember one of the first lessons in which I talked about Client-as-a-Source models, using the case of Twitter and the Twitter Political Index, built on the sentiment analysis of tweets to try to anticipate how the 2012 American election between Obama and Romney would go. It was a communication design classroom, and I still remember the face of a girl in the back row: she was literally speechless, shocked by what I was revealing to her. We were already used to using digital services and continuously giving them huge amounts of data, but we were not at all aware of the implications.

A few years later came Cambridge Analytica, #DeleteFacebook and various other scandals and movements that created greater awareness, but also a certain distrust of platforms and how they use data. And here we are at the paradox we talked about: more and more services to which we give our data, without asking ourselves many questions, ready to enjoy a free service or something perfect for our needs, and at the same time an increasingly obsessive concern about our privacy. How can we solve this paradox? How can we make the relationship between users and platforms healthier and more equal?

A possible answer can be found in one of the first companies we used to explain two-sided orthogonal platforms and the Client-as-a-Source logic. Strava, implementing Data Trading, created Strava Metro, the service that allows institutions of various kinds to buy packages of data about how users move around a given geographic area, in order to make decisions about, for example, the management of bike lanes. As with all the cases we have discussed that were not involved in scandals, the ability to use anonymized and aggregated data for this purpose is clearly stated in the privacy policies.

In the days of the Cambridge Analytica scandal I read a particularly interesting post on LinkedIn: "Our parents taught us not to sign a contract without reading it, but nobody taught us not to tap on a screen without reading." In fact, many of us, myself included, do not read privacy policies before using a digital service. They are long, often complex, and it is not easy to find the information we are looking for. This creates a grey area in which many platforms are more or less comfortable: they comply with regulation, but it remains unclear to the customer what they can do with the data; they are somewhat opaque. Some even make it difficult to find clear information online about how they use the data and who they give it to.

Strava is not opaque; Strava is a transparent case. Strava Metro is not a hidden service: it can easily be found on the Strava site, it has its own website, and it clearly explains how the data is used and why. This observation raised a question for us: what if platforms were totally transparent and explained, succinctly and clearly, to users how and why they use their data? We therefore defined the concept of Business Model Transparency as the degree of clarity with which a user can understand the business model of a digital platform, thanks to simple and clear access to information about how and why the platform uses the data it collects and with whom that data is shared. And we designed an experiment.
We designed mock-ups of two digital services, both fitness trackers similar to Strava or Nike+, which we mentioned in this course. The two services have the same functionalities; they simply differ in branding. For each, we designed two versions. The first is totally transparent: already in the presentation of the service, it explains that the service is free thanks to the data collected, and that the data is shared, in a completely anonymous way, with third-party companies in the transport and medical sectors to enable research on where and how people do sport. The second version is opaque: the presentation of the service gives no information about the use of data. That information is only available in the privacy policy, buried among all the other clauses, with only general indications about the kinds of companies involved.

We ran the experiment by showing each participant two services, one opaque and one transparent, in random order, and we asked whether and how their behavior would change. We discovered two particularly interesting things:
- the first is that the average intention to use the first service shown, whether transparent or opaque, does not change;
- the second is that people who saw the transparent case first show a significantly lower willingness to download the opaque service.

This allows us to make two points. The first is that users, despite what we might infer from the various protest movements we see on social networks, are not particularly influenced by transparency when choosing whether or not to use a service. This should at least reduce the fear of being more transparent. The second point is dynamic in nature: as people become accustomed to transparency, they tend to demand it. Put another way, we are not used to a particularly high level of transparency in digital services right now, so some people perceive it as a delighter: they appreciate it when it is there, but do not miss it in its absence. Our data, however, show that once users become accustomed to transparency, they are dissatisfied when it is missing. It becomes, in Kano's classification, a must-have: a feature that does not generate satisfaction when present, but generates dissatisfaction when absent, like airbags on a car.

We are not at that level yet, but we may get there soon. If we think about it, not too many years ago attention to sustainability was in a similar position: a delighter for the few people who paid attention to it back in the 1990s. Today it is a must-have, capable of appreciably shifting the valuation of an entire company. What if transparency follows a similar trend? What if, in a few years, it becomes a must-have in the digital world? We will find out. In the meantime, we can reason about the kind of relationship we want to build with the users of our platform and decide what level of Business Model Transparency we want to create. The higher the transparency, the more we can treat a two-sided orthogonal platform like a transactional platform, making the presence of the second side explicit to the first and drawing potential benefits in terms of data-driven innovation, but also in terms of the quantity, quality and variety of data collected.
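To make the logic of the two comparisons behind these findings a bit more concrete, here is a minimal sketch in Python. Everything in it is an assumption made purely for illustration: the 1-to-7 rating scale, the sample sizes, the randomly generated numbers and the choice of a Mann-Whitney test are hypothetical and are not the data or the analysis from our study. The only point is the structure: first-exposure ratings compared between conditions, and ratings of the opaque service compared by viewing order.

```python
# Hypothetical sketch of the two comparisons described above.
# The ratings below are randomly generated for illustration only;
# they are NOT the data from the study discussed in this lesson.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "intention to use" ratings (1-7) for the FIRST service each
# participant saw, split by whether that first service was transparent or opaque.
first_transparent = rng.integers(3, 7, size=50)  # transparent version seen first
first_opaque = rng.integers(3, 7, size=50)       # opaque version seen first

# Comparison 1: does intention to use the first service differ by condition?
u1, p1 = stats.mannwhitneyu(first_transparent, first_opaque)
print(f"First-exposure comparison: U={u1:.1f}, p={p1:.3f}")

# Hypothetical willingness-to-download ratings for the OPAQUE service,
# split by whether participants had already seen the transparent one.
opaque_after_transparent = rng.integers(1, 5, size=50)
opaque_seen_first = rng.integers(3, 7, size=50)

# Comparison 2: is the opaque service rated lower once transparency has been seen?
u2, p2 = stats.mannwhitneyu(opaque_after_transparent, opaque_seen_first,
                            alternative="less")
print(f"Order-effect comparison: U={u2:.1f}, p={p2:.3f}")
```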