Hello. My name is Rene Weber. I'm a full professor here at UC Santa Barbara and the Director of the Media Neuroscience Lab, and part of the research in our lab is about narratives and persuasive messages. So the question is: what is an engaging message? It turns out that if we look into what makes messages engaging, moral information is a crucial part. If something touches on world conflict, on moral violations in which people are treated, for example, unfairly or unjustly, that is something that is relevant and that makes you act upon the contents of that message. So given that evidence, wouldn't it be nice if we were able to analyze, on a global level, what the moral information is, what the moral frames are in, for example, news, in movies, or in any other collection of works? That's what we do in our lab with the so-called MoNA project. MoNA stands for Moral Narrative Analyzer. In one component of this MoNA project, we capture global news, that is, what is happening in the news around the world right now. We do this every 15 minutes across the day, capturing the news content of that moment, and then we apply a data mining, big data, pipeline that consists of various methodologies to find out what these moral frames are, so we can create maps of moral information and see how these maps change over time, at literally any 15-minute interval during the day. There's a lot to say about that and about how it actually works methodologically, and here's where I refer to Freddie, because Freddie is one of our five lab members who is heavily involved in this research. So Freddie, why don't you come over and talk a little bit about the methodological [inaudible]

All right. Thanks for the introduction from Rene Weber. My name is Frederic Hopp, and I'm a graduate student here at UC Santa Barbara in the Department of Communication and also a member of the Media Neuroscience Lab. As Rene Weber already said, moral values have great motivational relevance for message-sharing behavior and for how we process and evaluate messages. But as you can imagine, extracting moral values from text data is really difficult, because morality is a latent construct; it's not directly clear what these values are about. I'm here today to show you some methodologies that we apply to extract, as fast as we can, the latent nature of these moral values from texts.

To extract moral values from text, we developed the Moral Narrative Analyzer, which is an online platform that combines human coder training as well as content analysis procedures. After substantial experimentation, we found that a simplified coding task, in which humans simply use a highlighting tool to annotate texts, works best to extract moral information from texts. As you can see in this slide, on the left-hand side, this is the typical interface that people use in our lab to code moral information in [inaudible] So people use this highlighting tool just as you would in high school, highlighting portions of text to identify moral information in the text. On the right side of this slide, you can see how we use this task over scripts of movies, in this case Gone Girl. People go through scenes of movies, highlight certain portions of text, and maybe even identify whether there is a conflict in those texts. Now, you might wonder, where is the computational part in all of this?
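[Editor's note: a minimal sketch of how such highlight annotations could be represented and aggregated, assuming a simple span-based format. The field names, data, and structure here are hypothetical illustrations, not MoNA's actual schema.]

```python
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Highlight:
    """One coder's highlighted span in a text (hypothetical format)."""
    article_id: str
    moral_category: str  # e.g. "care", "fairness", "loyalty", "authority", "sanctity"
    start: int           # character offset where the highlight begins
    end: int             # character offset where the highlight ends

def aggregate_highlights(texts: dict, highlights: list) -> dict:
    """Count how often each word was highlighted under each moral category."""
    counts = defaultdict(Counter)
    for h in highlights:
        span = texts[h.article_id][h.start:h.end].lower()
        for word in span.split():
            counts[h.moral_category][word] += 1
    return counts

# Example: two highlights made by coders on the same (made-up) article.
texts = {"a1": "The executives committed fraud and abandoned their loyal employees."}
highlights = [
    Highlight("a1", "fairness", 25, 30),  # "fraud"
    Highlight("a1", "loyalty", 35, 66),   # "abandoned their loyal employees"
]
print(aggregate_highlights(texts, highlights))
```

Aggregating these per-category word counts across many coders and many texts is one plausible route toward the word lists discussed next.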
So for example, for our analysis of movie scripts, we use what is called a sentiment analysis tool to extract sentiment from scenes. Then, as you can see on the left side here, this displays a graph over the whole movie where we can see how the sentiment changes across the movie, so we can pre-select certain scenes to provide to coders for annotation. For example, we might want to select scenes that are very negative or very positive, which might carry interesting moral information. What we can also do, as you can see on the right side here, is construct so-called character networks by looking at which characters co-occur in dialogue and action descriptions, and thereby build a character network that tells us: which is the most central character in the script? What are interesting correlations or associations between characters? For example, who converses most often? In this case, you can see for Star Wars Episode 5 that Han and Leia are the primary characters that converse most often, whereas Luke and Yoda, for example, also have a stronger edge, which suggests that these characters co-occur in a lot of scenes.

So MoNA is great if you want to code a sample of these [inaudible] or movie scripts. Now, what is a sample? Think of a sample as a small selection of a bigger whole. The big population of news articles means every news article that is out there in this world. Now, you cannot possibly code every newspaper article that's out there in this world with humans, right? Humans take time to code articles. Humans might be better than automatic methods because they're really precise and they think about things. But if you want to detect trends at a global level, then human coders might not be a good choice. So how can we do it? Well, we extract moral information in word-sequence form. Remember, people highlight text in our MoNA system, so we can aggregate and transform these highlights to build so-called Moral Dictionaries.

Now, you might wonder, what exactly is a moral dictionary? Think of it this way. As you can see here on the slide, we have certain words that co-occur with certain moral categories. What does co-occur mean? It means that our coders have highlighted these words more often when they were given a certain moral category to focus on. So for example, in the fairness category, you can see the word "fraud," and I think we can all agree that fraud suggests cheating. What the dictionary does, then, is take a newspaper article, or a movie script, or maybe even a novel, and look at how many times the word "fraud" occurs in this article. Does it occur one time? Does it occur two times? It doesn't just do this for one word; it actually looks at all the other words in our dictionary and counts them. This way, we can generate a moral profile of a text. We can say, "Okay, this article or movie script really emphasizes caring, fairness, maybe even sanctity."
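[Editor's note: two minimal sketches of the techniques described above. Both use made-up data and simplified logic; they illustrate the general idea, not the lab's actual pipeline.]

First, the character-network idea: assuming character appearances have already been extracted per scene, edges can be weighted by how often two characters co-occur, and a simple degree-based score can stand in for centrality.

```python
from collections import Counter
from itertools import combinations

# Hypothetical scene data: which characters appear (in dialogue or action
# descriptions) in each scene of a script. Purely illustrative.
scenes = [
    {"HAN", "LEIA"},
    {"HAN", "LEIA", "CHEWBACCA"},
    {"LUKE", "YODA"},
    {"LUKE", "YODA"},
    {"HAN", "LEIA", "LUKE"},
]

# Edge weight = number of scenes in which two characters co-occur.
edges = Counter()
for chars in scenes:
    for a, b in combinations(sorted(chars), 2):
        edges[(a, b)] += 1

# Rough centrality proxy: total co-occurrences per character.
centrality = Counter()
for (a, b), weight in edges.items():
    centrality[a] += weight
    centrality[b] += weight

print(edges.most_common(3))       # strongest edges, e.g. (HAN, LEIA)
print(centrality.most_common(1))  # most central character
```

Second, the dictionary-counting idea: count how many times each dictionary word appears in a document, per moral category, to produce a moral profile. The word lists below are tiny made-up stand-ins for the actual Moral Dictionaries, and the length normalization is one common convention, not necessarily the lab's.

```python
from collections import Counter
import re

# Tiny illustrative word lists; real dictionaries built from coder highlights
# are far larger, and typically also handle word stems (e.g. "cheat*").
MORAL_DICTIONARY = {
    "care":      {"harm", "hurt", "protect", "suffer"},
    "fairness":  {"fraud", "cheat", "unjust", "equal"},
    "loyalty":   {"betray", "loyal", "ally"},
    "authority": {"obey", "defy", "rebel"},
    "sanctity":  {"pure", "defile", "sacred"},
}

def moral_profile(text: str) -> dict:
    """Count dictionary hits per moral category to build a moral profile."""
    words = re.findall(r"[a-z]+", text.lower())
    total = len(words) or 1
    profile = {}
    for category, vocab in MORAL_DICTIONARY.items():
        hits = sum(1 for w in words if w in vocab)
        profile[category] = hits / total  # normalize by document length
    return profile

article = "Prosecutors say the scheme was pure fraud against loyal investors."
print(moral_profile(article))
```

Because the counts are normalized by document length, profiles of a short news article and a full movie script can be compared on the same scale.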