Hi, and welcome everybody. I almost wanted to say "to our show." It feels a little bit like a show we're producing here, but it's not; it's an official course, and you're going to get your credits as long as you do the work. We are here today on the beautiful hill-and-forest campus of the University of California, Santa Cruz (UCSC), together with one of the pioneers of natural language processing and dialogue systems, Professor Marilyn Walker. Professor Walker has a long history not only in academia; she has also worked in industry at companies like Hewlett-Packard, Mitsubishi, and AT&T for a long time, and last year at Google in Mountain View as well.

Right. I went on leave and went to Google.

Okay, well, we want to hear a lot about that. Professor Walker is in the Computer Science and Computational Media departments. But you also have a master's degree in linguistics, right?

Right. When I was doing my PhD, Penn had a very strong focus on interdisciplinary work, so they encouraged everybody doing a PhD in computer science to get a master's in either psychology or linguistics at the same time.

Fantastic. That's what we are also doing here: crossing all kinds of different disciplines. All right, so let's talk a little bit about natural language and dialogue systems; that's also the name of your lab?

Right.

So nowadays, we all talk with our pockets, I almost want to say: with our phones, with our watches, with Siri, and we talk with our kitchens and living rooms through Alexa. So we have a rudimentary understanding of what these computational dialogue systems are. From a more technical perspective, how would you describe them? What are they? Are there different classes or flavors of them? Where do they come from? What's a little bit of the history?
Well, okay. The very first dialogue systems were being built in the early to mid 80's. The idea was that you had an SQL database, and rather than having to write an SQL query to get information out of the database, you would be able to talk to it naturally. Companies were really interested in this. One of the first things I did at Hewlett-Packard was work with Unilever to provide a natural language interface to their database of sales and marketing information. There was a big push then to make it a way to create customized reports: you could specify exactly what you wanted, and it would go and put the data together. So if you were a sales and marketing manager, you wouldn't have to master detailed knowledge of SQL and be able to write sophisticated SQL queries; you could just use it directly.

But you wouldn't talk to it? You would type?

You mostly typed, yeah. Much like a Google search: you type something in and something comes back. My first job was at Hewlett-Packard, and I was hired as a software engineer, not as a researcher at that point. We were working with voice input, but what was interesting at that point in time was that speech recognition was actually too slow. Speech recognition wasn't real time in the 80's. So we had to type the input in, and then we would have text-to-speech output along with pictures on the screen.

Also, text-to-speech is easier than speech-to-text, so that came first?

At that point in time, text-to-speech required a specialized box that you had to have; it wasn't regular software that would just run on any machine, it was specialized hardware.
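To make the idea concrete, here is a minimal sketch of what a natural language interface to a database does: translate a question into an SQL query and run it. This is a hypothetical toy, not HP's or Unilever's actual system; the table, column names, and the single pattern-matching rule are all made up for illustration. Real systems of that era used grammars and semantic interpretation, not a regular expression.

```python
import re
import sqlite3

# Toy in-memory sales database standing in for the kind of
# sales-and-marketing data described above (schema is invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("West", "soap", 120.0), ("West", "tea", 80.0), ("East", "soap", 95.0)],
)

def answer(question: str):
    """Map one canned question shape to a parameterized SQL query."""
    m = re.match(r"total revenue for (\w+) in the (\w+) region", question.lower())
    if not m:
        return None  # question not understood
    product, region = m.group(1), m.group(2).capitalize()
    row = conn.execute(
        "SELECT SUM(revenue) FROM sales WHERE product = ? AND region = ?",
        (product, region),
    ).fetchone()
    return row[0]

print(answer("Total revenue for soap in the West region"))  # prints 120.0
```

The point is the division of labor: the user states what they want in ordinary language, and the system, not the user, constructs the SQL.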
One of the things that has changed now, and why we see this technology coming into the marketplace, is voice search. Just having the ability to talk to your phone to do a search on Google allowed the companies to collect enough samples of people speaking that they could get speech recognition to work much better.

And that was the problem then, because people have different accents and dialects? I have an accent, and people have different dialects.

Right. So DARPA and other government agencies funded a lot of work in this area. They ran big speech recognition challenges, in 1995, 1996, '97, '98, and those pushed the state of the art enough to support voice search. When voice search started, maybe only about two percent of the queries to Google, or Bing, or one of the other search engines were spoken instead of typed. But given the millions of searches every day, that two percent was enough to build up a big database of spoken queries. So voice search got better, and that really helped speech recognition get a lot better.

Fascinating. We saw that a lot as well; that's actually how we started this class, talking about big data. There's a very tight relationship that we see in several areas of this computational science paradigm: it's the amount of data that then allows us to do computationally sophisticated tasks, right?

Right, because you've essentially seen everything. And big data goes hand in hand with compute power: we can store all that data and process it much more quickly. All the big advances in storage and computing speed have really had a big effect.