[MUSIC] Hi, my name is Killian Lynch. Today, we're going to be talking about how Oracle is providing industry-leading tools for each of the legs in the data analysis medley. To start, there's ODI, which is Oracle Data Integrator, an enterprise-class integration tool with an extract, load, and transform, or ELT, architecture. Then there's Enterprise Data Quality, a sophisticated, powerful tool that gives you the ability to do things like profiling, cleaning, and preparing your data. There are the analytic views built into the Oracle database, which provide a common framework for defining universally accessible semantic models. And then there's Oracle Analytics Cloud, which is the perfect complement, providing beautiful and insightful analysis for all of this data. For our traditional market, this is a comprehensive and compelling suite of tools: enterprise-class tools for an enterprise-class market. So get ready, because the next time you log into your Autonomous Database, you'll see that we've taken you from a toolbox that looks like this to one that looks like this. And over time, we're going to be adding more and more tools to this suite. Now I want to take a few moments to showcase some of these tools to you. The page is laid out with a card view for each tool, in groups such as Development and Data Tools. At the top here, I can access context-sensitive information via that question mark. It can be viewed in a side panel, which can be toggled to the left or to the right, or it can be opened in a separate window if you'd prefer. Here's quick access to various helpful links. Via this menu in the upper right, under the username, you can access preferences and detailed "About" information, as you can see here, or you can log out of the service if you'd like. On the top left of every screen, there's a four-bar menu, often called the hamburger menu. This provides direct navigation to the various individual tools.
Or you can go back to the Database Actions homepage itself. The next step is to load your data. And if you've ever tried to load data before, you know that's a lot easier said than done. Until now, that is. Just say what you want to do: load data into your Autonomous Database, link to a database in a remote location, or even set up a live feed. Then you just say where your data is: is it in a local file system, on a remote database, or in an object store in the cloud somewhere? Then you press Go. That's it. You can also link to your data in your on-premises and cloud storage sources, so that changes in the source are reflected automatically. Or you can create a feed from cloud storage sources, so that when new data appears, it's loaded automatically. So with a simple gesture, we've set up a single load for data of three different file types into four target tables. And the green check marks indicate that the tables were created successfully. We can now look at the target tables we just created and see the simple hierarchy of the specific devices on which the movies were watched within the broader form category. Next, we're going to load a bigger file, and this time we're going to load from cloud storage. First, we want to set up access to this cloud storage location, and we don't have any of these yet, so we're going to click Add Cloud Storage in the top right to add a new one. We're going to call this movie sales, and we're going to specify the URI for the cloud storage location here by just copying and pasting. This is a public bucket, so no credential is required. Now we can see the card for the cloud storage location, so we're ready to load data from there. Now we're going to click Load Data, choose to load from cloud storage, and press Next. Here we can see the files present within the cloud storage location, and we're going to use movie sales 2020.
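The Load Data tool derives sensible defaults, such as the table name and column definitions, from the file itself. As a rough illustration only, not Oracle's actual implementation, here is a minimal Python sketch of how a CSV header and a sample of rows might be mapped to a default CREATE TABLE statement (the function name and type rules are hypothetical):

```python
import csv
import io

def infer_table_sql(table_name, csv_text, sample_rows=100):
    """Sketch a CREATE TABLE statement from a CSV header and a data sample.
    Columns whose sampled values all parse as numbers become NUMBER; anything
    else falls back to VARCHAR2. The real tool is far more sophisticated."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    numeric = [True] * len(header)
    for i, row in enumerate(reader):
        if i >= sample_rows:
            break
        for j, value in enumerate(row):
            try:
                float(value)
            except ValueError:
                numeric[j] = False
    cols = ", ".join(
        f"{name.strip().upper().replace(' ', '_')} "
        f"{'NUMBER' if num else 'VARCHAR2(4000)'}"
        for name, num in zip(header, numeric)
    )
    return f"CREATE TABLE {table_name} ({cols})"

sql = infer_table_sql("MOVIE_SALES_2020", "day,month,sales\nMon,April,12.5\nTue,May,8.0\n")
print(sql)
# CREATE TABLE MOVIE_SALES_2020 (DAY VARCHAR2(4000), MONTH VARCHAR2(4000), SALES NUMBER)
```

The point is simply that a reasonable default schema can be inferred automatically, which is why the tool can offer a one-click load with good column names and types.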
So we're going to drag and drop that in, and that will load the data. Now let's take a look at the properties. As before, we have a good default table name. This is a CSV file, and again, this is an initial load, so Create Table is the right choice here. The column names seem good, so let's go ahead and run. You can see the status up at the top, and I'll be back with you when this is finished. Okay, that took about 15 seconds. Not bad. Let's take a look with the Explorer card at the bottom. Here we have the four tables we loaded from the three local files, and here are a couple of instrumentation tables that were created in case we need to view the details of the data load. Now we're going to look at the data in the so-called fact table that we just created. We see the data across various dimensions: geography, time, genre, customer segment, and viewing device. And here are the measures for sales and individual purchase events. Now I want to show you the statistics, which I find very helpful for understanding the structure and content of a table. For example, if we click on day, we see that there are 14 distinct values. That doesn't seem right. Rolling my mouse across the histogram below, we can see that the problem comes from a mix of uppercase and lowercase in some of the day names. That's something we'll need to fix, so let's take note of it. Here, for month, there are 12 distinct values, which is fine for a calendar. However, our specific task involves analyzing data for Q2, which is April, May, and June, so I'll need to filter out the data for the other months as well. So let's make a mental note: we need to correct the values for the days of the week, and we need to filter out the months outside of Q2. Now we can return to the Database Actions main page. So what have we seen here today?
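The two cleanup tasks we noted, collapsing mixed-case day names back down to seven distinct values and keeping only the Q2 rows, can be sketched in plain Python. This is an illustration of the logic only, not the Oracle tooling, and the column names `day` and `month` are hypothetical:

```python
def clean_movie_sales(rows):
    """Normalize day-name casing and keep only Q2 (April-June) rows.
    `rows` is a list of dicts with hypothetical 'day' and 'month' keys."""
    q2_months = {"April", "May", "June"}
    cleaned = []
    for row in rows:
        fixed = dict(row)
        # 'monday' and 'MONDAY' both become 'Monday', removing duplicates
        fixed["day"] = fixed["day"].strip().capitalize()
        if fixed["month"] in q2_months:
            cleaned.append(fixed)
    return cleaned

rows = [
    {"day": "monday", "month": "April", "sales": 10},
    {"day": "MONDAY", "month": "May", "sales": 7},
    {"day": "Tuesday", "month": "January", "sales": 3},  # outside Q2, dropped
]
cleaned = clean_movie_sales(rows)
print(sorted({r["day"] for r in cleaned}))  # ['Monday']
print(len(cleaned))                         # 2
```

After normalization, the day column would show at most seven distinct values instead of fourteen, and the month filter leaves only the Q2 data we actually need to analyze.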
In just a few short minutes, we've loaded data from files of various types and from multiple locations. We quickly scanned through the data and realized that there were some problems that need to be addressed.