Now this is one of my favorite modules. What we're going to do here is compare common data exploration techniques and focus on writing some good SQL in BigQuery on our own course dataset. Now that you're a little bit more familiar with the dataset that we're going to be exploring, which is the IRS charity dataset, let's talk about your different options for how you can actually explore that data. As a data analyst, you're not necessarily limited to just using SQL inside the BigQuery Web UI. There are also some other pretty cool data preparation tools, like Cloud Dataprep, that we're going to introduce you to later on. Last but not least, you can actually explore your data visually using a visualization tool like Google Data Studio, Tableau, Looker, or another one of those tools. It's not necessarily a linear process of writing SQL in BigQuery and then visualizing the results; you can use each of these different tools at any point that you want. A lot of it is a matter of preference, depending upon who you talk to.

The first thing that we're going to explore is the SQL approach, using the BigQuery Web UI. Why? Because SQL is a very good skill to have, almost an imperative skill for a data analyst; it's one of the fastest ways you can interact with the data behind the scenes inside of BigQuery, and it's fun.

What do you actually have to do before you write this query on your dataset? The first, and often the hardest, part is thinking up a question, some unknown, or anything that piques your interest about the dataset. Here you can look for something as simple as, well, just give me the top revenue or the lowest revenue for the organizations in this dataset, but coming up with these really complex questions can often be the most challenging or difficult part. It's like a blank canvas when you first look at a dataset, and then you have to determine, okay, well, where do we start?
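That simple top-revenue question might look something like the sketch below in BigQuery standard SQL. The table and column names used here (`irs_990_2015`, `ein`, `totrevenue`) are my reading of the public IRS 990 dataset's schema, so verify them against the actual schema in your own console before running it:

```sql
-- A sketch of the "give me the top revenue organizations" question.
-- Assumed schema: table irs_990_2015 with columns ein and totrevenue;
-- check the dataset schema in the BigQuery console to confirm.
SELECT
  ein,         -- the organization's employer identification number
  totrevenue   -- total reported revenue for the filing year
FROM
  `bigquery-public-data.irs_990.irs_990_2015`
ORDER BY
  totrevenue DESC   -- highest revenue first
LIMIT 10;           -- just the top ten
```

Flip `DESC` to `ASC` and the same query answers the lowest-revenue version of the question.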
As you'll see a little bit later on when we get into Cloud Dataprep, having some basic statistics, like the frequency of values, how many values meet the datatype constraints, and whether or not you have missing values, can be a great first start. But for now we're going to start with a blank canvas, which is just the dataset schema, and then throw some SQL at it. Second, of course, is accessing that dataset. This presumes that you have already loaded your data into BigQuery; again, here we're using the public dataset. Last but not least, and as you're going to get very familiar with in this talk, is writing the SQL that's going to query the fields and rows that you actually want returned as part of your question. Converting your question, all the way on the left, into SQL is a skill that you're going to master as you get more and more familiar with basic, intermediate, and advanced SQL.

A two-second background on SQL: I call it "sequel," and others spell it out as S-Q-L. It's the Structured Query Language, and it's been around since the 1980s. There is a standard for the language, ANSI 2011 SQL, and BigQuery standard SQL follows that standard; writing your query in standard SQL mode gives you a lot of the performance advantages that we'll talk about in later courses. It's pseudo-English, so you're using things like SELECT, which basically means give me these columns, give me the names of these charities, give me the revenue that they have, FROM this particular dataset and table, and then do some manipulation on it. If you hear ORDER BY, that's synonymous with sorting: sorted alphabetically, sorted highest to lowest. You'll get very familiar with these dark blue, all-capitalized words called SQL clauses, as well as the order in which you place them in your queries. That's what we're going to review for a large part.
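To make that clause order concrete, here's a minimal sketch. The table and column names (`charities`, `name`, `revenue`) are hypothetical placeholders, not the real IRS dataset schema; what matters is the order in which the clauses appear:

```sql
-- The major SQL clauses, in the order you write them.
-- `charities`, `name`, and `revenue` are hypothetical placeholders.
SELECT
  name,              -- which columns to return
  revenue
FROM
  charities          -- which table to read from
WHERE
  revenue > 0        -- which rows to keep
ORDER BY
  revenue DESC       -- sort the results, highest to lowest
LIMIT 10;            -- cap how many rows come back
```

SELECT, then FROM, then WHERE, then ORDER BY, then LIMIT; that ordering is fixed, and getting it into muscle memory is a big part of getting comfortable with SQL.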