Now let's talk about working with SQL databases. SQL stands for Structured Query Language, and it's used with highly structured relational databases that have a fixed schema. There are many types of SQL databases, and they all function similarly, with some subtle differences in syntax. Examples you may be used to are Microsoft SQL Server, Postgres, MySQL, AWS Redshift, Oracle DB, and the Db2 family from IBM. Using the right Python library, with some subtle changes, you should be able to connect to the database you use at your workplace and adapt the example we see here to actually pull data out of your own SQL databases. This example uses sqlite3. Some other popular packages are SQLAlchemy, which works with a number of different SQL databases, psycopg2 for Postgres, and ibm_db for the Db2 family. If we look at the code on the left, we import the library specific to SQLite, sqlite3, as well as importing pandas as pd. Our next step is to initialize the path to our SQLite database. We set the variable path equal to the string 'data/classic_rock.db': data is the folder where our database lives, and the name of the database file is classic_rock.db. Using this path, we can then establish a connection to our database. We create the connection by calling sq3.connect, the function available within the sqlite3 library (imported here as sq3), and passing in the path we just created as the argument. We set the output of sq3.connect to con, and now we have a variable that holds that connection. Then we write the actual query. We can write a SQL query within Python as a plain string: SELECT * FROM rock_songs. rock_songs is a table within our database.
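The steps just described can be sketched as follows. The file data/classic_rock.db and the rock_songs table come from the lecture's example; since we don't have the course files here, this sketch uses an in-memory database and creates a tiny rock_songs table (with made-up columns and rows) so it runs end to end.

```python
import sqlite3 as sq3
import pandas as pd

# In the lecture the database file already exists on disk:
#   path = "data/classic_rock.db"
#   con = sq3.connect(path)
# Here we connect to an in-memory database and create a small
# rock_songs table so the sketch runs without the course files.
con = sq3.connect(":memory:")
con.execute("CREATE TABLE rock_songs (title TEXT, artist TEXT)")
con.executemany(
    "INSERT INTO rock_songs VALUES (?, ?)",
    [("Kashmir", "Led Zeppelin"), ("Hey Jude", "The Beatles")],
)
con.commit()

# The pattern from the lecture: write the query as a string,
# then hand the query and the connection to pandas
query = "SELECT * FROM rock_songs"
df = pd.read_sql(query, con)  # returns a pandas DataFrame
con.close()
```

The same connect / query / read_sql pattern carries over to the other SQL libraries mentioned above, with only the connection step changing per database.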
We're selecting all the values. We set that string equal to query, and then we pass it into the read_sql function available in pandas, along with the connection we established earlier, and that outputs a pandas DataFrame. So we've learned how to use SQL; let's now move into working with NoSQL databases. Those are non-relational databases, and they vary a lot more in structure; depending on the application, they may perform more quickly or reduce technical overhead compared to the rigid structure of the SQL format. Most NoSQL databases store data in the JSON format that we touched on earlier. Some examples of NoSQL database types: there are document databases, which contain documents where each document is like one observation; if we think about JSON files, each one of those dictionaries would be an actual document. There are key-value stores, where the key is the lookup: think of having a primary key such as an ID, and then, if the ID identifies a person, the values would be their name, their age, their address, and so on. There are graph databases, which are used for network analysis and are great for maintaining relationships. Think LinkedIn: you have your first-level, second-level, and third-level connections, and a graph database maintains all of those so you can see the relationships between one person and another. And there are wide-column stores, where your columns are collected together into column families. For a person, you may have their personal details in one column family and their professional details in another, where the personal column family would hold columns such as name, location, and age, and the professional column family would hold something like experience, skills, whether or not they have a visa, and so on. So let's look at the Python code needed to read from a NoSQL database.
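The key-value idea described above can be sketched with a plain Python dict; the ID and the person's fields here are made up purely for illustration.

```python
# Key-value idea: the key is the lookup (e.g. a person's ID),
# and the value holds everything stored about that person
people = {
    "id_001": {"name": "Alice", "age": 30, "address": "12 Oak St"},
    "id_002": {"name": "Bob", "age": 25, "address": "7 Elm Ave"},
}

# One lookup by key returns the whole value for that record
record = people["id_001"]
```

A key-value database applies this same lookup pattern at scale, with the store persisted and distributed rather than held in a single in-memory dict.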
Now, we're going to show how to pull from a MongoDB database, which is a very popular NoSQL database, but as we saw earlier, there are many different NoSQL databases we could be pulling from. The first thing we need to do is establish a connection. If we look at the code now, we see that we import MongoClient from the pymongo library. We create our Mongo connection by setting the variable con equal to MongoClient(). Within those parentheses we can pass the path as an argument: if the database lives somewhere in the cloud, we pass in that path, which may need to include your username and password, in order to specify where the database actually lives. From that connection we can see the different databases: we call con.list_database_names() if we want to see what's available. One of the databases available in our example is called database_name, and our connection will have that (or whatever name your database has) as an attribute. When we say db = con.database_name, we're just setting the db variable to the specific database called database_name. We then read in the data, and in order for it to be pulled into pandas, we have to specify the query. We say db, so now we're specifying the database; the database has multiple collections, similar to how a SQL database has multiple tables. Then we call .find on one of those collections with a query. That query should be a MongoDB query document, similar to how we had a SQL query string. If we want to do something like SELECT * from that collection, we just pass in empty curly brackets, and that selects everything; we label the result as our cursor. The cursor is just a generator-like object with all of our JSON documents in it.
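A minimal sketch of those MongoDB steps is below. The connection string, database_name, and collection_name are placeholders from the lecture, not a real server, so the live pymongo calls are shown as comments; the stand-in list of dictionaries plays the role of list(cursor) so the final conversion actually runs.

```python
import pandas as pd

# Live version (requires a running MongoDB server; the connection
# string, database_name, and collection_name are placeholders):
#   from pymongo import MongoClient
#   con = MongoClient("mongodb://user:password@host:27017/")
#   print(con.list_database_names())      # see which databases exist
#   db = con.database_name                # pick a database
#   cursor = db.collection_name.find({})  # empty query document = select all
#   docs = list(cursor)                   # materialize the cursor

# Stand-in for list(cursor): the kind of documents find({}) returns
docs = [
    {"_id": 1, "name": "Alice", "age": 30},
    {"_id": 2, "name": "Bob", "age": 25},
]
df = pd.DataFrame(docs)  # list of dicts -> pandas DataFrame
```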
For us to actually pull that into a pandas DataFrame, we say list and pass in our cursor, and that gives us a list of Python dictionaries. Once we have a list of Python dictionaries, we can pass it into the pd.DataFrame function, set the result equal to our df variable, and then we have our data available in the pandas DataFrame that we're familiar with. Now, let's briefly touch on working with APIs and cloud data access. A variety of data providers make data available via Application Programming Interfaces, or APIs. Think of Twitter: if you want to get the different tweets, you can actually plug into Twitter using an API. You can do the same to pull in marketing data from Amazon, which we've done in the past for business, and that makes it easy to quickly connect from the data source into your Python notebook. There are also a number of datasets available online in various formats. We can connect, as we see here on the left, to an example available from the UC Irvine Machine Learning Repository. All we have to do is define the URL: we set data_url equal to a string, which you could type in yourself, and that URL points to the same file you'd get by clicking Download for the CSV. Then we call pd.read_csv, the read_csv function from pandas, passing in data_url, and we set df equal to the result, a pandas DataFrame that you now have access to. So let's do a quick recap. We just went over how to use SQL to get data from relational databases with structure, which we'll see in action in the labs. We also saw how you can get less structured data from NoSQL databases such as MongoDB. We also had a brief discussion of how we can connect to different APIs or to cloud data sources as we try to pull data from the web.
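The URL pattern just described can be sketched like this. The iris path is a commonly cited UCI Machine Learning Repository file used here as an assumed example (the lecture doesn't name the exact dataset), and the live download is commented out so the sketch runs offline against a small in-memory CSV instead.

```python
import pandas as pd
from io import StringIO

# Live version: point read_csv straight at a hosted CSV file
# (an illustrative UCI path, not necessarily the lecture's file):
#   data_url = ("https://archive.ics.uci.edu/ml/"
#               "machine-learning-databases/iris/iris.data")
#   df = pd.read_csv(data_url, header=None)

# Offline stand-in: read_csv accepts any file-like object, so the
# same call works on a local buffer exactly as on a remote URL
csv_text = "sepal_length,sepal_width\n5.1,3.5\n4.9,3.0\n"
df = pd.read_csv(StringIO(csv_text))
```

Either way, the result is the same familiar pandas DataFrame, so everything downstream of the load is unchanged.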
With that, we also had a brief discussion of some of the common issues that may arise when attempting to import data in the correct format, and some of the arguments you may want to be aware of or pass into your functions moving forward.