Hello. In this lesson, let's get into technical issues you may have with your content. What do I mean by this? Well, I want to explore the Schema.org and structured data initiatives in more detail. Let's also talk a little bit more about canonical tags and duplicate content issues, and finally, we'll determine whether these are the right technical developments to enable for your site.

Technical SEO and Content SEO have differences, but there is overlap, where technical improvements can affect your visible content. Tools like Google Search Console or Screaming Frog identify problems and allow you to make changes. Examples of problems that need to be fixed include missing title or description tags, titles or descriptions that are too short or too long, or a missing H1 tag. Be sure to use tools to find and fix content issues. That's a simple start, but let's look at additional ways to improve your content.

Schema.org is a great website that's worth checking out because it focuses on structured data. Over the last few years, the major search engines have rallied around this initiative. It helps the engines understand data at a very specific level. The most common use of Schema.org is for local search, where the name, address, and phone number (NAP) are often wrapped in schema code. You can also find books, movies, and events where schema code is enabled and helps make that content more visible. What users see in the search results is improved markup for videos or for name, address, and phone number, and features like the knowledge graph or the video module within search results can be further enabled by appropriate schema markup. I'll sketch an example of this markup in a moment. Now, this is an advanced initiative even within Technical SEO, and it does require some bandwidth to act on. However, understanding structured data is important for the future, especially as mobile and voice search become even more common, and it helps position your skill set in an advanced way with your clients. So consider the Schema.org initiative.

Let's look at canonical tags. In the world of content management, it's common for the same content to be accessed through multiple URLs, so canonical tags are simply a way to tell Google the official, original version of a page. You add a single line of code to the page that points to that preferred URL. Ideally, every page should have a canonical tag. You're using these to consolidate linking and ranking signals on that primary or original content, as distinct from similar or duplicate content distributed throughout your website. In WordPress, canonicals are set up by default, and many of the tools I've covered can help determine where you have canonical issues. The quickest way to check the canonical for an individual page is to view the page source and look for rel="canonical".

You will also need to address duplicate content issues. Tools such as Copyscape, Google Search Console, Screaming Frog, and DeepCrawl can help you identify them. Duplication might be in title tags or meta descriptions that are identical to other content. What you're trying to do here is give variety and relevance to the pages you've already got ranking and indexed, so that the engines see value in that content for the distinct intent and search query that represents what users are looking for.
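As promised, here's a rough sketch of what this markup can look like in a page's head: a title and meta description, a canonical link pointing at the preferred URL, and a block of Schema.org structured data in JSON-LD describing a local business's NAP details. The business name, address, phone number, and URLs are made up purely for illustration.

```html
<head>
  <!-- Title and meta description: keep them present, unique, and a sensible length -->
  <title>Main Street Bakery | Fresh Bread in Springfield</title>
  <meta name="description" content="Family-owned bakery in Springfield offering fresh bread, pastries, and custom cakes. Open daily from 7am.">

  <!-- Canonical tag: points every variant of this page at the preferred URL -->
  <link rel="canonical" href="https://www.example.com/bakery/">

  <!-- Schema.org structured data (JSON-LD) describing the business's NAP details -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Main Street Bakery",
    "url": "https://www.example.com/bakery/",
    "telephone": "+1-555-010-1234",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "123 Main Street",
      "addressLocality": "Springfield",
      "addressRegion": "IL",
      "postalCode": "62701",
      "addressCountry": "US"
    }
  }
  </script>
</head>
```

If you view the source of a well-marked-up page, this is the kind of pattern you're looking for, including that rel="canonical" line. Google's Rich Results Test can confirm that structured data like this is being read the way you intend.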
Meta tags are a great way for webmasters to provide search engines with information about their sites. I want to focus on a few meta tags that Google understands that are not deeply technical, but that help inform how content can be written or coded in a way that makes it more valuable to the user. The title and meta description are the obvious ones. But if you want to keep a page out of the index, you can use a noindex directive, added as a robots meta tag on the page itself; it's also common to apply this at the category level. Noindex is a great way to prevent a page from being indexed. Nofollow prevents Googlebot from following the links on that page, although other bots may still follow them, and those URLs can still be reached through links from other pages. Noindex and nofollow are two meta tags that Google understands, and they're worth considering for your site where the need presents itself. I'll show a quick example of these tags at the end of this lesson.

A site's URL structure should be as simple as possible. URLs should be logical and intelligible to humans. Use robots.txt to block Googlebot's access to problematic URLs, whether they're dynamic URLs, URLs that generate on-site search results you don't want visible in Google's results, or URLs with confusing parameters or session IDs that don't make sense. Use a combination of good URL structure, robots.txt, and appropriate use of the URL parameters functionality within Google Search Console to clean up the URL structure visible in search results. Whenever possible, shorten URLs by trimming unnecessary parameters to make them simpler for engines to crawl.

Ajax is a good approach for users, but if you're starting from scratch, I recommend building your site structure using HTML. Keep it really simple. Once you have the site's pages, links, and content in place, you can improve the appearance and interface using Ajax. You'll likely have links requiring JavaScript for Ajax functionality, so when you create these links, format them so that they offer a static link as well as calling a JavaScript function. Google includes a lot more information in their Ajax crawling documentation for developers, and I recommend you check that out for more information.
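To tie these last points together, here's a rough sketch of the robots meta tag and an Ajax-style link that keeps a static fallback. The URLs and the loadSection function are made up for the example.

```html
<head>
  <!-- Robots meta tag: keeps this page out of the index and tells Googlebot
       not to follow the links on it -->
  <meta name="robots" content="noindex, nofollow">
</head>
<body>
  <!-- Ajax-style link with a static fallback: the href still works as a normal,
       crawlable link, while the onclick handler loads the content dynamically -->
  <a href="/products/bread/" onclick="loadSection('/products/bread/'); return false;">Bread</a>
</body>
```

And a simple robots.txt that blocks crawling of on-site search results and session-ID URLs might look something like this; the paths are placeholders:

```
User-agent: *
Disallow: /search
Disallow: /*?sessionid=
```

Keep in mind that robots.txt controls what gets crawled, while the noindex meta tag controls what gets indexed, so pick the mechanism that matches what you're trying to achieve.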