Non-routinely collected data enable a level of precision and detail that can potentially unlock huge health gain. They can also enable a rich qualitative understanding of critical value in public health, one that is frequently overlooked or simply impossible within the constraints of a busy service. I like to think about non-routinely collected data as belonging to one of three groups: one, passive, ad hoc surveys; two, active surveys using validated instruments; and three, active surveys using bespoke methodologies.

Let's take passive, ad hoc surveys first. Depending on where you are, there will be a range of datasets that have been generated by government, NGOs, or other organizations and subsequently made freely available. These are often one-off exercises and therefore don't fulfill the criteria to be routine data. They're often subject-specific, and may have been set up in response to a particular intelligence need. While they won't necessarily answer your question with the precision you desire, they may be able to provide detailed information about specific types of morbidity, sub-populations, or otherwise. It can often be worth a trawl of what's been collated via database indices. Examples include the United Nations Statistical Database and the UK government's Find Open Data portal. While these contain a mix of routine and non-routine data, oftentimes the data are out there; it's just a matter of finding them. Professional networks also provide good opportunities to send out an email asking if anyone can point you in the right direction.

The second group of non-routinely collected data is active surveys using validated instruments. I'm describing these as active because they require you or your colleagues to actively undertake a survey. This obviously comes with resource implications: high-quality data collection is not only expensive and time-consuming, but complicated.
Ethical permissions, oversight, and governance all make your project much more complicated and risk-prone. For those of you who are entirely new to public health, or for that matter new to science, undertaking a survey is not as simple as writing down some questions and finding some people to ask. Validated survey instruments are, in their simplest form, questionnaires that have been pieced together by experts and tested, that is validated, to confirm that the responses they generate are related to the question posed. By that I mean, if you have a survey that seeks to identify post-traumatic stress disorder, someone somewhere has tested the questions and confirmed that they are better than no questionnaire, or chance alone, at detecting PTSD among patients who have been diagnosed by experts such as psychiatrists, what we call the gold standard. Survey instruments can be validated in a number of ways, but the statistical approaches used to determine whether or not they work may involve sensitivity and specificity measures, among other metrics.

There are two other things that I want you to be aware of when using validated instruments. Firstly, the validation is specific. For example, just because a survey has been established as valid among adults, it doesn't necessarily mean it's valid among children. Likewise, you can't simply take a questionnaire that has been translated into another language and assume it remains valid. Particularly when it comes to mental health and other phenomena where culture and social norms are important, validity is specific. That doesn't mean you can't use the instrument, but it means you must use it while recognizing and reporting the risks.

Secondly, in relation to quantitative scales in particular, validated instruments can be interpreted in a number of different ways. Take the SDQ, the Strengths and Difficulties Questionnaire. This is validated for children aged 4-16 years, and seeks to detect behavioral problems.
Now, the output score, which is a number out of 40, can be graded into raised, high, and very high. Without that context, statements such as "30 percent of young people scored as high risk" are difficult to understand, so you must interpret such statements with caution. Moreover, just because someone scores as high risk, it doesn't necessarily mean they have the disorder. That's why you have to be careful and precise in how you interpret the results.

If we move on, the other thing I want to say about instruments is that you should be very careful when referring to them. Some instruments have variants. The AUDIT tool, used to screen for alcohol misuse, comes in a 10-question format and a three-question abbreviated format called AUDIT-C. While both are valid, the 10-question format is better, but obviously takes longer to complete. So always be clear, when selecting an instrument, precisely which one you mean and how valid it is.

The advantage of a validated survey instrument is that it's more accurate than a questionnaire you've drafted yourself. It can also enable comparisons: for some of the more widespread instruments, there will be data available that enable you to compare prevalence or other statistics with other populations. But sometimes you'll be faced with a situation where there isn't a validated instrument, or the question you're seeking to answer is so specific that a bespoke method is the only analytical approach you can employ. Let's look at that next.
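Before we do, here is a minimal sketch of how the sensitivity and specificity measures mentioned earlier are computed. The screening results below are invented purely for illustration; a real validation study would compare instrument results against gold-standard diagnoses across a far larger sample.

```python
# Illustrative only: invented data, not from any real validation study.
# Each tuple is (screen_positive, gold_standard_positive) for one person,
# where the gold standard is an expert diagnosis (e.g. by a psychiatrist).
results = [
    (True, True), (True, False), (False, True), (False, False),
    (True, True), (False, False), (True, True), (False, False),
]

tp = sum(1 for screen, gold in results if screen and gold)          # true positives
fp = sum(1 for screen, gold in results if screen and not gold)      # false positives
fn = sum(1 for screen, gold in results if not screen and gold)      # false negatives
tn = sum(1 for screen, gold in results if not screen and not gold)  # true negatives

sensitivity = tp / (tp + fn)  # proportion of true cases the instrument detects
specificity = tn / (tn + fp)  # proportion of non-cases it correctly rules out

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

A highly sensitive instrument misses few true cases, while a highly specific one raises few false alarms; validation studies report both because a questionnaire can score well on one and poorly on the other.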