But as you can see, you've got a lot of scatter in that data, and

your attenuation appears to be relatively slow.

So in this case, it's pretty hard to establish that the slope is real.

>> Okay, but now let's see what happens if you did only semiannual monitoring over four years.

So the same number of sampling events, but spread over twice the time period.

But if you think about it, it's statistically much easier to establish

that this attenuation is occurring and to calculate what that rate really is.

>> Yeah, so basically the key point here is that increasing the time between monitoring events, assuming you're collecting the same number of data points, will increase the confidence and accuracy of your long-term attenuation rate.

The question is by how much?
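To put a rough number on that, here's a minimal simulation sketch (our own illustration, not from the lecture) comparing two designs with the same number of samples; the decay rate, scatter, and sample count below are assumed values.

```python
# Compares the uncertainty in a fitted attenuation slope for the same
# number of samples collected over two different time spans.
# Assumed values: first-order rate 0.3/yr, lognormal sampling noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
k_true = 0.3          # assumed first-order attenuation rate (1/yr)
sigma = 0.4           # assumed scatter in ln(concentration)
n_events = 8          # same number of samples in both designs

def slope_se(years, n, trials=2000):
    """Average standard error of the fitted ln(C)-vs-time slope."""
    t = np.linspace(0, years, n)
    ses = []
    for _ in range(trials):
        ln_c = np.log(100) - k_true * t + rng.normal(0, sigma, n)
        ses.append(stats.linregress(t, ln_c).stderr)
    return np.mean(ses)

print("quarterly over 2 yr :", slope_se(2.0, n_events))
print("semiannual over 4 yr:", slope_se(4.0, n_events))
```

With the same eight samples and the same scatter, doubling the time span roughly halves the standard error on the fitted slope, which is why the longer record makes the trend easier to establish.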

So one way to look at this is to use monitoring data from a whole bunch

of sites.

And these data that we're going to show here in the next couple of slides were

compiled as part of an ESTCP project that GSI recently worked on, Tom McHugh and

Poonam Kulkarni, as well as yourself, right?

>> Mm-hm.

>> But the basic idea here is they were trying to determine how much monitoring

data is needed to identify the long-term trend with a defined level of confidence

and accuracy.

So they set up these two metrics to ensure they had enough confidence and

enough accuracy in these trends.

>> Okay, so there are some statistical definitions.

Give us an example of how they defined, say, medium confidence and medium accuracy.

>> Well, if you look under that red Medium Confidence heading, they're basically saying a statistically significant decreasing concentration trend (p < 0.1) for 80% of the monitoring wells that they were looking at.

And accuracy is sort of the same thing: they set a threshold for how they would define it, and then they took 20 sites and examined them to see how much data was needed to meet those two thresholds that they set up.
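As a rough illustration of that confidence check, here's a minimal sketch; the study's exact statistical method isn't given here, so a one-sided regression test on log concentrations stands in for it (Mann-Kendall is another common choice), and the well record below is hypothetical.

```python
# Flag a well as "significantly decreasing" if a one-sided test on the
# slope of ln(concentration) vs. time gives p < 0.1. This is one common
# approach, not necessarily the ESTCP study's actual test.
import numpy as np
from scipy import stats

def is_decreasing(times_yr, concs, alpha=0.1):
    res = stats.linregress(times_yr, np.log(concs))
    # linregress reports a two-sided p-value; convert to one-sided
    p_one_sided = res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2
    return p_one_sided < alpha

# Hypothetical quarterly record for one well (mg/L):
t = np.arange(0, 2, 0.25)
c = [10.0, 9.1, 9.5, 7.8, 8.3, 7.0, 7.4, 6.2]
print(is_decreasing(t, c))  # True if the decreasing trend is significant
```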

So I gotta ask you a question here, Chuck, and I know you worked on this project.

How much data would you assume that you would need to meet these

sorts of thresholds of accuracy and confidence?

>> Okay, so if you don't have the background data, you'd say, well, I don't know, maybe it's two years, maybe it's ten years.

But what they did is they crunched the numbers and asked, for these definitions of accuracy and confidence, what's that time?

So if we go to the next slide, it's not two years; it's more like seven years for their median site.

So they had 20 sites and some were hydrocarbon sites,

some were chlorinated solvent sites, and some were metals sites, I think.

>> Yeah.

>> And basically, it took seven years of data before that trend emerged from the murk.

And that's at statistically significant levels, at this medium confidence level.

>> Yeah, and so some took less time, and some took a lot longer, but seven years in this case was sort of a good rule of thumb.

So the key point in this is that at most sites, it really does take a long time to accurately characterize the long-term attenuation rate.

So the key point here again: in this case, we're looking at seven years or more of quarterly monitoring data to characterize the attenuation rate with even a medium level of accuracy.

And the bottom line is, if you're making decisions based on this, in terms of your remedy effectiveness and remedial timeframe, then using insufficient data might lead to incorrect decisions.

>> Okay, so what's the trade-off between cost and understanding this rate?

>> Yeah, so obviously there are big implications in terms of how frequently you have to schedule these monitoring events.

And so they did another neat thing as part of this study, where they looked at the trade-off that Chuck mentioned.

They're plotting, on the y-axis here, the number of monitoring events, and on the x-axis, the time between sampling events.

And then they determined that basically, along that curve that's shown here, you're going to see an equal amount of accuracy and confidence in the long-term trend that you establish.

So if you look at where that intersection with the first line is, that's four years of quarterly monitoring data.

So 0.25 on the x-axis, that's every quarter.

And then if you do that for four years, you have 16 monitoring events.
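Under simple assumptions (an ordinary least-squares slope on evenly spaced samples with constant scatter), you can sketch an equal-accuracy curve like that one yourself; the baseline and scatter below are illustrative values, not the study's.

```python
# Find (sampling interval, number of events) combinations that give the
# same variance on the fitted attenuation slope as a baseline design of
# 16 quarterly events (the 4-years-of-quarterly case from the slide).
import numpy as np

def slope_var(n, dt, sigma=0.4):
    """Variance of the OLS slope for n evenly spaced samples dt years apart."""
    return 12 * sigma**2 / (n * (n**2 - 1) * dt**2)

baseline = slope_var(16, 0.25)        # 4 years of quarterly monitoring

for dt in (0.25, 0.5, 1.0):           # quarterly, semiannual, annual
    n = 2                             # smallest n matching the baseline
    while slope_var(n, dt) > baseline:
        n += 1
    print(f"interval {dt} yr: {n} events over about {n * dt:.1f} years")
```

With those assumptions, the curve lands close to the combinations quoted next: roughly five and a half years of semiannual or seven years of annual monitoring to match four years of quarterly data.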

Well, it turns out that there's some pretty neat implications for that,

right, Chuck?

>> That's right, and they actually developed a tool that's based on this, but the answer is sort of the same: to get that level of accuracy, it's four years of quarterly data, maybe five years of semiannual monitoring, or seven years of annual monitoring.

So they did come up with this curve, this trade-off, and

sort of put it into this tool, right, that you're going to describe.

>> Yeah, and so you've got a picture of the flowchart for that tool shown right here.

But basically, it's pretty easy: you enter your data in, and you determine if it's ready for evaluation.

But then you go into these two different bins.

You start off with basically looking at monitoring variability.

So the questions that you're trying to answer here are when will this site meet

its groundwater cleanup goal?

And then do any individual wells appear to be attenuating more slowly than the source

as a whole?

>> And just a quick note: we did a lot of work with different statistical tools, and as you know, a lot of the work in this type of analysis is cleaning up the data.

The data's got all these duplicates, plus non-detects and different detection limits.

So there has to be this cleanup step, which is on the top left, before you get there.

But once you do that, you can do some powerful things with that data.
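As a hedged sketch of what that cleanup step might look like in practice (the column names and the half-detection-limit substitution are illustrative conventions, not the tool's actual rules):

```python
# Two common cleanup operations: substitute non-detects at half the
# reporting limit, then average field duplicates from the same day.
import pandas as pd

df = pd.DataFrame({
    "well":    ["MW-1", "MW-1", "MW-1", "MW-1"],
    "date":    pd.to_datetime(["2020-01-15", "2020-01-15",   # field duplicate
                               "2020-04-10", "2020-07-08"]),
    "conc":    [12.0, 11.4, None, 8.2],     # mg/L; None = non-detect
    "det_lim": [0.5, 0.5, 1.0, 0.5],        # note: detection limits vary
})

# Substitute non-detects at half the detection limit (one common convention)
df["conc"] = df["conc"].fillna(df["det_lim"] / 2)

# Average duplicate samples collected from the same well on the same day
clean = df.groupby(["well", "date"], as_index=False)["conc"].mean()
print(clean)
```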

On the left is the monitoring variability tool that you talked about.

And then they've also got this monitoring optimization tool.

You ask: how much data do I need to determine my trend, my monitored natural attenuation rate from concentration versus time, and how long will it take me to reach my cleanup goals with a defined level of accuracy and confidence?

Then question two: at my site, what are my trade-offs between sampling frequency and the time required before that information becomes clear?
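For the how-long-to-reach-my-goal question, a common shortcut, shown here as our own worked example rather than the tool's method, is to assume first-order decay, C(t) = C0 * exp(-k t), so the time to a cleanup goal is t = ln(C0 / C_goal) / k.

```python
# Time to reach a cleanup goal under assumed first-order attenuation.
# Both concentrations and the rate below are hypothetical.
import math

c0, c_goal = 5.0, 0.005   # current concentration and goal (mg/L)
k = 0.3                   # fitted attenuation rate (1/yr)

t_clean = math.log(c0 / c_goal) / k
print(f"~{t_clean:.0f} years to reach the goal")   # ~23 years
```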

>> Yeah, and that second question is pretty interesting.

Let's set it up in terms of the costs associated with it, and that's shown in this next table as we go through here.

And we'll take a couple of lines here, but it's basically saying, okay, here are these various options.

Here's the sampling frequency I'm going to need to get to the endpoint within a specified amount of time.

That endpoint would be establishing that long-term trend, say.

And so I'll look at option one here: this is sampling weekly, so

going out there every week and getting a data point.

You'd be done in 1.6 years, but to do that,

you've got to get 82 sampling events done.

So the cost per well, in that case, is pretty big, $123k.

>> Okay, so if somebody says I want to know what that monitored natural

attenuation rate in that source zone is in two years,

you might have to sample weekly.

>> Yeah.

>> But on the other hand, if you said, I'm going to be out there a long time, we've got time to do this, you could sample every five years; think about a site that doesn't change very much, with no risk.

If you do that for 18.4 years, then you get sort of the same information, right?

>> Yeah. >> But the cost is much different.

Instead of $123,000 for that intensive weekly sampling, this would only send you out there for basically four events, and maybe cost you about $6,000.
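Those two figures are consistent with a per-event cost of about $1,500 per well ($123,000 / 82 events); here's a quick back-of-envelope check (our own arithmetic, since the study's actual cost basis may be itemized differently):

```python
# Reproduce the quoted per-well costs from events x inferred unit cost.
cost_per_event = 1500  # USD per well per sampling event (inferred)

options = [
    ("weekly",        82,  1.6),   # (name, events, years to trend)
    ("every 5 years",  4, 18.4),
]
for name, events, years in options:
    print(f"{name:>13}: {events} events over {years} yr "
          f"-> ${events * cost_per_event:,} per well")
```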

>> Yeah, so you end up with this trade-off.

If you want the answer quickly, maybe you have to spend a little bit more money on it; if you're willing to wait, if you're a patient person like yourself, maybe you can do this last option here.

>> Delayed gratification, right?

>> Yeah, well, that's great.

Let's take a look at some of the key points then from this lecture.

So short-term variability makes it harder to determine the trend and increases the amount of monitoring needed to evaluate remediation progress.

>> Okay, and next, based on this big data analysis of these 20 sites, and going through this statistical analysis, it commonly takes seven years of monitoring data, if you're doing it quarterly, right, to characterize this attenuation rate with a medium level of accuracy and confidence.

>> On the other hand, less frequent monitoring over longer periods of time may be more cost-effective for determining trends for MNA.