Sampling Distributions
Kelly McConville
Stat 100
Week 8 | Fall 2023
Modeling & Ethics: Algorithmic bias
Sampling Distribution
Return to the American Statistical Association’s “Ethical Guidelines for Statistical Practice”
“The ethical statistical practitioner seeks to understand and mitigate known or suspected limitations, defects, or biases in the data or methods and communicates potential impacts on the interpretation, conclusions, recommendations, decisions, or other results of statistical practices.”
“For models and algorithms designed to inform or implement decisions repeatedly, develops and/or implements plans to validate assumptions and assess performance over time, as needed. Considers criteria and mitigation plans for model or algorithm failure and retirement.”
Algorithmic bias: when the model systematically creates unfair outcomes, such as privileging one group over another.
Example: The Coded Gaze
Facial recognition software struggles to see faces of color.
Algorithms built on a non-diverse, biased dataset.
Example: COMPAS model used throughout the country to predict recidivism
Steps to Construct an (Approximate) Sampling Distribution:
1. Decide on a sample size, \(n\).
2. Randomly select a sample of size \(n\) from the population.
3. Compute the sample statistic.
4. Put the sample back in.
5. Repeat Steps 2 - 4 many (1000+) times.
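The steps above can be sketched in base R. This is a hypothetical example (not the Harvard trees data): the population is 5000 simulated 0/1 values with a true proportion of 0.3, and the names `population`, `n`, and `statistics` are made up for illustration.

```r
set.seed(8)
# Step 1-ish setup: a hypothetical population of 0/1 values,
# where the true proportion of 1s is 0.3
population <- rbinom(5000, size = 1, prob = 0.3)

n <- 20      # Step 1: decide on a sample size
reps <- 1000 # Step 5: how many times to repeat

# Steps 2-4, repeated: draw a sample of size n from the full
# population each time and compute the sample proportion
statistics <- replicate(reps, mean(sample(population, size = n)))

# The 1000 statistics approximate the sampling distribution
hist(statistics)
```

Each call to `sample()` draws from the full population again, which is what “put the sample back in” accomplishes.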
What happens to the center/spread/shape as we increase the sample size?
What happens to the center/spread/shape if the true parameter changes?
Important Notes
To construct a sampling distribution for a statistic, we need access to the entire population so that we can take repeated samples from the population.
But if we have access to the entire population, then we know the value of the population parameter.
The sampling distribution is needed in the exact scenario where we can’t compute it: the scenario where we only have a single sample.
We will learn how to estimate the sampling distribution soon.
Today, we have the entire population and are constructing sampling distributions anyway to study their properties!
Package: infer
We use infer to conduct statistical inference.
Create data frame of Harvard trees:
Add variable of interest:
Let’s look at 4 random samples.
Now, let’s take 1000 random samples.
# Construct the sampling distribution
samp_dist <- harTrees %>%
  rep_sample_n(size = 20, reps = 1000) %>%
  group_by(replicate) %>%
  summarize(statistic = mean(tree_of_interest == "yes"))

# Graph the sampling distribution
ggplot(data = samp_dist,
       mapping = aes(x = statistic)) +
  geom_histogram(bins = 14)
The standard deviation of a sample statistic is called the standard error.
What happens to the sampling distribution if we change the sample size from 20 to 100?
# Construct the sampling distribution
samp_dist <- harTrees %>%
  rep_sample_n(size = 100, reps = 1000) %>%
  group_by(replicate) %>%
  summarize(statistic = mean(tree_of_interest == "yes"))

# Graph the sampling distribution
ggplot(data = samp_dist,
       mapping = aes(x = statistic)) +
  geom_histogram(bins = 20)
What if we change the true parameter value?
What did we learn about sampling distributions?
Centered around the true population parameter.
As the sample size increases, the standard error (SE) of the statistic decreases.
As the sample size increases, the shape of the sampling distribution becomes more bell-shaped and symmetric.
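The sample-size property can be checked directly by simulation. This sketch reuses a hypothetical 0/1 population (true proportion 0.3, not the Harvard trees data); the helper `se_for` is a made-up name for illustration.

```r
set.seed(100)
# Hypothetical population of 0/1 values with true proportion 0.3
population <- rbinom(5000, size = 1, prob = 0.3)

# Standard error = SD of the statistic across repeated samples
se_for <- function(n, reps = 1000) {
  sd(replicate(reps, mean(sample(population, size = n))))
}

se_20  <- se_for(20)
se_100 <- se_for(100)

se_20 > se_100  # larger sample size gives a smaller standard error
```

The drop is roughly a factor of \(\sqrt{5}\), consistent with SE shrinking like \(1/\sqrt{n}\).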
Question: How do sampling distributions help us quantify uncertainty?
Question: If I am estimating a parameter in a real example, why won’t I be able to construct the sampling distribution?
Goal: Estimate the value of a population parameter using data from the sample.
Question: How do I know which population parameter I am interested in estimating?
Answer: Likely depends on the research question and structure of your data!
Point Estimate: The sample statistic that corresponds to the parameter of interest
It is time to move beyond just point estimates to interval estimates that quantify our uncertainty.
Confidence Interval: Interval of plausible values for a parameter
Form: \(\mbox{statistic} \pm \mbox{Margin of Error}\)
Question: How do we find the Margin of Error (ME)?
Answer: If the sampling distribution of the statistic is approximately bell-shaped and symmetric, then the statistic will be within 2 SEs of the parameter for roughly 95% of samples.
Form: \(\mbox{statistic} \pm 2\mbox{SE}\)
Called a 95% confidence interval (CI). (Will discuss the meaning of confidence soon)
95% CI Form:
\[ \mbox{statistic} \pm 2\mbox{SE} \]
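As a sketch, the 95% CI formula for a mean can be computed in a few lines. The data here are hypothetical (simulated values standing in for income, in thousands of dollars), and the SE is estimated by \(s/\sqrt{n}\), one common single-sample estimate; the names `x`, `xbar`, `se`, and `ci` are made up for illustration.

```r
set.seed(1)
# Hypothetical sample standing in for income data (in $1000s)
x <- rnorm(50, mean = 60, sd = 15)

xbar <- mean(x)                 # the statistic (point estimate)
se   <- sd(x) / sqrt(length(x)) # estimated SE of the sample mean

# 95% CI: statistic +/- 2*SE
ci <- c(lower = xbar - 2 * se, upper = xbar + 2 * se)
ci
```

The interval is centered at the point estimate, with a margin of error of 2 SEs on each side.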
Let’s use the ce data to produce a CI for the average household income before taxes.
What else do we need to construct the CI?
Problem: To compute the SE, we need many samples from the population. We have 1 sample.
Solution: Approximate the sampling distribution using ONLY OUR ONE SAMPLE!
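As a preview of that idea, one standard approach is to resample the one sample with replacement (the bootstrap) and treat the resampled statistics as a stand-in for the sampling distribution. This sketch uses hypothetical data; `x` and `boot_stats` are made-up names.

```r
set.seed(2)
# Hypothetical single sample (pretend this is all we have)
x <- rnorm(50, mean = 60, sd = 15)

# Resample x WITH replacement many times, computing the mean each time
boot_stats <- replicate(1000, mean(sample(x, replace = TRUE)))

# The SD of the resampled statistics approximates the standard error
sd(boot_stats)
```

Note this never touches the population: every resample is drawn from the one sample itself.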