You’re reading a study. It says "people prefer blue over red." It sounds definitive. But wait. Who are these "people"? In psychological research, the population is the entire group of individuals a scientist wants to draw conclusions about. It sounds simple, right? It isn't.
Think of it like a soup. You can't drink the whole pot to know if it's salty. You take a spoonful. That spoonful is your sample. The giant pot of soup? That's your population. If you only scoop from the top where the cream sits, you’re going to think the whole pot is creamy, even if the bottom is full of chunky vegetables you missed.
The Big Group vs. The Small Group
Researchers rarely study everyone. It’s too expensive. It’s physically impossible. If you want to study "American teenagers with anxiety," your population is every single human in the U.S. between ages 13 and 19 who meets the clinical criteria for an anxiety disorder. That’s millions of people. You can’t put millions of kids on a couch or give them all a survey.
So, you pick a sample. But here is the kicker: the sample has to actually look like the population. If your "American teenager" study only looks at wealthy kids in private schools in Malibu, your findings don’t apply to the kid in rural Ohio or the teenager in a crowded Bronx apartment. When scientists say "generalizability," they are basically asking if the small group actually represents the big group.
Why the Population in Psychological Research Is Frequently "WEIRD"
Most of what we "know" about the human brain comes from a very specific, very narrow population. In 2010, Joseph Henrich and his colleagues at the University of British Columbia dropped a metaphorical bomb on the field. They pointed out that the vast majority of psychological research populations are WEIRD.
That stands for Western, Educated, Industrialized, Rich, and Democratic.
It turns out that about 96% of subjects in psychological studies come from countries that represent only 12% of the world’s population. We’ve spent decades assuming that a college sophomore in a Psych 101 class at Ohio State represents the fundamental nature of humanity. He doesn't. He represents a very specific subset of humanity.
Take the "Müller-Lyer illusion." It's that famous image of two lines with arrows on the ends. One looks longer than the other, right? For years, psychologists thought this was a universal human brain glitch. Then they tested the San foragers of the Kalahari. Guess what? They didn't see the illusion. Their brains didn't process the lines the same way because they hadn't grown up in a "carpentered world" full of 90-degree angles and rectangular buildings.
If your population is "all humans," but your sample is "people who live in boxes," your data is skewed.
Defining the Target Population
Before a psychologist even starts a timer or hands out a questionnaire, they have to define the target population. This isn't just a broad category. It’s a specific set of parameters.
- Clinical Populations: This might be "adults over 65 diagnosed with early-stage Alzheimer’s."
- Developmental Populations: "Infants between 6 and 9 months old."
- Occupational Populations: "Air traffic controllers with at least ten years of experience."
If the definition is too loose, the data gets messy. If it's too tight, you might find something interesting that applies to exactly five people in the world. It’s a tightrope walk. You’re trying to find the sweet spot where the results are specific enough to be meaningful but broad enough to be useful.
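If it helps to see the idea mechanically, here's a minimal Python sketch of a target-population definition written as an explicit inclusion rule. The `Candidate` fields, the criteria, and the candidates themselves are all invented for illustration; real studies spell these rules out in a protocol, not code.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    age: int
    diagnosis: str          # e.g., "early-stage Alzheimer's"
    years_experience: int   # relevant for occupational populations

def in_target_population(c: Candidate) -> bool:
    """Example inclusion rule: adults over 65 with an early-stage
    Alzheimer's diagnosis (one of the clinical populations above)."""
    return c.age > 65 and c.diagnosis == "early-stage Alzheimer's"

candidates = [
    Candidate(age=72, diagnosis="early-stage Alzheimer's", years_experience=0),
    Candidate(age=58, diagnosis="early-stage Alzheimer's", years_experience=0),
    Candidate(age=80, diagnosis="none", years_experience=0),
]
eligible = [c for c in candidates if in_target_population(c)]
print(f"{len(eligible)} of {len(candidates)} candidates fit the definition")
```

Tighten the rule and `eligible` shrinks toward those five people in the world; loosen it and the data gets noisy. That's the tightrope.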
Probability vs. Non-Probability Sampling
How do you get from the big population to the small group?
Random sampling is the gold standard. In a perfect world, every single person in the population has an equal chance of being picked. If I want to know what "New Yorkers" think, I’d need a way to randomly pull names from a hat containing all 8 million residents.
But let’s be real. That almost never happens.
Instead, we often get "convenience sampling." This is exactly what it sounds like. It’s the researcher using whoever is nearby. This is why so many studies happen on college campuses. It’s not because 19-year-olds are the most interesting people on earth; it’s because they are standing right there and need extra credit.
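To see how much this matters, here's a toy Python sketch comparing the two approaches. All the numbers are invented for illustration: a made-up population of adult ages, with the convenience sample recruited only "on campus."

```python
import random

random.seed(42)

# Toy "population": 100,000 hypothetical adults aged 18-90.
population = [random.randint(18, 90) for _ in range(100_000)]

# Probability sampling: every person has an equal chance of being picked.
random_sample = random.sample(population, 500)

# Convenience sampling: whoever is "standing right there" --
# simulated here as recruiting exclusively on campus (ages 18-22).
campus = [age for age in population if age <= 22]
convenience_sample = random.sample(campus, 500)

def mean(xs):
    return sum(xs) / len(xs)

print(f"Population mean age:         {mean(population):.1f}")
print(f"Random sample mean age:      {mean(random_sample):.1f}")      # lands close
print(f"Convenience sample mean age: {mean(convenience_sample):.1f}") # way off
```

The random sample's average age lands right next to the population's. The campus sample sits around 20, no matter how carefully you run the rest of the study.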
The Problem of Representative Samples
If the population in psychological research is diverse, the sample must be diverse. Period.
If I'm studying the effectiveness of a new therapy for depression, but my sample is 90% women, I can't confidently say it works for men. Hormones, social conditioning, and even the way different genders report symptoms vary wildly.
This leads to "sampling bias," the silent killer of good science. It happens when the way you choose your participants systematically excludes certain types of people.
Imagine a study on "internet addiction" that only recruits participants through online Facebook ads. You’ve already biased your sample toward people who are online. You’ve ignored the people so "addicted" they’ve smashed their routers, or the people who don’t use social media at all. Your population and your sample are out of sync before you even start.
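You can watch this bias appear in a quick simulation. Everything below is invented for illustration: a made-up population of daily "hours online," and a made-up assumption that your chance of ever seeing the recruitment ad grows with how much time you spend online.

```python
import random

random.seed(7)

# Invented population: 50,000 people spending 0-12 hours online per day.
population = [random.uniform(0, 12) for _ in range(50_000)]

# Recruiting through online ads: the chance of seeing the ad scales
# with time spent online (the bias baked into the recruitment channel).
recruited = [h for h in population if random.random() < h / 12]

def mean(xs):
    return sum(xs) / len(xs)

print(f"True population mean hours online: {mean(population):.2f}")  # around 6
print(f"Ad-recruited sample mean:          {mean(recruited):.2f}")   # around 8
```

The sample overestimates the population before a single survey question is asked.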
The Power of N
You’ll see the letter N a lot in research papers. It represents the number of people in the sample. A small N (like 15 people) is okay for a pilot study to see if an idea has legs. But for a population-level claim? You need a big N.
The larger the sample, the more likely it is to mirror the population's true "mean" or average. It’s the Law of Large Numbers. If I flip a coin three times, I might get heads every time. If I flip it 10,000 times, I’m going to get very close to a 50/50 split.
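Don't take the coin's word for it. Here's a minimal Python sketch (a toy simulation, not tied to any real study) of the Law of Large Numbers in action:

```python
import random

random.seed(1)

def heads_proportion(n_flips):
    """Flip a fair coin n_flips times and return the share of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (3, 30, 300, 3_000, 10_000):
    print(f"N = {n:>6}: proportion of heads = {heads_proportion(n):.3f}")
```

At N = 3 you can easily get 100% heads. By N = 10,000 the proportion hugs 0.5. Sample means behave the same way: the bigger the N, the closer they sit to the population's true average.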
Does the Population Ever Include Animals?
Sometimes. In comparative psychology, the population might be "Rhesus macaques" or "Sprague-Dawley rats." Researchers use these populations as models for humans.
This is controversial, obviously.
The argument is that certain brain structures are similar enough that we can learn about human memory or stress by looking at rats. The limitation? A rat isn't a human. You can't ask a rat about its childhood trauma or its existential dread. When the population is non-human, the leap to human application requires a massive amount of caution.
Actionable Insights for Reading Research
Next time you see a flashy headline about a new psychological discovery, don't just take it at face value. Do a quick "population check" using these steps:
Check the "Who" immediately.
Look past the headline. Does the study say "People who..." or does it specify "14 male athletes"? If the population is extremely specific, the results might not apply to you at all.
Look for the N count.
If a study claims a major breakthrough but only tested 20 people, stay skeptical. Small samples are prone to "flukes." You want to see hundreds, if not thousands, of participants for a generalized population claim.
Ask about the "WEIRD" factor.
Was this study done in a diverse setting, or was it performed on Ivy League students? If the research involves culture, emotions, or social behavior, the cultural makeup of the sample can make or break the validity of the results.
Identify the exclusion criteria.
Good research tells you who they didn't include. If a study on "workplace stress" excluded everyone who works part-time or in manual labor, it's not a study about the "general workforce." It's a study about office dwellers.
Understand the "Inference."
The whole point of defining a population is to make an inference. Researchers take what they learned from the small group and "infer" it applies to the big group. If the gap between the sample and the population is a canyon, the inference is probably a leap of faith rather than a scientific fact.
When you understand that in psychological research the population is the ultimate goal—but the sample is the only reality—you start to see why science is so slow and why "truth" is often a moving target. It’s about narrowing that gap until the spoonful finally tastes exactly like the soup.