correlational design: a method whereby 2 or more variables are systematically measured and the strength of the relationship between them is assessed
- correlations range from -1.0 to 1.0 (anything above .3 is considered a strong relationship -- significance)
- detects only linear relationships (not curves)
positive correlation: as X increases, Y increases
negative correlation: as X increases, Y decreases
Video Games and Aggression - Correlational Design
Measure video game exposure: observe at home (how many hours of video games per day?) or survey them or their parents.
Measure aggression: observe on the playground (how many times do they hit others?), or trip them and see if they hit you back.
what does a correlation of 0.84 mean?
A correlation between X (video game playing) and Y (aggression) could mean any of the following:
1) X causes Y --> video game playing causes aggression
2) Y causes X --> aggression causes people to play video games
3) Z causes both X and Y --> something else causes people both to play video games and to be aggressive
positive correlation between drowning and ice cream consumption means...
1) Eating ice cream causes people to drown.
2) Drowning causes people to eat ice cream.
3) Something else causes people both to eat ice cream and to drown.
how do the French drink all that wine and stay healthy?
Researchers looked at how much wine people drank and, eventually, their cause of death. They found that people who drank 5 glasses of wine a day had lower mortality than people who never drank wine. What can we conclude?
1) Drinking wine causes better health.
2) Being in better health causes people to drink wine.
3) Something else causes people both to drink wine and to be in better health.
Do condoms cause STDs?
Rosenberg, Davidson, Chen, Judson, & Douglas, 1992
Researchers examined records of women who had visited a clinic for STDs. They found a positive correlation between condom use and STDs: women who used condoms had more STDs than women who used diaphragms or contraceptive sponges.
Meaning of correlation:
1) Using condoms causes you to catch an STD.
2) Catching an STD causes you to use condoms.
3) Something else causes people both to use condoms and to catch an STD.
correlation does NOT equal causation
you are committing the correlation fallacy if you assume causality from a correlation
How to demonstrate causation?
using a true experiment!
steps of true experiment
1) randomly assign participants to conditions (experimental group and control group)
2) manipulate the IV
3) hold everything else constant
4) measure the DV
independent variable (IV)
the thing you manipulate
dependent variable (DV)
the thing you measure
randomly assign participants to conditions
assigning subjects to the conditions of your experiment in such a way that every subject has an equal chance of being assigned to any of the conditions of the study
no feature of the subject should have any effect on what condition the subject gets assigned to
executive monkey study
wanted to look at the stress of executives
monkeys were trained to avoid electric shock by pressing a lever ("executives")
control monkeys got the same amount of shock as the executives
results: the executive monkeys got more ulcers from having to make the decision to press the lever
correlation for executive monkeys and ulcers
1) Being an executive causes ulcers.
2) Having an ulcer causes you to be an executive.
3) Something else causes you both to become an executive and to get an ulcer.
random assignment vs random sampling
random assignment to conditions: assigning subjects to the conditions of your experiment in such a way that every subject has an equal chance of being assigned to any of the conditions in the study
random sampling from the population: choosing subjects for your study in such a way that every member of the population of interest has an equal chance of being selected into your study
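The distinction can be sketched in a few lines. The population of 1000 people and the study size of 40 are made-up numbers:

```python
import random

# Hypothetical population of interest.
population = [f"person_{i}" for i in range(1000)]

# Random SAMPLING from the population: every member has an equal
# chance of being selected into the study.
sample = random.sample(population, 40)

# Random ASSIGNMENT to conditions: every selected subject has an
# equal chance of landing in either condition.
random.shuffle(sample)
experimental, control = sample[:20], sample[20:]

print(len(experimental), len(control))  # 20 20
```

Sampling decides who gets into the study (external validity: can you generalize?); assignment decides which condition each subject experiences (internal validity: can you infer cause?).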
What can happen when your random sample isn't random: Alf vs. FDR (1936)
Literary Digest surveyed 10 million people and 25% responded:
- prospective subscribers
- people randomly sampled from phone books and car registries
poll result: predicted Landon would win in a landslide
What went wrong with this poll?
selection bias: the survey systematically excluded poor voters (likely to be Democrats who would vote for FDR)
non-response bias: because only 25% of people returned surveys, non-respondents may have had different preferences from respondents
Choosing an appropriate sample
does the death penalty deter crime? -- "no," but the study only surveyed people on death row, so of course it didn't deter those crimes
do cigarette ads cause kids to smoke? -- "yes, ads rather than peer pressure," but the study only surveyed kids who didn't smoke, so we don't know why the kids who do smoke started
why do tourists visit Chicago? -- tourists said the Monet exhibit influenced them to visit Chicago, but the study only surveyed people at the museum
how to show causality
1) Cannot infer causality from a correlational design.
2) To show causality, must hold everything constant except the IV of interest, which you manipulate.
3) To hold everything constant, must randomly assign to condition.
4) To generalize results, need to sample from the population of interest (or else your results won't be generalizable to the population).
internal validity: the extent to which a valid causal statement can be made about the effects of the IV on the DV in your study
is the effect you found actually due to your manipulation? if not, the study is not internally valid
external validity: the extent to which the results of a specific study can be generalized to other people, places, or times
a study can't have external validity unless it is internally valid
random error: error caused by extraneous and uncontrolled variables whose average influence on the outcome of an experiment is the same in all conditions
- subject variables, extraneous events
- does not affect the validity of results, but can hide the effect of the IV
ex: in a body-image study where people look in a mirror, a distorted mirror introduces error into the results, but it affects all participants equally
goal: keep random error as small as possible
systematic error: error caused by extraneous variables that tend to influence all scores in one condition and to have no effect or a different effect on scores in other conditions
- can distort the effect of the IV
- threatens internal validity
goal: eliminate it
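A small simulation of the contrast between the two error types; all numbers (group means, noise level, bias size) are made up for illustration:

```python
import random

random.seed(0)

def scores(true_mean, n, bias=0.0):
    # each score = true value + random luck (+ any systematic bias)
    return [true_mean + random.gauss(0, 1) + bias for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Random error (the gauss noise) hits both conditions equally on
# average, so it does not bias the comparison between groups.
control = scores(50, 10_000)
treatment = scores(52, 10_000)

# Systematic error hits one condition only, distorting the apparent
# effect of the IV (here an assumed +2 bias in the treatment group).
biased_treatment = scores(52, 10_000, bias=2.0)

print(round(mean(treatment) - mean(control), 1))         # true effect, ~2
print(round(mean(biased_treatment) - mean(control), 1))  # inflated, ~4
```

Averaging over many subjects washes the random error out of the group difference, but no amount of averaging removes the systematic bias; that is why random error only hides effects while systematic error fakes or distorts them.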
threats to validity
all threats to validity are types of systematic error, not random error
Remember: random error doesn't threaten your study's validity; it just keeps your study from showing the effects you are looking for.
history (threat to internal validity)
definition: specific events that happen during the course of a study can affect the variable being measured
ex: in a study conducted in Dec 2004 looking at the effects of scary movies (like Jaws) on kids' fear of swimming, kids might be more fearful not because of the movies but because of the tsunami in Southern Asia
what to do: have a control group
maturation (threat to internal validity)
definition: changes that happen within individuals as a function of time passing can affect the variable being measured
ex: in a study looking at the effects of a support group on subjects' coping responses, participants may learn to cope better over time even without an intervention, and so look like better copers when you follow up at the end
what to do: have a control group
regression to the mean (threat to internal validity)
score on test = true ability + random luck
ex: score on math test = true math ability + random luck (random luck: guessing, sleep, whether you studied the right thing, etc.)
ex: # home runs hit = true hitting ability + random luck (random luck: swing, which pitcher it is, sun, health that day, etc.)
regression to the mean example
Sports Illustrated jinx: excellent performance one week is likely to be associated with less excellent performance the next week
dice rolling: # dots that come up = # dice you roll + luck
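The dice-rolling idea can be simulated: give every "player" the same true ability (3 dice), select the week-1 top scorers, and re-test them. The group size and selection threshold here are made up:

```python
import random

random.seed(1)

def roll(n_dice):
    # score = true ability (# dice) + luck (what the dice happen to show)
    return sum(random.randint(1, 6) for _ in range(n_dice))

players = [3] * 1000  # everyone has the SAME true ability: 3 dice
week1 = [roll(n) for n in players]

# Select players based on extreme week-1 scores (a 3-dice sum of 16+
# requires a lot of luck; the expected sum is only 10.5).
top = [i for i, score in enumerate(week1) if score >= 16]

week1_top_avg = sum(week1[i] for i in top) / len(top)
week2_top_avg = sum(roll(players[i]) for i in top) / len(top)

# The "jinx": re-tested top scorers drift back toward the mean of 10.5,
# because their week-1 luck does not carry over.
print(week1_top_avg > week2_top_avg)  # True
```

Nothing about the players changed between weeks; selecting on extreme scores guarantees the follow-up looks worse, which is exactly why a study that selects extreme scorers needs a control group.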
what is the relationship between reliability and regression to the mean?
the more reliable your measure, the less regression to the mean, because fewer chance factors affect the score (reliability is about consistency)
regression to the mean (threat to internal validity) explained
definition: the tendency for people who get high scores on a particular measure to score closer to the mean (lower) if they are tested again (and vice versa)
ex: if you select people who are high in optimism, they will score lower the next time you measure it, regardless of your manipulation
what to do: have a control group; don't select based on extreme scores
testing effects (threat to internal validity)
definition: people tend to do better on a test the second time they take it
ex: even people who did not take a Princeton Review course tend to do better on their SATs the second time
what to do: have a control group; don't give a pretest; use a different type of test the second time
experimental mortality -- heterogeneous attrition (threat to internal validity)
definition: when different amounts of people or different types of people drop out of the two conditions of your experiment
ex: in a "quit smoking" study with a control group and an intervention group (pre- and post-test of how many cigarettes you smoke a day), if all the heavy smokers drop out of the intervention group but not the control group, it will look like the intervention worked even if it didn't
what to do: minimize drop-outs; compare the drop-outs from the 2 groups; measure each person's change in smoking from T1 to T2
participant reaction biases - expectancies and reactance (threat to internal validity)
definition: participants try to behave in ways that are consistent with -- or the opposite of -- the researcher's hypothesis
2 types:
- subject expectancies (want to please the experimenter by trying to do what you think they want)
- subject reactance (rebel against whatever they think the study is about; do the opposite because you don't like being told what to do)
ex: if participants are told they have been given alcohol, they act drunk, even if it's not true
what to do: give a false hypothesis or be vague; convince P he's an E (Milgram); keep P unaware he's in a study; use behavioral measures instead of questionnaires; have a non-obvious hypothesis
experimenter bias/influence (threat to internal validity)
definition: experimenters' expectancies can affect either how they act with participants or what they observe
ex: if E thinks P has been given alcohol, E is more likely to notice if P slurs speech
what to do: keep E blind to the hypothesis; keep E blind to what condition P is in; have 2 E's who are each partly blind; have all instructions on tape or computer
severity of initiation study
a group of college women was chosen to potentially join a sex discussion group
before being accepted into the group, they had to go through an initiation:
- Control Group: no initiation
- Mild Initiation: read a list of non-obscene sex words to a male experimenter
- Severe Initiation: read a list of hard-core sex words and two explicit sexual passages to a leering male experimenter
who liked the group the most? the severe initiation group
why did subjects in the severe initiation condition report liking the discussion group most?
1) because of the severe initiation
2) because the "severe" initiation was actually arousing and fun
3) because the "severe" initiation led them to believe the discussions would get better
confounds (threat to internal validity)
definition: some additional variable (that you don't care about) varies systematically along with the thing you manipulated; an alternative explanation
ex: in the "severity of initiation" study, severity of initiation was confounded with how fun or arousing the initiation was
what to do: do a true experiment; measure variables that might be confounds; do a different study to rule out confounds
experimental mortality - homogeneous attrition (threat to external validity)
definition: when a particular type of person drops out of your study, regardless of which condition they are in (impacts ability to generalize results)
ex: if all the overweight people drop out of a diet study, then you can't generalize the results to overweight people
what to do: minimize drop-outs; compare dropouts to people who remain in the study
selection bias (threat to external validity)
definition: sampling people from an unrepresentative sample
ex: the 1936 presidential election poll of 2 million people predicted Landon would win easily over Roosevelt
what went wrong: people's votes were related to how wealthy they were, and mainly wealthy people responded
what to do: use random sampling techniques
tradeoffs between internal and external validity
internal validity: well controlled; usually lab studies
external validity: generalizes to other people and other settings; usually field studies
**increase external validity by increasing mundane realism or (more easily) experimental realism