psy 290 exam 3 Flashcards

correlational research

focuses on examining the relationships among variables
allows for study of individual differences, but you cannot make
conclusions about causality
correlation does not = causation!!!

what is the difference between correlational research and
experimental research

in experimental research, there is some kind of manipulation
in correlational research you are only measuring 2 or more
characteristics in the same individual and finding a correlation

what is the main concern in correlational research

investigating the relationships between naturally occurring variables
and studying individual differences

two variables that are related to each other "in some fashion"...

are said to be correlated

direction of correlation

positive correlation or negative correlation

positive (direct) correlation

high score on one variable is associated with high score on a second variable
ex: height positively correlated with weight; study time and GPA

negative (inverse) correlation

high score on one variable is associated with a low score on a second
variable, or vice versa
ex: # weeks training negatively correlated with race time;
time goofing off and GPA

strength of correlation

correlation coefficient - pearson's r
ranges from -1 to +1
-1 = perfect negative correlation
0 = no correlation
+1 = perfect positive correlation
the closer |r| is to 1, the stronger the relationship
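the math behind pearson's r can be sketched in a few lines of python (the study-time/GPA numbers are made up for illustration):

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    # covariance of x and y, scaled by both standard deviations
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

study_hours = [1, 2, 3, 4, 5]          # hypothetical data
gpa = [2.0, 2.5, 3.0, 3.4, 3.9]        # hypothetical data
r = pearson_r(study_hours, gpa)        # close to +1: strong positive
```

no matter the raw units of the two variables, r always lands between -1 and +1.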

as strength of correlation weakens...

points on the scatterplot move further away from a perfect diagonal line

threats to detection of linear relations between variables

non-linearity, restricted range


non-linearity

pearson's r describes the direction and magnitude of the linear
association between variables
if the variables are associated in a non-linear fashion,
pearson's r cannot capture the nature of the relationship


scatterplot

provides a visual representation of the relationship shown by a correlation

example of curvilinear relationship in psych

yerkes-dodson law (inverted U)
arousal and performance - medium arousal means highest
performance, low or high arousal means low performance
pearson's r would not work for this

restricted range

it is important that you have a sample with a spread out range
otherwise you will get a weak correlation, or won't find a correlation
at all
ex: sample of SAT scores ranging from 400 to 1200 - strong correlation
vs. just a sample of kids who scored high (900-1200)


coefficient of determination (r^2)

r^2 will always be positive b/c it's squared
r^2 = the proportion of variability in one of the variables in the
correlation that can be accounted for by variability in the second variable


outlier

a score that is dramatically different from the remaining scores in a
data set

interpreting r^2

ex: SAT and GPA
if r=.5, then r^2=.25 - here we can say that 25% of the
variability in GPA scores can be accounted for by variability in SAT
scores; the other 75% is due to other factors
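the arithmetic from the example, as a sketch:

```python
r = 0.5                         # hypothetical SAT-GPA correlation
r_squared = r ** 2              # 0.25 - shared variability
other_factors = 1 - r_squared   # 0.75 - variability due to other factors
```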

effect of outliers

distorts the correlation coefficient so it misrepresents the real relationship
a single extreme score can pull the correlation (and the mean) up or down

regression analysis

making predictions based on correlational research
if X and Y are strongly correlated, knowing score on X
allows you to predict score on Y
Y = a + bX
a = y intercept, b = slope
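the least-squares fit for Y = a + bX can be sketched like this (the SAT/GPA numbers are hypothetical):

```python
from statistics import mean

def fit_line(x, y):
    # least-squares slope (b) and intercept (a) for Y = a + bX
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx       # the line passes through the point of means
    return a, b

sat = [900, 1000, 1100, 1200]     # hypothetical X scores
gpa = [2.4, 2.8, 3.1, 3.5]        # hypothetical Y scores
a, b = fit_line(sat, gpa)
predicted = a + b * 1050          # predicted GPA for a new SAT score
```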

correlation does not equal causation

a correlation between two variables does not allow you to conclude
that one variable causes the other to occur
ex: research that is misinterpreted in the press
this is different from experimental research because there
you have manipulated a variable and controlled for other factors, so
you can say the DV is caused by the IV

directionality problem

causal relationship can occur in any direction
is A causing B or is B causing A?
ex: correlation between amount of TV and aggressive
behavior...could watching TV make kids aggressive, or do
aggressive kids like to watch TV?
the most you can say is that there is an association between
TV watching and aggression

to talk about A causing B you need to show...

1. A and B occur together
2. A precedes B in time
3. A causing B makes sense in relation to theory
4. other explanations for co-occurrence can be ruled out

cross-lagged panel correlation

measure both variables at two points in time, then compute the
correlations between every pair (think of a box with an X inside -
each corner is a variable at a time point)
compare the two cross-lagged correlations (A at time 1 with B at
time 2 vs. B at time 1 with A at time 2) - the larger one suggests
the more likely direction of influence

third variables and problems with causality

correlational research does not attempt to control extraneous variables
- but an extraneous variable could account for the association
between variables 1 and 2
"variable 3 causes both 1 and 2"

partial correlation

attempts to control for 3rd variables statistically - rule out the
third variable and re-examine the correlation
imagine mickey mouse's head (ears and face) - unless the face is
there, the ears won't touch and therefore don't interact: 1 and 2
work THROUGH 3

examining the influence of a third variable by looking at a partial correlation

if the partial correlation is a lot smaller than the original
correlation, the third variable is indeed accounting for the
association between the variables
if the partial correlation is about the same as the original
correlation, the third variable doesn't account for the association
between the variables
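the standard partial-correlation formula needs only the three pairwise r values; a sketch with made-up numbers (the classic ice-cream/drownings/temperature example is not from these cards):

```python
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    # correlation between X and Y with Z statistically held constant
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# hypothetical: X = ice cream sales, Y = drownings, Z = temperature
r = partial_r(r_xy=0.60, r_xz=0.80, r_yz=0.75)
# r comes out near zero: the third variable accounts for the X-Y association
```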

personality research

correlations between different personality dimensions
ex: pessimism and depression - r=.56
therapy reduces depression but the magnitude of the correlation
stays the same...why? - as depression went down, so did pessimism

the nature-nurture issue

are certain characteristics more highly correlated among identical
versus fraternal twins?
do genetics or the environment play a bigger role in shaping
a person

multiple regression

examining the relations among more than 2 variables - multiple
predictors of a single criterion variable
ex: do SAT score, motivation and high school GPA predict
college GPA?
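a minimal multiple-regression sketch via the normal equations (all numbers hypothetical):

```python
def multiple_regression(X, y):
    # least-squares fit of y = b0 + b1*x1 + b2*x2 + ...
    rows = [[1.0] + list(xi) for xi in X]       # design matrix w/ intercept
    k = len(rows[0])
    # normal equations: (A^T A) b = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gauss-Jordan elimination (fine for this sketch; real code would pivot)
    for i in range(k):
        piv = ata[i][i]
        ata[i] = [v / piv for v in ata[i]]
        aty[i] /= piv
        for j in range(k):
            if j != i:
                f = ata[j][i]
                ata[j] = [a - f * b for a, b in zip(ata[j], ata[i])]
                aty[j] -= f * aty[i]
    return aty   # [intercept, coef for x1, coef for x2, ...]

# hypothetical predictors: (SAT score, high school GPA) -> college GPA
X = [(1000, 3.0), (1100, 3.2), (1200, 3.6), (900, 2.8), (1300, 3.9)]
y = [3.0, 3.2, 3.5, 2.8, 3.75]
coefs = multiple_regression(X, y)
```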


where the 3rd variable is necessary for a correlation to occur
think mickey, if the head is removed, there is no more correlation
necessary for a correlation


there is an X-Y correlation, and it doesn't depend on the 3rd
variable, but correlation changes depending on the level of the 3rd variable
ex: michelle is 3rd variable - tells you everything that is
wrong and changes the strength of relationship
changes what is already there

split-half reliability

split the test you are giving in half (ex: odd and even numbers) and
correlate the two halves.
someone scoring high on one half should score high on the
other half
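a sketch of the odd/even split and correlation (the item scores are made up; the spearman-brown step is the standard length correction, not something from these cards):

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def split_half(scores):
    # correlate odd-item totals with even-item totals across test takers
    odd = [sum(person[::2]) for person in scores]
    even = [sum(person[1::2]) for person in scores]
    r = pearson_r(odd, even)
    # spearman-brown correction: each half is only half the test's length
    return (2 * r) / (1 + r)

# hypothetical item scores (rows = people, columns = items)
scores = [
    [5, 4, 5, 5, 4, 5],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 3, 4, 4, 3],
    [1, 2, 1, 1, 2, 2],
]
reliability = split_half(scores)   # high: the halves agree person by person
```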

test-retest reliability

the relationship between two separate administrations of the same test.
a reliable test yields consistent results from one administration to
the other, so the correlation between the two should be high

how to test for reliability

split-half reliability
test-retest reliability

a multiple regression study has...

one criterion variable and a minimum of two predictor variables
this allows you to determine whether the variables predict the
criterion, and the relative strengths of the predictors

multivariate analysis

examines the relationships among more than 2 variables

bivariate analysis

investigates the relationships between any two variables

when would a multivariate analysis be used

when there are multiple factors that could lead to some outcome
ex: success in college cannot be predicted by high school GPA
alone - ACT/SAT score, extracurriculars, etc. also matter

factor analysis

a large number of variables are measured and correlated with each
other, then groups of variables are clustered together to form factors
ex: giving children tests of different tasks (vocabulary,
geometry, puzzles) and finding the correlation between each test and
every other test

quasi-experimental design

has all the characteristics of a true experiment, but there is
something missing, usually no manipulation of IV
cannot make causal conclusions because you did not have
complete control over all the variables

why is a P x E design a quasi-experiment

P = person (subject) variable, can't be manipulated
cannot make causal statement because you can't rule out other variables

non-equivalent control groups design

used in order to evaluate effectiveness of the treatment
did the groups start/end in the same place?
if they started in the same place but ended in different places, the
treatment may have had an effect; if they started in different places
and ended in the same place, the treatment may also have had an effect

2 ways of interpreting different end scores for nonequivalent control
groups design

1. the IV worked
2. the groups were different to start with

possible problems with nonequivalent control groups

ceiling effect could explain why some groups didn't change at all -
they had nowhere to go but down to begin with
must make sure there is room for movement (up or down) when
designing a study
when pretest measures are really low or high, regression to
the mean may occur
matching samples when the actual populations are different to
start with (ex: head start example - groups were different at start)

regression effect in nonequivalent control groups

if the sample is a biased measure of the population (too high or too
low) the effect of IV will go one way, but the true population mean
will pull them another way
think about head start study - if you sample only the top
performing head start students, their improvement wants to go up but
the real mean for the group is lower and pulls them down - they
cancel out and show no effectiveness

example of nonequivalent control groups design

little league study - one team coach given effectiveness training,
other not - self-esteem score of the players measured pre and post season
used two different leagues so each team had roughly a 50% win rate

langer and rodin study

study with nursing home patients given the option to do things on
their own vs. have a nurse do things for them - happiness, death
rate, and nurses' ratings were measured
not randomly assigned because patients had to be allowed to
choose whether they wanted to be more in control or not

hollingworth coca-cola study

used multiple kinds of procedures (counterbalancing, placebo/control
group vs. different caffeine dosage groups, and double blind), which
gave a range of results; the funding also gave hollingworth enough
money to travel and go to grad school
combined several techniques for a strong study

applied research

psychological studies that produce results that can be applied to the
real world for some kind of benefit

problems with applied research

ethics - consent and privacy, improper debriefing,
employees believe their job status depends on their participation

internal vs external validity - high external
validity because it models real life situations, low internal validity
because of possible confounds

between subjects design problem - can't use random
assignment, so you must compare nonequivalent groups, which reduces
internal validity

within subjects design problem - can't always properly
counterbalance, which creates order effects; attrition is also a
problem for long studies

what do you usually compare in nonequivalent control groups design?

the change scores between observation 1 (pretest) and observation 2
(posttest) - taken before and after the treatment in the experimental
group, and at the same two times (with no treatment) in the control group

standard for presenting nonequivalent control groups design (stanley
and campbell)

O1 T O2 - experimental group
O1 O2 - control group

pittsburgh vs. cleveland plant example

studied 2 different plants producing pans, one implemented a new
"flexible" worktime schedule, while the other scheduled
people as normal. both made workers work 40 hrs/week. measured
productivity in the plants before and after the new schedule in one plant.

attempt to reduce the nonequivalency in groups - matching

this creates problems because you may be taking a sample that is too
high or too low, so the mean will pull them in one direction while the
effectiveness will pull them in another
this leads to no change - think head start study

interrupted time series design

taking multiple observations over a period of time before the
treatment, then imposing the treatment, then taking multiple
observations periodically after the treatment
O1 O2 O3 T O4 O5 O6
allows researcher to rule out alternative explanations of an
apparent change from pre to post test
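the simplest possible read of an interrupted time series is comparing mean responding before and after T (a real analysis would also check the pre-treatment trend; the numbers here are hypothetical):

```python
from statistics import mean

def before_after(observations, treatment_index):
    # split the series at the treatment point T and compare the means
    pre = observations[:treatment_index]
    post = observations[treatment_index:]
    return mean(pre), mean(post)

# hypothetical productivity: O1 O2 O3 | T | O4 O5 O6
productivity = [50, 52, 51, 60, 62, 61]
pre_mean, post_mean = before_after(productivity, treatment_index=3)
# a flat pre-treatment baseline plus a jump after T supports a treatment effect
```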

variations on time series design

inclusion of a control group can help with interpretation - compare
the control group's scores to the experimental group's scores over
time, before and after the treatment
switching replications - vary the time at which the
"event" or treatment occurs; can the change be directly tied
to the event?
measure multiple dependent variables - select some where you
expect change and others where you don't expect change (ex: 3 strikes
law in california, misdemeanors vs felonies)

advantage of interrupted time series design

allows researcher to observe trends - consistent patterns of events
over time
rule out alternative explanations of an apparent change from
pre to post test

example of interrupted time series design

implementing a worker incentive program and observing the worker
productivity for a period of time before and after the new program was
implemented. this allowed researchers to compare the productivity pre
and post to see if the program worked.

example of a variation in a time series design - california crime index

implemented a "three strikes" program where after 3
felonies resulted in jail time. they compared this to misdemeanor
charges, which were not affected by the new program. they measured the
amount of felony charges before and after the program, and saw that
the amount of felonies decreased significantly while misdemeanor
charges remained relatively constant

interrupted time series design with switching replications

implementing the treatment or program in two locations at two
different times. if the same outcome can be seen in both locations
following the treatment, this is a good indication of the program working

archival research

going through existing records (medical files, census data, court
reports), data that you are not collecting, someone else already did

benefits of archival research

very convenient, ready to go, you don't have to collect anything more
medical research - can look at changes in cohorts
don't have to worry about reactivity - the subject has nothing
to react to, they don't have to worry about being observed

drawbacks of archival research

you're at the discretion of whoever collected it, limited to what has
already been collected
there might be one question you wanted to know that was not recorded
problems with consent - people may not have wanted their private
information released
experimenter bias - you know what you're looking for and may
ignore anything else that is important

quasi experimental design using archival research - ulrich

study using archival hospital records where researchers collected
data on patients after surgery in a room with a view of trees or a
brick wall. length of stay, nurses' notes, minor complications after
surgery, and requests for medication were recorded - people in rooms
with the view of trees had an advantage and spent a shorter amount of
time in recovery

program evaluation

applied research that seeks to assess the effectiveness and value of a
public policy or specially designed program

4 purposes of program evaluation

needs analysis - determine community and individual needs for programs
formative evaluation - assess whether the program is being run as
planned and, if not, implement change
summative evaluation - evaluate program outcomes and discontinue
programs that are pointless
cost-effectiveness analysis - weigh a program's benefits against its costs

example of program evaluation in connecticut

the governor saw a record number of traffic fatalities, so he
implemented a crackdown on speeding. the next year there were fewer
deaths - but was it because of the crackdown? records showed that
similar year-to-year drops had also occurred before the crackdown

qualitative analysis

along with numbers, assessing the non-numerical data as well like
interviews and surveys

ethical problems with evaluation research

informed consent - participants believe that health services will be
cut off if they do not participate
maintaining confidentiality - it is sometimes necessary for the
researcher to know who the participant is
perceived injustice - some people object to being in a control
group because they believe that they are missing out on some kind of
beneficial treatment
avoiding conflict with stakeholders - make sure that the research
from program evaluators will not interfere with stakeholders who have
a vested interest in the program

benefits of small N designs

better individual subject validity - not just the average as a
representation of a single individual
avoids losing info that group averages hide (ex: children's learning
looks like a smooth "learning curve" only when averaged across kids)

when may it be hard to find enough participants for your study

when it has a large N and you are studying something specific or rare
clinical psychology (diseases), counseling psychology, invasive
animal studies

goal of small N design

to show change in behavior as a result of treatment being applied

steps of small n design

1. operationally define behavior of interest
2. establish baseline level of responding over set period of time (=A)
3. implement treatment and record change in responding (=B)

withdrawal design

how tightly coupled are the behavior changes and the treatment?
if you stop the treatment will behavior go back to baseline or stay
the same?
how would this pattern affect your interpretation of the
effectiveness of treatment

withdrawal design = reversal design

simplest version is the A-B-A design
an A-B-A-B design strengthens interpretation further - if responding
is high during both treatment (B) phases and low during both baseline
(A) phases, that is strong evidence for the treatment

multiple baseline designs

establish baseline measures and implement treatments at different times
withdrawal designs aren't always possible - if you're trying to
teach a new behavior you may not be able to reverse it; if nothing
changes after treatment is removed, maybe the change is permanent
ex:
1. baseline for same behavior in 2 or more individuals
2. baseline for 2 or more behaviors in same individual
3. baseline for same behavior in same individual but in different
settings (home vs school)

changing criterion designs

based on operant conditioning paradigm of shaping
when target behavior is too complex to acquire all at once - break
down into steps or increments, sequentially change criterion making it
more stringent until target behavior is learned
applied to health-related behaviors, severe developmental disabilities

applied behavior analysis

uses behavior or operant principles to solve real life behavioral problems

examples of multiple baseline designs

decreasing stuttering in 6-10 year olds, goal was to change same
behavior in 2 or more individuals - took baseline readings and
implemented treatments in each child at different times and observed
the outcome
changing football performance, changing 3 behaviors within a single
participant - establish a baseline for the players and observe
performance in each player after "public posting" for improvement

example of changing criterion design

changing lifestyle of obese children, goal was to increase stationary
bike use in obese and non-obese 11 y/o boys
recorded baseline with 8 sessions saying to "exercise as long
as you like"
1st criterion - reward after 15% increase in use
2nd criterion - reward after 30% increase in bike use
withdrawal phase - reinforcement removed
both the obese and non-obese showed the same trends

case study research

an in-depth analysis of a single person or case, most often clinical
cases - want to describe the developmental causes and consequences of
an event
"the ultimate small N

classic example of case study in psychology

phineas gage - an iron rod blasted through his skull and damaged part
of his brain, and his personality completely changed from nice to
mean afterwards. researchers were able to study him because the
injury had already happened - they could never impose that on someone

strengths/weaknesses of case study research

strengths - very detailed analysis not usually found in research, can
show what happens in extreme cases that can be caused over a lifetime
if not taken care of (head injuries in boxers), support for a theory,
suggest hypotheses for future research
weaknesses - limited generalizability because they are extreme
cases, subject recall/eyewitness testimony can be iffy

two types of observational research

naturalistic observation and participant observation
vary based on the level of involvement of the researcher with the participant

naturalistic observation

as pure as you can get, you are not changing the system at all
no researcher involvement with participants (ex: set up a post and
observe passers-by)
study of people in their natural environments

downsides/benefits to naturalistic observation

bad - can't impose or change anything, can't talk to people, don't
know what happens after the observation
good - for initial research to eventually lead to a hypothesis and
study on behavior

lab based "naturalistic" observations

where you are in a lab but it is made as "natural" as
possible (ex: make the lab into a person's living room)
downside - ecological validity - how well does this apply to the
real world?
upside - you can change the stimuli
ex: kids in a lab with other kids and toys

participant observation

researcher actually joins the group being observed
participation provides first hand insight into group dynamics
ex: psychologists join religious cult to learn about it

problems with observational research

absence of control - variability in degree of control, implications
for interpreting results
observer bias - researcher unintentionally acts in a way that
influences participants
ethics - invading privacy, no consent

problem with participant observation

because the researcher is actually joining the group, the group is
now a new group - members might act differently because there is a
"newcomer" or "outsider"
ex: religious cult study - the researchers acted oddly and the cult
started to believe they had been sent by the aliens
problem with ethics and consent!

observer bias

experimenter unconsciously acts in a way that can make participants
behave in an "expected" way
preconceived ideas about what will happen color one's observations

ways to minimize observer bias

develop clear, precise operational definitions for behaviors
generate behavior checklists (coding for behavior)
train observers in identifying target behaviors
have 2 researchers observe and correlate their outcomes to see if they match
use different sampling procedures to reduce the amount of data (time
sampling-observations at specific times, event sampling-observe only a
specific set of events)

subject reactivity in observational research

people's behavior changes when they know they are being observed
minimize this by:
direct unobtrusive measures (2-way mirror, hidden video)
indirect unobtrusive measures - record events that are assumed to
result from the behavior of interest (ex: study trash to see eating habits)

when don't we need informed consent?

when behavior is studied in public
if people are not interfered with in any way
if confidentiality is maintained

survey research

a structured set of questions or statements given to a group of
people in order to measure their attitudes, beliefs, values or
tendencies to act

advantage of survey research

can collect a lot of data with minimal effort, but
must be sure that sample observed is representative of the population
you want to generalize your results to

probability sampling

used when the goal is to describe features of an identifiable
group of individuals
population = the whole group of individuals of interest
sample = subgroup of the population
it is often not possible to study the entire population of interest
sample needs to be unbiased and representative
random sample! each person has an equal chance of being selected

self-selection problem in probability sampling

people who respond to the survey or questionnaire are the ones who
feel an extreme need to respond
ex: good or bad experience with customer service are more likely to respond

stratified sampling

a form of random sampling used when there's a systematic feature of
the population you want reflected in the sample
ex: college campus with 80/20 girls/boys but want to ensure the same
gender distribution in sample - proportions of important subgroups in
the population are represented precisely in sample
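a stratified-sampling sketch for the 80/20 campus example (names and numbers are hypothetical; note that round() can drift from n by one with other proportions):

```python
import random

def stratified_sample(population, strata_key, n, seed=None):
    # sample so subgroup proportions in the sample mirror the population
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for group in strata.values():
        share = round(n * len(group) / len(population))
        sample.extend(rng.sample(group, share))
    return sample

# hypothetical campus: 80 women, 20 men; a sample of 10 keeps the 80/20 split
campus = [("w", i) for i in range(80)] + [("m", i) for i in range(20)]
picked = stratified_sample(campus, strata_key=lambda p: p[0], n=10, seed=1)
```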

cluster sampling

random selection of a cluster of people all having some feature in common
use when population of interest is huge
ex: survey of on-campus living experiences - high rise building
(6/10, floors 3-8)
used by national polling organizations

face to face interviews

good - you know who you are talking to, can do a follow up
bad - social desirability bias (people don't want to release all
their info and sound weird), and people have to take the time to go
in to see the researcher

phone interviews

good - easy, don't have to be there, more people
bad - you don't exactly know who is on the other end

written surveys

open-ended vs closed-ended questions - sometimes people don't want to
have to think up a response and write it out, they want to fill in a bubble
likert scale to measure degree of agree/disagree

electronic surveys

good - access to anyone with a computer, once the program is made it
will run itself
bad - don't know who is on the other end or if they are who they say
they are, personal info is vulnerable, no follow-up questions

tips for written and electronic survey compliance

make it simple, mostly closed-ended questions with optional open ended
start with interesting questions
make it look professional

designing a survey

rely mostly on closed-ended questions
use likert scales for agree/disagree - easier to quantify and draw
conclusions from; use the same scale on all questions and
reverse-word some questions
use careful, unconfusing wording - complete sentences, no
abbreviations or slang, don't phrase questions negatively (hard to
process/understand)
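reverse-worded likert items have to be flipped before totaling a scale; a sketch on a 5-point scale (the item names are hypothetical):

```python
def reverse_score(response, scale_min=1, scale_max=5):
    # flip a reverse-worded likert item so all items point the same way
    return scale_max + scale_min - response

# hypothetical 5-point responses; q2 was worded negatively
responses = {"q1": 4, "q2_reversed": 2, "q3": 5}
total = (responses["q1"]
         + reverse_score(responses["q2_reversed"])
         + responses["q3"])
```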

issues to consider when interpreting the results of a survey

sampling biases - how well can results be generalized to the population?
response bias - social desirability bias, are people responding how
they really feel or how they think they should respond

jane goodall observational study

spent a lot of time observing the chimpanzees from varying distances;
they became so habituated to her that they acted completely normally
with her only 10 feet away

festinger study

researchers joined a religious cult and observed how members reacted
and coped (cognitive dissonance) after their prophecy that the world
would end turned out to be wrong