probability
expected relative frequency of an outcome; the proportion of successful outcomes to all outcomes
outcome
term used in discussing probability for the result of an experiment (or almost any event, such as a coin coming up heads or it raining tomorrow)
expected relative frequency
number of successful outcomes divided by the number of total outcomes you would expect to get if you repeated an experiment a large number of times
long-run relative frequency
understanding of probability as the proportion of a particular outcome that you would get if the experiment were repeated many times
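The long-run relative frequency idea can be illustrated with a short simulation, a minimal sketch using made-up numbers: flip a fair coin many times and watch the proportion of heads settle near the true probability.

```python
import random

# A minimal sketch: estimate the probability of heads by simulating
# many coin flips and taking the long-run relative frequency.
# The seed fixes the random sequence so the run is reproducible.
random.seed(0)

flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
relative_frequency = heads / flips

# With many repetitions the proportion settles near the true probability, 0.5.
print(relative_frequency)
```

With fewer flips the proportion wanders more; the "long-run" part of the definition is what makes the relative frequency a good estimate of the probability.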
subjective interpretation of probability
way of understanding probability as the degree of one's certainty that a particular outcome will occur
hypothesis testing
procedure for deciding whether the outcome of a study (results for a sample) supports a particular theory or practical innovation (which is thought to apply to a population)
research hypothesis
statement in hypothesis testing about the predicted relation between populations (often a prediction of a difference between population means)
null hypothesis
statement about a relation between populations that is the opposite of the research hypothesis; statement that there is no difference between populations; a contrived statement set up to examine whether it can be rejected as part of hypothesis testing
steps of hypothesis testing
1) restate the question as a research hypothesis and a null hypothesis about the populations
2) determine the characteristics of the comparison distribution
3) determine the cutoff sample score on the comparison distribution at which the null hypothesis should be rejected
4) determine your sample's score on the comparison distribution
5) decide whether to reject the null hypothesis
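These hypothesis-testing steps can be sketched as a Z test, the case where both the population mean and standard deviation are known. The numbers below are made-up illustration values, not from any real study.

```python
from statistics import NormalDist, mean

# A sketch of the hypothesis-testing steps using a Z test
# (population mean and standard deviation known).

pop_mean, pop_sd = 100.0, 15.0          # known population parameters
sample = [104, 110, 98, 107, 112, 101, 109, 105, 103, 111]

# Step 1: research hypothesis: the sample's population has a different
# mean; null hypothesis: no difference from the general population.
# Step 2: comparison distribution = distribution of means, with
# mean pop_mean and standard deviation pop_sd / sqrt(N).
n = len(sample)
sd_of_means = pop_sd / n ** 0.5

# Step 3: cutoff score at the .05 level, two-tailed.
cutoff = NormalDist().inv_cdf(0.975)    # about 1.96

# Step 4: the sample's score on the comparison distribution.
z = (mean(sample) - pop_mean) / sd_of_means

# Step 5: reject the null hypothesis if the score reaches the cutoff.
reject_null = abs(z) >= cutoff
print(round(z, 2), reject_null)
```

Here the sample's Z score falls short of the cutoff, so the null hypothesis is not rejected.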
comparison distribution
distribution used in hypothesis testing. It represents the population situation if the null hypothesis is true. It is the distribution to which you compare the score based on your sample's results
cutoff score
point in hypothesis testing, on the comparison distribution, at which, if reached or exceeded by the sample score, you reject the null hypothesis.
also called critical value.
conventional level of significance
(p < .05, p < .01) levels of significance widely used in psychology
statistically significant
conclusion that the results of a study would be unlikely if in fact the sample studied represents a population that is no different from the population in general; an outcome of hypothesis testing in which the null hypothesis is rejected.
one-tailed test
hypothesis testing procedure for a directional hypothesis; situation in which the region of the comparison distribution in which the null hypothesis would be rejected is all on one side (tail) of the distribution
directional hypothesis
research hypothesis predicting a particular direction of difference between populations; for example, a prediction that the population like the sample studied has a higher mean than the population in general
two-tailed tests
hypothesis testing procedure for a nondirectional hypothesis; the situation in which the region of the comparison distribution in which the null hypothesis would be rejected is divided between the two sides (tails) of the distribution
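The difference between one-tailed and two-tailed cutoffs can be shown by computing both on a normal comparison distribution at the .05 level, a minimal sketch using the standard library.

```python
from statistics import NormalDist

# Cutoff scores (critical values) on a normal comparison
# distribution at the .05 significance level.
z = NormalDist()

# One-tailed: the whole 5% rejection region sits in one tail.
one_tailed_cutoff = z.inv_cdf(0.95)       # about 1.64

# Two-tailed: the 5% is split, 2.5% in each tail, so the
# cutoff is farther out.
two_tailed_cutoff = z.inv_cdf(0.975)      # about 1.96

print(round(one_tailed_cutoff, 2), round(two_tailed_cutoff, 2))
```

This is why a directional hypothesis has an easier cutoff to reach, but only in the predicted direction.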
nondirectional hypothesis
research hypothesis that does not predict a particular direction of difference between the population like the sample studied and the population in general.
hypothesis testing in research articles
research articles typically report the results of hypothesis testing by saying a result was or was not significant and giving the probability level cutoff (usually 5% or 1%) that the decision was based on.
t test
hypothesis-testing procedure in which the population variance is unknown; it compares t scores from a sample to a comparison distribution called a t distribution
t distribution
mathematically defined curve that is the comparison distribution used in a t test
t test for a single sample
hypothesis-testing procedure in which a sample mean is being compared to a known population mean and the population variance is unknown
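The single-sample t test can be sketched in a few lines: the key difference from a Z test is that the population variance must be estimated from the sample, dividing the squared deviations by N - 1. The scores below are made-up illustration values.

```python
from math import sqrt
from statistics import mean

# A minimal sketch of a t test for a single sample: the population
# mean is known, but the population variance is estimated from the sample.
pop_mean = 4.0
sample = [5, 3, 6, 2, 7, 6, 7, 4, 2, 5]

n = len(sample)
m = mean(sample)

# Estimated population variance: sum of squared deviations over N - 1.
est_var = sum((x - m) ** 2 for x in sample) / (n - 1)

# Standard deviation of the distribution of means.
sd_of_means = sqrt(est_var / n)

# t score for the sample, compared to a t distribution with N - 1
# degrees of freedom.
t = (m - pop_mean) / sd_of_means
df = n - 1
print(df, round(t, 2))
```

The resulting t score is then compared to the cutoff from a t table for N - 1 degrees of freedom.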
t tests for dependent means
hypothesis-testing procedure in which there are two scores for each person and the population variance is not known; it determines the significance of a hypothesis that is being tested using difference or change scores from a single group of people.
when are they used?
when participants have two scores, such as a before score and an after score. Also used when you have scores from pairs of research participants.
t tests for dependent means with scores from pairs of participants
in this t test, you first figure a difference or change score for each participant, then go through the usual five steps of hypothesis testing
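The dependent-means procedure can be sketched as follows, using made-up before and after scores: the test is really a single-sample t test run on the difference scores, with a comparison mean of 0 under the null hypothesis.

```python
from math import sqrt
from statistics import mean

# A sketch of a t test for dependent means: each participant has a
# before and an after score; the test is done on the difference scores.
before = [22, 18, 25, 20, 23, 19]
after  = [27, 21, 26, 25, 24, 23]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
m = mean(diffs)

# Estimated population variance of difference scores (divide by N - 1).
est_var = sum((d - m) ** 2 for d in diffs) / (n - 1)
sd_of_means = sqrt(est_var / n)

# Under the null hypothesis the mean difference is 0.
t = m / sd_of_means
df = n - 1
print(df, round(t, 2))
```

Once the difference scores are figured, everything proceeds exactly as in the single-sample case.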
t tests in research articles
research articles usually describe t tests in a fairly standard format that includes the degrees of freedom, the t score, and the significance level. Rarely does an article report the standard deviation of the difference scores.
power advantage of repeated-measures designs in t tests
studies using difference scores often have much larger effect sizes than other research designs. That is, testing each of a group of participants twice usually produces a study with high power.
t tests for independent means
used to compare two sets of scores, one from each of two entirely separate groups of people.
hypothesis-testing procedure in which there are two separate groups of people tested and in which the population variance is not known.
logic
1) the mean of the distribution of differences between means
2) the estimated population variance
3) the variance of the two distributions of means
4) the variance and standard deviation of the distribution of differences between means
5) the shape of the comparison distribution (a t distribution with the total degrees of freedom)
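The logic above can be sketched in code, with made-up scores for two separate groups: pool the two variance estimates (weighted by their degrees of freedom), build the distribution of differences between means, and compute t.

```python
from math import sqrt
from statistics import mean

# A sketch of a t test for independent means.
group1 = [6, 4, 9, 7, 7, 3, 6]
group2 = [6, 1, 5, 3, 1, 1, 4]

def est_var(scores):
    """Estimated population variance: squared deviations over N - 1."""
    m = mean(scores)
    return sum((x - m) ** 2 for x in scores) / (len(scores) - 1)

n1, n2 = len(group1), len(group2)
df1, df2 = n1 - 1, n2 - 1
df_total = df1 + df2

# Pooled estimate of the population variance, weighted by df.
pooled = est_var(group1) * (df1 / df_total) + est_var(group2) * (df2 / df_total)

# Variance of each distribution of means, then of the distribution of
# differences between means (their sum), and its standard deviation.
var_diff = pooled / n1 + pooled / n2
sd_diff = sqrt(var_diff)

t = (mean(group1) - mean(group2)) / sd_diff
print(df_total, round(t, 2))
```

The t score is compared to a t distribution with df_total degrees of freedom.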
comparison of the 3 kinds of tests
the population variance is not known for any of the three tests, and the shape of the comparison distribution for each is a t distribution. The single sample t test is used when you are comparing the mean of a single sample to a known population mean; the t test for dependent means is used when each participant has two scores (the test is done on the difference scores); and the t test for independent means is used when comparing the means of two entirely separate groups.
t tests reported in research articles
reported in research articles with the means of the two groups plus the degrees of freedom, t score, and significance level. Results may also be reported in a table in which each significant difference may be shown by asterisks.
analysis of variance (anova)
hypothesis-testing procedure for studies with three or more groups
basic logic of anova
hypothesis testing in analysis of variance is about whether the means of the samples differ more than you would expect if the null hypothesis were true
within-groups estimate of the population variance
estimate of the variance of the population of individuals based on the variation among the scores in each of the actual groups studied
between-groups estimate of the population variance
estimate of the variance of the population of individuals based on the variation among the means of the groups studied
f ratio
ratio of the between groups population variance estimate to the within groups population variance estimate.
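The two variance estimates and their ratio can be sketched for three made-up groups of equal size: average the within-group variance estimates, scale the variance of the group means by the group size, and divide.

```python
from statistics import mean

# A sketch of the basic analysis-of-variance logic for three groups.
groups = [[8, 7, 9, 6, 10], [5, 4, 6, 3, 7], [6, 8, 5, 7, 9]]

n = len(groups[0])                       # scores per group (equal sizes)

def est_var(scores):
    """Estimated population variance: squared deviations over N - 1."""
    m = mean(scores)
    return sum((x - m) ** 2 for x in scores) / (len(scores) - 1)

# Within-groups estimate: average the variance estimates from each group.
within = mean(est_var(g) for g in groups)

# Between-groups estimate: variance of the group means, scaled by n
# (means vary less than individual scores by a factor of n).
group_means = [mean(g) for g in groups]
between = est_var(group_means) * n

f_ratio = between / within
print(round(f_ratio, 2))
```

If the null hypothesis were true, both estimates would reflect the same population variance and the f ratio would be around 1; a large ratio is evidence the group means really differ.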
f distribution
mathematically defined curve that is the comparison distribution used in an analysis of variance.
f table
table of cutoff scores on the f distribution; you look up how extreme an f ratio is needed to reject the null hypothesis at, say, the .05 level.
planned contrasts
comparison in which the particular means to be compared were decided in advance.
post hoc comparisons
multiple comparisons, not specified in advance; procedure conducted as part of an exploratory analysis after an analysis of variance.
between groups
people in the levels of the IV come to the experiment with nothing in common (remember random assignment). in t tests, we referred to this as independent samples
within groups
means each participant is exposed to every level of the IV; before the experiment even begins, the two groups have something in common.
in t tests, we referred to this as dependent means or paired samples
systematic variance
between groups
deviation of the group means from the grand mean
this number is small when the differences between the group means are small
increases as group means differences increase
we want this score to be large
error variance
within groups
deviation of the individual scores in each group from their respective group mean
scores in the group vary from each other
we want this number to be small
when this number is small, it shows consistency of the effect; this within-group variation reflects individual differences