health science #2

When we say that a sample statistic is a "point estimate," we mean that it is

our best guess of a population parameter.

Simply put, " bias " means that our sample statistic is not a good estimate of the corresponding population parameter. Which of the following situations might lead to bias?

a. taking a nonrepresentative sample
b. adding 3 to each measurement
c. cluster sampling
A

If " bias " refers to whether or not our estimate is a good predictor of a population parameter, " efficiency " refers to

how far our estimate is likely to deviate from the population parameter.

Having the square root of N in the denominator of the formula for the standard error of a mean means that

as N gets larger, the standard error will get smaller.
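To see this numerically, here is a minimal Python sketch; the standard deviation of 6 is just an illustrative value:

```python
import math

s = 6  # a sample standard deviation (illustrative value)
for n in (25, 100, 400):
    se = s / math.sqrt(n)  # standard error of the mean
    print(f"N = {n:3d}  ->  standard error = {se:.2f}")
# Output: 1.20, 0.60, 0.30 -- quadrupling N halves the standard error.
```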

From a sample of 200 private college students, you found that the average number of hours of study time each week is 25 with a standard deviation of 6. A point estimate of the average study time for all private college students would be

25

Two sample statistics are unbiased estimators. They are

Means and Proportions

An estimator is unbiased if the mean of its sampling distribution is equal to

The population value

The more efficient the estimate, the more the sampling distribution

is clustered around the mean.

What are Inferential Statistics?

Tools that are used to make a "best guess" about the value of a population parameter.
We are using statistics from a sample to estimate parameters in a population.

An always-present concern about inferential statistics:

We can never be 100% sure that the statistics generated from a sample reflect "what's really so" in the entire population.
It could be that we happened to pick an unusual sample!

Margin of error is

a statistical concept (and calculation!) that we use to account for the fact that our statistics might be different from the "true" population parameter!

Two Main Uses of Inferential Statistics

Setting confidence intervals
Performing hypothesis tests

Setting confidence intervals

It's a statistical estimate of how confident we are that a sample mean we calculate is close to the true population mean (which we will most likely never know).
An important tool for honest reporting of scientific results.

Performing hypothesis tests

Unlike the confidence interval, we begin with a guess about what number the true population value is (based upon a theory).
The statistic tells us the likelihood that our guess (hypothesis) is right or wrong, given the data we collect from our sample.

Basic Concepts of Confidence Intervals

Point estimate
Lower limit
Upper limit
Margin of error

Point estimate

The middle of the confidence interval.
The sample mean (statistic), which is our "best guess" of the population mean (parameter).

Lower limit

The lowest value on the confidence interval (left side)

Upper limit

The highest value on the confidence interval (right side)

Margin of error

The distance between the lower limit and the point estimate (which = the distance between the point estimate and the upper limit)

Margin of Error

determines the width of the confidence interval. The narrower the confidence interval is, the more precise our estimate is.
Margin of error accounts for sampling error

Sampling error (p. 216)

is exactly how far off our sample statistic is from the population parameter (something we usually don't know!).

What Determines How Big (Wide) the Margin of Error Is (p. 231)

Depends upon the terms in the equation (see the sketch after this list):
Sample size (N)
The percent confidence interval that we want (Z)
Variation in the sample (s)
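As a rough sketch of how these pieces combine in the usual large-sample formula, margin = Z × s / √N, here are the study-time numbers from the earlier card (N = 200, mean 25, s = 6; Z = 1.96 for 95% confidence):

```python
import math

n, s, mean = 200, 6, 25  # from the study-time example above
z = 1.96                 # Z for a 95% confidence interval
margin = z * s / math.sqrt(n)
print(f"margin of error = {margin:.2f}")                       # about 0.83
print(f"95% CI = ({mean - margin:.2f}, {mean + margin:.2f})")  # about (24.17, 25.83)
```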

Sample size (N)

Larger sample size = smaller margin of error.
This is because a larger sample is usually more representative of the population than a smaller one.

The percent confidence interval that we want (Z)

Bigger Z = bigger margin of error = more confidence that the true population mean is somewhere in that confidence interval.

Variation in the sample (s)

More variation in the variable= larger margin of error.
This is because more dispersion (variety) of answers means that not as many people in the sample have scores that are close to the point estimate (the sample mean).

IQ has a normal distribution in a sample of students.
The 95% confidence interval= (100, 105)
Find the point estimate:

It is right in the middle of the confidence interval! (100+105)/2= 102.5

Find the margin of error:

It is the distance between the point estimate and the lower or upper limit (either one). 105 - 102.5 = 2.5

Question: Did McCain have a significantly
higher % of the suburban women vote
compared to Obama?

Margin of error = 3.1%
Answer to question:
CI for McCain: 44 ± 3.1 = (40.9, 47.1)
CI for Obama: 38 ± 3.1 = (34.9, 41.1)
Answer is no, McCain did not have a significantly higher % of the suburban women vote. Even though McCain had a higher vote % (44% vs. 38%), the two confidence intervals overlap, so the difference could be due to sampling error.
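A minimal Python sketch of this overlap check, using the numbers above (treating overlapping intervals as "no significant difference" follows the informal rule used in this example):

```python
def interval(pct, margin):
    """Turn a point estimate and margin of error into (lower, upper)."""
    return (pct - margin, pct + margin)

mccain = interval(44, 3.1)  # (40.9, 47.1)
obama = interval(38, 3.1)   # (34.9, 41.1)
overlap = mccain[0] <= obama[1] and obama[0] <= mccain[1]
print(mccain, obama, "overlap:", overlap)  # True -> difference not significant
```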

Sampling Concepts

Probability Samples
Non-probability Samples
Stratified Sample
Cluster Sample
These are important because the sampling methods you choose can strongly influence sampling error!

Probability Samples

In this sampling strategy, every person in the population of interest has a known, nonzero chance of being selected for the sample.

Example of probability sample:

You're studying exercise frequency among CSUF college students. You get your participants by recruiting them all day at the Titan Student Union.
Since most students hang around there at least sometimes, this is likely to be a probability sample.

Non-probability Samples (p. 214)

Not everyone in the population of interest has an equal chance of being selected for the sample.

Non-probability Example:

You're studying exercise frequency among Cal State Fullerton students. You get your participants by recruiting them at the gym.
Since nearly all students in the gym are working out, students who do not work out would not be represented in this sample!

Probability vs. Non-probability Samples

In reality, we often use non-probability samples because we do not have access to the entire population of interest.

Non-probability samples are often

ok for doing hypothesis testing of a theoretical model (e.g., factors associated with exercise frequency)

if you are trying to estimate a confidence interval for a characteristic in the population (e.g., mean IQ in the US population), you should use

probability samples (not non-probability samples)!

Stratified Sampling

The population is divided into groups (strata), based upon some variable that's important.
Then you randomly pick a specific number of participants from each group to be in your sample.
That way, each group represents a fixed % of the total sample. The % is often set to match each group's share of the population.

Example of Stratified Sampling

These people were stratified by gender and then 2 participants per group
were randomly selected to be in the sample.
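A small Python sketch of the same idea; the names and strata are made up for illustration:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical roster, stratified by gender as in the example
strata = {
    "women": ["Ana", "Bea", "Cam", "Dee", "Eve"],
    "men": ["Fin", "Gus", "Hal", "Ike", "Jay"],
}
# Randomly pick a fixed number (2) of participants from each stratum
sample = {group: random.sample(people, 2) for group, people in strata.items()}
print(sample)
```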

Cluster Sampling

Is possible when your population of interest naturally occurs in "clusters" (e.g., churches, companies, schools, etc.).
Instead of sampling individuals, we randomly select clusters of individuals

Cluster Example:

There are 100+ schools in the Orange County School District, and I randomly select 3 schools and survey all of the students in those three schools.
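A small Python sketch of the same idea, with hypothetical school names:

```python
import random

random.seed(2)  # reproducible illustration

schools = [f"School {i}" for i in range(1, 101)]  # stand-in for the 100+ schools
chosen = random.sample(schools, 3)  # randomly select 3 whole clusters
print(chosen)  # then survey ALL students in these three schools
```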

The two main types of inferential statistics are

setting confidence intervals and testing hypotheses.

The basic components of the confidence interval are

the point estimate, margin of error, lower limit and upper limit. You can find the point estimate and the margin of error if you have the lower and upper limit.

Probability samples are those which are based upon

the entire population of interest, whereas non-probability samples are those that are based only on a subset of the population.
Two types of probability sampling strategies: stratified sampling and cluster sampling.

In hypothesis testing, the __________ is the critical assumption, the assumption that is actually tested.

null hypothesis

Which assumption must be true in order to justify the use of hypothesis testing?

random sampling

If we reject the null hypothesis of "no difference" at the 0.05 level,

the odds are 20 to 1 in our favor that we have made a correct decision.

Given the same alpha level or p -level, the one-tailed test

makes it more likely that H0 will be rejected.

A one-tailed test of significance could be used whenever

the researcher can predict a direction for the difference.

A sample of people attending a professional football game averages 13.7 years of formal education while the surrounding community averages 12.1 years. The difference is significant at the 0.05 level. What could we conclude?

The sample is significantly more educated than the community as a whole.

A researcher is interested in the effect that neighborhood crime-watch efforts have on the crime rate in the inner city, but he is unwilling to predict the direction of the difference. The appropriate test of a hypothesis would be

two tailed

Do sex education classes and free clinics that offer counseling for teenagers reduce the number of pregnancies among teenagers? The appropriate test of a hypothesis would be

one tailed test

When reading output from SPSS, in order to find the significance of a one-tailed test for a difference in means, one needs to

multiply the significance of the two-tailed test by 0.5.

When Ha is that the mean is "greater than" some value, you have a __________; when Ha is that the mean is "not equal to" some value, you have a ___________.

a one-tailed test; a two-tailed test

Alpha Level represents...

The maximum risk we accept that a result is due to chance (the significance level!)

If the p-value is less than α (alpha)

we reject the null

If the p-value is greater than α

we do not reject the null

"Fail to reject the null hypothesis" simply means that

the evidence in favor of rejection was not strong enough

The central problem in the case of two-sample hypothesis tests is to determine

if two populations differ significantly on the trait in question.

When testing for the significance of the difference between two means, which is the proper assumption?

that samples are independent as well as random

When random samples are drawn so that the selection of a case for one sample has no effect on the selection of cases for another sample, the samples are

independent

If the Levene test in SPSS output reports an outcome with significance less than our alpha (say, 0.05), we should probably

not assume that the two samples' variances are equal.

SPSS reports significance levels for two-tailed tests by default; to find the significance of one-tailed tests,

divide the two-tailed significance by 2.

When looking at a difference in proportions between two populations, to be wrong in rejecting the null hypothesis means that

the two populations do not actually differ on the trait in question (we have made a Type I error).

Samples from two high schools are being tested for the difference in their levels of prejudice. One sample contains 39 respondents, and the other contains 47 respondents. The appropriate sampling distribution is

the t-distribution.

When solving the formula for finding Z with sample proportions in the two-sample case, we must first estimate

the population proportion.

When testing the significance of the difference between two sample proportions, the null hypothesis is

Pu1 = Pu2 (the two population proportions are equal)

From a university population, random samples of 145 men and 237 women have been asked if they have ever cheated in a college class. Eight percent of the men and 6 percent of the women say that they have. What is the appropriate test to assess the significance of the difference?

Test for the significance of the difference between two sample proportions, large samples.
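A minimal Python sketch of that test, using the numbers from the card; under the null hypothesis, the two samples are pooled to estimate the common population proportion:

```python
import math

p1, n1 = 0.08, 145  # men who say they have cheated
p2, n2 = 0.06, 237  # women who say they have cheated

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)  # estimated population proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"Z = {z:.2f}")  # about 0.76, inside +/-1.96, so not significant at .05
```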

For all tests of hypotheses, the probability of rejecting the null hypothesis is a function of

the size of the observed difference.

In twins studies, in which one twin is assigned to one group and one to another, the selection of subjects is

non-independent

Why do the degrees of freedom matter when performing t-tests?

The shape of the t-distribution differs for different degrees of freedom.

Our null hypothesis about paired means or proportions is usually that there is no difference between

the scores in the population

In the section example, we tested for a difference in means between two non-independent groups. What could we have done if the variable of interest were categorical instead of interval/ratio?

We could have converted the outcome into a dichotomy and assigned dummy coding scores.

The higher the alpha level

the greater the probability of rejecting the null hypothesis.

Four tests of significance were conducted on the same set of results: for test 1, alpha = 0.05, two-tailed test; for test 2, alpha = 0.10, one-tailed test; for test 3, alpha = 0.01, two-tailed test; for test 4, alpha = 0.01, one-tailed test. Which test is most likely to reject the null hypothesis?

Test 2

The value of all test statistics is directly proportional to

sample size

The larger the sample size

the more likely we are to reject the null hypothesis.

A difference between samples that is shown to be statistically significant may also be

a. practically insignificant.
b. theoretically insignificant.
c. sociologically insignificant.
d. all of the above.
D

If a difference between samples is not statistically significant, it is almost certainly ___________. On the other hand, a statistically significant difference is not necessarily ___________.

unimportant; important

The Levene test can be helpful in

deciding which estimator of the standard error of the mean difference is most appropriate.

A One Sample Z-Test is used to...

Compare a sample mean to the population mean

What are the 7 Steps to Follow when computing an inferential statistic?

State the Null and Alternative Hypotheses
Set the level of risk (Alpha Level)
Select the appropriate test statistic
Compute the test statistic value
Determine the value needed to reject the null
Compare your obtained value and critical value
State Results

An Alpha Level is...

A value selected by the researcher (typically .05) that sets the level of risk he/she accepts that the result was due to chance.

Follow up question! If a value is Significant at the .05 level... what does this mean?

This means that the researcher can be at least 95% certain that the result was NOT due to chance

One Sample Z-Test

Used to compare a sample's mean to the population mean on one variable.
Helped us to identify if a sample's result was significantly different from "The Norm"
Can sometimes be impractical because we may not know the Normative values for a population
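A minimal Python sketch of a one-sample Z-test; the population norm (mean 100, standard deviation 15) and the sample values are hypothetical, chosen to echo the IQ examples above:

```python
import math

mu, sigma = 100, 15        # hypothetical known population norm (IQ-style)
sample_mean, n = 104, 36   # hypothetical sample result

z = (sample_mean - mu) / (sigma / math.sqrt(n))
print(f"Z = {z:.2f}")  # 1.60 < 1.96, so not significant at .05 (two-tailed)
```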

A Z-Test only lets us

work with one sample, which we compare to the population.
It won't work for every study.

Independent Samples T-Test!

(sometimes called independent means t-test)
Compares two samples on one variable

Independent T-Test Tools Needed:

Identified significance level
Computed t-statistic
Degrees of freedom

Significance Level for comparing two samples

Just like with ANY inferential statistic:
First identify your ALPHA level
The most commonly used one is α = .05

The T Statistic:

Let's break it down.
Essentially, what we are working with is a ratio:
the difference in sample means divided by
the variability of the groups

The pooled term in the denominator of the T Statistic represents:

the variability of the groups (the pooled standard error)
it will always be given

Because a T-Test deals with Variance, It operates on a Few different assumptions
The most Important of these is referred to as Homogeneity of Variance
The Assumption states that the variances between the two groups will

be equal

Degrees of Freedom

"The rank of a quadratic form"
What?!?!?
Essentially, it is a value which provides an approximation of the sample size
How far can our variables vary?
It's sort of a way of keeping score:
"This many values must be taken into account."

IMPORTANT: The calculation for Degrees of Freedom is

different for each inferential test.

DF stands for

degrees of freedom

Why do DF matter?

Larger sample size = Smaller Critical Value

Effect Size

provides a measure of HOW different two groups are from one another.

Effect Size Values
The value ranges outlined by Jacob Cohen

Small = 0.0 - .20
Medium = .20 - .50
Large = .50 and above

Reporting a T-Test
So For our First Example
T-Statistic = 2.75
Df = 18
Significant at p=.05
So...

t(18) = 2.75, p < .05

if our t score is larger than the critical value we

reject the null.
if it is smaller, we fail to reject the null

Surprise! We followed these steps for the T-Test:

State the Null and Alternative Hypotheses
Set the level of risk (Alpha Level)
Select the appropriate test statistic
Compute the test statistic value
Determine the value needed to reject the null
Compare your obtained value and critical value
State Results

Step 1: State the Null and Research Hypothesis

H0 = There will be no difference in average time between those who are vegetarian and those who are carnivores
H1 = There will be a difference in average time between those who are vegetarian and those who are carnivores

step 2)
Set Alpha Level!

Let's stick with .05

Step 3)
Calculate the T-statistic!

t = (X1 − X2) / pooled SD

step 4)
Calculate the Degrees of Freedom!

df = (n1 − 1) + (n2 − 1)

how do you find the critical value?

From the t-table (that sheet of paper), using your degrees of freedom and alpha level.

0.12 < 2.093 = Not Significant

We fail to reject the Null!
Conclusion = "There is no significant difference between vegetarians and carnivores in regard to their average mile time.

Dependent T-Test

Also Called a Paired Samples T-Test
Compares one sample on two means over time
The Same subjects are Tested more than once, and we are comparing those two scores!

We are still comparing two "Groups" however those groups are made up of the same people at different time points.

Dependent T-Test

Both T-Tests are reported the same way...

Requires the t-value, the alpha level, and the degrees of freedom:
t(df) = t-statistic, p < alpha level
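A minimal scipy sketch of the paired version; the before/after scores for the same ten subjects are hypothetical:

```python
from scipy import stats

before = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]  # time 1 scores
after = [14, 16, 13, 15, 15, 17, 13, 16, 15, 14]   # time 2 scores, same subjects

t, p = stats.ttest_rel(before, after)  # paired (dependent) samples t-test
df = len(before) - 1                   # n - 1 pairs = 9
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```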

There are two types of T-Test

independent and dependent
Each has its own t-test formula

Independent -

Two samples compared on the same dependent variable

Dependent -

One sample, two time points. Compared on the same dependent variable.

For both independent and dependent t-tests:

Using the obtained t-statistic, identified alpha level, degrees of freedom, and critical value, we determine whether or not to reject the null!

Chi-Square analysis!!!
It allows you to

compare two or more groups on nominal or ordinal data!
Example: Gender, Age, Hair color, Income Level etc...

Using Expected Frequency:

The Expected probability of Two independent events happening at the same time...
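A minimal scipy sketch of a chi-square test on a contingency table; the observed counts (e.g., group by response) are hypothetical:

```python
from scipy import stats

observed = [[30, 20],   # hypothetical counts: row = group, column = response
            [22, 28]]

chi2, p, df, expected = stats.chi2_contingency(observed)
print(f"chi-square({df}) = {chi2:.2f}, p = {p:.3f}")
print(expected)  # the frequencies we would expect if the variables were independent
```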

What information does a frequency table contain?

Information regarding the "frequency" of responses to the various categories of a variable

A Correlation tells us

about the relationship between variables.
Bi-Variate Correlation = 2 variables
Represented with one number
Range between -1 and 1

The Pearson correlation focuses on

variables which we consider "continuous"
They can assume any value on a continuum
Doesn't work as well for non-continuous variables (nominal, etc.)

Coefficient of Determination
A much more precise way to

interpret the correlation coefficient
Correlations measure the similarities between two variables

The Coefficient of Determination

Describes the amount of variance which can be accounted for by the variance in another variable
This is calculated by squaring the correlation coefficient.
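A small scipy sketch of both steps; the paired values are hypothetical:

```python
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10]      # hypothetical data
exam_scores = [55, 60, 63, 70, 74, 82]

r, p = stats.pearsonr(hours_studied, exam_scores)
print(f"r = {r:.2f}, r squared = {r ** 2:.2f}")  # r^2 = share of variance accounted for
```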

Bi Variate correlation -

two variables
When we have more than two variables, we can use a "correlation matrix."

For each Pair of Variables, there is a

Pearson's r value (correlation value)
Illustrates several bi-variate correlations for all variables.

Correlation coefficient

Describes the relationship between two different variables
Pearson's r
Represented with a number ranging from −1 through 0 to +1

Correlation coefficient is used to test hypotheses that...

Examine the relationship between variables rather than the difference between groups
Two variables

Finding DF for a correlation Coefficient is easy!

df = n - 2

One Way Analysis of Variance allows

us to compare 3 or more groups on a single dependent variable
The dependent variable should be interval or ratio

One way Anova often referred to as

an F-Test

Analysis of Variance (ANOVA) is used when

more than two group means are being tested simultaneously
Means of groups differ from one another on a particular score

What test statistic would you use for an ANOVA?

F- Test

ANOVA examines the

variance between groups and the variances within groups
These variances are then compared against each other

ANOVA is Similar to t test...only in this case,

you have more than two groups

Computing the F-Test Statistics: we want the within-group variance

to be small and the between-group variance large to find significance

F Test =

F= mean squares between / mean squares within

Degrees of freedom for anova

Two sets of degrees of freedom
one for between and one for within

DF for between groups (anova)

Number of groups minus one
k - 1
3 groups, so 3 - 1 = 2

DF for within groups (anova)

Total sample size minus number of groups
N - k
30 sample size minus 3 groups, so 30 - 3 = 27
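A minimal scipy sketch of a one-way ANOVA with k = 3 groups and N = 30, matching the df arithmetic above; the scores themselves are hypothetical:

```python
from scipy import stats

# Hypothetical scores for three groups of ten (N = 30, k = 3)
g1 = [12, 14, 11, 13, 15, 12, 14, 13, 12, 14]
g2 = [16, 18, 17, 15, 19, 16, 18, 17, 16, 18]
g3 = [13, 15, 14, 12, 16, 13, 15, 14, 13, 15]

f, p = stats.f_oneway(g1, g2, g3)  # the F-test
k, n = 3, 30
print(f"F({k - 1}, {n - k}) = {f:.2f}, p = {p:.4f}")  # df = (2, 27), as above
```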

F Test: The Plan (anova)

1)A statement of null and research hypothesis
Null Hypothesis
Research Hypothesis
2)Set the level of risk
3)Select the appropriate test statistic
4)Compute the test statistic value (called the obtained value)
-Compute the between-group sum of squares
-Compute the within-group sum of squares, then the mean squares and the F-ratio

F(2, 27) = 8.80, p < .05

F = test statistic
2,27 = df between groups and df within groups
8.80 = obtained value
p < .05 = less than a 5% probability of getting a difference this large by chance if the null hypothesis is true

One-Way ANOVA: What does it do?

Compare the variance between 3 or more groups
Specifically
The variance between groups - How different are the three groups from one another? (Means)
The variance within groups - how different are the scores within the group from one another? (Means)

And what is the Formula for the F-Ratio?

F = mean squares between (variance between groups) / mean squares within (variance within groups)

SS=

Sum of Squares

n=

total sample size

k=

# of groups

(anova) The less variance evident within groups, the

higher your F-Ratio will be
Higher is good!

As the variance between groups increases (MSb) and the variance within groups decreases (MSw) the F-Ratio

Increases!

The following are the steps to follow when computing a One-Way ANOVA

Identify your Null and Research Hypothesis
Calculate your F-Ratio
Compare your value to a critical value
Make a conclusion about your hypothesis

To read an F-Table, we need three things

Our Alpha Level = 0.05
(k-1) df Numerator
(n-k) df Denominator

ANOVA: looking at the table for the F-test

Just like with the T-Test Table, always round up if your exact value is not listed

We need to compare our Obtained Value of 4.44 to the critical value of 3.89...
4.44 > 3.89 Meaning we have a significant result!
do we accept or reject the null

We Can Reject the Null hypothesis and conclude that there is a significant difference in overall sales between the three branches!!!

What is a Tukey?
Tukey's Range test

Works in a similar way to the T-Test
Determines where significant differences lie between the groups used in an ANOVA
This is where you can identify what groups are significantly different from one another.
No need to learn to do it by hand; SPSS does a great job of it.

Always keep in mind: the further away a test statistic is from zero, the lower the p-value goes, and the more likely you will

reject the null hypothesis!

F-ratio is the test statistic for what inferential test? and what is the formula

One Way Anova
Mean Square Between Groups divided by Mean Square Within Groups.
Because the F-ratio is a ratio, it's always a positive number (never negative).

Independent samples t-test:

It's a function of the difference between two group means. A t-score of zero means no difference!

One-way ANOVA:

It's a ratio of between group differences to within group differences. The smaller the F-ratio is, the smaller the group differences are.

Pearson's chi-square:

It measures the extent to which observed frequencies are different from expected frequencies in each cell of a contingency table. The closer the chi-square test statistic is to zero, the more likely you will fail to reject the null hypothesis.

Pearson's correlation coefficient:

It measures the strength and direction of a linear relationship between the independent variable and dependent variable. The further away the correlation test statistic number is from zero (either negative or positive), the stronger the relationship is!