For each of the simple statistical techniques listed below, click to find out what the method is for, and download a worked example or instructions for applying the technique in SPSS, with a test data file to practice on.
One-way analysis of variance (ANOVA) is a method for testing whether three or more populations have the same mean value. It is used when quantitative data has been collected from three or more independent samples. The analysis involves two variables: the independent variable, a factor identifying which of the groups being compared an individual belongs to, and the dependent variable, the quantitative variable being compared between the samples.
The null hypothesis states that all the populations have the same mean, whereas the alternative hypothesis states that not all means are the same, that is, that at least two means differ. The alternative hypothesis could be true if just one mean is out of line with the rest or if all of them are different from each other. If the null hypothesis is rejected at the conclusion of the test, further tests, called ‘post hoc tests’, may be used to determine which means differ significantly from each other.
Two-way or three-way analysis of variance extends this technique to analyses where two or more factors can influence the dependent variable. There are also procedures for applying ANOVA to data from related samples or ‘repeated measures’ data.
Analysis of variance is a PARAMETRIC method. It is based on the assumptions that the data is Normally distributed within each group and that the variance is the same within each group.
Example PDF »
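Alongside the SPSS instructions, a one-way ANOVA can be sketched in Python using SciPy's `scipy.stats.f_oneway`; the three groups of exam-style scores below are invented purely for illustration.

```python
# One-way ANOVA: do three or more independent groups share the same mean?
from scipy import stats

# Illustrative data (invented): scores from three independent groups.
group_a = [72, 75, 68, 80, 77]
group_b = [64, 70, 66, 61, 69]
group_c = [81, 85, 78, 88, 83]

# H0: all three population means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: at least two group means differ.")
```

A significant result here says only that the means are not all equal; identifying which pairs differ is the job of the post hoc tests described further down the page.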
The χ² test for an association tests for either:
- a relationship between two categorical variables, or
- a difference between groups in how members of each group answer a question which involves choosing a category.
The test is applied to data that can be arranged in a two-way table.
The chi-squared test is a NONPARAMETRIC TEST.
Example PDF » SPSS instructions PDF »
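As a sketch of the same test outside SPSS, SciPy's `scipy.stats.chi2_contingency` runs the chi-squared test directly on a two-way table of counts; the counts below are invented for illustration.

```python
# Chi-squared test for association on a two-way table of counts.
from scipy.stats import chi2_contingency

# Illustrative counts (invented): category choice (columns) by group (rows).
table = [[30, 10],   # group 1
         [20, 25]]   # group 2

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

`expected` holds the counts that would be expected if there were no association, which is useful for checking the usual rule of thumb that expected counts should not be too small.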
The Friedman test is used to compare data from three or more related samples. It is the non-parametric equivalent of a simple repeated measures analysis of variance (ANOVA). The dependent variable must be either ordinal or a quantitative variable that does not meet the assumptions for a parametric analysis.
The Friedman test is designed to test whether three or more populations have the same median values, using data collected from related samples.
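A minimal sketch of the Friedman test using SciPy's `scipy.stats.friedmanchisquare`, assuming SciPy is available; the ratings given by the same eight subjects under three conditions are invented for illustration.

```python
# Friedman test: three or more related samples (same subjects, repeated measures).
from scipy.stats import friedmanchisquare

# Illustrative data (invented): ratings of three treatments by the same 8 subjects.
treatment_1 = [5, 6, 7, 5, 6, 7, 8, 6]
treatment_2 = [4, 5, 6, 4, 5, 6, 7, 5]
treatment_3 = [7, 8, 8, 6, 7, 9, 9, 7]

# H0: the three populations have the same median.
stat, p = friedmanchisquare(treatment_1, treatment_2, treatment_3)
print(f"Friedman chi-squared = {stat:.2f}, p = {p:.4f}")
```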
The Kruskal-Wallis test is used to compare three or more independent samples. It is the non-parametric equivalent of a one-way analysis of variance (ANOVA).
It is used when analysing either ordinal data or a quantitative variable that does not meet the assumptions for a parametric test.
The Kruskal-Wallis test tests whether three or more populations have the same median values.
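The test can be sketched with SciPy's `scipy.stats.kruskal`; the three independent samples of reaction times below are invented for illustration.

```python
# Kruskal-Wallis test: three or more independent samples, compared on medians.
from scipy.stats import kruskal

# Illustrative data (invented): reaction times (ms) from three independent groups.
group_a = [320, 340, 310, 330, 325]
group_b = [350, 365, 355, 370, 360]
group_c = [400, 390, 410, 395, 405]

# H0: the three populations have the same median.
stat, p = kruskal(group_a, group_b, group_c)
print(f"H = {stat:.2f}, p = {p:.4f}")
```

Because the test works on ranks rather than the raw values, it can be applied to ordinal data or to quantitative data that fails the parametric assumptions.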
The Mann-Whitney U test is used to compare the medians of two independent samples.
The Mann-Whitney U test is the non-parametric equivalent of the two-sample (independent samples) t-test. The data must record the values of a variable which is either ordinal or a quantitative variable that does not meet the assumptions for a parametric test.
Example PDF » SPSS instructions PDF »
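As a sketch of the same test outside SPSS, SciPy's `scipy.stats.mannwhitneyu` compares two independent samples; the data below is invented for illustration.

```python
# Mann-Whitney U test: two independent samples, compared on medians.
from scipy.stats import mannwhitneyu

# Illustrative data (invented): scores from two independent groups.
group_1 = [12, 15, 11, 14, 13]
group_2 = [18, 20, 17, 21, 19]

# H0: the two populations have the same median (two-sided alternative).
u_stat, p = mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p:.4f}")
```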
Post hoc tests are sometimes carried out to provide further detail after an analysis of variance. Typically an analysis of variance will have compared three or more groups to see whether their mean response is the same.
If the null hypothesis is rejected then the conclusion is that the mean response is not the same for all groups or treatments. This leads immediately to the question ‘which ones are different?’ and post hoc tests are used to answer this question.
Post hoc tests are designed to provide a way of deciding which pairs of means differ significantly from each other and which do not. There are many variants of post hoc test, as several statisticians have each proposed their own method. The outcome can be anything from no pairs of means differing significantly to every pair differing.
As it follows an analysis of variance, post hoc testing is a PARAMETRIC method, based on the same assumptions as the analysis of variance.
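One simple post hoc approach (not the only one; named methods such as Tukey's HSD or Scheffé's test are common alternatives) is to run pairwise t-tests with a Bonferroni correction. The sketch below uses SciPy and invented data for three groups from a one-way ANOVA.

```python
# Bonferroni-corrected pairwise t-tests as a simple post hoc procedure.
from itertools import combinations
from scipy import stats

# Illustrative data (invented): the three groups compared in a one-way ANOVA.
groups = {
    "A": [72, 75, 68, 80, 77],
    "B": [64, 70, 66, 61, 69],
    "C": [81, 85, 78, 88, 83],
}

pairs = list(combinations(groups, 2))
adjusted = {}
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    # Bonferroni correction: multiply each p-value by the number of comparisons,
    # capping at 1, so the overall chance of a false positive stays controlled.
    adjusted[(g1, g2)] = min(p * len(pairs), 1.0)
    print(f"{g1} vs {g2}: adjusted p = {adjusted[(g1, g2)]:.4f}")
```

The Bonferroni correction is conservative; dedicated post hoc procedures trade off power against false-positive control in different ways, which is why so many variants exist.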
Spearman’s correlation coefficient rho (ρ) is used to measure the correlation between two variables when the usual correlation coefficient is not suitable. Spearman’s rho is a nonparametric measure of correlation, calculated from the ranked data. It can be used to measure the association between two variables when the variables are ordinal, or when they are quantitative but do not meet the assumptions required for the parametric correlation coefficient.
The value of Spearman’s rho is between +1 and –1, and the sign and value of ρ are interpreted in the same way as the more conventional correlation coefficient, r. For example:
- ρ = 0 indicates no association between the two variables;
- ρ = +1 indicates a ‘perfect’ positive association between the variables;
- ρ = –0.5 indicates a moderate association such that as one variable increases the other decreases.
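Because it is computed from ranks, ρ rewards any consistently increasing (or decreasing) relationship, not just a straight-line one. A minimal sketch with SciPy's `scipy.stats.spearmanr`, using invented data:

```python
# Spearman's rank correlation between two variables.
from scipy.stats import spearmanr

# Illustrative data (invented): x and y mostly increase together,
# but the relationship need not be linear for rho to be high.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]

rho, p = spearmanr(x, y)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```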
There are three kinds of t-test:
- the one-sample t-test,
- the paired-samples t-test, and
- the two-sample (independent-samples) t-test.
The paired-samples and two-sample t-tests are both used to test whether two population means are equal. The paired-samples t-test is used when the data is from related, paired or longitudinal samples, that is, when both sets of measurements have been obtained from the same individuals. The independent-samples t-test is used when the data has been collected from independent samples, that is, when two sets of measurements have been obtained from different individuals.
T-tests are PARAMETRIC tests and are based on assumptions that need to be checked.
Paired example PDF » SPSS instructions PDF »
Two-sample example PDF » SPSS instructions PDF »
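The paired and independent versions can be sketched with SciPy's `scipy.stats.ttest_rel` and `scipy.stats.ttest_ind`; the measurements below are invented for illustration.

```python
# Paired vs independent t-tests in SciPy.
from scipy import stats

# Paired-samples t-test: the same individuals measured before and after.
before = [140, 135, 150, 145, 138, 142]
after = [132, 130, 144, 139, 135, 137]
t_paired, p_paired = stats.ttest_rel(before, after)
print(f"paired: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Independent-samples t-test: two different groups of individuals.
group_1 = [25, 28, 24, 30, 27]
group_2 = [31, 34, 29, 35, 33]
t_ind, p_ind = stats.ttest_ind(group_1, group_2)
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}")
```

Picking the wrong version matters: the paired test analyses the within-individual differences, which usually gives it more power when measurements really are paired.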
The Wilcoxon (Signed Ranks) test is used to compare the medians of two related samples. It is the non-parametric equivalent of the paired-samples t-test.
The data must record the values of a variable which is either ordinal or a quantitative variable that does not meet the assumptions for a parametric test.
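A minimal sketch of the Wilcoxon signed-ranks test with SciPy's `scipy.stats.wilcoxon`, using invented before/after measurements from the same individuals:

```python
# Wilcoxon signed-ranks test: two related samples, compared on medians.
from scipy.stats import wilcoxon

# Illustrative data (invented): the same 8 individuals measured twice.
before = [10, 12, 11, 14, 13, 15, 16, 18]
after = [11, 14, 14, 18, 18, 21, 23, 26]

# H0: the median of the paired differences is zero.
stat, p = wilcoxon(before, after)
print(f"W = {stat:.1f}, p = {p:.4f}")
```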