Statistics Resources

This guide contains all of the ASC's statistics resources. If you do not see a topic, suggest it through the suggestion box on the Statistics home page.

Multiple Linear Regression

Multiple regression analysis extends simple linear regression to allow for multiple independent (predictor) variables. The model now includes two or more predictor variables but still contains a single dependent (criterion) variable.
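
In general form, the model predicts the dependent variable (Y) from a weighted combination of the predictors:

  predicted Y = b0 + b1(X1) + b2(X2) + ... + bk(Xk)

where b0 is the intercept (constant) and b1 through bk are the unstandardized coefficients for predictors X1 through Xk.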

Assumptions

  1. Dependent variable is continuous (interval or ratio)
  2. Independent variables are continuous (interval or ratio) or categorical (nominal or ordinal)
  3. Independence of observations - assessed using the Durbin-Watson statistic (requested in the regression syntax shown below)
  4. Linear relationship between the dependent variable and each independent variable - assessed through visual examination of scatterplots (see the sketch after this list)
  5. Homoscedasticity - assessed through a visual examination of a scatterplot of the residuals
  6. No multicollinearity (high correlation between independent variables) - inspection of correlation values and tolerance values
  7. No outliers or highly influential points - outliers can be detected using casewise diagnostics and studentized deleted residuals
  8. Residuals are approximately normally distributed - checked using histogram, P-P Plot, or Q-Q Plot of residuals.
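
Two of these checks can be run in SPSS before fitting the model. The syntax sketch below assumes placeholder variable names (jobsat as the dependent variable; salary, years_exp, and appreciation as predictors); substitute your own. The correlation matrix supports the multicollinearity check (assumption 6), and the scatterplot matrix supports the linearity check (assumption 4).

  * Correlation matrix among the predictors (multicollinearity check).
  CORRELATIONS
    /VARIABLES=salary years_exp appreciation
    /PRINT=TWOTAIL NOSIG.

  * Scatterplot matrix of the dependent and independent variables (linearity check).
  GRAPH
    /SCATTERPLOT(MATRIX)=jobsat salary years_exp appreciation.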

Running Multiple Linear Regression in SPSS

  1. Analyze > Regression > Linear...
  2. Place all independent variables in the "Independent(s)" box and the dependent variable in the "Dependent" box
  3. Click on the "Statistics" button to select options for testing assumptions. Click "Continue" to return to the main dialog box.
  4. Click "OK" to generate the results.

Interpreting Output

  • Model Summary
    • R = multiple correlation coefficient
    • R-Square = coefficient of determination - the proportion of variance in the dependent variable accounted for by the model
    • Adjusted R-Square = R-Square adjusted for the number of independent variables in the model (see the formula after this list)
  • ANOVA
    • F-ratio = measure of how effective the independent variables, collectively, are at predicting the dependent variable
    • Sig. (associated probability) = the probability of obtaining an F-ratio this large by chance alone if the independent variables had no predictive value
  • Coefficients
    • Unstandardized B = the amount the dependent variable changes when one independent variable changes by one unit and all other variables are held constant; these values are used to create the multiple regression equation for predicting the outcome variable (see the worked example after this list)
    • t and Sig. = used to determine the significance of each independent variable in the model
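
For reference, Adjusted R-Square shrinks R-Square according to the sample size (n) and the number of predictors (k):

  Adjusted R-Square = 1 - (1 - R-Square)(n - 1) / (n - k - 1)

The unstandardized B values plug directly into the prediction equation shown earlier. For example, with purely illustrative coefficients (not taken from real output) of 1.20 for the constant, 0.35 for salary, and 0.22 for perceived appreciation, the equation would read:

  predicted job satisfaction = 1.20 + 0.35(salary) + 0.22(appreciation)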

Reporting Results in APA Style

A multiple regression was run to predict job satisfaction from salary, years of experience, and perceived appreciation. This resulted in a significant model, F(3, 72) = 16.21, p < .01, R2 = .638. Examination of the individual predictors indicated that salary (t = 9.21, p < .01) and perceived appreciation (t = 15.33, p < .001) were significant predictors, but years of experience was not (t = 1.16, p = .135).
