Factors within the Community College Survey of Student Engagement
Faculty Advisor Name
Dr. Ben Selznick
Description
Student engagement has become a buzzword in higher education practice over the past 15-20 years. Numerous research studies have identified high-impact practices designed to improve student success and, ultimately, degree completion. In an effort to measure student engagement, the baccalaureate-focused National Survey of Student Engagement (NSSE) and its sister survey, the Community College Survey of Student Engagement (CCSSE), were developed.
Despite its wide use by community colleges, the CCSSE has been sharply criticized for failing to actually measure student engagement and for lacking a theoretically sound factor structure. Chickering and Gamson’s Seven Principles for Good Practice in Undergraduate Education was one of the theories used in developing the instrument. An initial confirmatory factor analysis (CFA) found a nine-factor structure for student engagement while using only a third of the instrument’s items. With this information, a research panel decided to reduce the instrument to a five-factor structure (the “benchmarks”) using a modified set of items, reasoning that the purpose of the instrument was to understand relationships between items, not to confirm or deny a particular factor structure. Other researchers have called this use of a CFA suspect, questioning the instrument’s validity, its conceptual soundness, and, absent sound theoretical underpinnings, its ability to measure student engagement at all.
With these concerns as a backdrop, this research study investigated whether a factor structure exists that can parsimoniously explain student engagement. A sample of 884 valid responses from students at two Virginia community colleges was used to test the original nine-factor model. Alternative models, including a five-factor model (using the CCSSE’s created “benchmarks”), a seven-factor model (using Chickering and Gamson’s original theory), and a one-factor model, were tested to see whether any fit as well as the nine-factor model.
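To make the model-comparison design concrete, the sketch below (not the study’s actual code) shows how competing CFA specifications can be set up and compared in Python with the semopy library, which accepts lavaan-style model syntax. The item-to-factor assignments and the data file name are hypothetical placeholders, not the CCSSE’s actual item mapping; the five factor names follow the published CCSSE benchmark labels.

```python
# Sketch of fitting competing CFA specifications with semopy
# (pip install semopy). Item groupings below are hypothetical.
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical data file: one column of ordinal responses per item.
data = pd.read_csv("ccsse_items.csv")

# One-factor model: every item loads on a single engagement factor.
one_factor = (
    "engagement =~ item1 + item2 + item3 + item4 + item5 "
    "+ item6 + item7 + item8 + item9 + item10"
)

# Five-factor "benchmark"-style model: correlated factors named after
# the CCSSE benchmarks; the item assignments are illustrative only.
five_factor = """
active_collaborative_learning =~ item1 + item2
student_effort =~ item3 + item4
academic_challenge =~ item5 + item6
student_faculty_interaction =~ item7 + item8
support_for_learners =~ item9 + item10
"""

for name, desc in [("one-factor", one_factor), ("five-factor", five_factor)]:
    model = Model(desc)
    # DWLS appears among semopy's objective options for ordinal data;
    # verify the keyword against your installed version's documentation.
    model.fit(data, obj="DWLS")
    print(name)
    print(calc_stats(model).T)  # chi-square, df, RMSEA, CFI, etc.
```

The same comparison logic extends to the seven- and nine-factor specifications, and nested models can additionally be compared with chi-square difference tests.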
Because the data were categorical, normal-theory estimators for the CFA were not appropriate. A polychoric correlation matrix with robust diagonally weighted least squares (rDWLS) was instead used to estimate the models’ parameters. The chi-square statistic, RMSEA, and CFI were used to evaluate global model-data fit, and residuals were examined for localized misfit. Each of the four models converged to a solution, with the nine-factor model fitting best, as expected; none of the more parsimonious models fit as well.
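To make the global fit criteria concrete, the following self-contained Python sketch implements the standard RMSEA and CFI formulas from model and baseline chi-square values. The numbers plugged in at the bottom are invented for illustration and are not results from this study.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA from a model chi-square: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """CFI: 1 minus the ratio of model to baseline noncentrality."""
    d_model = max(chi2_m - df_m, 0.0)
    d_base = max(chi2_b - df_b, d_model)
    return 1.0 if d_base == 0 else 1.0 - d_model / d_base

# Illustrative, made-up values (not the study's results): model chi-square
# of 1480 on 866 df with n = 884 cases; baseline chi-square of 25000 on 903 df.
print(round(rmsea(1480, 866, 884), 3))       # ~0.028
print(round(cfi(1480, 866, 25000, 903), 3))  # ~0.975
```

Common rules of thumb treat RMSEA at or below about .06 and CFI at or above about .95 as indicating close global fit; with rDWLS estimation, robust (scaled) versions of these indices are typically reported.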
Taken together, the models provided mixed evidence for a factor structure in the instrument. While the chi-square statistic and fit indices indicated slight misspecification in the nine-factor model, the lack of significant residuals showed no clear evidence of localized misfit. Even so, serious flaws remain with the instrument, including inappropriate estimation methods, the reduction from nine factors to five, and questions about the validity of the instrument as a whole. Despite these flaws, colleges are using the collected data to make substantive changes intended to improve student success.