
At-Risk Student Identification and Progress Monitoring

Machine Learning Methods for K-12 Early Warning Systems

by Michael L. Fuller, Muskingum Valley Educational Service Center and Mackenzi Brozovich, Muskingum Valley Educational Service Center

In educational settings, an early warning system (EWS) identifies students who are at risk of one or more adverse educational outcomes related to achievement, progression, and graduation. Specific examples include dropping out of school, not graduating on time, failing grade-level assessments, not earning needed high school course credits, retention in grade, not meeting college and career readiness benchmarks, drug and alcohol abuse, discipline problems, and suicide. As traditionally used, an EWS neither diagnoses the root causes of why students are at risk nor provides targeted interventions specific to students' unique needs (National Forum on Educational Statistics, 2018). According to recent reports (National Forum on Educational Statistics, 2018), nearly half of all public high schools use an early warning system.

In this paper, we discuss different at-risk methodologies, ranging from informal, locally developed methods to next-generation models based on machine learning algorithms. The purpose is to give the reader an overview of what currently exists to identify risk in student outcomes and the merits and limitations of the different approaches. Machine learning for early warning systems will be highlighted, given its well-documented utility for predicting outcomes in fields as diverse as corporate decision making, consumer spending, and health care.

We will minimize technical and statistical discussions to make this article accessible to a wide educator audience. In addition, this article will not provide detailed steps for setting up an EWS. Readers interested in implementing an EWS can find guidance elsewhere (Frazelle & Nagel, 2015; National Forum on Educational Statistics, 2018).

Methods Used to Establish Warning Indicators

Methodologies for EWS can be broadly grouped into three somewhat distinct operational frameworks.

Locally Determined Models

The first grouping is based on ease of development and use, and largely rests on suggested guidelines from professional sources (e.g., Johns Hopkins School of Education, 2020). Given that most school districts lack the in-house expertise, sophisticated statistical analysis and modeling for EWS is either minimized or forgone altogether.

Education indicators are measures of an educational system in relation to its goals; such measures may reflect the status of the system or its change. In an EWS, indicators may measure the extent to which a student is at risk. Student risk is inferred from a combination of poor attendance (e.g., missing more than 10% of instructional time), behavioral incidents (e.g., one or more in- or out-of-school suspensions), and poor course performance, including overall grade point average (e.g., failure in one or more courses or a GPA of 2.0 or lower on a 4-point scale). In some cases, districts may include student mobility and on-time grade level advancement. Indicators may be identified through internal analysis of student data or a review of external research.
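
To make the indicator approach concrete, the sketch below (in Python, used for all sketches in this article) flags students using the example thresholds just described. The column names, data values, and cutoffs are hypothetical illustrations, not a district standard.

```python
# A minimal sketch of a threshold-based risk flag using the indicators
# described above. Column names and cutoffs are illustrative only.
import pandas as pd

def flag_at_risk(students: pd.DataFrame) -> pd.Series:
    """Return True for students meeting any locally chosen threshold."""
    poor_attendance = students["pct_time_absent"] > 0.10   # >10% of time missed
    behavior = students["suspensions"] >= 1                # one or more suspensions
    poor_performance = (students["courses_failed"] >= 1) | (students["gpa"] <= 2.0)
    return poor_attendance | behavior | poor_performance

students = pd.DataFrame({
    "pct_time_absent": [0.04, 0.15, 0.08],
    "suspensions":     [0, 0, 2],
    "courses_failed":  [0, 1, 0],
    "gpa":             [3.2, 1.9, 2.8],
})
students["at_risk"] = flag_at_risk(students)
print(students)
```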

Once indicators are established, districts develop local thresholds to target students at risk. Thresholds are validated against student outcomes, such as not graduating on time or dropping out. That is, districts may determine the percentage of students who historically did not graduate on time and the extent to which they meet the assigned thresholds for that risk category. If many of the designated at-risk students do not graduate on time, it may be concluded that the thresholds are valid for this purpose. An advantage here is that educators can readily see how students are assigned to risk categories, and the approach provides an overview of trends and patterns within the district.

Districts may utilize this model for its simplicity in data retrieval, model execution, and understandable and relatable results. Thresholds in this model are more likely to be determined by consensus among district personnel as to how the indicators are combined into an overall metric than by a formal psychometric weighting process. Rates related to over-identification of risk (false positives) and under-identification of risk (false negatives) may not be adequately assessed and incorporated in the model for ongoing validation. Variability among districts in determining EWS indicators is common (Stuit et al., 2016).
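
As a rough illustration of what such validation could look like, the sketch below computes over- and under-identification rates for a hypothetical past cohort; the flags and graduation outcomes are invented for the example.

```python
# A hedged sketch of validating thresholds against historical outcomes.
import numpy as np

# Hypothetical data for a past cohort:
flagged = np.array([True, True, False, False, True, False])    # met the thresholds
off_track = np.array([True, False, False, True, True, False])  # did not graduate on time

# Share of on-time graduates who were wrongly flagged (over-identification)
false_positive_rate = (flagged & ~off_track).sum() / (~off_track).sum()
# Share of non-graduates the thresholds missed (under-identification)
false_negative_rate = (~flagged & off_track).sum() / off_track.sum()

print(f"False positive rate: {false_positive_rate:.2f}")
print(f"False negative rate: {false_negative_rate:.2f}")
```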

Classical Statistical Models

The second grouping employs classical statistical modeling to investigate relationships among independent variables to predict a dependent variable. Examples include structural equation modeling (SEM), path analysis, and multiple regression (Jeon, 2015). These models offer predictive precision and can be used to test reliably for intervention differences.

Use of these models requires educators to have a strong background in statistics. Parametric analysis is generally preferred; it assumes the populations from which samples are drawn have specific characteristics and that samples are drawn under certain conditions. However, not all data meet parametric requirements, such as resembling a "bell-shaped," or normal, distribution. In such cases, the data must either be transformed to allow parametric analysis or nonparametric models are needed. Regardless, the at-risk determinations may be opaque and require repeated guidance in their use.
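
As a brief illustration, the sketch below applies one common normality check, the Shapiro-Wilk test, to simulated data; the data and the .05 cutoff are assumptions for the example, not a prescribed procedure.

```python
# A sketch of checking the normality assumption before choosing a
# parametric analysis; the GPA sample is simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gpa_sample = rng.normal(loc=2.9, scale=0.5, size=200)  # simulated GPAs

stat, p_value = stats.shapiro(gpa_sample)  # Shapiro-Wilk test of normality
if p_value < 0.05:
    print("Normality rejected: consider a transformation or a nonparametric model.")
else:
    print("No evidence against normality: parametric analysis may be reasonable.")
```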

Most studies that focus on EWS indicators use logistic regression (Koon & Petscher, 2015). Logistic regression tests multiple possible predictors (e.g., language, early literacy, nonverbal cognitive measures, and mother's education level) in a stepwise or hierarchical fashion to find the combination that best explains the variation in a dichotomous outcome, such as whether students are likely to have reading difficulties in grade two. Predictors that do not statistically improve the model are not retained. With multiple interacting variables at play, it may be difficult to use logistic regression to untangle the complex relationships among predictors. Additionally, models are typically adjusted and rerun, requiring practitioners to know how to run, interpret, and troubleshoot statistical analyses.
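
A minimal sketch of this approach follows, using simulated predictors and a simulated dichotomous outcome; the variables and their relationships are invented, and no stepwise selection is shown.

```python
# A hedged sketch of logistic regression for a dichotomous outcome.
# Predictors, data, and the outcome rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
literacy = rng.normal(100, 15, n)    # early literacy score (hypothetical)
cognitive = rng.normal(100, 15, n)   # nonverbal cognitive measure (hypothetical)
home_lang = rng.integers(0, 2, n)    # home-language indicator (hypothetical)
X = np.column_stack([literacy, cognitive, home_lang])
# Simulated outcome: 1 = likely reading difficulty in grade two
y = (literacy + rng.normal(0, 10, n) < 90).astype(int)

X_std = StandardScaler().fit_transform(X)  # put predictors on a common scale
model = LogisticRegression().fit(X_std, y)
print("Coefficients:", model.coef_.round(2))   # direction/strength of each predictor
print("Predicted risk:", model.predict_proba(X_std[:3])[:, 1].round(2))
```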

Machine Learning Models

The third grouping comprises what are sometimes described as next-generation methodologies, whose utility in the field of education has so far received limited research. Predictive modeling in this grouping came about because of increases in computing power and access to big data. Models are trained to make predictions based on past data using machine learning (Chung & Lee, 2019), a set of artificial intelligence algorithms that learn the relationship between descriptive features and a target feature. These models may be considered nonparametric because they do not require certain underlying characteristics of the data to be met (e.g., that the data adhere to a normal distribution).

There are a variety of machine learning models (Lakkaraju et al., 2013). In common use are random forests, AdaBoost, support vector machines, and decision trees. Currently, machine learning methods are the least known and least used for school-based EWS. However, this is likely to change as awareness of machine learning algorithms becomes more common among educators and the gains they provide in the specificity of individual student predictions are demonstrated.
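
For readers curious what these look like in practice, the sketch below runs the four named algorithms, as implemented in the scikit-learn library, on a simulated dataset; the data and the accuracy scoring are placeholders, not a recipe for an EWS.

```python
# The four algorithms named above, as implemented in scikit-learn,
# evaluated on simulated (placeholder) data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

models = {
    "Random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Support vector machine": SVC(random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```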

Although not extensive, research using machine learning algorithms for school-based EWS indicators does exist (Cheng et al., 2016; Chung & Lee, 2019; Knowles, 2015; Koon & Petscher, 2015; Lakkaraju et al., 2013). Knowles (2015) developed a statewide dropout EWS in Wisconsin that is one of the most comprehensive machine learning examples currently in place. He used available data related to student attendance, behavior, mobility, reading, math, demographics, and school cohort groups pulled from the Wisconsin student information system. Knowles then evaluated 28 separate algorithms, using an area under the curve (AUC) metric, to settle on a generalized linear model that provided accurate predictions of student dropout risk in grades 6 to 9, totaling more than 200,000 students.
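
The sketch below illustrates the flavor of that model-screening step, comparing just two candidate models by cross-validated AUC on simulated, imbalanced data (dropout-like outcomes are rare); it is not a reconstruction of Knowles's actual pipeline.

```python
# A sketch of AUC-based model screening in the spirit of Knowles (2015),
# who compared 28 algorithms; only two candidates are compared here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated imbalanced data: roughly 10% "at-risk" outcomes
X, y = make_classification(n_samples=2000, n_features=12,
                           weights=[0.9, 0.1], random_state=0)

candidates = [
    ("Generalized linear model", LogisticRegression(max_iter=1000)),
    ("Random forest", RandomForestClassifier(random_state=0)),
]
for name, model in candidates:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```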

Chung and Lee (2019) used a random forest model to predict, with accuracy exceeding 90%, which high school students were at risk of dropping out. To simplify, random forest models combine classifications from many tree-like graphs to make a synthesized decision that minimizes classification errors. Students who are not likely to drop out are not over-identified as at-risk (false positives), and conversely, students who are likely to drop out are not missed (false negatives).
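
A toy sketch of that combine-the-trees idea follows, on simulated data; note in the comments that scikit-learn's aggregation averages the trees' probabilities, which usually matches a simple majority vote.

```python
# A toy illustration of the ensemble idea: many trees each classify a
# student, and the forest synthesizes their outputs. Data are simulated.
# (scikit-learn averages the trees' predicted probabilities, which in
# practice usually agrees with a literal majority vote.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# Ask each individual tree to classify the first student
votes = np.array([tree.predict(X[:1])[0] for tree in forest.estimators_])
print("First 10 tree votes:", votes[:10].astype(int))
print("Forest decision:", forest.predict(X[:1])[0])
```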

Lakkaraju et al. (2013) partnered with two large U.S. school districts, together enrolling more than 200,000 students, to evaluate different machine learning models for predicting students not likely to finish high school on time. Twenty-six student attributes (e.g., gender, age, absences, achievement, disability, and mobility) were tested for the best predictive risk estimates using the different machine learning models. Random forest outperformed the other models studied. Importantly, the researchers were able to reliably identify risk before students went off track toward not graduating on time.

Implications for Educational Service Agencies

A reliable and valid EWS is becoming a necessary preventative tool in school districts' efforts to identify and mitigate student risk (Faria et al., 2017). However, many school districts use EWS that are relatively easy to construct and maintain but are rarely evaluated for their efficiency and effectiveness. Most districts do not have the in-house expertise or out-of-district support to undertake anything more sophisticated. Yet more advanced methods demonstrate greater ability to accurately identify and prioritize at-risk students.

Educational Service Agencies (ESAs) can play a constructive role in educating their member districts on the merits and limitations of EWS models and support districts’ efforts in implementing systems that are relevant to their preventative strategies. This may mean that ESAs will first need to acquire the understanding and requisite skills to support districts in this regard.  Like school districts, ESAs may not be sufficiently staffed for this service.

Second, models, especially those based on classical statistics or machine learning, require persons who have the wherewithal to pull large student data sets and who have access to data management systems typically not found at the local district level. By default, districts will turn to out-of-district support, something an ESA, with training and know-how, can take on.

Conclusion

There are several different EWS models a district can implement. We encourage districts, with ESA and other support, to consider a machine learning approach as a proactive tool for at-risk student identification, intervention development, and group prioritization. While its use in education is limited by a lack of prior implementation in the field, as well as of staff trained to execute and interpret advanced modeling, the evidence backing machine learning as an accurate and effective method of prediction is growing in other industries. School-based EWS models need to modernize to address students' needs so that districts can improve both institutional outcomes and individual students' success.

In two articles published in the online journal Perspectives, Collins and Leddick (October and November 2019) described the emerging roles that artificial intelligence and machine learning are now playing in education and the transformative impact they will have on how ESAs do their work.

In our article, we highlighted one application of machine learning that Collins and Leddick refer to as predictive analytics and decision support. Machine learning models will likely become the standard for how EWS are constructed and used. Not only can a machine learning model uniquely identify risk early in a student's career, it also offers promise as a means to monitor intervention effectiveness (Lakkaraju et al., 2013) more rapidly than other EWS models. This is because a machine learning algorithm repeatedly updates risk ratios based on the most current data, such as changes in attendance, discipline incidents, test scores, and so on.
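
A hedged sketch of that updating loop appears below: the model is refit on accumulated history and current students are re-scored. All data, dimensions, and the feature layout are invented for illustration.

```python
# A sketch of the updating behavior described above: refit on accumulated
# history, then re-score current students. All data are simulated, and the
# feature layout (attendance, discipline, scores, mobility) is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
history_X = rng.normal(size=(500, 4))   # past students' indicator values
history_y = rng.integers(0, 2, 500)     # past outcomes (1 = adverse outcome)
current_X = rng.normal(size=(30, 4))    # this term's refreshed records

model = RandomForestClassifier(random_state=0).fit(history_X, history_y)
risk = model.predict_proba(current_X)[:, 1]  # updated per-student risk estimates
print("Highest-risk students:", np.argsort(risk)[::-1][:5])
```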

A resource for readers interested in learning about or implementing a machine learning EWS is Dr. Rayid Ghani (rayid@cmu.edu). Dr. Ghani is a Distinguished Career Professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.

References

Cheng, X., Gutman, J., Torrence, J., Zhang, Z., Kenney, C., Vanderveld, R. A., & Wilson, K. H. (2016). Identifying and Influencing Students At Risk of Not Finishing High School. Chicago: Eric & Wendy Schmidt Data Science for Social Good Fellowship at the University of Chicago.

Chung, J. Y., & Lee, S. (2019). Dropout early warning systems for high school students using machine learning. Children and Youth Services Review, 346-353.

Collins, T., & Leddick, S. (2019, November). AI and the ESA: Continuing a Conversation. Retrieved from aesa.us/blog/?p=308

Collins, T., & Leddick, S. (2019, October). AI and The ESA: Opening a Conversation. Retrieved from aesa.us/blog/?=301

Faria, A.-M., Sorensen, N., Heppen, J., Bowden, J., Taylor, S., Eisner, R., & Foster, S. (2017, April). Getting students on track for graduation: Impacts of the early warning intervention and monitoring system after one year. Retrieved from http://ies.ed.gov/ncee/edlabs

Frazelle, S., & Nagel, A. (2015, January). A Practitioner’s Guide to Implementing Early Warning Systems. Retrieved from http://ies.ed.gov/ncee/edlabs

Jeon, J. (2015). The Strengths and Limitations of Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis or Multiple Regression Models. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 1634-1642.

Johns Hopkins School of Education. (2020, March 25). Retrieved from Center for Social Organization of Schools: https://education.jhu.edu/csos/

Knowles, J. E. (2015). Of Needles and Haystacks: Building an Accurate Statewide Dropout Early Warning System in Wisconsin. Journal of Educational Data Mining, 18-67.

Koon, S., & Petscher, Y. (2015, February). Comparing methodologies for developing an early warning system: Classification and tree model versus logistic regression. Retrieved from http://ies.ed.gov/ncee/edlabs

Lakkaraju, H., Miller, D., Aguiar, E., Bhanpuri, N., Addison, K., Shan, C., & Ghani, R. (2013, August 10). A machine learning framework to identify students at-risk of adverse academic outcomes. Retrieved from http://dx.doi.org/10.1145/2783258.2788620

MacIver, M. A. (2013). Early warning indicators of high school outcomes. Journal of Education for Students Placed At-Risk, 1-6.

National Forum on Educational Statistics. (2018). Forum Guide to Early Warning Systems. Washington, D.C.: U.S. Department of Education.

Stuit, D., O'Cummings, M., Norbury, H., Heppen, J., Dhillon, S., Lindsay, J., & Zhu, B. (2016, July). Identifying early warning indicators in three Ohio school districts. Retrieved from http://ies.ed.gov/ncee/edlabs

Glossary

  • Early Warning System – A system that identifies students who are at-risk for adverse educational outcomes (NFES, 2018).
  • Indicator – A measure of status of a construct with regard to its goal (NFES, 2018).
  • Predictive Analytics – Statistical techniques that use a variety of data to determine or estimate the likelihood of future outcomes or trends (NFES, 2018).
  • Machine Learning – A subset of artificial intelligence, it is a highly sophisticated, statistics- and computer-based system that uses multilayered data structures to predict outcomes through trained algorithms.
  • Regression Analysis – A statistical method used to model the relationship between predictor (independent) variables and outcome (dependent) variables. When more than one predictor variable is investigated, a multiple regression analysis is used to predict some phenomenon (Jeon, 2015).
  • Path Analysis – A statistical method used to investigate direct and indirect effects in order to explain relationships rather than predict outcomes (Jeon, 2015).
  • Structural Equation Model (SEM) – A multivariate statistical analysis used to study structural relationships between variables and latent constructs (Jeon, 2015).
  • Random Forests Model – A type of supervised machine learning algorithm that classifies observations by combining the outputs of many decision trees (Chung & Lee, 2019).

 

Adapted from NFES, 2018, pp. vii-viii, and Jeon, 2015
