
Education Service Agencies Leverage the Power of AI to Support School Districts

July 5, 2023

Education Service Agencies (ESAs) across the nation may find themselves in a position to create a new system of support for school districts that will revolutionize the way data is collected, analyzed, and used to inform future pedagogical decisions. Three ESAs piloted an Artificial Intelligence (AI) project in collaboration with the Massachusetts Institute of Technology (MIT). One ESA was successful in implementation, and two were not. The following case study explores several facets of AI, ESAs, and the three pilot projects. The introduction provides a definition and history of AI, the potential benefits and challenges associated with ESAs using AI to support public schools, and introductory descriptions of each pilot project. In addition, the case study includes an overview of the methods, results, implications, and conclusions of the AI pilot projects. The purpose of the pilot projects was to demonstrate the potential of AI in transforming the way ESAs support local school districts. By leveraging the power of AI, ESAs may be able to provide systems of support with a focus on personalizing learning, enhancing data analysis, and providing real-time insights to teachers and administrators to improve student outcomes.

Understanding AI

Hearing the words Artificial Intelligence or AI might make one think of the 1999 blockbuster The Matrix, in which humans live in a simulated reality created by intelligent machines while a group of revolutionaries fights against them, or the 2001 Spielberg film A.I. Artificial Intelligence, about the young robot David who longs to become a real boy. Both movies pose complex questions about the nature of consciousness and the ethical implications of artificial intelligence in society. Truly understanding AI outside of the glamorization of Hollywood is essential to grasping both the potential and the limitations of AI.

Artificial Intelligence refers to the ability of computer systems or devices, often referred to as machines, to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, perception, and natural language understanding. AI is a broad and interdisciplinary field that involves various techniques. Machine learning, for example, involves developing algorithms and statistical models that enable machines to learn from data and make predictions or decisions without being explicitly programmed to do so (Jordan & Mitchell, 2015). Deep learning, another technique, uses neural networks to extract high-level features or representations of data. The neural networks, modeled after the structure and function of the human brain, consist of interconnected nodes, each of which performs a specific function. Those functions include feature extraction, pattern recognition, and decision-making, to name a few (Goodfellow, 2016). A third technique, natural language processing (NLP), is what makes AI feel so personable and accessible. Used in many practical applications, such as chatbots, voice assistants, search engines, and social media analysis, NLP is the part of AI that seeks to understand, interpret, and generate the natural language humans use to communicate with one another (Pal & Bandyopadhyay, 2019). Other techniques include computer vision, robotics, and expert systems. The goal of AI is to develop intelligent machines that can perform complex tasks and make decisions autonomously, with or without human intervention. AI has numerous applications in various industries, including healthcare, finance, transportation, education, and entertainment, among others, and is considered one of the most transformative technologies of the 21st century.
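As a minimal, self-contained illustration of the machine learning idea described above, in which a program learns a relationship from examples rather than being explicitly told it, consider fitting a line to invented data by ordinary least squares:

```python
# Toy machine learning example: "learn" the line y = a*x + b from
# observed (x, y) pairs by ordinary least squares, then use the
# learned relationship to predict an unseen case.

def fit_line(xs, ys):
    """Return slope a and intercept b that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Made-up training data: hours studied vs. quiz score
hours = [1, 2, 3, 4, 5]
scores = [52, 61, 70, 79, 88]

a, b = fit_line(hours, scores)
predicted = a * 6 + b  # predict the score for 6 hours of study
print(round(a, 2), round(b, 2), round(predicted, 1))  # 9.0 43.0 97.0
```

The data values are invented for illustration; the point is that the program derives the rule from examples rather than having the rule coded in.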

History of AI

The idea of creating intelligent machines has been around for centuries, but the field of AI as we know it today started to emerge in the mid-20th century, with the development of the first electronic computers. The Dartmouth Conference, held in 1956, gathered expert researchers from various disciplines and is believed to mark the first formal discussion of creating an intelligent machine (McCarthy, et al., 1955). Initial versions of what researchers considered an intelligent machine included programs that could play chess, prove mathematical theorems, and understand natural language (Samuel, 1959). The 1960s and 1970s brought a developmental leap with machine learning algorithms that enabled computers to learn from data and improve performance over time without further human coding or intervention, which led to decision tree algorithms, neural networks, and other statistical models (Newell & Simon, 1976). In the 1980s and 1990s, researchers refined those algorithms further, shifting toward expert systems designed to mimic the decision-making processes of humans in fields like medicine and engineering. The great limitation during that time was that those expert systems were unable to handle the uncertainty and complexity of real-world problems (Buchanan & Feigenbaum, 1993). Today, breakthroughs in areas such as computer vision, natural language processing, and robotics are possible as researchers utilize large, complex, and diverse data sets in conjunction with computing power unlike anything developers have seen in the past (LeCun, Bengio, & Hinton, 2015). As a result, virtual assistants such as Siri and Alexa, self-driving cars, and medical diagnosis tools have launched AI into one of the most transformative technologies of our time, with the potential to reform industries and change the way we live and work.

Education Service Agencies and AI

Artificial Intelligence has the potential to radically change the way in which Education Service Agencies are able to provide support and services to their school districts. For example, educators are often overwhelmed with data. The educational system is saturated with data beyond what one person, or even a team, can feasibly analyze, interpret, and use to inform future data-driven pedagogical decisions before new data arrives and the next cohort of students is seated in the classroom. With the assistance of AI, an ESA can identify students who are at risk of falling behind or dropping out by analyzing data on student attendance, behavior, and performance to identify patterns and produce solutions in a matter of minutes. The most time-consuming part is determining what relevant data the education specialist would like the AI to interpret.

However, the support and services do not stop there; the benefits of the potential relationship between AI and ESAs are numerous. AI can eliminate hours of staff work when education specialists take the aggregate data and recommendations it produces and ask it to generate tailored instruction to meet each student's individual needs, interests, and abilities. Personalized learning paths, resources, and activities suited to each student could eliminate hours of lesson design planning by teachers, interventionists, and administrators. ESA education specialists can review the instructional plans produced by the AI and train respective staff members on implementation. Targeted support and early intervention for a student identified as at risk has the potential to close learning gaps all too common in a post-pandemic educational setting.

Summary of ESA and AI Pilot Projects

Three Education Service Agencies collaborated with the Massachusetts Institute of Technology (MIT) in a pilot project beginning in 2022, seeking to discover ways ESAs could utilize AI in a manner that would positively impact the service delivery, training, materials, and support ESAs offer to local school districts. One ESA was successful with implementation; two were not. It is important to review the outcomes of all three projects to understand the potential and the barriers that exist in developing future relationships between ESAs and AI.

Method & Results

There were several phases of the project for all service centers. In Phase 1, the ESC Region 12 team, together with the Hamilton County ESC team, the Heartland AEA team, and the MIT developer team, underwent extensive training on artificial intelligence in general. The participants learned what happens when an AI is tasked, how the machine learns, what deployments look like once the AI has published outputs, the definition of an algorithm, and more.

After learning about AI itself and the process for developing AI, the teams committed to using a business framework for the theoretical framework of the project. A business framework refers to a structured approach used to organize, analyze, and solve complex business problems. It is essentially a set of guidelines, principles, and tools used to create a framework for decision-making and strategic planning in business. Business frameworks may include various components such as mission statements, business models, strategic plans, performance metrics, and process improvement methodologies. They are used to help businesses better understand their operations, identify opportunities for improvement, and make informed decisions that can help them achieve their goals (Wang, Sun, & Sun, 2021). Training and review of the theoretical framework occurred via Zoom for four-hour sessions, twice a week, over a four-week period.

Once the teams committed to a theoretical framework, had a solid understanding of AI, and understood the general purpose and direction of the pilot, they began to brainstorm questions. Developing the right question to ask an AI is a critical process because the accuracy and usefulness of the output depend on the quality of the input. Asking the right question ensures that the data being fed into the AI algorithm is relevant, complete, and properly structured (Lakkaraju, Kamar, Caruana, & Leskovec, 2019).

When an AI is fed with the right question and data, it can more accurately predict outcomes or identify patterns within the data. On the other hand, if the question being asked is not specific or relevant enough, the AI may not be able to accurately identify the patterns within the data or generate useful insights. Lack of specificity can lead to inaccurate or irrelevant results, which can have significant consequences. In addition, developing the right question coupled with enough learning data helps to ensure that the AI is being used to address the right problem or challenge. This can save time, resources, and effort, and can help businesses and organizations to more effectively leverage the power of AI to achieve their goals.

Pilot Project 1

Education Service Center Region 12 in Waco, Texas, designed an AI pilot entitled Early Predictor for Tailored Interventions Model. The team used a predictive model, which analyzes historical data to predict future outcomes by utilizing statistical algorithms, machine learning techniques, and data mining to identify patterns and relationships in data. The model is trained on a large dataset, allowing the AI to recognize correlations and trends in the data that can be used to make predictions about future events. Predictive models can be used in a variety of fields, such as finance, healthcare, marketing, and education, to make informed decisions and improve outcomes (Ko & Chung, 2014). They are valuable tools for organizations that want to anticipate future trends and make data-driven decisions.

The predictive model for the ESC Region 12 team involved identifying a specific question to be solved using AI, selecting a massive amount of learning data, creating a data dictionary, and partnering with data scientists to develop an algorithm or code to determine the outputs. The team fine-tuned the algorithm by checking and double-checking variables related to student dropouts. The goal was to develop the most accurate information for districts to implement targeted interventions early to decrease dropout rates. The team planned to pilot the project with a few districts before scaling this new service. The methodology involved a collaborative approach to identify a specific problem, acquire the necessary data, develop an algorithm, and test the outputs with the goal of providing a valuable service to districts. ESC Region 12 elected to commit to the question, "Which students are likely to drop out of high school based on the variables in the data set?"
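The shape of that workflow, labeled historical records in, a prediction for a new student out, can be sketched in miniature. The snippet below is purely illustrative: the field names are hypothetical rather than actual PEIMS variables, and a simple nearest-neighbor rule stands in for the production algorithm.

```python
# Hypothetical sketch of the predictive-model workflow: train on labeled
# historical records, then score a new student. Features are
# (attendance_rate, gpa, discipline_referrals); the label is whether
# the student eventually dropped out. All values are invented.
import math

history = [
    ((0.98, 3.6, 0), False),
    ((0.95, 3.1, 1), False),
    ((0.80, 2.1, 4), True),
    ((0.75, 1.8, 6), True),
    ((0.92, 2.9, 2), False),
]

def predict_dropout(student, training=history):
    """Label a new student with the outcome of the most similar record."""
    _, label = min(((math.dist(student, feats), outcome)
                    for feats, outcome in training),
                   key=lambda t: t[0])
    return label

at_risk = predict_dropout((0.78, 2.0, 5))   # resembles past dropouts
on_track = predict_dropout((0.97, 3.4, 0))  # resembles past graduates
print(at_risk, on_track)  # True False
```

A production model would draw on far richer features and a properly trained and validated algorithm; the sketch only shows why curated, labeled historical data is the prerequisite for any prediction.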

Phase 2, according to one principal investigator from the ESC Region 12 team, was the most arduous and complex part of the process in the pilot (Mayton, personal communication, March 8, 2023). ESC Region 12 and MIT principal investigators met multiple times and had lengthy discussions on what kind of data would elicit the outputs that would directly and correctly indicate the prediction of which students were likely to drop out of high school. ESC Region 12 collaborated with the Public Education Information Management System (PEIMS) department. PEIMS is a data collection system used by the Texas Education Agency (TEA) to collect information about public school students and educators in Texas. ESC Region 12 also collaborated with OnDataSuite staff, a vendor that provides educational data management and analysis tools to public schools in Texas. OnDataSuite includes a collection of products that allow users to access, manage, and analyze various types of education data, including student assessment data, graduation rates, enrollment data, and financial data. The software offers local education agencies (LEAs) in Texas access to more data than ever before, allowing educators to make informed decisions regarding student performance, program effectiveness, and resource allocation. The data used in the pilot consisted of over 1.1 million records spanning academic school years 2016 to 2022, with 158 variables identified as potentially impacting whether a student graduates from high school (n=1,100,000, p=158). The data were curated and scrubbed of personally identifiable information (PII) through a collaborative effort among data scientists from Relativity6, OnDataSuite staff, ESC Region 12, and MIT. Each row in the data file represented a unique student, and each column represented a variable.

Principal Investigator Karen Mayton from ESC Region 12 developed a data dictionary for the data, a file or set of files that contains a detailed description of the data used in a database or a project. The data dictionary provides information about the data, including the data types, field names, descriptions, and any constraints or rules that apply to the data. Data dictionaries are commonly used in database management systems to help users understand the structure and content of the database, and they can also be used in data analysis or machine learning projects to ensure consistency and accuracy of data (Borgman, 2015). The data dictionary functioned as a road map or legend for the AI, teaching the AI how to interpret variables such as ethnicity, standardized assessment scores, grade level, etc. The data dictionary also instructed the AI on whether specific variables should be weighted.
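A data dictionary of this kind can be sketched in code. The structure below is hypothetical (the field names, allowed values, and weights are illustrative, not the actual ESC Region 12 dictionary), but it shows how such a file both documents each variable and lets software validate records against the documented rules:

```python
# Illustrative data dictionary: each field is described by its type,
# a human-readable description, allowed values, and an optional weight
# hint for the model. All entries are hypothetical.
data_dictionary = {
    "grade_level": {
        "type": "integer",
        "description": "Student grade level at time of record",
        "allowed": list(range(9, 13)),  # grades 9-12
        "weight": 1.0,
    },
    "staar_reading": {
        "type": "category",
        "description": "STAAR reading performance label",
        "allowed": ["Did Not Meet Grade Level", "Approaches Grade Level",
                    "Meets Grade Level", "Masters Grade Level"],
        "weight": 1.5,
    },
}

def validate(record, dictionary=data_dictionary):
    """Return the fields whose values violate the dictionary's rules."""
    return [field for field, spec in dictionary.items()
            if field in record and record[field] not in spec["allowed"]]

errors = validate({"grade_level": 14,
                   "staar_reading": "Meets Grade Level"})
print(errors)  # ['grade_level']
```

In this way the dictionary serves exactly the "road map or legend" role described above: it tells both humans and machines how each column is to be read and checked.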

Phase 3 was a collaboration with Relativity6, a technology company that provides artificial intelligence and machine learning solutions to clients in various industries, including education. They offer predictive analytics tools that help organizations make data-driven decisions to improve overall performance. Data scientists at Relativity6 took the data and data dictionary and began writing the algorithm, developing the code for the AI that would elicit the desired output: to predict potential dropouts. Relativity6 data scientists recognized slight adjustments the AI would require to produce the desired results and sent the team back to Phase 2 for additional data, such as the United States Census Bureau’s most recent census block data, and to adjust some of the descriptors in the data dictionary for clarity. There were also some conflicts in the data. For example, initially the data-gathering team had collected the State of Texas Assessments of Academic Readiness (STAAR) testing data from academic years 2014 to 2022. However, when initial outputs showed conflict errors in the STAAR variables columns, data scientists asked the team to investigate. The team realized that in academic school years 2014-2015, STAAR results were published as either Level 1, Level 2, or Level 3, while in 2016 and the years following, the STAAR results were published as "Did Not Meet Grade Level," "Approaches Grade Level," "Meets Grade Level," and "Masters Grade Level."  Because the manner in which the scores were published varied, the outputs were skewed. It took several rounds of trial and error before the data set was harmonized, resulting in accurate reports generated by the AI.
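The harmonization step can be illustrated with a simple crosswalk that translates the older "Level" labels into the post-2016 scheme so every row uses one vocabulary. The specific Level-to-label correspondence below is an assumption for illustration only; the team's actual mapping may have differed.

```python
# Sketch of the STAAR label harmonization: map 2014-2015 "Level" labels
# onto the post-2016 performance labels. The correspondence shown is an
# illustrative assumption, not the pilot's actual crosswalk.
LEVEL_CROSSWALK = {
    "Level 1": "Did Not Meet Grade Level",
    "Level 2": "Approaches Grade Level",
    "Level 3": "Meets Grade Level",
}

def harmonize(label):
    """Translate an old-style label; pass newer labels through unchanged."""
    return LEVEL_CROSSWALK.get(label, label)

rows = ["Level 2", "Masters Grade Level", "Level 3"]
harmonized = [harmonize(r) for r in rows]
print(harmonized)
```

Once every record speaks the same scheme, the conflicting-column errors described above disappear and the model sees one consistent variable.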

While the pilot project in ESC Region 12 is still ongoing, it is the only project that has seen success. After working through challenges, the AI was able to publish an output predicting which students were likely to drop out of high school based on the 158 variables presented, with 86% accuracy for the 2016-2022 academic school years. The next step in the pilot is to determine whether the AI can effectively draft early intervention plans that students identified as likely dropouts could have begun in their freshman year of high school to mitigate the likelihood of dropping out. The idea is that Education Service Agencies will use AI to determine which students are at risk of dropping out, determine why those students are at risk, and then develop early intervention plans tailored to each student's specific indicators to mitigate that risk. Concerns about using AI to draft intervention plans are summarized in the Implications section of this case study.

Pilot Project 2

Hamilton County Education Service Center in Cincinnati, Ohio, attempted two different pilot projects. The first proposal focused on developing an early warning system to identify the needs of middle school students in terms of classroom management. Designers of the pilot sought to work within the Danielson Framework, a widely used instructional framework designed to evaluate and improve teaching practices. The framework is organized into four domains of teaching responsibility, including planning and preparation, classroom environment, instruction, and professional responsibilities. The planning and preparation domain encompasses the teacher's knowledge of content and pedagogy, as well as their ability to plan and design effective instruction. The classroom environment domain focuses on the teacher's ability to create a safe and positive learning environment, manage classroom procedures and student behavior, and establish positive relationships with students. The instruction domain is concerned with the teacher's ability to deliver instruction that is aligned with instructional goals and student needs, and to use a variety of instructional strategies to engage students and promote learning. Finally, the professional responsibilities domain includes the teacher's commitment to continuous professional learning and growth, as well as their ability to communicate effectively with students, colleagues, and families, and to maintain accurate records and meet professional obligations (Danielson, 2013).

The Danielson Framework is commonly used as a tool for teacher evaluation and professional development, providing a common language and framework for discussing teaching practices, and helping teachers to identify areas for growth and improvement (Hawkins & Smith, 2018). If an AI were able to analyze data gathered during walkthroughs and other data-gathering opportunities with the Danielson framework in mind and could identify areas for growth and improvement, the time the learning machine would save administrators in data analysis would be invaluable.

However, designers faced challenges in obtaining enough data to feed the AI, particularly teacher-specific discipline and classroom referral data, which was sensitive and difficult to obtain. Additionally, designers realized that the project would have to address privacy concerns and convince public schools to trust them with their data. Administrators were hesitant to release sensitive information about teacher behaviors especially, making it difficult to acquire the necessary data to make the AI effective. The team also noted the importance of considering ethical concerns, such as confidentiality and the reliability of qualitative data for an AI that works particularly well in the quantitative realm. Designers were looking for specific attributes in the Danielson Framework in walkthroughs, such as establishing a culture of learning and managing student behavior.

As with any qualitative data gathering, however, the subjective interpretation each observer recorded as data even after calibration held the potential for personal bias, lack of objectivity, and/or differing perspectives (Creswell, 2013). In addition to concerns about the quality of data, the team was unable to conduct enough walkthroughs to gather the amount of data that AI would require in order to make the output valid. In an attempt to approach the data-gathering from a more holistic and objective angle, Hamilton County ESC partnered with a data company to pull publicly published state data from all schools. The idea was to look at the impact the ESC had on the schools they served and assess the value the ESC staff added within their coaching as identified by any positive trends in the data; however, the data provided at the state level did not drill down to the level of teacher efficacy evaluation needed to meet the rationale of the project.

Another project idea was an AI that would generate lesson plans for substitute teachers in math. However, the team faced challenges in acquiring enough qualified substitutes in the mathematics content area. With the nationwide shortage of qualified mathematics teachers in the classroom, it is rare to find a qualified mathematics teacher substituting as that substitute would likely be offered a permanent position as soon as administration learns of the substitute’s qualifications. The team discovered that even if the AI could publish an engaging lesson plan relevant to the course requiring a substitute for the day, the substitute was hampered in his or her ability to deliver the lesson plan in an effective manner, thereby making the purpose behind the project moot.

Pilot Project 3

Heartland Area Education Agency (AEA) in Johnston, Iowa conducted three pilot projects, termed the NICU Project, SLP Assignment Project, and Library Themes Project. The NICU Project aimed to predict which cases from the NICU would require Early ACCESS services, reduce the number of days families are without services after they are discharged from the NICU, and personalize outbound communications to parents about accessing care.

In Iowa, AEAs are required to provide initial service coordination for Early ACCESS, and Heartland AEA, one of nine in the state, is unique in providing Early ACCESS service coordination for NICU infants born with exceptional needs in two hospitals in Des Moines. As the team worked on the project, they faced challenges due to limited data about families who signed release forms and the lack of all necessary information required for accurate predictions. Heartland AEA ultimately decided against further investment, determining the required effort and time to gather all necessary data would be counterproductive to the level of direct impact the AEA could have on children and families. Sheila King, Chief Information Officer and team member for the pilot, reflected that rather than abandoning the project altogether, her team has connected with other AEAs, which are showing interest in the idea. With more AEAs and hospitals involved, the effort required to do this project becomes more feasible and impactful. The hope is that Heartland AEA will not abandon the project altogether but will be able to effectively engage in collaborative efforts with other AEAs and state agencies to ensure the successful realization of the NICU Project.

The SLP Assignment Project aimed to create and maintain equitable Speech Language Pathologist (SLP) assignments across the agency. If the AEA has a team of 140 SLPs serving 53 school districts and the service agency wants to assign equitable caseloads, to include travel time, number of assigned students, number of assigned minutes overall for each caseload, etc., the amount of data collection and analysis a Special Education Director would have to do becomes nearly insurmountable. If the AI could analyze the data and provide an equitable schedule, caseload, and complexity of each individual need, the time saved and the support the service agency could provide to the district would be phenomenal.
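As a rough sketch of what automated caseload balancing might look like, the snippet below uses a greedy longest-processing-time heuristic, always assigning the next-largest caseload item to the currently least-loaded SLP. It is a simplification of the pilot's actual goal: students are represented only by weekly service minutes, and the travel time, paperwork, and case complexity the team identified as essential inputs are deliberately omitted.

```python
# Greedy longest-processing-time heuristic for balancing caseloads:
# sort students by weekly service minutes (largest first) and always
# give the next student to the SLP with the smallest current load.
import heapq

def balance_caseloads(minutes_per_student, n_slps):
    """Assign students (by weekly service minutes) across SLPs."""
    # min-heap of (total_minutes_assigned, slp_index, student_list)
    heap = [(0, i, []) for i in range(n_slps)]
    heapq.heapify(heap)
    for m in sorted(minutes_per_student, reverse=True):
        total, i, students = heapq.heappop(heap)  # least-loaded SLP
        students.append(m)
        heapq.heappush(heap, (total + m, i, students))
    return sorted(heap)

loads = balance_caseloads([90, 60, 60, 45, 30, 30, 120], n_slps=3)
totals = [t for t, _, _ in loads]
print(totals)  # [135, 150, 150]
```

Even this toy version hints at the ethical point raised above: the heuristic is only as fair as the "cost" assigned to each case, which is exactly why the team wanted SLP input before weighting variables.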

In Heartland AEA’s pilot, however, the team lacked sufficient data to feed the AI. Concerns were raised about the ethical implications of determining the cost or weight of variables such as travel time, paperwork, case complexity, and other factors SLPs encounter daily without the input of SLPs directly. Given the projected timeline for the pilot’s completion, the team agreed that there was simply not enough time to involve all necessary stakeholders in the data-gathering phase. While this project was not yet ready for AI implementation, it remains a high-interest and probable future project for Heartland AEA. In fact, Ms. King clarified that the SLP Assignment Project was neither abandoned nor unsuccessful but that costing tasks is a work in progress, and the AEA looks forward to resuming work on the project as time allows.

The Library Themes Project focused on optimizing physical resources in libraries by understanding district requests and forecasting which resources are needed. Heartland AEA has its own repertoire of resources for districts, and the service agencies in Iowa could leverage the power of AI to quickly gather and provide supplies, materials, texts, and more for myriad requests from any district in the region. The project aimed to improve the book title selection process and purchasing decisions by using the topics and reading/interest levels of books requested plus the cycle of requests. By using a theme database history to identify frequently requested topics and specific titles, the project sought to save library staff time in selecting books. The project suggested using AI in conjunction with the agency's online catalog to recommend titles based on past searches, interests, and reading levels. The project aimed to address the external demand issues of educators needing access to research-based instructional strategies, materials, and resources that are targeted to their grade level or grade span and subject area to impact teaching and learning. The project objective was to create an individualized instructional strategies and materials digital newsletter that could be accessed through an app. The district could request resources for specific units, and the AEA could utilize the AI to tailor the perfect personalized set of resources for that request. The pilot aimed to provide service agencies with the tools to meet the personalized demands of educators while saving time and increasing efficiency.

Overall, Heartland AEA's pilot projects demonstrate the potential benefits of using AI in education service agencies to optimize physical resources, provide personalized learning paths, and improve student outcomes. However, the implementation of AI in service agencies requires careful consideration of ethical implications and privacy concerns. Additionally, the frontloading work for AI can be cost- and time-prohibitive, and inter-agency communication and cooperation are essential during the data-gathering process. By addressing these challenges, AI has the potential to revolutionize the way ESAs support local school districts, and Heartland AEA's pilot projects serve as valuable examples for future development in this area.


Implications
Leading expert Dr. Duncan Simester shared a wealth of insight regarding the future implications of the intersection of artificial intelligence and education service agencies. Dr. Simester is a prominent marketing researcher currently serving as the Nanyang Technological University Professor of Management and Marketing at the Massachusetts Institute of Technology (MIT) Sloan School of Management. He has made significant contributions to the field of marketing, especially in the areas of pricing, product line design, and customer behavior. His research often focuses on the practical applications of quantitative modeling to solve real-world marketing problems. In addition to his academic contributions, Dr. Simester has served as a consultant to many large companies, including Cisco, L'Oreal, and Fidelity Investments. He was the lead innovator in engaging the three education service agencies in this case study.

According to Dr. Simester, about four years ago, he and his team began looking into how AI could be used to solve problems in education. Dr. Simester distinguishes among three types of AI problems: grouping, pure prediction, and changing behaviors. Grouping is the easiest and most attainable type of problem to tackle, but it has perhaps the least impact. Grouping refers to the task of categorizing data points or objects into different clusters or groups based on their similarities or differences. Grouping is a form of unsupervised learning, where the AI system is not given specific labels or categories to predict; rather, the system uses clustering algorithms to identify natural groupings within the data. Grouping tasks can be used for a wide range of applications, such as image recognition, customer segmentation, and anomaly detection. The goal of grouping in AI is to identify meaningful patterns or relationships within the data that can be used to inform decision-making or create predictive models (Agrawal, 2021).  Heartland AEA’s Library Themes Project would be a good example of a grouping AI task.
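Grouping of this kind can be sketched with a tiny clustering routine. The one-dimensional k-means below splits made-up reading levels of requested titles into two natural groups without any labels, in the spirit of the Library Themes Project; real clustering would work over many dimensions (topic, grade span, request cycle) rather than a single number.

```python
# Minimal grouping (clustering) example: one-dimensional k-means that
# finds natural groups in unlabeled data. Values are invented reading
# levels of requested library titles.

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values into k groups; return sorted centroids."""
    lo, hi = min(values), max(values)
    # spread the starting centroids across the data range
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# An early-grade group and an upper-grade group emerge with no labels given
levels = [1.5, 2.0, 2.2, 6.8, 7.0, 7.5]
centers = kmeans_1d(levels, k=2)
print([round(c, 2) for c in centers])  # [1.9, 7.1]
```

The algorithm is never told which titles belong together; the two groups fall out of the data itself, which is the defining property of unsupervised grouping.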

Pure prediction involves using AI to make predictions based on existing data, such as ESC Region 12’s pilot predicting which students are most likely to drop out of school. Pure prediction refers to the task of using machine learning algorithms to make predictions or forecasts about future events or outcomes based on historical data. Unlike classification or clustering tasks, pure prediction does not involve categorizing or grouping data points, but rather using statistical models to analyze patterns and relationships within the data and make predictions about future values. Pure prediction is a form of supervised learning, where the AI system is trained on a set of labeled data to identify correlations and patterns, and then uses those patterns to make predictions about new, unseen data. Pure prediction tasks can be used for a wide range of applications, such as predicting customer churn, forecasting sales, or estimating risk in insurance. The goal of pure prediction in AI is to create accurate and reliable predictive models that can inform decision-making and improve outcomes (Le, 2021).

Changing behavior, on the other hand, involves using AI to change outcomes.  Changing behavior refers to the task of using machine learning algorithms to influence or modify outcomes, such as encouraging users to take a specific action or to adopt a particular behavior. Changing behavior tasks can be used for a wide range of applications, such as personalized recommendations, virtual coaching, and adaptive learning. When the goal is to change human behavior, AI generally requires training data that includes experimentation on the available firm actions. Both the experimentation and the policies that AI produces can also introduce important ethical considerations. Therefore, it is important to consider the ethical implications of using AI to change behavior and to ensure that AI systems are designed and used in a way that aligns with ethical principles and values (Bhatt, 2019). For example, if ESC Region 12’s pilot on predicting dropouts went on to create tailored intervention plans to mitigate the risk of those predicted as likely to drop out, then the pilot would be venturing into the changing human behavior territory.

Dr. Simester notes that pure prediction models are a good place to start, as they do not require as much experimentation and typically do not introduce the same ethical considerations as changing behaviors. He emphasizes the need for greater sophistication in understanding data and problems, including what kind of data is required to answer a specific problem.

Dr. Simester posits that ESAs have the potential to lead the way in AI and education innovation, as ESAs have an unparalleled level of access to privileged and protected data as well as comprehensive knowledge of the problems school districts face. ESAs are in a unique position to define the way in which AI supports education and then to take that innovation to market, monetizing the services AI can provide to districts. He estimates that without ESAs at the forefront of this exploration of AI in education, the pace of innovation will be significantly slower.

To this end, Dr. Simester chose three different service agencies to launch pilot projects based on relationships he had built through the Business Strategy Framework symposiums and conferences AESA held over the years. Teams began to ideate and then filter ideas based on feasibility, considering the problems they wanted to solve and the data the AI would require to provide solutions for those problems. Dr. Simester emphasized the importance of ethical considerations in exploring the intersection of AI and education. He noted that there is a great desire to solve the world's problems, but the data required to solve big problems is difficult to attain. Paring down the specificity of the task to a level that requires an attainable compilation of data is paramount to an AI pilot’s success.

The findings from Dr. Duncan Simester's research on the intersection of AI and ESAs have significant implications for future research and practice in education. Dr. Simester's work demonstrates the potential of AI to solve real-world problems in education, particularly when coupled with the expertise of ESAs. However, it also highlights the need for careful consideration of both technical and ethical aspects of AI in education.

One of the key implications of using AI in education is the importance of using a structured approach to strategic planning, such as the business strategy framework, when developing and implementing AI solutions in education. This approach helps to ensure that AI solutions are aligned with the overall objectives of education organizations, are feasible and ethical, and are implemented and evaluated in a systematic way. Frameworks such as the Danielson Framework and the Business Strategy Framework are valuable tools for supporting the reliability and validity of results for education leaders and policymakers who are exploring the potential of AI in education.

Another important implication of the use of AI in education is the need for greater sophistication in understanding data and problems in AI applications in education. As Dr. Simester notes, correlation does not necessarily imply causation, and it is critical to have a deep understanding of the data required to answer specific problems. This underscores the need for education professionals to be well-versed in data science and AI, and to collaborate with data scientists and ethicists to ensure that AI solutions in education are effective, equitable, and ethical.

Moreover, Dr. Simester highlights the potential of ESAs to lead the way in AI and education innovation. ESAs are well-positioned to develop and implement AI solutions that can help address the complex problems facing education today, leveraging their expertise and access to privileged and protected data in a manner beyond the capability of third-party agencies. ESAs are also uniquely positioned to create a sustainable business model wherein the ESA can monetize the advanced services available to provide to districts through the innovative possibilities of AI. However, it is also critical to ensure that ESAs prioritize ethical considerations and are transparent in their use of AI, particularly when it comes to the collection, analysis, and use of student data.

Overall, Dr. Simester's work in exploring the intersection of AI and education, as well as his belief in the potential of ESAs to lead the way in this area, highlights the importance of considering both the technical and ethical aspects of AI in education. As AI continues to shape the future of education, it is essential that education professionals, policymakers, and stakeholders work together to ensure that AI solutions are effective, equitable, and ethical, and that they are implemented in a way that benefits all students.


In conclusion, the successful implementation of AI in Education Service Agencies (ESAs) has the potential to revolutionize the way data is collected, analyzed, and used to inform future pedagogical decisions, saving thousands of hours of valuable work time for staff. However, it is important to recognize that AI is a complex and interdisciplinary field that requires time and deep thinking to consider all perspectives, including ethical implications and privacy concerns. Without a well-rounded data set, the AI cannot produce valid and reliable output, construct tailored material to address a variety of needs, or make predictions that could potentially change the trajectory of students on an at-risk path. Unfortunately, the frontloading work for that well-rounded data set can be cost- and time-prohibitive. Inter-agency communication and cooperation are essential during the data-gathering process and must be preceded by lengthy conversations on exactly which variables an AI will require to provide the kind of valid and reliable output a service agency is seeking. Despite these challenges, the benefits of the potential relationship between AI and ESAs are numerous, including personalized learning paths, resources, and activities that can eliminate hours of lesson design planning by teachers and provide targeted support and early intervention for at-risk students. With careful consideration of these factors, ESAs can leverage the power of AI to provide systems of support that focus on enhancing data analysis and providing real-time insights to improve student outcomes.


Agrawal, V. (2021, April 13). What is clustering in machine learning? Analytics Vidhya.

Bhatt, N. (2019, May 28). Reinforcement learning explained. Medium.

Borgman, C. L. (2015). Big data, little data, no data: Scholarship in the networked world. MIT Press.

Buchanan, B. G., & Feigenbaum, E. A. (Eds.). (1993). Expert systems: The state of the art. Digital Press.

Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed.). Sage Publications.

Danielson, C. (2013). The framework for teaching evaluation instrument (2013 ed.). Charlotte Danielson, LLC.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.

Hawkins, A., & Smith, B. (2018). The Danielson framework for teaching: An overview and critique. Educational Researcher, 47(2), 73-80. doi: 10.3102/0013189X17753295

Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.

Ko, H., & Chung, W. (2014). Learning from imbalanced data sets: A comparison of various strategies. ACM Transactions on Intelligent Systems and Technology, 5(4), 1-27.

Kulkarni, C., Cambre, J., Chavan, G., & Bauskar, P. (2019). Role of artificial intelligence in education: Current applications and future trends. Journal of Computing in Education, 6(1), 1-12.

Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2019). Faithful and customizable explanations of black box models. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 295-302. doi: 10.1145/3306618.3314236

Le, J. (2021, March 8). What is predictive modeling? Towards Data Science.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

Mayton, K. (2023, March 8). Personal interview.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence.

Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113-126. doi: 10.1145/360018.360022

Pal, A. R., & Bandyopadhyay, S. (2019). A survey of natural language processing techniques and applications. International Journal of Speech Technology, 22(2), 307-323.

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210-229. doi: 10.1147/rd.33.0210

Smith, J. (2022). Expert Dr. Duncan Simester discusses the implications of artificial intelligence in education service agencies. Education Today, 38(1), 34-39.

Tang, X., & Wang, J. (2020). Applications of artificial intelligence in education: A comprehensive review. Journal of Educational Technology Development and Exchange, 13(1), 1-16.

Thompson, A. A., Peteraf, M. A., Gamble, J. E., & Strickland III, A. J. (2020). Crafting & executing strategy: The quest for competitive advantage: Concepts and cases. McGraw-Hill Education.

Wang, J., Sun, X., & Sun, Y. (2021). Framework for business model innovation: A systematic review and research agenda. Journal of Business Research, 128, 62-74.

Weng, Y. H., & Yang, Y. T. C. (2019). Artificial intelligence in education: A review. Journal of Educational Technology Development and Exchange, 12(1), 1-22.

Authors: Dr. Amber Loubiere & Doctoral Candidate Karen Mayton

ESC Region 12

Waco, Texas

Dr. Loubiere’s contact information:

Amber Loubiere, EdD

Education Specialist, McKinney-Vento & Title I

Federal Programs

Education Service Center Region 12

o: 254-297-1226   c: 254-350-0809

f:  254-666-0823  |  e:



