Markkula Center for Applied Ethics

AI in Schools


Implications for Curriculum and Instruction

Yael Kidron

Yael Kidron is the director of the Character Education program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.

In a recent talk about the opportunities and risks of AI in education systems, Professor Rose Luckin identified several aspects of human intelligence that are likely to become essential in the Fourth Industrial Revolution: interdisciplinary academic intelligence; meta-knowing; social intelligence; metacognitive intelligence; meta-emotional intelligence; meta-contextual intelligence; and accurate perceived self-efficacy.

Finding ways to advance these abilities in elementary, secondary, and post-secondary school students can support students’ academic, social, and emotional growth. Here is why.

1. AI can inhibit cognitive effort.

One can envision developing “AI-dependence”—i.e., letting the machine do our work when it requires high cognitive effort. Take, for example, the “Google Effect”: When we know information is available online, we put less effort into memorizing and mentally accessing it. We essentially switch from asking ourselves, “What do I know?” to asking, “Where can I look up this information?” Moreover, merely knowing that we can access the information can make us overestimate our knowledge. In the information silos of the digital world, succumbing to the temptation to have AI search for information for us can limit other types of cognitive effort, such as creativity, divergent thinking, and critical analysis. Interdisciplinary meta-knowledge, meta-knowing, and accurate perceived self-efficacy can help individuals think outside the silos and maintain an appropriate amount of intellectual humility.

2. AI may create habits of impatience.

Even without AI, many types of digital technology offer instant relief from boredom or stress as well as the gratification of having quick answers at our fingertips. With AI, digital experiences will become even more appealing. Following Luckin’s reasoning, to identify what humans can do better than machines, we may need to prioritize patience over speed, because machines will always be faster. Similarly, the ability of humans to think outside rule-bound algorithms can be cultivated when individuals are rewarded for the quality of their work rather than its timeliness. This value prioritization is especially pertinent to educational environments serving children and adolescents. The digital “infosphere” can distort young people’s sense of time and space at an age when they are still developing their time management skills and learning how to find a healthy balance between virtual and real-world relationships. Therefore, meta-emotional intelligence and meta-contextual intelligence can mitigate some of the risks introduced by AI.

3. AI can widen the social divide.

AI has the potential to keep people within their social networks and to increase hostility and distrust toward members of outgroups. It can also promote mistrust of the news. Social intelligence – and, if I may add, ethical intelligence, which pertains to concepts of justice and character virtues – may help counteract this potential impact of AI on society.

Putting the Pieces Together

Knowing what abilities should be nurtured by schools is an important first step. Identifying the educational approaches necessary to build those abilities is another. It may be tempting to imagine AI agents that teach social intelligence or time management skills to children. However, self-awareness, self-management, and social awareness develop through real-world practice and experience. Advocates of whole-child, whole-school approaches have noted that effective programs aim to promote multiple facets of psychological and physical wellbeing and take into account the ecological systems of child development.

AI may provide educators with many labels that classify the child as well as the child’s history, learning goals, and current courses and services. It may even attempt to link these labels to profiles or developmental trajectories. However, it takes human intelligence to understand what distinguishes two students who might seem to have identical data. Datasets may well miss some aspects such as trauma-induced emotional reactions, personality traits, resilience, health, cultural and religious beliefs, or the bonds and pressures coming from a particular child’s family, friends, and neighborhood.

And this is where we need to both celebrate and be cautious about the power of AI to inform classroom management. Take, for example, the efforts described in Luckin’s article to identify which students may collaborate better in their table groups. The results of the classification may yield positive social outcomes (e.g., students in the same group will discover they have more in common than they realized). But the classification attempt may also yield negative results (e.g., students in group A will feel even more distanced from students in group B). So, AI has the potential to improve education, but it takes human judgment and ethical decision-making to use AI effectively in preparing Generation Alpha to succeed.

Teaching students that intelligence is not fixed, and helping them grow the seven intelligences identified by Luckin, could be goals of future school improvement initiatives. Perhaps it is time to revisit the current accountability system: Is it time to add new elements to school report cards? Should states recognize schools that successfully promote social, emotional, moral, and metacognitive skills – not just academic performance? A policy environment that examines college and career readiness through the lens of an AI environment can contribute to such conversations about the future of curriculum and instruction.

Jun 18, 2019
