President Biden’s AI Executive Order Won’t Fix AI in Schools, But It’s a Start

After President Joe Biden issued his sweeping executive order on AI recently, much was made about how quickly the order came together. AI technology is accelerating “at warp speed,” Biden said before signing the order. “Biden wants to move fast on AI safeguards,” read one headline about the declaration. 

As an educator on the front lines of AI’s impact in the classroom, nothing about the national or local response to AI has felt fast. 

ChatGPT debuted a year ago this month. Even before that, the power of GPT and similar technology had been clear. Yet over the past 12 months, the response to AI across education has been fragmented and unclear. By and large, district and higher ed leaders still seem unsure how to handle this new technology. Absent clear institutional guidance, many teachers have been left to craft their own AI policies on a class-by-class, and student-by-student, basis. 

Not surprisingly, this hasn’t gone particularly well. On one hand, we’ve seen overzealous instructors wrongly penalize students; on the other, it’s naive to think a “do nothing” approach to AI is working.

Biden’s executive order isn’t designed to address the gaps in school AI policies, nor do we want elected officials dictating education policy. However, it does draw attention to both the opportunities for AI in schools and some of the challenges and concerns around it. 

This alone is a huge step in the right direction. Teachers need specific guidelines on best practices for teaching with AI, on cheating-prevention tools, and on safety procedures for students interacting with this technology. AI tutors, AI detectors, and many other aspects of the technology need to be rigorously tested and studied in educational settings. Biden’s executive order marks the first small step toward doing all of that. 

Recommending AI Watermarks

Biden’s order calls on the Commerce Department to develop guidance for clearly labeling AI-generated content, including video, images, and text, through an embedded watermark. The idea is that these watermarks will make it easy for people to differentiate between AI-created content and human-created work. 

On paper, this sounds like it will solve many of the problems currently caused by AI in classrooms by providing an easy way to check for AI-generated content. However, in practice, this is more complicated. 

Biden is not requiring companies to add watermarks, just recommending it, so it’s very possible students will continue to be able to use AI to generate watermark-free content. More significantly, watermark technology is far from perfect. As with existing AI detection tools, research shows watermark detectors can be tricked and produce high rates of both false positives and false negatives. 

Even so, I think it’s encouraging that the White House is devoting resources to studying the best methods for reliably distinguishing human- and machine-generated work. This feels vital in education and beyond. 

AI Classroom Guidance 

The executive order requires the Secretary of Education to create resources and policy guidance regarding AI in education. 

“These resources shall address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities,” the executive order states. “They shall also include the development of an ‘AI toolkit’ for education leaders.” 

The order, however, also acknowledges the potential benefits AI has for education and calls for the creation of resources to support educators’ use of AI tools, including personalized AI tutors. 

I’m excited by this aspect of the order and would like to see more national attention and resources devoted to developing and testing AI tutors. Again, in theory these would seem incredibly helpful to educators who are crunched for time, but we need to see how well they actually perform in the real world. 

Privacy and Protection  

The executive order emphasizes AI safety in general and safe use of AI in schools in particular. It also requires AI developers to share safety test results with the government. 

A draft guidance released to federal agencies shortly after the order was signed identifies many AI uses that pose civil rights and safety risks for students. These include tools that detect student cheating, monitor online activities, project academic outcomes, or make disciplinary recommendations, as well as online or in-person surveillance tools. 

In his remarks before signing the executive order, Biden highlighted some of the potential ways AI can be detrimental to students. “In some cases, AI is making life worse,” he said. “For example, using teenagers’ personal data to figure out what will keep them glued to their device, AI makes social media more addictive. It’s causing what our Surgeon General calls a ‘profound risk of harm’ to their mental health and well-being.” 
