Simplifying AI for Educators: The 3 Things You Really Need to Know (For Right Now)

Remember when “big data” was the buzzword in education a decade ago? Books about big data were published, but what exactly was it? 

Big data described the large and continuously growing masses of educational data and the process of analyzing them to track student performance. This became complicated, and big data was often just that: too big. In fact, smaller data sets proved effective, and I have written about the power of small wins, tracking microsteps in students' progress.

In a strange, eerie way, we are facing a similar challenge with the inundation of artificial intelligence. A frenzy of AI platforms and products is coming at educators in this pioneering stage, which can be both exciting and exhausting. How can we possibly siphon off the irrelevant, the too complicated, the too expensive, the too whatever, and sort out how to leverage innovative AI concepts and tools?

I have been fortunate to practice navigating AI's challenges through my extensive experience presenting and writing about it in education. During that time, I have quieted all the noise by identifying three components of AI that provide the foundation for effectively and efficiently capturing its remarkable, if imperfect, potential and productivity.

To simplify it for educators, here are the three main facets to understand before using AI in classrooms:

  1. Ethics
  2. Prompting
  3. Resource Tools

That’s it! I could end here, but I suppose that would be a cliffhanger . . . so I will explain. 

1. Ethics

I recently presented to high school English teachers about AI. Ethics was a priority, and I took the time to break down its subcategories.

First, plagiarism: If generative AI can produce unique passages, won't students use it to cheat more? That is a legitimate concern. Historically, plagiarism, and the broader context of cheating, has evoked understandable alarm among teachers, who are charged with assessing student learning and mastery. If AI makes cheating easier, won't students cheat more?

Here’s the good news. Surprisingly, a study involving thousands of anonymously surveyed students showed that plagiarism remained unchanged from pre-AI levels. AI detectors have had uneven success in spite of their claims, and like all good teaching, some of the best methods involve good old-fashioned detection, such as these seven ways to detect AI use in student writing.

I discovered an additional technique for detecting AI. After students complete a written response, ask them to write three Level Three Depth of Knowledge (DOK) questions based on their essay, in class and on demand, once the essay is drafted. This will signal whether they wrote it, revised it, or copied and pasted it, because the exercise requires them to demonstrate their thought process.

Providing safeguards helps comfort teachers, and letting students know these techniques are in play increases the likelihood they stay on high moral ground. 

Another ethical issue is the problem of AI hallucinations, such as when AI was recently asked to predict the outcome of the Super Bowl: Google’s Gemini and Microsoft’s Copilot chatbots responded to questions about the game with completely fabricated stats and outcomes.

This highlights the continued struggles of large language models (LLMs) to separate fact from fiction. Sharing this and other AI hallucinations, along with videos that explain them, further encourages students to proceed with caution, even if they use AI as a source. Students can properly cite generative AI for transparency using clever plug-and-play tools; APA style, for instance, credits the model’s maker, version, and URL.

I am frequently asked if AI is biased. My answer is no, but the massive internet ecosystem it derives information from is. I demonstrate this using an image generator to expose its natural tendency to reproduce bias: when I prompt it to create images of successful businesspeople, a clear stereotype emerges. Importantly, I explain how to work around these biases, which is a valuable lesson to teach students.

Educators must also be conscious of the inequities that persist with AI. Unfortunately, marginalized students already face heightened risk, just as they did during the pandemic, and being knowledgeable about these gaps can help educators and school communities work around them.

2. Prompting

Prompting is the strategic way we draw useful information out of AI. LLMs such as ChatGPT, Microsoft’s Copilot, and Google’s Gemini all offer some variation of the capacity to generate information, and what you get back depends on the power of your prompting.

Prompting can be most closely compared to a good old-fashioned Google search. Yet with AI there is more power in the output, and more risk associated with that power (for example, hallucinations). Therefore, the more thought and practice that goes into prompting, the better the output.

The good news is that LLMs can sustain ongoing, interactive chats based on your prompts, so you can start broadly and refine as you go. I have even said to the chatbot, “No, I mean this…” to clarify when it misunderstands because I did not provide a detailed enough starting prompt.

Knowing this can liberate a fearful user from worrying about crafting the perfect prompt, and in my workshops, educators discover this for themselves. For the technically curious, the sketch below shows the same back-and-forth in code.
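For those who want to peek under the hood, here is a minimal Python sketch of that iterative refinement using OpenAI’s chat API. It is an illustration only: the model name and the sample prompts are my assumptions, not tools referenced in this article, and running it requires an OpenAI account and API key.

```python
# A minimal sketch of iterative prompt refinement, assuming the
# official openai Python package (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

# Start broad, just as you would in the chat window.
messages = [
    {"role": "user",
     "content": "Suggest a warm-up activity for a 10th-grade English class."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)

# Keep the conversation history and refine: the programmatic
# version of telling the chatbot, "No, I mean this..."
messages.append({"role": "assistant",
                 "content": reply.choices[0].message.content})
messages.append({"role": "user",
                 "content": "No, I mean a five-minute activity tied to Act 1 of Macbeth."})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```

Appending each reply to the message list is what gives the chatbot its memory of the conversation; without that history, every prompt starts from scratch, which is exactly why starting broadly and refining works.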

3. Resource Tools

Resource tools cast the widest net, because every edtech company is trying to get into the AI game. While some of these opportunities make it fun for tech geeks like me, it can be daunting. Most of the better resources are freemium models, which tease you into paying after a trial period, limit your capacity after a certain number of uses, or offer more sophisticated options if you upgrade.

Microsoft and Google are both eager to get into the AI-education landscape and are big enough to offer premium features free or inexpensively. The new Copilot looks impressive and provides GPT-4 access to anyone, while ChatGPT (from OpenAI) still offers GPT-3.5 for free, which has some benefits, though you pay to upgrade to GPT-4. Google’s Gemini is similar to Copilot, but like all things Google, it is hard to pin down what changes are made: for example, Google already changed its AI’s name from Bard to Gemini!

Many other AI tools are available, and I offer a simple library of resources for educators, but remember that most will want you to pay eventually. For now, stick with the LLMs that come from large companies such as Microsoft, and thank capitalism for the fairly robust free versions. I also like Perplexity, another up-and-coming AI tool.

So there you have it. Ethics, prompting, and resource tools. Those are the tangible and sustained components you need to know about artificial intelligence in schools. The ebb and flow of everything else is just noise. Stick to these components and Big AI will be manageable, even advantageous!
