Survey: Over half of UK undergrads are using AI for uni work

AI In Brief More than half of undergraduates in the UK are using AI to complete their assignments, according to a study conducted by the Higher Education Policy Institute.

The study asked more than 1,000 university students whether they turned to tools like ChatGPT to help write essays or solve problems, and 53 percent admitted to using the technology. A smaller group – five percent of participants – said they simply copied and pasted unedited AI-generated text into their schoolwork.

“My primary concern is the significant number of students who are unaware of the potential for ‘hallucinations’ and inaccuracies in AI. I believe it is our responsibility as educators to address this issue directly,” Andres Guadamuz, a Reader in Intellectual Property Law at the University of Sussex, told The Guardian. 

The technology is still nascent, and teachers are getting to grips with how it should and shouldn't be used in education. The Education Endowment Foundation (EEF), an independent charity, is launching an experiment to see how AI can help teachers create materials such as lesson plans, exams, and practice questions. Fifty-eight schools are reportedly taking part in the study.

“There’s already huge anticipation around how this technology could transform teachers’ roles, but the research into its actual impact on practice is – currently – limited,” said Becky Francis, the chief executive of the EEF. “The findings from this trial will be an important contribution to the evidence base, bringing us closer to understanding how teachers can use AI.”

OpenAI’s ChatGPT is breaking Europe’s GDPR laws, Italy says

The Italian Data Protection Authority, known as the Garante, believes OpenAI is violating the EU's General Data Protection Regulation (GDPR), and has given the startup a chance to respond to the allegations.

Last year, the Garante temporarily banned access to ChatGPT from within the country whilst it investigated data privacy concerns. Officials were alarmed that OpenAI may have scraped Italians’ personal information from the internet to train its models.

They feared the chatbot might recall and regurgitate people's phone numbers, email addresses, or other personal details to users trying to extract that data by querying the model. The regulator launched an investigation into OpenAI's software and now reckons the company is breaking Europe's data privacy laws. The startup could face fines of up to €20 million or 4 percent of its global annual turnover, whichever is higher.

“We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy,” a company spokesperson told TechCrunch in a statement. 

“We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people. We plan to continue to work constructively with the Garante.”

OpenAI was given 30 days to present its defense and explain why its model doesn't violate the GDPR.

Lawyer in trouble for citing fake cases made up by AI

Yet another lawyer has been caught citing a case fabricated by ChatGPT in a lawsuit.

Jae Lee, an attorney from New York, has reportedly landed herself in hot water: she was referred to the attorney grievance panel in an order issued by judges of the US Court of Appeals for the Second Circuit.

She admitted to citing a “non-existent state court decision” and, per the order, relied on the software “to identify precedent that might support her arguments” without bothering to “read or otherwise confirm the validity of the decision she cited.”

Unfortunately for her client, whose lawsuit accused a doctor of malpractice, the mistake means the case has now been dismissed. It's not the first time a lawyer has relied on ChatGPT, only to discover later that it had invented cases.

It can be tempting to turn to tools like ChatGPT as an easy way to extract and generate text, but such tools often fabricate information, making them especially risky in legal work. Many lawyers, however, continue to use them.

Last year, a pair of attorneys from New York were fined for submitting fake cases invented by ChatGPT, whilst a lawyer in Colorado was temporarily banned from practicing law for the same offense. In December last year, Michael Cohen, Donald Trump's former lawyer, reportedly made the same mistake.

UK could miss the AI ‘gold rush’ if it looks too hard at safety, say Lords

A report from an upper house committee in the UK says the country's ability to take part in the so-called “AI gold rush” is under threat if the government focuses too much on “far-off and improbable risks.”

A House of Lords AI report published last week states that the government's desire to set guardrails on large language models threatens to stifle domestic innovation in the nascent industry. It also warns of the “real and growing” risk of regulatory capture, describing a “multi-billion pound race to dominate the market.”

The report adds that the government should prioritize open competition and transparency, as without this a small number of tech firms could quickly consolidate their control of the “critical market and stifle new players, mirroring the challenges seen elsewhere in internet services.” ®
