‘I did not comprehend that ChatGPT could fabricate cases’

In brief Lawyers facing sanctions from a federal court for filing a lawsuit citing fake legal cases generated by ChatGPT say they feel duped by the software.

Attorneys Steven Schwartz and Peter LoDuca of law firm Levidow, Levidow & Oberman made headlines for filing court documents with the US District Court for the Southern District of New York that cited legal cases made up by ChatGPT.

Judge Kevin Castel is now considering whether to impose sanctions against the pair. In a hearing this week, they admitted to failing to verify the cases. “I did not comprehend that ChatGPT could fabricate cases,” Schwartz said, AP reported. Meanwhile, his partner LoDuca said: “It never dawned on me that this was a bogus case,” and that the error “pains [him] to no end.”

The lawyers were suing Colombian airline Avianca on behalf of a passenger who suffered an injury aboard a flight in 2019, and turned to ChatGPT to find similar cases. The chatbot generated a list of fictitious cases, which they took to be genuine and cited in their lawsuit. “Can we agree that’s legal gibberish?” Castel said.

The lawyers, however, overestimated the technology’s abilities without understanding how it works. 

Non-AI, good old “traditional code” is helping improve Bard

Google claimed its AI chatbot Bard’s logic and reasoning skills have improved thanks to a new technique called implicit code execution.

Large language models (LLMs) work by predicting the next likely words in a given sentence, which makes them good at open-ended creative tasks like writing poems, but far less reliable at problems that demand exact computation or step-by-step logic.
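
As a toy illustration of that prediction step (the vocabulary and scores below are invented for demonstration, not taken from any real model):

```python
import math

# Toy next-word prediction: a language model scores every candidate
# next token and, under greedy decoding, emits the likeliest one.
logits = {"roses": 2.1, "the": 1.4, "seven": 0.2}  # invented raw scores

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exp = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exp.values())
    return {tok: v / total for tok, v in exp.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: take the argmax
print(next_token, round(probs[next_token], 2))  # -> roses 0.61
```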

Google has been working to make Bard more useful, and said it has implemented a new method that lets the chatbot answer mathematical and logic questions more accurately.

The technique, described as implicit code execution, inspects whether a user’s prompt is something that can be solved computationally. If it is, the model writes code ‘under the hood’, executes it using traditional, non-machine-learning means, and uses the output to answer the question.

“With this latest update, we’ve combined the capabilities of both LLMs and traditional code to help improve accuracy in Bard’s responses. Through implicit code execution, Bard identifies prompts that might benefit from logical code, writes it ‘under the hood,’ executes it and uses the result to generate a more accurate response,” it said this week.

“So far, we’ve seen this method improve the accuracy of Bard’s responses to computation-based word and math problems in our internal challenge datasets by approximately 30 per cent,” it added.
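
Here is a rough, runnable sketch of that write-then-execute pattern in Python; the routing and code-writing steps are crude stand-ins, since Google has not published Bard’s internals:

```python
import re

# Runnable toy of the implicit-code-execution pattern Google describes.
# The "LLM" pieces are hypothetical stand-ins; in Bard the model itself
# drafts the code.

def looks_computational(prompt: str) -> bool:
    # Hypothetical router: treat prompts containing arithmetic as computational.
    return bool(re.search(r"\d+\s*[-+*/]\s*\d+", prompt))

def write_code(prompt: str) -> str:
    # Stand-in for the model writing code "under the hood": pull out the
    # arithmetic expression and wrap it in a tiny program.
    expr = re.search(r"\d+(?:\s*[-+*/]\s*\d+)+", prompt).group()
    return f"result = {expr}"

def run_code(code: str):
    # Traditional, non-ML execution step.
    scope = {}
    exec(code, {}, scope)
    return scope["result"]

def answer(prompt: str) -> str:
    if looks_computational(prompt):
        return f"The answer is {run_code(write_code(prompt))}."
    return "(purely generative response)"  # ordinary LLM reply otherwise

print(answer("What is 17 * 23?"))  # -> The answer is 391.
```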

Bard should be more accurate at responding to questions like ‘what are the prime factors of 15683615?’ or ‘calculate the growth rate of my savings’, but it isn’t perfect and will still make mistakes. 
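
For the prime-factor example, the program written ‘under the hood’ could be as simple as trial division; a minimal sketch (assumed, not Bard’s actual code):

```python
def prime_factors(n: int) -> list[int]:
    """Trial division: the kind of exact arithmetic traditional code gets right."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(15683615))  # -> [5, 151, 20773]
```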

Large language model startup Cohere raises $270m in Series C round

Cohere, a large language model startup launched four years ago, announced it had raised $270 million in its latest round of funding.

The round was led by Inovia Capital, with participation from the likes of Nvidia, Oracle, and Salesforce Ventures. Cohere began by building an API on top of its large language models to help companies automate natural language processing tasks across a range of languages.
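
A minimal sketch of what driving such an API looks like from Python; the client calls below are assumptions based on Cohere’s Python SDK of the era, not a definitive reference:

```python
# Assumed usage of Cohere's Python SDK circa 2023 (pip install cohere);
# treat the exact method names and response fields as assumptions.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Text generation: automate an NLP task such as summarisation.
resp = co.generate(
    prompt="Summarise in one sentence: the Series C round raised $270 million.",
    max_tokens=50,
)
print(resp.generations[0].text)

# Embeddings: turn text into vectors for search or classification.
emb = co.embed(texts=["refund request", "shipping question"])
print(len(emb.embeddings), "vectors returned")
```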

“AI will be the heart that powers the next decade of business success,” Aidan Gomez, CEO and co-founder, said in a statement. “As the early excitement about generative AI shifts toward ways to accelerate businesses, companies are looking to Cohere to position them for success in a new era of technology. The next phase of AI products and services will revolutionize business, and we are ready to lead the way.”

The company said it will work with Salesforce Ventures to advance generative AI for businesses, and with LivePerson to create and deploy custom LLMs that are more flexible and private than existing models.

Dutch privacy watchdog concerned about ChatGPT, sends OpenAI a letter

Officials from the Dutch Data Protection Authority have sent OpenAI a letter to better examine data privacy concerns with ChatGPT. They want to know what data the model was trained on, and how the company stores data generated in conversations between the chatbot and users.

“The DPA is concerned about how organisations that make use of so-called ‘generative’ artificial intelligence treat personal information,” the agency said, Reuters reported this week. The privacy regulator said it “will be taking various actions in the future” and had sent the letter “as a first step [in clearing] up some things about ChatGPT.”

Several privacy watchdogs in the European Union and Canada have voiced similar concerns, and are investigating the technology as governments grapple with regulation and safety issues. The fear is that ChatGPT could leak sensitive information people want to keep confidential, such as phone numbers or other personal data. The software is also known to generate false information about people.

One man, for example, sued OpenAI this week after the software falsely told a journalist that he had embezzled money from a gun rights group. ®
