
European Parliament Approves Draft AI Act


The European Parliament, the main legislative body of the European Union (EU), approved a proposed law that would regulate AI, making the 27-nation bloc possibly the first major economic power to put in place comprehensive rules for the technology.

The law, known as the AI Act, would restrict the use of AI systems that are considered to be high-risk, such as facial recognition software. It would also require companies that develop AI systems like ChatGPT to disclose more information about the data used to train the chatbots.

Members of the European Parliament, meeting in Strasbourg, France, voted in favor of the new legislation on Wednesday. The vote comes amid warnings from some experts that artificial intelligence could pose a threat to humanity if it is developed too quickly.

Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe

Setting the global standard

European Parliament president Roberta Metsola said the adoption of the new rules showed Europe’s commitment to the responsible development of AI.

“Europe is leading and will continue to lead a balanced and human-centered approach to the world’s first AI Act. Legislation that will no doubt be setting the global standard for years to come,” Metsola said in a video posted on Twitter.

“And all of this is perfectly consistent with our will to be world leaders in digital innovation based on EU values, such as privacy and respect for fundamental rights. This is all about Europe taking the lead and we do it our way – responsibly.”

The current draft of the European Parliament’s AI Act proposes a risk-based approach to regulating artificial intelligence systems. AI systems would be categorized into different levels of risk, based on their potential to harm consumers.

Under the law, the lowest risk category covers AI used in applications such as video games or spam filters. The highest risk category includes AI used for social scoring, a practice that assigns scores to individuals based on their behavior and can determine their access to services such as loans or housing.

The EU says it will ban such programs outright. Companies that develop or use so-called high-risk AI would be required to provide information about how their systems work, a measure the rules say is intended to ensure that AI programs are fair and transparent and do not discriminate against individuals.

EU competition chief: Discrimination is the big AI risk

EU Commissioner for Competition Margrethe Vestager said that “guardrails” such as those proposed under the AI Act could help protect people against some of the biggest risks of AI, including discrimination.

For example, AI could be used to make decisions about who gets a mortgage or a job, and these decisions may be based on factors such as race, gender, or religion, she said.

“Probably [the risk of extinction] may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are,” Vestager told the BBC after the European Parliament vote.

“If it’s a bank using it to decide whether I can get a mortgage or not, or if it’s social services on your municipality, then you want to make sure that you’re not being discriminated [against] because of your gender or your colour or your postal code,” she added.

On Tuesday, Ireland’s Data Protection Commission (DPC) said it had placed Google’s planned EU rollout of its AI chatbot Bard on hold, Politico reports. Google had informed the regulator that it intended to launch Bard in the European Union this week.

But the DPC said it did not receive any information from Google about how the company had identified and minimized data protection risks to potential users. The regulator is concerned about the potential for Bard to collect and use personal data without users’ consent.

DPC Deputy Commissioner Graham Doyle said the authority wants the information “as a matter of urgency” and has asked Google for further detail about its data protection practices.

Imposing ‘strict AI guardrails’

Under the European Parliament’s proposed new rules for AI, the use of biometric identification systems and the indiscriminate scraping of user data from social media or CCTV footage for purposes such as facial recognition will be restricted.

The proposals would ban the use of artificial intelligence for mass surveillance and require companies to obtain explicit consent from users before collecting their data. Per the BBC report, Vestager said:

“We want to put in strict guardrails so that it’s not used in real-time, but only in specific circumstances where you’re looking for a missing child or there’s a terrorist fleeing.”

The EU is ahead of the United States and other large Western governments in regulating AI. The bloc has been debating AI regulation for more than two years, and the issue gained new urgency after the release of ChatGPT in November 2022.

ChatGPT is a large language model chatbot developed by OpenAI that can generate human-quality text. Its release intensified concerns about the potential negative impacts of AI on employment and society, such as job displacement and social isolation.

Both the U.S. and China have now started developing concrete policies for regulating AI. The White House has released a set of policy ideas, and China has already issued regulations banning the use of AI-generated content to spread “fake news.”

In May, leaders of the Group of Seven (G7) nations met in Japan and called for the development of technical standards to keep AI “trustworthy”. They urged international dialogue on the governance of AI, copyright, transparency, and the threat of disinformation.

Europe’s AI Act is not expected to take effect until 2025. Before then, the EU’s three branches of power, the Commission, the Parliament, and the Council, will all have to agree on its final version.
