Warnings over emotional AIs, OpenAI explains how it became video-game king, plus ML climate impact probe

Roundup It’s nearly the end of the year, and if you’re not bored of AI yet, here are a few more bits and bytes for you to consume.

How fair is your model? Check with TensorFlow: Folks over at Google have released Fairness Indicators, a range of tools to help developers calculate how fair a classifier model is.

The software computes metrics such as the false positive rate and false negative rate and compares them across different slices of the data, such as demographic groups, to highlight any gaps in performance. Fairness Indicators can also surface the specific data points behind a particular model decision, exposing issues such as the under-representation of certain groups.
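To make that concrete, here is a minimal sketch of the kind of per-slice comparison the tools automate, computed by hand with NumPy. The labels, predictions, and group attribute below are entirely made up for illustration, and this is not the Fairness Indicators API itself.

```python
import numpy as np

# Hypothetical data: ground-truth labels, model predictions, and a
# slice attribute (e.g. a demographic group) for ten records.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    t, p = y_true[group == g], y_pred[group == g]
    # False positive rate: negatives wrongly flagged as positive.
    fpr = ((p == 1) & (t == 0)).sum() / max((t == 0).sum(), 1)
    # False negative rate: positives the model missed.
    fnr = ((p == 0) & (t == 1)).sum() / max((t == 1).sum(), 1)
    print(f"group {g}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

A large gap between the two groups’ false positive rates would suggest, for example, that one group’s comments were being disproportionately flagged as toxic.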

Google tested the tools on a content-moderation model, using Fairness Indicators to analyze bias in a dataset for training classifiers that label text as abusive, toxic, or inappropriate.

“Fairness Indicators can be used to generate metrics for transparency reporting, such as those used for model cards, to help developers make better decisions about how to deploy models responsibly,” it said in a blog post this week.

The idea of model cards has been floating around for a little while; Google proposes listing a trained system’s important properties and limitations in a clear format, such as a short document or card.
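As a rough illustration, such a card could be as simple as a structured record. The fields and values below are our own paraphrase of the model-card idea, not an official schema from Google:

```python
# A hypothetical, minimal model card as a plain Python dict. The field
# names and values are illustrative, not an official schema.
model_card = {
    "model": "toxicity-classifier-v2",
    "intended_use": "flagging abusive comments for human review",
    "training_data": "public forum comments, English only",
    "metrics": {"overall_fpr": 0.08, "overall_fnr": 0.12},
    "limitations": [
        "not evaluated on non-English text",
        "higher false positive rate on comments mentioning identity terms",
    ],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```

The point is less the format than the discipline: the card travels with the model, so anyone deploying it can see its limits at a glance.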

The tools won’t fix your models or minimize biases; they can only highlight where some of those issues might arise. It’s still a good start, however, and if you want to try Fairness Indicators then here’s a link to a beta version of the software.

OpenAI’s Dota 2 paper: OpenAI wrapped up its Dota 2 project earlier this year and has now published a report describing its efforts to build an AI bot capable of beating the world champions at the e-sports game.

The 66-page report details the data, algorithms, and infrastructure needed to develop the agent, known as OpenAI Five. The machine went through several iterations, continuously improving until it could beat Dota 2 amateurs, then semi-professionals, and eventually world champions.

The ambitious project spanned over three years and probably racked up tens of millions of dollars in computing costs. “When successfully scaled up, modern reinforcement learning techniques can achieve superhuman performance in competitive esports games. The key ingredients are to expand the scale of compute used, by increasing the batch size and total training time,” the paper said.

OpenAI believes cranking up compute is necessary to drive progress in AI. “While we focused on Dota 2, we hypothesize these results will apply more generally and these methods can solve any zero-sum two-team continuous environment which can be simulated in parallel across hundreds of thousands of instances. In the future, environments and tasks will continue to grow in complexity. Scaling will become even more important (for current methods) as the tasks become more challenging,” it concluded.

You can read the paper here, and also see a full recap of the Dota 2 project.

AI reports of 2019: Here are two AI reports to read at the end of the year. One comes from the AI Now Institute and the other from the AI Index project.

The AI Now Institute, based at New York University, focuses on the social impacts of machine learning. Its report offers a sobering view of the emerging technology, with the institute calling for a ban on government use of facial recognition, and on systems that affect people’s livelihoods, such as emotion detection used to analyse candidates in job interviews.

It also calls for the government to examine the energy consumed by the AI industry to assess its environmental impact, arguing that any future AI policy should take the technology’s carbon footprint into account. There are also important chapters on bias in research, tech worker unionisation, and privacy and surveillance in AI.

“The spread of algorithmic management technology in the workplace is increasing the power asymmetry between workers and employers. AI threatens not only to disproportionately displace lower-wage earners, but also to reduce wages, job security, and other protections for those who need it most,” the report began.

“If the past year has shown us anything, it is that our future will not be determined by the inevitable progress of AI, nor are we doomed to a dystopic future. The implications of AI will be determined by us—and there is much work ahead to ensure that the future looks bright,” it concluded.

You can read it here [PDF].

The second report, led by Stanford University, tracks the progress of the field. The annual AI Index report, first published in 2017, is now in its third year.

It’s split into nine chapters and is a hefty read at 290 pages. Don’t worry, it’s not all text: charts, tables, and graphs are sprinkled throughout to help visualize the vital trends and patterns.

The report covers everything from the number of peer-reviewed papers, attendance at AI conferences, the time it takes to train the largest models, the number of AI jobs, and investment in startups, to how the technology is shaping policies and laws.

So, sit down with a brew before you dive in here [PDF]. ®

Republished from https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/16/ai_roundup_131219/
