

Need to Build Trustworthy AI Systems Gains Importance as AI Progresses



As AI systems take on more responsibility, the strengths and weaknesses of current AI systems need to be recognized to help build a foundation of trust. (GETTY IMAGES)

By John P. Desmond, Editor, AI Trends

The push is on to build trusted AI systems with an eye toward instilling confidence that results will be fair, accuracy will be sufficient, and safety will be preserved.

Gary Marcus, the successful entrepreneur who sold his startup Geometric Intelligence to Uber in 2016, issued a wakeup call to the AI industry as co-author with Ernest Davis of “Rebooting AI,” (Pantheon, 2019) an analysis of the strengths and weaknesses of current AI, where the field is going, and what we should be doing.

Marcus spoke about building trusted AI in a recent interview with The Economist. Here are some highlights:

“Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.”

AI developers “can’t even devise procedures for making guarantees that given systems work within a certain tolerance, the way an auto part or airplane manufacturer would be required to do.”

“The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high.”

IBM Team Identifies Four Pillars of Trusted AI

Support for building trust in AI systems was furthered in a recent paper by an IBM team suggesting Four Pillars to Trusted AI, as described in a recent account in Towards Data Science from Jesus Rodriguez, chief scientist and managing partner at Invector Labs.

“The non-deterministic nature of artificial intelligence (AI) systems breaks the pattern of traditional software applications and introduces new dimensions to enable trust in AI agents,” Rodriguez states. Trust in software development has been built through procedures around testing, auditing, documentation, and other aspects of the discipline of software engineering. AI agents, by contrast, execute behavior based on knowledge that evolves over time, which makes them harder to understand.

Rodriguez suggests the Four Pillars from IBM are a viable idea for establishing the foundation of trust in AI systems. The foundations are:

  • Fairness: AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups.
  • Robustness: AI systems should be safe and secure, not vulnerable to tampering or compromising the data they are trained on.
  • Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.
  • Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.

To help identify whether an AI system is built consistent with the four pillars of trusted AI, IBM proposes a Supplier’s Declaration of Conformity (SDoC, or factsheet for short) that documents key information about the system. It should answer basic questions, including this selection:

  • Does the dataset used to train the service have a data sheet or data statement?
  • Was the dataset and model checked for biases? If “yes,” describe the bias policies that were checked, the bias-checking methods, and the results.
  • Was any bias mitigation performed on the dataset? If “yes,” describe the mitigation method.
  • Are algorithm outputs explainable/interpretable? If “yes,” explain how explainability is achieved (e.g., directly explainable algorithm, local explainability, explanations via examples).
  • Describe the testing methodology.
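The factsheet questions above lend themselves to a machine-readable checklist. A minimal sketch in Python, with field names invented for illustration (they are not IBM's published schema):

```python
from dataclasses import dataclass, field

# Illustrative SDoC-style factsheet as a data record.
# Field names are hypothetical, not IBM's actual schema.
@dataclass
class FactSheet:
    has_datasheet: bool
    bias_checked: bool
    bias_check_methods: list = field(default_factory=list)
    bias_mitigation: str = ""       # empty if no mitigation was performed
    explainability: str = ""        # e.g. "directly explainable", "local"
    testing_methodology: str = ""

    def missing_items(self):
        """Return the declaration questions left unanswered."""
        gaps = []
        if not self.has_datasheet:
            gaps.append("dataset has no data sheet/statement")
        if self.bias_checked and not self.bias_check_methods:
            gaps.append("bias checked but methods undocumented")
        if not self.explainability:
            gaps.append("explainability approach not stated")
        if not self.testing_methodology:
            gaps.append("testing methodology not described")
        return gaps

sheet = FactSheet(has_datasheet=True, bias_checked=True,
                  bias_check_methods=["disparate impact ratio"],
                  explainability="local explanations",
                  testing_methodology="held-out test set")
print(sheet.missing_items())  # [] -> every question is answered
```

An auditor could require `missing_items()` to return an empty list before a system ships, which is the spirit of the conformity declaration.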

Human Observers Need to Understand the AI System

Trust in an AI system is built through repeated correct performance, which makes the system highly reliable, and through being understandable to human observers. Resilient, intelligent robotic systems, such as those built for the military to adapt and evolve toward ever-better performance, are especially challenging to understand. Human observers need to be able to see how the system is improving through experience, suggests Nathan Michael, CTO of Shield AI, writing recently in National Defense. Shield develops AI for national security and defense applications.

“One of the greatest challenges with artificial intelligence is that there is an overwhelming impression that magic underlies the system. But it is not magic, it’s mathematics.

“What is being accomplished by AI systems is exciting, but it is also simply theory and fundamentals and engineering. As the development of AI progresses, we will see, more and more, the role of trust in this technology,” Michael stated.

Read the source articles in The Economist, Towards Data Science and in National Defense.



Sony Envisions an AI-Fueled World, From Kitchen Bots to Games



In 1997, Hiroaki Kitano, a research scientist at Sony, helped organize the first RoboCup, a robot soccer tournament that attracted teams of robotics and artificial intelligence researchers to compete in the picturesque city of Nagoya, Japan.

At the start of the first day, two teams of robots took to the pitch. As the machines twitched and surveyed their surroundings, a reporter asked Kitano when the match would begin. “I told him it started five minutes ago!” he says with a laugh.

Such was the state of AI and robotics at the time. It took a machine minutes to interpret its situation and work out what to do next. But much has changed, with AI increasingly helping machines, from self-driving cars to surveillance cameras, perceive and behave in clever ways.

Kitano now leads a new effort at Sony, announced in November, to infuse cutting-edge AI across the company. The Japanese giant believes AI will create smarter cameras, more cunning videogame characters, and even the first helpful kitchen robots. Kitano says Sony believes AI is making such rapid progress that the company needed to make the technology central to its strategy.

“We have decent AI researchers and engineers at Sony, and we have a good sense of what's going on,” says Kitano, who was attending the Association for the Advancement of Artificial Intelligence conference in New York this week. “We decided now is a moment that we should really push.”

Sony’s move stands out among big companies’ efforts to embrace AI. It lags behind Silicon Valley giants in researching and harnessing AI. It also has different aims: Sony is more focused on content creation and entertainment than the likes of Google, Facebook, or Apple. The Japanese giant is now looking to match America’s AI titans by betting heavily on a powerful but still relatively experimental approach to AI known as reinforcement learning. Google parent Alphabet and Amazon have made notable investments in this technology too.


Alphabet’s DeepMind famously used reinforcement learning to create a program capable of beating one of the world’s best Go players in 2016. Inspired by animal behavior, it involves an algorithm refining its behavior in response to positive or negative feedback.
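The feedback loop described above can be sketched in a few lines. This is a toy two-armed bandit with incremental value averaging, an assumption-laden stand-in for illustration only; DeepMind's actual Go system combined deep neural networks with self-play:

```python
import random

# Minimal reinforcement-learning loop: an agent refines its behavior
# in response to positive or negative feedback (reward).
random.seed(0)
true_reward = {"a": 0.2, "b": 0.8}   # hidden payoff probabilities
value = {"a": 0.0, "b": 0.0}         # the agent's learned estimates
counts = {"a": 0, "b": 0}

for step in range(2000):
    # explore 10% of the time, otherwise exploit the current best estimate
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    value[action] += (reward - value[action]) / counts[action]

print(max(value, key=value.get))  # the agent settles on the better arm
```

The same explore/learn/exploit structure, scaled up with function approximation, is what lets RL agents master games and robot control.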

“We consider reinforcement learning is equally or possibly even more important” than the technologies that have driven most progress in AI to date, Kitano says. “It's going to be the key.”

Besides research demos, reinforcement learning is being tested in areas ranging from finance to logistics. It is also emerging as a powerful way for robots to learn to deal with the real world and for training software agents to behave intelligently in simulated environments. So it may have huge potential to generate compelling videogame characters and scenarios.

Reinforcement learning has been part of AI for decades, but its promise has become apparent thanks to powerful neural network algorithms, roughly modeled on the way learning happens in the brain, as well as far more powerful computers and large amounts of training data. Even so, it is experimental and notoriously difficult to get right. Research has shown, for example, how reinforcement-learning algorithms can sometimes fixate on a reward that results in repetitive and useless behavior.

Sony will focus its AI on three domains, Kitano says: gaming, sensors, and, more curiously, culinary arts. These areas reflect the company’s current business focus and an aspirational direction for the future.

Sony is well known for making the PlayStation and games, but it also gets a large share of its revenue from digital sensors and imaging technology. It isn’t hard to see how AI could improve these areas, by making games more compelling or lively or helping cameras perceive the world more intelligently.


The effort to put AI to culinary use is about advancing robotics. So far, Sony has demonstrated a robot capable of placing food items on a plate in an artistic but preplanned way. Future systems might be able to recognize and grasp things without careful coding. Handling food is especially challenging for a robot because items are often irregularly shaped and arranged, and need to be handled with care.

Sony is, of course, no stranger to robots. A few years after the first Robocup, the company released Aibo, a doglike toy that gained a cult following but was canceled in 2006 amid a corporate streamlining. A new version of Aibo, released in 2018, includes some AI capabilities such as object and voice recognition. But it’s still relatively dumb. When I met with Kitano at the AAAI conference, one of the robots, brought by a Sony rep, explored the room and yapped away behind him.

Some outsiders see big potential in Sony investing in reinforcement learning. “It makes a ton of sense,” says Pieter Abbeel, a professor at UC Berkeley and cofounder of a company using reinforcement learning to make more adaptive warehouse robots.


Abbeel points out that it’s expensive and time consuming to create videogames, and he notes that reinforcement learning has shown potential to take on much of the drudge work. He points to a project called DeepMimic, which shows how virtual characters trained with reinforcement learning can exhibit lifelike behavior. A character placed in a physically accurate environment and given a specific goal, like climbing over an obstacle, will eventually work out how to vault it. This could automate the process of programming videogame characters or even allow behavior to emerge on the fly in a game. “It has the potential to facilitate much faster content creation,” Abbeel says.

Abbeel says robots preparing meals are probably some way off, but he expects reinforcement learning to change the way these machines are programmed. “It will be really exciting to see a push there,” he says.

To fuel its AI endeavor, Sony last year acquired Cogitai, a company cofounded by Peter Stone, a professor at the University of Texas at Austin. Stone pioneered the use of reinforcement learning in a virtual version of the RoboCup contest, winning trophies along the way. He now leads the Sony AI operation in the US.

Before being acquired, Cogitai launched a platform designed to make reinforcement learning easier to use. Stone says this and other tools will now be made available to researchers and engineers throughout Sony. A game developer or a hardware designer should be able to use these tools to explore new ideas and innovations. He says the focus on reinforcement learning reflects Sony’s desire to get ahead of the curve in AI—by betting on what looks likely to be the next big thing.

Today’s RoboCup matches show how rapidly things are moving. Players can be seen passing, moving, and shooting with remarkable speed and skill.

According to Stone, this also points to the next stage of progress in AI. “There’s been a revolution in perception and supervised learning,” he says. “The whole premise of Sony AI is that there are huge opportunities for automated decision-making in AI too. It’s really everywhere, and in some sense it’s untapped.”



How 4 Chinese Hackers Allegedly Took Down Equifax



In September 2017, credit reporting giant Equifax came clean: It had been hacked, and the sensitive personal information of 143 million US citizens had been compromised—a number the company later revised up to 147.9 million. Names, birth dates, Social Security numbers, all gone in an unprecedented heist. On Monday, the Department of Justice identified the alleged culprit: China.

In a sweeping nine-count indictment, the DOJ alleged that four members of China’s People’s Liberation Army were behind the Equifax hack, the culmination of a years-long investigation. In terms of the number of US citizens affected, it’s one of the biggest state-sponsored thefts of personally identifiable information on record. It also further escalates already tense relations with China on multiple fronts.

“This kind of attack on American industry is of a piece with other Chinese illegal acquisitions of sensitive personal data,” US attorney general William Barr said at a press conference announcing the charges. “For years we have witnessed China’s voracious appetite for the personal data of Americans.”

That aggression dates back to a hack of the Office of Personnel Management, revealed in 2015, in which Chinese hackers allegedly stole reams of highly sensitive data relating to government workers, up through the more recently disclosed breaches of the Marriott hotel chain and Anthem health insurance.

Even in that group of impactful attacks, Equifax stands out both for the sheer number of those affected and the type of information that the hackers obtained. While some had previously suspected China’s involvement—that none of the information had made its way to the dark web indicated a state actor rather than a common thief—Monday’s DOJ indictment lays out a thorough case.

The Big Hack

On March 7, 2017, the Apache Software Foundation announced that some versions of its Apache Struts software had a vulnerability that could allow attackers to remotely execute code on a targeted web application. It’s a serious type of bug, because it gives hackers an opportunity to meddle with a system from anywhere in the world. As part of its disclosure, Apache also offered a patch and instructions on how to fix the issue.

Equifax, which used the Apache Struts Framework in its dispute-resolution system, ignored both. Within a few weeks, the DOJ says, Chinese hackers were inside Equifax's systems.

The Apache Struts vulnerability had offered a foothold. From there, the four alleged hackers—Wu Zhiyong, Wang Qian, Xu Ke, and Liu Lei—conducted weeks of reconnaissance, running queries to give themselves a better sense of Equifax’s database structure and how many records it contained. On May 13, for instance, the indictment says that one of the hackers ran a Structured Query Language command to identify general details about an Equifax data table, then sampled a select number of records from the database.
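The reconnaissance pattern described in the indictment, querying a table's general details and then sampling records, looks roughly like the following. The table, columns, and data here are invented for illustration; the actual Equifax schema and the hackers' exact commands are not public:

```python
import sqlite3

# Stand-in database to demonstrate the schema-then-sample pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consumers (name TEXT, ssn TEXT, dob TEXT)")
conn.executemany("INSERT INTO consumers VALUES (?, ?, ?)",
                 [("A. Person", "000-00-0000", "1970-01-01")] * 5)

# Step 1: learn the table's general shape (column names and types)
schema = conn.execute("PRAGMA table_info(consumers)").fetchall()
print([col[1] for col in schema])   # ['name', 'ssn', 'dob']

# Step 2: sample a select number of records to gauge the contents
sample = conn.execute("SELECT * FROM consumers LIMIT 2").fetchall()
print(len(sample))                  # 2
```

Weeks of queries like these let an intruder map a database's structure and record counts before deciding what to steal.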

Eventually, they went on to upload so-called web shells to gain access to Equifax’s web server. They used their position to collect credentials, giving them unfettered access to back-end databases. Think of breaking into a building: It’s a lot easier to do so if residents leave a first-floor window unlocked and you manage to steal employee IDs.

From there, they feasted. The indictment alleges that the hackers first ran a series of SQL commands to find especially valuable data. Eventually, they located a repository of names, addresses, Social Security numbers, and birth dates. The DOJ says the interlopers ran 9,000 queries in all, not stopping until the end of July.

Amassing that much data is one thing; getting it out undetected is another. China’s hackers allegedly used a few techniques to maintain access to the motherlode.


According to the DOJ, they stored the stolen data in temporary files; especially large files they compressed and broke up into more manageable sizes. (At one point, the indictment says, they split an archive containing 49 directories into 600-megabyte chunks.) That kept their transmissions small enough to avoid suspicion. After they had exfiltrated the data, they deleted the compressed files to minimize the trail. It also helped that they were deep enough inside Equifax’s network that they could use the company’s existing encrypted communication channels to send their queries and commands. It all looked like normal network activity.
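The split-and-reassemble technique above is simple to sketch. A hedged illustration (the indictment cites 600-megabyte chunks; a 600-byte chunk size is used here so the demo stays tiny):

```python
def split_bytes(data: bytes, chunk_size: int) -> list:
    """Break data into chunks of at most chunk_size bytes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

archive = bytes(range(256)) * 10      # stand-in for a compressed archive
chunks = split_bytes(archive, 600)    # 2560 bytes -> 4 full chunks + remainder
print(len(chunks))                    # 5
assert b"".join(chunks) == archive    # chunks reassemble to the original
```

Smaller transfers blend into normal traffic, which is exactly why defenders monitor for unusual aggregate volumes rather than single large files.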

The indictment also details how the PLA team allegedly set up 34 servers across 20 countries to infiltrate Equifax, making it difficult to pinpoint them as a potential problem. They used encrypted login protocols to mask their involvement in those servers, and in at least one instance wiped a server’s log files every day. They were effectively ghosts.

Take one incident detailed by the DOJ: On July 6, 2017, one of the hackers accessed the Equifax network from a Swiss IP address. They then used a stolen username and password for a service account to get into an Equifax database. From there, they queried the database for Social Security numbers, full names, and addresses, and stored them in output files. They created a compressed file archive of the results, copied it to a different directory, and downloaded it. Data safely in hand, they then deleted the archive.

Repeat over the course of several weeks, and you wind up with 147.9 million people’s information allegedly in the hands of a foreign government.

While the operation had a certain degree of complexity, Equifax itself made the attackers' job much easier than it should have been. It should have patched that initial Apache Struts vulnerability, for starters. And an FTC complaint from last summer also found that the company stored administrative credentials in an unsecured file in plaintext. It kept 145 million Social Security numbers and other consumer data in plaintext as well, rather than encrypting them. It failed to segment the databases, which would have limited the fallout. It lacked appropriate file integrity monitoring and used long-expired security certificates. The list goes on. Equifax didn't just let the alleged Chinese hackers into the vault; it left the skeleton key for every safe deposit box in plain sight.

“We are grateful to the Justice Department and the FBI for their tireless efforts in determining that the military arm of China was responsible for the cyberattack on Equifax in 2017,” Equifax CEO Mark Begor said in a statement. “It is reassuring that our federal law enforcement agencies treat cybercrime—especially state-sponsored crime—with the seriousness it deserves.”

"Our goal collectively here, aside from just being sure this doesn’t happen to us again, is really to help to the best degree possible to help reduce the likelihood that it’ll happen with other organizations," Jamil Farshchi, chief information security officer at Equifax, told WIRED.

Name Game

Some elements of the Equifax hack—particularly the role of the Apache Struts vulnerability—had been public for some time. But pinning the attack on China adds an important new dimension, both in terms of the Equifax incident itself and international relations.

The US and China have gone through a turbulent few years on the cybersecurity front. In 2014, the DOJ charged five members of the PLA with hacking crimes against US companies. The following year, the two countries signed what amounted to a digital truce, one that more or less held fast throughout the remainder of the Obama administration.

Recent years, though, have seen indications that the détente is unraveling. The Marriott and Anthem hacks both began in 2014, prior to the Obama truce. But China has of late increasingly focused on cyberattacks in service of corporate espionage. That includes compromising the CCleaner security tool to create a backdoor into enterprise networks, and using its APT10 hackers to infiltrate so-called Managed Service Providers as a springboard to dozens of vulnerable companies.


That aggression, combined with allegations of rampant intellectual property theft and an ongoing trade war, has further stressed the US-China relationship. Adding Equifax to the pile is uniquely troubling.

“This data has economic value, and these thefts can feed China’s development of artificial intelligence tools as well as the creation of intelligence targeting packages,” Barr said. “Our cases reveal a pattern of state-sponsored computer intrusion and thefts by China targeting trade secrets and confidential business information.”

Monday's announcement marks only the second time that the US has indicted Chinese military hackers by name. (Linked with China’s Ministry of State Security, APT10 is considered non-military.) The first time was in 2014. As then, and as has increasingly been the case with named Russian hackers in DOJ allegations, the step has potential downsides.

“I worry that the Chinese will engage in tit-for-tat behavior,” says former National Security Agency analyst Dave Aitel. “It would be good to have a clear signal in terms of doctrine.”

There’s also the practicality of ever bringing the accused to face justice, given that they’re Chinese citizens working in the service of that government. “Some might wonder what good it does when these hackers are seemingly beyond our reach,” FBI deputy director David Bowdich said at Monday’s press conference. “We’ll use our unique authorities, our experiences, and our capabilities, with the help of our partners both at home or abroad, to fight this threat each and every day, and will continue to do so.”

For victims of the Equifax hack—nearly half of all US citizens—the apparent revelation that China was behind it doesn’t change much unless you’re someone the country might target for intelligence-gathering purposes. Personally identifiable information is leverage, after all. But for most people, the playbook remains the same: Keep an eye on your accounts, and get your settlement money.

The real concern is more existential. It’s unclear the extent to which this will exacerbate already troubled relationships between two global powers. Regardless, it’s unsettling how seemingly easy it was to pull off a data heist of such unprecedented proportion.

“There's a lot of interesting, mind-bending stuff here,” says Aitel. “Like that it only took four people to gather the private information of half of the United States population.”

Additional reporting by Lily Hay Newman



Mark Zuckerberg: Facebook must accept some state regulation



Co-founder says site sits between telephone company and newspaper as content provider

Facebook must accept some form of state regulation, acknowledging its status as a content provider somewhere between a newspaper and a telephone company, its co-founder Mark Zuckerberg has said.

He also claimed an era of clean democratic elections, free of interference by foreign governments, is closer due to Facebook now employing 35,000 staff working on monitoring content and security.

He admitted Facebook had been slow to understand the scale of the problem of foreign interference. He also defended his company from claims that it is leading to political polarisation, saying its purpose is to bring communities together.

Speaking at the Munich Security Conference, an annual high-level gathering of politicians, diplomats and security specialists, Zuckerberg sought to dispel the notion that his company had undermined democracy, weakened the social fabric or contributed to the weakening of the west through spreading distrust.

He said he supported state regulations in four fields, covering elections, political discourse, privacy and data portability: “We don’t want private companies making so many decisions balancing social equities without democratic processes.”

Zuckerberg, who is due to have fresh discussions with EU commission regulators on Monday, said that so long as enough people have weighed in to come up with an answer on regulation, the answer will not necessarily be right, but the process by which the decision is taken will itself help build greater trust in the internet.

By contrast, he said, authoritarian states were introducing highly controlled forms of internet that limited free expression. “I do think that there should be regulation in the west on harmful content … there’s a question about which framework you use for this,” Zuckerberg said during a question-and-answer session at the event.

“Right now there are two frameworks that I think people have for existing industries: there’s newspapers and existing media, and then there’s the telco-type model, which is the data just flows through you, but you’re not going to hold a telco responsible if someone says something harmful on a phone line. I actually think where we should be is somewhere in between,” he said.

He pointed out Facebook publishes 100bn pieces of content every day, adding: “It is simply not possible to have some kind of human editor responsible to check each one.”

Facebook’s responsibility for its content was not analogous to that of a newspaper editor, he said. Without elaborating, he said some kind of third regulatory structure was required, settled somewhere between newspapers and telephones.

Denying that Facebook’s choice of content led to confirmatory bias by only giving its subscribers information with which they agree, he said: “We try to show some balance of views.”

The average Facebook subscriber has about 200 friends, most of whom share similar views. “It is not a technology problem, it is a social affirmation problem,” he argued. The choice of what you see is based on the balance of what you share, rather than on choosing what you see. “If your cousin has had a baby we had better make sure that is near the top,” he said.

He said his firm had been slow to see how foreign powers were interfering in elections, but Facebook was now spending an amount on security and content equivalent to the total value of the company in 2012, and claimed this massive effort was producing a greater understanding about how to protect the integrity of elections. Nearly 1m accounts had been taken down, he said.

But he warned new domestic actors, as well as foreign powers, were seeking to disrupt elections. The outside forces were also becoming more sophisticated in covering their tracks by pretending their messages were coming from a variety of IP addresses in different countries.

Facebook was also offering election campaigns a new free service in which the candidate provides the internet details of campaign staff; if one or more staff members are hacked, the campaign’s security can be raised to a higher state of protection.

He said the firm had shifted from a reactive to a proactive model, so much so that 99% of terrorist content is taken down before any external complaint is made. In the case of hate speech, 80% of content is removed without notification, but Facebook’s artificial intelligence was still struggling to distinguish the small nuances between content that was hate speech and content condemning it, he said.

Asked by Ronen Bergman of the New York Times about Facebook and WhatsApp’s lawsuit against Israeli spyware company NSO Group, Zuckerberg shrugged off the idea that the case could damage governments’ ability to work against terrorism. “They can defend themselves in court if they think what they did is legal,” he said, “but our view is that people should not be trying to hack into software that billions of people around the world use to try to communicate securely.”
