
Why your cat is lousy at chess yet way smarter than even the most advanced AI


If you share your home with a dog or a cat, look at it carefully and you will get a good overview of everything we don’t know how to do in artificial intelligence.

“But my cat does nothing all day except sleep, eat and wash herself,” you may think. And yet your cat knows how to walk, run, jump (and land on her feet), hear, see, watch, learn, play, hide, be happy, be sad, be afraid, dream, hunt, eat, fight, flee, reproduce, educate her kittens – and the list is still very long.

Each of these actions requires processes that are not directly “intelligence” in the most common sense but are related to cognition and animal intelligence. All animals have their own cognition, from the spider that weaves its web to the guide dogs that help people find their way. Some can even communicate with us. Not by speech, of course, but cats and dogs don’t hesitate to use body language and vocalization – meowing, barking, wagging their tails – to get what they want.

Let’s look again at your cat. When she nonchalantly comes to rub up against you, or sits in front of her bowl or in front of a door, the message is quite clear: she wants a caress, is hungry or wants to go out (then back in, then out, then in…). She has learned to interact with you to achieve her goals.

Walking, a complex problem

Among all these cognitive skills, there is only a handful that we are beginning to know how to reproduce artificially. For example, bipedal locomotion – walking on two legs – might seem easy and natural to us, but it is actually extremely complicated for robotics, and it took decades of intensive research to build and program a robot that more or less walks properly on its own two legs. That is, without falling because of a small pebble under its foot or because a person simply walks by a little too close.

Remember that it takes a baby an average of one full year to learn how to walk, demonstrating the complexity of what may seem like a “simple” problem. And I’m only talking about walking, not hopscotch or, say, soccer.

Today, one of the biggest challenges in autonomous robotics is to design and build two-legged robots that can successfully play one of the most popular human team sports. RoboCup 2020, which brings together nearly 3,500 researchers and 3,000 bipedal robots, will take place next year in Bordeaux, France. There you’ll be able to observe them playing soccer, and while great strides have been made (literally), they remain distinctly clumsy and a long way from the thrills of the human World Cup.

Identifying isn’t understanding

What about object recognition? Today we know how to create computer algorithms that can do that, don’t we? While it is true that some can now name the content of almost any image, this does not relate to intelligence or cognition.

To understand this, you have to look at how these algorithms work. Supervised learning, which remains the most popular method, consists of presenting the program with images, each paired with a label describing its content. The total number of images is generally much higher than the number of labels: each label is associated with a very large number of images showing the object in different situations, from different angles, under different lighting, and so on.
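
To make this concrete, here is a minimal sketch of such a supervised training loop in Python with PyTorch. The data folder, the choice of network and the hyperparameters are illustrative assumptions, not the setup of any particular system described here.

```python
# Minimal sketch of supervised image classification (illustrative assumptions only).
# Assumes a folder of images organized by label, e.g. data/cat/..., data/dog/...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Each image is paired with a label; many images share the same label.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))  # a small standard network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # compare prediction with the given label
        loss.backward()                          # adjust weights to reduce the error
        optimizer.step()
```

Nothing in this loop “knows” what a cat is: it only adjusts numerical weights until the predicted labels match the given ones.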

For example, for an AI program to be able to recognize cats, up to one million images must be presented. By doing so, it will build an internal visual representation of the object by calculating a kind of average of all the images. But this representation is ultimately only a simple description that is not anchored in any reality. Humans can recognize a cat from its purr, the feel of its fur against the leg, the delicate scent of a litter box that’s overdue for a cleaning. All these and a hundred more say “cat” to us, but mean nothing to even the most sophisticated AI program.

To do so, an algorithm would need a body that allows it to experience the world. But then, could it understand what a drink is if it’s never thirsty? Could it understand fire if it’s never been burned? Could it understand the cold if it never shudders? When an algorithm “recognizes” an object, it doesn’t understand at all – really, not at all – the nature of that object. It only proceeds by cross-checking with examples previously presented. This explains why there have been a number of autonomous-car crashes. While roadways are a highly constrained form of the world, they remain visually and functionally complex – vulnerable users such as pedestrians and cyclists can too easily be overlooked, or one street element mistaken for another. And the consequences of AI’s shortcomings have sometimes been fatal.

The sensory experience of the world

What about humans? Show a real puppy to a child and she will be able to recognize any other puppy (even if she doesn’t know the word yet). Parents, by pointing at and naming things, will help the child develop language based on concepts that she has experienced before. But this learning, which may seem easy, even obvious, to us, is not.

This is beautifully illustrated by the life of Helen Keller, who lost her hearing, sight and power of speech at the age of two. Her educator, Anne Sullivan, tried for a long time to teach her words by tracing signs on the palm of Helen’s hand and then touching the corresponding object. These efforts were initially unsuccessful because Helen had no entry point into this strange dictionary. Until the day Anne took Helen to a well, let water run over her hand and…

“Suddenly I felt a misty consciousness as of something forgotten – a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! … Everything had a name, and each name gave birth to a new thought. As we returned to the house every object which I touched seemed to quiver with life.”

Those are the words of Helen Keller herself. She wrote them a few years later in her book The Story of My Life (1905). For her, on that precise day, the symbols were forever grounded in reality.

While spectacular progress has been made in the field of machine learning, grounding digital symbols in the real world remains completely unresolved. And without a solution to this problem, which is necessary but probably not sufficient, there will be no general artificial intelligence. So there are still a lot of things that we are far from knowing how to do with artificial intelligence. And remember, “elephants don’t play chess.”

This article is republished from The Conversation by Nicolas P. Rougier, research scientist in computational neuroscience, Université de Bordeaux, under a Creative Commons license. Read the original article.


Published 26 December 2019


Artist Refik Anadol Turns Data Into Art, With Help From AI


Giant stashes of data are valuable assets to corporations in industries from fossil fuels to finance. Artist Refik Anadol sees pools of data as something else—material for what he calls a new kind of “sculpture.”

Anadol creates mesmerizing art installations by seeking out interesting data sets and processing them into swirling visualizations of how computers capture the world and people in it. He does it by using techniques from artificial intelligence, specifically machine learning algorithms, to filter or expand on his raw material.

The results, shown on giant screens or projected onto walls or entire buildings, use data points in a kind of AI pointillism.

Anadol explains his creative process in a new WIRED video. It features works including Machine Hallucination, a 360-degree video installation made from 10 million photos of New York. Anadol used machine learning to group photos and morph between them, creating flickering images of the city as recorded by many different people. “It’s kind of like a collective memory,” he says. “A building in New York can be explored from multiple angles, from different times of the year.”


Anadol has also used his data sculptures to look inward, inside the human brain. After discovering his uncle did not recognize him due to the onset of Alzheimer's, the artist teamed up with neuroscientists to gather a new source of data. “I thought of the most precious and most private information that we hold as humanity,” he says.

The scientists used a hat studded with electrodes to capture the brain activity of people reflecting on childhood memories. Anadol turned the data into hypnotically moving fluids shown on a 20-foot-tall LED screen.

One theme of Anadol’s work is the symbiosis and tension between people and machines. The artist says his work is an example of how AI—like other technologies—will have a broad range of uses. “When we found fire, we cooked with it, we created communities; with the same technology we kill each other or destroy,” Anadol says. “Clearly AI is a discovery of humanity that has the potential to make communities, or destroy each other.”


Read more: https://www.wired.com/story/artist-refik-anadol-turns-data-art-help-ai/


Feds Are Content to Let Cars Drive, and Regulate, Themselves


The Trump administration Wednesday reaffirmed its policy to maintain a light touch in regulating self-driving vehicles, with a new document that is long on promoting the industry and silent on rules governing testing or operating the vehicles. “The takeaway from the [new policy] is that the federal government is all in” on automated driving systems, US transportation secretary Elaine Chao told an audience at CES in Las Vegas, where she announced the update.

Currently, the federal government offers voluntary safety guidelines for the 80-odd developers working on self-driving vehicles in the US, and it leaves most regulation to the states. Despite calls from some safety advocates—including the National Transportation Safety Board, following a fatal 2018 crash involving an Uber self-driving car—the updated policy doesn’t set out regulations for the tech. The Transportation Department has said it’s waiting for guidance from Congress, which has so far failed to pass any legislation related to self-driving vehicles.


The new policy seeks to demonstrate that the US government is firmly in developers’ corner. It outlines how the Trump administration has worked across 38 federal agencies—including the departments of Agriculture, Defense, and Energy, the White House, NASA, and the United States Postal Service—to unify its approach to self-driving, and to point billions towards its research and development. It says the government will help protect sensitive, AV-related intellectual property, and outlines tax incentives to those working on self-driving tech in the US. It also emphasizes the need for a unified industry approach to cybersecurity and consumer privacy. The DOT says it will publish a “comprehensive plan” for safe deployment of self-driving vehicles in the US sometime this year.

A full-speed-ahead approach is needed, Chao said, because “automated vehicles have the potential to save thousands of lives, annually.” Unlike humans, robots don’t get drunk, tired, or distracted (though they have lots of learning to do before they can be deployed on a wide scale). According to government data, 36,560 people died in highway crashes in 2018, 2.4 percent fewer than the prior year. Developers often argue it's too soon to regulate self-driving vehicles because the tech is still immature.

The policy reflects the light and tech-neutral touch the Trump administration has generally taken with developing tech, even as fears about surveillance and privacy swirl. Also on Wednesday at CES, US chief technology officer Michael Kratsios outlined the administration’s approach to artificial intelligence, which calls for development supported by “American values” and a process of “risk assessment and cost-benefit analyses” before regulatory action.


In the US, states have taken the lead in regulating the testing of self-driving vehicles, and they are demanding varying levels of transparency from companies like Waymo, Cruise, Uber, and Aurora that are operating on public roads. (The Transportation Department has said that it provides technical assistance to state regulators.) As a result, no one has a crystal clear picture of where testing is happening, or how the tech is developing overall. (Waymo, which is currently carrying a limited number of paying passengers in totally driverless vehicles in metro Phoenix, is widely thought to be in the lead.) The National Highway Traffic Safety Administration, the federal government’s official auto regulator, has politely asked each developer to conduct a voluntary safety self-assessment and outline its approach to safety. But just 18 companies have submitted those assessments so far, and the quality of information within them ranges widely.


Not all road safety advocates are pleased with that approach. “The DOT is supposed to ensure that the US has the safest transportation system in the world, but it continues to put this mission second, behind helping industry rush automated vehicles,” Ethan Douglas, a senior policy analyst for cars and product safety at Consumer Reports, said in a statement.

Some calls are coming from within the US government. In November, the National Transportation Safety Board released its final report on a fatal 2018 collision between a testing Uber self-driving vehicle and an Arizona pedestrian crossing a road. The watchdog agency’s recommendations included calls to make the safety assessments mandatory and to set up a system through which NHTSA might evaluate them. “We’re just trying to put some bounds on the testing on the roadways,” NTSB chair Robert Sumwalt said. At the time, NHTSA said it would “carefully review” the recommendations.


Read more: https://www.wired.com/story/feds-content-cars-drive-regulate-themselves/


Two Sigma Ventures raises $288M, complementing its $60B hedge fund parent


Eight years ago, Two Sigma Investments began an experiment in early-stage investing.

The hedge fund, focused on data-driven quantitative investing, was well on its way to amassing the $60 billion in assets under management that it currently holds, but wanted more exposure to early-stage technology companies, so it created a venture capital arm, Two Sigma Ventures.

At the time of the firm’s launch it made a series of investments, totaling about $70 million, exclusively with internal capital. The second fund was a $150 million vehicle that was backed primarily by the hedge fund, but included a few external limited partners.

Now, eight years and several investments later, the firm has raised $288 million in new funding from outside investors and is pushing to prove out its model, which leverages its parent company’s network of 1,700 data scientists, engineers and industry experts to support development inside its portfolio.

“The world is becoming awash in data and there’s continuing advances in the science of computing,” says Two Sigma Ventures co-founder Colin Beirne. “We thought eight years ago, when we started, that more and more companies of the future would be tapping into those trends.”

Beirne describes the firm’s investment thesis as centered on backing data-driven companies in any sector, including consumer technology companies such as the social network monitoring application Bark and the high-performance sports wearable company Whoop.

Alongside Beirne, Two Sigma Ventures is led by three other partners: Dan Abelon, who co-founded SpeedDate and sold it to IAC; Lindsey Gray, who launched and led NYU’s Entrepreneurial Institute; and Villi Iltchev, a former general partner at August Capital.

Recent investments in the firm’s portfolio include Firedome, an endpoint security company; NewtonX, which provides a database of experts; Radar, a location-based data analysis company; and Terray Therapeutics, which uses machine learning for drug discovery.

Other companies in the firm’s portfolio are farther afield. These include the New York-based Amper Music, which uses machine learning to make new music; and Zymergen, which uses machine learning and big data to identify genetic variations useful in pharmaceutical and industrial manufacturing.

Currently, the firm’s portfolio is divided between enterprise investments, consumer-facing deals and healthcare-focused technologies. The biggest bucket is enterprise software companies, which Beirne estimates represents about 65% of the portfolio. He expects the firm to become more active in healthcare investments going forward.

“We really think that the intersection of data and biology is going to change how healthcare is delivered,” Beirne says. “That looks dramatically different a decade from now.”

To seed the market for investments, the firm’s partners have also backed the Allen Institute’s investment fund for artificial intelligence startups.

Together with Sequoia, KPCB and Madrona, Two Sigma recently invested in a $10 million financing to seed companies that are working with AI. “This is a strategic investment from partner capital,” says Beirne.

Typically startups can expect Two Sigma to invest between $5 million and $10 million with its initial commitment. The firm will commit up to roughly $15 million in its portfolio companies over time.

Read more: https://techcrunch.com/2020/01/22/two-sigma-ventures-raises-288-million-complementing-its-60-billion-hedge-fund-parent/
