
Microsoft wields ML to catch child predators, city drops 7-year facial-recognition experiment after no arrests…


Roundup Welcome to the first AI roundup of this year. AI continues to spread like wildfire and everyone wants a slice of the pie – even Hollywood. Read on for the latest flop in facial recognition, too.

Hollywood is cosying up to AI algos: Warner Bros, the massive American film studio and entertainment conglomerate, is employing algorithmic tools to help it decide if a film will become a blockbuster, or go bust at the cinema.

Studios like Warner Bros have a limited budget to splash on new projects every year. Directors bid fiercely to fund the films they believe will make everyone the most profit. But there are a vast number of factors to consider, and weighing them all is time consuming, not to mention wasted effort if the film eventually flops. So, why not employ a machine to help you decide?

Warner Bros have signed a deal with Cinelytic, an AI analytics startup based in Los Angeles, to do just that, according to The Hollywood Reporter. Cinelytic’s software will help predict a particular film’s profits, informing decisions such as what to release and where.

“The platform reduces executives’ time spent on low-value, repetitive tasks and instead focuses on generating actionable insights for packaging, green-lighting, marketing and distribution decisions in real time,” according to a statement from Cinelytic.

The ultimate decision on whether or not to fund a film, however, still rests with humans. Hopefully that’ll prevent more mistakes like Cats.

Over Christmas… Nvidia improved its StyleGAN software – capable of generating realistic photos of faces, buildings, and so on, from scratch – to version two, ironing out the artifacts that give away the fact that the images were imagined by a computer.

Microsoft is licensing software that catches child groomers on Xbox: Redmond has deployed a tool it has been developing with academics to prevent online child abuse.

Codenamed Project Artemis, the software analyzes text conversations, rating how inappropriate the interactions are and whether the messages should be flagged for human moderators to review. Those humans then report suspected sexual exploitation to law enforcement.

Microsoft’s chief digital safety officer Courtney Gregoire did not reveal how Project Artemis works in a blog post, this week, so we spoke to the boffins behind it directly.

The tool was developed internally with the help of academics, who participated in a hackathon in 2018. Hany Farid, a professor in the University of California, Berkeley’s department of electrical engineering and computer science and its school of information, told The Register that no fancy deep learning was used; instead, the system is based on some “fairly standard non-linear regression to learn a numeric risk score based on the text-based conversation between two people.”
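Farid didn’t go into implementation specifics beyond that, but a minimal sketch of that kind of system (text features fed into a non-linear regressor that outputs a risk score) might look like the following Python. The TF-IDF features, gradient-boosted-tree model, toy training data, and flagging threshold are all our own assumptions, not details of Artemis itself:

```python
# Minimal, hypothetical sketch of a text-conversation risk scorer.
# Project Artemis's real features and training data are not public; this
# only shows the general shape Farid describes: text features fed into a
# non-linear regression that outputs a numeric risk score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

# Toy training data: conversations paired with hand-assigned risk scores.
conversations = [
    "good game, want to team up again tomorrow?",
    "this is our secret, don't tell your parents we talk",
    "trade you a skin for your rare sword",
]
risk_scores = [0.05, 0.95, 0.10]

# TF-IDF text features feeding a non-linear (tree-based) regressor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      GradientBoostingRegressor())
model.fit(conversations, risk_scores)

# Conversations scoring above some threshold go to human moderators.
score = model.predict(["keep this between us, ok?"])[0]
if score > 0.8:
    print(f"flag for human review (risk score {score:.2f})")
```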

Companies interested in licensing the technology should contact Thorn, a tech company building software applications aimed at protecting children against sexual abuse.

Uh oh! Contractors have snooped on thousands of Skype calls: Stop us if you’ve heard this one before, but contractors working on behalf of tech companies have been listening to sensitive audio clips gleaned from users in the hopes of improving those companies’ services.

This time it’s Skype owner Microsoft. A former contractor working in Beijing revealed that he had listened to thousands of sensitive and disturbing recordings captured via Skype and Cortana. There was little security, and workers in China could access the clips via a web app running in Google Chrome, as reported by The Guardian.

The leaker was also encouraged to use the same password for all his Microsoft accounts, apparently. Contractors were not given any security training either, a risky move considering the data could be stolen by miscreants. He said he heard “all kinds of unusual conversations, including what could have been domestic violence.”

Microsoft has since said that it has updated its privacy statement to make it clear that humans are sometimes listening in on Skype calls or interactions with its voice-enabled assistant Cortana. And it said that recorded audio clips flagged for review are only ten seconds long, so that contractors don’t have access to longer conversations.

Here’s how the White House wants America’s companies to develop AI tech: The Trump Administration is working to expand its national AI strategy to broach the topic of regulation.

There are few rules and little oversight on how AI technology should be used by the private sector at the moment. So the US government wants to take a stab at changing that by, erm, “proposing a first-of-its-kind set of regulatory principles”.

These principles probably won’t do much: they’re not real policies unless backed up by law. Nevertheless, the Trump Administration wants to make some sort of attempt at guiding regulation.

“Must we decide between embracing this emerging technology and following our moral compass?” the chief technology officer of the US, Michael Kratsios, wrote in an op-ed published in Bloomberg this week.

“That’s a false choice. We can advance emerging technology in a way that reflects our values of freedom, human rights and respect for human dignity,” he continued. Kratsios proposes that federal agencies make it easier for the public, academics, companies, and non-profits to comment and give feedback on any AI policies drawn up.

Agencies like the National Institute of Standards and Technology (NIST) should assess a product’s risk and cost before regulating a particular technology. They should also take into account issues like transparency, safety, security and fairness that support American values.

“Americans have long embraced technology as a tool to improve people’s lives. With artificial intelligence, we are ready to do it again,” Kratsios concluded.

San Diego has ended its seven-year experiment with facial recognition: Finally, here’s a long read on how San Diego’s law enforcement used facial recognition over seven years to hunt for criminals prowling the American city’s streets.

A network of 1,300 cameras, embedded in smartphones and tablets operated by staff, recorded over 65,000 faces from 2012 to 2019. These images were then run against a database of mugshots to look for potential matches.
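Fast Company’s report doesn’t detail the system’s internals, but gallery matching of this kind typically works by computing a compact embedding of the captured face and comparing it against precomputed embeddings of the mugshot database. Here’s a rough sketch of that general idea using the open-source face_recognition library; the file names and threshold are illustrative, not details of San Diego’s actual setup:

```python
# Rough sketch of mugshot gallery matching using the open-source
# face_recognition library. File names are hypothetical; San Diego's
# actual pipeline and thresholds were not disclosed.
import face_recognition

# Precompute an embedding for each mugshot in the gallery.
mugshot_paths = ["mugshot_001.jpg", "mugshot_002.jpg"]
gallery = [face_recognition.face_encodings(
               face_recognition.load_image_file(path))[0]
           for path in mugshot_paths]

# Embed the face captured in the field.
probe_image = face_recognition.load_image_file("field_capture.jpg")
probe = face_recognition.face_encodings(probe_image)[0]

# Smaller distance means more similar; 0.6 is the library's usual cutoff.
for path, dist in zip(mugshot_paths,
                      face_recognition.face_distance(gallery, probe)):
    if dist < 0.6:
        print(f"possible match: {path} (distance {dist:.2f})")
```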

And over those seven years, not a single arrest resulted from the technology, according to Fast Company. Bizarrely, police didn’t track the results of the experiment, so there is no solid evaluation of the system’s performance.

As of 2020, San Diego has shut down the experiment. You can read more about that here. ®


Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/13/ai_roundup_100120/



Artist Refik Anadol Turns Data Into Art, With Help From AI


Giant stashes of data are valuable assets to corporations in industries from fossil fuels to finance. Artist Refik Anadol sees pools of data as something else—material for what he calls a new kind of “sculpture.”

Anadol creates mesmerizing art installations by seeking out interesting data sets and processing them into swirling visualizations of how computers capture the world and people in it. He does it by using techniques from artificial intelligence, specifically machine learning algorithms, to filter or expand on his raw material.

The results, shown on giant screens or projected onto walls or entire buildings, use data points in a kind of AI pointillism.

Anadol explains his creative process in a new WIRED video. It features works including Machine Hallucination, a 360-degree video installation made from 10 million photos of New York. Anadol used machine learning to group photos and morph between them, creating flickering images of the city as recorded by many different people. “It’s kind of like a collective memory,” he says. “A building in New York can be explored from multiple angles, from different times of the year.”
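Anadol hasn’t published the pipeline behind Machine Hallucination, but grouping a huge photo collection by visual similarity is commonly done by embedding each image with a pretrained network and clustering the embeddings. A toy sketch of that general approach, with made-up file paths; nothing here is specific to Anadol’s actual toolchain:

```python
# Toy sketch: group photos by visual similarity by embedding each image
# with a pretrained CNN and clustering the embeddings. Illustrates the
# general technique only; Anadol's actual pipeline is not public.
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import KMeans

# A pretrained ResNet with its classifier head removed is a feature extractor.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

prep = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical photo files; a real run would use millions of images.
paths = ["nyc_0001.jpg", "nyc_0002.jpg", "nyc_0003.jpg", "nyc_0004.jpg"]
with torch.no_grad():
    feats = torch.stack([
        resnet(prep(Image.open(p).convert("RGB")).unsqueeze(0)).squeeze(0)
        for p in paths
    ])

# Photos sharing a cluster label are visually similar: candidates to morph between.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats.numpy())
print(dict(zip(paths, labels.tolist())))
```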


Anadol has also used his data sculptures to look inward, inside the human brain. After discovering his uncle did not recognize him due to the onset of Alzheimer's, the artist teamed up with neuroscientists to gather a new source of data. “I thought of the most precious and most private information that we hold as humanity,” he says.

The scientists used a hat studded with electrodes to capture the brain activity of people reflecting on childhood memories. Anadol turned the data into hypnotically moving fluids shown on a 20-foot-tall LED screen.

One theme of Anadol’s work is the symbiosis and tension between people and machines. The artist says his work is an example of how AI—like other technologies—will have a broad range of uses. “When we found fire, we cooked with it, we created communities; with the same technology we kill each other or destroy,” Anadol says. “Clearly AI is a discovery of humanity that has the potential to make communities, or destroy each other.”


Read more: https://www.wired.com/story/artist-refik-anadol-turns-data-art-help-ai/


Feds Are Content to Let Cars Drive, and Regulate, Themselves


The Trump administration Wednesday reaffirmed its policy to maintain a light touch in regulating self-driving vehicles, with a new document that is long on promoting the industry and silent on rules governing testing or operating the vehicles. “The takeaway from the [new policy] is that the federal government is all in” on automated driving systems, US transportation secretary Elaine Chao told an audience at CES in Las Vegas, where she announced the update.

Currently, the federal government offers voluntary safety guidelines for the 80-odd developers working on self-driving vehicles in the US, and it leaves most regulation to the states. Despite calls from some safety advocates—including the National Transportation Safety Board, following a fatal 2018 crash involving an Uber self-driving car—the updated policy doesn’t set out regulations for the tech. The Transportation Department has said it’s waiting for guidance from Congress, which has so far failed to pass any legislation related to self-driving vehicles.


The new policy seeks to demonstrate that the US government is firmly in developers’ corner. It outlines how the Trump administration has worked across 38 federal agencies—including the departments of Agriculture, Defense, and Energy, the White House, NASA, and the United States Postal Service—to unify its approach to self-driving, and to point billions towards its research and development. It says the government will help protect sensitive, AV-related intellectual property, and outlines tax incentives to those working on self-driving tech in the US. It also emphasizes the need for a unified industry approach to cybersecurity and consumer privacy. The DOT says it will publish a “comprehensive plan” for safe deployment of self-driving vehicles in the US sometime this year.

A full-speed-ahead approach is needed, Chao said, because “automated vehicles have the potential to save thousands of lives, annually.” Unlike humans, robots don’t get drunk, tired, or distracted (though they have lots of learning to do before they can be deployed on a wide scale). According to government data, 36,560 people died in highway crashes in 2018, 2.4 percent fewer than the prior year. Developers often argue it's too soon to regulate self-driving vehicles because the tech is still immature.

The policy reflects the light and tech-neutral touch the Trump administration has generally taken with developing tech, even as fears about surveillance and privacy swirl. Also on Wednesday at CES, US chief technology officer Michael Kratsios outlined the administration’s approach to artificial intelligence, which calls for development supported by “American values” and a process of “risk assessment and cost-benefit analyses” before regulatory action.


In the US, states have taken the lead in regulating the testing of self-driving vehicles, and they are demanding varying levels of transparency from companies like Waymo, Cruise, Uber, and Aurora that are operating on public roads. (The Transportation Department has said that it provides technical assistance to state regulators.) As a result, no one has a crystal clear picture of where testing is happening, or how the tech is developing overall. (Waymo, which is currently carrying a limited number of paying passengers in totally driverless vehicles in metro Phoenix, is widely thought to be in the lead.) The National Highway Traffic Safety Administration, the federal government’s official auto regulator, has politely asked each developer to conduct a voluntary safety self-assessment and outline its approach to safety. But just 18 companies have submitted those assessments so far, and the quality of information within them ranges widely.


Not all road safety advocates are pleased with that approach. “The DOT is supposed to ensure that the US has the safest transportation system in the world, but it continues to put this mission second, behind helping industry rush automated vehicles,” Ethan Douglas, a senior policy analyst for cars and product safety at Consumer Reports, said in a statement.

Some calls are coming from within the US government. In November, the National Transportation Safety Board released its final report on a fatal 2018 collision between a testing Uber self-driving vehicle and an Arizona pedestrian crossing a road. The watchdog agency’s recommendations included calls to make the safety assessments mandatory and to set up a system through which NHTSA might evaluate them. “We’re just trying to put some bounds on the testing on the roadways,” NTSB chair Robert Sumwalt said. At the time, NHTSA said it would “carefully review” the recommendations.


Read more: https://www.wired.com/story/feds-content-cars-drive-regulate-themselves/


Two Sigma Ventures raises $288M, complementing its $60B hedge fund parent


Eight years ago, Two Sigma Investments began an experiment in early-stage investing.

The hedge fund, focused on data-driven quantitative investing, was well on its way to amassing the $60 billion in assets under management that it currently holds, but wanted more exposure to early-stage technology companies, so it created a venture capital arm, Two Sigma Ventures.

At the time of the firm’s launch it made a series of investments, totaling about $70 million, exclusively with internal capital. The second fund was a $150 million vehicle that was backed primarily by the hedge fund, but included a few external limited partners.

Now, eight years and several investments later, the firm has raised $288 million in new funding from outside investors and is pushing to prove out its model, which leverages its parent company’s network of 1,700 data scientists, engineers and industry experts to support development inside its portfolio.

“The world is becoming awash in data and there’s continuing advances in the science of computing,” says Two Sigma Ventures co-founder Colin Beirne. “We thought eight years ago when we started that more and more companies of the future would be tapping into those trends.”

Beirne describes the firm’s investment thesis as centered on backing data-driven companies across any sector, from consumer technology companies like the social network monitoring application Bark to the high-end sports wearable maker Whoop.

Alongside Beirne, Two Sigma Ventures is led by three other partners: Dan Abelon, who co-founded SpeedDate and sold it to IAC; Lindsey Gray, who launched and led NYU’s Entrepreneurial Institute; and Villi Iltchev, a former general partner at August Capital.

Recent investments in the firm’s portfolio include Firedome, an endpoint security company; NewtonX, which provides a database of experts; Radar, a location-based data analysis company; and Terray Therapeutics, which uses machine learning for drug discovery.

Other companies in the firm’s portfolio are farther afield. These include the New York-based Amper Music, which uses machine learning to make new music; and Zymergen, which uses machine learning and big data to identify genetic variations useful in pharmaceutical and industrial manufacturing.

Currently, the firm’s portfolio is divided between enterprise investments, consumer-facing deals and healthcare-focused technologies. The biggest bucket is enterprise software companies, which Beirne estimates represents about 65% of the portfolio. He expects the firm to become more active in healthcare investments going forward.

“We really think that the intersection of data and biology is going to change how healthcare is delivered,” Beirne says. “That looks dramatically different a decade from now.”

To seed the market for investments, the firm’s partners have also backed the Allen Institute’s investment fund for artificial intelligence startups.

Together with Sequoia, KPCB and Madrona, Two Sigma recently invested in a $10 million financing to seed companies that are working with AI. “This is a strategic investment from partner capital,” says Beirne.

Typically startups can expect Two Sigma to invest between $5 million and $10 million with its initial commitment. The firm will commit up to roughly $15 million in its portfolio companies over time.

Read more: https://techcrunch.com/2020/01/22/two-sigma-ventures-raises-288-million-complementing-its-60-billion-hedge-fund-parent/
