
Artificial Intelligence

Chatbots spotlight machine learning’s trillion-dollar potential




The global industry potential of artificial intelligence is well-documented, yet the vision of this AI future is uncertain.

AI and automation trends are generating significant debate among economists and governments, particularly around employment impact and uncertain social outcomes. The mainstream attention is warranted. According to PwC, AI “could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined.”

AI is at a crossroads, and its long-term outlook is still hotly debated. Despite social media giants, automotive companies and numerous other industries investing hundreds of billions of dollars in AI, many automation technologies are not yet directly generating revenue and instead are forecast to become profitable in the coming decades. This creates additional uncertainty about AI’s true market potential. The realistic value of AI is unknown, yet, as the technology advances, its ultimate impact could be of great consequence to virtually every economy.

There are many reasons to view AI’s future through an optimistic lens, however: chatbots provide significant evidence of AI’s positive impact on both business growth and employment markets. Today, chatbots are increasingly capable of mimicking human interactions and conversations to assist business-to-business, business-to-consumer, business-to-government, advertising and other diverse audiences. The evolution of the cognitive computer science behind conversational chatbots is perhaps one of the best examples of AI technologies driving revenue. Further, chatbot technology shows some of the greatest promise for augmenting, rather than replacing, human workers.

AI is driving value while augmenting human workers

Chatbots are delivering real revenue today for some of the world’s leading financial services (Bank of America), retail (Levi’s) and technology companies (Zendesk). We’re seeing more consumers taking the next step in a transaction, or even making a purchase decision, based on conversations with chatbots. Beyond driving sales, chatbots have numerous applications for a wide range of organizations. Nonprofits, NGOs and even political campaigns find value in deploying chatbots to help handle the influx of inquiries from stakeholders and relevant audiences.

Rather than these chatbots replacing human workers, organizations are finding chatbots to be a helpful and value-creating opportunity that frees employees to focus on more strategic tasks. Apple’s Siri, Amazon Alexa and Microsoft Cortana aren’t replacing executive assistants today, but these technologies are all capable of supporting the executive assistant function in the workplace.

Gartner predicts AI augmentation, defined as a “human-centered partnership model of people and AI working together to enhance cognitive performance,” could generate $2.9 trillion of business value by 2021. Many industries see potential for chatbots to augment functions like sales, customer support and IT, enabling workers to create value in more strategic ways. Bain & Company finds chatbots to be among the most notable examples of artificial intelligence and automation in practice: “Companies use AI applications to understand industry trends, manage their workforce, address problems, power chatbots and personalize content to enable self-service.”

Clearly, the implications of scaled, human-like engagement are stunning. A chatbot’s ability to simultaneously hold tens of thousands of conversations — pulling from many millions of data points — is comparable to what a human customer service rep could accomplish in more than 1,000 years of nonstop work. Scaling customer service via AI allows service professionals to focus on the big picture and more complex issues, and it provides rich data on customer interactions. We anticipate seeing more companies look to build better customer service experiences through chatbots, as Google and Salesforce announced in April.

The transformative impact of chatbots across industries

From our research and work with leading global companies, it’s clear that enterprises are finding that chatbots create tremendous value today while supporting both employment and long-term business growth. Ultimately, chatbots are on track to showcase some of the most optimistic examples of AI augmentation. Consider three examples:

Published at Thu, 12 Dec 2019 15:26:07 +0000



Deepfakes are getting easier to make and the internet’s just not ready




With the proliferation of deepfake apps and features, AI-powered media manipulation technology is becoming more mainstream.
Image: Elyse Samuels / The Washington Post via Getty Images

One of the coolest videos I’ve seen in the past year is a YouTube clip from Late Show with David Letterman featuring actor and comedian Bill Hader.

Or… was that actually Tom Cruise? It’s hard to tell sometimes because they keep seamlessly switching back and forth.

So, what exactly are you watching here? Well, someone took an unedited clip of Letterman interviewing Hader and then swapped in Cruise’s face using artificial intelligence.

The video is what is known as a deepfake, or manipulated media created through the power of AI. 

Deepfakes can be as straightforward as face-swapping one actor onto another in a clip from your favorite movie. Or you can have an impersonator provide audio, sync it to generated mouth movements, and create an entirely new moment for the targeted individual. This Obama deepfake, voiced by Jordan Peele, is a perfect example of that usage.

While the manipulated media is ultimately generated by AI, the human behind it still needs time and patience to craft a good-quality deepfake. In the case of that altered Letterman clip, the creator had to take the original clip and feed it to a powerful cloud computer alongside a slew of varying still images of Tom Cruise’s face.

During this time, the computer is, in essence, studying the images and video. It’s “learning” how best to swap Cruise’s face onto Hader’s and output a flawless piece of manipulated video. Sometimes the AI takes weeks to perfect the deepfake. Plus, it can be expensive: you’ll need a computer with some pretty powerful specs, or you’ll have to rent a virtual machine in the cloud, to pull off high-quality deepfake creation.
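What the computer is “learning” is usually a shared encoder with one decoder per person: the encoder strips identity out of each frame, and each decoder renders the shared code back in its own person’s likeness. Here is a heavily stylized sketch of that idea (scalars stand in for face images, and the “learned” identity is just each person’s mean — an illustrative analogy, not a real model):

```python
import random

# Stylized sketch of the shared-encoder / per-person-decoder setup behind
# face-swap deepfakes. "Faces" are scalars clustered around each person's
# identity; frame-to-frame noise stands in for pose and expression.
random.seed(0)
faces_a = [5.0 + random.gauss(0, 0.1) for _ in range(200)]   # person A's frames
faces_b = [-3.0 + random.gauss(0, 0.1) for _ in range(200)]  # person B's frames

# "Training": the learned identity component here is just each person's mean.
mean_a = sum(faces_a) / len(faces_a)
mean_b = sum(faces_b) / len(faces_b)

def encode(face, identity):
    return face - identity   # strip identity, keep frame-specific detail

def decode(code, identity):
    return code + identity   # re-render the code in a given identity

# The swap: encode one of A's frames, then decode it with B's identity.
swapped = decode(encode(faces_a[0], mean_a), mean_b)
```

In a real deepfake the encoder and decoders are deep convolutional networks trained for days or weeks, which is where the GPU time and cloud bills come from; the swap step itself is the same encode-with-A, decode-with-B move shown above.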

But, that’s quickly changing. Big tech companies are jumping on the trend and developing their own software so that users can create deepfake content. And now, deepfakes are becoming easier to create.

Earlier this week, the face-swapping mobile app Doublicat launched. Founded by artificial intelligence company RefaceAI, Doublicat is perhaps the simplest media manipulation tool yet. Users just need to download the app, snap a selfie, and choose from one of hundreds of GIFs portraying popular scenes from movies, TV shows, and the internet. Within seconds, your short, looping deepfake GIF is ready to share. 


The GIFs are fairly simple and likely chosen based on which image would be easiest for the app to spit out an accurate face swap. It’s far from perfect, but it’s extremely fast. And what it can do with even low-quality selfies is impressive. In time, the technology is only going to get even better.

Doublicat told Mashable that “updates will be coming to allow users to upload their own GIFs, search for GIFs in-app, and use pictures from their phone’s camera roll.” 

Doublicat may be the simplest media manipulation tool in the U.S., but similar apps exist in international markets.

“Zao, Snap’s new Cameos, Doublicat — face swapping is becoming a commodity thanks to creative entrepreneurs from China and Ukraine,” said Jean-Claude Goldenstein, founder and CEO of CREOpoint, a firm that helps businesses handle disinformation. Goldenstein points out that Snapchat recently acquired AI Factory, the company behind its Cameos feature, for $166 million.

TikTok, the massively popular video app owned by China-based ByteDance, has reportedly already developed a yet-to-launch deepfake app as well.

But, it’s not all fun and games.

“A deepfake can ruin a reputation in literally seconds, so if public figures don’t start prepping for these threats before they hit, they’re going to be in for a rude awakening if they ever have the misfortune of being featured in one of these videos,” Marathon Strategies CEO Phil Singer told Mashable. Singer’s PR firm recently launched a service specifically to deal with disinformation via deepfakes.

To understand the concern behind this seemingly harmless tech that’s been used to create funny videos, one needs to understand how deepfakes first rose to prominence.

In late 2017, the term “deepfake” was coined on Reddit to refer to AI-manipulated media. The best examples at the time were some funny Nicolas Cage videos. But then the fake sex videos took over. Using deepfake technology, users started taking their favorite Hollywood actresses and face-swapping them into adult films. Reddit moved to ban pornographic deepfakes in 2018 and expanded its deepfake policy just last week.

In an age of fake news and disinformation easily spread via the internet, it doesn’t take long to see how fake pornographic videos can ruin one’s life. Factor in that we’re now in a presidential election year, the first since coordinated disinformation campaigns ran amok in 2016, and you’ll understand why people are worried about malicious uses of this growing technology.

“We’ve gone from worrying about sharing our personal data to now having to worry about sharing our personal images,” says Singer. “People need to be extra judicious about sharing images of themselves because one never knows how they will be used.”

“It is only a matter of time before they become as ubiquitous as any of the social media tools people currently use,” he continued.

Most alarming is that some of the world’s biggest tech companies are still wondering how to combat nefarious deepfakes.

Just this month, Facebook announced its deepfake ban. One problem, though: How do you spot a deepfake? It’s an issue the largest social networking platform on the planet still hasn’t been able to properly solve. 

Facebook launched its Deepfake Detection Challenge to work with researchers and academics on solving this problem, but we’re still not there and we’ll likely never be there one hundred percent. 

According to Facebook’s Deepfake Detection website: “The AI technologies that power tampered media are rapidly evolving, making deepfakes so hard to detect that, at times, even human evaluators can’t reliably tell the difference.”

“That’s a serious problem since AI can’t reliably detect fake news or fact-check fast enough,” explains CREOpoint’s Goldenstein.

During our exchange, Goldenstein sent me the following quote: “A lie is heard halfway around the world before the truth has a chance to put its pants on.”

While looking up the quote’s origin, interestingly, I discovered that different versions of the quote have often been misattributed over the years to Winston Churchill. 

If one really wanted to double down on the belief that Churchill did say this, it seems like it wouldn’t be all that difficult to create a deepfake that “proves” he did.



Artificial Intelligence

Echodyne steers its high-tech radar beam on autonomous cars with EchoDrive




Echodyne set the radar industry on its ear when it debuted its pocket-sized yet hyper-capable radar unit for drones and aircraft. But these days all the action is in autonomous vehicles — so the company reinvented its technology to make a unique sensor that doesn’t just see things but can communicate intelligently with the AI behind the wheel.

EchoDrive, the company’s new product, is aimed squarely at AVs, looking to complement lidar and cameras with automotive radar that’s as smart as you need it to be.

The chief innovation at Echodyne is the use of metamaterials, or highly engineered surfaces, to create a radar unit that can direct its beam quickly and efficiently anywhere in its field of view. That means that it can scan the whole horizon quickly, or repeatedly play the beam over a single object to collect more detail, or anything in between, or all three at once for that matter, with no moving parts and little power.

But the device Echodyne created for release in 2017 was intended for aerospace purposes, where radar is more widely used, and its capabilities were suited for that field: a range of kilometers but a slow refresh rate. That’s great for detecting and reacting to distant aircraft, but not at all what’s needed for autonomous vehicles, which are more concerned with painting a detailed picture of the scene within a hundred meters or so.

“They said they wanted high resolution, automotive bands [i.e. radiation wavelengths], high refresh rates, wide field of view, and still have that beam-steering capability — can you build a radar like that?” recalled Echodyne co-founder and CEO Eben Frankenberg. “And while it’s taken a little longer than I thought it would, the answer is yes, we can!”

The EchoDrive system meets all the requirements set out by the company’s automotive partners and testers, with refresh rates up to 60 Hz, higher resolution than any other automotive radar, and all the other goodies.

An example of some raw data — note that Doppler information lets the system tell which objects are moving which direction.

The company is focused specifically on level 4-5 autonomy, meaning their radar isn’t intended for basic features like intelligent cruise control or collision detection. But radar units on cars today are intended for that, and efforts to juice them up into more serious sensors are dubious, Frankenberg said.

“Most ADAS [advanced driver assist system] radars have relatively low resolution in a raw sense, and do a whole lot of processing of the data to make it clearer and make it more accurate as far as the position of an object,” he explained. “The level 4-5 folks say, we don’t want all that processing because we don’t know what they’re doing. They want to know you’re not doing something in the processing that’s throwing away real information.”

More raw data, and less processing — but Echodyne’s tech offers something more. Because the device can change the target of its beam on the fly, it can do so in concert with the needs of the vehicle’s AI.

Say an autonomous vehicle’s brain has integrated the information from its suite of sensors and can’t be sure whether an object it sees a hundred meters out is a moving or stationary bicycle. It can’t tell its regular camera to get a better image, or its lidar to send more lasers. But it can tell Echodyne’s radar to focus its beam on that object for a bit longer or more frequently.

The two-way conversation between sensor and brain, which Echodyne calls cognitive radar or knowledge-aided measurement, isn’t really an option yet — but it will have to be if AVs are going to be as perceptive as we’d like them to be.
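A minimal way to picture knowledge-aided measurement is a beam-time scheduler that splits each radar frame between a background scan and dwell requests coming down from the driving stack. The sketch below is entirely hypothetical — the class and field names are invented for illustration, not Echodyne’s API:

```python
from dataclasses import dataclass, field

@dataclass
class DwellRequest:
    azimuth_deg: float   # bearing of the object the planner is unsure about
    priority: int        # higher priority = served first

@dataclass
class BeamScheduler:
    """Splits each frame's beam-time budget between high-priority dwells
    requested by the vehicle's brain and the routine background scan."""
    slots_per_frame: int = 10
    requests: list = field(default_factory=list)

    def request_dwell(self, req):
        self.requests.append(req)

    def plan_frame(self):
        # Serve the highest-priority dwells first, but cap them so the
        # background scan is never starved entirely.
        self.requests.sort(key=lambda r: -r.priority)
        max_dwells = self.slots_per_frame // 2
        dwells = self.requests[:max_dwells]
        self.requests = self.requests[max_dwells:]
        return {"dwells": dwells,
                "scan_slots": self.slots_per_frame - len(dwells)}

# The brain can't be sure about that bicycle at 12 degrees azimuth, so it
# asks the radar to spend extra beam time there.
sched = BeamScheduler()
sched.request_dwell(DwellRequest(azimuth_deg=12.0, priority=5))
frame = sched.plan_frame()
```

The design point is that the prioritization lives in the vehicle’s brain, which sees all the sensors, while the radar merely executes the requested beam positions — exactly the division of labor Frankenberg describes below.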

Some companies, Frankenberg pointed out, are putting the responsibility for deciding which objects or regions need more attention on the sensors themselves — a camera may very well be able to decide where to look next in some circumstances. But on the scale of a fraction of a second, and drawing on all the other resources available to an AV, only the brain can do that.

EchoDrive is currently being tested by Echodyne’s partner companies, which it would not name but which Frankenberg indicated are running level 4+ AVs on public roads. Given the growing number of companies that fit those once-narrow criteria, it would be irresponsible to speculate on their identities, but it’s hard to imagine an automaker not getting excited by the advantages Echodyne claims.



Artificial Intelligence

Apple buys edge-based AI startup for a reported $200M



Xnor.ai, spun off in 2017 from the nonprofit Allen Institute for AI (AI2), has been acquired by Apple for about $200 million. A source close to the company corroborated a report this morning from GeekWire to that effect.

Apple confirmed the report with its standard statement for this sort of quiet acquisition: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (I’ve asked for clarification just in case.)

Xnor began as a process for making machine learning algorithms highly efficient — so efficient that they could run on even the lowest tier of hardware out there, things like embedded electronics in security cameras that use only a modicum of power. Yet using Xnor’s algorithms, they could accomplish tasks like object recognition, which in other circumstances might require a powerful processor or a connection to the cloud.
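The efficiency trick behind the company’s name comes from the XNOR-Net line of research at AI2: binarize weights and activations to {-1, +1}, so the dot products at the heart of a neural network collapse into XNOR-and-popcount bit operations instead of floating-point multiply-accumulates. A toy sketch of the equivalence (illustrative only, not Xnor’s actual code):

```python
def binarize(vec):
    """Quantize real values to {-1, +1} by sign."""
    return [1 if x >= 0 else -1 for x in vec]

def float_dot(a, b):
    """Ordinary dot product: the expensive op being replaced."""
    return sum(x * y for x, y in zip(a, b))

def xnor_popcount_dot(a, b):
    """Dot product of two {-1, +1} vectors via sign matching.
    Encoding +1 as bit 1 and -1 as bit 0, XNOR of the packed words
    marks positions where signs agree; popcount counts them."""
    n = len(a)
    matches = sum(1 for x, y in zip(a, b) if x == y)  # popcount(xnor(a, b))
    return 2 * matches - n  # each match contributes +1, each mismatch -1

a = binarize([0.3, -1.2, 0.8, -0.1])
b = binarize([-0.5, -0.9, 0.4, 0.2])
same = xnor_popcount_dot(a, b) == float_dot(a, b)  # identical results
```

On real hardware the {-1, +1} vectors are packed into machine words, so one XNOR plus one popcount instruction replaces dozens of floating-point operations — which is what makes recognition workloads feasible on the low-power embedded chips described above.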

CEO Ali Farhadi and his founding team put the company together at AI2 and spun it out just before the organization formally launched its incubator program. It raised $2.7M in early 2017 and $12M in 2018, both rounds led by Seattle’s Madrona Venture Group, and has steadily grown its local operations and areas of business.

The $200M acquisition price is only approximate, the source indicated, but even if the final number were half that, it would be a big return for Madrona and the other investors.

The company will likely move to Apple’s Seattle offices; GeekWire, visiting the offices (in inclement weather, no less), reported that a move was clearly underway. AI2 confirmed that Farhadi is no longer working there, but he will retain his faculty position at the University of Washington.

An acquisition by Apple makes perfect sense when one considers how the company has been directing its efforts toward edge computing. With a chip dedicated to executing machine learning workflows in a variety of situations, Apple clearly intends for its devices to operate independently of the cloud for tasks such as facial recognition, natural language processing and augmented reality. It’s as much about performance as privacy.

Its camera software especially makes extensive use of machine learning algorithms for both capturing and processing images, a compute-heavy task that could potentially be made much lighter with the inclusion of Xnor’s economizing techniques. The future of photography is code, after all — so the more of it you can execute, and the less time and power it takes to do so, the better.

It could also indicate new forays into the smart home, toward which Apple has made some tentative steps with HomePod. But Xnor’s technology is highly adaptable, and as such it’s rather difficult to predict what it enables for a company as vast as Apple.

