

Amazon Personalize now available in EU (Frankfurt) Region




Amazon Personalize is a machine learning (ML) service that enables you to personalize your website, app, ads, emails, and more with private, custom ML models that you can create with no prior ML experience. We’re excited to announce the general availability of Amazon Personalize in the EU (Frankfurt) Region. You can use Amazon Personalize to create higher-quality recommendations that respond to the specific needs, preferences, and changing behavior of your users, improving engagement and conversion. For more information, see Amazon Personalize Is Now Generally Available.

To use Amazon Personalize, you provide the service with user interaction (event) data (such as page views, sign-ups, and purchases) from your applications, along with optional user demographic information (such as age and location) and a catalog of the items you want to recommend (such as articles, products, videos, or music). This data can be provided via Amazon S3 or sent as a stream of user events via a JavaScript tracker or a server-side integration. Amazon Personalize then automatically processes and examines the data, identifies what is meaningful, and trains and optimizes a personalization model customized for your data. You can then invoke the Amazon Personalize APIs from your business application to fetch personalized recommendations for your users.
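In code, fetching recommendations comes down to a single API call. The sketch below is a minimal illustration, not an official AWS sample: it wraps the personalize-runtime `get_recommendations` call behind a small function, and the client is passed in so the logic can be exercised without an AWS connection.

```python
def fetch_recommendations(client, campaign_arn, user_id, num_results=5):
    """Return recommended item IDs for a user from a Personalize campaign.

    `client` is any object exposing boto3's personalize-runtime
    get_recommendations call; injecting it keeps this sketch testable
    without live AWS credentials. The campaign ARN is supplied by you.
    """
    response = client.get_recommendations(
        campaignArn=campaign_arn,
        userId=user_id,
        numResults=num_results,
    )
    # The response carries an itemList of {"itemId": ..., "score": ...} entries.
    return [item["itemId"] for item in response["itemList"]]
```

In a real application you would pass `boto3.client("personalize-runtime")` as the client, along with the ARN of your deployed campaign.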

Learn how our customers are using Amazon Personalize to improve product and content recommendations and for targeted marketing communications.

For more information about all the Regions Amazon Personalize is available in, see the AWS Region Table. Get started with Amazon Personalize by visiting the Amazon Personalize console and Developer Guide.

About the Author

Vaibhav Sethi is the Product Manager for Amazon Personalize. He focuses on delivering products that make it easier to build machine learning solutions. In his spare time, he enjoys hiking and reading.



Executive Interview: Steve Bennett, Director Global Government Practice, SAS 




Steve Bennett of SAS seeks to use AI and analytics to help drive government decision-making, resulting in better outcomes for citizens.   

Using AI and analytics to optimize delivery of government service to citizens  

Steve Bennett is Director of the Global Government Practice at SAS, and is the former director of the US National Biosurveillance Integration Center (NBIC) in the Department of Homeland Security, where he worked for 12 years. The mission of the NBIC was to provide early warning and situational awareness of health threats to the nation. There he led a team of over 30 scientists, epidemiologists, public health experts, and analytics experts. With a PhD in computational biochemistry from Stanford University and an undergraduate degree in chemistry and biology from Caltech, Bennett has a strong passion for using analytics in government to help make better public decisions. He recently spent a few minutes with AI Trends Editor John P. Desmond to provide an update on his work.  

AI Trends: How does AI help you facilitate the role of analytics in the government?  

Steve Bennett, Director of Global Government Practice, SAS

Steve Bennett: Well, artificial intelligence is something we’ve been hearing a lot about everywhere, even in government, which can often be a bit slower to adopt or implement new technologies. Yet even in government, AI is a pretty big deal. We talk about analytics and government use of data to drive better government decision-making, better outcomes for citizens. That’s been true for a long time.   

A lot of government data exists in forms that are not easily analyzed using traditional statistical methods or traditional analytics. So AI presents the opportunity to get the sorts of insights from government data that may not be possible using other methods. Many folks in the community are excited about the promise of AI being able to help government unlock the value of government data for its missions.  

Are there any examples that exemplify the work? 

AI is well-suited to certain sorts of problems, like finding anomalies or things that stick out in data, needles in a haystack, if you will. AI can be very good at that. AI can be good at finding patterns in very complex datasets. It can be hard for a human to sift through that data on their own, to spot the things that might require action. AI can help detect those automatically.  

For example, we’ve been partnering with the US Food and Drug Administration to support efforts to keep the food supply safe in the United States. One of the challenges for the FDA as the supply chain has gotten increasingly global, is detecting contamination of food. The FDA often has to be reactive. They have to wait for something to happen or wait for something to get pretty far down the line before they can identify it and take action. We worked with FDA to help them implement AI and apply it to that process, so they can more effectively predict where they might see an increased likelihood of contamination in the supply chain and act proactively instead of reactively. So that’s an example of how AI can be used to help support safer food for Americans. 

In another example, AI is helping with predictive maintenance for government fleets and vehicles. We work quite closely with Lockheed Martin to support predictive maintenance with AI for some of the most advanced airframes in the world, like the C-130 [transport] and the F-35 [combat aircraft]. AI helps to identify problems in very complex machines before those problems cause catastrophic failure. The ability for a machine to tell you before it breaks is something AI can do.   

Another example was around unemployment. We have worked with several cities globally to help them figure out how to best put unemployed people back to work. That is something top of mind now as we see increased unemployment because of Covid. For one city in Europe, we have a goal of getting people back to work in 13 weeks or less. They compiled racial and demographic data on the unemployed, such as education, previous work experience, whether they have children, where they live—lots of data.  

They matched that to data about government programs, such as job training requested by specific employers, reskilling, and other programs. We built an AI system using machine learning to optimally match people, based on what we knew, to the mix of government programs that would get them back to work the fastest. We are using the technology to optimize the use of government benefits. The results were good at the outset. They did a pilot prior to the Covid outbreak and saw promising results.    

Another example is around juvenile justice. We worked with a particular US state to help them figure out the best way to combat recidivism among juvenile offenders. They had data on 19,000 cases over many years, all about young people who came into juvenile corrections, served their time there, got out and then came back. They wanted to know how they could lower the recidivism rate. We found we could use machine learning to look at aspects of each of these kids, and figure out which of them might benefit from certain special programs after they leave juvenile corrections, to get skills that reduce the likelihood we would see them back in the system again.  

To be clear, this was not profiling, putting a stigma or mark on these kids. It was trying to figure out how to match limited government programs to the kids who would best benefit from those.   

What are key AI technologies that are being employed in your work today? 

Much of what we talk about having a near-term impact falls into the family of what we call machine learning. Machine learning has this great property of being able to take a lot of training data and being able to learn which parts of that data are important for making predictions or identifying patterns. Based on what we learn from that training data, we can apply that to new data coming in.  
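As a toy illustration of that train-then-predict loop (plain Python, not SAS software), a one-nearest-neighbor classifier "learns" by storing labeled training examples and then labels new data by proximity to what it has seen:

```python
import math

def predict(training_data, point):
    """Label a new point with the label of its nearest training example.

    training_data: list of ((x, y), label) pairs; point: an (x, y) tuple.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # "Training" here is just remembering labeled examples; prediction
    # applies what was learned to the new data point.
    _, label = min(training_data, key=lambda example: dist(example[0], point))
    return label

# Tiny labeled training set: two clusters.
train = [((0, 0), "low"), ((1, 1), "low"), ((8, 8), "high"), ((9, 9), "high")]
print(predict(train, (2, 1)))   # lands near the "low" cluster
print(predict(train, (7, 8)))   # lands near the "high" cluster
```

Real systems use far richer models, but the shape is the same: patterns learned from training data are applied to new data coming in.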

A specialized form of machine learning is deep learning, which is good at automatically detecting things in video streams, such as a car or a person. We have worked in healthcare to help radiologists do a better job detecting cancer from health scans. Police and defense applications in many cases rely on real-time video. The ability to make sense of that video very quickly is greatly enhanced by machine learning and deep learning.  

Another area to mention is real-time interaction systems, such as AI chatbots. We’re seeing governments increasingly turning to chatbots to help them connect with citizens. If a benefits agency or a tax agency is able to build a system that can automatically interact with citizens, it makes government more responsive to citizens. It’s better than waiting on the phone on hold.   

How far along would you say the government sector is in its use of AI and how does it compare to two years ago? 

The government is certainly further along than it was two years ago. In the data we have looked at, 70% of government managers have expressed interest in using AI to enhance their mission. That signal is stronger than what we saw two years ago. But I would say that we don’t see a lot of enterprise-wide applications of AI in the government. Often AI is used for particular projects or specific applications within an agency to help fulfill its mission. So as AI continues to mature, we would expect it to have more of an enterprise-wide use for large scale agency missions.  

What would you say are the challenges using AI to deliver on analytics in government?  

We see a range of challenges in several categories. One is around data quality and execution. One of the first things an agency needs to figure out is whether they have a problem that is well-suited for AI. Would it show patterns or signals in the data? If so, would the project deliver value for the government?  

A big challenge is data quality. Machine learning needs a lot of examples, a lot of data, to work well. It’s a very data-hungry sort of technology. If you don’t have that data, or you don’t have access to it, then even if you’ve got a great problem that would otherwise be very well-suited for AI, you’re not going to be able to use it.  

Another problem that we see quite often in governments is that the data exists, but it’s not very well organized. It might exist on spreadsheets on a bunch of individual computers all over the agency. It’s not in a place where it can be all brought together and analyzed in an AI way. So the ability for the data to be brought to bear is really important.   

Another important challenge is cultural. Even if you have all of your data in the right place, and you have a problem very well-suited for AI, it could be that the agency just isn’t ready to make use of the recommendations coming from an AI system in its day-to-day mission. The people in the agency might not have a lot of trust in AI systems and what they can do. Or it might be an operational mission where there always needs to be a human in the loop. Either way, sometimes there are cultural limits on what an agency is ready to use. And we would advise not to bother with AI if you haven’t thought about whether you can actually use it for something when you’re done. That’s how you get a lot of science projects in government.  

We always advise people to think about what they will get at the end of the AI project, and make sure they are ready to drive the results into the decision-making process. Otherwise, we don’t want to waste time and government resources. You might do something different that you are comfortable using in your decision processes. That’s really important to us.  As an example of what not to do, when I worked in government, I made the mistake of spending two years building an outstanding analytics project, using high-performance modeling and simulation, working in Homeland Security. But we didn’t do a good job working on the cultural side, getting those key stakeholders and senior leaders ready to use it. And so we delivered a great technical solution, but we had a bunch of senior leaders that weren’t ready to use it. We learned the hard way that the cultural piece really does matter. 

We also have challenges around data privacy. Government, more than many industries, touches very sensitive data. And as I mentioned, these methods are very data-hungry, so we often need a lot of data. Government has to make doubly sure that it’s following its own privacy protection laws and regulations, and making sure that we are very careful with citizen data and following all the privacy laws in place in the US. And most countries have privacy regulations in place to protect personal data.  

The second component is a challenge around what government is trying to get the systems to do. AI in retail is used to make recommendations, based on what you have been looking at and what you have bought. An AI algorithm is running in the background. The shopper might not like the recommendation, but the negative consequences of that are pretty mild.   

But in government, you might be using AI or analytics to make decisions with bigger impacts—determining whether somebody gets a tax refund, or whether a benefits claim is approved or denied. The outcomes of these decisions have potentially serious impacts. The stakes are much higher when the algorithms get things wrong. Our advice to government is that for key decisions, there always should be that human-in-the-loop. We would never recommend that a system automatically drives some of these key decisions, particularly if they have potential adverse actions for citizens.   

Finally, the last challenge that comes to mind is the challenge of where the research is going. This is the idea of “could you” versus “should you.” Artificial intelligence unlocks a whole set of capabilities, such as facial recognition. Maybe in a Western society with liberal, democratic values, we might decide we shouldn’t use it, even though we could. Many cities in places like China are tracking people in real time using advanced facial recognition. In the US, that’s not in keeping with our values, so we choose not to do that.   

That means any government agency thinking about doing an AI project needs to think about values up front. You want to make sure that those values are explicitly encoded in how the AI project is set up. That way we don’t get results on the other end that are not in keeping with our values or where we want to go.  

You mentioned data bias. Are you doing anything in particular to try to protect against bias in the data? 

Good question. Bias is a real area of concern in any kind of AI or machine learning work. An AI system will perform in concert with the training data it was trained on. So developers need to be careful in the selection of training data, and the team needs systems in place to review the training data so that it’s not biased. We’ve all heard and read the stories in the news about a facial recognition company in China: it makes a great facial recognition system, but only trains it on Asian faces. And so guess what? It’s good at detecting Asian faces, but it’s terrible at detecting faces that are darker or lighter in color, or that have different facial features.  

We have heard many stories like that. You want to make sure you don’t have racial bias, gender bias, or any other kind of bias we want to avoid in the data training set. Encode those explicitly up front when you’re planning your project; that can go a long way towards helping to limit bias. But even if you’ve done that, you want to make sure you’re checking for bias in a system’s performance. We have many great technologies built into our machine learning tools to help you automatically look for those biases and detect if they are present. You also want to be checking for bias after the system has been deployed, to make sure if something pops up, you see it and can take care of it.  
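One simple form of that post-deployment check can be expressed in a few lines: score the system’s predictions separately for each group and flag large gaps. This is a toy sketch with made-up data, not the bias-detection tooling built into SAS products:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) triples.

    Returns the fraction of correct predictions per group, so gaps
    between groups are easy to spot.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Made-up evaluation records: (group, model prediction, true outcome).
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(results))  # a large gap between groups is a red flag
```

Running this kind of per-group audit both before deployment and periodically afterward is what catches the bias that "pops up" later.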

From your background in bioscience, how well would you say the federal government has done in responding to the COVID-19 virus? 

There really are two industries that bore the brunt, at least initially, of the COVID-19 spread: government and health care. In most places in the world, health care is part of government. So it has been a big public sector effort to try to deal with COVID. It’s been hit and miss, with many challenges. No other entity can marshal financial resources like the government, so getting economic support out to those that need it is really important. Analytics plays a role in that.  

So one of the things that we did in supporting government using what we’re good at—data and analytics in AI—was to look at how we could help use the data to do a better job responding to COVID. We did a lot of work on the simple side of taking what government data they had and putting it into a simple dashboard that displayed where resources were. That way they could quickly identify if they had to move a supply such as masks to a different location. We worked on a more complex AI system to optimize the use of intensive care beds for a government in Europe that wanted to plan use of its medical resources. 

Contact tracing, the ability to very quickly identify people that are exposed and then identify who they’ve been around so that we can isolate those people, is something that can be greatly supported and enhanced by analytics. And we’ve done a lot of work around how to take contact tracing that’s been used for centuries and make it fit for supporting COVID-19 work. The government can do a lot with its data, with analytics and with AI in the fight against COVID-19. 

Do you have any advice for young people, either in school now or early in their careers, for what they should study if they are interested in pursuing work in AI, and especially if they’re interested in working in the government? 

If you are interested in getting into AI, I would suggest two things to focus on. One would be the technical side. If you have a solid understanding of how to implement and use AI, and you’ve built experience doing it as part of your coursework or part of your research work in school, you are highly valuable to government. Many people know a little about AI; they may have taken some business courses on it. But if you have the technical chops to be able to implement it, and you have a passion for doing that inside of government, you will be highly valuable. There would not be a lot of people like you. 

Just as important as the AI side and the data science technical piece, I would highly advise students to work on storytelling. AI can be highly technical when you get into the details. If you’re going to talk to a government or agency leader or an elected official, you will lose them if you can’t quickly tie the value of artificial intelligence to their mission. We call them ‘unicorns’ at SAS: people who have high technical ability and a detailed understanding of how these models can help government, and who can also tell good stories and draw the line to the “so what?” How can a senior agency official in government use it? How is it helpful to them? 

To work on good presentation skills and practice them is just as important as the technical side. You will find yourself very influential and able to make a difference if you’ve got a good balance of those skills. That’s my view.  

I would also say, in terms of where you specialize technically, that the ability to work in SAS has recently been ranked as one of the most highly valued job skills. Those specific technical skills can be very marketable to you inside and outside of government. 

Learn more about Steve Bennett on the SAS Blog. 




Getting AI to Learn Like a Baby is Goal of Self-Supervised Learning 




Scientists are studying how to create AI systems that learn from self-supervision, akin to how babies learn from observing their environment. (Credit: Getty Images) 

By AI Trends Staff  

Scientists are working on creating better AI that learns through self-supervision, with the pinnacle being AI that could learn like a baby, based on observation of its environment and interaction with people.  

This would be an important advance because AI has limitations based on the volume of data required to train machine learning algorithms, and the brittleness of the algorithms when it comes to adjusting to changing circumstances. 

Yann LeCun, chief AI scientist at Facebook

“This is the single most important problem to solve in AI today,” stated Yann LeCun, chief AI scientist at Facebook, in an account in the Wall Street Journal. Some early success with self-supervised learning has been seen in the natural language processing used in mobile phones, smart speakers, and customer service bots.   

Training AI today is time-consuming and expensive. The promise of self-supervised learning is for AI to train itself without the need for external labels attached to the data. Dr. LeCun is now focused on applying self-supervised learning to computer vision, a more complex problem in which computers interpret images such as a person’s face.  

The next phase, which he thinks is possible in the next decade or two, is to create a machine that can “learn how the world works by watching video, listening to audio, and reading text,” he stated. 

More than one approach is being tried to help AI learn by itself. One is the neuro-symbolic approach, which combines deep learning with symbolic AI, which represents human knowledge explicitly as facts and rules. IBM is experimenting with this approach in its development of a bot that works alongside human engineers, reading computer logs to look for system failures, understand why a system crashed, and offer a remedy. This could increase the pace of scientific discovery, with its ability to spot patterns not otherwise evident, according to Dario Gil, director of IBM Research. “It would help us address huge problems, such as climate change and developing vaccines,” he stated. 

Child Psychologists Working with Computer Scientists on MESS  

DARPA is working with the University of California at Berkeley on a research project, Machine Common Sense, funding collaborations between child psychologists and computer scientists. The system is called MESS, for Model-Building, Exploratory, Social Learning System.   

Alison Gopnik, Professor of Psychology, University of California, Berkeley and the author of “The Philosophical Baby”

“Human babies are the best learners in the universe. How do they do it? And could we get an AI to do the same?,” queried Alison Gopnik, a professor of psychology at Berkeley and the author of “The Philosophical Baby” and “The Scientist in the Crib,” among other books, in a recent article she wrote for the Wall Street Journal.  

“Even with a lot of supervised data, AIs can’t make the same kinds of generalizations that human children can,” Gopnik said. “Their knowledge is much narrower and more limited, and they are easily fooled. Current AIs are like children with super-helicopter-tiger moms—programs that hover over the learner dictating whether it is right or wrong at every step. The helicoptered AI children can be very good at learning to do specific things well, but they fall apart when it comes to resilience and creativity. A small change in the learning problem means that they have to start all over again.” 

The scientists are also experimenting with AI that is motivated by curiosity, which leads to a more resilient learning style known as “active learning,” a frontier in AI research.  

The challenge of the DARPA Machine Common Sense program is to design an AI that understands the basic features of the world as well as an 18-month-old. “Some computer scientists are trying to build common sense models into the AIs, though this isn’t easy. But it is even harder to design an AI that can actually learn those models the way that children do,” Dr. Gopnik wrote. “Hybrid systems that combine models with machine learning are one of the most exciting developments at the cutting edge of current AI.” 

Training AI models on labeled datasets is likely to play a diminished role as self-supervised learning comes into wider use, LeCun said during a session at the virtual International Conference on Learning Representation (ICLR) 2020, which also included Turing Award winner and Canadian computer scientist Yoshua Bengio.  

The way that self-supervised learning algorithms generate labels from data by exposing relationships between the data’s parts is an advantage.   
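A minimal sketch of that idea, using next-word prediction as the pretext task: the raw text supplies both the inputs and the labels, with no human annotation required.

```python
def self_labeled_pairs(text, context_size=2):
    """Turn raw text into (context, target) training pairs.

    No human labeling is needed: each word's preceding words serve as
    the input, and the word itself serves as the label.
    """
    words = text.split()
    pairs = []
    for i in range(context_size, len(words)):
        pairs.append((tuple(words[i - context_size:i]), words[i]))
    return pairs

corpus = "the cat sat on the mat"
for context, target in self_labeled_pairs(corpus):
    print(context, "->", target)
```

Large language models are trained on essentially this relationship at vast scale; the point here is only that the supervision signal is extracted from the data itself.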

“Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It’s basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way,” stated LeCun, in an account from VentureBeat. “This is the type of [learning] that we don’t know how to reproduce with machines.” 

Bengio was optimistic about the potential for AI to gain from the field of neuroscience, in particular for its explorations of consciousness and conscious processing. Bengio predicted that new studies will clarify the way high-level semantic variables connect with how the brain processes information, including visual information. These variables that humans communicate using language could lead to an entirely new generation of deep learning models, he suggested. 

“There’s a lot of progress that could be achieved by bringing together things like grounded language learning, where we’re jointly trying to understand a model of the world and how high-level concepts are related to each other,” said Bengio. “Human conscious processing is exploiting assumptions about how the world might change, which can be conveniently implemented as a high-level representation.”  

Bengio Delivered NeurIPS 2019 Talk on System 2 Self-Supervised Models 

At the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019), Bengio spoke on this topic in a keynote speech entitled “From System 1 Deep Learning to System 2 Deep Learning,” with System 2 referring to self-supervised models.  

“We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge,” he said in an account in TechTalks.  

The intelligent systems should be able to generalize to different distributions in data, just as children learn to adapt as the environment changes around them. “We need systems that can handle those changes and do continual learning, lifelong learning, and so on,” Bengio stated. “This is a long-standing goal for machine learning, but we haven’t yet built a solution to this.”  

Read the source articles in the Wall Street Journal, Alison Gopnik’s essay for the Wall Street Journal, VentureBeat, and TechTalks. 




Support for Remote Workers Providing Extra Boost for Conversational AI




Since the coronavirus hit in mid-March and the number of remote workers skyrocketed, conversational AI is being employed in a support role. (Credit: Getty Images) 

By AI Trends Staff 

Conversational AI refers to the use of chatbots, messaging apps, and voice-based assistants to automate customer communications with a brand.   

Software that combines these features to carry on a human-like conversation might be called a “bot.” The term “chatbot” might refer to text-only bots. Amazon Alexa or Google Home virtual assistants use conversational AI; they learn about the customer and the customer learns about them. With deep learning underlying the interaction, the conversation experience should improve over time.  

The advantages of conversational AI in marketing include an instant response, which leads to higher conversion rates of queries to sales.  

Shane Barker, digital marketing consultant, cofounder of Attrock

The adoption of conversational AI is being fueled by the rise in use of messaging apps and voice-based assistants, according to an account from the site of Shane Barker, a digital marketing consultant and cofounder of Attrock, a digital marketing agency.  

The most popular messaging app, according to Statista, is WhatsApp, from a US startup now owned by Facebook, with over 1.6 billion users. It is followed by Facebook Messenger with 1.3 billion users; WeChat, developed by Tencent of China, with 1.1 billion users; QQ Mobile, also from Tencent, with 800 million users; Snapchat from Snap, Inc. of the US, with 314 million users; and Telegram from Telegram Messenger, founded in Russia in 2013, with 200 million users.  

“If you are not using conversational AI platforms yet, you should start now,” advised Barker. 

The conversations could be text-based or audio-based, and can be done on any messaging or voice-based communication platform. While conversational AI is the technology behind chatbots and voice-based assistants, it is not synonymous with either. You can use a messaging service, a website chatbot or a voice-based assistant, and use conversational AI to automate conversations on it, Barker advises. 

How Conversational AI Can Help Your Business 

Some conversational AI technologies are advanced enough to understand the context and personalize the conversations. User-friendly chatbots can generate leads and help drive sales. The first and most common use of conversational AI is to provide around-the-clock customer service. The bot can answer commonly-asked customer questions, resolve problems and point to solutions. The user company can build a customized database of information that can feed the conversational AI platform to make it more accurate.   
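A bare-bones version of that FAQ lookup can be sketched with Python’s standard library; the questions and answers below are invented placeholders, and a production bot would use far richer language understanding than fuzzy string matching:

```python
import difflib

# Invented placeholder knowledge base; a real deployment would feed the
# bot a company's own curated questions and answers.
FAQ = {
    "how do i reset my password": "Visit the account page and choose 'Reset password'.",
    "what are your support hours": "Our automated assistant is available 24/7.",
    "how do i cancel my subscription": "Go to Billing and select Cancel subscription.",
}

def answer(question, cutoff=0.5):
    """Return the closest FAQ answer, or hand off when nothing matches."""
    matches = difflib.get_close_matches(question.lower(), list(FAQ), n=1, cutoff=cutoff)
    if matches:
        return FAQ[matches[0]]
    return "Let me connect you with a human agent."

print(answer("How do I reset my password?"))
```

The hand-off branch matters as much as the match: routing unrecognized questions to a human is what keeps the bot from frustrating customers.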

A website chatbot can interact with users and direct them to the right pages, products, or services — basically leading them down the sales funnel. The bot can also drive conversions by cross-selling or up-selling products. The bot can be trained to suggest complementary or higher-value products. The platform can also deliver offers and promotions to customers.  

As far as lead generation is concerned, conversational AI-based chatbots can schedule appointments and collect email addresses during non-working hours. You can then pass that information on to your sales team, who can then nurture those leads.  

Among the conversational AI platforms recommended by Barker are:   

  • LivePerson from LivePerson of New York City, with an AI offering released in 2018 from the company founded in 1998; 
  • SAP Conversational AI from SAP, the German multinational software company; 
  • KAI from Kasisto of New York City, founded in 2013;  
  • MindMeld now from Cisco Systems, founded in 2011 and acquired in 2017; 
  • Mindsay from Mindsay, headquartered in Paris and founded in 2016.

iAdvize Taps Network of Freelance Experts for Customer Service  

Another player is iAdvize, founded in France in 2010, offering a chat tool focused on customer service. Today iAdvize is a leading conversational platform in Europe and is now expanding in the US. The company says the tool is currently being used by over 2,000 e-commerce websites worldwide including Samsung, Disney and Lowe’s. 

The platform uses AI to identify each customer’s needs and connects them to a mix of in-store associates, in-house agents, chatbots and on-demand product experts from ibbu. Founded by iAdvize in 2016, ibbu today uses over 20,000 knowledgeable product experts from around the world who chat with customers and are paid for the advice.   

The freelancers are vetted as experts in electronics, home improvement, sporting goods, hobbies, and other product segments. They get paid a percentage of the sales they generate. The company says ibbu experts have conducted over 1 million conversations with iAdvize’s e-commerce customers. 

Customers using iAdvize have seen an increase in online sales of 5% to 15%, according to the company. iAdvize was co-founded by Julien Hervouet, now the CEO. He stated in a press release on the announcement of ibbu in the UK in 2016, “We believe the future of marketing is conversational commerce, where brands use genuine fans to improve the customer’s experience of the brand.” 

How Adobe Used an AI Chatbot to Support 22,000 Remote Workers  

Cynthia Stoddard, Senior VP and CIO at Adobe

When the COVID-19 virus hit in March throughout the US, Adobe like many companies sent their workers home and shifted into remote work over a single weekend. “Not surprisingly, our existing processes and workflows weren’t equipped for this abrupt change,” stated Cynthia Stoddard, Senior VP and CIO at Adobe, in a written account published in VentureBeat. “Customers, employees, and partners — many also working at home — couldn’t wait days to receive answers to urgent questions.” 

The first step was to launch an organization-wide channel using Slack, a business communications platform from Slack Technologies, launched in 2013 in San Francisco. The 24×7 global IT help desk would support the channel, with the rest of IT available for rapid event escalation. 

The same questions and issues came up frequently. “We decided to optimize our support for frequently asked questions and issues,” Stoddard stated. They combined AI, machine learning and natural language processing to build a chatbot. Its answers could be as simple as directing employees to an existing knowledge base or FAQ, or walking them through steps to solve a problem. The team focused on the eight most frequently-reported topics, then continued to add capabilities based on what delivers the biggest benefits.  
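The core behavior described here, matching an employee's question against an existing knowledge base or FAQ, can be illustrated with a toy retrieval sketch. This is not Adobe's implementation (which the article says uses trained NLP models); it is a minimal stand-in using bag-of-words cosine similarity, and the FAQ entries are invented for illustration:

```python
import math
from collections import Counter

# Toy FAQ bot: answer a question by cosine similarity between
# bag-of-words vectors. A production system would use trained NLP
# models rather than raw word overlap; these entries are invented.
FAQ = {
    "how do i reset my password": "Visit the self-service portal and choose 'Reset password'.",
    "how do i connect to the vpn": "Install the VPN client and sign in with your corporate account.",
    "how do i request new hardware": "Open a hardware request ticket in the IT service catalog.",
}

def vectorize(text):
    """Lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, threshold=0.3):
    """Return the best-matching FAQ answer, or None if nothing is close
    enough -- the point where a real bot would escalate to the help desk."""
    q = vectorize(question)
    best, score = None, 0.0
    for known, reply in FAQ.items():
        s = cosine(q, vectorize(known))
        if s > score:
            best, score = reply, s
    return best if score >= threshold else None
```

The threshold is the interesting design choice: questions the bot cannot match confidently fall through to a human, mirroring the escalation path Adobe kept open via its help-desk channel.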

“The results have been remarkable,” she wrote. Since going live on April 14, the system has responded to more than 3,000 queries, and the team has seen improvement on some critical issues. Email, for example, is how more employees now seek IT support, so it was important to speed the turnaround time on those queries.  

“With the help of a deep learning and NLP based routing mechanism, 38% of email tickets are now automatically routed to the correct support queue within six minutes,” she stated. “The AI routing bot uses a neural network-based classification technique to sort email tickets into classes, or support queues. Based on the predicted classification, the ticket is automatically assigned to the correct support queue.” 

The AI chatbot has cut the average time required to dispatch and route email tickets from about 10 hours to less than 20 minutes. Continuous supervised training of the bot has helped Adobe achieve 97% accuracy, nearly on a par with a human expert. Call volumes for internal support have dropped by 35% as a result.  
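The routing mechanism Stoddard describes is, at its core, a text classifier whose predicted class is a support queue. The following is a toy sketch of that idea, not Adobe's system: it trains a single softmax layer on bag-of-words features with gradient descent (a minimal stand-in for their neural network), and the tickets and queue names are invented:

```python
import math

# Toy ticket router: bag-of-words features into a single softmax
# layer trained by gradient descent. Tickets and queues are invented;
# Adobe's actual classifier is a larger neural network.
TICKETS = [
    ("cannot connect to vpn from home", "network"),
    ("vpn keeps dropping my connection", "network"),
    ("wifi is slow in the office", "network"),
    ("forgot my password please reset", "accounts"),
    ("locked out of my account after reset", "accounts"),
    ("need access to the shared account", "accounts"),
    ("laptop screen is flickering", "hardware"),
    ("keyboard stopped working on my laptop", "hardware"),
    ("need a replacement charger for laptop", "hardware"),
]

QUEUES = sorted({q for _, q in TICKETS})
VOCAB = sorted({w for t, _ in TICKETS for w in t.split()})

def features(text):
    """Word-count vector over the training vocabulary; unseen words are ignored."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# One weight vector per queue, with a trailing bias term.
W = [[0.0] * (len(VOCAB) + 1) for _ in QUEUES]

def predict_probs(x):
    return softmax([sum(wi * xi for wi, xi in zip(w, x + [1.0])) for w in W])

def train(epochs=200, lr=0.5):
    """Plain gradient descent on the cross-entropy loss."""
    data = [(features(t), QUEUES.index(q)) for t, q in TICKETS]
    for _ in range(epochs):
        for x, y in data:
            probs = predict_probs(x)
            for k in range(len(QUEUES)):
                grad = probs[k] - (1.0 if k == y else 0.0)
                for j, xj in enumerate(x + [1.0]):
                    W[k][j] -= lr * grad * xj

def route(ticket):
    """Assign a ticket to the queue with the highest predicted probability."""
    probs = predict_probs(features(ticket))
    return QUEUES[probs.index(max(probs))]

train()
# route("vpn connection is dropping")  -> should land in the "network" queue
```

Retraining every two weeks, as the article describes, would amount to appending newly resolved tickets to `TICKETS` and calling `train()` again; the supervision signal is exactly which queue ultimately resolved each ticket.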

The neural network model is retrained every two weeks by adding new data from resolved tickets to the training set. The team leveraged earlier work done on a company chatbot for finance. Adobe continues to look at robotic process automation, exploring business improvements through the combination of autonomous software robots and AI.   

Keeping employees in the loop about the AI and chatbot technology being employed is critical. “When introducing a new/unknown technology tool, it’s critical to keep employee experience at the core of the training and integration process – to ensure they feel comfortable and confident with the change,” Stoddard wrote. 

Read the source articles from the sites of Shane Barker and Statista, from the website of iAdvize, and in VentureBeat. 

