By Lance Eliot, the AI Trends Insider
Green AI is arising.
Recent news about the benefits of Machine Learning (ML) and Deep Learning (DL) has taken a slightly downbeat turn toward pointing out that there is a potential ecological cost associated with these systems. In particular, AI developers and AI researchers need to be mindful of the adverse and damaging carbon footprint that they are generating while crafting ML/DL capabilities.
It is a so-called “green” or environmental wake-up call for AI that is worth hearing.
Let’s first review the nature of carbon footprints (CFPs), which are already quite familiar to all of us, such as those of the carbon-belching transportation industry.
A carbon footprint is usually expressed as the amount of carbon dioxide emissions spewed forth, including for example when you fly in a commercial plane from Los Angeles to New York, or when you drive your gasoline-powered car from Silicon Valley to Silicon Beach.
Carbon accounting is used to figure out how much a machine or system produces in terms of its carbon footprint when being utilized and can be calculated for planes, cars, washing machines, refrigerators, and just about anything that emits carbon fumes.
We all seem to now know that our cars emit various greenhouse gases, including the dreaded carbon dioxide vapors that have numerous adverse environmental impacts. Some are quick to point out that hybrid cars, which use both gasoline and electrical power, tend to have a lower carbon footprint than conventional cars, while Electric Vehicles (EVs) produce essentially zero carbon emissions at the tailpipe.
Calculating Carbon Footprints For A Car
When ascertaining the carbon footprint of a machine or device, it is easy to fall into the mental trap of only considering the emissions that occur when the apparatus is in use. A gasoline car might emit 200 grams of carbon dioxide per kilometer traveled, a hybrid-electric might produce about half that at 92 grams, and an EV presumably produces 0 grams, according to estimates from the EPA and the Department of Energy.
See this U.S. government website for detailed estimates about carbon emissions of cars: https://www.fueleconomy.gov/feg/info.shtml#guzzler
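To make the direct-use portion concrete, here is a small illustrative sketch that turns the per-kilometer figures cited above into lifetime tailpipe totals; the lifetime distance is an assumed placeholder for illustration, not an official EPA figure.

```python
# Illustrative sketch: lifetime tailpipe CO2 using the per-kilometer
# figures cited above (not an official EPA calculation).
G_PER_KM = {"gasoline": 200, "hybrid": 92, "ev": 0}  # grams CO2 per km
LIFETIME_KM = 240_000  # assumed lifetime distance; adjust as you see fit

def lifetime_tailpipe_kg(car_type: str, km: int = LIFETIME_KM) -> float:
    """Tailpipe-only CO2 over the car's lifetime, in kilograms."""
    return G_PER_KM[car_type] * km / 1000.0

for car in G_PER_KM:
    print(f"{car:9s}: {lifetime_tailpipe_kg(car):,.0f} kg CO2 at the tailpipe")
```

Note that this captures only the direct-use portion of the footprint, which is exactly the mental trap described above.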
Though the direct carbon footprint does indeed involve what happens while a machine or device is being utilized, there is also the indirect carbon footprint that deserves our equal attention, involving both upstream and downstream elements that contribute to a fuller picture of the true carbon footprint involved. For example, a conventional gasoline-powered car might generate perhaps 28 percent of its total lifetime carbon dioxide emissions during the process of being manufactured and shipped for sale.
At first, you might naturally think of it like this:
- Total CFP of a car = CFP while burning gasoline
But it should be more like this:
- Total CFP of a car = CFP when the car is made + CFP while burning gasoline
Let’s define “CFP Made” as a factor representing the carbon footprint of manufacturing and shipping a car, and another factor we’ll call “CFP FuelUse” that represents the carbon footprint while the car is operating.
For the full lifecycle of a car, we need to add more factors into the equation.
There is a carbon footprint when the gasoline itself is being generated, which I’ll call “CFP FuelGen,” and thus we should include not just the CFP when the fuel is consumed but also when the fuel was originally processed or generated. Furthermore, once a car has seen its day and will be put aside and no longer used, there is a carbon footprint associated with disposing of or scrapping the car (“CFP Disposal”).
This also brings up a facet about EVs. Touting EVs as having zero CFP at the tailpipe is somewhat misleading when considering the total lifecycle CFP, since you should also include the carbon footprint required to generate the electrical power that gets charged into the EV and is then consumed while the EV is driving around. We’ll assign that amount to the CFP FuelGen factor.
The expanded formula is:
- Total CFP of a car = CFP Made + CFP FuelUse + CFP FuelGen + CFP Disposal
Let’s rearrange the factors to group together the one-time carbon footprint amounts, which would be the CFP Made and CFP Disposal, and group together the ongoing usage carbon footprint amounts, which would be the CFP FuelUse and CFP FuelGen. This makes sense since the fuel used and the fuel generated factors are going to vary depending upon how much a particular car is being driven. Presumably, a low-mileage car that mainly sits in your garage would accumulate a smaller lifetime grand total for those consumption factors than would a car that’s being driven all the time and racking up tons of miles.
The rearranged overall formula is:
- Total CFP of a car = (CFP Made + CFP Disposal) + (CFP FuelUse + CFP FuelGen)
Next, I’d like to add a twist that very few are considering when it comes to the emergence of self-driving autonomous cars, namely the carbon footprint associated with the AI Machine Learning for driverless cars.
Let’s call that amount “CFP ML” and add it to the equation.
- Total CFP of a car = (CFP Made + CFP Disposal) + (CFP FuelUse + CFP FuelGen) + CFP ML
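As a minimal sketch, the expanded formula can be expressed as a simple function; the factor values below are hypothetical placeholders in pounds of CO2, chosen only to illustrate the arithmetic.

```python
# Minimal sketch of the expanded formula. All factor values passed in are
# hypothetical placeholders, in pounds of CO2 over the car's lifetime.
def total_cfp(cfp_made, cfp_disposal, cfp_fuel_use, cfp_fuel_gen, cfp_ml=0.0):
    """One-time factors plus ongoing usage factors plus the new ML factor."""
    one_time = cfp_made + cfp_disposal
    ongoing = cfp_fuel_use + cfp_fuel_gen
    return one_time + ongoing + cfp_ml

# A conventional car has no ML capability, so cfp_ml defaults to zero.
conventional = total_cfp(cfp_made=35_000, cfp_disposal=3_000,
                         cfp_fuel_use=80_000, cfp_fuel_gen=8_000)
print(conventional)  # → 126000.0
```

A driverless car would pass a nonzero `cfp_ml`, which is the new factor explored in the remainder of this piece.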
You might be puzzled as to what this new factor consists of and why it is being included. Allow me to elaborate.
AI Machine Learning As A Carbon Footprint
In a recent study done at the University of Massachusetts, researchers examined several AI Machine Learning or Deep Learning systems that are being used for Natural Language Processing (NLP) and tried to estimate how much of a carbon footprint was expended in developing those NLP systems (see the study at this link here: https://arxiv.org/pdf/1906.02243.pdf).
You likely already know something about NLP if you’ve ever had a dialogue with Alexa or Siri. Those popular voice-interactive systems are trained via a large-scale or deep Artificial Neural Network (ANN), a kind of computer-based model that simplistically mimics brain-like neurons and neural networks. ANNs are a vital area of AI for building systems that can “learn” based on the datasets provided to them.
Those of you versed in computers might be perplexed that the development of an AI Machine Learning system would somehow produce CFP since it is merely software running on computer hardware, and it is not a plane or a car.
Well, consider that electrical energy is used to power the computer hardware that runs the software producing the ML model. You could then assert that the crafting of the AI Machine Learning system has caused some amount of CFP, via however the electricity powering the ML training operation was itself generated.
According to the calculations done by the researchers, a somewhat minor or modest NLP ML model produced an estimated 78,468 pounds of carbon dioxide emissions during its training, while a larger NLP ML model produced an estimated 626,155 pounds during training. As a basis for comparison, they report that an average car over its lifetime might produce 126,000 pounds of carbon dioxide emissions.
A key means of calculating the carbon dioxide produced was based on the EPA’s formula, in which total electrical power consumed is multiplied by a factor of 0.954 to arrive at the average CFP in pounds per kilowatt-hour, based on assumptions about power generation plants in the United States.
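That conversion can be sketched in a few lines; the 0.954 constant is the figure cited above, while working backward from the study’s larger figure to an implied electricity total is my own illustrative assumption.

```python
# Sketch of the EPA-style conversion described above:
# pounds of CO2 = kilowatt-hours consumed x 0.954 (average U.S. grid mix).
LBS_CO2_PER_KWH = 0.954

def training_cfp_lbs(kwh_consumed: float) -> float:
    """Estimated pounds of CO2 for the electricity an ML training run uses."""
    return kwh_consumed * LBS_CO2_PER_KWH

# Working backward (illustratively): the electricity use implied by the
# larger NLP model's 626,155-pound training run.
implied_kwh = 626_155 / LBS_CO2_PER_KWH
print(f"{implied_kwh:,.0f} kWh")  # roughly 656,000 kWh
```

The actual grid mix varies by region, so a training run in a coal-heavy region would imply a higher factor than 0.954, and one powered by renewables a lower one.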
Significance Of The CFP For Machine Learning
Why should you care about the CFP of the AI Machine Learning for an autonomous car?
Presumably, conventional cars don’t have to include the CFP ML factor since a conventional car does not encompass such a capability, therefore the factor would have a value of zero in the case of a conventional car. Meanwhile, for a driverless car, the CFP ML would have some determinable value and would need to be added into the total CFP calculation for driverless cars.
Essentially, it burdens the carbon footprint of a driverless car and tends to heighten the CFP in comparison to a conventional car.
For those of you that might react instantly to this aspect, I don’t think this means that the sky is falling or that we should somehow put the brakes on developing autonomous cars. You ought to consider these salient topics:
- If the AI ML is being deployed across a fleet of driverless cars, perhaps in the hundreds, thousands, or eventually millions of autonomous cars, and if the AI ML is the same instance for each of those driverless cars, the amount of CFP for the AI ML production is divided across all of those driverless cars, and is therefore likely a relatively small fractional addition of CFP on a per-driverless-car basis.
- Autonomous cars are more than likely to be EVs, partially due to the handy aspect that an EV is adept at storing electrical power, of which the driverless car sensors and computer processors slurp up and need profusely. Thus, the platform for the autonomous car is already going to be significantly cutting down on CFP due to using an EV.
- Ongoing algorithmic improvements in producing AI ML are bound to make it more efficient to create such models, and therefore either decrease the amount of time required to produce the models (accordingly likely reducing the electrical power consumed) or better use the electrical power via faster processing by the hardware or software.
- For semi-autonomous cars, you can expect that we’ll see AI ML being used there too, in addition to the fully autonomous cars, and therefore the reality will be that the CFP of the AI ML will apply to eventually all cars since conventional cars will gradually be usurped by semi-autonomous and fully autonomous cars.
- Some might argue that the CFP of the AI ML ought to be tossed into the CFP Made bucket, meaning that it is just another CFP component within the effort to manufacture the autonomous car. And, if so, based on preliminary analyses, it would seem like the CFP AI ML is rather inconsequential in comparison to the rest of the CFP for making and shipping a car.
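The first bullet point above, about spreading the training CFP across a fleet, is easy to quantify as a sketch; the training figure is the study’s larger estimate, while the fleet sizes are hypothetical.

```python
# Amortizing a one-time ML training CFP across a fleet that shares the same
# model instance. Training figure per the UMass study; fleet sizes hypothetical.
TRAINING_CFP_LBS = 626_155  # larger NLP model from the study cited earlier

for fleet_size in (100, 10_000, 1_000_000):
    per_car = TRAINING_CFP_LBS / fleet_size
    print(f"fleet of {fleet_size:>9,}: {per_car:>12,.2f} lbs of CFP per car")
```

At a fleet of one million cars, the per-car share is well under a pound, which is why the fractional addition is arguably small.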
For those of you interested in trying out an experimental impact tracker in your AI ML developments, there are various tools coming available, including for example this one posted at GitHub that was developed jointly by Stanford University, Facebook AI Research, and McGill University: https://github.com/Breakend/experiment-impact-tracker.
As they say, your mileage may vary in terms of using any of these emerging tracking tools and you should proceed mindfully and with appropriate due diligence for applicability and soundness.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
There’s an additional consideration for the CFP of AI ML.
You could claim that there is a CFP AI ML for the originating of the Machine Learning model that will be driving the autonomous car, and then there is the ongoing updating and upgrading involved too.
Therefore, the CFP AI ML is more than just a one-time CFP; it is also part of the ongoing grouping.
Let’s split it across the two groupings:
- Total CFP of a car = (CFP Made + CFP Disposal + CFP ML1) + (CFP FuelUse + CFP FuelGen + CFP ML2)
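As a rough sketch of that split, CFP ML1 can be treated as the one-time initial training cost while CFP ML2 accrues with each ongoing update; all of the numbers below are hypothetical placeholders.

```python
# CFP ML split into a one-time portion (initial training) and an ongoing
# portion (updates pushed over the air). All numbers are hypothetical.
def cfp_ml_total(initial_training_lbs, lbs_per_update, num_updates):
    cfp_ml1 = initial_training_lbs          # one-time grouping
    cfp_ml2 = lbs_per_update * num_updates  # ongoing grouping
    return cfp_ml1 + cfp_ml2

# e.g., two years of monthly model updates after the initial training
print(cfp_ml_total(626_155, 5_000, 24))  # → 746155
```

The more frequently the automaker retrains and pushes updates, the more the ongoing CFP ML2 portion dominates the one-time CFP ML1.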
You can go even deeper and point out that some of the AI ML will be taking place in-the-cloud of the automaker or tech firm and then be pushed down into the driverless car (via Over-The-Air or OTA electronic communications), while some of the AI ML might be also occurring in the on-board systems of the autonomous car. In that case, there’s the CFP to be calculated for the cloud-based AI ML and then a different calculation to determine the CFP of the onboard AI ML.
There are some that point out that you can burden a lot of things in our society if you are going to consider the amount of electrical power that they use, and perhaps it is unfair to suddenly bring up the CFP of AI ML, doing so in isolation from the myriad of other ways in which CFP arises due to any kind of computer-based system.
In the case of autonomous cars, it is also pertinent to consider not just the “costs” side of things, which includes the carbon footprint factor, but also the benefits side of things.
Even if there is some attributable amount of CFP for driverless cars, it would be prudent to consider what kinds of benefits we’ll derive as a society and weigh those against the CFP aspects. By taking into account the hoped-for benefits, including the potential of human lives saved, the potential for mobility access for all, including the mobility marginalized, and other societal transformations, you get a much more robust picture.
In that sense, we need to figure out this equation:
- Societal ROI of autonomous cars = Societal benefits – Societal costs
We don’t yet know how it is going to pan out, but most are hoping that the societal benefits will readily outweigh the societal costs, and therefore the ROI for self-driving driverless autonomous cars will be hefty and leave us all nearly breathless.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
Executive Interview: Steve Bennett, Director Global Government Practice, SAS
Using AI and analytics to optimize delivery of government service to citizens
Steve Bennett is Director of the Global Government Practice at SAS, and is the former director of the US National Biosurveillance Integration Center (NBIC) in the Department of Homeland Security, where he worked for 12 years. The mission of the NBIC was to provide early warning and situational awareness of health threats to the nation. He led a team of over 30 scientists, epidemiologists, public health, and analytics experts. With a PhD in computational biochemistry from Stanford University, and an undergraduate degree in chemistry and biology from Caltech, Bennett has a strong passion for using analytics in government to help make better public decisions. He recently spent a few minutes with AI Trends Editor John P. Desmond to provide an update of his work.
AI Trends: How does AI help you facilitate the role of analytics in the government?
Steve Bennett: Well, artificial intelligence is something we’ve been hearing a lot about everywhere, even in government, which can often be a bit slower to adopt or implement new technologies. Yet even in government, AI is a pretty big deal. We talk about analytics and government use of data to drive better government decision-making, better outcomes for citizens. That’s been true for a long time.
A lot of government data exists in forms that are not easily analyzed using traditional statistical methods or traditional analytics. So AI presents the opportunity to get the sorts of insights from government data that may not be possible using other methods. Many folks in the community are excited about the promise of AI being able to help government unlock the value of government data for its missions.
Are there any examples you would say that exemplify the work?
AI is well-suited to certain sorts of problems, like finding anomalies or things that stick out in data, needles in a haystack, if you will. AI can be very good at that. AI can be good at finding patterns in very complex datasets. It can be hard for a human to sift through that data on their own, to spot the things that might require action. AI can help detect those automatically.
For example, we’ve been partnering with the US Food and Drug Administration to support efforts to keep the food supply safe in the United States. One of the challenges for the FDA, as the supply chain has gotten increasingly global, is detecting contamination of food. The FDA often has to be reactive. They have to wait for something to happen or wait for something to get pretty far down the line before they can identify it and take action. We worked with FDA to help them implement AI and apply it to that process, so they can more effectively predict where they might see an increased likelihood of contamination in the supply chain and act proactively instead of reactively. So that’s an example of how AI can be used to help support safer food for Americans.
In another example, AI is helping with predictive maintenance for government fleets and vehicles. We work quite closely with Lockheed Martin to support predictive maintenance with AI for some of the most advanced airframes in the world, like the C-130 [transport] and the F-35 [combat aircraft]. AI helps to identify problems in very complex machines before those problems cause catastrophic failure. The ability for a machine to tell you before it breaks is something AI can do.
Another example was around unemployment. We have worked with several cities globally to help them figure out how to best put unemployed people back to work. That is something top of mind now as we see increased unemployment because of Covid. For one city in Europe, we have a goal of getting people back to work in 13 weeks or less. They compiled racial and demographic data on the unemployed such as education, previous work experience, whether they have children, where they live—lots of data.
They matched that to data about government programs, such as job training requested by specific employers, reskilling, and other programs. We built an AI system using machine learning to optimally match people, based on what we knew, to the best mix of government programs that would get them back to work the fastest. We are using the technology to optimize the government benefits. The results were good at the outset: they did a pilot prior to the Covid outbreak and saw promising results.
Another example is around juvenile justice. We worked with a particular US state to help them figure out the best way to combat recidivism among juvenile offenders. They had data on 19,000 cases over many years, all about young people who came into juvenile corrections, served their time there, got out and then came back. They wanted to know how they could lower the recidivism rate. We found we could use machine learning to look at aspects of each of these kids, and figure out which of them might benefit from certain special programs after they leave juvenile corrections, to get skills that reduce the likelihood we would see them back in the system again.
To be clear, this was not profiling, putting a stigma or mark on these kids. It was trying to figure out how to match limited government programs to the kids who would best benefit from those.
What are key AI technologies that are being employed in your work today?
Much of what we talk about having a near-term impact falls into the family of what we call machine learning. Machine learning has this great property of being able to take a lot of training data and being able to learn which parts of that data are important for making predictions or identifying patterns. Based on what we learn from that training data, we can apply that to new data coming in.
A specialized form of machine learning is deep learning, which is good at automatically detecting things in video streams, such as a car or a person. That relies on deep learning. We have worked in healthcare to help radiologists do a better job detecting cancer from health scans. Police and defense applications in many cases rely on real time video. The ability to make sense of that video very quickly is greatly enhanced by machine learning and deep learning.
Another area to mention is real-time interaction systems, or AI chatbots. We’re seeing governments increasingly seeking to turn to chatbots to help them connect with citizens. If a benefits agency or a tax agency is able to build a system that can automatically interact with citizens, it makes government more responsive to citizens. It’s better than waiting on the phone on hold.
How far along would you say the government sector is in its use of AI and how does it compare to two years ago?
The government is certainly further along than it was two years ago. In the data we have looked at, 70% of government managers have expressed interest in using AI to enhance their mission. That signal is stronger than what we saw two years ago. But I would say that we don’t see a lot of enterprise-wide applications of AI in the government. Often AI is used for particular projects or specific applications within an agency to help fulfill its mission. So as AI continues to mature, we would expect it to have more of an enterprise-wide use for large scale agency missions.
What would you say are the challenges using AI to deliver on analytics in government?
We see a range of challenges in several categories. One is around data quality and execution. One of the first things an agency needs to figure out is whether they have a problem that is well-suited for AI. Would it show patterns or signals in the data? If so, would the project deliver value for the government?
A big challenge is data quality. For machine learning to work well requires a lot of examples of a lot of data. It’s a very data-hungry sort of technology. If you don’t have that data or you don’t have access to it, even if you’ve got a great problem that could normally be very well-suited for government, you’re not going to be able to use AI.
Another problem that we see quite often in governments is that the data exists, but it’s not very well organized. It might exist on spreadsheets on a bunch of individual computers all over the agency. It’s not in a place where it can be all brought together and analyzed in an AI way. So the ability for the data to be brought to bear is really important.
Another one that’s important: even if you have all of your data in the right place, and you have a problem very well-suited for AI, it could be that culturally, the agency just isn’t ready to make use of the recommendations coming from an AI system in its day-to-day mission. This might be called a cultural challenge. The people in the agency might not have a lot of trust in the AI systems and what they can do. Or it might be an operational mission where there always needs to be a human in the loop. Either way, sometimes culturally there might be limitations in what an agency is ready to use. And we would advise not to bother with AI if you haven’t thought about whether you can actually use it for something when you’re done. That’s how you get a lot of science projects in government.
We always advise people to think about what they will get at the end of the AI project, and make sure they are ready to drive the results into the decision-making process. Otherwise, we don’t want to waste time and government resources. You might do something different that you are comfortable using in your decision processes. That’s really important to us. As an example of what not to do, when I worked in government, I made the mistake of spending two years building an outstanding analytics project, using high-performance modeling and simulation, working in Homeland Security. But we didn’t do a good job working on the cultural side, getting those key stakeholders and senior leaders ready to use it. And so we delivered a great technical solution, but we had a bunch of senior leaders that weren’t ready to use it. We learned the hard way that the cultural piece really does matter.
We also have challenges around data privacy. Government, more than many industries, touches very sensitive data. And as I mentioned, these methods are very data-hungry, so we often need a lot of data. Government has to make doubly sure that it’s following its own privacy protection laws and regulations, and making sure that we are very careful with citizen data and following all the privacy laws in place in the US. And most countries have privacy regulations in place to protect personal data.
The second component is a challenge around what government is trying to get the systems to do. AI in retail is used to make recommendations, based on what you have been looking at and what you have bought. An AI algorithm is running in the background. The shopper might not like the recommendation, but the negative consequences of that are pretty mild.
But in government, you might be using AI or analytics to make decisions with bigger impacts—determining whether somebody gets a tax refund, or whether a benefits claim is approved or denied. The outcomes of these decisions have potentially serious impacts. The stakes are much higher when the algorithms get things wrong. Our advice to government is that for key decisions, there always should be that human-in-the-loop. We would never recommend that a system automatically drives some of these key decisions, particularly if they have potential adverse actions for citizens.
Finally, the last challenge that comes to mind is the challenge of where the research is going. This idea of “could you” versus “should you.” Artificial intelligence unlocks a whole set of areas that you can use such as facial recognition. Maybe in a Western society with liberal, democratic values, we might decide we shouldn’t use it, even though we could. Places like China in many cities are tracking people in real time using advanced facial recognition. In the US, that’s not in keeping with our values, so we choose not to do that.
That means any government agency thinking about doing an AI project needs to think about values up front. You want to make sure that those values are explicitly encoded in how the AI project is set up. That way we don’t get results on the other end that are not in keeping with our values or where we want to go.
You mentioned data bias. Are you doing anything in particular to try to protect against bias in the data?
Good question. Bias is a real area of concern in any kind of AI machine learning work. The AI machine learning system is going to perform in concert with the way it was trained on the training data. So developers need to be careful in the selection of training data, and the team needs systems in place to review the training data so that it’s not biased. We’ve all heard and read the stories in the news about the facial recognition company in China—they make this great facial recognition system, but they only train it on Asian faces. And so guess what? It’s good at detecting Asian faces, but it’s terrible at detecting faces that are darker in color or that are lighter in color, or that have different facial features.
We have heard many stories like that. You want to make sure you don’t have racial bias, gender bias, or any other kind of bias we want to avoid in the data training set. Encode those explicitly up front when you’re planning your project; that can go a long way towards helping to limit bias. But even if you’ve done that, you want to make sure you’re checking for bias in a system’s performance. We have many great technologies built into our machine learning tools to help you automatically look for those biases and detect if they are present. You also want to be checking for bias after the system has been deployed, to make sure if something pops up, you see it and can take care of it.
From your background in bioscience, how well would you say the federal government has done in responding to the COVID-19 virus?
There really are two industries that bore the brunt, at least initially, from the COVID-19 spread: government and health care. In most places in the world, health care is part of government. So it has been a big public sector effort to try to deal with COVID. It’s been hit and miss, with many challenges. No other entity can marshal financial resources like the government, so getting economic support out to those that need it is really important. Analytics plays a role in that.
So one of the things that we did in supporting government using what we’re good at—data and analytics in AI—was to look at how we could help use the data to do a better job responding to COVID. We did a lot of work on the simple side of taking what government data they had and putting it into a simple dashboard that displayed where resources were. That way they could quickly identify if they had to move a supply such as masks to a different location. We worked on a more complex AI system to optimize the use of intensive care beds for a government in Europe that wanted to plan use of its medical resources.
Contact tracing, the ability to very quickly identify people that are exposed and then identify who they’ve been around so that we can isolate those people, is something that can be greatly supported and enhanced by analytics. And we’ve done a lot of work around how to take contact tracing that’s been used for centuries and make it fit for supporting COVID-19 work. The government can do a lot with its data, with analytics and with AI in the fight against COVID-19.
Do you have any advice for young people, either in school now or early in their careers, for what they should study if they are interested in pursuing work in AI, and especially if they’re interested in working in the government?
If you are interested in getting into AI, I would suggest two things to focus on. One would be the technical side. If you have a solid understanding of how to implement and use AI, and you’ve built experience doing it as part of your coursework or part of your research work in school, you are highly valuable to government. Many people know a little about AI; they may have taken some business courses on it. But if you have the technical chops to be able to implement it, and you have a passion for doing that inside of government, you will be highly valuable. There would not be a lot of people like you.
Just as important as the AI side and the data science technical piece, I would highly advise students to work on storytelling. AI can be highly technical when you get into the details. If you’re going to talk to a government or agency leader or an elected official, you will lose them if you can’t quickly tie the value of artificial intelligence to their mission. We call them ‘unicorns’ in SAS, people that have high technical ability and a detailed understanding of how these models can help government, and they have the ability to tell good stories and draw that line to the “so what?” How can a senior agency official in government, how can they use it? How is it helpful to them?
To work on good presentation skills and practice them is just as important as the technical side. You will find yourself very influential and able to make a difference if you’ve got a good balance of those skills. That’s my view.
I would also say, in terms of where you specialize technically, being able to converse in SAS has recently been ranked as one of the most highly valued job skills. Those specific technical skills can be very, very marketable to you inside and outside of government.
Learn more about Steve Bennett on the SAS Blog.
Getting AI to Learn Like a Baby is Goal of Self-Supervised Learning
By AI Trends Staff
Scientists are working on creating better AI that learns through self-supervision, with the pinnacle being AI that could learn like a baby, based on observation of its environment and interaction with people.
This would be an important advance because AI has limitations based on the volume of data required to train machine learning algorithms, and the brittleness of the algorithms when it comes to adjusting to changing circumstances.
“This is the single most important problem to solve in AI today,” stated Yann LeCun, chief AI scientist at Facebook, in an account in the Wall Street Journal. Some early success with self-supervised learning has been seen in the natural language processing used in mobile phones, smart speakers, and customer service bots.
Training AI today is time-consuming and expensive. The promise of self-supervised learning is for AI to train itself without the need for external labels attached to the data. Dr. LeCun is now focused on applying self-supervised learning to computer vision, a more complex problem in which computers interpret images such as a person’s face.
The next phase, which he thinks is possible in the next decade or two, is to create a machine that can “learn how the world works by watching video, listening to audio, and reading text,” he stated.
More than one approach is being tried to help AI learn by itself. One is the neuro-symbolic approach, which combines deep learning with symbolic AI, which represents human knowledge explicitly as facts and rules. IBM is experimenting with this approach in its development of a bot that works alongside human engineers, reading computer logs to look for system failures, understand why a system crashed, and offer a remedy. This could increase the pace of scientific discovery, with its ability to spot patterns not otherwise evident, according to Dario Gil, director of IBM Research. “It would help us address huge problems, such as climate change and developing vaccines,” he stated.
Child Psychologists Working with Computer Scientists on MESS
DARPA is working with the University of California at Berkeley on a research project, Machine Common Sense, funding collaborations between child psychologists and computer scientists. The system is called MESS, for Model-Building, Exploratory, Social Learning System.
“Human babies are the best learners in the universe. How do they do it? And could we get an AI to do the same?,” queried Alison Gopnik, a professor of psychology at Berkeley and the author of “The Philosophical Baby” and “The Scientist in the Crib,” among other books, in a recent article she wrote for the Wall Street Journal.
“Even with a lot of supervised data, AIs can’t make the same kinds of generalizations that human children can,” Gopnik said. “Their knowledge is much narrower and more limited, and they are easily fooled. Current AIs are like children with super-helicopter-tiger moms—programs that hover over the learner dictating whether it is right or wrong at every step. The helicoptered AI children can be very good at learning to do specific things well, but they fall apart when it comes to resilience and creativity. A small change in the learning problem means that they have to start all over again.”
The scientists are also experimenting with AI that is motivated by curiosity, which leads to a more resilient learning style. This approach, called “active learning,” is a frontier in AI research.
The challenge of the DARPA Machine Common Sense program is to design an AI that understands the basic features of the world as well as an 18-month-old. “Some computer scientists are trying to build common sense models into the AIs, though this isn’t easy. But it is even harder to design an AI that can actually learn those models the way that children do,” Dr. Gopnik wrote. “Hybrid systems that combine models with machine learning are one of the most exciting developments at the cutting edge of current AI.”
Training AI models on labeled datasets is likely to play a diminished role as self-supervised learning comes into wider use, LeCun said during a session at the virtual International Conference on Learning Representation (ICLR) 2020, which also included Turing Award winner and Canadian computer scientist Yoshua Bengio.
The way that self-supervised learning algorithms generate labels from data by exposing relationships between the data’s parts is an advantage.
“Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It’s basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way,” stated LeCun, in an account from VentureBeat. “This is the type of [learning] that we don’t know how to reproduce with machines.”
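The idea of deriving training labels from the data itself, rather than from human annotators, can be sketched in a few lines. The example below is an illustrative simplification of masked-token prediction; real systems such as large language models apply this scheme to huge corpora and train a neural network to predict the hidden tokens.

```python
# Minimal sketch of self-supervised label generation: each unlabeled
# sentence yields many (input, target) training pairs by hiding one
# word at a time -- the hidden word itself becomes the label.

def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (masked_input, target_word) pairs."""
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words.copy()
        masked[i] = mask_token
        examples.append((" ".join(masked), word))
    return examples

pairs = make_masked_examples("the cat sat on the mat")
for masked_input, target in pairs[:2]:
    print(masked_input, "->", target)
```

No human ever labels anything here: the supervision signal comes entirely from relationships between the data’s parts, which is the advantage LeCun describes.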
Bengio was optimistic about the potential for AI to gain from the field of neuroscience, in particular for its explorations of consciousness and conscious processing. Bengio predicted that new studies will clarify the way high-level semantic variables connect with how the brain processes information, including visual information. These variables that humans communicate using language could lead to an entirely new generation of deep learning models, he suggested.
“There’s a lot of progress that could be achieved by bringing together things like grounded language learning, where we’re jointly trying to understand a model of the world and how high-level concepts are related to each other,” said Bengio. “Human conscious processing is exploiting assumptions about how the world might change, which can be conveniently implemented as a high-level representation.”
Bengio Delivered NeurIPS 2019 Talk on System 2 Self-Supervised Models
At the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019), Bengio spoke on this topic in a keynote speech entitled, “From System 1 Deep Learning to System 2 Deep Learning,” with System 2 referring to self-supervised models.
“We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge,” he said in an account in TechTalks.
The intelligent systems should be able to generalize to different distributions in data, just as children learn to adapt as the environment changes around them. “We need systems that can handle those changes and do continual learning, lifelong learning, and so on,” Bengio stated. “This is a long-standing goal for machine learning, but we haven’t yet built a solution to this.”
Support for Remote Workers Providing Extra Boost for Conversational AI
By AI Trends Staff
Conversational AI refers to the use of chatbots, messaging apps, and voice-based assistants to automate customer communications with a brand.
Software that combines these features to carry on a human-like conversation might be called a “bot.” The term “chatbot” might refer to text-only bots. Amazon Alexa or Google Home virtual assistants use conversational AI; they learn about the customer and the customer learns about them. With deep learning underlying the interaction, the conversation experience should improve over time.
The advantages of conversational AI in marketing include an instant response, which leads to higher conversion rates of queries to sales.
The adoption of conversational AI is being fueled by the rise in use of messaging apps and voice-based assistants, according to an account from the site of Shane Barker, a digital marketing consultant and cofounder of Attrock, a digital marketing agency.
The most popular messaging app, according to Statista, is WhatsApp, from a US startup now owned by Facebook, with over 1.6 billion users. That is followed by: Facebook Messenger with 1.3 billion users; WeChat, developed by Tencent of China, with 1.1 billion users; QQ Mobile, also from Tencent, with 800 million users; Snapchat from Snap, Inc. of the US, with 314 million users; and Telegram from Telegram Messenger, founded in Russia in 2013, with 200 million users.
“If you are not using conversational AI platforms yet, you should start now,” advised Barker.
The conversations could be text-based or audio-based, and can be done on any messaging or voice-based communication platform. While conversational AI is the technology behind chatbots and voice-based assistants, it is not synonymous with either. You can use a messaging service, a website chatbot or a voice-based assistant, and use conversational AI to automate conversations on it, Barker advises.
How Conversational AI Can Help Your Business
Some conversational AI technologies are advanced enough to understand the context and personalize the conversations. User-friendly chatbots can generate leads and help drive sales. The first and most common use of conversational AI is to provide around-the-clock customer service. The bot can answer commonly-asked customer questions, resolve problems and point to solutions. The user company can build a customized database of information that can feed the conversational AI platform to make it more accurate.
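A customized FAQ database feeding a bot, as described above, can be sketched very simply. The snippet below is a hypothetical illustration, not any vendor's product; production systems use NLP models rather than raw word overlap, but the retrieval idea is the same.

```python
# Illustrative FAQ bot: match an incoming question against a small
# custom knowledge base and return the best-matching stored answer.

FAQ = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

def answer(question):
    """Return the answer whose stored question best overlaps the query."""
    q_tokens = set(question.lower().replace("?", "").split())
    best, best_score = None, 0
    for stored, reply in FAQ.items():
        score = len(q_tokens & set(stored.split()))
        if score > best_score:
            best, best_score = reply, score
    return best or "Let me connect you with a human agent."

print(answer("How do I reset my password?"))
```

Growing the `FAQ` dictionary with a company's own support data is the "customized database" step; the fallback line is where a real deployment would escalate to a human agent.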
A website chatbot can interact with users and direct them to the right pages, products, or services — basically leading them down the sales funnel. The bot can also drive conversions by cross-selling or up-selling products. The bot can be trained to suggest complementary or higher-value products. The platform can also deliver offers and promotions to customers.
As far as lead generation is concerned, conversational AI-based chatbots can schedule appointments and collect email addresses during non-working hours. You can then pass that information on to your sales team, who can then nurture those leads.
Among the conversational AI platforms recommended by Barker are:
- LivePerson from LivePerson of New York City, with an AI offering released in 2018 from the company founded in 1998;
- SAP Conversational AI from SAP, the German multinational software company;
- KAI from Kasisto of New York City, founded in 2013;
- MindMeld now from Cisco Systems, founded in 2011 and acquired in 2017;
- Mindsay from Mindsay, headquartered in Paris; founded in 2016.
iAdvize Taps Network of Freelance Experts for Customer Service
Another player is iAdvize, founded in France in 2010, offering a chat tool focused on customer service. iAdvize is a leading conversational platform in Europe and is now expanding in the US. The company says the tool is currently being used by over 2,000 e-commerce websites worldwide, including Samsung, Disney, and Lowe’s.
The platform uses AI to identify each customer’s needs and connects them to a mix of in-store associates, in-house agents, chatbots and on-demand product experts from ibbu. Founded by iAdvize in 2016, ibbu today uses over 20,000 knowledgeable product experts from around the world who chat with customers and are paid for the advice.
The freelancers are vetted to be experts in electronics, home improvement, sporting goods, hobbies, and other product segments. They get paid a percentage of the sales they generate. The company says ibbu experts have conducted over 1 million conversations with iAdvize’s e-commerce customers.
Customers using iAdvize have seen an increase in online sales of 5% to 15%, according to the company. iAdvize was co-founded by Julien Hervouet, now the CEO. He stated in a press release on the announcement of ibbu in the UK in 2016, “We believe the future of marketing is conversational commerce, where brands use genuine fans to improve the customer’s experience of the brand.”
How Adobe Used an AI Chatbot to Support 22,000 Remote Workers
When the COVID-19 virus hit in March throughout the US, Adobe like many companies sent their workers home and shifted into remote work over a single weekend. “Not surprisingly, our existing processes and workflows weren’t equipped for this abrupt change,” stated Cynthia Stoddard, Senior VP and CIO at Adobe, in a written account published in VentureBeat. “Customers, employees, and partners — many also working at home — couldn’t wait days to receive answers to urgent questions.”
The first step was to launch an organization-wide channel using Slack, a business communications platform from Slack Technologies, launched in 2013 in San Francisco. The 24×7 global IT help desk would support the channel, with the rest of IT available for rapid event escalation.
The same questions and issues came up frequently. “We decided to optimize our support for frequently asked questions and issues,” Stoddard stated. They combined AI, machine learning and natural language processing to build a chatbot. Its answers could be as simple as directing employees to an existing knowledge base or FAQ, or walking them through steps to solve a problem. The team focused on the eight most frequently-reported topics, then continued to add capabilities based on what delivers the biggest benefits.
“The results have been remarkable,” she wrote. Since going live on April 14, the system has responded to more than 3,000 queries, and Adobe has seen improvement in some critical areas. For example, more employees are seeking IT support through email, so it was important to speed the turnaround time on those queries.
“With the help of a deep learning and NLP based routing mechanism, 38% of email tickets are now automatically routed to the correct support queue within six minutes,” she stated. “The AI routing bot uses a neural network-based classification technique to sort email tickets into classes, or support queues. Based on the predicted classification, the ticket is automatically assigned to the correct support queue.”
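The ticket-routing idea can be illustrated with a toy classifier. Adobe's actual system is a neural network trained on resolved tickets; the keyword-scoring stand-in below (with made-up queue names) only shows the mapping from ticket text to a predicted support queue.

```python
# Hypothetical sketch of ticket-to-queue routing. The queue names and
# keywords are invented for illustration; a production router would use
# a trained neural classifier, not keyword overlap.

QUEUE_KEYWORDS = {
    "vpn_support": {"vpn", "tunnel", "remote", "connect"},
    "password_reset": {"password", "login", "locked", "reset"},
    "hardware": {"laptop", "monitor", "keyboard", "battery"},
}

def route_ticket(text):
    """Score each queue by keyword overlap and return the best match."""
    tokens = set(text.lower().split())
    scores = {q: len(tokens & kw) for q, kw in QUEUE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_ticket("Cannot connect to the VPN from home"))  # vpn_support
```

In the real system, the classifier's predicted class determines the support queue, and retraining on newly resolved tickets (as the article notes) keeps the routing accurate as issues change.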
The average time required to dispatch and route email tickets has been reduced by the AI chatbot from about 10 hours to less than 20 minutes. Continuous supervised training on the bot has helped Adobe achieve 97% accuracy, nearly on a par with a human expert. Call volumes for internal support have dropped by 35% as a result.
The neural network model is retrained every two weeks by adding new data from resolved tickets to the training set. They leveraged the work done for a company chatbot for finance. Adobe continues to look at robotic process automation, to explore business improvements through the combination of autonomous software robots and AI.
Keeping employees in the loop about the AI and chatbot technology being employed is critical. “When introducing a new/unknown technology tool, it’s critical to keep employee experience at the core of the training and integration process – to ensure they feel comfortable and confident with the change,” Stoddard wrote.