
AI-powered transcription startup Verbit raises $157M



Verbit today announced the close of a $157 million series D round that the company says will bolster its product R&D and hiring efforts. CEO Tom Livne, who noted that the raise brings the company’s post-money valuation to more than $1 billion, said the capital will also support Verbit’s geographic expansion as it prepares for an initial public offering.

The voice and speech recognition tech market is anticipated to be worth $31.82 billion by 2025, driven by new applications in the banking, health care, and automotive industries. In fact, it’s estimated that one in five people in the U.S. interacts with a smart speaker on a daily basis and that the share of Google searches conducted by voice in the country recently surpassed 30%.

Livne, who cofounded Verbit.ai with Eric Shellef and Kobi Ben Tzvi in 2017, asserts that the New York-based startup will contribute substantially to the voice transcription segment’s rise. “The transcription market has been ripe for innovation. That’s the initial reason why I founded Verbit. The shift to remote work and accelerated digitization amid the pandemic has been a major catalyst … and has further driven Verbit’s already-rapid development,” Livne said in a press release. “Securing this new funding is yet another milestone that brings us closer to becoming a public company, which will further fuel our expansion through strategic acquisitions and investments.”

AI-powered technology

Verbit’s voice transcription and captioning services aren’t novel — well-established players like Nuance, Cisco, Otter, Voicera, Microsoft, Amazon, and Google have offered rival products for years, including enterprise-focused platforms like Microsoft 365. But Verbit’s adaptive speech recognition tech generates transcriptions that the company claims exceed 99.9% accuracy.

Verbit customers first upload audio or video files to a dashboard for AI-guided processing. Then a team of more than 33,000 human freelancers in over 120 countries edits and reviews the material, taking into account customer-supplied notes and guidelines. Finished transcriptions from Verbit are available for export to services like Blackboard, Vimeo, YouTube, Canvas, and BrightCode. A web frontend shows the progress of jobs and lets users edit and share files, define access permissions for each, add inline comments, request reviews, and view usage reports.

Above: Verbit’s transcription dashboard. Image Credit: Verbit

Customers have to make a minimum commitment of $10,000, a pricing structure that has apparently paid dividends. Annual recurring revenue grew sixfold from 2020 despite pandemic-related headwinds, according to Livne, and now stands at close to $100 million.

Rapid growth

Verbit’s suite has wooed a healthy client base of over 400 educational institutions and commercial customers (up from 70 as of January 2019), including Harvard, the NCAA, the London Business School, and Stanford University. Following its recent acquisition of captioning provider VITAC, Verbit claims it’s the “No. 1 player” in the professional transcription and captioning market as it supports more than 1,500 customers across the legal, media, education, government, and corporate sectors. Clients include CNBC, CNN, and Fox.

Verbit plans to add 200 new business and product roles and explore verticals in the insurance and financial sectors, as well as media and medical use cases. To this end, it recently launched a human-in-the-loop transcription service for media firms with a delay of only a few seconds. And the company inked an agreement with the nonprofit Speech to Text Institute to invest in court reporting and legal transcription technologies.

Sapphire Ventures led Verbit’s series D round, with participation from Third Point, More Capital, Lion Investment Partners, and ICON fund, as well as existing investors such as Stripes, Vertex Ventures, HV Capital, Oryzn Capital, and CalTech. This brings the four-year-old company’s total capital raised to more than $250 million, following a $60 million series C in November 2020.


Source: https://venturebeat.com/2021/06/08/ai-powered-transcription-startup-verbit-raises-157m/


Persistent fraud threats drive consumer biometrics for payments and mobile credentials


A biometric spoof attack and a new fraud report this week both illustrate the challenge of ensuring financial transactions are legitimate, showing why the latest Goode Intelligence forecast has biometrics securing billions of dollars in payments in the years ahead.

Biometric technologies from Idex and from partners Zwipe and Tag Systems are each a step closer to consumers’ wallets to help cut fraud; Google secure element partners including G+D and Thales are working on mobile digital identity credentials; industry stakeholders continue to invest in health passes; and a Mitek executive shares insights on the evolving consumer biometrics ecosystem in some of our most widely read stories of the week.

Top biometrics news of the week

By 2026, biometrics will secure more than $5.6 billion in payments, according to the latest forecast from Goode Intelligence and the most widely-read story of the week. The latest biometric payments report from Goode comes as Idex Biometrics announced a new order of its TrustedBio sensors and a Nilson Report was released highlighting the biometric card cost reduction from the partnership between Zwipe and Tag Systems.

The secure element Google is planning to use in its forthcoming Pixel 3 phones is optimized to secure digital copies of biometric passports and support mobile driver’s licenses. The company and partners are accelerating the development of the technology through the Android Ready SE Alliance, and OEM partners including G+D, NXP, STMicroelectronics and Thales are already working with the associated StrongBox applet.

An in-the-wild biometric spoof attack of some sophistication, which netted over $75 million in China, has come to light after a pair of fraudsters were prosecuted. The scam involved high-resolution images of people performing different actions, created with data obtained on the black market, along with fraudulent tax invoices and hijacked smartphones.

UbiSecure’s ‘Let’s Talk About Digital Identity’ podcast is joined by NIST computer scientist Mei Ngan, who discusses her path to joining NIST, the expansion of face biometrics in both applications and market size, and the Institute’s work on facial recognition with masks and demographic differentials, with an interesting segment on the serious threat of face morphing on identity documents.

The latest Identity Fraud Study from Javelin Research finds $43 billion was lost to digital identity fraud last year, meaning there has never been a better time to take advantage of increased consumer willingness to adopt biometrics. Consumers are also not willing to tolerate failed fraud claims resolution, which is too frequent, so financial institutions are under pressure from both sides.

The pandemic has driven many businesses from industries other than financial services to approach Buguroo about fending off online fraud with its behavioral biometrics, Founder and CEO Pablo de la Riva tells Startup Info. The company’s focus on comparison against personal behavioral history, rather than clusters of ‘good’ and ‘bad’ users, and its experience securing financial services customers give it an edge in an industry “gaining massive momentum,” de la Riva says.

In a highlight from Biometric Update’s growing network of media partners, we present IEC e-tech magazine Co-editor Antoinette Price’s recent interview with ISO/IEC biometrics standards editor Mike Thieme on biometric presentation attacks. Thieme talks about a broader conception of presentation attacks than is sometimes assumed, challenges with PAD systems, and what Part 4 of the ISO/IEC 30107 standard does.

The biometrics and technologies for delegating authorization and authentication to online accounts and various digital devices, across the full range of consumer applications, are available now, Mitek CTO Stephen Ritter tells Biometric Update, but the broader ecosystem to support them has yet to be established. Creating the right environment for consumer trust in smart homes and the IoT may require the efforts of business giants like big banks, but in the meantime appropriate choices in biometrics implementation can give companies an edge right now.

Two International Monetary Fund officials want to break down the Big Tech silos that are preventing big data and AI from being fully utilized. Too much data is probably being collected, and too little value shared with individuals, but major barriers related to privacy and policy stand in the way of change. The prioritization of policy around data sharing and digital identity for proving vaccination status may present the opportunity to overcome those barriers, according to an opinion piece by Yan Carriere-Swallow and Vikram Haksar.

Innovatrics’ SmartFace platform now includes pedestrian and body part detection to aid with anonymous real-time detection, with its latest update. The company has also introduced an application to provide instant feedback from mobile devices placed beside a SmartFace entry point, such as a reminder to put on a mask.

Digital health pass plans continue to be announced by governments around the world, with Japan, Estonia, and New York State the latest to adopt QR-code based credentials. Pangea has developed a ‘Green Pass’ authentication system to prevent spoofs of Israel’s COVID vaccine credential, meanwhile.

A pair of new health passes have been launched, with Global ID and Unisys each partnering with healthcare organizations. A white paper was released on the topic as part of the Digital Document Security Online Event 2021, and Aware reminds us of the importance of liveness for digital identity authentication, meanwhile.

Nomidio and Post-Quantum executives talk with Biometric Update about how much the way data is encrypted matters to its security, and how data can be secured in the future against quantum computers capable of breaking today’s standard encryption algorithms. That future may arrive in less than five years.

A new research partnership between Idiap and Facedapter to bring biometric pre-registration to airport experiences has been announced, in the latest attempt to reduce touchpoints and time spent waiting for flights. India’s government is moving forward with its Digi Yatra plans, while SITA offers tips for airports and a K2 Security executive weighs in on the impacts of COVID-19 on TSA checkpoints.

Former IATA Director General and CEO Alexandre de Juniac believes the digital identity benefits of the IATA Travel Pass could not only play a key role in restarting the industry, but also boost the OneID project and transform passenger experience, he tells Airlines. De Juniac talks about the timing of IATA’s transition to a new CEO, and how the pandemic has brought closer collaboration between industry stakeholders.

The deadline for Nigerians to register their NINs with their SIMs has been extended by two months by court order, as numerous people faced having their mobile service cut off. The biometrics-backed national identity number is necessary for ever more parts of life in the country, with the NIN now required for writing university entrance exams.

Coppernic Co-founder and CEO Kevin Lecuivre tells Provence Business about the company’s roots in Psion Teklogix, how far France lags behind many African countries in digitizing electoral processes, and the company’s prospects for 2021 in a French-language interview. The pandemic may have set back Coppernic’s plans to reach €20 million in turnover by the end of 2022, but the company is internationalizing, and hiring.

Simprints has now reached more than 1.2 million beneficiaries, Chief Product Officer Alexandra Grigore announced in a LinkedIn post. The non-profit has provided fingerprint biometrics to support social benefits programs in 14 countries so far.

Please let us know of any interviews, editorials or other content we should share with the biometrics and digital identity communities in the comments below or through social media.

Source: https://www.fintechnews.org/persistent-fraud-threats-drive-consumer-biometrics-for-payments-and-mobile-credentials/


How to Perform Sentiment Analysis with Amazon Comprehend


By Shreya Ghate, Member of Technical Staff at www.udgama.com

Amazon Comprehend Service

Amazon Comprehend is a natural language processing (NLP) service by AWS that uses machine learning algorithms to extract insights from text. It provides various features like sentiment analysis, keyphrase extraction, entity recognition, and language detection APIs so you can easily integrate natural language processing into your applications.

Many kinds of text in your business need analysis: product reviews, support emails, even advertising copy. Analyzing customer sentiment in this text can be very useful for business growth.

The question is how to get at it. As it turns out, machine learning can accurately identify the specific items in a text and then use the context to analyze the sentiment behind the language, such as identifying positive or negative reviews, at any scale of data.

You can explore the service here

Sentiment Analysis

Sentiment analysis is the process of analyzing the sentiment expressed in a piece of text with the help of various algorithms. Using sentiment analysis, we can detect various emotions and, in particular, determine whether a text is positive, neutral, or negative. This has enabled the development of intelligent applications with a better understanding of the text they receive.

Companies like Twitter, Facebook, and Google use such algorithms to monitor the content posted on their apps to make a secure experience for their users. 

In this article, we will learn how to analyze the sentiments from a piece of text using AWS services like Amazon Comprehend, AWS IAM, AWS Lambda, and Amazon S3. So before you get started, make sure you have access to all these services. You can access all of these services through your management console.

Let’s get started!

IAM Permission for Amazon Comprehend

Using your management console, select the IAM service. Once you land on the IAM dashboard, you will see Access Management on the left side. Under it, select Roles to add an IAM role that grants the required permissions, then click the Create role button.

As we will be using AWS Lambda, we need to select Lambda and click on Permissions (right-hand side below) to add various permissions to our role.

Select the ComprehendFullAccess and AWSLambdaExecute policies.

You can filter the list to find these permissions; once both are selected, click Next.

Give your role an appropriate name and description. Once you click Create role, the new role is created and can be accessed anytime from your list of existing roles in the IAM Management Console.
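If you prefer to script this step rather than use the console, a rough boto3 sketch is shown below. The role name comprehend-lambda-role and the trust policy are illustrative assumptions; the two managed policy ARNs correspond to the ComprehendFullAccess and AWSLambdaExecute permissions selected above.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets AWS Lambda assume this role (hypothetical role name)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="comprehend-lambda-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Role for a Lambda function that calls Amazon Comprehend",
)

# Attach the same two managed policies selected in the console
for arn in [
    "arn:aws:iam::aws:policy/ComprehendFullAccess",
    "arn:aws:iam::aws:policy/AWSLambdaExecute",
]:
    iam.attach_role_policy(RoleName="comprehend-lambda-role", PolicyArn=arn)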

Uploading a text file to Amazon S3 

Once the role is created, we will move to the S3 Management Console. Create a text document on your local device; this is the document whose sentiment we will analyze. First, create a bucket, and keep it public so it can be tested with the Lambda function.


Once the bucket is created, upload your text document; after uploading various documents, the bucket will contain them all. Each file must be smaller than 5,000 bytes, the maximum Amazon Comprehend accepts in a single synchronous call.


Once the files are uploaded, you can manage the bucket’s permissions and perform various actions on it as needed.

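If you would rather script the bucket setup, a minimal boto3 sketch might look like this, assuming the bucket name comprehend-lambda-analysis and region us-east-2 used later in this article, and a local file named analysisdata.txt:

import boto3

s3 = boto3.client("s3", region_name="us-east-2")

# Create the bucket (bucket names must be globally unique);
# outside us-east-1, a LocationConstraint matching the region is required
s3.create_bucket(
    Bucket="comprehend-lambda-analysis",
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)

# Upload the local text file; remember Comprehend's 5,000-byte limit per document
s3.upload_file("analysisdata.txt", "comprehend-lambda-analysis", "analysisdata.txt")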

Setting up a Lambda Function 

The final step is to create a Lambda function to analyze the text document we uploaded to the S3 bucket.


Here, I have created a function named textanalysis_lambda. We will use Python 3.8 as our runtime, and the existing role Comprehend_Lambda we created earlier will be used with the Lambda function.


Through the Lambda Management Console, we can open the function we just created and write the function code that performs sentiment analysis using Python 3.8. Here, we enter the name of the S3 bucket we created, and the key is the name of the file to be analyzed (uploaded to the same bucket).

import boto3

def lambda_handler(event, context):
    # Read the uploaded text document from S3
    s3 = boto3.client("s3")
    bucket = "comprehend-lambda-analysis"
    key = "analysisdata.txt"
    file = s3.get_object(Bucket=bucket, Key=key)
    analysisdata = str(file['Body'].read())

    # Pass the text to Amazon Comprehend for sentiment detection
    comprehend = boto3.client("comprehend")
    sentiment = comprehend.detect_sentiment(Text=analysisdata, LanguageCode="en")
    print(sentiment)
    return 'Sentiment detected'

Save your function code and then test it to see the results of the analysis for the text you specified through your bucket and key.


Once you test your function code, it returns the results for your text. The results section shows the output of the logging calls in the code, each of which corresponds to a row in the CloudWatch log group associated with this Lambda function.

Access the analysis text I uploaded (it is an extract from Apple’s WWDC 2020 keynote): https://comprehend-lambda-analysis.s3.us-east-2.amazonaws.com/analysisdata.txt


Comprehend identifies the overall sentiment of the keynote extract as Neutral, along with a confidence score for each sentiment class.
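For reference, detect_sentiment returns a dictionary containing a Sentiment label and a SentimentScore map with one confidence value per class. A small, illustrative snippet for pulling those fields out of the response (the sample text here is an assumption; in the Lambda function above, the text comes from the S3 object):

import boto3

comprehend = boto3.client("comprehend")
response = comprehend.detect_sentiment(Text="The keynote was informative.", LanguageCode="en")

label = response["Sentiment"]          # one of POSITIVE, NEGATIVE, NEUTRAL, MIXED
scores = response["SentimentScore"]    # confidence value for each class
print("Overall sentiment:", label)
for cls in ("Positive", "Negative", "Neutral", "Mixed"):
    print(f"  {cls}: {scores[cls]:.3f}")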

You can use the same for analyzing various text files like the ones I have tried. 

Here is a Twitter comment I have chosen to analyze: https://comprehend-lambda-analysis.s3.us-east-2.amazonaws.com/twitterhatecomment.txt


This is a hateful comment from Twitter, so the detected sentiment is Negative.

Here is another file I have chosen to analyze. It is a short story; you will enjoy reading it: https://comprehend-lambda-analysis.s3.us-east-2.amazonaws.com/shortstory.txt


It is a very short, motivating story, so the result is Positive.

You can analyze various other files in the same way; this process will help you understand the sentiment expressed in your own text.

Conclusion

Amazon Comprehend offers a range of natural language processing features in addition to sentiment analysis. Its key features include detecting the language of a given text, extracting key phrases and entities, determining how positive or negative the text is, and automatically organizing a collection of documents by topic. For extracting complex medical information, you can use Amazon Comprehend Medical.
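A rough sketch of those additional synchronous APIs with boto3 follows; the sample string is purely illustrative, and topic modeling, which runs as an asynchronous job, is not shown:

import boto3

comprehend = boto3.client("comprehend")
text = "Amazon Comprehend makes it easy to analyze customer feedback at scale."  # illustrative sample

# Detect the dominant language of the text
languages = comprehend.detect_dominant_language(Text=text)

# Extract key phrases and named entities (a language code is required)
key_phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print(languages["Languages"])
print(key_phrases["KeyPhrases"])
print(entities["Entities"])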

I hope this blog helped you understand how to perform sentiment analysis on text using the AWS Lambda interface. You can also use other approaches, such as the Serverless Framework, or build a React or Angular frontend to display the output.

For any queries/suggestions, feel free to contact me. 😃


Source: https://hackernoon.com/how-to-perform-sentiment-analysis-with-amazon-comprehend-8l4p351p?source=rss


The Third Pillar of Trusted AI: Ethics


By Scott Reed

Building an accurate, fast, and performant model founded upon strong Data Quality standards is no easy task. Taking the model into production with governance workflows and monitoring for sustainability is even more challenging. Finally, ensuring the model is explainable, transparent, and fair based on your organization’s ethics and values is the most difficult aspect of trusted AI.

We have identified three pillars of trust: performance, operations, and ethics. In our previous articles, we covered performance and operations. In this article, we will look at our third and final pillar of trust, ethics.

Ethics relates to the question: “How well does my model align with my organization’s ethics and values?” This pillar primarily focuses on demystifying and explaining model predictions, as well as identifying and neutralizing any hidden sources of bias. There are four primary components to ethics:

  • Privacy
  • Bias and fairness
  • Explainability and transparency
  • Impact on the organization

In this article, we will focus on two in particular: bias and fairness and explainability and transparency. 

Bias and Fairness

Examples of algorithmic bias are everywhere today, oftentimes relating to the protected attributes of gender or race, and existing across almost every vertical, including health care, housing, and human resources. As AI becomes more prevalent and accepted in society, the number of incidents of AI bias will only increase without standardized responsible AI practices.

Let’s define bias and fairness before moving on. Bias refers to situations in which, mathematically, the model performs differently (better or worse) for distinct groups in the data. Fairness, on the other hand, is a social construct, subjective and dependent on stakeholders, legal regulations, or values. The intersection between the two lies in context and the interpretation of test results.

At the highest level, measuring bias can be split into two categories: fairness by representation and fairness by error. The former means measuring fairness based on the model’s predictions among all groups, while the latter means measuring fairness based on the model’s error rate among all groups. The idea is to know if the model is predicting favorable outcomes at a significantly higher rate for a particular group in fairness by representation, or if the model is wrong more often for a particular group in fairness by error. Within these two families, there are individual metrics that can be applied. Let’s look at a couple of examples to demonstrate this point.

In a hiring use case where we are predicting if an applicant will be hired or not, we would measure bias within a protected attribute such as gender. In this case, we may use a metric like proportional parity, which satisfies fairness by representation by requiring each group to receive the same percentage of favorable predictions (i.e., the model predicts “hired” 50% of the time for both males and females). 

Next, consider a medical diagnosis use case for a life-threatening disease. This time, we may use a metric like favorable predictive value parity, which satisfies fairness by error by requiring each group to have the same precision, or probability of the model being correct.
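To make the two metric families concrete, here is a small sketch with hypothetical toy data that computes the favorable-prediction rate per group (for proportional parity) and the per-group precision (for predictive value parity):

import pandas as pd

# Hypothetical hiring predictions: 1 = "hired" (favorable outcome), 0 = "not hired"
df = pd.DataFrame({
    "group":     ["male", "male", "male", "female", "female", "female"],
    "predicted": [1, 1, 0, 1, 0, 0],
    "actual":    [1, 0, 0, 1, 1, 0],
})

# Fairness by representation: share of favorable predictions per group (proportional parity)
representation = df.groupby("group")["predicted"].mean()
print("Favorable prediction rate per group:")
print(representation)

# Fairness by error: precision per group (favorable predictive value parity)
predicted_favorable = df[df["predicted"] == 1]
precision = predicted_favorable.groupby("group")["actual"].mean()
print("Precision per group:")
print(precision)

A large gap between groups on either measure would flag potential bias worth investigating in context.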

Once bias is identified, there are several different ways to mitigate and force the model to be fair. Initially, you can analyze your underlying data, and determine if there are any steps in data curation or feature engineering that may assist. However, if a more algorithmic approach is required, there are a variety of techniques that have emerged to assist. At a high level, those techniques can be classified by the stage of the machine learning pipeline in which they are applied:

  • Pre-processing
  • In-processing
  • Post-processing

Pre-processing mitigation happens before any modeling takes place, directly on the training data. In-processing techniques relate to actions taken during the modeling process (i.e., training). Finally, post-processing techniques occur after the modeling process and operate on the model predictions to mitigate bias.
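One widely used example from the pre-processing family (not specific to this article) is reweighing, which assigns each training instance a weight so that group membership and the favorable label become statistically independent. A minimal sketch on hypothetical toy data:

import pandas as pd

# Hypothetical training data: protected group and the favorable label (1 = hired)
df = pd.DataFrame({
    "group": ["male"] * 6 + ["female"] * 6,
    "label": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
})

n = len(df)
p_group = df["group"].value_counts() / n              # P(group)
p_label = df["label"].value_counts() / n              # P(label)
p_joint = df.groupby(["group", "label"]).size() / n   # P(group, label)

# Reweighing: weight = P(group) * P(label) / P(group, label), so that
# under-represented (group, label) combinations receive larger weights
df["weight"] = df.apply(
    lambda row: p_group[row["group"]] * p_label[row["label"]]
    / p_joint[(row["group"], row["label"])],
    axis=1,
)
print(df.groupby(["group", "label"])["weight"].first())

The resulting weights can then be passed as sample_weight to most scikit-learn estimators during training.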

Explainability and Transparency

All Data Science practitioners have been in a meeting where they were caught off-guard trying to explain the inner workings of a model or the model’s predictions. From experience, I know that isn’t a pleasant feeling, but those stakeholders had a point. Trust in ethics also means being able to interpret, or explain, the model and its results as well as possible. 

Explainability should be a part of the conversation when selecting which model to put into production. Choosing a more explainable model is a great way to build rapport between the model and all stakeholders. Certain models are more easily explainable and transparent than others – for example, models that use coefficients (i.e., linear regression) or ones that are tree-based (i.e., random forest). These are very different from deep learning models, which are far less intuitive. The question becomes, should we sacrifice a bit of model performance for a model that we can explain?

At the model prediction level, we can leverage explanation techniques like XEMP or SHAP to understand why a particular prediction was assigned to the favorable or unfavorable outcome. Both methods are able to show which features contribute most, in a negative or positive way, to an individual prediction. 
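As a rough illustration of the SHAP approach mentioned above, assuming a tree-based model and the open-source shap package (the data here is hypothetical):

import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical applicant data and a hiring score between 0 and 1
X = pd.DataFrame({
    "years_experience": [1, 5, 3, 10, 2, 7],
    "interview_score":  [60, 85, 70, 90, 55, 80],
})
y = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7]

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to per-feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Positive values push the first applicant's predicted score up; negative values push it down
print(dict(zip(X.columns, shap_values[0])))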

Conclusion

In this series, we have covered the three pillars of trust in AI: performance, operations, and ethics. Each plays a significant role in the lifecycle of an AI project. While we’ve covered them in separate articles, in order to fully trust an AI system, there are no trade-offs between the pillars. Enacting trusted AI requires buy-in at all levels and a commitment to each of these pillars. It won’t be an easy journey, but it is a necessity if we want to ensure the maximum benefit and minimize the potential for harm through AI. 

Source: https://www.dataversity.net/the-third-pillar-of-trusted-ai-ethics/


Evolution, rewards, and artificial intelligence



Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions ranging from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I’ll try to disambiguate in simple terms where the line between theory and practice stands.

Natural selection

In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.”

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don’t get eliminated.

According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.”

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact thing. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence


In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
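For readers who want to see the mechanics behind this description, tabular Q-learning is about the smallest concrete version of this loop; the two-state toy environment below is purely illustrative and not from the paper:

import random

# Toy two-state environment: acting (action 1) in state 1 yields a reward of 1
def step(state, action):
    if state == 0:
        return (1, 0.0) if action == 1 else (0, 0.0)  # move toward the rewarding state, no reward yet
    return (0, 1.0) if action == 1 else (1, 0.0)      # collect the reward and reset

# Q-table: the agent's estimate of cumulative reward for each (state, action) pair
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

state = 0
for _ in range(5000):
    # Mostly exploit the current best action, but keep exploring at random
    if random.random() < epsilon:
        action = random.randint(0, 1)
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Nudge the estimate toward the observed reward plus the discounted best future value
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # action 1 ends up with the highest value in both states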

According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.”

In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.”

DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress in some of the most complex problems of science.

The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].”

This is where hypothesis separates from practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can’t offer “theoretical guarantee on the sample efficiency of reinforcement learning agents.”)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let’s say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the state of Earth at the time, including the initial state of the environment, and we still don’t have a definite theory on that.

An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation. On the other hand, the further forward you move, the more complex your initial state will be. Evolution has created all sorts of intelligent and non-intelligent lifeforms, and betting that we could reproduce the exact steps that led to human intelligence without any guidance, through reward alone, is a hard bet.

Above: A robot working in a kitchen. Image credit: Depositphotos

Many will say that you don’t need an exact simulation of the world and only need to approximate the problem space in which your reinforcement learning agent will operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.”

This statement is true, but downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that would want to work in such an environment would need to develop sensorimotor skills that are similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy to handle for a human (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require even more similar infrastructure between the robot and the humans who would share the environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of “cleanliness” as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward only is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Source: https://venturebeat.com/2021/06/20/evolution-rewards-and-artificial-intelligence/
