AI

IBM Security announces new ways for customers to adopt a zero trust approach


In addition to new blueprints, IBM Security also announced a partnership with the cloud and network security provider Zscaler.


Image: iStock/sdecoret

On Wednesday, IBM Security announced new ways the company will help customers adopt a zero trust approach to security. A zero trust approach, the announcement explained, is founded on three core principles: provide least-privileged access, never trust and always verify, and assume breach. IBM Security also announced an alliance partnership with cloud and network security provider Zscaler, new blueprints for common zero trust use cases and a SaaS version of IBM Cloud Pak for Security, all intended to help customers modernize and secure remote work.
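The three principles can be illustrated with a minimal, hypothetical policy check: requests are denied by default (assume breach), identity is verified on every call (never trust, always verify), and grants are scoped to specific resources (least privilege). This is an illustrative sketch with made-up names, not IBM's implementation:

```python
from dataclasses import dataclass

# Hypothetical request context; real zero trust engines evaluate far richer signals.
@dataclass
class Request:
    user: str
    device_trusted: bool
    mfa_passed: bool
    resource: str

# Least-privilege grants: each user is allowed specific resources only (example data).
GRANTS = {"alice": {"payroll-db"}, "bob": {"build-server"}}

def authorize(req: Request) -> bool:
    """Default deny: every check must pass on every request."""
    if not req.mfa_passed:        # never trust, always verify
        return False
    if not req.device_trusted:    # assume breach: untrusted devices get nothing
        return False
    return req.resource in GRANTS.get(req.user, set())  # least privilege

print(authorize(Request("alice", True, True, "payroll-db")))    # True
print(authorize(Request("alice", True, True, "build-server")))  # False
```

Note there is no "inside the network" shortcut: an authenticated user on a trusted device is still denied any resource not explicitly granted.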


Security professionals can apply zero trust as a framework for updating security programs, easing adaptation to the risks that emerge from a changing business environment. IBM Security cited a recent ESG study which found that 45% of organizations with the most mature zero trust strategies were able to transition smoothly to a remote or work-from-home model, compared with only 8% of the least mature organizations.

“With a mobile workforce and data residing everywhere, the internet has become our primary network,” said Mauricio Guerra, CISO for The Dow Chemical Company, in a press release. Guerra will participate in IBM Think on May 11, 2021.

Guerra continued, “Embracing a zero trust architecture enables us to add new capabilities and strengthen security. Working with partners like IBM Security and Zscaler can help us provide users with secure remote access to all of our locations, as well as access to applications wherever and however they are hosted.”


IBM Security’s new zero trust blueprints will offer tech pros a framework for creating a security program which applies the aforementioned three core principles of zero trust. With the blueprints, companies will have “a prescriptive roadmap of security capabilities along with guidance on how to integrate them as part of a zero-trust architecture.” 

IBM Security used customer engagements to develop the capability and guidance for the blueprints. The company said that this will help organizations plan zero trust journeys and investments, with a “pragmatic approach that better aligns security and business objectives.”

Business initiatives that can use the blueprints include: 

  • Preserve customer privacy

  • Secure the hybrid and remote workforce

  • Reduce the risk of insider threat

  • Protect the hybrid cloud

To address the fragmentation and complex challenges security teams face as they adopt a zero trust strategy, IBM Security said there must be an open approach. IBM is collaborating with leading technology partners to help simplify and connect security across an organization’s vendor ecosystem.

“Working from anywhere, combined with enterprises’ move to SaaS and the cloud, has effectively rendered the perimeter security model obsolete and traditional security defenses ineffective,” said Jay Chaudhry, chairman, CEO and founder of Zscaler, in the press release. He said validated user identity should be combined with business policies for direct access to authorized applications and resources. Chaudhry also added that the IBM Security alliance will help organizations and employees “embrace working from anywhere and protect enterprise data.”

The announcement further said that IBM will collaborate with ecosystem partners to help them implement zero trust strategies with worldwide customers. 

IBM Cloud Pak for Security will combine threat management capabilities and data security into a single, modular, easier to consume solution. With the new IBM Cloud Pak for Security as a Service, customers gain the option to choose between an owned or hosted deployment model—whichever is best suited for their environment and needs. It also provides access to a unified dashboard across threat management tools, with the option to easily scale with a usage-based pricing approach. 

“Our customers need to secure their rapidly changing business environments without causing delays or friction in their daily operations,” said Mary O’Brien, general manager, IBM Security in the press release. “It’s not uncommon to have users, data and applications operating in different environments. They all need to connect to one another quickly, seamlessly and securely. A zero trust approach offers a better way to address the security complexity that is challenging businesses today.” 

For more about zero trust from O’Brien, Guerra and Chaudhry, join IBM Think on May 11, 2021 in North America and May 12, 2021 in Europe and Asia. 

Also see

Source: https://www.techrepublic.com/article/ibm-security-announces-new-ways-for-customers-to-adopt-a-zero-trust-approach/#ftag=RSS56d97e7

AI

Raquel Urtasun’s Waabi Autonomous Vehicle Software Company is Launched   


Waabi, the autonomous driving software company recently launched by Raquel Urtasun, will initially focus on the trucking industry. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor  

Raquel Urtasun hit the ground running as an entrepreneur on June 8, with the announcement of her autonomous driving software company Waabi, complete with $83.5 million in backing.  

Raquel Urtasun, Founder and CEO, Waabi

Urtasun has a long track record as a computer scientist, especially working to apply AI to self-driving car software. Uber hired her in May 2017 to lead a research team based in Toronto for the company’s self-driving car program. (See AI Trends, June 29, 2018) 

“Self-driving is one of the most exciting and important technologies of our generation. Once solved at scale, it will change the world as we know it,” stated Urtasun in the Waabi launch press release. “Waabi is the culmination of my life’s work to bring commercially viable self-driving technology to society and I’m honoured to be joined by a team of extraordinary scientists, engineers and technologists who are equally committed to executing on this bold vision.”  

The Waabi launch was greeted with some skepticism, given that the self-driving car industry is still working to get off the ground. But Urtasun knows what she’s doing.  

The latest financing round was led by Khosla Ventures, with additional participation from Urtasun’s former employer, Uber, and Aurora, the AV startup that ended up acquiring Uber ATG in a deal last year, according to an account in The Verge. Money was also raised from 8VC, Radical Ventures, Omers Ventures, BDC, AI luminaries Geoffrey Hinton, Fei-Fei Li, Pieter Abbeel, Sanja Fidler, and others, the report said.  

Waabi will initially focus on the trucking industry, offering its software to automate driving on commercial delivery routes. One reason is the industry’s shortage of truck drivers; another is that highways are simpler than city streets for autonomous vehicles to navigate.

Waabi’s technical approach will lean heavily on simulation, using techniques Urtasun has developed in her research. The company’s simulation approach will reduce the need for the miles of testing on real roads and highways that autonomous driving competitors have logged.

“For us in simulation, we can test the entire system,” Urtasun stated to The Verge.  “We can train an entire system to learn in simulation, and we can produce the simulations with an incredible level of fidelity, such that we can really correlate what happens in simulation with what is happening in the real world.”  

To have an autonomous vehicle startup founded by a woman who developed the technology and is the CEO is unusual; Urtasun hopes to inspire other women to join the industry. “This is a field that is very dominated by white dudes,” she said. “The way to build integrating knowledge is to build technology with diverse perspectives, because by challenging each other, we build better things.”  

Earlier Career at Uber, Toyota 

Urtasun started at Uber in May 2017 to pursue her work on machine perception for self-driving cars. The work entails machine learning, computer vision, robotics, and remote sensing. Before coming to the University of Toronto, Urtasun worked at the Toyota Technological Institute at Chicago. Uber committed to hiring dozens of researchers and made a multi-year, multi-million dollar commitment to Toronto’s Vector Institute, which Urtasun co-founded. 

Urtasun has argued that self-driving vehicles need to wean themselves off Lidar (Light Detection and Ranging), a remote sensing method that uses a pulsed laser to measure variable distances. Her research has shown that in some cases vehicles can obtain similar 3D data about the world from ordinary cameras, which are far less expensive than Lidar units costing thousands of dollars. 

“If you want to build a reliable self-driving car right now, we should be using all possible sensors,” Urtasun told Wired in an interview published in November 2017. “Longer term, the question is how can we build a fleet of self-driving cars that are not expensive.” 

Ben Dickson, Founder and Editor, TechTalks

The company’s technical “AI-first approach” implies that they will put more emphasis on better machine learning models and less on complementary technologies including Lidar, radar, and mapping data, according to an account in TechTalks. “The benefit of having a software-heavy stack is the very low costs of updating the technology. And there will be a lot of updating in the coming years,” stated Ben Dickson, author of the report and founder of TechTalks.  

Urtasun described the AI system the company uses as a “family of algorithms,” in an account of the launch in TechCrunch. Its closed-loop simulation environment is a replacement for sending real cars on real roads.  

“I’m a bit on the fence on the simulation component,” Dickson stated. “Most self-driving car companies are using simulations as part of the training regime of their deep learning models. But creating simulation environments that are exact replications of the real world is virtually impossible, which is why self-driving car companies continue to use heavy road testing.”  

Waymo Leads in Simulated and Real Testing Miles 

Waymo has at least 20 billion miles of simulated driving to go with its 20 million miles of real-road testing, a record in the industry, according to Dickson. To gain more insight into Waabi’s technology, he looked at some of Urtasun’s recent academic work at the University of Toronto. Her name appears on many papers about autonomous driving; one, uploaded on the arXiv preprint server in January, caught Dickson’s attention.  

Titled “MP3: A Unified Model to Map, Perceive, Predict and Plan,” the paper discusses an approach to self-driving close to the description in Waabi’s launch press release. 

The researchers describe MP3 as “an end-to-end approach to mapless driving that is interpretable, does not incur any information loss, and reasons about uncertainty in the intermediate representations.” In the paper, researchers also discuss the use of “probabilistic spatial layers to model the static and dynamic parts of the environment.” 

MP3 is end-to-end trainable. It uses Lidar input to create scene representations, predict future states and plan trajectories. “The machine learning model obviates the need for finely detailed mapping data that companies like Waymo use in their self-driving vehicles,” Dickson stated. 

Urtasun posted a video, A Future with Self-Driving Vehicles,  on her YouTube channel that provides a brief explanation of how MP3 works. Some researchers commented that it is a clever combination of existing techniques. “There’s also a sizable gap between academic AI research and applied AI,” Dickson stated. How the Waabi model performs in practical settings will be interesting to watch.   

Read the source articles and information in AI Trends, the Waabi launch press release, in The Verge, in TechTalks, in TechCrunch and in a YouTube video, A Future with Self-Driving Vehicles.


Source: https://www.aitrends.com/selfdrivingcars/raquel-urtasuns-waabi-autonomous-vehicle-software-company-is-launched/


AI

Market for Emotion Recognition Projected to Grow as Some Question Science 


Emotion recognition software is growing in use and is being questioned for its scientific foundation at the same time. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

The emotion recognition software segment is projected to grow dramatically in coming years, spelling success for companies that have established a beachhead in the market, while causing some who are skeptical about its accuracy and fairness to raise red flags.  

The global emotion detection and recognition market is projected to grow to $37.1 billion by 2026, up from an estimated $19.5 billion in 2020, according to a recent report from MarketsandMarkets. North America is home to the largest market.  
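The 2020 and 2026 figures come from the report; the compound annual growth rate they imply, roughly 11%, can be checked with a quick calculation:

```python
# Implied compound annual growth rate from the MarketsandMarkets figures.
start, end, years = 19.5, 37.1, 6  # $B in 2020, $B in 2026, span in years

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 11.3%
```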

Software suppliers covered in the report include: NEC Global (Japan), IBM (US), Intel (US), Microsoft (US), Apple (US), Gesturetek (Canada), Noldus Technology (Netherlands), Google (US), Tobii (Sweden), Cognitec Systems (Germany), Cipia Vision Ltd (Formerly Eyesight Technologies) (Israel), iMotions (Denmark), Numenta (US), Elliptic Labs (Norway), Kairos (US), PointGrab (US), Affectiva (US), nViso (Switzerland), Beyond Verbal (Israel), Sightcorp (Holland), Crowd Emotion (UK), Eyeris (US), Sentiance (Belgium), Sony Depthsense (Belgium), Ayonix (Japan), and Pyreos (UK). 

Among the users of emotion recognition software today are auto manufacturers, who use it to detect drowsy drivers and to identify whether the driver is engaged or distracted.

Some question whether emotion recognition software is effective, and whether its use is ethical. One research study recently summarized in Sage journals is examining the assumption that facial expressions are a reliable indicator of emotional state.  

Lisa Feldman Barrett, professor of psychology, Northeastern University

“How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation,” stated the report, from a team of researchers led by Lisa Feldman Barrett, of Northeastern University, Mass General Hospital and Harvard Medical School.   

The research team is suggesting that further study is needed. “Our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life,” the report stated. 

Technology companies are spending millions on projects to read emotions from faces. “A more accurate description, however, is that such technology detects facial movements, not emotional expressions,” the report authors stated.  

Affectiva to be Acquired for $73.5 Million by Smart Eye of Sweden 

Recent beneficiaries of the popularity of emotion recognition software are the founders of Affectiva, which recently reached an agreement to be acquired by Smart Eye, a Swedish company providing driver monitoring systems for about a dozen automakers, for $73.5 million in cash and stock. 

Affectiva was spun out of MIT in 2009 by founders Rana el Kaliouby, who had been CEO, and Rosalind Picard, who is head of the Affective Computing group at MIT. Kaliouby wrote about her experience founding the company in her book, Girl Decoded. 

“As we watched the driver monitoring system category evolve into Interior Sensing, monitoring the whole cabin, we quickly recognized Affectiva as a major player to watch,” stated Martin Krantz, CEO and founder of Smart Eye, in a press release. “Affectiva’s pioneering work in establishing the field of Emotion AI has served as a powerful platform for bringing this technology to market at scale,” he stated.  

Affectiva CEO Kaliouby stated, “Not only are our technologies very complementary, so are our values, our teams, our culture, and perhaps most importantly, our vision for the future.”  

Kate Crawford, senior principal researcher, Microsoft Research

Some have called for government regulation of emotion intelligence software. Kate Crawford, senior principal researcher at Microsoft Research New York and author of the book Atlas of AI (Yale, 2021), wrote recently in Nature, “We can no longer allow emotion-recognition technologies to go unregulated. It is time for legislative protection from unproven uses of these tools in all domains—education, health care, employment, and criminal justice.”   

The reason, Crawford stated, is that companies are selling software that affects the opportunities available to individuals “without clearly documented, independently audited evidence of effectiveness.” This includes job applicants being judged on facial expressions or vocal tones, and students flagged at school because their faces may seem angry.  

The science behind emotion recognition is increasingly being questioned. A review of 1,000 studies found the science behind tying facial expressions to emotions is not universal, according to a recent account in OneZero. The researchers found people made the expected facial expression to match their emotional state only 20% to 30% of the time.   

Startups including Find Solution AI base their emotion recognition technology on the work of Paul Ekman, a psychologist who published on the similarities between facial expressions around the world, popularizing the notion of “seven universal emotions.”   

The work has been challenged in the real world. A TSA program that trained agents to spot terrorists using Ekman’s work found little scientific basis, did not result in arrests, and fueled racial profiling, according to filings from the Government Accountability Office and the ACLU.   

Dr. Barrett’s team of researchers concluded, “The scientific path forward begins with the explicit acknowledgment that we know much less about emotional expressions and emotion perception than we thought we did.”  

Read the source articles and information from MarketsandMarkets, in Sage journals, in a press release from Smart Eye, in Nature and in OneZero. 


Source: https://www.aitrends.com/emotion-recognition/market-for-emotion-recognition-projected-to-grow-as-some-question-science/


AI

Generic AI Models Save Time; Prebuilt AI Models for Verticals Save More 


Like prefabricated housing, prebuilt AI models are emerging, some targeting vertical industries such as oil and gas with applications including predictive maintenance. (Credit: Getty Images)

By AI Trends Staff  

Generic AI models save time by packaging up a portion of the work involved in launching an AI application and offering it for reuse. A prime example is Vision AI from Google Cloud, which provides access to pre-built models for detecting emotion and understanding text.  

Some emerging companies aim to build on this trend by supplying pre-built models developed for specific vertical industries, to go beyond the advantages of generic pre-built models for any industry.  

DJ Das, founder and CEO, ThirdEye Data

“While effective in some use cases, these solutions do not suit industry-specific needs right out of the box. Organizations that seek the most accurate results from their AI projects will simply have to turn to industry-specific models,” stated DJ Das, founder and CEO of ThirdEye Data, in a recent account in TechCrunch. ThirdEye builds AI applications for enterprises. 

Companies have options for generating industry-specific results. “One would be to adopt a hybrid approach—taking an open-source generic AI model and training it further to align with the business’ specific needs,” Das stated. “Companies could also look to third-party vendors, such as IBM or C3, and access a complete solution right off the shelf. Or—if they really needed to—data science teams could build their own models in-house, from scratch.”  

In a recent engagement, ThirdEye worked with a utility company to detect defects in electric utility poles by using AI to analyze thousands of images. “We started off using Google Vision API and found that it was unable to produce our desired results,” which was 90% or better accuracy, Das stated. For example, Google Vision’s generic models did not identify the nonstandard fonts and different background colors used in utility pole tags. 

“So, we took base computer vision models from TensorFlow and optimized them to the utility company’s precise needs,” Das stated. The team spent two months developing AI models to detect and decipher tags on the electric poles, and another two months training the models. “The results are displaying accuracy levels of over 90%,” Das stated.  
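The hybrid approach described above, in which a generic pre-trained model stays frozen and only a small task-specific head is trained on domain data, can be sketched as a toy example. Everything here is hypothetical (a random projection stands in for the frozen model, and the labels are synthetic); ThirdEye's actual work used TensorFlow computer vision models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen generic model: a fixed projection that is never updated.
W_frozen = rng.normal(size=(8, 4))

def generic_features(x):
    """Frozen 'pre-trained' feature extractor (weights stay fixed during fine-tuning)."""
    return np.tanh(x @ W_frozen)

# Hypothetical domain data with synthetic labels the frozen features can express,
# e.g. "pole tag readable" vs. "unreadable" in the utility example.
X = rng.normal(size=(200, 8))
true_direction = np.array([1.0, -1.0, 0.5, 0.0])
y = (generic_features(X) @ true_direction > 0).astype(float)

# Task-specific head: the only trainable parameters in the hybrid approach.
w, b = np.zeros(4), 0.0
lr = 0.5
for _ in range(500):  # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(generic_features(X) @ w + b)))
    w -= lr * generic_features(X).T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

pred = (1 / (1 + np.exp(-(generic_features(X) @ w + b))) > 0.5).astype(float)
print(f"training accuracy of the fine-tuned head: {np.mean(pred == y):.0%}")
```

The appeal of the approach is visible even in this toy: only the four head weights are trained, so the domain-specific step is cheap compared with training the full model from scratch.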

Sees Need for Industry-Specific Pre-Trained Models 

A similar sentiment was expressed by Divyabh Mishra, CEO and founder of CrowdAnalytix, in a recent account in Forbes. “There is a catch to Google Vision, just as there is to all generic AI: These generic models know nothing about the particular industry or organization using them,” Mishra stated.  

Generic AI models are trained on general sets of data, often publicly accessible, and applicable to many use cases across industries. “The result is AI that is undeniably powerful, but extremely limited in its usefulness to businesses,” he stated.  

A large library of narrowly trained AI applications working in specific vertical industries is needed. “We need models pre-trained on large datasets for relatively specific use cases: an AI marketplace of business-specific solutions that can be implemented directly by the consumer, without a huge data science team and without having to deal with additional training,” Mishra stated.  

CrowdAnalytix works in a crowdsource model, with a community of “solvers” numbering over 25,000 to work on projects, which the company calls “competitions.” Its website states, “We leverage our community to create a host of pre-built solutions that are then tuned and customized for each client.”  

New York Times Working with Google Cloud to Digitize its Photo Archive  

In an example rooted in Google’s investment in prebuilt models, The New York Times is working with Google Cloud on a project to digitize its photo archive. For over 100 years, The Times has archived photos in file cabinets three levels below the street near its offices in Times Square. The archive now has between five and seven million photos, according to an account on the blog of Google Cloud.  

“The morgue is a treasure trove of perishable documents that are a priceless chronicle of not just The Times’s history, but of more than a century of global events that have shaped our modern world,” stated Nick Rockwell, chief technology officer, The New York Times.  

Sam Greenfield, technical director, Cloud Office of the CTO for Google

“A working asset management system must allow the users to be able to browse and search for photos easily,” stated Sam Greenfield, technical director, Cloud Office of the CTO for Google, author of the post. Google brought its AI tech and expertise to the table, to create a system useful to the Times photo editors. The system scans the photo image and all the text information on the back of the photo, which enables the system to further classify the photo. A photo of Penn Station gets put into “travel” and “bus and rail” classifications, for instance.  

C3.ai Offering Prebuilt AI Applications for Vertical Industries  

C3.ai, the AI software company founded by Tom Siebel, who also founded the customer relationship management software supplier Siebel Systems, is replicating the packaged software industry for AI. The company offers the C3 AI Suite, with prebuilt AI applications that can be configured for uses including predictive maintenance, fraud detection, energy management and customer engagement.  

Working with Baker Hughes, an industrial services company, C3 developed the BHC3 AI Suite targeting the oil and gas industry with predictive maintenance use cases, according to a customer story on the C3 website. Within months, the team deployed predictive maintenance applications at scale. “These applications notify instrument engineers when asset components are behaving abnormally,” the account stated.   

“The combination of our data science expertise and the software development expertise that c3.ai brings is really powerful,” stated Dan Jeavons, who is general manager of data science for Shell Oil.   

The market is setting up well for software suppliers and consultants with expertise in applying prebuilt AI models to specific vertical industries. 

Read the source articles and information in TechCrunch, in Forbes, on the blog of Google Cloud, and from a customer story on the C3 website. 


Source: https://www.aitrends.com/software-development-2/generic-ai-models-save-time-prebuilt-ai-models-for-verticals-save-more/


AI

Lip-Reading AI is Under Development, Under Watchful Eyes 


A lip-reading app using AI from startup Liopa was developed as an aid for the speech-impaired, and is also being applied to surveillance. (Credit: Getty Images)   

By AI Trends Staff 

A lip-reading app from Irish startup Liopa is said to represent a breakthrough in the field of visual speech recognition (VSR), which trains AI to read lips without any audio input.   

Liopa’s product, SRAVI (Speech Recognition App for the Voice Impaired), is a communication aid for speech-impaired patients. It is likely to be the first lip-reading AI app available for public purchase, according to an account from Vice/Motherboard.  

Researchers driven by a range of potential commercial applications including surveillance tools have been working for years to teach computers to lip-read, and it has proven a challenging task. Liopa is working to certify SRAVI as a Class I medical device in Europe, hoping to complete the certification by August. That would allow it to begin selling to healthcare providers. 

Many tech giants are also working on lip-reading AI. Scientists affiliated with or working directly for Google, Huawei, Samsung, and Sony are all researching VSR systems and appear to be making rapid advances, according to the Motherboard account.   

Liopa Wins Second Contract for UK Defense and Security Research  

How lip-reading AI is being developed and how it might be deployed are becoming causes for concern. Liopa recently announced that it has been selected to take part in Phase 2 of the DASA Behavioural Analytics initiative, aimed at helping the UK’s Defense and Security Accelerator develop capability in behavioral analytics. These are defined as “context-specific insights” derived from data on individuals and groups, which could enable “reliable predictions about how they are likely to act in the future.”   

The hoped-for tool would allow law enforcement agencies to search through silent CCTV footage and identify when people say certain keywords.   

The Liopa VSR engine takes video of one or more subjects speaking as input, and uses AI to predict the subjects’ most likely utterances, according to a press release from Liopa, which is based in Belfast, Northern Ireland. The engine can be used to identify key words spoken in surveillance (CCTV) video content where audio is either not present or of poor quality.  

DASA Delivery Manager, Eleanor Humphrey, stated, “Behavioural Analytics is a fascinating and emerging capability that is finding innovative ways to keep our people safe from major threats. We are delighted to be working with Liopa to accelerate their technology and look forward to seeing the results.”  

Liam McQuillan, Founder and CEO, Liopa

Liam McQuillan, Founder and CEO, Liopa, stated in the release, “This contract allows us to build on the progress made in the Phase 1 project. It’s great validation of our VSR technology in a practical use case that will provide invaluable information for Defence & Security personnel.”  

Liopa is not alone in its quest to tap AI for lip-reading. Surveillance company Motorola Solutions has a patent for a lip-reading system designed to aid police. Skylark Labs, a startup whose founder has ties to the US Defense Advanced Research Projects Agency (DARPA), told Motherboard that its lip-reading system is currently deployed in private homes and a state-controlled power company in India to detect foul and abusive language. 

VSR Tech Could Be Ensnared in Ethical Issues Akin to Facial Recognition 

Some see the sticky wicket ahead similar to what has befallen the facial recognition market, which has been ensnared in ethical issues.  

“This is one of those areas, from my perspective, which is a good example of ‘just because we can do it, doesn’t mean we should,’” stated Fraser Sampson, the UK’s biometrics and surveillance camera commissioner, to Motherboard. “My principal concern in this area wouldn’t necessarily be what the technology could do and what it couldn’t do, it would be the chilling effect of people believing it could do what it says. If that then deterred them from speaking in public, then we’re in a much bigger area than simply privacy, and privacy is big enough.” 

AI researchers are now more cognizant of the ethical implications of how AI is applied. For example, the NeurIPS conference now requires AI scientists to submit, along with their proposed papers, impact statements about how their findings might affect society.  

Stavros Petridis, Research Scientist, Facebook AI Applied Research

Stavros Petridis, who has conducted related research at Imperial College London and is now working for Facebook, spoke to Motherboard about the dilemma. “In the last year there have been several discussions in the published literature around ethical considerations for VSR technology,” he stated. “Given that there are no commercial applications available yet, there are pretty good chances that this time, ethical considerations will be taken into account before this technology is fully commercialized.”  

Liopa CEO Liam McQuillan also spoke to Motherboard about the issue, saying the company is at least a year away from having a system that can lip-read keywords from silent CCTV footage at the required level of accuracy. He said the company has considered the possibility of a privacy backlash. “There may be concerns here that actually forbid the ultimate use of this technology,” McQuillan stated.  

At the Consumer Electronics Show in January, Sony provided an overview of its Visual Speech Enablement product in development, which uses a camera sensor and AI for augmented lip reading. Mark Hanson, Sony’s VP of Product Technology and Innovation, said the product isolates a user’s lips and translates their movements into words, independent of background or foreground noise, according to an account in PCMag.  

The new product’s technology only captures lips, not faces, so no user-identifiable data is retained, Hanson indicated.   

Read the source articles and information in Vice/Motherboard, the press release from Liopa and in PCMag. 


Source: https://www.aitrends.com/image-recognition/lip-reading-ai-is-under-development-under-watchful-eyes/
