

How enterprise work assistants are changing workplace communication


Chatbots have quickly risen from a trend to one of the most important assisting resources in many industries and fields. Without a doubt, they are one of the most disruptive factors in workplace communication in digital industries and beyond.

(Image source: kore.ai)

According to Tangowork CEO Chris McGrath, in the next five years, 90% of companies will be utilizing a chatbot in one way or another. In ten years’ time, chatbots will be able to answer any workplace question we can come up with, either textually or via voice.

In this article, we’re looking at how chatbots are changing our work environments, the way we talk to each other, and what we can expect in the future.

Messaging is already embedded deep in the workflow, so employing chatbots would not be a leap to an unknown technology. With the introduction of Slack, Microsoft Teams, WhatsApp, and other messaging platforms, collaborative chatting is already ingrained in today’s work culture.

In the same way, messaging is how we retrieve information — if something is unclear, you contact a colleague to clarify. If these questions could be predicted and identified, why wouldn’t a chatbot be the designated go-to ‘person’ for these kinds of tasks?

In other words, introducing chatbots will be just another messaging channel with a helpful colleague!

Another perk of chatbots running on messaging platforms is that automated help is easier to access than ever before. Before chatbots existed, users had to memorize dozens or even hundreds of different commands to trigger a computational action. Now, it’s all done through conversation with the chatbot.

Starting to use a chatbot requires no prior experience or training — it is designed to cater to everyone, with exactly the same level of service. This is also one of the reasons why the majority of organizations are now utilizing the power of chatbots: they are incredibly easy to adopt.

If your chatbot platform has a rich response database and uses advanced NLP technology to identify exactly what users are looking for, you will naturally have an easier time solving problems and resolving unclear issues within the company.
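To make this concrete, here is a minimal, hypothetical sketch of intent matching. Production platforms use trained NLP models; this toy version merely scores keyword overlap, and the intent names and keyword lists are invented for illustration.

```python
# Minimal sketch of intent matching for a workplace chatbot.
# Real platforms use trained NLP models; this scores token overlap.

INTENTS = {
    "pto_request": "take day off vacation leave pto tomorrow",
    "sales_target": "sales target quota number quarter",
    "deadline": "deadline due date project when",
}

def match_intent(question: str) -> str:
    tokens = set(question.lower().replace("?", "").split())
    scores = {
        name: len(tokens & set(keywords.split()))
        for name, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(match_intent("Can I take tomorrow off?"))  # → pto_request
```

The fallback branch matters in practice: when no intent scores above zero, a real assistant would hand the question to a human rather than guess.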

Due to these dilemmas and problems constantly cropping up, there is more pressure on intra-company support teams than ever before. According to Zendesk, internal IT teams receive almost 500 tickets monthly on average. Imagine the savings you can achieve in work hours and department size if you could reduce this number by even 10%.

This is where chatbots come in. We said to imagine the savings, but chatbot technology is already developed enough that you no longer have to imagine: even if chatbots solve only 10% of support tickets, you’ll see substantial savings at a very low cost.

When we say employee communication, we’re referring to two things:

1) how employees communicate with each other

2) how employees communicate with the company

Chatbots have an impact in both of these scenarios.

In the first case, reducing redundant questions lifts a great deal of the communication burden from employees, enabling more productive, collaborative, and creative exchanges.

In the employee-to-company case, questions like “What are my sales targets?”, “When is the deadline for this project?”, or “Can I take tomorrow off?” can be answered by a chatbot instead of a superior within the company.

In line with the previous point, employee-company communication can also be ameliorated with a chatbot. For example, if there’s a particular event or situation that’s receiving a lot of attention and similar questions directed to management or HR, the company can set up a one-topic-only chatbot that will clarify any issues.

For example, if your company is preparing for a merger with another company, it can spark an avalanche of similar questions, like “Will my job role change?”, “Will we move to another location?”, or “Are we changing up teams?”. Instead of burdening management or other departments with answering these FAQs, a simple chatbot will do.
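As a sketch, such a one-topic bot can be as simple as a lookup over canned answers with an escalation fallback. The questions come from the merger example above, while the answers and the HR hand-off are placeholders invented for illustration.

```python
# Minimal single-topic FAQ bot for a merger announcement.
# Unmatched questions are escalated instead of answered.

MERGER_FAQ = {
    "will my job role change": "No role changes are planned; see the intranet announcement.",
    "will we move to another location": "Both offices stay open for now.",
    "are we changing up teams": "Team structure will be reviewed after the merger closes.",
}

def answer(question: str) -> str:
    # Normalize: lowercase, drop surrounding whitespace and a trailing "?"
    key = question.lower().strip().rstrip("?").strip()
    return MERGER_FAQ.get(key, "I don't have that answer yet, forwarding your question to HR.")

print(answer("Will we move to another location?"))
# → Both offices stay open for now.
```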

One of the challenges many companies face nowadays is multiculturalism and multilingualism in the workplace. Due to globalization and higher mobility, it’s not unusual for companies to hire and host employees who don’t speak the native language of the country they are working in.

Traditionally, this problem was solved with on-site bilingual or multilingual colleagues who help these employees find their way around, but there is a lot of potential for chatbots in this area as well.

“Setting up a multilingual chatbot can be a lifesaver for non-native employees. It’s also a huge step that was necessary if we wanted to continue to grow in the same way as if we were all speaking the same language,” says Yulia Berekova, a Russian language translation expert at PickWriters.

The modern workplace is a paradox: many things are simplified and automated, but the high number of apps and tools makes working more hectic and disorganized than ever before. To tackle this, chatbots can help by integrating several apps and offering an all-in-one platform, which employees can access instead of signing into dozens of different apps.

For example, if a company uses several different applications for the same thing (like different analytical tools for marketing), it can be very hard to eliminate the confusion that stems from different platforms, results and findings.

If you utilize a chatbot enterprise assistant, employees will always know they have a go-to place for any type of issue. However, you will have to integrate your chatbot (or at least its response database) with all the applications your company is using, which can be quite time-consuming.
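One common integration pattern (a sketch, not any particular vendor’s API) is a registry that routes each query to a handler for the relevant back-end application. The handlers below are stand-ins for real API calls.

```python
# Sketch of a chatbot front end dispatching queries to integrated apps.
# In practice each handler would wrap a real application's API.

HANDLERS = {}

def integration(topic):
    """Decorator registering a handler for one back-end application."""
    def register(fn):
        HANDLERS[topic] = fn
        return fn
    return register

@integration("analytics")
def analytics(query):
    return f"[analytics] report for: {query}"

@integration("hr")
def hr(query):
    return f"[HR system] answer for: {query}"

def dispatch(topic, query):
    handler = HANDLERS.get(topic)
    return handler(query) if handler else f"No integration for '{topic}' yet."

print(dispatch("hr", "remaining vacation days"))
# → [HR system] answer for: remaining vacation days
```

The registry makes the time cost the article mentions visible: every new application means writing and maintaining one more handler.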

In all the previous arguments, we illustrated some of the benefits that can be attained by employing enterprise chatbot assistants within a company. However, the question of whether there are possible downsides still remains.

Chatbots definitely have the potential to reduce or even eliminate human error, but they are still built by humans, so mistakes can happen. To guard against this, employees should maintain a critical approach and double-check what they learn from a chatbot, especially when the issue is sensitive or time-critical.

Finally, there are possible downsides in the decrease of employee-to-employee communication. We’ve mentioned that a chatbot has the potential to decrease the need for question-asking among team members, but it’s quite unclear whether this can have a negative impact on their team dynamic.

Conclusion

Just like any other technological advancement, chatbots take some getting used to, but there is a variety of benefits when an enterprise decides to use them. Next to financial benefits in terms of saving work hours and financial resources, there is also a potential for improved productivity, better support, and a higher degree of integration with other technology.

If you’re still not implementing chatbots in your day-to-day business activities, consider some of the benefits we have outlined above and stay ahead of the curve!

Author: Elisa Abbott

Elisa Abbott completed a degree in Computer Science. She closely collaborated with many small business owners which brought her many insights into various industries. Elisa aims to provide excellent quality content to her readers and clients. In her free time, she loves watching movies in different languages.

Source: https://chatbotslife.com/how-enterprise-work-assistants-are-changing-workplace-communication-1068903dec68?source=rss—-a49517e4c30b—4


Raquel Urtasun’s Waabi Autonomous Vehicle Software Company is Launched   


Waabi, the autonomous driving software company recently launched by Raquel Urtasun, will initially focus on the trucking industry. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor  

Raquel Urtasun hit the ground running as an entrepreneur on June 8, with the announcement of her autonomous driving software company Waabi, complete with $83.5 million in backing.  

Raquel Urtasun, Founder and CEO, Waabi

Urtasun has a long track record as a computer scientist, especially working to apply AI to self-driving car software. Uber hired her in May 2017 to lead a research team based in Toronto for the company’s self-driving car program. (See AI Trends, June 29, 2018) 

“Self-driving is one of the most exciting and important technologies of our generation. Once solved at scale, it will change the world as we know it,” stated Urtasun in the Waabi launch press release. “Waabi is the culmination of my life’s work to bring commercially viable self-driving technology to society and I’m honoured to be joined by a team of extraordinary scientists, engineers and technologists who are equally committed to executing on this bold vision.”  

The Waabi launch was greeted with some skepticism, given how the self-driving car industry is still working to get off the ground. But Urtasun knows what she’s doing.  

The latest financing round was led by Khosla Ventures, with additional participation from Urtasun’s former employer, Uber, and Aurora, the AV startup that ended up acquiring Uber ATG in a deal last year, according to an account in The Verge. Money was also raised from 8VC, Radical Ventures, Omers Ventures, BDC, AI luminaries Geoffrey Hinton, Fei-Fei Li, Pieter Abbeel, Sanja Fidler, and others, the report said.  

Waabi will initially focus on the trucking industry, offering its software to automate driving on commercial delivery routes. One reason is that the industry has a shortage of truck drivers. Another is that highways are simpler than city streets for autonomous vehicles to navigate.   

Waabi’s technical approach will lean heavily on simulation, using techniques Urtasun has developed in her research. The company’s simulation approach will reduce the need for the miles of testing on real roads and highways that autonomous driving competitors have logged.   

“For us in simulation, we can test the entire system,” Urtasun stated to The Verge.  “We can train an entire system to learn in simulation, and we can produce the simulations with an incredible level of fidelity, such that we can really correlate what happens in simulation with what is happening in the real world.”  

To have an autonomous vehicle startup founded by a woman who developed the technology and is the CEO is unusual; Urtasun hopes to inspire other women to join the industry. “This is a field that is very dominated by white dudes,” she said. “The way to build integrating knowledge is to build technology with diverse perspectives, because by challenging each other, we build better things.”  

Earlier Career at Uber, Toyota 

Urtasun started at Uber in May 2017, to pursue her work on machine perception for self-driving cars. The work entails machine learning, computer vision, robotics, and remote sensing. Before joining the University of Toronto, Urtasun worked at the Toyota Technological Institute at Chicago. Uber committed to hiring dozens of researchers and made a multi-year, multi-million dollar commitment to Toronto’s Vector Institute, which Urtasun co-founded. 

Urtasun has argued that self-driving vehicles need to wean themselves off Lidar (Light Detection and Ranging), a remote sensing method that uses a pulsed laser to measure variable distances. Her research has shown that, in some cases, vehicles can obtain similar 3D data about the world from ordinary cameras, which are far cheaper than Lidar units costing thousands of dollars. 

“If you want to build a reliable self-driving car right now, we should be using all possible sensors,” Urtasun told Wired in an interview published in November 2017. “Longer term, the question is how can we build a fleet of self-driving cars that are not expensive.” 

Ben Dickson, Founder and Editor, TechTalks

The company’s technical “AI-first approach” implies that they will put more emphasis on better machine learning models and less on complementary technologies including Lidar, radar, and mapping data, according to an account in TechTalks. “The benefit of having a software-heavy stack is the very low costs of updating the technology. And there will be a lot of updating in the coming years,” stated Ben Dickson, author of the report and founder of TechTalks.  

Urtasun described the AI system the company uses as a “family of algorithms,” in an account of the launch in TechCrunch. Its closed-loop simulation environment is a replacement for sending real cars on real roads.  

“I’m a bit on the fence on the simulation component,” Dickson stated. “Most self-driving car companies are using simulations as part of the training regime of their deep learning models. But creating simulation environments that are exact replications of the real world is virtually impossible, which is why self-driving car companies continue to use heavy road testing.”  

Waymo Leads in Simulated and Real Testing Miles 

Waymo has at least 20 billion miles of simulated driving to go with its 20 million miles of real-road testing, a record in the industry, according to Dickson. To gain more insight into Waabi’s technology, he looked at some of Urtasun’s recent academic work at the University of Toronto. Her name appears on many papers about autonomous driving; one, uploaded on the arXiv preprint server in January, caught Dickson’s attention.  

Titled “MP3: A Unified Model to Map, Perceive, Predict and Plan,” the paper discusses an approach to self-driving close to the description in Waabi’s launch press release. 

The researchers describe MP3 as “an end-to-end approach to mapless driving that is interpretable, does not incur any information loss, and reasons about uncertainty in the intermediate representations.” In the paper, researchers also discuss the use of “probabilistic spatial layers to model the static and dynamic parts of the environment.” 

MP3 is end-to-end trainable. It uses Lidar input to create scene representations, predict future states and plan trajectories. “The machine learning model obviates the need for finely detailed mapping data that companies like Waymo use in their self-driving vehicles,” Dickson stated. 

Urtasun posted a video, A Future with Self-Driving Vehicles,  on her YouTube channel that provides a brief explanation of how MP3 works. Some researchers commented that it is a clever combination of existing techniques. “There’s also a sizable gap between academic AI research and applied AI,” Dickson stated. How the Waabi model performs in practical settings will be interesting to watch.   

Read the source articles and information in AI Trends, the Waabi launch press release, in The Verge, in TechTalks, in TechCrunch and in a YouTube video, A Future with Self-Driving Vehicles.


Source: https://www.aitrends.com/selfdrivingcars/raquel-urtasuns-waabi-autonomous-vehicle-software-company-is-launched/



Market for Emotion Recognition Projected to Grow as Some Question Science 


Emotion recognition software is growing in use and is being questioned for its scientific foundation at the same time. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

The emotion recognition software segment is projected to grow dramatically in coming years, spelling success for companies that have established a beachhead in the market, while causing some who are skeptical about its accuracy and fairness to raise red flags.  

The global emotion detection and recognition market is projected to grow to $37.1 billion by 2026, up from an estimated $19.5 billion in 2020, according to a recent report from MarketsandMarkets. North America is home to the largest market.  

Software suppliers covered in the report include: NEC Global (Japan), IBM (US), Intel (US), Microsoft (US), Apple (US), Gesturetek (Canada), Noldus Technology (Netherlands), Google (US), Tobii (Sweden), Cognitec Systems (Germany), Cipia Vision Ltd (Formerly Eyesight Technologies) (Israel), iMotions (Denmark), Numenta (US), Elliptic Labs (Norway), Kairos (US), PointGrab (US), Affectiva (US), nViso (Switzerland), Beyond Verbal (Israel), Sightcorp (Holland), Crowd Emotion (UK), Eyeris (US), Sentiance (Belgium), Sony Depthsense (Belgium), Ayonix (Japan), and Pyreos (UK). 

Among the users of emotion recognition software today are auto manufacturers, who use it to detect drowsy drivers, and to identify whether the driver is engaged or distracted. 

Some question whether emotion recognition software is effective, and whether its use is ethical. One research study recently summarized in Sage journals is examining the assumption that facial expressions are a reliable indicator of emotional state.  

Lisa Feldman Barrett, professor of psychology, Northeastern University

“How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation,” stated the report, from a team of researchers led by Lisa Feldman Barrett, of Northeastern University, Mass General Hospital and Harvard Medical School.   

The research team is suggesting that further study is needed. “Our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life,” the report stated. 

Technology companies are spending millions on projects to read emotions from faces. “A more accurate description, however, is that such technology detects facial movements, not emotional expressions,” the report authors stated.  

Affectiva to be Acquired for $73.5 Million by Smart Eye of Sweden 

Recent beneficiaries of the popularity of emotion recognition software are the founders of Affectiva, which recently reached an agreement to be acquired by Smart Eye, a Swedish company providing driver monitoring systems for about a dozen automakers, for $73.5 million in cash and stock. 

Affectiva was spun out of MIT in 2009 by founders Rana el Kaliouby, who had been CEO, and Rosalind Picard, who heads the Affective Computing group at MIT. Kaliouby wrote about her experience founding Affectiva in her book, Girl Decoded. 

“As we watched the driver monitoring system category evolve into Interior Sensing, monitoring the whole cabin, we quickly recognized Affectiva as a major player to watch,” stated Martin Krantz, CEO and founder of Smart Eye, in a press release. “Affectiva’s pioneering work in establishing the field of Emotion AI has served as a powerful platform for bringing this technology to market at scale,” he stated.  

Affectiva CEO Kaliouby stated, “Not only are our technologies very complementary, so are our values, our teams, our culture, and perhaps most importantly, our vision for the future.”  

Kate Crawford, senior principal researcher, Microsoft Research

Some have called for government regulation of emotion intelligence software. Kate Crawford, senior principal researcher at Microsoft Research New York, and author of the book Atlas of AI (Yale, 2021), wrote recently in Nature, “We can no longer allow emotion-recognition technologies to go unregulated. It is time for legislative protection from unproven uses of these tools in all domains—education, health care, employment, and criminal justice.”   

The reason is that companies are selling software that affects the opportunities available to individuals, “without clearly documented, independently-audited evidence of effectiveness,” Crawford stated. This includes job applicants being judged on facial expressions or vocal tones, and students flagged at school because their faces may seem angry.  

The science behind emotion recognition is increasingly being questioned. A review of 1,000 studies found the science behind tying facial expressions to emotions is not universal, according to a recent account in OneZero. The researchers found people made the expected facial expression to match their emotional state only 20% to 30% of the time.   

Startups including Find Solution AI base their emotion recognition technology on the work of Paul Ekman, a psychologist who published on the similarities between facial expressions around the world, popularizing the notion of “seven universal emotions.”   

The work has been challenged in the real world. A TSA program that trained agents to spot terrorists using Ekman’s work found little scientific basis, did not result in arrests, and fueled racial profiling, according to filings from the Government Accountability Office and the ACLU.   

Dr. Barrett’s team of researchers concluded, “The scientific path forward begins with the explicit acknowledgment that we know much less about emotional expressions and emotion perception than we thought we did.”  

Read the source articles and information from MarketsandMarkets, in Sage journals, in a press release from Smart Eye, in Nature and in OneZero. 


Source: https://www.aitrends.com/emotion-recognition/market-for-emotion-recognition-projected-to-grow-as-some-question-science/



Generic AI Models Save Time; Prebuilt AI Models for Verticals Save More 


Like prefabricated housing, prebuilt AI models are emerging, some targeting vertical industries such as oil and gas with applications including predictive maintenance. (Credit: Getty Images)

By AI Trends Staff  

Generic AI models save time by packaging up a percentage of the work involved in launching an AI application and offering it for reuse. A prime example is Vision AI from Google Cloud, which offers access to pre-built models for detecting emotion and understanding text.  

Some emerging companies aim to build on this trend by supplying pre-built models developed for specific vertical industries, to go beyond the advantages of generic pre-built models for any industry.  

DJ Das, founder and CEO, ThirdEye Data

“While effective in some use cases, these solutions do not suit industry-specific needs right out of the box. Organizations that seek the most accurate results from their AI projects will simply have to turn to industry-specific models,” stated DJ Das, founder and CEO of ThirdEye Data, in a recent account in TechCrunch. ThirdEye builds AI applications for enterprises. 

Companies have options for generating industry-specific results. “One would be to adopt a hybrid approach—taking an open-source generic AI model and training it further to align with the business’ specific needs,” Das stated. “Companies could also look to third-party vendors, such as IBM or C3, and access a complete solution right off the shelf. Or—if they really needed to—data science teams could build their own models in-house, from scratch.”  

In a recent engagement, ThirdEye worked with a utility company to detect defects in electric utility poles by using AI to analyze thousands of images. “We started off using Google Vision API and found that it was unable to produce our desired results,” which was to reach 90% or better accuracy, Das stated. For example, the generic Google Vision models did not identify the nonstandard fonts and varied background colors used on utility pole tags. 

“So, we took base computer vision models from TensorFlow and optimized them to the utility company’s precise needs,” Das stated. The team spent two months developing AI models to detect and decipher tags on the electric poles, and another two months training the models. “The results are displaying accuracy levels of over 90%,” Das stated.  
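The hybrid approach Das describes (keep a pretrained base fixed and train only a task-specific head on company data) can be illustrated with a self-contained toy. NumPy stands in for a real TensorFlow backbone here, and the data, labels, and dimensions are all invented:

```python
import numpy as np

# Toy sketch of the hybrid approach: a frozen "pretrained" feature
# extractor plus a small trainable head fitted to company-specific data.

rng = np.random.default_rng(0)
W_base = rng.normal(size=(64, 16)) / 8.0   # frozen base weights (never updated)
X = rng.normal(size=(200, 64))             # stand-in for company images
y = (X[:, 0] > 0).astype(float)            # stand-in labels, e.g. "tag readable?"

def features(x):
    return np.tanh(x @ W_base)             # frozen feature extractor

def loss(w):
    p = 1.0 / (1.0 + np.exp(-features(X) @ w))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

w = np.zeros(16)                           # only the head is trainable
before = loss(w)
for _ in range(300):                       # plain logistic-regression updates
    p = 1.0 / (1.0 + np.exp(-features(X) @ w))
    w -= 0.1 * features(X).T @ (p - y) / len(y)
after = loss(w)
print(f"head-only training loss: {before:.3f} -> {after:.3f}")
```

Training only the head is fast because the base never receives gradient updates; reaching ThirdEye’s reported 90%+ accuracy on real imagery would additionally require fine-tuning the base layers and months of work, as Das describes.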

Sees Need for Industry-Specific Pre-Trained Models 

A similar sentiment was expressed by the CEO and founder of CrowdAnalytix, in a recent account in Forbes. “There is a catch to Google Vision, just as there is to all generic AI: These generic models know nothing about the particular industry or organization using them,” stated Divyabh Mishra.  

Generic AI models are trained on general sets of data, often publicly accessible, and applicable to many use cases across industries. “The result is AI that is undeniably powerful, but extremely limited in its usefulness to businesses,” he stated.  

A large library of narrowly trained AI applications working in specific vertical industries is needed. “We need models pre-trained on large datasets for relatively specific use cases: an AI marketplace of business-specific solutions that can be implemented directly by the consumer, without a huge data science team and without having to deal with additional training,” Mishra stated.  

CrowdAnalytix works in a crowdsource model, with a community of “solvers” numbering over 25,000 to work on projects, which the company calls “competitions.” Its website states, “We leverage our community to create a host of pre-built solutions that are then tuned and customized for each client.”  

New York Times Working with Google Cloud to Digitize its Photo Archive  

In an example rooted in Google’s investment in prebuilt models, The New York Times is working with Google Cloud on a project to digitize its photo archive. For over 100 years, The Times has archived photos in file cabinets three levels below the street near their offices in Times Square. The archive now holds between five and seven million photos, according to an account on the blog of Google Cloud.  

“The morgue is a treasure trove of perishable documents that are a priceless chronicle of not just The Times’s history, but of more than a century of global events that have shaped our modern world,” stated Nick Rockwell, chief technology officer, The New York Times.  


“A working asset management system must allow the users to be able to browse and search for photos easily,” stated Sam Greenfield, technical director, Cloud Office of the CTO for Google, author of the post. Google brought its AI tech and expertise to the table, to create a system useful to the Times photo editors. The system scans the photo image and all the text information on the back of the photo, which enables the system to further classify the photo. A photo of Penn Station gets put into “travel” and “bus and rail” classifications, for instance.  
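The pipeline pairs a scan of the photo with the text found on its back, then uses that text to assign archive categories. As a rough illustration of the classification step only, here is a minimal keyword-based tagger; the category names and keyword lists are invented for illustration, and the actual Times/Google system uses trained models, not keyword matching:

```python
# Minimal sketch: map OCR'd back-of-photo text to archive categories.
# Keyword lists are hypothetical; the real system uses trained models.

CATEGORY_KEYWORDS = {
    "travel": ["station", "airport", "harbor", "terminal"],
    "bus and rail": ["station", "railroad", "subway", "bus"],
    "politics": ["senate", "election", "mayor"],
}

def classify_caption(text: str) -> list[str]:
    """Return every category whose keywords appear in the OCR'd text."""
    lowered = text.lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(word in lowered for word in keywords)
    ]

# A photo of Penn Station lands in both "travel" and "bus and rail",
# mirroring the example in the article.
print(classify_caption("Crowds at Penn Station, 1942"))
```

Note that one caption can match several categories, which is the behavior the article describes for the Penn Station photo.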

C3.ai Offering Prebuilt AI Applications for Vertical Industries  

C3.ai, the AI software company founded by Tom Siebel, who earlier founded the customer relationship management software supplier Siebel Systems, is replicating the packaged software model for AI. Its C3 AI Suite offers prebuilt AI applications that can be configured for uses including predictive maintenance, fraud detection, energy management, and customer engagement.  

Working with Baker Hughes, an industrial services company, C3 developed the BHC3 AI Suite targeting the oil and gas industry with predictive maintenance use cases, according to a customer story on the C3 website. Within months, the team deployed predictive maintenance applications at scale. “These applications notify instrument engineers when asset components are behaving abnormally,” stated the account.   
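The BHC3 applications alert engineers when components are “behaving abnormally.” A common baseline for that kind of alert is a rolling z-score over sensor readings; the sketch below is an illustrative stand-in for that idea, not C3’s actual algorithm:

```python
import statistics

def abnormal_readings(values, window=20, threshold=3.0):
    """Flag indices where a reading deviates from the trailing-window
    mean by more than `threshold` standard deviations.

    Illustrative rolling z-score baseline only -- not C3's method.
    """
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(values[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# Steady sensor trace with one injected spike at index 25.
trace = [10.0 + 0.1 * (i % 3) for i in range(40)]
trace[25] = 25.0
print(abnormal_readings(trace))
```

In practice an alert like this would be one input to a richer model, but the trailing-window comparison captures the core notion of “abnormal relative to recent behavior.”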

“The combination of our data science expertise and the software development expertise that c3.ai brings is really powerful,” stated Dan Jeavons, who is general manager of data science for Shell Oil.   

The market is setting up well for software suppliers and consultants with expertise in applying prebuilt AI models to specific vertical industries. 

Read the source articles and information in TechCrunch, in Forbes, on the blog of Google Cloud, and from a customer story on the C3 website. 


Source: https://www.aitrends.com/software-development-2/generic-ai-models-save-time-prebuilt-ai-models-for-verticals-save-more/
