Business Process Outsourcing (BPO) Automation

What is a BPO?

In general, the idea of outsourcing is to use outside vendors to carry out standard business functions that are not core to the business. This simplifies the management task, allowing the company to retain only core staff focused on the high-value activities of growing the business and researching new opportunities, while regular, well-understood operations, like manufacturing and workflow management, are delegated to external vendors.

Automation in Business Process Outsourcing (BPOs)

Overseas vendors are often favored because they bring competitive advantages to the combined enterprise, such as lower labor costs for vertically specialized workers, multilingual skills, overnight operations, and better disaster-recovery response thanks to geographically distributed operations.

The industries that need to process a large daily volume of paperwork spend much effort and money managing workflows. A workflow consists of a sequence of administrative checkpoints and actions, each performed by a different worker, like the steps involved in paying an invoice or approving a health insurance claim.



The manual, repetitive nature of the tasks embedded in a workflow often leads to human error and data loss, causing delays and re-work. This is magnified in complex businesses operating at large scale. All of the above makes workflow management a good target for automation via software: computers perform repetitive tasks without introducing the random errors that come from human attention fatigue.

An example of a repetitive task is data entry. In the health insurance claim process, the workflow begins with a staff member uploading scanned images of paper documents to cloud storage. The next step involves a worker who looks at the document image, reads it, interprets it well enough to identify the relevant pieces of information, and types them into a system, where they are stored as numeric and text fields for further processing by subsequent steps of the workflow.

The data entry task can be automated with the help of Optical Character Recognition (OCR) and Information Extraction (IE) technologies (see [1] for an in-depth technical example), eliminating the risk of human error.
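
As a minimal illustration of the OCR half of such a pipeline, the sketch below runs the open-source Tesseract engine through the pytesseract wrapper, with a purely illustrative regular expression standing in for a trained IE model:

```python
# Minimal sketch: OCR a scanned claim image, then pull out one field.
# Assumes Tesseract is installed locally; the claim-number regex is
# purely illustrative, not a production-grade extraction model.
import re

import pytesseract
from PIL import Image

def extract_claim_number(image_path: str) -> str | None:
    """OCR the scanned page and look for a claim-number pattern."""
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"Claim\s*(?:No\.?|Number)[:\s]*([A-Z0-9-]+)",
                      text, re.IGNORECASE)
    return match.group(1) if match else None

print(extract_claim_number("scanned_claim.jpg"))
```

A real IE model replaces the regex with learned field extraction, but the overall shape, image in, structured fields out, stays the same.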


Looking for an AI-based solution to enable automation in BPOs? Give Nanonets a spin and put all document-related activities in Business Process Outsourcing on autopilot!


BPO Fulfillment Services

Since the early 1990s, supply chains have been run to maximize their efficiency, driving the concentration of specialized services in providers that offer economies of scale. For example, the iPhone supply chain comprises vendors in up to 50 countries. This globalization of the manufacturing networks has a parallel in the business processing world, as companies have learned to rely on BPO fulfillment vendors from across the globe.

Challenges for Business Process Outsourcing (BPO) Industry

BPO is not only indispensable to a small company that wants to capitalize on a sudden surge in demand for its products; it also makes sense for most companies for activities like telemarketing, which is why many BPO providers offer services in lead generation, sales, and customer service. Although BPO fulfillment has already become a multi-billion-dollar industry, its growth may accelerate with the adoption of AI technologies.

Impact of Artificial Intelligence on BPO Services

Onshore AI-powered solutions now present viable alternatives to traditional offshore BPO services, offering:

  • equivalent quality with superior geographic independence
  • labor cost efficiency
  • processing speed
  • scalability
  • accuracy
  • a smaller carbon footprint

Considering that only 10 years ago AI was not even in this race, many observers foresee that in the near future most BPOs will transition to partially or fully AI-powered offerings.

(source: https://enterprisersproject.com/article/2020/1/rpa-robotic-process-automation-5-lessons-before-start)

A Brief History of AI

For the last 20 years, AI-powered retail and marketing have enjoyed great success. Mining actionable insights from customer behaviour data captured all over the internet has allowed companies to maximize the ROI of their retail operations and marketing investments.

The data that defined AI in the decade of the 2000s was tabular, meaning data neatly organized in columns and rows. That explains why the first wave of commercial AI was limited to processing spreadsheet-like data (just bigger). It was the golden era of:

  1. recommender systems based on collaborative filtering algorithms
  2. search portals powered by graph algorithms
  3. sentiment and spam classifiers built on n-grams

In the next decade, the 2010s, commercial-grade AI broke the tabular-data barrier, beginning to process data in the form of sound waves and images, and to understand simple nuances in text and conversation.

This was enabled by the development of deep neural networks, a new breed of bold and sophisticated machine learning algorithms that power most of today's AI applications and that, given enough data and computing resources, do everything better than the previous generation, plus hear, see, talk, translate and even imagine things.

All this progress was based on machine learning systems that codify their knowledge into millions (sometimes billions) of numeric parameters, and later make their decisions by combining those parameters through millions of algebraic operations. This makes it extremely hard, or practically impossible, for a human to track and understand how a particular decision was made. That is why these models have been characterized as black boxes, and the need to understand them has motivated the study behind a new buzzword: Explainable AI, or XAI for short.

(source: https://www.kdnuggets.com/2019/12/googles-new-explainable-ai-service.html)
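
As one concrete flavor of XAI, the sketch below applies permutation importance, a common model-agnostic technique (chosen here for illustration, not named in the article above), to see which inputs a black-box classifier actually relies on:

```python
# Sketch: peek inside a "black box" via permutation importance.
# Shuffle each feature and measure how much the score drops; large
# drops mark the features the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
# Print the five most influential features.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```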




You Have the Right to an Explanation

With AI enjoying more attention from academia and investors than ever before, it will continue to improve its human-like abilities. Now it is time for it to develop enough sense of responsibility and civic duty before it is put in charge of deciding who gets a loan or advising on which patients can be discharged from a hospital.

The main concern is that decisions about human subjects become hidden inside complex AI-powered processes that no one cares to understand, as long as the decision appears to be optimal, even if it is based on racial bias or other socially damaging criteria.

In this regard, one of the most active areas of AI research is developing tools that allow humans to interpret model decisions with the same clarity with which human-made decisions within an organization can be analyzed.

This is known as the right to explanation, and it has a sensible and straightforward expression in countries like France, which updated a 1970s code aimed at ensuring transparency in decisions made by government functionaries by simply extending it to AI-made decisions. These explanations should include the following:

  • the degree and mode of contribution of the algorithmic processing to the decision-making;
  • the data processed and its source;
  • the processing parameters and, where appropriate, their weighting, applied to the situation of the person concerned;
  • the operations carried out by the processing.

(source: https://www.darpa.mil/program/explainable-artificial-intelligence)

A more practical concern raised by AI-based BPO is liability: who is ultimately responsible for a failure of the AI system?

Finally, there is the question of intellectual property. Since AI systems can learn from experience, who owns the improved knowledge that the AI system has distilled from the data produced by the BPO customer's operation?

These concerns have clear implications for AI-powered BPO, which may need to address them in service contracts.




Examples of BPO Offerings Incorporating AI

Although AI is currently unable to match humans in the mental flexibility to deal with new situations, or even to reach a child's level of understanding of the world we live in, it has demonstrated the ability to perform well in narrowly defined knowledge domains, like the ones that make good candidates for BPO.

Document Management

A large vertical of the BPO industry is Document Management. This sector is undergoing a massive transformation as large companies in document-centric industries standardize and document their internal processes, getting ready to outsource them in order to focus on their core competencies.

This trend is driving traditional document management service providers to develop a more sophisticated service offering, including business verticals like:

  • Invoice Processing
  • Digitization of Healthcare Records
  • Claims Processing
  • Bank Statement Ingestion
  • Loan Application Processing

As these BPO providers evolve beyond their basic services of document scanning and archiving, with the occasional reporting and printing, towards higher value-added services, they need to develop an integrated stack of technologies, comprising high-speed and high-volume document scanners, advanced document capture, data recognition and workflow management software.

AI-powered data extraction from scanned images is the key that opens the door to high-value-added services like robotic process automation (RPA). In the image below we see how an AI system sees an image: to the AI model, the image is represented by a group of text areas and the relative distances between those areas. With this information, the system applies statistical inference to arrive at the most likely meaning of each text and number in the image.

(source: https://nanonets.com/blog/information-extraction-graph-convolutional-networks/)
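
Here is a toy version of that representation, with invented text areas and a simple nearest-neighbor heuristic standing in for a trained inference model:

```python
# Toy version of the representation described above: each OCR text area
# becomes a node with its box center, and edges carry relative distances.
# The areas and the nearest-right-neighbor heuristic are invented for
# illustration; a real model learns these relations statistically.
import numpy as np

# (text, x_center, y_center) triples as an OCR step might emit them.
areas = [("Invoice No.", 120, 40), ("INV-0042", 260, 40),
         ("Total", 120, 300), ("$512.00", 260, 300)]

centers = np.array([[x, y] for _, x, y in areas], dtype=float)
# Pairwise Euclidean distances between all text areas.
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)

# A heuristic an inference model might learn: the value belonging to a
# label is usually its nearest neighboring text area.
label_idx = 0  # "Invoice No."
nearest = min((d, j) for j, d in enumerate(dists[label_idx]) if j != label_idx)
print(areas[nearest[1]][0])  # -> INV-0042
```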

Consuming an ML Service Through an API

The above discussion explains the need for, and the value of, integrating AI services with document processing workflows. In this section, I discuss the integration of a cloud-based OCR and data recognition service. This service extracts the relevant information from a scanned form and returns it in a computer-readable format to be further processed by workflow management software.
This AI component comprises two workflows: the first trains a machine learning model with examples provided by the model developer, and the second calls the trained model to extract information from new documents.
In the interest of the reader's time, I only outline the main steps of a typical process; a minimal code sketch follows each workflow.

Training Workflow

  • Step 1: Document Scanning. This process converts a physical paper document into an image file, in a standard format like JPEG, PDF, etc.
  • Step 2: OCR processing. This process recognizes areas of the document containing letters and digits and outputs their contents as a list of text segments, together with their bounding box coordinates.
  • Step 3: Manual annotation of the images. This step is performed by a human with the help of a special editor that allows the annotator to select an area of text and assign a tag identifying the type of information it contains, for example the date of purchase on an invoice.
  • Step 4: Upload the examples to the cloud. This step is performed by calling an API and makes the training examples available to the AI cloud software so they can be used in the next step to train the model.
  • Step 5: Train an Information Extraction Model. This process is triggered by calling an API. After training is completed, the model is available for use in the production workflow.
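
Below is a minimal, hypothetical sketch of steps 4 and 5 against a generic REST API. The endpoint URLs, payload fields, and API key are invented for illustration; a real OCR/IE service defines its own contract.

```python
# Hypothetical sketch of steps 4 and 5: upload an annotated example and
# trigger training. The endpoint URLs, payload fields and API key are
# invented for illustration; a real service defines its own contract.
import requests

API_KEY = "your-api-key"
BASE = "https://api.example-ocr.com/v1"

# Step 4: upload one scanned page plus its human-made annotations.
with open("invoice_001.jpg", "rb") as f:
    requests.post(
        f"{BASE}/models/invoice-model/examples",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        data={"annotations": '[{"label": "purchase_date", "text": "2021-05-04"}]'},
    ).raise_for_status()

# Step 5: kick off training; the model becomes available when it finishes.
requests.post(
    f"{BASE}/models/invoice-model/train",
    headers={"Authorization": f"Bearer {API_KEY}"},
).raise_for_status()
```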

Production Workflow

This is the workflow that produces useful results on the customer data. The first two steps are the same as in the training workflow; the difference starts at the third step, which in the language of machine learning is known as "prediction" (although its meaning is closer to "making educated guesses").

  • Step 3 (predict): Automatic Information Extraction. This step is performed in the cloud by the AI model, which goes over the OCR output and recognizes the numbers and text segments that are useful for further processing. The output can be tabular data, in a format that is easy to process by the software performing the next task in the workflow.
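
And a matching hypothetical sketch of the production call, reusing the invented endpoint conventions from the training sketch above:

```python
# Hypothetical sketch of the production call: send a new scanned page to
# the trained model and receive extracted fields. Same invented endpoint
# conventions as in the training sketch; the response format is assumed.
import requests

API_KEY = "your-api-key"
BASE = "https://api.example-ocr.com/v1"

with open("new_invoice.jpg", "rb") as f:
    response = requests.post(
        f"{BASE}/models/invoice-model/predict",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )
response.raise_for_status()

# Assumed output shape: a list of labeled fields ready for the next
# workflow step, e.g. {"label": "purchase_date", "text": "2021-06-01"}.
for field in response.json()["fields"]:
    print(field["label"], "->", field["text"])
```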



Other AI Opportunities in the BPO Industry

The opportunities to apply AI in the BPO industry usually fall in two large categories: robotic process automation (RPA) and chatbots.

What is RPA?

In the context of BPOs, robotic refers to technology that automatically makes decisions about finance and accounting data. In this category we can include a huge number of insight-mining services that are commonplace in most large companies but not yet affordable to all: customer personalization based on recommender systems, classifiers that approve loans or detect churn in customer accounts, text classifiers that categorize customer feedback as positive or negative, and even classifiers that detect bots and trolling on the company's social media. In parallel, there is the processing of data from sensors and the computation of analytic indicators that drive efficiencies in the supply chain.

(source: https://www.processmaker.com/blog/how-do-banks-benefit-from-robotic-process-automation-rpa/)

Conversational Agents

Generally known as "chatbots," conversational agents can be divided into two large categories: task-oriented agents and chatbots.

A chatbot just makes conversation; it can be found in social media, participating in blogs, where it is expected to express opinions on a wide range of topics it knows nothing about. Microsoft's famous Tay bot is a typical example of this category (although it did not have a happy ending, it was a great lesson in the subtleties of AI adoption).

On the other side of the spectrum we find task-oriented agents, whose only mission is to handle a practical task in a specific domain, as when your phone handles a restaurant booking or routes you to a destination.

With enough training, task-oriented agents can diligently handle the most common questions processed by telephone help desks and customer contact services. Such an agent can operate satisfactorily only in a narrow field of knowledge, and for that reason it is usually deployed as a first tier that answers the simple cases and routes everything else to a human specialist.

Even though the chatbot may not be able to answer a wide variety of questions (those that fall in the "long tail" of the distribution), just by answering the most frequent ones it has a valuable impact, reducing the number of calls that must be answered by humans.
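
To make the tiering concrete, here is an illustrative sketch of a first-tier router; the intent classifier and canned answers are stand-ins for trained components, not any particular product's API:

```python
# Illustrative first-tier routing: answer only when the intent classifier
# is confident, otherwise hand the thread to a human agent. The classifier
# and canned answers are invented stand-ins for trained components.
CONFIDENCE_THRESHOLD = 0.80

CANNED_ANSWERS = {
    "reset_password": "You can reset your password under account settings.",
    "opening_hours": "Our help desk is open 8am-8pm, Monday to Friday.",
}

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for a trained intent classifier."""
    if "password" in message.lower():
        return "reset_password", 0.93
    return "unknown", 0.40

def handle(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_ANSWERS:
        return CANNED_ANSWERS[intent]               # tier 1: bot answers
    return "Routing you to a human specialist..."   # everything else escalates

print(handle("I forgot my password"))
print(handle("My claim was denied, why?"))
```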

Conversational agents have come a long way, but they are still an area of research. There is progress in certain areas, with impressive AI designs published in scientific papers but very limited application in industry, due to the difficulty of acquiring high-quality datasets. The typical workflow of a conversational agent can be seen below.

(source: https://arxiv.org/abs/1703.01008)

The Data Trove

We have developed AI systems that are able to absorb knowledge specific to a field of business, and we have also developed sophisticated ways to represent that knowledge in a form that machines can process.

Now all we need is a dataset: a properly annotated and curated set of data, with enough examples and detail for the AI system to learn from. This proves to be one of the most important challenges, given that AI systems do not learn, as humans do, from just a few examples.

AI typically needs many examples of every little thing you want it to learn. If your data presents a large variety of cases, then you need several examples of each of those cases. And human language has tens of thousands of cases, called words, which can be combined to form a huge number of sentences and express an innumerable number of concepts. Sentences, in turn, can be combined to form conversations.

At this point you can see why acquiring conversational datasets is a challenge. Luckily for us, we don't need to train an AI system from scratch: thanks to a technique known as transfer learning, we can start from a system that already understands language, and all we need to teach it is the meaning of words in a vertical business.
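
As a hedged illustration of transfer learning, the sketch below uses the Hugging Face transformers library to fine-tune a pretrained language model on a tiny, invented set of vertical-domain examples; a real project would of course use a much larger labeled dataset.

```python
# A minimal transfer-learning sketch with Hugging Face transformers:
# start from a pretrained BERT and fine-tune a small classification head.
# The two-example "dataset" and label meanings are invented placeholders.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g. invoice query vs. claim query

texts = ["Please reissue my invoice", "What is the status of my claim?"]
labels = [0, 1]
encodings = tokenizer(texts, truncation=True, padding=True)

class TinyDataset(torch.utils.data.Dataset):
    """Wraps the tokenized examples in the interface Trainer expects."""
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in encodings.items()}
        item["labels"] = torch.tensor(labels[i])
        return item
    def __len__(self):
        return len(labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=TinyDataset(),
)
trainer.train()
```

Because the pretrained model already understands general language, only a small amount of domain-specific data is needed to adapt the classification head.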

The cost of conversational dataset development makes this type of AI system prohibitive for small companies to train; it remains affordable only to the largest data powerhouses. This is why some of the most operationally significant breakthroughs in conversational agent research consist not so much of model development as of the design of training mechanisms that make efficient use of the dataset.

(source: https://arxiv.org/abs/1703.01008)

This involves the use of simulators, which generate new combinations of the sentences in the dataset, effectively multiplying its size. The diagram above depicts a workflow used to train a goal-oriented conversational agent with the help of a rule-based simulator that combines sentences from a static dataset according to simple hard-coded rules.
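
A toy version of such a rule-based simulator, with invented templates and slot values, might look like this:

```python
# Toy rule-based user simulator: recombine slots from a small static
# dataset into new training utterances. Templates and slot values are
# invented; a real simulator follows a proper dialog-act schema.
import random

TEMPLATES = [
    "Book a table for {party_size} at {time}",
    "I need a reservation at {time} for {party_size} people",
]
SLOTS = {
    "party_size": ["two", "four", "six"],
    "time": ["7pm", "8:30pm", "noon"],
}

def simulate(n: int) -> list[str]:
    """Sample n synthetic user turns from the template grammar."""
    utterances = []
    for _ in range(n):
        template = random.choice(TEMPLATES)
        values = {slot: random.choice(opts) for slot, opts in SLOTS.items()}
        utterances.append(template.format(**values))
    return utterances

for line in simulate(3):
    print(line)
```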

The chart below shows the performance of an AI system learning to converse with a rule-based simulator. After a number of training episodes, the conversational agent catches up with and surpasses its rule-based trainer.

(source: Spoken Dialog System trained with user simulator)

Beyond Chatbots

Omnichannel call centers are a recent evolution of the traditional call center: services that manage customer communications across multiple channels, including emails, documents, voice calls and chat sessions.

A huge opportunity lies in harnessing the data collected along with the business processes to maximize process efficiency. For example, by extracting insights from customer communications, companies can personalize marketing and customer service, which has the potential to dramatically increase marketing ROI.

But in order to extract actionable insights from multichannel customer threads, it is not enough to store the media in a central repository. An AI stack is needed to extract readable text and to properly contextualize the communications, for example by detecting sentiment or emotional content.

This has motivated the integration of call center software with AI technologies like natural language processing (NLP) and voice analytics that examine vocal tone in audio or emotional cues in video.

Although most management leaders are only starting to figure out how to gain access to it, AI has the power to transform the heaps of disparate media generated by omnichannel call centers into a valuable trove of interpretable and actionable insights that enable the company leadership to apply data-driven management techniques.

For example, the company can undertake root-cause analysis of agent performance by engaging NLP algorithms to find out whether agents are empathetic to customers, follow call scripts, and observe company policies. These are actionable insights that can guide decisions about the training, hiring, and performance management of agents. The same data can be analyzed to make product and process improvement decisions that minimize support calls.




Conclusion

We have discussed AI integration in the BPO software stacks of different BPO sectors, including omnichannel call centers, document processing and workflow management.

The integration of AI is going to prove essential for BPO providers to continue developing high-value-added services that satisfy the evolving demands of their customers.

In response to this huge potential market, some AI companies specialize in document processing and NLP functionality delivered over SaaS platforms that are easy to consume through a simple cloud API.

Nanonets has perfected an OCR + IE stack and packaged it conveniently behind a high-performance service API, so that developers of workflow management software can take advantage of it without incurring the costs of building and maintaining this highly specialized stack of AI technologies.

Nanonets is committed to the continued development of its AI platform, with the added benefit of an active online community of users and a strong network of partners offering solutions, consulting, and training.

Start using Nanonets for Automation

Try out the model or request a demo today!


Source: https://nanonets.com/blog/business-process-outsourcing-bpo/

Predictive Maintenance is a Killer AI App

Predictive maintenance resulting from IoT and AI working together has been identified as a killer app, with a track record of ROI. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

Predictive maintenance (PdM) has emerged as a killer AI app. 

In the past five years, predictive maintenance has moved from a niche use case to a fast-growing, high return on investment (ROI) application that is delivering true value to users. These developments are an indication of the power of the Internet of Things (IoT) and AI together, a market considered in its infancy today. 

These observations are from research conducted by IoT Analytics, consultants who supply market intelligence, which recently estimated that the $6.9 billion predictive maintenance market will reach $28.2 billion by 2026.  

The company began its research coverage of the IoT-driven predictive maintenance market in 2016, at an industry maintenance conference in Dortmund, Germany. Not much was happening. “We were bitterly disappointed,” stated Knud Lasse Lueth, CEO at IoT Analytics, in an account in IoT Business News. “Not a single exhibitor was talking about predictive maintenance.”  

Things have changed. IoT Analytics analyst Fernando Alberto Brügge stated, "Our research in 2021 shows that predictive maintenance has clearly evolved from the rather static condition-monitoring approach. It has become a viable IoT application that is delivering overwhelmingly positive ROI."

Technical developments that have contributed to the market expansion include: a simplified process for connecting IoT assets, major advances in cloud services, and improvements in the accessibility of machine learning/data science frameworks, the analysts state.  

Along with the technical developments, the predictive maintenance market has seen a steady increase in the number of software and service providers offering solutions. IoT Analytics identified about 100 companies in the space in 2016; today the company identifies 280 related solution providers worldwide. Many of them are startups that recently entered the field. Established providers, including GE, PTC, Cisco, ABB, and Siemens, have entered the market in the past five years, many through acquisitions.

The market still has room; the analysts predict 500 companies will be in the business in the next five years.  

In 2016, the ROI from predictive analytics was unclear. In 2021, a survey of about 100 senior IT executives from the industrial sector found that predictive maintenance projects have delivered a positive ROI in 83% of the cases. Some 45% of those reported amortizing their investments in less than a year. “This data demonstrated how attractive the investment has become in recent years,” the analysts stated.   

More IoT Sensors Means More Precision 

Implemented projects that the analysts studied in 2016 relied on a limited number of data sources, typically one sensor value, such as vibration or temperature. Projects in the 2021 report drew on 11 classes of data sources, such as data from existing sensors or from controllers. As more sources are tapped, the precision of the predictions increases, the analysts state.
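
As a simple illustration of condition monitoring on a single sensor channel, the sketch below flags anomalies in a simulated vibration stream with a rolling z-score; real predictive maintenance models combine many such channels, as the report describes.

```python
# Illustrative only: a rolling z-score over one simulated vibration
# channel, the simplest form of condition monitoring. Real systems
# fuse many sensor channels and learned models.
import numpy as np

def rolling_zscore(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Z-score of each sample against the preceding window."""
    scores = np.zeros_like(signal)
    for i in range(window, len(signal)):
        past = signal[i - window:i]
        scores[i] = (signal[i] - past.mean()) / (past.std() + 1e-9)
    return scores

rng = np.random.default_rng(0)
vibration = rng.normal(1.0, 0.1, 500)
vibration[400:] += 0.8  # simulated onset of bearing degradation

alerts = np.where(np.abs(rolling_zscore(vibration)) > 4)[0]
print("first alert at sample", alerts[0] if len(alerts) else None)
```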

Many projects today are using hybrid modeling approaches that rely on domain expertise, virtual sensors and augmented data. AspenTech and PARC are two suppliers identified in the report as embracing hybrid modeling approaches. AspenTech has worked with over 60 companies to develop and test hybrid models that combine physics with ML/data science knowledge, enhancing prediction accuracy. 

The move to edge computing is expected to further benefit predictive modeling projects, by enabling algorithms to run at the point where data is collected, reducing response latency. The supplier STMicroelectronics recently introduced some smart sensor nodes that can gather data and do some analytic processing. 

More predictive maintenance apps are being integrated with enterprise software systems, such as enterprise resource planning (ERP) or computerized maintenance management systems (CMMS). Litmus Automation offers an integration service to link to any industrial asset, such as a programmable logic controller, a distributed control system, or a supervisory control and data acquisition system.

Reduced Downtime Results in Savings 

Gains come from preventing downtime. "Predictive maintenance is the result of monitoring operational equipment and taking action to prevent potential downtime or an unexpected or negative outcome," stated Mike Leone, an analyst at IT strategy firm Enterprise Strategy Group, in an account from TechTarget.

Felipe Parages, Senior Data Scientist, Valkyrie

Advances that have made predictive maintenance more practical today include sensor technology becoming more widespread and the ability to monitor industrial machines in real time, stated Felipe Parages, senior data scientist at Valkyrie, a data science consultancy. With more sensors, the volume of data has grown exponentially, and data analytics via cloud services has become available.

It used to be that an expert had to perform an analysis to determine if a machine was not operating in an optimal way. “Nowadays, with the amount of data you can leverage and the new techniques based on machine learning and AI, it is possible to find patterns in all that data, things that are very subtle and would have escaped notice by a human being,” stated Parages. 

As a result, one person can now monitor hundreds of machines, and companies are accumulating historical data, which enables deeper trend analysis. Predictive maintenance "is a very powerful weapon," he stated.

In an example project, Italy's primary rail operator, Trenitalia, adopted predictive maintenance for its high-speed trains. The system is expected to save 8 to 10% of an annual maintenance budget of 1.3 billion euros, stated Paul Miller, an analyst with research firm Forrester, which recently issued a report on the project.

"They can eliminate unplanned failures, which often provides direct savings in maintenance. But just as importantly, taking a train out of service before it breaks means better customer service and happier customers," Miller stated. He recommended organizations start out with predictive maintenance by fielding a pilot project.

In an example of the types of cooperation predictive maintenance projects are expected to engender, the CEOs of several European auto and electronics firms recently announced plans to join forces to form the "Software République," a new ecosystem for innovation in intelligent mobility. Atos, Dassault Systèmes, Groupe Renault, STMicroelectronics, and Thales announced their decision to pool their expertise to accelerate the market.

Luca de Meo, Chief Executive Officer, Groupe Renault

Luca de Meo, Chief Executive Officer of Groupe Renault, stated in a press release from STMicroelectronics, "In the new mobility value chain, on-board intelligence systems are the new driving force, where all research and investment are now concentrated. Faced with this technological challenge, we are choosing to play collectively and openly. There will be no center of gravity, the value of each will be multiplied by others. The combined expertise in cybersecurity, microelectronics, energy and data management will enable us to develop unique, cutting-edge solutions for low-carbon, shared, and responsible mobility, made in Europe."

The Software République will be based at the Renault Technocentre in Guyancourt, a commune in north-central France, in a building called Odyssée, an eco-responsible 12,000-square-meter space. For example, its interior and exterior structure is 100 percent wood, and the building is covered with photovoltaic panels.

Read the source articles in IoT Business News and TechTarget, and in a press release from STMicroelectronics.

Source: https://www.aitrends.com/predictive-analytics/predictive-maintenance-is-a-killer-ai-app/

Post Office Looks to Gain an Edge With Edge Computing

By AI Trends Editor John P. Desmond  

NVIDIA on May 6 detailed a partnership with the US Postal Service, underway for over a year, to speed up mail service using AI, with a goal of reducing processing tasks that currently take days down to hours.

The project fields edge servers at 195 Postal Service sites across the nation, which review 20 terabytes of images a day from 1,000 mail processing machines, according to a post on the NVIDIA blog.

Anthony Robbins, Vice President of Federal, Nvidia

“The federal government has been for the last several years talking about the importance of artificial intelligence as a strategic imperative to our nation, and as an important funding priority. It’s been talked about in the White House, on Capitol Hill, in the Pentagon. It’s been funded by billions of dollars, and it’s full of proof of concepts and pilots,” stated Anthony Robbins, Vice President of Federal for NVIDIA, in an interview with Nextgov. “And this is one of the few enterprisewide examples of an artificial intelligence deployment that I think can serve to inspire the whole of the federal government.”

The project started with Ryan Simpson, the USPS AI architect at the time, who had the idea of expanding an image analysis system a postal team was developing into something much bigger, according to the blog post. (Simpson worked for USPS for over 12 years and moved to NVIDIA as a senior data scientist eight months ago.) He believed a system could analyze the billions of images each center generated and distill insights into a few data points that could be shared quickly over the network.

In a three-week sprint, Simpson worked with half a dozen architects at NVIDIA and others to design the needed deep-learning models. The work was done within the Edge Computing Infrastructure Program (ECIP), a distributed edge AI system up and running on Nvidia’s EGX platform at USPS. The EGX platform enables existing and modern, data-intensive applications to be accelerated and secure on the same infrastructure, from data center to edge. 

“It used to take eight or 10 people several days to track down items, now it takes one or two people a couple of hours,” stated Todd Schimmel, Manager, Letter Mail Technology, USPS. He oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett-Packard Enterprise.  

In another analysis, a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now get done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.  

Contract Awarded in 2019 for System Using OCR  

USPS had put out a request for proposals for a system using optical character recognition (OCR) to streamline its imaging workflow. “In the past, we would have bought new hardware, software—a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” stated Schimmel. 

AI algorithms were developed on these NVIDIA DGX servers at a US Postal Service Engineering facility. (Credit: Nvidia)

Today, the new OCR application will rely on a deep learning model in a container on ECIP managed by Kubernetes, the open source container orchestration system, and served by NVIDIA Triton, the company’s open-source inference-serving software. Triton allows teams to deploy trained AI models from any framework, such as TensorFlow or PyTorch. 
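
As an illustration of how a client might query a Triton server, here is a sketch using the official tritonclient package; the model name, input/output names, and tensor shape are invented and would have to match whatever the server's model repository actually defines.

```python
# Sketch of querying a Triton inference server over HTTP with the
# official tritonclient package. The model name, input/output names
# and shape are invented; they must match the server's model config.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A batch of one 224x224 RGB image, preprocessed to float32.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

infer_input = httpclient.InferInput("input__0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

result = client.infer(model_name="ocr_model", inputs=[infer_input])
print(result.as_numpy("output__0").shape)
```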

“The deployment was very streamlined,” Schimmel stated. “We awarded the contract in September 2019, started deploying systems in February 2020 and finished most of the hardware by August—the USPS was very happy with that,” he added.

Multiple models need to work together for the USPS OCR application to function. The app that checks for mail items alone requires coordinating the work of more than half a dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models enabling more features in the future.

“The models we have deployed so far help manage the mail and the Postal Service—they help us maintain our mission,” Schimmel stated.  

One model, for example, automatically checks to see if a package carries the right postage for its size, weight, and destination. Another one that will automatically decipher a damaged barcode could be online this summer.  

“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he stated. 

Accenture Federal Services, Dell Technologies, and Hewlett-Packard Enterprise contributed to the USPS OCR system incorporating AI, Robbins of NVIDIA stated. Specialized computing cabinets—or nodes—that contain hardware and software specifically tuned for creating and training ML models, were installed at two data centers.   

“The AI work that has to happen across the federal government is a giant team sport,” Robbins stated to Nextgov. “And the Postal Service’s deployment of AI across their enterprise exhibited just that.”

The new solutions could help the Postal Service improve delivery standards, which have fallen over the past year. In mid-December, during the last holiday season, the agency delivered as little as 62% of first-class mail on time—the lowest level in years, according to an account in VentureBeat. The rate rebounded to 84% by the week of March 6 but remained below the agency’s target of about 96%.

The Postal Service has blamed the pandemic and record peak periods for much of the poor service performance. 

Read the source articles and information on the Nvidia blog, in Nextgov and in VentureBeat.

Source: https://www.aitrends.com/edge-computing/post-office-looks-to-gain-an-edge-with-edge-computing/

Here Come the AI Regulations

New proposed laws to govern AI are being entertained in the US and Europe, with China following a government-first approach. (Credit: Getty Images)  

By AI Trends Staff 

New laws will soon shape how companies use AI.   

The five largest federal financial regulators in the US recently released a request for information on how banks use AI, signaling that new guidance is coming for the finance business. Soon after that, the US Federal Trade Commission released a set of guidelines on “truth, fairness and equity” in AI, defining the illegal use of AI as any act that “causes more harm than good,” according to a recent account in Harvard Business Review.

And on April 21, the European Commission issued its own proposal for the regulation of AI (see AI Trends, April 22, 2021).

Andrew Burt, Managing Partner, bnh.ai

While we don’t know what these regulations will allow, “Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated article author Andrew Burt, the managing partner of bnh.ai, a boutique law firm focused on AI and analytics.

First, conduct assessments of AI risks. As part of the effort, document how the risks have been minimized or resolved. Regulatory frameworks that refer to these as “algorithmic impact assessments,” or “IA for AI,” are available.

For example, Virginia’s recently passed Consumer Data Protection Act requires assessments for certain types of high-risk algorithms.

The EU’s new proposal requires an eight-part technical document to be completed for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, Burt states. The EU proposal is similar to the Algorithmic Accountability Act filed in the US Congress in 2019. The bill did not go anywhere but is expected to be reintroduced.  

Second, ensure accountability and independence. The suggestion is that the data scientists, lawyers, and others evaluating the AI system have different incentives than the frontline data scientists. In practice, this could mean that the AI is tested and validated by different technical personnel than those who originally developed it, or organizations may choose to hire outside experts to assess the AI system.

“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt states.  

Third, review continuously. AI systems are “brittle and subject to high rates of failure,” with risks that grow and change over time, making it difficult to mitigate risk at a single point in time. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” Burt stated.

Approaches in US, Europe and China Differ  

The US, Europe, and China differ in their approaches to AI regulation, according to a recent account in The Verdict, based on analysis by GlobalData, the data analytics and consulting company based in London.

“Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of overregulation,” the account states. Meanwhile, “China continues to follow a government-first approach” and has been widely criticized for the use of AI technology to monitor citizens. The account noted the rollout by Tencent last year of an AI-based credit scoring system to determine the “trust value” of people, and the installation of surveillance cameras outside people’s homes to monitor the quarantine imposed after the outbreak of COVID-19.

“Whether the US’ tech industry-led efforts, China’s government-first approach, or Europe’s privacy and regulation-driven approach is the best way forward remains to be seen,” the account stated.

In the US, many companies are aware of the risk that new AI regulation could stifle innovation and their ability to grow in the digital economy, suggested a recent report from PwC, the multinational professional services firm.

“It’s in a company’s interests to tackle risks related to data, governance, outputs, reporting, machine learning and AI models, ahead of regulation,” the PwC analysts state. They recommended business leaders assemble people from across the organization to oversee accountability and governance of technology, with oversight from a diverse team that includes members with business, IT, and specialized AI skills.

Critics of European AI Act Cite Too Much Gray Area 

While some argue that the European Commission’s proposed AI Act leaves too much gray area, the Commission hopes the Act will provide guidance for businesses wanting to pursue AI, as well as a degree of legal certainty.

Thierry Breton, European Commissioner for the Internal Market

“Trust… we think is vitally important to allow the development we want of artificial intelligence,” stated Thierry Breton, European Commissioner for the Internal Market, in an account in TechCrunch. AI applications “need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.” 

“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines—we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also to the continent where you will have the largest amount of industrial data created on the planet for the next ten years.” 

“So come here—because artificial intelligence is about data—we’ll give you the guidelines. We will also have the tools to do it and the infrastructure,” Breton suggested. 

Other reactions included plenty of criticism of the proposal’s overly broad exemptions, such as allowing law enforcement to use remote biometric surveillance including facial recognition technology, as well as concerns that its measures to address the risk of AI systems discriminating do not go nearly far enough.

“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice,” stated Griff Ferris, legal and policy officer for Fair Trials, the global criminal justice watchdog based in London. “The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.”  

To accomplish this, he suggested, “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.”

Read the source articles and information in Harvard Business Review, in The Verdict and in TechCrunch. 

Source: https://www.aitrends.com/data-privacy-and-security/here-come-the-ai-regulations/

Pandemic Spurred Identity Fraud; AI and Biometrics Are Responding

AI and biometrics are being more widely incorporated in new cybersecurity products, as losses from cyberattacks and identity theft increased dramatically in 2020. (Credit: Getty Images) 

By AI Trends Staff 

Cyberattacks and identity fraud losses increased dramatically in 2020 as the pandemic made remote work the norm, setting the stage for AI and biometrics to combine in efforts to attain a higher level of protection. 

One study found banks worldwide saw a 238% jump in cyberattacks between February and April 2020; a study from Javelin Strategy & Research found that identity fraud losses grew to $56 billion last year as fraudsters used stolen personal information to create synthetic identities, according to a recent account from Pymnts.com. In addition, automated bot attacks shot upward by 100 million between July and December, targeting companies in a range of industries.  

Companies striving for better protection risk making life more difficult for their customers; another study found that 40% of financial institutions frequently mistake the online actions of legitimate customers for those of fraudsters.

Caleb Callahan, Vice President of Fraud, Stash Financial

“As we look toward the post-pandemic—or, more accurately, inter-pandemic—era, we see just how good fraudsters were at using synthetic identities to defeat manual and semi-manual onboarding processes,” stated Caleb Callahan, Vice President of Fraud at Stash Financial of New York, offering a personal finance app, in an interview with Pymnts. 

A SIM Swap Can Create a Synthetic Identity

One technique for achieving a synthetic identity is a SIM swap, in which someone contacts your wireless carrier and is able to convince the call center employee that they are you, using personal data that may have been exposed in hacks, data breaches or information publicly shared on social networks, according to an account on CNET.  

Once your phone number is assigned to a new card, all of your incoming calls and text messages will be routed to whatever phone the new SIM card is in.  

Identity theft losses were $712.4 billion-plus in 2020, up 42% from 2019, Callahan stated. “To be frank, our defenses are fragmented and too dependent on technologies such as SMS [texting] that were never designed to provide secure services. Banks and all businesses should be looking at how to unify data signals and layer checkpoints in order to keep up with today’s sophisticated fraudsters,” he stated.  

Asked what tools and technologies would help differentiate between fraudsters and legitimate customers, Callahan stated, “in an ideal world, we would have a digital identity infrastructure that banks and others could depend on, but I think that we are some ways away from that right now.”  

Going forward, “The needs of the travel and hospitality, health, education and other sectors might accelerate the evolution of infrastructure for safety and security,” Callahan foresees. 

AI and Biometrics Seen as Offering Security Advantages 

AI can be employed to protect against digital identity fraud, for example by offering greater accuracy and speed when verifying a person’s identity, or by incorporating biometric data so that a cybercriminal cannot gain access to information by providing credentials alone, according to an account in Forbes.

Deepak Gupta, Cofounder and CTO, LoginRadius

“AI has the power to save the world from digital identity fraud,” stated Deepak Gupta, author of the Forbes article and cofounder and CTO of LoginRadius, a cloud-based consumer identity platform. “In the fight against ID theft, it is already a strong weapon. AI systems are entirely likely to end the reign of the individual hacker.”

While he sees AI authentication as being in an early phase, Gupta recommended that companies examine the following: the use of intelligent adaptive authentication, such as local and device fingerprints; biometric authentication based on the face or fingerprints; and smart data filters. “A well-developed AI protection system will have the ability to respond in nanoseconds to close a leak,” he stated.

Pandemic Altered Consumer Financial Behavior, Spurred Identity Fraud  

The global pandemic has had a dramatic impact on consumer financial behavior. Consumers spent more time at home in 2020, transacted less than in previous years, and relied heavily on streaming services, digital commerce, and payments. They also corresponded more via email and text, for both work and personal life.  

“The pandemic inspired a major shift in how criminals approach fraud,” stated John Buzzard, Lead Analyst, Fraud & Security, with Javelin Strategy & Research in a press release. “Identity fraud has evolved and now reflects the lengths criminals will take to directly target consumers in order to steal their personally identifiable information.” 

Companies made quick adjustments to their business models, such as by increasing remote interactions with borrowers for loan originations and closings, and criminals pounced on new vulnerabilities they discovered. Nearly one-third of identity fraud victims say their financial services providers did not satisfactorily resolve their problems, and 38% of victims closed their accounts because of lack of resolution, the Javelin researchers found.   

“It is clear that financial institutions must continue to proactively and transparently manage fraud as a means to deepen their customer relationships,” stated Eric Kraus, Vice President and General Manager of Fraud, Risk and Compliance, FIS. The company offers technology solutions for merchants, banks, and capital markets firms globally. “Through our continuing business relationships with financial institutions, we know firsthand that consumers are looking to their banks to resolve instances of fraud, regardless of how the fraud occurred,” he added.  

This push from consumers, who are becoming increasingly savvy online, will lay a foundation for safer digital transactions.

“Static forms of consumer authentication must be replaced with a modern, standards-based approach that utilizes biometrics,” stated David Henstock, Vice President of Identity Products at Visa, the world’s leader in digital payments. “Businesses benefit from reduced customer friction, lower abandonment rates and fewer chargebacks, while consumers benefit from better fraud prevention and faster payment during checkout.” 

The 2021 Identity Fraud Study from Javelin is now in its 18th year. 

Read the source articles and information from Pymnts.com, from CNET, in Forbes, and in a press release from Javelin Strategy & Research.

Source: https://www.aitrends.com/security/pandemic-spurred-identity-fraud-ai-and-biometrics-are-responding/
