Business Process Outsourcing (BPO) Automation


What is a BPO?

In general, the idea of outsourcing is to use outside vendors to carry out standard business functions that are not core to the business. This simplifies management, allowing the company to retain only core staff focused on the high-value activities of growing the business and researching new opportunities, while regular and well-understood operations, like manufacturing and workflow management, are delegated to external vendors.

Automation in Business Process Outsourcing (BPOs)

Overseas vendors are often favored because they bring competitive advantages to the combined enterprise, such as lower labor costs for vertically specialized workers, multilingual skills, overnight operations, and better disaster-recovery response thanks to geographically distributed operations.

Industries that process a large daily volume of paperwork spend considerable effort and money managing workflows. A workflow consists of a sequence of administrative checkpoints and actions, each performed by a different worker, like the steps involved in paying an invoice or approving a health insurance claim.



The manual, repetitive nature of the tasks embedded in a workflow often leads to human errors and data loss, causing delays and rework. This is magnified in complex businesses operating at large scale. All of this makes workflow management a good target for automation via software: computers perform repetitive tasks without introducing the random errors caused by human attention fatigue.

An example of a repetitive task is Data Entry. In the case of the health insurance claim process, the workflow begins with a staff member uploading scanned images of paper documents to cloud storage. The next step involves a worker who looks at the document image, reads it, interprets it well enough to identify the relevant pieces of information, and types them into a system, where they are stored as numeric and text fields for further processing by the subsequent steps of the workflow.

The Data Entry task can be automated with the help of Optical Character Recognition (OCR) and Information Extraction (IE) technologies (see [1] for an in-depth technical example), eliminating the risk of human error.
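
As a minimal sketch of what such automation can look like, the example below runs OCR on a scanned form and applies simple rules to extract a few fields. It assumes the open-source Tesseract engine via the pytesseract wrapper; the field names and regex patterns are illustrative, and a production system would use a learned IE model rather than hand-written rules.

    # Minimal sketch of automated data entry: OCR a scanned claim form,
    # then extract a few fields with simple rules. Assumes the Tesseract
    # engine is installed; patterns and field names are illustrative only.
    import re

    import pytesseract
    from PIL import Image

    def extract_claim_fields(image_path: str) -> dict:
        # Step 1: OCR -- convert the scanned image into raw text.
        text = pytesseract.image_to_string(Image.open(image_path))

        # Step 2: IE -- pull out the fields the workflow needs.
        matches = {
            "claim_id": re.search(r"Claim\s*(?:No|#)[:.]?\s*(\w+)", text),
            "date": re.search(r"\b(\d{2}/\d{2}/\d{4})\b", text),
            "amount": re.search(r"\$\s*([\d,]+\.\d{2})", text),
        }
        return {k: (m.group(1) if m else None) for k, m in matches.items()}

    print(extract_claim_fields("scanned_claim.jpg"))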


Looking for an AI-based solution to enable automation in BPOs? Give Nanonets a spin and put all document-related activities in Business Process Outsourcing on autopilot!


BPO Fulfillment Services

Since the early 1990s, supply chains have been run to maximize efficiency, driving the concentration of specialized services in providers that offer economies of scale. For example, the iPhone supply chain comprises vendors in up to 50 countries. This globalization of manufacturing networks has a parallel in the business-processing world, as companies have learned to rely on BPO fulfillment vendors from across the globe.

Challenges for Business Process Outsourcing (BPO) Industry

BPO is not only indispensable to a small company that wants to capitalize on a sudden surge in demand for its products; it also makes sense for most companies for activities like telemarketing, which is why many BPO companies offer services in lead generation, sales, and customer service. Although BPO fulfillment has already become a multi-billion-dollar industry, its growth may accelerate with the adoption of AI technologies.

Impact of Artificial Intelligence on BPO Services

Onshore AI-powered solutions now present viable alternatives to traditional offshore BPO services, offering:

  • Equivalent quality and superior geographic independence
  • Lower labor costs
  • Faster processing
  • Better scalability
  • Higher accuracy
  • A smaller carbon footprint

Considering that only 10 years ago AI was not even in this race, many observers foresee that in the near future most BPOs will transition to partially or fully AI-powered offerings.

(source: https://enterprisersproject.com/article/2020/1/rpa-robotic-process-automation-5-lessons-before-start)

A Brief History of AI

For the last 20 years, AI-powered retail and marketing has enjoyed great success. Mining actionable insights from customer behavior data captured all over the internet has allowed most companies to maximize the ROI of their retail operations and marketing investments.

The data that defined AI in the 2000s was tabular: data neatly organized in rows and columns. That explains why the first wave of commercial AI was limited to processing spreadsheet-like data (just bigger). It was the golden era of:

  1. recommender systems based on collaborative filtering algorithms
  2. search portals powered by graph algorithms
  3. sentiment and spam classifiers built on n-grams

In the next decade, the 2010s, commercial-grade AI broke the tabular-data barrier, beginning to process data in the form of sound waves and images, and to understand simple nuances in text and conversation.

This was enabled by the development of deep neural networks, a new breed of bold and sophisticated machine learning algorithms that power most of today's AI applications and that, given enough data and computing resources, do everything better than the previous generation, plus hear, see, talk, translate, and even imagine things.

All this progress was based on machine learning systems that codify their knowledge into millions (sometimes billions) of numeric parameters, and later make their decisions by combining those parameters through millions of algebraic operations. This makes it extremely hard, or practically impossible, for a human to track and understand how a particular decision was made, which is why these models have been characterized as black boxes. The need to understand them has motivated the study of a new buzzword: Explainable AI, or XAI for short.

(source: https://www.kdnuggets.com/2019/12/googles-new-explainable-ai-service.html)
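
To make the black-box problem concrete, here is a minimal XAI sketch using the open-source SHAP library on a toy model; the dataset and model are synthetic stand-ins, not anything discussed above.

    # Minimal XAI sketch: SHAP attributions show how much each input
    # feature pushed one individual prediction up or down. Toy data and
    # model; a real deployment would explain the production model instead.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:1])  # explain one decision
    print(shap_values)  # per-feature contributions for this prediction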




You Have the Right to an Explanation

With AI enjoying more attention from academia and investors than ever before, it will continue to improve its human-like abilities. Now is the time for it to develop enough sense of responsibility and civic duty before it is put in charge of deciding who gets a loan or advising on which patients can be discharged from a hospital.

The main concern is that decisions about human subjects become hidden inside complex AI-powered processes that no one cares to understand, as long as the outcome appears optimal, even if it is based on racial bias or other socially damaging criteria.

In this regard, one of the most active areas of research in AI is developing tools that allow humans to interpret model decisions with the same clarity with which human-made decisions within an organization can be analyzed.

This is known as the right to explanation, and it has a sensible and straightforward expression in countries like France, which has updated a code from the 1970s, aimed at ensuring transparency in decisions made by government functionaries, by simply extending it to AI-made decisions. Explanations should include the following:

  • the degree and the mode of contribution of the algorithmic processing to the decision-making;
  • the data processed and its source;
  • the processing parameters and, where appropriate, their weighting, as applied to the situation of the person concerned;
  • the operations carried out by the processing.

(source: https://www.darpa.mil/program/explainable-artificial-intelligence)

A more practical concern raised by AI-based BPO is liability: who is ultimately responsible for a failure of the AI system?

Finally, there is the question of intellectual property. Since AI systems can learn from experience, who owns the improved knowledge that the AI system has distilled from the data produced by the BPO customer's operation?

These concerns have clear implications for AI-powered BPO, which may need to address them in service contracts.




Examples of BPO Offerings Incorporating AI

Although AI is currently unable to match humans in the mental flexibility needed to deal with new situations, or even to reach a child's understanding of the world we live in, it has demonstrated the ability to perform well in narrowly defined knowledge domains, like the ones that make good candidates for BPO.

Document Management

A large vertical of the BPO industry is Document Management. This sector is undergoing a massive transformation as large companies in document-centric industries standardize and document their internal processes, getting ready to outsource them in order to focus on their core competencies.

This trend is driving traditional document management service providers to develop a more sophisticated service offering, including business verticals like:

  • Invoice Processing
  • Digitization of Healthcare Records
  • Claims Processing
  • Bank Statement Ingestion
  • Loan Application Processing, etc.

As these BPO providers evolve beyond their basic services of document scanning and archiving, with the occasional reporting and printing, towards higher value-added services, they need to develop an integrated stack of technologies comprising high-speed, high-volume document scanners, advanced document capture, data recognition, and workflow management software.

AI-powered data extraction from scanned images is the key that opens the door to high value-added services like RPA. In the image below, we see how an AI system sees an image: to the model, the image is represented by a group of text areas and the relative distances between those areas. With this information, the system applies statistical inference to arrive at the most likely meaning of each text and number in the image.

(source: https://nanonets.com/blog/information-extraction-graph-convolutional-networks/)
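
A minimal sketch of that representation: text segments with bounding-box coordinates and a pairwise distance that downstream inference can consume. The data structure and distance measure are illustrative simplifications of what a real model would use.

    # Sketch of how a document looks to the model described above: a set
    # of OCR'd text segments with bounding-box positions, plus pairwise
    # relative distances that an inference step (e.g., a graph network)
    # consumes. Dataclass and distance measure are simplifications.
    import math
    from dataclasses import dataclass

    @dataclass
    class TextSegment:
        text: str
        x: float  # bounding-box center, normalized to page width
        y: float  # bounding-box center, normalized to page height

    def relative_distance(a: TextSegment, b: TextSegment) -> float:
        return math.hypot(a.x - b.x, a.y - b.y)

    segments = [
        TextSegment("Invoice Date:", 0.15, 0.10),
        TextSegment("2021-05-04", 0.35, 0.10),
        TextSegment("Total:", 0.15, 0.80),
    ]

    # A segment close to the right of a label is a likely field value.
    label, value = segments[0], segments[1]
    print(relative_distance(label, value))  # small distance -> same field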

Consuming an ML Service Through an API

The above discussion explains the need for, and the value of, integrating AI services with document processing workflows. In this section, I discuss the integration of a cloud-based OCR and data recognition service. This service extracts all the relevant information from a scanned form and returns it in a computer-readable format to be further processed by workflow management software.

This AI component comprises two workflows: the first trains a machine learning model with examples provided by the model developer, and the second calls the model trained in the first step to extract information from new documents.

For brevity, I only outline the main steps involved in a typical process.

Training Workflow

  • Step 1: Document scanning. This process converts a physical paper document into an image file in a standard format like JPEG or PDF.
  • Step 2: OCR processing. This process recognizes areas of the document containing letters and digits and outputs their contents as a list of text segments, together with their bounding-box coordinates.
  • Step 3: Manual annotation of the images. This step is performed by a human with the help of a special editor that allows them to select an area of text and assign a tag identifying the type of information it contains, for example the date of purchase on an invoice.
  • Step 4: Upload the examples to the cloud. This step is performed by calling an API and makes the training examples available to the AI cloud software so they can be used in the next step to train the model.
  • Step 5: Train an Information Extraction model. This process is triggered by calling an API. After training completes, the model is available for use in the production workflow. (A sketch of steps 4 and 5 as API calls follows this list.)
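
Here is a minimal sketch of steps 4 and 5 as HTTP calls. The endpoint URLs, payload shapes, and credential are hypothetical placeholders standing in for whatever a real service defines; consult the provider's API documentation for the actual interface.

    # Sketch of steps 4 and 5 as API calls. Endpoints, payloads, and the
    # API key are hypothetical placeholders, not a real provider's API.
    import requests

    API_KEY = "your-api-key"          # hypothetical credential
    BASE = "https://api.example.com"  # hypothetical service endpoint

    # Step 4: upload an annotated training example to the cloud.
    with open("invoice_001.jpg", "rb") as f:
        requests.post(
            f"{BASE}/models/invoice-extractor/examples",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"annotations":
                  '[{"tag": "purchase_date", "text": "2021-05-04"}]'},
        )

    # Step 5: trigger training; the service trains asynchronously.
    resp = requests.post(
        f"{BASE}/models/invoice-extractor/train",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    print(resp.json())  # e.g., a job id to poll until training completes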

Production Workflow

This is the workflow that produces useful results on the customer's data. The first two steps are shared with the training workflow; the difference starts at the third step, which in the language of machine learning is known as “prediction” (although its meaning is closer to “making educated guesses”).

  • Step 3: Automatic Information Extraction (prediction). This process is performed in the cloud by the AI model, which goes over the OCR output and recognizes the numbers and text segments that are useful for further processing. The output can be tabular data, in a format that is easy to process by the software that performs the next task in the workflow. (A sketch of this call appears below.)
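
And a minimal sketch of the prediction call, using the same hypothetical endpoint and response shape as in the training sketch above.

    # Sketch of the prediction step: send a new scanned document to the
    # trained model and receive extracted fields. Endpoint and response
    # shape are hypothetical placeholders.
    import requests

    API_KEY = "your-api-key"          # hypothetical credential
    BASE = "https://api.example.com"  # hypothetical service endpoint

    with open("new_invoice.jpg", "rb") as f:
        resp = requests.post(
            f"{BASE}/models/invoice-extractor/predict",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )

    # Tabular output, ready for the next step of the workflow, e.g.:
    # {"purchase_date": "2021-05-04", "total": "1,280.00", ...}
    print(resp.json())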



Other AI Opportunities in the BPO Industry

The opportunities to apply AI in the BPO industry usually fall in two large categories: robotic process automation (RPA) and chatbots.

What is RPA?

In the context of BPO, “robotic” refers to the kind of technology that automatically makes decisions about finance and accounting spreadsheets. In this category we can include a huge number of insight-mining services that are commonplace in most large companies but not yet affordable to all: customer personalization based on recommender systems, classifiers that approve loans or detect churn in customer accounts, text classifiers that categorize customer feedback as positive or negative, and even classifiers that detect bots and trolling in the company's social media (a minimal sketch of one such classifier appears below). This runs in parallel to the processing of data from sensors and the subsequent computation of analytic indicators that drive efficiencies in the supply chain.

(source: https://www.processmaker.com/blog/how-do-banks-benefit-from-robotic-process-automation-rpa/)
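
As the promised illustration, here is a minimal sketch of a churn-style classifier on tabular account features; the data is synthetic and the model choice is illustrative, not a recommendation.

    # Minimal sketch of one RPA-adjacent insight-mining service: a churn
    # classifier trained on tabular account features. Synthetic data as
    # a stand-in for real account histories.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"churn-risk accuracy: {clf.score(X_test, y_test):.2f}")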

Conversational Agents

Generally known as “chatbots”, conversational agents can be divided into two large categories: task-oriented agents and chatbots.

A chatbot just makes conversation; it can be found on social media, participating in blogs, where it is expected to express opinions on a wide range of topics about which it knows nothing. Microsoft's famous Tay bot is a typical example of this category (although that one did not have a happy ending, it was a great lesson in the subtleties of AI adoption).

On the other side of the spectrum we find task-oriented agents, whose only mission is to handle a practical task in a specific domain; this would be the case of your phone handling a restaurant booking or routing you to a destination.

With enough training, task-oriented agents can diligently handle the most common questions processed by telephone help desks and customer contact services. A chatbot can operate satisfactorily only in a narrow field of knowledge, and for that reason it is usually deployed as a first tier that answers the simple cases and routes all other issues to a human specialist.

Even though the chatbot may not be able to answer a wide variety of questions (those that fall in the “long tail” of the distribution), just by answering the most frequent ones it has a valuable impact, reducing the number of calls answered by humans.

Conversational agents have come a long way, but they are still an area of research: there is progress in certain areas, with impressive AI designs published in scientific papers but very limited application in industry, due to the difficulty of acquiring high-quality datasets. The typical workflow of a conversational agent can be seen below.

(source: https://arxiv.org/abs/1703.01008)

The Data Trove

We have developed AI systems that are able to absorb knowledge specific to a field of business, and we have also developed sophisticated ways to represent that knowledge in a way that machines can process it.

Now, all we need is a dataset: a properly annotated and cleaned set of data, with lots of examples and enough detail for the AI system to learn from. This proves to be one of the most important challenges, given that AI systems do not learn like humans do from just a few examples.

AI typically needs many examples of every little thing you want it to learn. If your data presents a large variety of cases, then you need several examples of each of those cases. And human language has tens of thousands of cases, called words, which can be combined to form a huge number of sentences and express an innumerable number of concepts. Sentences, in turn, can be combined to form conversations.

At this point you can see why acquiring conversational datasets is a challenge. Luckily for us, we don't need to train an AI system from scratch: thanks to a technique known as transfer learning, we can start from a system that already understands language, and all we need to teach it is the meaning of words in a business vertical.
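
A minimal sketch of that transfer-learning recipe, assuming the Hugging Face transformers and PyTorch libraries: start from a public pretrained checkpoint and fine-tune a classification head on a handful of domain examples. The two example sentences stand in for a real labeled set.

    # Transfer learning sketch: fine-tune a pretrained language model on
    # a (tiny, illustrative) vertical-domain classification task.
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)  # only the head is new

    texts = ["claim approved after review", "missing policy number"]
    labels = [1, 0]
    enc = tokenizer(texts, truncation=True, padding=True,
                    return_tensors="pt")

    class TinyDataset(torch.utils.data.Dataset):
        def __len__(self):
            return len(labels)
        def __getitem__(self, i):
            return {**{k: v[i] for k, v in enc.items()},
                    "labels": torch.tensor(labels[i])}

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1),
        train_dataset=TinyDataset(),
    ).train()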

The cost of conversational dataset development makes this type of AI system prohibitive for small companies to train; it remains affordable only to the largest data powerhouses. This is why some of the most operationally significant breakthroughs in conversational agent research consist not so much of model development as of the design of training mechanisms that make efficient use of the dataset.

(source: https://arxiv.org/abs/1703.01008)

This involves the use of simulators that can generate new combinations of the sentences in the dataset, effectively multiplying its size. The diagram above depicts a workflow used to train a goal-oriented conversational agent with the help of a rule-based simulator that combines the sentences from a static dataset according to some simple hard-coded rules.
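
As a toy illustration of that idea, the sketch below recombines hard-coded templates and slot values into many distinct training utterances. The templates and slots are invented for the example; a real simulator would also track dialog state and user goals.

    # Sketch of dataset multiplication with a rule-based user simulator:
    # templates and slot values recombine into many training utterances.
    import itertools

    templates = [
        "book a table at {restaurant} for {count} people",
        "I need a {count}-person reservation at {restaurant}",
    ]
    slots = {
        "restaurant": ["Luigi's", "Sakura", "The Green Fork"],
        "count": ["two", "four", "six"],
    }

    utterances = [
        t.format(restaurant=r, count=c)
        for t, r, c in itertools.product(
            templates, slots["restaurant"], slots["count"])
    ]
    print(len(utterances))  # 2 templates x 3 x 3 slot values = 18 examples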

The chart below shows the performance of an AI system learning to converse with a rule-based simulator. After a number of training episodes, the conversational agent catches up with and surpasses its rule-based trainer.

(source: Spoken Dialog System trained with user simulator)

Beyond Chatbots

Omnichannel call centers are a recent evolution of the traditional call center: services that manage customer communications across multiple channels, including email, documents, voice calls, and chat sessions.

A huge opportunity lies in harnessing the data collected along with the business processes to maximize process efficiency. For example, by extracting insights from customer communications, companies can personalize marketing and customer service, which has the potential to dramatically improve marketing ROI.

But in order to extract actionable insights from multichannel customer threads, it is not enough to store the media in a central repository; an AI stack is needed to extract readable text and to properly contextualize the communications, for example by detecting sentiment or emotional content.

This has motivated the integration of call center software with AI technologies like natural language processing (NLP) and voice analytics that examine vocal tone in audio or emotional cues in video.
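
As a minimal illustration of the NLP side, the sketch below scores the sentiment of one customer message with an off-the-shelf model from the Hugging Face transformers library. The message is invented, and voice analytics would require a separate audio pipeline.

    # Sketch of contextualizing a customer message: an off-the-shelf
    # sentiment model from the transformers library.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")
    msg = "I've been on hold for an hour and still have no answer."
    print(sentiment(msg))  # e.g., [{'label': 'NEGATIVE', 'score': 0.99}]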

Although most management leaders are only starting to figure out how to gain access to these insights, AI has the power to transform the heaps of disparate media generated by omnichannel call centers into a valuable trove of interpretable and actionable information that enables company leadership to apply data-driven management techniques.

For example, the company can undertake root-cause analysis of agent performance by applying NLP algorithms to find out whether agents are empathetic to customers, follow call scripts, and observe company policies. These are actionable insights that can guide decisions about the training, hiring, and performance management of agents. The same data can be analyzed to make product and process improvement decisions that minimize support calls.




Conclusion

We have discussed AI integration in the BPO software stacks of different BPO sectors, including omnichannel call centers, document processing and workflow management.

The integration of AI is going to prove essential for BPO providers to continue developing high value-added services that satisfy the evolving demands of their customers.

In response to this huge potential market, some AI companies specialize in document processing and NLP functionality delivered over SaaS platforms that are easy to consume through a simple cloud API.

Nanonets has perfected an OCR + IE stack and packaged it conveniently behind a high-performance service API, so that developers of workflow management software can take advantage of it without incurring the costs of building and maintaining this highly specialized stack of AI technologies.

Nanonets is committed to continuing to develop its AI platform, with the added benefit of an active online community of users and a strong network of partners offering solutions, consulting, and training.

Start using Nanonets for Automation

Try out the model or request a demo today!

TRY NOW

Source: https://nanonets.com/blog/business-process-outsourcing-bpo/

Falsified Satellite Images in Deepfake Geography Seen as Security Threat

Scientists who have identified a potential national security threat from deepfake geography, such as in false satellite images, are studying ways to identify them and take countermeasures. (Credit: Getty Images)

By John P. Desmond, AI Trends Editor

Deepfake is a portmanteau of “deep learning” and “fake”, and refers to synthetic media in which a person in an existing image or video is typically replaced with someone else's likeness. Deepfakes use techniques from machine learning and AI to manipulate visual and audio content with a high potential to deceive.

Deepfakes applied to geography have the potential to falsify satellite image data, which could pose a national security threat. Scientists at the University of Washington (UW) are studying this, in the hopes of finding ways to detect fake satellite images and warn of their dangers.

Bo Zhao, Assistant Professor of Geography, University of Washington

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” stated Bo Zhao, assistant professor of geography at the UW and lead author of the study, in a news release from the University of Washington. The study was published on April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it,” Zhao stated.

Fake locations and other inaccuracies have been part of mapmaking since ancient times, due to the nature of translating real-life locations to map form. But some inaccuracies in maps are created by the mapmakers to prevent copyright infringement.

National Geospatial Intelligence Agency Director Sounds Alarm

Now with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, the spoofing involves great sophistication and carries more risks. The director of the federal agency in charge of geospatial intelligence, the National Geospatial Intelligence Agency (NGA), sounded the alarm at an industry conference in 2019.

“We’re currently faced with a security environment that is more complex, inter­connected, and volatile than we’ve experienced in recent memory—one which will require us to do things differently if we’re to navigate ourselves through it successfully,” stated NGA Director Vice Adm. Robert Sharp, according to an account from SpaceNews.

To study how satellite images can be faked, Zhao and his team at UW used an AI framework that has been used to manipulate other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images of an urban area, then generates a deepfake image by feeding those learned characteristics onto a different base map. The researchers employed a generative adversarial network machine learning framework to achieve this.

The researchers combined maps and satellite images from three cities—Tacoma, Seattle and Beijing—to compare features and create new images of one city, drawn from the characteristics of the other two. The untrained eye may have difficulty detecting the differences between real and fake, the researchers noted. The researchers studied color histograms and frequency, texture, contrast, and spatial domains, to try to identify the fakes.

Simulated satellite imagery can serve a legitimate purpose, for example when used to represent how an area is affected by climate change over time. If there are no images for a certain period, filling in the gaps can provide perspective. The simulations need to be labeled as such.

The researchers hope to learn how to detect fake images, to help geographers develop data literacy tools, similar to fact-checking services. As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data, Zhao stated. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary,” he said.

In an interview with The Verge, Zhao stated the aim of his study “is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deep fake geography.” He stated that although deepfakes are widely discussed in other fields, his paper is likely the first to touch upon the topic in geography.

“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem-solving, few have publicly recognized or criticized the potential threats of deep fake to the field of geography or beyond,” stated the authors.

US Army Researchers Also Working on Deepfake Detection

Professor C.-C. Jay Kuo, Professor of Electrical and Computer Engineering, University of Southern California

US Army researchers are also working on a deepfake detection method. Researchers at the US Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory, in collaboration with Professor C.C. Jay Kuo’s research group at the University of Southern California, are examining the threat that deepfakes pose to our society and national security, according to a release from the US Army Research Laboratory (ARL).

Their work is featured in the paper titled “DefakeHop: A light-weight high-performance deepfake detector,” which will be presented at the IEEE International Conference on Multimedia and Expo 2021 in July.

ARL researchers Dr. Suya You and Dr. Shuowen (Sean) Hu noted that most state-of-the-art deepfake video detection and media forensics methods are based upon deep learning, which has inherent weaknesses in robustness, scalability, and portability.

“Due to the progression of generative neural networks, AI-driven deepfakes have advanced so rapidly that there is a scarcity of reliable techniques to detect and defend against them,” You stated. “We have an urgent need for an alternative paradigm that can understand the mechanism behind the startling performance of deepfakes, and to develop effective defense solutions with solid theoretical support.”

Relying on their experience with machine learning, signal analysis, and computer vision, the researchers developed a new theory and mathematical framework they call the Successive Subspace Learning, or SSL, as an innovative neural network architecture. SSL is the key innovation of DefakeHop, the researchers stated.

“SSL is an entirely new mathematical framework for neural network architecture developed from signal transform theory,” Kuo stated. “It is radically different from the traditional approach. It is very suitable for high-dimensional data that have short-, mid- and long-range covariance structures. SSL is a complete data-driven unsupervised framework, offering a brand-new tool for image processing and understanding tasks such as face biometrics.”

Read the source articles and information in a news release from the University of Washington, in the journal Cartography and Geographic Information Science, an account from SpaceNews, a release from the US Army Research Laboratory, and in the paper titled “DefakeHop: A light-weight high-performance deepfake detector.”

Source: https://www.aitrends.com/ai-in-science/falsified-satellite-images-in-deepfake-geography-seen-as-security-threat/

Data Science is Where to Find the Most AI Jobs and Highest Salaries

AI is a hot job market and the hottest jobs in AI are in data science. And data science jobs also pay the highest salaries. (Credit: Getty Images)

By John P. Desmond, AI Trends Editor

Jobs in data science grew nearly 46% in 2020, with salaries in the range of $100,000 to $130,000 annually, according to a recent account in TechRepublic based on information from LinkedIn and LHH, formerly Lee Hecht Harrison, a global provider of talent and leadership development.

Related job titles include data science specialist and data management analyst. Companies hiring were called out in the TechRepublic account, including:

Paul Anderson, CEO, Novacoast

Novacoast helps organizations build a cybersecurity posture through engineering, development, and managed services. Founded in 1996 in Santa Barbara, the company has many remote employees and a presence in the UK, Canada, Mexico, and Guatemala.

The company offers a security operations center (SOC) cloud offering called novaSOC, which analyzes emerging challenges. “We work to have an answer ready before we’ve been asked,” stated CEO Paul Anderson in a press release issued on the company’s inclusion on a list of the top 250 Managed Service Providers from MSSP Alert. novaSOC automatically collects endpoint data and correlates it with threat intelligence sources, adding in analysis and reporting to make a responsive security monitoring service. Novacoast is planning to hire 60 employees to open a new SOC in Wichita, Kansas.

Pendo is an information technology services company that provides step-by-step guides to help workers master new software packages. The software aims to boost employee proficiency through personalized training and automated support. Founded in 2013 in Raleigh, N.C., the company has raised $209.5 million to date, according to Crunchbase. Demand for the company’s services soared in 2020 as schools shifted to online teaching and many companies permitted employees to work from home.

“More people are using digital products. Many had planned to go digital but they could not afford to wait. That created opportunities for us,” stated Todd Olson, cofounder and CEO, in an account in Newsweek. The company now has about 2,000 customers, including Verizon, RE/MAX, Health AB, John Wiley & Sons, LabCorp, Mercury Insurance, OpenTable, Okta, Salesforce and Zendesk. The company plans to hire 400 more employees this year to fuel its growth as it invests in its presence overseas in an effort to win more large customers. The company recently had 169 open positions.

Ravi Kumar, President, Infosys

Infosys is a multinational IT services company headquartered in India that is expanding its workforce in North America. The company recently announced it would be hiring 500 people in Calgary, Alberta, Canada over the next three years, which would double its Canadian workforce to 4,000 employees. “Calgary is a natural next step of our Canadian expansion. The city is home to a thriving talent pool. We will tap into this talent and offer skills and opportunities that will build on the city’s economic strengths,” stated Ravi Kumar, President of Infosys, in a press release.

Over the last two years, Infosys has created 2,000 jobs across Toronto, Vancouver, Ottawa, and Montreal. The Calgary expansion will enable Infosys to scale work with clients in Western Canada, Pacific Northwest, and the Central United States across various industries, including natural resources, energy, media, retail, and communications. The company will hire tech talent from fourteen educational institutions across the country, including the University of Calgary, University of Alberta, Southern Alberta Institute of Technology, University of British Columbia, University of Toronto, and Waterloo. Infosys also plans to hire 300 workers in Pennsylvania as part of its US hiring strategy, recruiting for a range of opportunities across technology and digital services, administration and operations.

AI is Where the Money Is

In an analysis of millions of job postings across the US, the labor market information provider Burning Glass wanted to see which professions had the highest percentage of job postings requesting AI skills, according to an account from Dice. Data science was requested by 22.4% of the postings, by far the highest. Next was data engineer at 5.5%, database architect at 4.6% and network engineer/architect at 3.1%.

Burning Glass sees machine learning as a “defining skill” among data scientists, needed for day-to-day work. Overall, jobs requiring AI skills are expected to grow 43.4% over the next decade. The current median salary for jobs heavily using AI skills is $105,000, good compared to many other professions.

Hiring managers will test for knowledge of fundamental concepts and ability to execute. A portfolio of AI-related projects can help a candidate’s prospects.

Burning Glass recently announced an expansion and update of its CyberSeek source of information on America’s cybersecurity workforce. “These updates are timely as the National Initiative for Cybersecurity Education (NICE) Strategic Plan aims to promote the discovery of cybersecurity careers and multiple pathways to build and sustain a diverse and skilled workforce,” stated Rodney Petersen, Director of NICE, in a Burning Glass press release.

NICE is a partnership between government, academia, and the private sector focused on supporting the country’s ability to address current and future cybersecurity education and workforce challenges.

Trends for AI in 2021, as the global pandemic entered its latter stages, were highlighted in a recent account in VentureBeat as:

  • Hyperautomation, the application of AI and machine learning to augment workers and automate processes to a higher degree;
  • Ethical AI, because consumers and employees expect companies to adopt AI in a responsible manner; companies will choose to do business with partners that commit to data ethics and data handling practices that reflect appropriate values;
  • And Workplace AI, to help with transitions to new models of work, especially with knowledge workers at home; AI will be used to augment customer services agents, to track employee health and for intelligent document extraction.

Read the source articles and information in TechRepublic, in a press release from Novacoast, in Newsweek, in a press release from Infosys, in an account from Dice, in a Burning Glass press release and in an account in VentureBeat.

Source: https://www.aitrends.com/data-science/data-science-is-where-to-find-the-most-ai-jobs-and-highest-salaries/

Benefits of Using AI for Facebook Retargeting In 2021

Artificial intelligence has really transformed the state of digital marketing. A growing number of marketers are using AI to connect with customers across various platforms. This includes Facebook.

There are a lot of great reasons to integrate AI technology into your Facebook marketing campaigns. One of the benefits is that you can use retargeting. AI algorithms have made it easier to reach customers who have already engaged with your website. These users may be much more likely to convert, which will help you grow your sales and improve your brand image.

AI Can Help Improve Your Facebook Marketing Dramatically with Retargeting

Untapped resources that come from engagement can lead to a better understanding of Facebook retargeting. A SaaS SEO agency shared some solid recommendations that are a great starting point for any business. To understand Facebook retargeting with AI technology in depth, take these tips to heart when organizing your resources.

What is Facebook Retargeting?

On average, each Facebook user clicks on ads at least eight times per month. These are considered to be high intent clicks from the biggest advertising platform in the world. Even the most successful marketing and advertising campaigns miss consumers on their first run.

Retargeting uses a Facebook campaign's most essential tools to target specific people based on their most relevant data. This is one of the best ways to use data to improve your social media marketing strategies.

The data used is recycled from previous information attached to your old advertising. This includes information from apps, customer files, engagement and offline activity. Anything that has a metric attached to an individual can be used with Facebook retargeting.

How Can It Help Your Business?

There will always be missed opportunities before, during and after a marketing campaign. Reinvesting the data gained from the previous campaign prevents you from starting completely over. Instead of starting from scratch, you’ll gain a clear insight into what gets consumers to cross the finish line at checkout. Retargeting is meant to be a powerful tool that thrives on previous data that would otherwise go unnoticed.

Retention comes into play, but doesn’t make up the entire story of retargeting on Facebook. You can run a retargeting campaign and only look into new consumers. It’s flexible, and meant to enhance your business based on your specific needs.

The Different Types of Retargeting

The two main types of Facebook retargeting are list-based and pixel-based. Each serves a purpose, with their own specific pros and cons.


Pixel-based retargeting uses JavaScript code to attach a cookie to each unique person that visits your website. After the visitor leaves, the cookie sends its data to your ad provider for a personalized experience. This is the most common type of retargeting used on Facebook, and is often used in other parts of the internet. Microsoft has shown favoritism to pixel-based retargeting by using its own modified version.

List-based retargeting is a limited but fascinating concept. It uses the data you already have on hand to create a specialized list that Facebook uses to show ads. This method works on many of the major social media platforms, but has shown significant advantages on Facebook. Since list-based targeting uses email lists as its base, companies are at the mercy of that particular resource. An outdated or inaccurate email list will lead to low quality retargeting efforts.

When relying on list-based retargeting, a larger email list is not always a guaranteed win for a company.

Upselling and Cross Selling

Even when the customer is happy, proving the value of an upsell is an ongoing process. This led to a rise in cross selling, but was only beneficial to companies that had the resources. As you reconnect with old and new customers, upselling or cross selling becomes part of the closing process.

Both methods are difficult, but become trivial once you have the data to back up your new campaign. Most companies see an increase in profits in a short amount of time. This makes Facebook retargeting a valuable way to test drive upselling and cross selling methods.

Brand Awareness

Brand awareness is the golden goose that all businesses constantly chase. Once you have a notable brand, it becomes the identity of your entire company. Protecting the brand is important, and sometimes entire marketing campaigns are launched to reinvigorate the company image. So, how does Facebook retargeting work its way into this?

Facebook lookalike audiences emerged when companies wanted to reach new customers with interests and habits similar to those of their current best customers. Creating a lead that finds this new audience is possible when brand awareness reaches its peak. If you want to keep brand awareness high, Facebook retargeting does all of the tough work while increasing your reach to new consumers.

Improve Conversions

Conversions are hard to pull off without a specific time investment. All of that goes to waste if you're not positioning yourself to use previous data to convert customers. No matter how visitors arrive at your website, their presence is proof of an interest in purchasing a product or service.

If they leave without making a purchase, it’s up to you to figure out why. A lost sale is not the same as losing a customer. Being able to convert that customer into an actual sale is a major strength of retargeting. And even if it’s unsuccessful, you’ll be able to use the additional data to convert another customer.

Influence Buying Decisions

When a consumer becomes firm in their buying decision, then your influence gains a significant bump. At this point your retargeting is directly influencing the buying decisions of individuals or groups. You’ll see a visual representation of this with online feedback and testimonials. All of the positive information provided comes from consumers that are satisfied with the entire sales experience.

Even the negative feedback plays a role, and can serve as the proof you need to reuse data to improve a weak point in your marketing. When a company puts effort into retargeting their ads, they gain monumental increases in customer conversions, ad recognition, clicks, sales and branded searches.

Remarketing Vs. Retargeting

Learn the difference between retargeting and remarketing. Retargeting gains the attention of interested customers that never purchased your products or services. Remarketing leans more towards gaining the attention of inactive or lost customers. Don’t make the mistake of running a retargeting campaign when remarketing would work better. The good news is that the data used from one is still essential for the other. An email list with decent accuracy can be a valuable asset for remarketing or retargeting.

Making the Right Choice

Facebook retargeting should be a priority with how you manage your data collection. Embracing its use will optimize the most important part of your business. Once you get the hang of things, your ROI will reach a whole new level.

Source: https://www.smartdatacollective.com/benefits-of-using-ai-for-facebook-retargeting/

Big bank demand for AI talent outpaces supply

Demand for artificial intelligence (AI) experts at financial institutions continues to grow, with banks looking to leverage digital channels and incorporate data-driven analytics into their workflows as the U.S. economy moves toward reopening. American Express and Wells Fargo led the pack among financial services in AI-related job offerings posted in the past quarter, with the […]

Source: https://bankautomationnews.com/allposts/center-of-excellence/big-bank-demand-for-ai-talent-outpaces-supply/
