Success with automation and AI requires a high ‘RQ’

Companies know that a high IQ can help drive business value. But the analyst outfit Forrester Research believes that if companies are going to successfully work side by side with artificially intelligent systems, they’re also going to need a high “RQ.”

RQ, or robotics quotient, is a measurement of how competent a company will be at automation and AI implementation. The Forrester assessment is based on three main areas: people, leadership and organizational structures. A fourth area, trust, will influence the three main categories and change depending on the type of technology being deployed.

J.P. Gownder, a Forrester analyst serving CIOs, described RQ as the “human contribution” companies need when deploying automation and AI technologies. “It’s not just about the bots; it’s not just about artificial intelligences,” he said in a July presentation at the New Tech and Innovation 2018 conference in Boston. “It’s about real people, real leaders and real organizational structures that you need to put in place to make sure you’re most likely to succeed.”

Toronto moments

Automation and AI technologies are on a spectrum from more deterministic, where A always leads to B, to more probabilistic, where A could lead to B but could also lead to C or to D.

And these probabilistic systems create a new wrinkle for companies: No matter how swanky the user interface or how cutting-edge the technology, probabilistic systems can produce incorrect — and even illogical — results that can erode the trust humans have in the machine’s abilities.

Gownder pointed to IBM Watson as an example. During its Jeopardy! debut in 2011, Watson answered a final question about U.S. cities with “Toronto,” causing the audience to gasp. When researchers did a post-mortem, it became clear that even Watson doubted the response. Using probabilistic judgment, the machine determined that Toronto had only a 30% chance of being correct, but it was the best answer it could come up with at the time.

These “Toronto moments,” as Forrester now refers to them, “teach us something about the intersection between human beings and AI and the trust that is part of this,” Gownder said.

The more probabilistic a system is, the more human intervention it might need. But designing systems and processes that strike a balance between trust and intervention will be a challenging step for companies. That’s where Forrester believes RQ will come in handy.

What is RQ?

The robotics quotient is a self-assessment that “measures the ability of individuals and organizations to learn and adapt to and collaborate with automated entities,” Gownder said. It’s composed of 39 characteristics that Forrester regards as a collection of automation and AI best practices.

[Figure: Forrester Research’s robotics quotient (RQ) “PLOT” framework: people, leadership, organizational structures, and trust. Credit: J.P. Gownder]

The higher the score, the more prepared a company is to tackle the new challenges that come with automation and AI technologies. But RQ doesn’t just measure readiness, according to Gownder. It also enables CIOs to “identify gaps or areas where you need to prioritize resources before you make a big bet on automation and AI,” he said.

The 39 characteristics fall into one of three categories — people, leadership and organizational structure. People, for example, are measured across different dimensions — such as facilitation, which considers how effective an employee might be at communicating with an automated entity, and perception, which includes things like basic digital literacy and “constructive ambition,” or an eagerness to learn.

For leaders, RQ highlights vision, adaptability, the ability to inspire trust, and influence. That influence must extend beyond IT employees: CIOs will need to sway the C-suite and even the board of directors to secure the budget, buy-in and support that automation and AI tools can demand. “The CIO is no longer a benign dictator who has all the power,” Gownder said. “This is the creation of an ecosystem across business units with lots of participation from the workers themselves.”

Organizational structures will also need to adapt. Automation and AI may require new titles such as bot manager, new training and mentoring opportunities for humans and machines alike, new processes that encourage human-machine team creation, and new metrics. “After all, we can have all the good intentions, and the well-educated employees and the leaders who are on board,” Gownder said, “but if we do not create structures, processes and budgets — the b word — we’re going to have a hard time getting this through.”

Don’t forget about trust

The categories of people, leadership and the organization are then measured against one final category — trust. Gownder called trust “a multiplier in this model.” Automation and AI technologies exist on a spectrum from transparent to opaque, and where the technology falls on that spectrum will influence employee trust.

“If you’re implementing something that is very transparent, that is very deterministic, your employees will bring a high level of inherent trust to the machine. They’re used to these sorts of systems,” Gownder said. “If you’re using probabilistic systems, where the machine is often uncertain of its results, then you’re going to have a higher burden of RQ investment.”

Forrester’s model breaks down the complexity of trust by providing a numeric value for how deterministic the technology is, how transparent it is, and how much change it could bring to the workplace.

The effect of automation and AI on the workplace could be a sensitive area for leaders, especially as these technologies instigate changes in the workforce. “As you might imagine, when employees are losing their jobs as part of a deployment of automation, you magnify the mistrust among remaining employees,” Gownder said. “It raises the bar for the change management.”

But the efforts could be worthwhile. As repetitive tasks become automated, job satisfaction generally goes up, Gownder said. And although AI remains in its early stages, it is poised to transform how companies operate and interact with customers.

Whether companies choose Forrester’s RQ method or not, Gownder argued that an organizational competency in AI and automation is needed.

“If you want to be successful in creating a mixed workforce that incorporates digital workers, human workers, lots of automated processes, lots of probabilities, lots of real-time data and AI, you’re going to have to measure your people, your leaders, your organization and the inherent trust that is associated with technology,” he said.

Source: https://searchcio.techtarget.com/feature/Success-with-automation-and-AI-requires-a-high-RQ


Understanding dimensionality reduction in machine learning models

Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature that you add to your problem adds to its complexity, making it harder to solve with machine learning algorithms. To cope, data scientists use dimensionality reduction, a set of techniques that removes excessive and irrelevant features from machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

The curse of dimensionality

Machine learning models map features to outcomes. For instance, say you want to create a model that predicts the amount of rainfall in one month. You have a dataset of measurements collected from different cities in different months. The data points include temperature, humidity, city population, traffic, number of concerts held in the city, wind speed, wind direction, air pressure, number of bus tickets purchased, and the amount of rainfall. Obviously, not all this information is relevant to rainfall prediction.

Some of the features might have nothing to do with the target variable. Evidently, population and number of bus tickets purchased do not affect rainfall. Other features might be correlated to the target variable, but not have a causal relation to it. For instance, the number of outdoor concerts might be correlated to the volume of rainfall, but it is not a good predictor for rain. In other cases, such as carbon emissions, there might be a link between the feature and the target variable, but the effect will be negligible.

In this example, it is evident which features are valuable and which are useless. In other problems, excessive features might not be obvious and might require further data analysis.

But why bother to remove the extra dimensions? When you have too many features, you’ll also need a more complex model. A more complex model means you’ll need a lot more training data and more compute power to train your model to an acceptable level.

And since machine learning has no understanding of causality, models try to map any feature included in their dataset to the target variable, even if there’s no causal relation. This can lead to models that are imprecise and erroneous.

On the other hand, reducing the number of features can make your machine learning model simpler, more efficient, and less data-hungry.

The problems caused by too many features are often referred to as the “curse of dimensionality,” and they’re not limited to tabular data. Consider a machine learning model that classifies images. If your dataset is composed of 100×100-pixel images, then your problem space has 10,000 features, one per pixel. However, even in image classification problems, some of the features are excessive and can be removed.

Dimensionality reduction identifies and removes the features that are hurting the machine learning model’s performance or aren’t contributing to its accuracy. There are several dimensionality reduction techniques, each of which is useful for certain situations.

Feature selection

A basic and very efficient dimensionality reduction method is to identify and select a subset of the features that are most relevant to the target variable. This technique is called “feature selection.” Feature selection is especially effective when you’re dealing with tabular data in which each column represents a specific kind of information.

When doing feature selection, data scientists aim to keep features that are highly correlated with the target variable and that contribute the most to the dataset’s variance. Libraries such as Python’s Scikit-learn have plenty of good functions to analyze, visualize, and select the right features for machine learning models.

For instance, a data scientist can use scatter plots and heatmaps to visualize the covariance of different features. If two features are highly correlated to each other, then they will have a similar effect on the target variable, and including both in the machine learning model will be unnecessary. Therefore, you can remove one of them without causing a negative impact on the model’s performance.

[Figure: A heatmap of pairwise feature covariances. Heatmaps are a good guide to finding and culling excessive features.]
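As a rough sketch of that workflow (the dataset and column names here are hypothetical), pandas and seaborn make the pairwise check straightforward:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical weather dataset with the candidate features discussed above.
df = pd.read_csv("rainfall.csv")

# Pairwise correlation matrix of the features (excluding the target).
corr = df.drop(columns=["rainfall"]).corr()
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()

# If two features move together almost perfectly (say, humidity and
# air_pressure in this made-up data), one of them can be dropped
# without hurting the model.
df = df.drop(columns=["air_pressure"])
```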

The same tools can help visualize the correlations between the features and the target variable. This helps remove variables that do not affect the target. For instance, you might find out that out of 25 features in your dataset, seven of them account for 95 percent of the effect on the target variable. This will enable you to shave off 18 features and make your machine learning model a lot simpler without suffering a significant penalty to your model’s accuracy.
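Continuing the hypothetical rainfall example, scikit-learn can automate this kind of cut. Here is a minimal sketch using univariate scores (the scoring function and the choice of k = 7 are assumptions):

```python
from sklearn.feature_selection import SelectKBest, f_regression

X = df.drop(columns=["rainfall"])
y = df["rainfall"]

# Keep the 7 features with the strongest univariate relationship
# to the target and discard the rest.
selector = SelectKBest(score_func=f_regression, k=7)
X_selected = selector.fit_transform(X, y)

# Inspect which columns survived the cut.
print(X.columns[selector.get_support()])
```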

Projection techniques

Sometimes, you don’t have the option to remove individual features. But this doesn’t mean that you can’t simplify your machine learning model. Projection techniques, also known as “feature extraction,” simplify a model by compressing several features into a lower-dimensional space.

A common example used to illustrate projection techniques is the “Swiss roll” (pictured below), a set of data points that swirl around a focal point in three dimensions. This dataset has three features. The value of each point (the target variable) is measured by how far along the convoluted path it lies from the center of the Swiss roll. In the picture below, red points are closer to the center and yellow points are farther along the roll.

[Figure: The Swiss roll dataset in three dimensions.]

In its current state, creating a machine learning model that maps the features of the Swiss roll points to their value is a difficult task and would require a complex model with many parameters. But with the help of dimensionality reduction techniques, the points can be projected to a lower-dimensional space that can be learned with a simple machine learning model.

There are various projection techniques. In the case of the above example, we used “locally linear embedding” (LLE), an algorithm that reduces the dimension of the problem space while preserving the key elements that separate the values of data points. When our data is processed with LLE, the result looks like the following image, which is like an unrolled version of the Swiss roll. As you can see, points of each color remain together. In fact, this problem can still be simplified into a single feature and modeled with linear regression, the simplest machine learning algorithm.

[Figure: The Swiss roll projected onto two dimensions with locally linear embedding.]
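Here is a minimal sketch of this unrolling with scikit-learn (the sample size and neighborhood settings are assumptions, not necessarily those behind the figures):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Generate the classic Swiss roll: X has three features, and t measures
# each point's position along the roll (the color in the figures above).
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=42)

# Project the 3D points down to 2D while preserving local neighborhoods.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10, random_state=42)
X_unrolled = lle.fit_transform(X)
```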

While this example is hypothetical, you’ll often face problems that can be simplified by projecting the features to a lower-dimensional space. For instance, “principal component analysis” (PCA), a popular dimensionality reduction algorithm, has found many practical applications in simplifying machine learning problems.

In the excellent book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, data scientist Aurélien Géron shows how you can use PCA to reduce the MNIST dataset from 784 features (28×28 pixels) to 150 features while preserving 95 percent of the variance. This level of dimensionality reduction has a huge impact on the costs of training and running artificial neural networks.

[Figure: Dimensionality reduction applied to the MNIST dataset.]
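A rough sketch of that reduction with scikit-learn (this is not Géron’s exact code; the fetch call and settings are assumptions):

```python
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

# Load MNIST: 70,000 images, each flattened into 784 pixel features.
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X = mnist.data

# A float in (0, 1) tells PCA to keep just enough principal
# components to preserve that fraction of the dataset's variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # roughly (70000, 150): far fewer than 784 features
```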

There are a few caveats to consider about projection techniques. Once you fit a projection, you must transform new data points into the lower-dimensional space before running them through your machine learning model. However, the cost of this preprocessing step is small compared to the gains of having a lighter model. A second consideration is that transformed data points are not directly representative of their original features, and transforming them back to the original space can be tricky and in some cases impossible. This can make it difficult to interpret the inferences made by your model.
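In PCA’s case, both directions look roughly like this (a sketch continuing from the fitted pca object above; x_new and model are hypothetical stand-ins):

```python
import numpy as np

# A new 28x28 image must pass through the same fitted projection
# before the downstream model sees it.
x_new = np.random.rand(784)                # stand-in for a real image
x_new_reduced = pca.transform(x_new.reshape(1, -1))
prediction = model.predict(x_new_reduced)  # hypothetical trained model

# PCA supports an approximate inverse, but the reconstruction is lossy,
# and many projection techniques have no inverse at all.
x_approx = pca.inverse_transform(x_new_reduced)
```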

Dimensionality reduction in the machine learning toolbox

Having too many features will make your model inefficient, but removing too many features will not help either. Dimensionality reduction is one among many tools data scientists can use to make better machine learning models, and as with every tool, it must be used with caution and care.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Source: https://venturebeat.com/2021/05/16/understanding-dimensionality-reduction-in-machine-learning-models/



How AIOps can benefit businesses


“AIOps,” which stands for “AI for IT operations,” refers to the way data and information from a dev environment are managed by an IT team — in this case, using AI. AIOps platforms leverage big data, machine learning, and analytics to enhance IT operations via monitoring, automation, and service desk functions with proactive and personal insights, enabling the use of multiple data sources and data collection methods. In theory, AIOps can provide faster resolutions to outages and other performance problems, in the process decreasing the costs associated with IT challenges.

The benefits of AIOps are driving enterprise adoption. Eighty-seven percent of respondents to a recent OpsRamp survey agree that AIOps tools are improving their data-driven collaboration, and Gartner predicts that AIOps service usage will rise from 5% in 2018 to 30% in 2023.

But when deploying an AIOps solution, businesses without a clear idea of potential blockers can run into challenges. That’s why it’s important to have a holistic understanding of AIOps before formulating a strategy.

What is AIOps?

AIOps platforms collect data from various IT operations tools in order to automatically spot issues while providing historical analytics. They typically have two components — big data and machine learning — and require a move away from siloed IT data in order to aggregate observational data alongside the engagement data in ticket, incident, and event recording.

As Seth Paskin, director of operations at BMC Software, writes: “The outcomes IT professionals expect from AIOps can be categorized generally as automation and prediction … Their first expectation from AIOps is that it will allow them to automate what they are currently doing manually and thus increase the speed at which those tasks are performed. Some specific examples I’ve heard include: correlate customer profile information with financial processing applications and infrastructure data to identify transaction duration outliers and highlight performance impacting factors; evaluate unstructured data in service tickets to identify problem automation candidates; categorize workloads for optimal infrastructure placement; and correlate incidents with changes, work logs, and app dev activities to measure production impact of infrastructure and application changes.”

An AIOps platform canvasses data on logs, performance alerts, tickets, and other items using an auto-discovery process that automatically collects data across infrastructure and application domains. The process identifies infrastructure devices, running apps, and business transactions and correlates all the data in a contextual form. Automatic dependency mapping determines the relationships between elements such as the physical and virtual connections at the networking layer by mapping app flows to the supporting infrastructure and between the business transactions and the apps.

AIOps’ automated dependency mapping has another benefit: helping to track relationships between hybrid infrastructure entities. AIOps platforms can create service and app topology maps across technology domains and environments, allowing IT teams to accelerate incident response and quantify the business impact of outages.

To identify patterns and predict future events, like service outages, AIOps employs supervised learning, unsupervised learning, and anomaly detection based on expected behaviors and thresholds. Particularly useful is unsupervised machine learning, which enables AIOps platforms to learn to recognize expected behavior and set thresholds across data and performance metrics. The platforms can analyze event patterns in real time and compare those to expected behavior, alerting IT teams when a sequence of events (or groups of events) demonstrates activity that indicates anomalies are present.
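As a minimal sketch of that unsupervised idea (not any vendor’s implementation; the metrics and settings are hypothetical), an isolation forest can learn expected behavior from historical telemetry and flag departures from it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: one row per minute with columns for
# CPU utilization (%), request latency (ms), and error rate (%).
rng = np.random.default_rng(0)
history = rng.normal(loc=[40.0, 120.0, 0.5],
                     scale=[5.0, 15.0, 0.1],
                     size=(10_000, 3))

# Unsupervised training: no labeled outages, just past behavior.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(history)

# Score incoming events as they arrive; -1 flags an anomaly.
incoming = np.array([[42.0, 118.0, 0.4],    # looks normal
                     [95.0, 900.0, 7.2]])   # latency and error spike
print(detector.predict(incoming))  # e.g. [ 1 -1]
```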

The insights from AIOps platforms can be turned into a range of intelligent actions performed automatically, from expediting service desk requests to end-to-end provisioning to deployment of network, compute, cloud, and applications. In sum, AIOps brings together data from both IT operations management and IT service management, allowing security teams to observe, engage, and act on issues more efficiently than before.

Challenges

Not every AIOps deployment goes as smoothly as planned. Challenges can stand in the way, including poor-quality data and IT team errors. Employees sometimes struggle to learn how to use AIOps tools, and handing over control to autonomous systems can raise concerns among the C-suite. Moreover, adopting new AIOps solutions can be time-consuming — a majority of respondents to the OpsRamp survey said it takes three to six months to implement an AIOps solution, with 25% saying it takes more than six months.

Because AIOps platforms rely so heavily on machine learning, challenges in data science can impact the success of AIOps strategies. For example, getting access to quality data to train machine learning systems isn’t easy. In a 2021 Rackspace Technology survey, 34% of respondents cited poor data quality as the main reason for machine learning R&D failure, and 31% said they lacked production-ready data.

Beyond data challenges, the skills gap also presents a barrier to AIOps adoption. A majority of respondents in a 2021 Juniper report said their organizations were struggling with expanding their workforce to integrate with AI systems. Laments over the AI talent shortage have become a familiar refrain from private industry — O’Reilly’s 2021 AI Adoption in the Enterprise paper found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a “significant” blocker.

Unrealistic expectations from the C-suite are another top reason for failure in machine learning projects. While 9 in 10 C-suite survey respondents characterized AI as the “next technological revolution,” according to Edelman, Algorithmia found that a lack of executive buy-in contributes to delays in AI deployment.

Benefits

Successfully adopting AIOps isn’t a sure-fire thing, but many businesses find the benefits worth wrestling with the challenges. AIOps systems reduce the torrent of alerts that inundate IT teams and learn over time which types of alerts should be sent to which teams, reducing redundancy. They can be used to handle routine tasks like backups, server restarts, and low-risk maintenance activities. And they can predict events before they occur, such as when network bandwidth is reaching its limit.

As Accenture explains in a recent whitepaper, AIOps ultimately improves an IT organization’s ability to be an effective partner to the business. “An IT operations platform with built-in AIOps capabilities can help IT operations proactively identify potential issues with the services and technology it delivers to the business and correct them before they become problems,” the consultancy wrote. “That’s the value of having a single data model that service and operations management applications can share seamlessly.”

Source: https://venturebeat.com/2021/05/16/how-aiops-can-benefit-businesses/


Artificial Intelligence Vs Machine Learning Vs Deep Learning: What exactly is the difference?

Source: https://www.analyticsvidhya.com/blog/2021/05/ai-ml-dl/


Progressive Growing GAN (ProGAN)

Source: https://www.analyticsvidhya.com/blog/2021/05/progressive-growing-gan-progan/
