Big Data

AI-enabled enterprise starts with education, not tech

The AI-enabled enterprise won’t be built in a day. Take it from representatives at companies knee-deep in building AI hardware, software and services for their customers and clients, including IBM, Affectiva Inc. and Grant Thornton LLP.

At the recent AI and the Future of Work event hosted by MIT, these representatives provided advice on how CIOs can start to build the AI-enabled enterprise — as quickly as tomorrow morning. One of the first steps they suggested CIOs take? Get caught up on what the AI terrain looks like.

“The number one thing I would say is to invest the time to really understand what is happening in AI,” said Nichole Jordan, managing partner of markets, clients and industry at accounting and advisory firm Grant Thornton.

AI literacy is a must

Jordan pointed to AI Magazine and O’Reilly Media’s artificial intelligence newsletter as two “simple examples” of how CIOs can incorporate AI education into their daily routines and that of their teams. She described this as just “a sprinkling,” but said the reading material can encourage discussions about artificial intelligence and how its resurgence might affect the future of the company.

Reading up on AI could be worthwhile even for the smallest organizations, according to Jordan. “It no longer requires a multimillion-dollar budget to get AI started in your organization,” she said.

Take mergers and acquisitions, which require advisors to monitor and analyze disparate and often siloed data sources such as patent filings or regulatory findings. Today, AI is doing that kind of work and even collecting metrics on company culture, customer feedback and employee engagement that it scrapes from sites such as Glassdoor.

“Over time, the AI is able to develop and monitor trends, patterns, make recommendations to you for potentially other companies to put into your acquisitions portfolio,” Jordan said. “It is about speed and accuracy and being able to analyze a lot of data that we didn’t historically have the opportunity to bring together into one place.”

Knowledge overhype

Affectiva’s Gabi Zijderveld echoed Jordan’s remarks, saying that education is a must.

“There’s so much hype and fluff around AI because every bit of technology today is [marketed as] AI,” said Zijderveld, chief marketing officer and head of product strategy at the emotion measurement company.

As CIOs familiarize themselves with what’s out there, they also need to get a grip on the appropriate opportunities AI can provide to their companies, according to Zijderveld. In Affectiva’s case, its first customers came from an obvious market segment.

Media and advertising companies began using the emotion AI technology, which can interpret facial expressions in real time, to test their content and assess audience response. These days, customers include educators who use the technology to help children with autism decode facial expressions, as well as medical care workers who can use it to detect Parkinson’s disease or as a benchmark for facial reconstruction surgery.

Zijderveld also suggested CIOs look at industry best practices, talk to their peers, find out what competitors are doing and uncover good examples of applied AI, taking note of their results and the products and technologies that drove those results.

And she provided a note of caution for CIOs: Don’t fall into the over-engineering trap. “If you have an old-fashioned ruler that does the job, maybe you don’t need AI there,” she said. “Use the damn ruler.”

Lifelong learning is key

For Sophie Vandebroek, vice president of emerging technology partnerships at IBM, building the AI-enabled enterprise means developing employee skills.

“At IBM, in fact, we are being measured to make sure we take 40 hours of education every year on these kinds of topics,” she said.

Not only is training important, but hiring and bringing in the right skills is also key, according to Vandebroek. For AI-enabled enterprises to succeed, employees who know how to use AI tools will be critical, especially as those tools become more accessible, easier to use and embedded into workflows.

Vandebroek cited IBM’s Project Debater product as an example of how AI could change workflows. The AI system has been trained to take a topic, craft an argument and debate its merits — in minutes. Vandebroek believes a technology like this could help companies work through difficult decisions they need to make, such as with an acquisition.

As part of that education, companies — from the board of directors on down — need to recognize the importance of trust and transparency, according to Vandebroek. She stressed that AI decisions should be explainable and that data privacy must be made a priority.

Source: https://searchcio.techtarget.com/news/252452985/AI-enabled-enterprise-starts-with-education-not-tech

AI

Understanding dimensionality reduction in machine learning models

Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature that you add to your problem adds to its complexity, making it harder to solve it with machine learning algorithms. Data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

The curse of dimensionality

Machine learning models map features to outcomes. For instance, say you want to create a model that predicts the amount of rainfall in one month. You have a dataset of information collected from different cities across separate months. The data points include temperature, humidity, city population, traffic, number of concerts held in the city, wind speed, wind direction, air pressure, number of bus tickets purchased, and the amount of rainfall. Obviously, not all this information is relevant to rainfall prediction.

Some of the features might have nothing to do with the target variable. Evidently, population and number of bus tickets purchased do not affect rainfall. Other features might be correlated to the target variable, but not have a causal relation to it. For instance, the number of outdoor concerts might be correlated to the volume of rainfall, but it is not a good predictor for rain. In other cases, such as carbon emission, there might be a link between the feature and the target variable, but the effect will be negligible.

In this example, it is evident which features are valuable and which are useless. In other problems, the excessive features might not be obvious and need further data analysis.

But why bother to remove the extra dimensions? When you have too many features, you’ll also need a more complex model. A more complex model means you’ll need a lot more training data and more compute power to train your model to an acceptable level.

And since machine learning has no understanding of causality, models try to map any feature included in their dataset to the target variable, even if there’s no causal relation. This can lead to models that are imprecise and erroneous.

On the other hand, reducing the number of features can make your machine learning model simpler, more efficient, and less data-hungry.

The problems caused by too many features are often referred to as the “curse of dimensionality,” and they’re not limited to tabular data. Consider a machine learning model that classifies images. If your dataset is composed of 100×100-pixel images, then your problem space has 10,000 features, one per pixel. However, even in image classification problems, some of the features are excessive and can be removed.

Dimensionality reduction identifies and removes the features that are hurting the machine learning model’s performance or aren’t contributing to its accuracy. There are several dimensionality reduction techniques, each of which is useful for certain situations.

Feature selection

A basic and very efficient dimensionality reduction method is to identify and select a subset of the features that are most relevant to the target variable. This technique is called “feature selection.” Feature selection is especially effective when you’re dealing with tabular data in which each column represents a specific kind of information.

When doing feature selection, data scientists look for two things: features that are highly correlated with the target variable and features that contribute the most to the dataset’s variance. Libraries such as Python’s Scikit-learn have plenty of good functions to analyze, visualize, and select the right features for machine learning models.

For instance, a data scientist can use scatter plots and heatmaps to visualize the covariance of different features. If two features are highly correlated to each other, then they will have a similar effect on the target variable, and including both in the machine learning model will be unnecessary. Therefore, you can remove one of them without causing a negative impact on the model’s performance.
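As a rough illustration, here is a minimal Python sketch of that covariance check, using pandas and seaborn; the rainfall-style column names and the 0.9 correlation threshold are hypothetical choices for this example, not prescriptions.

```python
# A minimal sketch of correlation-based feature selection. The dataset is
# synthetic; "wind_speed" is deliberately made nearly collinear with
# "humidity" so the check has something to find.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "temperature": rng.normal(20, 5, n),
    "humidity": rng.normal(60, 10, n),
    "air_pressure": rng.normal(1013, 8, n),
})
df["wind_speed"] = df["humidity"] * 0.5 + rng.normal(0, 1, n)

# Visualize pairwise correlations as a heatmap
corr = df.corr()
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()

# Drop one feature from every pair whose absolute correlation exceeds 0.9
upper = corr.abs().where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
print("Redundant features:", to_drop)
df_reduced = df.drop(columns=to_drop)
```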

Above: Heatmaps illustrate the covariance between different features. They are a good guide to finding and culling features that are excessive.

The same tools can help visualize the correlations between the features and the target variable. This helps remove variables that do not affect the target. For instance, you might find out that out of 25 features in your dataset, seven of them account for 95 percent of the effect on the target variable. This will enable you to shave off 18 features and make your machine learning model a lot simpler without suffering a significant penalty to your model’s accuracy.
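A sketch of how such a selection could be automated with scikit-learn’s SelectKBest follows; the synthetic dataset and k=7 simply mirror the hypothetical 25-feature example above.

```python
# A minimal sketch of univariate feature selection with scikit-learn.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression

# 25 features, only 7 of which actually drive the target
X, y = make_regression(n_samples=1000, n_features=25,
                       n_informative=7, noise=10.0, random_state=0)

# Score each feature against the target and keep the 7 strongest
selector = SelectKBest(score_func=f_regression, k=7)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)                         # (1000, 7)
print(np.flatnonzero(selector.get_support()))  # indices of the kept features
```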

Projection techniques

Sometimes, you don’t have the option to remove individual features. But this doesn’t mean that you can’t simplify your machine learning model. Projection techniques, also known as “feature extraction,” simplify a model by compressing several features into a lower-dimensional space.

A common example used to represent projection techniques is the “swiss roll” (pictured below), a set of data points that swirl around a focal point in three dimensions. This dataset has three features. The value of each point (the target variable) is measured based on how close it is along the convoluted path to the center of the swiss roll. In the picture below, red points are closer to the center and the yellow points are farther along the roll.

Swiss roll

In its current state, creating a machine learning model that maps the features of the swiss roll points to their value is a difficult task and would require a complex model with many parameters. But with the help of dimensionality reduction techniques, the points can be projected to a lower-dimension space that can be learned with a simple machine learning model.

There are various projection techniques. In the case of the above example, we used “locally linear embedding” (LLE), an algorithm that reduces the dimension of the problem space while preserving the key elements that separate the values of data points. When our data is processed with LLE, the result looks like the following image, which is like an unrolled version of the swiss roll. As you can see, points of each color remain together. In fact, this problem can be simplified even further, into a single feature, and modeled with linear regression, the simplest machine learning algorithm.

Swiss roll, projected
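Scikit-learn ships both a swiss roll generator and an LLE implementation, so the unrolling described above can be reproduced in a few lines; the neighbor count below is an arbitrary but typical choice.

```python
# A minimal sketch of unrolling the swiss roll with locally linear embedding.
import matplotlib.pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# t is the position along the roll, used here as the color/target value
X, t = make_swiss_roll(n_samples=1500, noise=0.1, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_unrolled = lle.fit_transform(X)  # 3 features -> 2 features

plt.scatter(X_unrolled[:, 0], X_unrolled[:, 1], c=t, cmap="viridis", s=5)
plt.title("Swiss roll unrolled by LLE")
plt.show()
```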

While this example is hypothetical, you’ll often face problems that can be simplified if you project the features to a lower-dimensional space. For instance, “principal component analysis” (PCA), a popular dimensionality reduction algorithm, has found many useful applications to simplify machine learning problems.

In the excellent book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, data scientist Aurélien Géron shows how you can use PCA to reduce the MNIST dataset from 784 features (28×28 pixels) to 150 features while preserving 95 percent of the variance. This level of dimensionality reduction has a huge impact on the costs of training and running artificial neural networks.

Above: Dimensionality reduction applied to the MNIST dataset.
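A minimal sketch of the same idea with scikit-learn’s PCA; to stay self-contained it uses the library’s bundled 8×8 digits dataset rather than the full MNIST download Géron works with. Passing a float to n_components keeps however many components are needed to preserve that fraction of the variance.

```python
# A minimal sketch of variance-preserving PCA.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 64 features (8x8 pixels)

pca = PCA(n_components=0.95)          # keep 95% of the variance
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)

# New data points must be projected with the same fitted PCA
# before being fed to the downstream model:
# x_new_reduced = pca.transform(x_new)
```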

There are a few caveats to consider about projection techniques. Once you develop a projection technique, you must transform new data points into the lower-dimensional space before running them through your machine learning model. However, the costs of this preprocessing step are small compared to the gains of having a lighter model. A second consideration is that transformed data points are not directly representative of their original features, and transforming them back to the original space can be tricky and in some cases impossible. This might make it difficult to interpret the inferences made by your model.

Dimensionality reduction in the machine learning toolbox

Having too many features will make your model inefficient. But removing too many features will not help either. Dimensionality reduction is one among many tools data scientists can use to make better machine learning models. And as with every tool, it must be used with caution and care.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Source: https://venturebeat.com/2021/05/16/understanding-dimensionality-reduction-in-machine-learning-models/

AI

How AIOps can benefit businesses

“AIOps,” which stands for “AI for IT operations,” refers to the way data and information from a dev environment are managed by an IT team — in this case, using AI. AIOps platforms leverage big data, machine learning, and analytics to enhance IT operations via monitoring, automation, and service desk functions, delivering proactive and personalized insights drawn from multiple data sources and data collection methods. In theory, AIOps can provide faster resolutions to outages and other performance problems, in the process decreasing the costs associated with IT challenges.

The benefits of AIOps are driving enterprise adoption. Eighty-seven percent of respondents to a recent OpsRamp survey agree that AIOps tools are improving their data-driven collaboration, and Gartner predicts that AIOps service usage will rise from 5% in 2018 to 30% in 2023.

But when deploying an AIOps solution, businesses without a clear idea of potential blockers can run into challenges. That’s why it’s important to have a holistic understanding of AIOps before formulating a strategy.

What is AIOps?

AIOps platforms collect data from various IT operations tools in order to automatically spot issues while providing historical analytics. They typically have two components — big data and machine learning — and require a move away from siloed IT data in order to aggregate observational data alongside the engagement data in ticket, incident, and event recording.

As Seth Paskin, director of operations at BMC Software, writes: “The outcomes IT professionals expect from AIOps can be categorized generally as automation and prediction … Their first expectation from AIOps is that it will allow them to automate what they are currently doing manually and thus increase the speed at which those tasks are performed. Some specific examples I’ve heard include: correlate customer profile information with financial processing applications and infrastructure data to identify transaction duration outliers and highlight performance impacting factors; evaluate unstructured data in service tickets to identify problem automation candidates; categorize workloads for optimal infrastructure placement; and correlate incidents with changes, work logs, and app dev activities to measure production impact of infrastructure and application changes.”

An AIOps platform canvasses data on logs, performance alerts, tickets, and other items using an auto-discovery process that automatically collects data across infrastructure and application domains. The process identifies infrastructure devices, running apps, and business transactions and correlates all the data in a contextual form. Automatic dependency mapping determines the relationships between elements such as the physical and virtual connections at the networking layer by mapping app flows to the supporting infrastructure and between the business transactions and the apps.

AIOps’ automated dependency mapping has another benefit: helping to track relationships between hybrid infrastructure entities. AIOps platforms can create service and app topology maps across technology domains and environments, allowing IT teams to accelerate incident response and quantify the business impact of outages.
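To make the idea concrete, here is a small, entirely hypothetical sketch of a service topology map represented as a directed graph (using networkx); real AIOps platforms build such maps automatically and at far larger scale.

```python
# A hypothetical service topology map as a directed graph. All entity
# names are made up for illustration.
import networkx as nx

topo = nx.DiGraph()
# Edges point from an element to the elements that depend on it
topo.add_edges_from([
    ("switch-01", "vm-app-01"),
    ("switch-01", "vm-db-01"),
    ("vm-app-01", "orders-service"),
    ("vm-db-01", "orders-service"),
    ("orders-service", "checkout-transaction"),
])

# If switch-01 fails, every downstream entity is potentially impacted
impacted = nx.descendants(topo, "switch-01")
print("Impact of switch-01 outage:", sorted(impacted))
```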

To identify patterns and predict future events, like service outages, AIOps employs supervised learning, unsupervised learning, and anomaly detection based on expected behaviors and thresholds. Particularly useful is unsupervised machine learning, which enables AIOps platforms to learn to recognize expected behavior and set thresholds across data and performance metrics. The platforms can analyze event patterns in real time and compare those to expected behavior, alerting IT teams when a sequence of events (or groups of events) demonstrates activity that indicates anomalies are present.
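As an illustration of the underlying technique rather than any vendor’s implementation, here is a minimal unsupervised anomaly-detection sketch using scikit-learn’s IsolationForest on made-up performance metrics.

```python
# A minimal sketch of unsupervised anomaly detection on two synthetic
# performance metrics: CPU utilization (%) and request latency (ms).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Metrics collected under normal operation
normal = np.column_stack([rng.normal(40, 5, 1000),     # CPU %
                          rng.normal(120, 15, 1000)])  # latency (ms)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming events; -1 flags an anomaly worth alerting on
events = np.array([[42.0, 118.0],    # typical reading
                   [95.0, 900.0]])   # CPU spike plus latency blowup
print(model.predict(events))         # e.g. [ 1 -1]
```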

The insights from AIOps platforms can be turned into a range of intelligent actions performed automatically, from expediting service desk requests to end-to-end provisioning to deployment of network, compute, cloud, and applications. In sum, AIOps brings together data from both IT operations management and IT service management, allowing security teams to observe, engage, and act on issues more efficiently than before.

Challenges

Not every AIOps deployment goes as smoothly as planned. Challenges can stand in the way, including poor-quality data and IT team errors. Employees sometimes face difficulty in learning how to use AIOps tools, and handing over control to autonomous systems can raise concerns among the C-suite. Moreover, adopting new AIOps solutions can be time-consuming — a majority of respondents to the OpsRamp survey said it takes three to six months to implement an AIOps solution, with 25% saying it takes more than six months.

Because AIOps platforms rely so heavily on machine learning, challenges in data science can impact the success of AIOps strategies. For example, getting access to quality data to train machine learning systems isn’t easy. According to a 2021 Rackspace Technology survey, 34% of respondents cited poor data quality as the main reason for machine learning R&D failure, and 31% said they lacked production-ready data.

Beyond data challenges, the skills gap also presents a barrier to AIOps adoption. A majority of respondents in a 2021 Juniper report said their organizations were struggling with expanding their workforce to integrate with AI systems. Laments over the AI talent shortage have become a familiar refrain from private industry — O’Reilly’s 2021 AI Adoption in the Enterprise paper found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a “significant” blocker.

Unrealistic expectations from the C-suite are another top reason for failure in machine learning projects. While 9 in 10 C-suite survey respondents characterized AI as the “next technological revolution,” according to Edelman, Algorithmia found that a lack of executive buy-in contributes to delays in AI deployment.

Benefits

Successfully adopting AIOps isn’t a sure-fire thing, but many businesses find the benefits worth wrestling with the challenges. AIOps systems reduce the torrent of alerts that inundate IT teams and learn over time which types of alerts should be sent to which teams, reducing redundancy. They can be used to handle routine tasks like backups, server restarts, and low-risk maintenance activities. And they can predict events before they occur, such as when network bandwidth is approaching its limit.

As Accenture explains in a recent whitepaper, AIOps ultimately improves an IT organization’s ability to be an effective partner to the business. “An IT operations platform with built-in AIOps capabilities can help IT operations proactively identify potential issues with the services and technology it delivers to the business and correct them before they become problems,” the consultancy wrote. “That’s the value of having a single data model that service and operations management applications can share seamlessly.”

Source: https://venturebeat.com/2021/05/16/how-aiops-can-benefit-businesses/

Big Data

Artificial Intelligence Vs Machine Learning Vs Deep Learning: What exactly is the difference?

Source: https://www.analyticsvidhya.com/blog/2021/05/ai-ml-dl/

Big Data

Progressive Growing GAN (ProGAN)

Source: https://www.analyticsvidhya.com/blog/2021/05/progressive-growing-gan-progan/
