
Determining the ROI of AI Projects, a Key to Success


Determining how to measure the ROI of an AI project at the outset is being recognized as a best practice towards a successful deployment. (GETTY IMAGES)

By John P. Desmond, Editor, AI Trends

Best practice for ensuring that an AI project will achieve a return for the business centers on determining, at the outset, how the return on investment will be measured.

The evidence shows it will be time well spent. An estimated 87% of data science projects never make it to the production stage, and 56% of global CEOs expect it to take three to five years to see any real ROI on AI investments, according to a recent account in Forbes.

Like any other technology investment, business leaders need to define the specific goals of an AI project and commit to tracking them with benchmarks and key performance indicators, suggested author Mark Minevich, Advisor to Boston Consulting Group, venture capitalist and cognitive strategist. The company needs to think through the types of business problems that can be addressed with AI, so that it neither sets unrealistic expectations nor sends the AI off in search of a business problem to solve.

Mark Minevich, Advisor to Boston Consulting Group, venture capitalist and cognitive strategist

Figuring out how to assign the people needed to help with the project is crucial. Some companies are using virtual teams, where data scientists might work with an operations team two days a week. Breaking down organizational silos so that the various stakeholders can interact and collaborate is a critical enabler of an AI project.

Employees also need to be prepared. Investments must be made in reskilling employees in AI, including training managers to work in cross-functional teams across operations.

Every company engaged in AI projects is challenged to measure ROI. Author Minevich suggests focusing on what the project will save instead of potential revenue growth. “How much you invest in AI should be based on these saving forecasts and not revenue uplift,” he stated. That way, “If the deployment is not successful, the organization will have risked only what it expected to save, rather than risking what it expected to add in revenue.”

He also suggested knowing in advance where the break-even point will be: the point at which the project's cumulative cost savings equal the investment. Many organizations struggle to predict the break-even point, but working from cost savings allows a reasonable prediction to be made at the outset.
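
To make the break-even idea concrete, here is a minimal sketch in Python; all figures are hypothetical, and the calculation simply finds the first month in which cumulative net savings cover the up-front investment.

```python
# Hypothetical figures for illustration only; substitute your own forecasts.
upfront_investment = 500_000   # initial build cost (USD)
monthly_run_cost = 20_000      # ongoing cost to operate the system (USD)
monthly_cost_savings = 65_000  # forecast operational savings (USD)

def months_to_break_even(upfront, run_cost, savings, horizon=60):
    """Return the first month in which cumulative net savings cover the investment."""
    cumulative = 0.0
    for month in range(1, horizon + 1):
        cumulative += savings - run_cost
        if cumulative >= upfront:
            return month
    return None  # the project does not break even within the horizon

print(months_to_break_even(upfront_investment, monthly_run_cost, monthly_cost_savings))
# -> 12: roughly a year to break even with these made-up numbers
```

Sizing the investment against the savings forecast in this way keeps the downside bounded, in line with Minevich's suggestion.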

Some Dramatic Returns are Being Seen

While some 40% of organizations making significant investments in AI projects are not reporting business gains, others are seeing dramatic returns, according to a recent account from KungFuAI.

The top reasons AI projects fail were found to be: lack of vision, meaning the project lacked a clear business purpose or was not rooted in a problem with a known business case worth solving; bad data, with no way to collect, store and make relevant data accessible; a company culture that does not embrace emerging technologies in operations and lacks strong data literacy; and not enough patience, with results expected too soon.

Suggestions: pick business problems or challenges that are easy to measure; deploy for targeted use cases; include many stakeholders; and track small milestones. A narrow solution can still contribute to the business case, even if it does not solve the whole problem.

Customer Experience Experts Face Challenge of AI ROI

The biggest barrier to implementation of AI for experts in customer experience is determining ROI, according to a recent account from CX Network. The organization surveyed 102 customer experience experts; 36% cited this challenge, followed by 35% who named company culture as the major impediment and 31% who pointed to competing priorities.

Jon Stanesby, director of product strategy for AI Applications, Oracle

“Innovation of any kind cannot be expected to generate returns right away, so if you are embarking on something brand new ensure you have the buy-in from leadership. This includes the flexibility to fail a little along the way,” stated Jon Stanesby, director of product strategy for AI Applications at Oracle, on linking initiatives to ROI. He also suggested factoring in the reduced effort required of human workers once the AI is deployed.

McKinsey has estimated that AI can deliver additional global economic activity of about $13 trillion by 2030, amounting to an additional 1.2% of GDP growth each year, a rate comparable to the effect of other revolutionary innovations. The returns are out there for those who set up their AI projects to be successful.

Read the source articles in Forbes, at KungFuAI and in CX Network.

Source: https://www.aitrends.com/ai-and-business-strategy/determining-the-roi-of-ai-projects-a-key-to-success/


XBRL: scrapping quarterlies, explaining AI and low latency reporting


Here is our pick of the 3 most important XBRL news stories this week.

1 FDIC considers scrapping quarterly bank reports

The Federal Deposit Insurance Corp. is moving to modernize the way it monitors for risks at thousands of U.S. banks, potentially scrapping quarterly reports that have been a fixture of oversight for more than 150 years yet often contain stale data.

The FDIC has long been one of the cheerleaders and case studies for the efficiency-increasing impact of XBRL-based reporting. It will therefore be fascinating to observe this competition and its outcome.

2 XBRL data feeds explainable AI models

Amongst several fascinating presentations at the Eurofiling Innovation Day this week was an interesting demonstration on how XBRL reports can be used as the basis of explainable AI for bankruptcy prediction.

The black box nature of many AI models is one of the biggest issues in applying AI in regulated environments, where causal linkages are the bedrock of litigation and the like. Making models explainable would remove a major headache for many use cases.
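
The Eurofiling demonstration itself is not reproduced here, but a minimal sketch of the general idea might look like the following: tagged XBRL facts are flattened into named financial ratios (the feature names and data below are invented), and an inherently interpretable model such as logistic regression exposes how each ratio drives a bankruptcy prediction.

```python
# Sketch only: assumes XBRL filings have already been flattened into a feature table.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical ratios derived from tagged XBRL facts, plus a bankruptcy label.
data = pd.DataFrame({
    "current_ratio":    [1.8, 0.6, 2.1, 0.4, 1.1, 0.7],
    "debt_to_equity":   [0.5, 3.2, 0.4, 4.1, 1.0, 2.8],
    "return_on_assets": [0.07, -0.12, 0.09, -0.20, 0.02, -0.05],
    "went_bankrupt":    [0, 1, 0, 1, 0, 1],
})

X, y = data.drop(columns="went_bankrupt"), data["went_bankrupt"]
model = LogisticRegression().fit(X, y)

# Because the model is linear, each coefficient is directly inspectable:
# a positive weight pushes the prediction toward "bankrupt".
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:>18}: {coef:+.3f}")
```

The point is only that a model whose weights can be read off directly speaks to the causal-linkage concern; a real bankruptcy model would of course use far more filings and features.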

3 Low latency earnings press release data

Standardized financials from earnings press releases and 8-Ks are now available via the Calcbench API minutes after they are published. Calcbench is leveraging its expertise in XBRL to get many of the numbers from the Income Statement, Balance Sheet and Statement of Cash Flows in the earnings press release or 8-K.

The time lag between the publication of earnings information and its availability in XBRL format will remain a roadblock to wholesale adoption of XBRL by financial markets until regulators require publication in XBRL in real time. The Calcbench API is a welcome stopgap measure.
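
The piece does not describe Calcbench's actual endpoints, so the following is a purely hypothetical sketch of how a consumer might poll a low-latency standardized-financials service shortly after an 8-K lands; the URL, parameters, and response fields are all invented for illustration and are not Calcbench's real API.

```python
# Hypothetical polling loop; the URL, parameters and response fields are invented
# for illustration and do NOT correspond to Calcbench's real API.
import time
import requests

API_URL = "https://example.com/api/standardized-financials"  # placeholder endpoint

def poll_for_earnings(ticker, period, interval_seconds=60):
    """Poll until standardized figures for the requested period become available."""
    while True:
        resp = requests.get(API_URL, params={"ticker": ticker, "period": period}, timeout=10)
        if resp.status_code == 200 and resp.json().get("revenue") is not None:
            return resp.json()        # e.g. {"revenue": ..., "net_income": ...}
        time.sleep(interval_seconds)  # try again once the press release drops

# figures = poll_for_earnings("ACME", "2020-Q2")
```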

 

—————————————————————

Christian Dreyer CFA is well known in Swiss Fintech circles as an expert in XBRL and financial reporting for investors.

 We have a self-imposed constraint of 3 news stories each week because we serve busy senior leaders in Fintech who need just enough information to get on with their job.

 For context on XBRL please read this introduction to our XBRL Week in 2016 and read articles tagged XBRL in our archives. 


Source: https://dailyfintech.com/2020/07/02/xbrl-scrapping-quarterlies-explaining-ai-and-low-latency-reporting/


AI: hot water for insurance incumbents, or a relaxing spa?



The parable of the frog in boiling water is well known: if you put a frog into boiling water it will immediately jump out, but if you put it into tepid water and gradually increase the temperature it will slowly boil to death.  It’s not true, but it is a clever lede into the artificial intelligence evolution within insurance.  Are there insurance ‘frogs’ in danger of tepid water turning hot, and are there frogs suffering from FOHW (fear of hot water)?


Patrick Kelahan is a CX, engineering & insurance consultant, working with Insurers, Attorneys & Owners in his day job. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.

The frog and boiling water example is intuitive: stark change is noticed, gradual change not so much.  It’s like Ernest Hemingway’s line in “The Sun Also Rises”: “How did you go bankrupt?  Gradually, and then suddenly!”  In each of the examples the message is similar: adverse change is not always abrupt, but failure to notice or react to changing conditions can lead to a worst-case scenario.  So it is with insurance innovation.

A recent interview in The Telegraph by Michael Dwyer of Peter Cullum, non-executive Director of Global Risk Partners (and certainly one with a CV that qualifies him as a knowing authority), provided this view:

“Insurance is one business that is all about data. It’s about numbers. It’s about the algorithms. Quite frankly, in 10 years’ time, I predict that 70pc or 80pc of all underwriters will be redundant because it will be machine driven.

“We don’t need smart people to make what I’d regard as judgmental decisions because the data will make the decision for you.”

A clever insurance innovation colleague, Craig Polley, recently posed Peter’s insurance scenario for discussion, and the topic generated lively debate: will underwriting become machine driven, or is there an overarching need for human intuition?  I’m not brave enough to serve as arbiter of the discussion, but the chord Craig’s question struck leads to the broader point: is the insurance industry sitting in that tepid water now, and are the flames of AI potentially leading to parboiling?

I offered a thought recently to an AI advocate looking for some insight into how the concept is embraced by insurance organizations.  In considering the fundamentals of insurance, I recounted that insurance as a product thrives best in environments where risk can be understood, predicted, and priced across populations with widely varied individual risk exposures, as best determined by risk experience within the population or the application of risk indicators.  Blah, blah, blah. Insurance is a long-standing principle of sharing the ultimate cost of risk, where no one participant is unduly disadvantaged and no one party has a financial advantage: it is a balance of cost and probability.

Underwriting has been built on a model of proxy information, on the law of large numbers, of historical performance, of significant populations and statistical sampling.  There is not much new in that description, but what if the dynamic is changed, to an environment where the understanding of risk factors is not retrospective, but prospective?

Take commercial motor insurance, for example.  Reasonably expensive, plenty of human involvement in underwriting, high maximum loss outcomes for occurrences.  Internal data are the primary source for rating the book of business.  There are, however, new approaches in the industry that supplant traditional internal or proxy data with robust analysis of external data.  Luminant Analytics is an example of a firm that leverages AI to provide not only predictive models for motor line loss frequency and severity trends, but also analytics that help companies expand into new markets where historical loss data is unavailable.  Traditional underwriting has remained a solid approach, but is it now akin to turning the heat up on the industry frog?
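
Luminant’s models are proprietary and not described here, but a minimal sketch of the general frequency/severity approach, using invented fleet-level data, might look like this: claim counts are modeled with a Poisson regression and multiplied by an assumed average severity to give an expected loss cost.

```python
# Illustrative frequency/severity sketch with made-up data; not Luminant's model.
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Hypothetical fleet-level features: vehicles, average annual mileage (10k units), urban share.
X = np.array([
    [10, 1.2, 0.8],
    [25, 0.9, 0.3],
    [40, 1.5, 0.6],
    [15, 1.1, 0.9],
    [60, 0.7, 0.2],
])
claim_counts = np.array([3, 2, 7, 4, 3])  # claims observed per fleet-year
avg_severity = 9_500.0                    # assumed average cost per claim (USD)

freq_model = PoissonRegressor().fit(X, claim_counts)  # frequency model

new_fleet = np.array([[30, 1.0, 0.5]])
expected_claims = freq_model.predict(new_fleet)[0]
expected_loss_cost = expected_claims * avg_severity
print(f"Expected claims: {expected_claims:.2f}, expected loss cost: ${expected_loss_cost:,.0f}")
```

In practice the severity side would be modeled as well, and external data sources would supply many more features than this toy example.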

The COVID-19 environment has by default prompted a dramatic increase in virtual claim handling techniques, changing what was not too long ago verboten: waiver of inspection on higher-value claims, or acceptance of third-party estimates in lieu of measure-by-the-inch adjuster work.  Yes, there will be severity hangovers and spikes in supplements, but carriers will find expediency trumps detail, as long as the customer accepts the change in methods.  If we consider the recent announcement of significant staff layoffs by US P&C carrier Allstate as an indicator of the inroads of virtual efforts, then there seemingly is hope for that figurative frog.

Elsewhere it was announced that the All England Club has not had its Wimbledon event cancellation cover renewed for 2021 (please recall that the Club was prescient in having cancellation cover in force that included pandemic benefits).  The prior policy’s underwriters are apparently reluctant to shell out another potential $140 million should the pandemic recur, but are there other approaches to pandemic cover?  The consortium of underwriting firms devised the cover seventeen years ago; can the cover for a marquee event benefit from AI methodology that simply didn’t exist in 2003?  It’s apparent the ask for cover for the 2021 event attracted knowledgeable frogs that knew to jump out of hot water.  But what if the exposure burner is turned down through a better understanding of the breadth of data affecting the risk, the involvement of capital markets in diversifying the risk across many unique events’ outcomes and alternative risk financing, and the leveraging of underwriting tools supported by AI and machine learning?  Will it be found in due time that the written rule that pandemics cannot be underwritten as a peril has less validity because well-placed data analysis has wrangled the risk exposure into a reasonable bet for an ILS fund?

There are more examples of AI’s promise, but let us not forget that AI is not the magic solution to all insurance tasks.  Companies that invest in AI without a fitting use case are simply moving their frog to a different but just as threatening a pot.  Companies that invest in innovation that cannot bridge their legacy systems to meaningful outcomes because there is no API functionality are turning the heat up themselves.  Large-scale innovation options now approaching a twenty-year anniversary (think post-Y2K) may have compounding legacy issues: old legacy and new legacy.

The insurance industry needs to consider more than just individual instances of the gradual heat of change being applied.

What prevents the capital markets from applying AI methods (through design or purchase) to predicting or betting on risk outcomes?  The more comprehensive and accurate risk prediction methods become, the more direct the path between customer and risk financing partner also becomes.  Insurance frogs need not fear the heat if there are fewer pots to work from, but no pots, no business.

The risk sharing/risk financing industry has evolved through the application of available technology and tools, so what’s to say AI does not become a double-edged sword for the insurance industry: a clever tool in the hands of insurers, or a clever tool in the hands of alternative financing that serves to cut away some of the insurers’ business?  If asked, Peter Cullum might opine that it’s not just underwriting that AI will affect, but any other aspect of insurance that AI can effectively influence.  Frogs beware.


Source: https://dailyfintech.com/2020/07/02/ai-hot-water-for-insurance-incumbents-or-a-relaxing-spa/


MIT takes down 80 Million Tiny Images data set due to racist and offensive content



Creators of the 80 Million Tiny Images data set from MIT and NYU took the collection offline this week, apologized, and asked other researchers to refrain from using the data set and delete any existing copies. The news was shared Monday in a letter by MIT professors Bill Freeman and Antonio Torralba and NYU professor Rob Fergus published on the MIT CSAIL website.

Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was recently found to contain a range of racist, sexist, and otherwise offensive labels, including nearly 2,000 images labeled with the N-word and labels like “rape suspect” and “child molester.” The data set also contained pornographic content like non-consensual photos taken up women’s skirts. Creators of the 79.3 million-image data set said it was too large and its 32 x 32 images too small, making visual inspection of the data set’s complete contents difficult. According to Google Scholar, 80 Million Tiny Images has been cited more than 1,700 times.

Above: Offensive labels found in the 80 Million Tiny Images data set

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include,” the professors wrote in a joint letter. “It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

The trio of professors say the data set’s shortcomings were brought to their attention by an analysis and audit published late last month (PDF) by University College Dublin Ph.D. student Abeba Birhane and Carnegie Mellon University Ph.D. student Vinay Prabhu. The authors say their assessment is the first known critique of 80 Million Tiny Images.


Both the paper’s authors and the creators of 80 Million Tiny Images say part of the problem stems from automated data collection and the use of nouns from the WordNet data set as a semantic hierarchy. Before the data set was taken offline, the coauthors suggested the creators of 80 Million Tiny Images do as the ImageNet creators did and assess the labels used in the people category of the data set. The paper finds that large-scale image data sets erode privacy and can have a disproportionately negative impact on women, racial and ethnic minorities, and communities at the margins of society.
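
The original collection pipeline is not reproduced here, but a minimal sketch of the mechanism the authors point to, in which every WordNet noun becomes a candidate label and image-search query, shows how derogatory terms can enter a data set without any human review.

```python
# Sketch of the mechanism only: harvesting noun labels from WordNet as search queries.
# This is not the original 80 Million Tiny Images pipeline.
import itertools
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

# Every noun synset becomes a candidate label / image-search query.
noun_labels = (lemma.replace("_", " ")
               for synset in wn.all_synsets(pos="n")
               for lemma in synset.lemma_names())

for label in itertools.islice(noun_labels, 10):
    print(label)  # without human review, slurs and derogatory terms pass through too
```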

Birhane and Prabhu assert that the computer vision community must begin having more conversations about the ethical use of large-scale image data sets now, in part due to the growing availability of image-scraping tools and reverse image search technology. Citing previous work like the Excavating AI analysis of ImageNet, they argue that the issue is not just a matter of data, but of a culture in academia and industry that finds it acceptable to create large-scale data sets without the consent of participants “under the guise of anonymization.”

“We posit that the deeper problems are rooted in the wider structural traditions, incentives, and discourse of a field that treats ethical issues as an afterthought. A field where in the wild is often a euphemism for without consent. We are up against a system that has veritably mastered ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping, and ethics shirking,” the paper states.

To create more ethical large-scale image data sets, Birhane and Prabhu suggest:

  • Blur the faces of people in data sets (a minimal sketch of this step follows the list)
  • Do not use Creative Commons licensed material
  • Collect imagery with clear consent from data set participants
  • Include a data set audit card with large-scale image data sets, akin to the model cards Google AI uses and the datasheets for data sets Microsoft Research proposed
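
As a rough illustration of the first suggestion, here is a minimal face-blurring sketch using OpenCV’s bundled Haar cascade detector; the detector choice and thresholds are illustrative only, and curating a real data set would require a stronger detector plus manual review.

```python
# Minimal face-blurring sketch using OpenCV's bundled Haar cascade; thresholds are
# illustrative, and real data set curation would need a stronger detector and review.
import cv2

def blur_faces(image_path, output_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 30)  # kernel size must be odd
    cv2.imwrite(output_path, image)

# blur_faces("raw/photo_0001.jpg", "blurred/photo_0001.jpg")  # hypothetical paths
```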

The work incorporates Birhane’s previous work on relational ethics, which suggests that the creators of machine learning systems should begin their work by speaking with the people most affected by machine learning systems, and that concepts of bias, fairness, and justice are moving targets.

ImageNet was introduced at CVPR in 2009 and is widely considered important to the advancement of computer vision and machine learning. Whereas previously some of the largest data sets could be counted in the tens of thousands, ImageNet contains more than 14 million images. The ImageNet Large Scale Visual Recognition Challenge ran from 2010 to 2017 and led to the launch of a variety of startups like Clarifai and MetaMind, a company Salesforce acquired in 2017. According to Google Scholar, ImageNet has been cited nearly 17,000 times.

As part of a series of changes detailed in December 2019, ImageNet creators including lead author Jia Deng and Dr. Fei-Fei Li found that 1,593 of the 2,832 people categories in the data set potentially contain offensive labels, which they said they plan to remove.

“We indeed celebrate ImageNet’s achievement and recognize the creators’ efforts to grapple with some ethical questions. Nonetheless, ImageNet as well as other large image datasets remain troublesome,” the Birhane and Prabhu paper reads.

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/knS0Ix3IHxA/
