
UK’s MHRA says it has ‘concerns’ about Babylon Health — and flags legal gap around triage chatbots


The U.K.’s medical device regulator has admitted it has concerns about VC-backed AI chatbot maker Babylon Health. It made the admission in a letter sent to a clinician who’s been raising the alarm about Babylon’s approach toward patient safety and corporate governance since 2017.

The HSJ reported on the MHRA’s letter to Dr. David Watkins yesterday. TechCrunch has reviewed the letter (see below), which is dated December 4, 2020. We’ve also seen additional context about what was discussed in a meeting referenced in the letter, and have reviewed other correspondence between Watkins and the regulator in which he details a number of wide-ranging concerns.

In an interview, Watkins emphasized that the concerns the regulator shares are “far broader” than the (important but) single issue of chatbot safety.

“The issues relate to the corporate governance of the company — how they approach safety concerns. How they approach people who raise safety concerns,” Watkins told TechCrunch. “That’s the concern. And some of the ethics around the mispromoting of medical devices.

“The overall story is they did promote something that was dangerously flawed. They made misleading claims with regards to how [the chatbot] should be used — its intended use — with [Babylon CEO] Ali Parsa promoting it as a ‘diagnostic’ system — which was never the case. The chatbot was never approved for ‘diagnosis.’”

“In my opinion, in 2018 the MHRA should have taken a much firmer stance with Babylon and made it clear to the public that the claims that were being made were false — and that the technology was not approved for use in the way that Babylon were promoting it,” he went on. “That should have happened and it didn’t happen because the regulations at that time were not fit for purpose.”

“In reality there is no regulatory ‘approval’ process for these technologies and the legislation doesn’t require a company to act ethically,” Watkins also told us. “We’re reliant on the health tech sector behaving responsibly.”

The consultant oncologist began raising red flags about Babylon with U.K. healthcare regulators (CQC/MHRA) as early as February 2017 — initially over the “apparent absence of any robust clinical testing or validation,” as he puts it in correspondence to regulators. However, with Babylon opting to deny problems and go on the attack against critics, his concerns mounted.

An admission by the medical devices regulator that all Watkins’ concerns are “valid” and are “ones that we share” blows Babylon’s deflective PR tactics out of the water.

“Babylon cannot say that they have always adhered to the regulatory requirements — at times they have not adhered to the regulatory requirements. At different points throughout the development of their system,” Watkins also told us, adding: “Babylon never took the safety concerns as seriously as they should have. Hence this issue has dragged on over a more than three-year period.”

During this time the company has been steaming ahead inking wide-ranging “digitization” deals with healthcare providers around the world — including a 10-year deal agreed with the U.K. city of Wolverhampton last year to provide an integrated app that’s intended to have a reach of 300,000 people.

It also has a 10-year agreement with the government of Rwanda to support digitization of its health system, including via digitally enabled triage. Other markets it’s rolled into include the U.S., Canada and Saudi Arabia.

Babylon says it now covers more than 20 million patients and has done 8 million consultations and “AI interactions” globally. But is it operating to the high standards people would expect of a medical device company?

Safety, ethical and governance concerns

In a written summary, dated October 22, of a video call which took place between Watkins and the U.K. medical devices regulator on September 24 last year, he summarizes what was discussed in the following way: “I talked through and expanded on each of the points outlined in the document, specifically; the misleading claims, the dangerous flaws and Babylon’s attempts to deny/suppress the safety issues.”

In his account of this meeting, Watkins goes on to report: “There appeared to be general agreement that Babylon’s corporate behavior and governance fell below the standards expected of a medical device/healthcare provider.”

“I was informed that Babylon Health would not be shown leniency (given their relationship with [U.K. health secretary] Matt Hancock),” he also notes in the summary — a reference to Hancock being a publicly enthusiastic user of Babylon’s “GP at hand” app (for which he was accused in 2018 of breaking the ministerial code).

In a separate document, which Watkins compiled and sent to the regulator last year, he details 14 areas of concern — covering issues including the safety of the Babylon chatbot’s triage; “misleading and conflicting” T&Cs — which he says contradict promotional claims it has made to hype the product; as well as what he describes as a “multitude of ethical and governance concerns” — including its aggressive response to anyone who raises concerns about the safety and efficacy of its technology.

This has included a public attack campaign against Watkins himself, which we reported on last year; as well as what he lists in the document as “legal threats to avoid scrutiny and adverse media coverage.”

Here he notes that Babylon’s response to safety concerns he had raised back in 2018 — which had been reported on by the HSJ — was also to go on the attack, with the company claiming then that “vested interests” were spreading “false allegations” in an attempt to “see us fail.”

“The allegations were not false and it is clear that Babylon chose to mislead the HSJ readership, opting to place patients at risk of harm, in order to protect their own reputation,” writes Watkins in associated commentary to the regulator.

He goes on to point out that, in May 2018, the MHRA had itself independently notified Babylon Health of two incidents related to the safety of its chatbot (one involving missed symptoms of a heart attack, another missed symptoms of DVT) — yet the company still went on to publicly rubbish the HSJ’s report the following month (which was entitled: “Safety regulators investigating concerns about Babylon’s ‘chatbot’”).

Wider governance and operational concerns Watkins raises in the document include Babylon’s use of staff NDAs — which he argues leads to a culture inside the company where staff feel unable to speak out about any safety concerns they may have — and what he calls “inadequate medical device vigilance” (whereby he says the Babylon bot doesn’t routinely request feedback on the patient outcome post triage, arguing that: “The absence of any robust feedback system significantly impairs the ability to identify adverse outcomes”).

Re: unvarnished staff opinions, it’s interesting to note that Babylon’s Glassdoor rating at the time of writing is just 2.9 stars, with only a minority of reviewers saying they would recommend the company to a friend, and with Parsa’s approval rating as CEO standing at only 45% on aggregate. (“The technology is outdated and flawed,” writes one Glassdoor reviewer who is listed as a current Babylon Health employee working as a clinical ops associate in Vancouver, Canada — where privacy regulators have an open investigation into its app. Among the listed cons in the one-star review is the claim that: “The well-being of patients is not seen as a priority. A real joke to healthcare. Best to avoid.”)

Per Watkins’ report of his online meeting with the MHRA, he says the regulator agreed NDAs are “problematic” and impact on the ability of employees to speak up on safety issues.

He also writes that it was acknowledged that Babylon employees may fear speaking up because of legal threats. His minutes further record that: “Comment was made that the MHRA are able to look into concerns that are raised anonymously.”

In the summary of his concerns about Babylon, Watkins also flags an event in 2018 which the company held in London to promote its chatbot — during which he writes that it made a number of “misleading claims,” such as that its AI generates health advice that is “on-par with top-rated practicing clinicians.”

The flashy claims led to a blitz of hyperbolic headlines about the bot’s capabilities — helping Babylon to generate hype at a time when it was likely to have been pitching investors to raise more funding.

The London-based startup was valued at $2 billion+ in 2019 when it raised a massive $550 million Series C round from investors including Saudi Arabia’s Public Investment Fund, a large (unnamed) U.S.-based health insurance company and insurance giant Munich Re’s ERGO Fund — trumpeting the raise at the time as the largest ever in Europe or the U.S. for digital health delivery.

“It should be noted that Babylon Health have never withdrawn or attempted to correct the misleading claims made at the AI Test Event [which generated press coverage it’s still using as a promotional tool on its website in certain jurisdictions],” Watkins writes to the regulator. “Hence, there remains an ongoing risk that the public will put undue faith in Babylon’s unvalidated medical device.”

In his summary he also includes several pieces of anonymous correspondence from a number of people claiming to work (or have worked) at Babylon — which make a number of additional claims. “There is huge pressure from investors to demonstrate a return,” writes one of these. “Anything that slows that down is seen [a]s avoidable.”

“The allegations made against Babylon Health are not false and were raised in good faith in the interests of patient safety,” Watkins goes on to assert in his summary to the regulator. “Babylon’s ‘repeated’ attempts to actively discredit me as an individual raises serious questions regarding their corporate culture and trustworthiness as a healthcare provider.”

In its letter to Watkins (screengrabbed below), the MHRA tells him: “Your concerns are all valid and ones that we share.”

It goes on to thank him for personally and publicly raising issues “at considerable risk to yourself.”

Letter from the MHRA to Dr. David Watkins (Screengrab: TechCrunch).

Babylon has been contacted for a response to the MHRA’s validation of Watkins’ concerns. At the time of writing it had not responded to our request for comment.

The startup told the HSJ that it meets all the local requirements of regulatory bodies for the countries it operates in, adding: “Babylon is committed to upholding the highest of standards when it comes to patient safety.”

In one aforementioned aggressive incident last year, Babylon put out a press release attacking Watkins as a “troll” and seeking to discredit the work he was doing to highlight safety issues with the triage performed by its chatbot.

It also claimed its technology had been “NHS validated” as a “safe service 10 times.”

It’s not clear what validation process Babylon was referring to there — and Watkins also flags and queries that claim in his correspondence with the MHRA, writing: “As far as I am aware, the Babylon chatbot has not been validated — in which case, their press release is misleading.”

The MHRA’s letter, meanwhile, makes it clear that the current U.K. regulatory regime for medical device products does not adequately cover software-powered “health tech” devices, such as Babylon’s chatbot.

Per Watkins, there is currently no approval process. Such devices are merely registered with the MHRA — but there’s no legal requirement that the regulator assess them or even receive documentation related to their development. He says they exist independently, with the MHRA simply holding a register.

“You have raised a complex set of issues and there are several aspects that fall outside of our existing remit,” the regulator concedes in the letter. “This highlights some issues which we are exploring further, and which may be important as we develop a new regulatory framework for medical devices in the U.K.”

An update to pan-EU medical devices regulation — which brings in new requirements for software-based medical devices and had originally been intended to be implemented in the U.K. in May last year — will no longer apply there, given the country has left the bloc.

The U.K. is instead in the process of formulating its own regulatory update for medical device rules. This means there’s still a gap around software-based “health tech” — which isn’t expected to be fully plugged for several years. (Although Watkins notes there have been some tweaks to the regime, such as a partial lifting of confidentiality requirements last year.)

In a speech last year, health secretary Hancock told parliament that the government aimed to formulate a regulatory system for medical devices that is “nimble enough” to keep up with tech-fueled developments such as health wearables and AI while “maintaining and enhancing patient safety.” It will include giving the MHRA “a new power to disclose to members of the public any safety concerns about a device,” he said then.

In the meantime, the existing (outdated) regulatory regime appears to be continuing to tie the regulator’s hands, at least vis-a-vis what it can say in public about safety concerns. It has taken Watkins making the MHRA’s letter to him public for those shared concerns to be aired at all.

In the letter the MHRA writes that “confidentiality unfortunately binds us from saying more on any specific investigation,” although it also tells him: “Please be assured that your concerns are being taken seriously and if there is action to be taken, then we will.”

“Based on the wording of the letter, I think it was clear that they wanted to provide me with a message that we do hear you, that we understand what you’re saying, we acknowledge the concerns which you’ve raised, but we are limited by what we can do,” Watkins told us.

He also said he believes the regulator has engaged with Babylon over concerns he’s raised these past three years — noting the company has made a number of changes after he had raised specific queries (such as to its T&Cs, which had initially said it’s not a medical device but were subsequently withdrawn and changed to acknowledge it is; or claims it had made that the chatbot is “100% safe” which were withdrawn — after an intervention by the Advertising Standards Authority in that case).

The chatbot itself has also been tweaked to put less emphasis on the diagnosis as an outcome and more emphasis on the triage outcome, per Watkins.

“They’ve taken a piecemeal approach [to addressing safety issues with chatbot triage]. So I would flag an issue [publicly via Twitter] and they would only look at that very specific issue. Patients of that age, undertaking that exact triage assessment — ‘okay, we’ll fix that, we’ll fix that’ — and they would put in place a [specific fix]. But sadly, they never spent time addressing the broader fundamental issues within the system. Hence, safety issues would repeatedly crop up,” he said, citing examples of multiple issues with cardiac triages that he also raised with the regulator.

“When I spoke to the people who work at Babylon they used to have to do these hard fixes … All they’d have to do is just kind of ‘dumb it down’ a bit. So, for example, for anyone with chest pain it would immediately say go to A&E. They would take away any thought process to it,” he added. (It also of course risks wasting healthcare resources — as he also points out in remarks to the regulators.)

“That’s how they over time got around these issues. But it highlights the challenges and difficulties in developing these tools. It’s not easy. And if you try and do it quickly and don’t give it enough attention then you just end up with something that is useless.”

Watkins also suspects the MHRA has been involved in getting Babylon to remove certain pieces of hyperbolic promotional material related to the 2018 AI event from its website.

In one curious episode, also related to the 2018 event, Babylon’s CEO demoed an AI-powered interface that appeared to show real-time transcription of a patient’s words combined with an “emotion-scanning” AI — which he said scanned facial expressions in real time to generate an assessment of how the person was feeling — with Parsa going on to tell the audience: “That’s what we’ve done. That’s what we’ve built. None of this is for show. All of this will be either in the market or already in the market.”

However neither feature has actually been brought to market by Babylon as yet. Asked about this last month, the startup told TechCrunch: “The emotion detection functionality, seen in old versions of our clinical portal demo, was developed and built by Babylon‘s AI team. Babylon conducts extensive user testing, which is why our technology is continually evolving to meet the needs of our patients and clinicians. After undergoing pre-market user testing with our clinicians, we prioritized other AI-driven features in our clinical portal over the emotion recognition function, with a focus on improving the operational aspects of our service.”

“I certainly found [the MHRA’s letter] very reassuring and I strongly suspect that the MHRA have been engaging with Babylon to address concerns that have been identified over the past three-year period,” Watkins also told us today. “The MHRA don’t appear to have been ignoring the issues but Babylon simply deny any problems and can sit behind the confidentiality clauses.”

In a statement on the current regulatory situation for software-based medical devices in the U.K., the MHRA told us:

The MHRA ensures that manufacturers of medical devices comply with the Medical Devices Regulations 2002 (as amended). Please refer to existing guidance.

The Medicines and Medical Devices Act 2021 provides the foundation for a new improved regulatory framework that is currently being developed. It will consider all aspects of medical device regulation, including the risk classification rules that apply to Software as a Medical Device (SaMD).

The U.K. will continue to recognize CE marked devices until 1 July 2023. After this time, requirements for the UKCA Mark must be met. This will include the revised requirements of the new framework that is currently being developed.

The Medicines and Medical Devices Act 2021 allows the MHRA to undertake its regulatory activities with a greater level of transparency and share information where that is in the interests of patient safety.

The regulator declined to be interviewed or respond to questions about the concerns it says in the letter to Watkins that it shares about Babylon — telling us: “The MHRA investigates all concerns but does not comment on individual cases.”

“Patient safety is paramount and we will always investigate where there are concerns about safety, including discussing those concerns with individuals that report them,” it added.

Watkins raised one more salient point on the issue of patient safety for “cutting edge” tech tools, asking: where is the “real-life clinical data”? So far, he says, the studies patients have to go on are limited assessments, often made by the chatbot makers themselves.

“One quite telling thing about this sector is the fact that there’s very little real-life data out there,” he said. “These chatbots have been around for a good few years now … And there’s been enough time to get real-life clinical data and yet it hasn’t appeared and you just wonder if, is that because in the real-life setting they are actually not quite as useful as we think they are?”

Source: https://techcrunch.com/2021/03/05/uks-mhra-says-it-has-concerns-about-babylon-health-and-flags-legal-gap-around-triage-chatbots/


Deep Learning vs Machine Learning: How an Emerging Field Influences Traditional Computer Programming


When two different concepts are greatly intertwined, it can be difficult to separate them as distinct academic topics. That might explain why it’s so difficult to separate deep learning from machine learning as a whole. Considering the current push for both automation and instant gratification, a great deal of renewed focus has been heaped on the topic.

Everything from automated manufacturing workflows to personalized digital medicine could potentially grow to rely on deep learning technology. Defining the exact aspects of this technical discipline that will revolutionize these industries is, however, admittedly much more difficult. Perhaps it’s best to consider deep learning in the context of a greater movement in computer science.

Defining Deep Learning as a Subset of Machine Learning

Machine learning and deep learning are closely intertwined. Deep learning is a specific discipline within the much larger field of machine learning, which covers a wide variety of trained artificially intelligent agents that can predict the correct response in an equally wide array of situations. What distinguishes deep learning from these other techniques is its reliance on many-layered neural networks that learn progressively more abstract representations of raw data, rather than depending on hand-engineered features or simple memorized mappings.

Traditional machine learning algorithms usually teach artificial nodes how to respond to stimuli by rote memorization. This is somewhat similar to human teaching techniques that consist of simple repetition, and therefore might be thought of as the computerized equivalent of a student running through times tables until they can recite them. While this is effective in a way, artificially intelligent agents educated in such a manner may not be able to respond to any stimulus outside of the realm of their original design specifications.

That’s why deep learning specialists have developed alternative algorithms that are considered somewhat superior to this method, though they are admittedly far more hardware-intensive in many ways. Subroutines used by deep learning agents may be based around generative adversarial networks, convolutional neural networks or a practical form of restricted Boltzmann machine. These stand in sharp contrast to the simpler structures, such as binary trees and linked lists, used by much conventional machine learning software as well as a majority of modern file systems.

Self-organizing maps have also been widely used in deep learning, though their applications in other AI research fields have typically been much less promising. When it comes to the deep learning vs machine learning debate, however, it’s highly likely that technicians will be looking more for practical applications than for theoretical academic discussion in the coming months. Suffice it to say that machine learning encompasses everything from the simplest AI to the most sophisticated predictive algorithms, while deep learning constitutes a more selective subset of these techniques.
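To make that subset relationship concrete, here is a minimal, hypothetical sketch in Python that applies both approaches to the same small image-classification task: first a classical decision tree from scikit-learn, then a small neural network built with Keras. The dataset, layer sizes and training settings are illustrative assumptions rather than recommendations.

# Minimal sketch: the same task tackled with classical ML and with deep learning.
# Dataset, model sizes and hyperparameters are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from tensorflow import keras

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digits, flattened to 64 values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical machine learning: a single decision tree.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("decision tree accuracy:", tree.score(X_test, y_test))

# Deep learning: a small fully connected neural network on the same data.
net = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
net.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
net.fit(X_train / 16.0, y_train, epochs=10, verbose=0)  # pixel values run 0-16
_, acc = net.evaluate(X_test / 16.0, y_test, verbose=0)
print("neural network accuracy:", acc)

Both models count as machine learning; only the second is deep learning, and the difference lies in the learned, layered representation rather than in the task itself.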

Practical Applications of Deep Learning Technology

Depending on how a particular program is authored, deep learning techniques could be deployed along supervised or semi-supervised neural networks. Theoretically, it’d also be possible to do so via a completely unsupervised node layout, and it’s this technique that has quickly become the most promising. Unsupervised networks may be useful for medical image analysis, since this application often presents unique pieces of graphical information to a computer program that have to be tested against known inputs.

Traditional binary tree or blockchain-based learning systems have struggled to identify the same patterns in dramatically different scenarios, because the information remains hidden in a structure that would have otherwise been designed to present data effectively. It’s essentially a natural form of steganography, and it has confounded computer algorithms in the healthcare industry. However, this new type of unsupervised learning node could virtually educate itself on how to match these patterns even in a data structure that isn’t organized along the normal lines that a computer would expect it to be.
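As a hedged illustration of that kind of unsupervised learner, the sketch below trains a tiny Keras autoencoder on unlabeled images and flags the samples it reconstructs worst, on the assumption that poorly reconstructed inputs deviate from the patterns the network has learned. The input size, architecture and placeholder data are assumptions for demonstration only; this is not a clinical tool.

# Hypothetical sketch of unsupervised pattern learning, assuming images have been
# resized to 64x64 grayscale and scaled to [0, 1]. Demonstration only.
import numpy as np
from tensorflow import keras

def build_autoencoder(input_dim=64 * 64, bottleneck=32):
    inputs = keras.layers.Input(shape=(input_dim,))
    encoded = keras.layers.Dense(256, activation="relu")(inputs)
    encoded = keras.layers.Dense(bottleneck, activation="relu")(encoded)
    decoded = keras.layers.Dense(256, activation="relu")(encoded)
    decoded = keras.layers.Dense(input_dim, activation="sigmoid")(decoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="mse")
    return model

# images: an (n_samples, 4096) array of flattened, unlabeled scans (placeholder here).
images = np.random.rand(1000, 64 * 64).astype("float32")
autoencoder = build_autoencoder()
autoencoder.fit(images, images, epochs=5, batch_size=64, verbose=0)

# Inputs the model reconstructs poorly deviate from the patterns it has learned,
# which is one way to surface unusual cases for a closer look.
reconstruction = autoencoder.predict(images, verbose=0)
errors = np.mean((images - reconstruction) ** 2, axis=1)
flagged = np.argsort(errors)[-10:]  # the ten least typical samples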

Others have proposed implementing semi-supervised artificially intelligent marketing agents that could eliminate much of the concern over ethics regarding existing deal-closing software. Instead of trying to reach as large a customer base as possible, these tools would calculate the odds of any given individual needing a product at a given time. In order to do so, it would need certain types of information provided by the organization that it works on behalf of, but it would eventually be able to predict all further actions on its own.
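As a rough sketch of that idea, the hypothetical example below trains a propensity-style model that scores how likely each customer is to need a product, so that outreach can be ranked rather than sent to everyone; the features, data and model choice are invented purely for illustration.

# Hypothetical propensity sketch: score each customer's likelihood of needing a
# product right now. Features and data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Assumed features: days since last purchase, purchases in the last year,
# whether the last marketing email was opened.
X = np.column_stack([
    rng.integers(0, 365, n),
    rng.poisson(3, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)  # placeholder for "bought within 30 days of contact"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank customers by predicted need instead of contacting the entire base.
scores = model.predict_proba(X_test)[:, 1]
top_prospects = np.argsort(scores)[::-1][:100]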

While some companies are currently relying on tools that utilize traditional machine learning technology to achieve the same goals, these are often fraught with privacy and ethical concerns. The advent of deep structured learning algorithms has enabled software engineers to come up with new systems that don’t suffer from these drawbacks.

Developing a Private Automated Learning Environment

Conventional machine learning programs often run into serious privacy concerns because of the fact that they need a huge amount of input in order to draw any usable conclusions. Deep learning image recognition software works by processing a smaller subset of inputs, thus ensuring that it doesn’t need as much information to do its job. This is of particular importance for those who are concerned about the possibility of consumer data leaks.

Considering new regulatory stances on many of these issues, this has also quickly become important from a compliance standpoint. As toxicology labs begin using bioactivity-focused deep structured learning packages, it’s likely that regulators will express additional concerns about the amount of information needed to perform any given task with this kind of sensitive data. Computer scientists have had to scale back what some have called a veritable fire hose of bytes that tell more of a story than most would be comfortable with.

In a way, these developments hearken back to the principle of least privilege: the idea that each process in a system should only have the privileges necessary to complete its job. As machine learning engineers embrace this paradigm, it’s highly likely that future developments will be considerably more secure simply because they don’t require the massive amount of data mining necessary to power today’s existing operations.


Source: https://datafloq.com/read/deep-learning-vs-machine-learning-how-emerging-field-influences-traditional-computer-programming/13652


Extra Crunch roundup: Tonal EC-1, Deliveroo’s rocky IPO, is Substack really worth $650M?


For this morning’s column, Alex Wilhelm looked back on the last few months, “a busy season for technology exits” that followed a hot Q4 2020.

We’re seeing signs of an IPO market that may be cooling, but even so, “there are sufficient SPACs to take the entire recent Y Combinator class public,” he notes.

Once we factor in private equity firms with pockets full of money, it’s evident that late-stage companies have three solid choices for leveling up.

Seeking more insight into these liquidity options, Alex interviewed:

  • DigitalOcean CEO Yancey Spruill, whose company went public via IPO;
  • Latch CFO Garth Mitchell, who discussed his startup’s merger with real estate SPAC $TSIA;
  • Brian Cruver, founder and CEO of AlertMedia, which recently sold to a private equity firm.

After recapping their deals, each executive explains how their company determined which flashing red “EXIT” sign to follow. As Alex observed, “choosing which option is best from a buffet’s worth of possibilities is an interesting task.”

Thanks very much for reading Extra Crunch! Have a great weekend.

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist




The Tonal EC-1


On Tuesday, we published a four-part series on Tonal, a home fitness startup that has raised $200 million since it launched in 2018. The company’s patented hardware combines digital weights, coaching and AI in a wall-mounted system that sells for $2,995.

By any measure, it is poised for success — sales increased 800% between December 2019 and 2020, and by the end of this year, the company will have 60 retail locations. On Wednesday, Tonal reported a $250 million Series E that valued the company at $1.6 billion.

Our deep dive examines Tonal’s origins, product development timeline, its go-to-market strategy and other aspects that combined to spark investor interest and customer delight.

We call this format the “EC-1,” since these stories are as comprehensive and illuminating as the S-1 forms startups must file with the SEC before going public.


We have more EC-1s in the works about other late-stage startups that are doing big things well and making news in the process.

What to make of Deliveroo’s rough IPO debut

Why did Deliveroo struggle when it began to trade? Is it suffering from cultural dissonance between its high-growth model and more conservative European investors?

Let’s peek at the numbers and find out.

Kaltura puts debut on hold. Is the tech IPO window closing?

The Exchange doubts many folks expected the IPO climate to get so chilly without warning. But we could be in for a Q2 pause in the formerly scorching climate for tech debuts.

Is Substack really worth $650M?

A $65 million Series B is remarkable, even by 2021 standards. But the fact that a16z is pouring more capital into the alt-media space is not a surprise.

Substack is a place to which established publications have bled some well-known talent, shifting the center of gravity in media. Let’s take a look at Substack’s historical growth.

RPA market surges as investors, vendors capitalize on pandemic-driven tech shift


Robotic process automation came to the fore during the pandemic as companies took steps to digitally transform. When employees couldn’t be in the same office together, it became crucial to cobble together more automated workflows that required fewer people in the loop.

RPA has enabled executives to provide a level of automation that essentially buys them time to update systems to more modern approaches while reducing the large number of mundane manual tasks that are part of every industry’s workflow.

E-commerce roll-ups are the next wave of disruption in consumer packaged goods


This year is all about the roll-ups, the aggregation of smaller companies into larger firms, creating a potentially compelling path for equity value. The interest in creating value through e-commerce brands is particularly striking.

Just a year ago, digitally native brands had fallen out of favor with venture capitalists after so many failed to create venture-scale returns. So what’s the roll-up hype about?

Hack takes: A CISO and a hacker detail how they’d respond to the Exchange breach


The cyber world has entered a new era in which attacks are becoming more frequent and happening on a larger scale than ever before. Massive hacks affecting thousands of high-level American companies and agencies have dominated the news recently. Chief among these are the December SolarWinds/FireEye breach and the more recent Microsoft Exchange server breach.

Everyone wants to know: If you’ve been hit with the Exchange breach, what should you do?

5 machine learning essentials nontechnical leaders need to understand


Machine learning has become the foundation of business and growth acceleration because of the incredible pace of change and development in this space.

But for engineering and team leaders without an ML background, this can also feel overwhelming and intimidating.

Here are best practices and must-know components broken down into five practical and easily applicable lessons.

Embedded procurement will make every company its own marketplace


Embedded procurement is the natural evolution of embedded fintech.

In this next wave, businesses will buy things they need through vertical B2B apps, rather than through sales reps, distributors or an individual merchant’s website.

Knowing when your startup should go all-in on business development


There’s a persistent fallacy swirling around that any startup growing pain or scaling problem can be solved with business development.

That’s frankly not true.

Dear Sophie: What should I know about prenups and getting a green card through marriage?


Dear Sophie:

I’m a founder of a startup on an E-2 investor visa and just got engaged! My soon-to-be spouse will sponsor me for a green card.

Are there any minimum salary requirements for her to sponsor me? Is there anything I should keep in mind before starting the green card process?

— Betrothed in Belmont

Startups must curb bureaucracy to ensure agile data governance


Many organizations perceive data management as being akin to data governance, where responsibilities are centered around establishing controls and audit procedures, and things are viewed from a defensive lens.

That defensiveness is admittedly justified, particularly given the potential financial and reputational damages caused by data mismanagement and leakage.

Nonetheless, there’s an element of myopia here, and being excessively cautious can prevent organizations from realizing the benefits of data-driven collaboration, particularly when it comes to software and product development.

Bring CISOs into the C-suite to bake cybersecurity into company culture


Cyber strategy and company strategy are inextricably linked. Consequently, chief information security officers in the C-Suite will be just as common and influential as CFOs in maximizing shareholder value.

How is edtech spending its extra capital?


Edtech unicorns have boatloads of cash to spend following the capital boost to the sector in 2020. As a result, edtech M&A activity has continued to swell.

The idea of a well-capitalized startup buying competitors to complement its core business is nothing new, but exits in this sector are notable because the money used to buy startups can be seen as an effect of the pandemic’s impact on remote education.

But in the past week, the consolidation environment made a clear statement: Pandemic-proven startups are scooping up talent — and fast.

Tech in Mexico: A confluence of Latin America, the US and Asia


Knowledge transfer is not the only trend flowing in the U.S.-Asia-LatAm nexus. Competition is afoot as well.

Because of similar market conditions, Asian tech giants are directly expanding into Mexico and other LatAm countries.

How we improved net retention by 30+ points in 2 quarters


There’s certainly no shortage of SaaS performance metrics leaders focus on, but NRR (net revenue retention) is without question the most underrated metric out there.

NRR is simply total revenue minus any revenue churn plus any revenue expansion from upgrades, cross-sells or upsells. The greater the NRR, the quicker companies can scale.
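As a rough worked example of that definition (every figure below is invented for illustration):

# Worked example of the NRR definition above; all figures are invented.
starting_mrr = 100_000   # recurring revenue at the start of the period
churned = 5_000          # revenue lost to cancellations and downgrades
expansion = 12_000       # revenue gained from upgrades, cross-sells and upsells

nrr = (starting_mrr - churned + expansion) / starting_mrr
print(f"Net revenue retention: {nrr:.0%}")  # prints: Net revenue retention: 107%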

5 mistakes creators make building new games on Roblox


Even the most experienced and talented game designers from the mobile F2P business usually fail to understand what features matter to Robloxians.

For those just starting their journey in Roblox game development, these are the most common mistakes gaming professionals make on Roblox.

CEO Manish Chandra, investor Navin Chaddha explain why Poshmark’s Series A deck sings


“Lead with love, and the money comes.” It’s one of the cornerstone values at Poshmark. On the latest episode of Extra Crunch Live, Chandra and Chaddha sat down with us and walked us through their original Series A pitch deck.

Will the pandemic spur a smart rebirth for cities?


Cities are bustling hubs where people live, work and play. When the pandemic hit, some people fled major metropolitan markets for smaller towns — raising questions about the future validity of cities.

But those who predicted that COVID-19 would destroy major urban communities might want to stop shorting the resilience of these municipalities and start going long on what the post-pandemic future looks like.

The NFT craze will be a boon for lawyers


There’s plenty of uncertainty surrounding copyright issues, fraud and adult content, and legal implications are the crux of the NFT trend.

Whether a court would protect the receipt-holder’s ownership over a given file depends on a variety of factors. All of these concerns mean artists may need to lawyer up.

Viewing Cazoo’s proposed SPAC debut through Carvana’s windshield

It’s a reasonable question: Why would anyone pay that much for Cazoo today if Carvana is more profitable and whatnot? Well, growth. That’s the argument anyway.

Source: https://techcrunch.com/2021/04/02/extra-crunch-roundup-tonal-ec-1-deliveroos-rocky-ipo-is-substack-really-worth-650m/


The AI Trends Reshaping Health Care


By Ben Lorica.

Applications of AI in health care present a number of challenges and considerations that differ substantially from other industries. Despite this, it has also been one of the leaders in putting AI to work, taking advantage of the cutting-edge technology to improve care. The numbers speak for themselves: The global AI in health care market size is expected to grow from $4.9 billion in 2020 to $45.2 billion by 2026. Some major factors driving this growth are the sheer volume of health care data and growing complexities of datasets, the need to reduce mounting health care costs, and evolving patient needs.

Deep learning, for example, has made considerable inroads into the clinical environment over the last few years. Computer vision, in particular, has proven its value in medical imaging to assist in screening and diagnosis. Natural language processing (NLP) has provided significant value in addressing both contractual and regulatory concerns with text mining and data sharing. Increasing adoption of AI technology by pharmaceutical and biotechnology companies to expedite initiatives like vaccine and drug development, as seen in the wake of COVID-19, only exemplifies AI’s massive potential.

We’re already seeing amazing strides in health care AI, but it’s still the early days, and to truly unlock its value, there’s a lot of work to be done in understanding the challenges, tools, and intended users shaping the industry. New research from John Snow Labs and Gradient Flow, 2021 AI in Healthcare Survey Report, sheds light on just this: where we are, where we’re going, and how to get there. The global survey explores the important considerations for health care organizations in varying stages of AI adoption, geographies, and technical prowess to provide an extensive look into the state of AI in health care today.               

One of the most significant findings is around which technologies are top of mind when it comes to AI implementation. When asked what technologies they plan to have in place by the end of 2021, almost half of respondents cited data integration. About one-third cited natural language processing (NLP) and business intelligence (BI) among the technologies they are currently using or plan to use by the end of the year. Half of those considered technical leaders are using – or soon will be using – technologies for data integration, NLP, business intelligence, and data warehousing. This makes sense, considering these tools have the power to help make sense of huge amounts of data, while also keeping regulatory and responsible AI practices in mind.

When asked about intended users for AI tools and technologies, over half of respondents identified clinicians among their target users. This indicates that AI is being used by people tasked with delivering health care services — not just technologists and data scientists, as in years past. That number climbs even higher when evaluating mature organizations, or those that have had AI models in production for more than two years. Interestingly, nearly 60% of respondents from mature organizations indicated that patients are also users of their AI technologies. With the advent of chatbots and telehealth, it will be interesting to see how AI proliferates for both patients and providers over the next few years.

In considering software for building AI solutions, open-source software (53%) had a slight edge over public cloud providers (42%). Looking ahead one to two years, respondents indicated openness to also using both commercial software and commercial SaaS. Open-source software gives users a level of autonomy over their data that cloud providers can’t, so it’s not a big surprise that a highly regulated industry like health care would be wary of data sharing. Similarly, the majority of companies with experience deploying AI models to production choose to validate models using their own data and monitoring tools, rather than evaluation from third parties or software vendors. While earlier-stage companies are more receptive to exploring third-party partners, more mature organizations are tending to take a more conservative approach.                      

Generally, attitudes remained the same when asked about key criteria used to evaluate AI solutions, software libraries or SaaS solutions, and consulting companies to work with. Although the answers varied slightly for each category, technical leaders considered no data sharing with software vendors or consulting companies, the ability to train their own models, and state-of-the-art accuracy as top priorities. Health care-specific models and expertise in health care data engineering, integration, and compliance topped the list when asked about solutions and potential partners. Privacy, accuracy, and health care experience are the forces driving AI adoption. It’s clear that AI is poised for even more growth as data continues to grow and technology and security measures improve. Health care, which can sometimes be slow to adopt new technology, is taking to AI and already seeing its significant impact. While its approach, the top tools and technologies, and applications of AI may differ from other industries, it will be exciting to see what’s in store for next year’s survey results.

Source: https://www.dataversity.net/the-ai-trends-reshaping-health-care/


Turns out humans are leading AI systems astray because we can’t agree on labeling


Top datasets used to train AI models and benchmark how the technology has progressed over time are riddled with labeling errors, a study shows.

Data is a vital resource in teaching machines how to complete specific tasks, whether that’s identifying different species of plants or automatically generating captions. Most neural networks are spoon-fed lots and lots of annotated samples before they can learn common patterns in data.

But these labels aren’t always correct; training machines using error-prone datasets can decrease their performance or accuracy. In the aforementioned study, led by MIT, analysts combed through ten popular datasets that have been cited more than 100,000 times in academic papers and found that on average 3.4 per cent of the samples are wrongly labelled.

The datasets they looked at range from photographs in ImageNet and sounds in AudioSet to reviews scraped from Amazon and sketches in QuickDraw. Examples of some of the mistakes compiled by the researchers show that in some cases it’s a clear blunder, such as a drawing of a light bulb tagged as a crocodile; in others, it’s not so obvious. Should a picture of a bucket of baseballs be labeled as ‘baseballs’ or ‘bucket’?


Annotating each sample is laborious work. It is often outsourced to services like Amazon Mechanical Turk, where workers are paid the square root of sod all to sift through the data piece by piece, labeling images and audio to feed into AI systems. This process amplifies biases and errors, as Vice documented here.

Workers are pressured to agree with the status quo if they want to get paid: if a lot of them label a bucket of baseballs as a ‘bucket’, and you decide it’s ‘baseballs’, you may not be paid at all if the platform figures you’re wrong or deliberately trying to mess up the labeling. That means workers will choose the most popular label to avoid looking like they’ve made a mistake. It’s in their interest to stick to the narrative and avoid sticking out like a sore thumb. That means errors, or worse, racial biases and suchlike, snowball in these datasets.

The error rates vary across the datasets. In ImageNet, the most popular dataset used to train models for object recognition, the rate creeps up to six per cent. Considering it contains about 15 million photos, that means hundreds of thousands of labels are wrong. Some classes of images are more affected than others, for example, ‘chameleon’ is often mistaken for ‘green lizard’ and vice versa.

There are other knock-on effects: neural nets may learn to incorrectly associate features within data with certain labels. If, say, many images of the sea seem to contain boats and they keep getting tagged as ‘sea’, a machine might get confused and be more likely to incorrectly recognize boats as seas.

Problems don’t just arise when trying to compare the performance of models using these noisy datasets. The risks are higher if these systems are deployed in the real world, Curtis Northcutt, co-lead author of the study and a PhD student at MIT, and also cofounder and CTO of ChipBrain, a machine-learning hardware startup, explained to The Register.

“Imagine a self-driving car that uses an AI model to make steering decisions at intersections,” he said. “What would happen if a self-driving car is trained on a dataset with frequent label errors that mislabel a three-way intersection as a four-way intersection? The answer: it might learn to drive off the road when it encounters three-way intersections.


“Maybe one of your AI self-driving models is actually more robust to training noise, so that it doesn’t drive off the road as much. You’ll never know this if your test set is too noisy because your test set labels won’t match reality. This means you can’t properly gauge which of your auto-pilot AI models drives best – at least not until you deploy the car out in the real-world, where it might drive off the road.”

When the team working on the study trained some convolutional neural networks on portions of ImageNet that have been cleared of errors, their performance improved. The boffins believe that developers should think twice about training large models on datasets that have high error rates, and advise them to sort through the samples first. Cleanlab, the software the team developed and used to identify incorrect and inconsistent labels, can be found on GitHub.

“Cleanlab is an open-source python package for machine learning with noisy labels,” said Northcutt. “Cleanlab works by implementing all of the theory and algorithms in the sub-field of machine learning called confident learning, invented at MIT. I built cleanlab to allow other researchers to use confident learning – usually with just a few lines of code – but more importantly, to advance the progress of science in machine learning with noisy labels and to provide a framework for new researchers to get started easily.”
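For readers who want to experiment with the underlying idea without committing to any particular library API, here is a minimal sketch of the confident-learning intuition in plain scikit-learn: obtain out-of-sample predicted probabilities via cross-validation, then flag samples where a confident prediction contradicts the given label. The dataset, the simulated noise and the 0.9 threshold are illustrative assumptions; cleanlab itself implements a more principled version of this approach.

# Illustrative sketch of the confident-learning idea: flag samples whose given
# label disagrees with a confident out-of-sample prediction. The dataset and
# threshold are assumptions; cleanlab implements a more rigorous version.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)

# Simulate label noise: corrupt roughly 5 per cent of the labels at random.
rng = np.random.default_rng(0)
noisy = y.copy()
flip = rng.choice(len(y), size=len(y) // 20, replace=False)
noisy[flip] = rng.integers(0, 10, size=len(flip))

# Out-of-sample probabilities, so the model never scores data it was trained on.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=2000), X, noisy,
    cv=5, method="predict_proba",
)
predicted = pred_probs.argmax(axis=1)
confidence = pred_probs.max(axis=1)

# Candidate label errors: confident predictions that contradict the given label.
suspect = np.where((predicted != noisy) & (confidence > 0.9))[0]
print(f"flagged {len(suspect)} suspect labels; "
      f"{np.isin(suspect, flip).mean():.0%} of them were genuinely corrupted")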

And be aware that if a dataset’s labels are particularly shoddy, training large complex neural networks may not always be so advantageous. Larger models tend to overfit to data more than smaller ones.

“Sometimes using smaller models will work for very noisy datasets. However, instead of always defaulting to using smaller models for very noisy datasets, I think the main takeaway is that machine learning engineers should clean and correct their test sets before they benchmark their models,” Northcutt concluded. ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/04/01/mit_ai_accuracy/
