Boeing Starliner crew capsule and Atlas V rocket complete dress rehearsal ahead of test flight

Boeing and launch partner United Launch Alliance (ULA) completed a key step today in pursuit of launching U.S. astronauts aboard their commercial spacecraft. The Boeing CST-100 Starliner crew capsule sat atop the ULA Atlas V rocket at Cape Canaveral Air Force Station’s Launch Complex 41 in Florida, with the rocket fully fueled, while the combined crew took part in a dress rehearsal called the “Integrated Day of Launch Test,” aka IDOLT, because space people all love acronyms so much.

The rehearsal paves the way for the uncrewed Orbital Flight Test (OFT) that NASA, ULA and Boeing are targeting for December 20 (which just changed today from December 19). The OFT will be exactly what the first crewed mission aboard the Starliner will be, but without the crew on board. Today’s test involved everything leading up to the actual launch, including real fueling, a launch countdown, preparing and checking the access hatch to the crew capsule and more.

This kind of practice was standard during the days of Shuttle launches, and it helps ensure that everyone knows what to do and when, and, more than just knowing, that they can demonstrate it works exactly as it’s supposed to in a real-world setting. The full integrated dress rehearsal is especially important, since while you can always drill teams independently, you never know exactly how things are going to work until you run them all together.

As mentioned, next up is the crucial OFT that will set the stage for a crewed launch early next year. The current target is December 20, so Boeing and its partners should get this in just before year’s end, if all goes to plan.

Published at Fri, 06 Dec 2019 22:23:00 +0000

AI Machine Learning Efforts Encounter A Carbon Footprint Blemish

Self-driving cars leave a measurable carbon footprint from the electricity needed to charge their batteries and to develop and maintain the machine learning models of their AI systems. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Green AI is arising.

Recent news about the benefits of Machine Learning (ML) and Deep Learning (DL) has taken a slightly downbeat turn toward pointing out that there is a potential ecological cost associated with these systems. In particular, AI developers and AI researchers need to be mindful of the adverse and damaging carbon footprint that they are generating while crafting ML/DL capabilities.

It is a so-called “green” or environmental wake-up call for AI that is worth hearing.

Let’s first review the nature of carbon footprints (CFPs), which are already quite familiar to all of us from examples such as the carbon-belching transportation industry.

A carbon footprint is usually expressed as the amount of carbon dioxide emissions spewed forth, including for example when you fly in a commercial plane from Los Angeles to New York, or when you drive your gasoline-powered car from Silicon Valley to Silicon Beach.

Carbon accounting is used to figure out how much a machine or system produces in terms of its carbon footprint when being utilized and can be calculated for planes, cars, washing machines, refrigerators, and just about anything that emits carbon fumes.

We all seem to now know that our cars are emitting various greenhouse gases, including the dreaded carbon dioxide vapors that have numerous adverse environmental impacts. Some are quick to point out that hybrid cars that use both gasoline and electrical power tend to have a lower carbon footprint than conventional cars, while Electric Vehicles (EVs) produce essentially zero carbon emissions at the tailpipe.

Calculating Carbon Footprints For A Car

When ascertaining the carbon footprint of a machine or device, it is easy to fall into the mental trap of only considering the emissions that occur when the apparatus is in use. A gasoline car might emit 200 grams of carbon dioxide per kilometer traveled, while a hybrid-electric might produce about half that at 92 grams, and an EV presumably emits 0 grams, per the EPA and the Department of Energy.

See this U.S. government website for detailed estimates about carbon emissions of cars: https://www.fueleconomy.gov/feg/info.shtml#guzzler

Though the direct carbon footprint does indeed involve what happens during the utilization of a machine or device, there is also the indirect carbon footprint that requires our equal attention, involving both upstream and downstream elements that contribute to a fuller picture of the true carbon footprint involved. For example, a conventional gasoline-powered car might generate perhaps 28 percent of its total lifetime carbon dioxide emissions when the car is originally manufactured and shipped to be sold.

You might at first be thinking of it like this:

  • Total CFP of a car = CFP while burning gasoline

But it should be more like this:

  • Total CFP of a car = CFP when the car is made + CFP while burning gasoline

Let’s define “CFP Made” as the factor representing the carbon footprint when a car is manufactured and shipped, and another factor, which we’ll call “CFP FuelUse,” representing the carbon footprint while the car is operating.

For the full lifecycle of a car, we need to add more factors into the equation.

There is a carbon footprint when the gasoline itself is being generated, which I’ll call “CFP FuelGen,” and thus we should include not just the CFP when the fuel is consumed but also when the fuel was originally processed or generated. Furthermore, once a car has seen its day and will be put aside and no longer used, there is a carbon footprint associated with disposing of or scrapping the car (“CFP Disposal”).

This also brings up a facet about EVs. The attention given to EVs as having zero CFP at the tailpipe is somewhat misleading when considering the total lifecycle CFP, since you should also include the carbon footprint required to generate the electrical power that gets charged into the EV and is then consumed while the EV is driving around. We’ll assign that amount to the CFP FuelGen factor.

The expanded formula is:

  • Total CFP of a car = CFP Made + CFP FuelUse + CFP FuelGen + CFP Disposal

Let’s rearrange the factors to group together the one-time carbon footprint amounts, which would be CFP Made and CFP Disposal, and group together the ongoing usage carbon footprint amounts, which would be CFP FuelUse and CFP FuelGen. This makes sense since the fuel-used and fuel-generated factors are going to vary depending upon how much a particular car is being driven. Presumably, a low-mileage car that mainly sits in your garage would have a smaller grand total of lifetime CFP consumption than a car that’s being driven all the time and racking up tons of miles.

The rearranged overall formula is:

  • Total CFP of a car = (CFP Made + CFP Disposal) + (CFP FuelUse + CFP FuelGen)
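
To make the arithmetic concrete, here is a minimal sketch in Python of the rearranged formula, grouping the one-time and ongoing amounts; the numbers plugged in are hypothetical placeholders for illustration, not measured values.

    # Minimal sketch of the rearranged carbon footprint (CFP) formula for a car.
    # All numeric inputs below are hypothetical placeholders for illustration only.

    def total_cfp(cfp_made, cfp_disposal, cfp_fuel_use, cfp_fuel_gen):
        """Return the total lifetime CFP (in pounds of CO2) of a car."""
        one_time = cfp_made + cfp_disposal      # manufacturing/shipping plus disposal
        ongoing = cfp_fuel_use + cfp_fuel_gen   # scales with how much the car is driven
        return one_time + ongoing

    # A heavily driven car versus a low-mileage garage-dweller (made-up numbers):
    heavy_use = total_cfp(cfp_made=35_000, cfp_disposal=2_000,
                          cfp_fuel_use=100_000, cfp_fuel_gen=20_000)
    low_use = total_cfp(cfp_made=35_000, cfp_disposal=2_000,
                        cfp_fuel_use=20_000, cfp_fuel_gen=4_000)
    print(heavy_use, low_use)  # 157000 vs. 61000 pounds of CO2

Notice that the one-time grouping is identical for both cars; only the ongoing usage grouping changes with mileage.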

Next, I’d like to add a twist that very few are considering when it comes to the emergence of self-driving autonomous cars, namely the carbon footprint associated with the AI Machine Learning for driverless cars.

Let’s call that amount “CFP ML” and add it to the equation.

  • Total CFP of a car = (CFP Made + CFP Disposal) + (CFP FuelUse + CFP FuelGen) + CFP ML

You might be puzzled as to what this new factor consists of and why it is being included. Allow me to elaborate.

AI Machine Learning As A Carbon Footprint

In a recent study done at the University of Massachusetts, researchers examined several AI Machine Learning or Deep Learning systems that are being used for Natural Language Processing (NLP) and tried to estimate how much of a carbon footprint was expended in developing those NLP systems (see the study at this link here: https://arxiv.org/pdf/1906.02243.pdf).

You likely already know something about NLP if you’ve ever had a dialogue with Alexa or Siri. Those popular voice-interactive systems are trained via large-scale or deep Artificial Neural Networks (ANNs), a kind of computer-based model that simplistically mimics brain-like neurons and neural networks, and a vital area of AI for building systems that can “learn” from datasets provided to them.

Those of you versed in computers might be perplexed that the development of an AI Machine Learning system would somehow produce CFP since it is merely software running on computer hardware, and it is not a plane or a car.

Well, consider that electrical energy is used to power the computer hardware, which in turn runs the software that produces the ML model. You could then assert that the crafting of the AI Machine Learning system has caused some amount of CFP via however the electricity that powered the ML training operation was generated.

According to the calculations done by the researchers, a somewhat minor or modest NLP ML model consumed an estimated 78,468 pounds of carbon dioxide emissions for its training, while a larger NLP ML consumed an estimated 626,155 pounds during training. As a basis for comparison, they report that an average car over its lifetime might consume 126,000 pounds of carbon dioxide emissions.

A key means of calculating the carbon dioxide produced was based on the EPA’s formula: the total electrical power consumed, in kilowatt-hours, is multiplied by a factor of 0.954 to arrive at the average CFP in pounds, based on assumptions about power generation plants in the United States.
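
As a rough illustration of that conversion, the sketch below applies the 0.954 pounds-per-kilowatt-hour factor; the kilowatt-hour inputs are invented for illustration and are not figures from the study.

    # Sketch of the CO2 estimate described above: pounds of CO2 = kWh consumed * 0.954.
    # The kilowatt-hour inputs are invented; only the 0.954 factor (average U.S.
    # generation mix, as cited in the article) comes from the text.

    LBS_CO2_PER_KWH = 0.954

    def training_cfp_lbs(kwh_consumed: float) -> float:
        """Estimate the training carbon footprint in pounds of CO2."""
        return kwh_consumed * LBS_CO2_PER_KWH

    for label, kwh in [("smaller NLP model (hypothetical kWh)", 80_000),
                       ("larger NLP model (hypothetical kWh)", 650_000)]:
        print(f"{label}: {training_cfp_lbs(kwh):,.0f} lbs CO2")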

Significance Of The CFP For Machine Learning

Why should you care about the CFP of the AI Machine Learning for an autonomous car?

Presumably, conventional cars don’t have to include the CFP ML factor since a conventional car does not encompass such a capability; therefore, the factor would have a value of zero in the case of a conventional car. Meanwhile, for a driverless car, the CFP ML would have some determinable value and would need to be added into the total CFP calculation for driverless cars.

Essentially, it burdens the carbon footprint of a driverless car and tends to heighten the CFP in comparison to a conventional car.

For those of you who might react instantly to this aspect, I don’t think this means that the sky is falling and that we should somehow put the brakes on developing autonomous cars. You ought to consider these salient topics:

  • If the AI ML is being deployed across a fleet of driverless cars, perhaps in the hundreds, thousands, or eventually millions of autonomous cars, and if the AI ML is the same instance for each of those driverless cars, the amount of CFP for the AI ML production is divided across all of those driverless cars and is therefore likely a relatively small fractional addition of CFP on a per-driverless-car basis (see the sketch after this list).
  • Autonomous cars are more than likely to be EVs, partially due to the handy aspect that an EV is adept at storing electrical power, of which the driverless car sensors and computer processors slurp up and need profusely. Thus, the platform for the autonomous car is already going to be significantly cutting down on CFP due to using an EV.
  • Ongoing algorithmic improvements in producing AI ML are bound to make it more efficient to create such models and therefore either decrease the amount of time required to produce the models (accordingly likely reducing the electrical power consumed) or better use the electrical power in terms of faster processing by the hardware or software.
  • For semi-autonomous cars, you can expect that we’ll see AI ML being used there too, in addition to the fully autonomous cars, and therefore the reality will be that the CFP of the AI ML will apply to eventually all cars since conventional cars will gradually be usurped by semi-autonomous and fully autonomous cars.
  • Some might argue that the CFP of the AI ML ought to be tossed into the CFP Made bucket, meaning that it is just another CFP component within the effort to manufacture the autonomous car. And, if so, based on preliminary analyses, it would seem like the CFP AI ML is rather inconsequential in comparison to the rest of the CFP for making and shipping a car.
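
As referenced in the first bullet, here is a minimal back-of-the-envelope sketch of how a one-time ML training footprint amortizes across a fleet; the fleet sizes are hypothetical, and the training footprint simply reuses the larger NLP figure cited earlier as an order-of-magnitude stand-in.

    # Back-of-the-envelope amortization of a one-time ML training CFP across a
    # fleet of driverless cars. Fleet sizes are hypothetical; the training CFP
    # reuses the larger NLP training figure cited earlier as a stand-in.

    ML_TRAINING_CFP_LBS = 626_155

    for fleet_size in (100, 10_000, 1_000_000):
        per_car = ML_TRAINING_CFP_LBS / fleet_size
        print(f"fleet of {fleet_size:>9,} cars -> {per_car:,.1f} lbs CO2 per car")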

For those of you interested in trying out an experimental impact tracker in your AI ML developments, there are various tools coming available, including for example this one posted at GitHub that was developed jointly by Stanford University, Facebook AI Research, and McGill University: https://github.com/Breakend/experiment-impact-tracker.
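
For readers who want to experiment, a minimal usage sketch is below; the import path and method names follow the usage documented in the project’s README and should be treated as assumptions to verify against the repository.

    # Minimal sketch of wrapping a training run with experiment-impact-tracker.
    # The import path and method names are assumptions based on the project's
    # documented usage; verify them against the GitHub repository before relying on them.

    from experiment_impact_tracker.compute_tracker import ImpactTracker

    def train_model():
        # ... your ML/DL training loop would go here ...
        pass

    tracker = ImpactTracker("impact_logs")  # directory where energy/carbon logs are written
    tracker.launch_impact_monitor()         # starts background monitoring of the run
    train_model()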

As they say, your mileage may vary in terms of using any of these emerging tracking tools and you should proceed mindfully and with appropriate due diligence for applicability and soundness.

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

Conclusion

There’s an additional consideration for the CFP of AI ML.

You could claim that there is a CFP AI ML for originating the Machine Learning model that will be driving the autonomous car, and then there is the ongoing updating and upgrading involved too.

Therefore, the CFP AI ML is more than just a one-time CFP; it is also part of the ongoing grouping.

Let’s split it across the two groupings:

  • Total CFP of a car = (CFP Made + CFP Disposal + CFP ML1) + (CFP FuelUse + CFP FuelGen + CFP ML2)

You can go even deeper and point out that some of the AI ML will be taking place in-the-cloud of the automaker or tech firm and then be pushed down into the driverless car (via Over-The-Air or OTA electronic communications), while some of the AI ML might be also occurring in the on-board systems of the autonomous car. In that case, there’s the CFP to be calculated for the cloud-based AI ML and then a different calculation to determine the CFP of the onboard AI ML.

There are some who point out that you can burden a lot of things in our society if you are going to consider the amount of electrical power that they use, and perhaps it is unfair to suddenly bring up the CFP of AI ML in isolation from the myriad other ways in which CFP arises due to any kind of computer-based system.

In the case of autonomous cars, it is also pertinent to consider not just the “costs” side of things, which includes the carbon footprint factor, but also the benefits side of things.

Even if there is some attributable amount of CFP for driverless cars, it would be prudent to consider what kinds of benefits we’ll derive as a society and weigh those against the CFP aspects. By taking into account the hoped-for benefits, including the potential of human lives saved, the potential for mobility access for all, including the mobility marginalized, and other societal transformations, you get a much more robust picture.

In that sense, we need to figure out this equation:

  • Societal ROI of autonomous cars = Societal benefits – Societal costs

We don’t yet know how it is going to pan out, but most are hoping that the societal benefits will readily outweigh the societal costs, and therefore that the ROI for self-driving autonomous cars will be hefty, leaving us all nearly breathless.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/ai-machine-learning-efforts-encounter-a-carbon-footprint-blemish/

AI Helping to Transform Education in Pandemic Era

AI is supporting innovation in education, such as with software to help struggling readers by providing micro-feedback. (GETTY IMAGES)

By AI Trends Staff

The impact of the COVID-19 pandemic on education has been profound, with new ways of thinking about how best to teach students reverberating in institutions of higher learning, K-12 classrooms and in the business community.

The role of AI is central to the discussion on every level. For the K-12 classroom, teachers are thinking about how to use AI as a teaching tool. For example, Deb Norton of the Oshkosh Area school district in Wisconsin was asked several years ago by the International Society for Technology in Education to lead a course on the uses of AI in K-12 classrooms, according to a recent account in Education Week.

The course includes sections on the definition of artificial intelligence, machine learning, voice recognition, chatbots and the role of data in AI systems. To teach about machine learning, one teacher tied it to yoga: a student could do a yoga pose that would be recognized via machine learning, and the machine could then give feedback on the pose.

Another teacher working with elementary students used the coding site Scratch to create interactive characters and programs, such as creating a skill for Amazon’s Alexa; skills are like apps on a smartphone, except activated with voice.

Deb Norton, teacher, Oshkosh Area school district, Wisconsin

Asked if she foresees increasing interest in AI as a result of increased remote learning during the pandemic, Norton stated, “AI could become a really big part of virtual learning and at-home learning, but I just don’t think we’re quite there yet. For many of our educators, they’re just dipping their feet into how this would work.”

Protecting privacy is an issue. Many schools will not allow Alexa and Google Home to be opened up, out of concern for personal privacy. One workaround could be a school-only network to serve as a test bed.

She does see the potential for AI to help with learning management applications “from a teacher-educator point of view, to be able to engage and monitor and track the types of lessons and strategies that can be delivered in the most effective way in the classroom.”

Investor Sees Disruption Ahead in Higher Education

From an investment point of view, AI in education in the new era represents opportunity. Some see disruption looming in the higher education university system as a result.

“A reckoning is coming for schools and universities,” stated Scott Galloway,  a professor of marketing at the NYU Stern School of Business, in a recent account in TechCrunch.  “We’ve raised prices 1400% but if you walked into a classroom today it wouldn’t look, smell or feel much different from what it did 40 years ago.”

Likening it to a shrinkage in retail – which saw 9,500 closures in 2019 and more than 15,000 so far in 2020 – he predicts a sustained drop in applications for four-year universities, with dozens if not hundreds of colleges and universities unable to recover.

Scott Galloway, professor of marketing, NYU Stern School of Business

Roei Deutsch, co-founder and CEO of live video course marketplace Jolt Inc., stated during a talk on the Coffee Break podcast, “The blow to the world of higher education was bound to come. There is a higher education bubble, something there does not work in terms of cost versus what students receive in return, and you can say that the coronavirus crisis is the beginning of this bubble’s bursting.”

Thus the virus is seen as accelerating a trend that was already underway. The global corporate e-learning market is estimated to grow to $30 billion at a 13% compound annual growth rate through 2022. “This growth was driven in large part by the increased importance of matching workforce capabilities with actual required skill sets,” stated Joe Apprendi, a general partner at Revel, a venture capital firm formed by business founders, and the author of the TechCrunch article.

New core education products, as suggested by teacher Norton, include learning experience platforms (LXP) and learning management systems (LMS), used to monitor, track and administer employee learning activities.

Learning software is primarily designed to create more personalized learning experiences and help users discover new learning opportunities by combining learning content from different sources, while recommending and delivering them — with the support of AI — across multiple digital touch points such as desktop applications and mobile learning apps.

Colleges, universities and enterprises are all looking at these tools. Instead of building training academies to help train people for new or expanded roles in an organization, “Enterprises will now target the front end of the recruiting funnel where higher education begins,” Apprendi suggests. “The potential for global enterprises to own the university experience is suddenly, very real.” The online faculty could be professors from shuttered universities. A hybrid, for-profit model that blends universities and global enterprises could emerge, along the lines of the US Naval Academy, where a tuition-free education comes with an obligation to serve for a period of time.

“Students could see debt cut in half and have a clear path forward toward employment,” he stated. Whatever landscape emerges, changes are in store for universities and colleges.

Software Helping with Remote Learning Challenges

Meanwhile back in K-12 education, the transition to remote learning has been challenging. Many students fail to log into classrooms or complete assignments, according to a recent account in TechRepublic. The number of students logging in has declined by 43% since the start of school closures, and the number of students completing at least one virtual lesson has dropped by 44%, according to a report from Achieve3000. The report was based on data from 1.6 million students across 1,364 school districts.

The transition to e-learning is particularly difficult for struggling readers, who need more time and individual assistance with lessons, the report found. An innovative approach is taken by AI-powered software Amira, designed to remotely help students become better readers.

Amira has been recognized, such as with a nomination for a Codie Award for Best Use of Emerging Technology for Learning in Education. Amira is an intelligent reading assistant designed from decades of research on the science of reading from the University of Texas, and AI in support of reading development from Carnegie Mellon University.

“Amira listens, delivers in-the-moment error-specific feedback, and reports progress for every reading session,” stated Sara Erickson, Amira’s vice president of customer success. “Amira is changing how teachers focus their reading instruction with the help of machine learning to accelerate student reading growth.”

As the student reads, Amira uses AI to decipher what obstacles the young reader is facing, delivering micro-interventions that help to bridge the reading skills gaps. The software helps assess reading fluency, pinpoint errors, and improve those weaknesses.

“Teachers will never be replaced by software, but they can be supported by it,” Erickson stated. Approximately 125 school districts are currently using Amira, with the K-3 student population in those districts totaling more than 600,000.

Read the source articles in Education Week, TechCrunch and TechRepublic.

Source: https://www.aitrends.com/education/ai-helping-to-transform-education-in-pandemic-era/

AI-Based Tools Predict COVID-19 Disease Severity

Researchers are probing the use of AI and imaging to determine which patients testing positive for COVID-19 are most likely to need extensive treatment. (GETTY IMAGES)

By Paul Nicolaus, Science Writer

Two healthcare workers under the age of 30 fell ill in Wuhan, China, where the first COVID-19 case was reported. One survived. The other wasn’t as fortunate. But why?

It’s an example researchers at the Radiological Society of North America highlighted while pointing out that this phenomenon—some patients falling critically ill and dying as others experience minimal symptoms or none at all—is one of the most mysterious elements of this disease. Mortality does correlate with factors such as age, gender, and some chronic conditions. Considering young and previously healthy individuals have succumbed to this virus, though, there could be more complex prognostic factors involved.

Current diagnostic tests determine whether or not individuals have the virus. They do not, however, offer clues as to just how sick a COVID-positive patient could become. For the time being, clinicians cannot easily predict which patients who test positive will require hospital admission for oxygen and possible ventilation.

Because most cases are mild, identifying those at risk for severe and critical cases early on could help healthcare facilities prioritize care and resources such as ventilators and ICU beds. Figuring out who is at low risk for complications could be useful, too, as this could reduce hospital admissions while these patients are managed at home. As health systems across the globe continue to deal with large numbers of COVID-19 cases, new and emerging technologies may be able to help in this regard.

AI Plus Imaging

Researchers have been probing the use of AI and imaging to determine who has COVID-19, but some groups are taking a different approach and using this same combination to determine which patients are most likely to need the most extensive treatment.

In a paper published July 22 in Radiology: Artificial Intelligence (doi: 10.1148/ryai.2020200079), researchers at Massachusetts General Hospital and Harvard Medical School reveal efforts to develop an automated measure of COVID-19 pulmonary disease severity using chest radiographs (CXRs) and a deep-learning algorithm.

Elsewhere, an international group proposed an AI model that uses COVID-19 patients’ geographical, travel, health, and demographic data to predict disease severity and outcome. Future work is expected to focus on the development of a pipeline that combines CXR scanning models with these types of healthcare data and demographic processing models, according to their paper published July 3 in Frontiers in Public Health (doi: 10.3389/fpubh.2020.00357).

In June, GE Healthcare announced a partnership with the University of Oxford-led National Consortium of Intelligent Medical Imaging (NCIMI) in the UK to develop algorithms aimed at predicting COVID-19 severity, complications, and long-term impact.

Similarly, experts at the University of Copenhagen set out to create models that calculate the risk of a COVID-19 patient’s need for intensive care. The algorithms are designed to find patterns among Danish coronavirus patients who have been through the system to find shared traits among the most severely affected. The patterns are compared with data gathered from recently hospitalized patients, such as X-rays, and sent to a supercomputer to predict how likely a patient is to require a ventilator and how many days will pass before that need arises.

Meanwhile, researchers at Case Western Reserve University are using computers to find details in digital images of chest scans that are not easily seen by the human eye to quickly determine which patients are most likely to experience further deterioration of their health and require the use of ventilators.

“The approach we’ve taken is actually to create a synergistic artificial intelligence algorithm—one that combines patterns from CT scans with clinical parameters based on lab values,” Anant Madabhushi, professor of biomedical engineering at Case Western Reserve and head of the Center for Computational Imaging and Personalized Diagnostics (CCIPD) told Diagnostics World.

Anant Madabhushi, professor of biomedical engineering, Case Western Reserve University

“And the secret sauce, if you will, is the fact that we’re using neural networks and deep learning to automatically go into the CT scans and identify exactly where the region of disease is,” he added. Zeroing in on the disease presentation on the CT scan makes it possible to mine patterns using the neural networks from those regions and combine them with the clinical parameters.

Madabhushi and colleagues have completed a multi-site study that included nearly 900 patients from Wuhan, China, and Cleveland, Ohio. They found that the combination of the clinical parameters and imaging features yielded a higher predictive accuracy in identifying who would go on to need a ventilator compared to a model that uses the imaging features alone and also compared to a model that used only the clinical parameters.
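
To illustrate the general idea of such a fusion model (not the Case Western team’s actual pipeline), the sketch below concatenates hypothetical imaging-derived features with clinical lab values and fits an off-the-shelf classifier; every feature, value, and model choice here is an assumption made purely for illustration.

    # Illustrative fusion of imaging-derived features with clinical parameters.
    # This is NOT the published model; the features, data, and classifier are
    # synthetic stand-ins showing the general "combine, then classify" pattern.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_patients = 200

    ct_features = rng.normal(size=(n_patients, 16))        # e.g., patterns mined from diseased lung regions
    clinical = rng.normal(size=(n_patients, 4))             # e.g., lab values such as inflammation markers
    needs_ventilator = rng.integers(0, 2, size=n_patients)  # synthetic outcome labels

    # Concatenate the two feature groups and fit a simple classifier.
    X = np.hstack([ct_features, clinical])
    clf = LogisticRegression(max_iter=1000).fit(X, needs_ventilator)

    print("training accuracy on synthetic data:", clf.score(X, needs_ventilator))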

The inspiration for this work came about months ago as Italy hit its peak and the country’s hospitals were overwhelmed with patients who couldn’t breathe. Some of the stories were gut-wrenching, he explained, particularly the ones that highlighted how physicians had to make case by case determinations about who got a ventilator and who didn’t.

“It really got me thinking about what the implications are for the US or the rest of the world,” he said, if a second wave materializes in the fall as some experts have predicted. Of course, we are not out of the first wave yet, he acknowledged, but there is a real concern that a second wave could be even deadlier than the first considering it would take place during flu season.

Madabhushi and colleagues began building their model using images and datasets found online in early March. In April, the CCIPD was offered digital images of chest scans taken from roughly 100 early victims of the novel coronavirus from Wuhan, China. Using that information, the researchers developed machine learning models to predict the risk of a COVID-19 patient needing a ventilator—one based on neural networks and another derived from radiomics.

Early CT scans from patients with COVID-19 showed distinctive patterns specific to those in the intensive care unit (ICU) compared to those not in the ICU. Initially, the research team was able to achieve an accuracy of roughly 70% to 75%. Since then, they have improved upon that performance metric, he said, raising the accuracy level to about 84%.

They have worked to circumvent bias by exposing the AI to patients from different demographics, ethnicities, populations, and scanners. But there’s still work to be done, including additional multi-site testing and prospective field testing. Madabhushi hopes to validate the technology on patients from the Louis Stokes Cleveland VA Medical Center, where he is a research scientist, and is looking to prove the technology at Cleveland Clinic as well.

The team is also developing a user interface that couples the AI with a tool that allows the end-user to enter a CT scan and clinical parameters to see the likelihood of needing a ventilator. Before clinically deploying the technology, he wants to put this in the hands of end-users for additional prospective field testing so that users can get comfortable with the tool, get a sense of how to work with it, and learn how to interpret and use the results coming out of it.

Rather than making arbitrary decisions about who gets a ventilator and who does not, the big hope is that this type of triaging technology could enable more rational decision-making for appropriating resources.

AI and Blood Biomarkers

Another group was also motivated by the scenario that played out in northern Italy back in February and March as a lack of ICU beds led to tough decisions for clinicians.

“Unfortunately, this process, I would say, is a little bit cyclical,” John T. McDevitt, professor of biomaterials at NYU College of Dentistry and professor of chemical and molecular engineering at NYU Tandon School of Engineering told Diagnostics World. Similar scenarios have played out in New York City, for instance, and more recently in Houston. “When you hit this point where you don’t have any buffer, any excess capacity, then it forces a very difficult situation.”

John T. McDevitt, professor of chemical and molecular engineering at NYU Tandon School of Engineering

He wants to provide clinicians with what he describes as “a flashlight that goes into this dark room of COVID-19 severity.” The intent is to look into the future and attempt to figure out which patients will perish unless extreme measures are taken, which patients should be admitted to the hospital, and which patients can safely recover from home.

“I would describe this as the third leg of the stool for the diagnosis and prognosis of COVID-19,” he explained. PCR testing has been used to determine whether individuals have the disease, and serology testing has helped establish whether people have had the condition in the past. The missing leg here, he said, has been determining which patients are going to end up in the hospital and which patients are most likely to perish.

To fill that void, he and colleagues have developed a smartphone app that uses AI and biomarkers in patients’ blood to determine COVID-19 disease severity. Their findings were published June 3 in Lab on a Chip (doi: 10.1039/D0LC00373E).

Relying on data from 160 hospitalized COVID-19 patients in Wuhan, China, they found four biomarkers measured in blood tests that were elevated in the patients who died compared with those who recovered. These biomarkers (C-reactive protein, myoglobin, procalcitonin, and cardiac troponin I) can signal complications relevant to COVID-19, such as reduced cardiovascular health, acute inflammation, or lower respiratory tract infection.

The researchers then developed a model using the biomarkers as well as age and sex—two risk factors. They trained the model to define the patterns of COVID-19 disease and predict its severity. When a patient’s information is entered, the model comes up with a numerical severity score ranging from 0 (mild) to 100 (critical), reflecting the probability of death from the complications of COVID-19.
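
As a rough illustration of how a 0-to-100 score of this kind could be produced (this is not the published model), the sketch below passes a weighted combination of the inputs through a logistic function and scales the resulting probability to the 0-100 range; the weights, bias, and input values are made-up placeholders, and only the choice of inputs follows the article.

    # Illustrative severity scoring: scale an estimated probability of a critical
    # outcome to a 0-100 score. The weights and bias are made-up placeholders;
    # only the inputs (four biomarkers plus age and sex) follow the article.

    import math

    def severity_score(crp, myoglobin, procalcitonin, troponin_i, age, sex_male,
                       weights=(0.8, 0.5, 0.6, 0.9, 0.03, 0.2), bias=-4.0):
        """Map patient inputs to a 0 (mild) to 100 (critical) score."""
        features = (crp, myoglobin, procalcitonin, troponin_i, age, float(sex_male))
        z = bias + sum(w * x for w, x in zip(weights, features))
        probability = 1.0 / (1.0 + math.exp(-z))   # logistic function
        return 100.0 * probability

    print(round(severity_score(crp=2.0, myoglobin=1.0, procalcitonin=0.5,
                               troponin_i=0.1, age=45, sex_male=True), 1))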

The model was validated using information from 12 hospitalized COVID-19 patients from Shenzhen, China, and further validated using data from over 1,000 New York City patients. The app has also been evaluated in the Family Health Centers at NYU Langone in Brooklyn.

The diagnostic system uses small samples, such as swabs of saliva or drops of blood from a fingertip, which are added to credit card-sized cartridges. The cartridge is put into a portable analyzer that tests for a range of biomarkers, with results available in under 30 minutes. After optimizing the app’s clinical utility, the goal is to roll it out nationwide and worldwide.

Over the coming months, McDevitt’s laboratory, in partnership with SensoDx—a company spun out of his lab—intends to develop and scale the ability to produce a severity score similar to the way people with diabetes check their blood sugar. The plan is to distribute the tool first to disease epicenters to maximize its impact considering not all locations are dealing with a shortage of ICU beds or respirators.

McDevitt also highlighted the potential to help address racial disparities. “COVID has ripped the scab off of this particular wound,” he said. This technology can help level the healthcare playing field and remove some of the unintentional racial or ethnic biases that may weave their way into the delivery of healthcare. By putting the severity score on a numerical index, it arguably provides a more objective way to make challenging pandemic-related healthcare decisions.

McDevitt and colleagues aren’t the only ones pursuing blood-based biomarkers for the prediction of COVID-19 disease severity.

Another example can be found in a study published May 14 in Nature Machine Intelligence (doi: 10.1038/s42256-020-0180-7) and conducted by a group of Chinese researchers, who used a database of blood samples from nearly 500 infected patients in the Wuhan region.

Their machine learning-based model predicts the mortality rates of patients over 10 days in advance with more than 90% accuracy, according to the paper, using three biomarkers: lactic dehydrogenase, lymphocyte, and high-sensitivity C-reactive protein.

Paul Nicolaus is a freelance writer specializing in science, nature, and health. Learn more at www.nicolauswriting.com. This article was originally published in Diagnostics World.

Source: https://www.aitrends.com/ai-research/ai-based-tools-predict-covid-19-disease-severity/
