How we tune out distractions

Imagine trying to focus on a friend’s voice at a noisy party, or blocking out the phone conversation of the person sitting next to you on the bus while you try to read. Both of these tasks require your brain to somehow suppress the distracting signal so you can focus on your chosen input.

MIT neuroscientists have now identified a brain circuit that helps us to do just that. The circuit they identified, which is controlled by the prefrontal cortex, filters out unwanted background noise or other distracting sensory stimuli. When this circuit is engaged, the prefrontal cortex selectively suppresses sensory input as it flows into the thalamus, the site where most sensory information enters the brain.

“This is a fundamental operation that cleans up all the signals that come in, in a goal-directed way,” says Michael Halassa, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers are now exploring whether impairments of this circuit may be involved in the hypersensitivity to noise and other stimuli that is often seen in people with autism.

Miho Nakajima, an MIT postdoc, is the lead author of the paper, which appears in the June 12 issue of Neuron. Research scientist L. Ian Schmitt is also an author of the paper.

Shifting attention

Our brains are constantly bombarded with sensory information, and we are able to tune out much of it automatically, without even realizing it. Other distractions that are more intrusive, such as your seatmate’s phone conversation, require a conscious effort to suppress.

In a 2015 paper, Halassa and his colleagues explored how attention can be consciously shifted between different types of sensory input, by training mice to switch their focus between a visual and auditory cue. They found that during this task, mice suppress the competing sensory input, allowing them to focus on the cue that will earn them a reward.

This process appeared to originate in the prefrontal cortex (PFC), which is critical for complex cognitive behavior such as planning and decision-making. The researchers also found that a part of the thalamus that processes vision was inhibited when the animals were focusing on sound cues. However, there are no direct physical connections from the prefrontal cortex to the sensory thalamus, so it was unclear exactly how the PFC was exerting this control, Halassa says.

In the new study, the researchers again trained mice to switch their attention between visual and auditory stimuli, then mapped the brain connections involved. They first determined which PFC outputs were essential for the task by systematically inhibiting PFC projection terminals in each target region. This revealed that the PFC connection to a brain region known as the striatum is necessary to suppress visual input when the animals are paying attention to the auditory cue.

Further mapping revealed that the striatum then sends input to a region called the globus pallidus, which is part of the basal ganglia. The basal ganglia then suppress activity in the part of the thalamus that processes visual information.
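
For readers who like to see mechanisms as code, the chain of excitation and inhibition can be caricatured in a few lines. The following is a deliberately toy rate model — every weight and the gain equation are invented for illustration and are not taken from the study — showing how a prefrontal "attend to audio" signal could propagate through the striatum and globus pallidus to turn down the gain on the visual thalamus:

```python
# Toy rate model of the gating chain described above:
# PFC --excites--> striatum --inhibits--> globus pallidus,
# with lower pallidal activity standing in for stronger suppression
# of the visual thalamus. All numbers are invented; NOT the study's model.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def visual_thalamus_gain(pfc_attend_audio):
    """Gain (0..1) applied to visual input for a given level of
    prefrontal 'attend to audio' drive (0 = none, 1 = full)."""
    striatum = relu(1.2 * pfc_attend_audio)   # PFC excites striatum
    pallidum = relu(1.0 - 1.5 * striatum)     # striatum inhibits pallidum
    # Less pallidal activity -> more suppression of the visual thalamus,
    # the net effect the article reports for the basal ganglia.
    return float(np.clip(0.2 + 0.8 * pallidum, 0.0, 1.0))

for drive in (0.0, 0.5, 1.0):
    print(f"attend-to-audio drive {drive:.1f} -> "
          f"visual gain {visual_thalamus_gain(drive):.2f}")
```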

Using a similar experimental setup, the researchers also identified a parallel circuit that suppresses auditory input when animals pay attention to the visual cue. In that case, the circuit travels through parts of the striatum and thalamus that are associated with processing sound, rather than vision.

The findings offer some of the first evidence that the basal ganglia, which are known to be critical for planning movement, also play a role in controlling attention, Halassa says.

“What we realized here is that the connection between PFC and sensory processing at this level is mediated through the basal ganglia, and in that sense, the basal ganglia influence control of sensory processing,” he says. “We now have a very clear idea of how the basal ganglia can be involved in purely attentional processes that have nothing to do with motor preparation.”

Noise sensitivity

The researchers also found that the same circuits are employed not only for switching between different types of sensory input such as visual and auditory stimuli, but also for suppressing distracting input within the same sense — for example, blocking out background noise while focusing on one person’s voice.

The team also showed that when the animals are alerted that the task is going to be noisy, their performance actually improves, as they use this circuit to focus their attention.

“This study uses a dazzling array of techniques for neural circuit dissection to identify a distributed pathway, linking the prefrontal cortex to the basal ganglia to the thalamic reticular nucleus, that allows the mouse brain to enhance relevant sensory features and suppress distractors at opportune moments,” says Daniel Polley, an associate professor of otolaryngology at Harvard Medical School, who was not involved in the research. “By paring down the complexities of the sensory stimulus only to its core relevant features in the thalamus — before it reaches the cortex — our cortex can more efficiently encode just the essential features of the sensory world.”

Halassa’s lab is now doing similar experiments in mice that are genetically engineered to develop symptoms similar to those of people with autism. One common feature of autism spectrum disorder is hypersensitivity to noise, which could be caused by impairments of this brain circuit, Halassa says. He is now studying whether boosting the activity of this circuit might reduce sensitivity to noise.

“Controlling noise is something that patients with autism have trouble with all the time,” he says. “Now there are multiple nodes in the pathway that we can start looking at to try to understand this.”

The research was funded by the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the Simons Foundation, the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, and the Human Frontier Science Program.


Source: http://news.mit.edu/2019/how-brain-ignores-distractions-0612

CMU researchers show potential of privacy-preserving activity tracking using radar

Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room last got cleaned or whether the bins already got taken out?

Or — for an altogether healthier use-case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin cycles on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras being installed inside your home.

Another bit of fascinating work from Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is of course a horrible privacy risk. Which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras”, as they put it, datasets for training AI models to recognize different human activities from RF signals are not readily available, in the way visual data for training other types of AI models is.

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.
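
The article doesn’t reproduce the pipeline itself, but the core idea can be sketched. In the hypothetical version below, the function names and the pose-to-velocity-histogram steps are this sketch’s assumptions, not the CMU group’s published code: body keypoints extracted from public video are converted into the radial velocities a virtual radar would see, then histogrammed into synthetic doppler frames for training.

```python
# Hypothetical sketch of a video-to-synthetic-doppler pipeline
# (illustrative only; not the CMU Future Interfaces Group code).

import numpy as np

def radial_velocities(kp_t0, kp_t1, radar_pos, dt=1 / 30):
    """Per-joint velocity component toward a virtual radar, from two
    consecutive frames of 3D pose keypoints (shape: [joints, 3])."""
    vel = (kp_t1 - kp_t0) / dt                      # m/s per joint
    to_radar = radar_pos - kp_t0                    # joint -> radar vectors
    to_radar /= np.linalg.norm(to_radar, axis=1, keepdims=True)
    return np.sum(vel * to_radar, axis=1)           # radial component only

def synthetic_doppler_frame(radial_vels, v_max=4.0, bins=64):
    """Histogram joint radial velocities into one synthetic doppler
    'frame', mimicking the velocity axis of a real doppler spectrogram."""
    hist, _ = np.histogram(radial_vels, bins=bins, range=(-v_max, v_max))
    return hist / max(hist.sum(), 1)

# Example: two fake pose frames for a 17-joint skeleton.
rng = np.random.default_rng(0)
pose0 = rng.standard_normal((17, 3))
pose1 = pose0 + 0.01 * rng.standard_normal((17, 3))
frame = synthetic_doppler_frame(
    radial_velocities(pose0, pose1, np.array([0.0, 0.0, 3.0])))
print(frame.shape)  # (64,)
```

Stacking such frames over time yields a velocity-vs-time image that an ordinary image classifier can be trained on.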

The results can be seen in this video — where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats. Purely from its ability to interpret the mmWave signal the movements generate — and purely having been trained on public video data. 

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course. But things are already moving on that front: Google has been dipping its toe in via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sensor to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use-cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in buildings to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worrisome than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the B2B market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance that could be a better fit for consumer connected devices that would otherwise pose privacy risks, such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors, it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. That said, it’s hard to argue that the data radar generates would be as sensitive as equivalent visual data, were either to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well… ”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like YouTube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”
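
To make the throughput point concrete, here is a minimal fan-out sketch using Python’s standard multiprocessing module; `synthesize_radar_from_video` is a hypothetical placeholder for the per-video conversion step, not a real function from the project:

```python
# Minimal parallel fan-out over a video corpus (illustrative sketch).

from multiprocessing import Pool
from pathlib import Path

def synthesize_radar_from_video(path):
    # Hypothetical placeholder: decode frames, estimate pose, and emit
    # synthetic doppler frames for this one video (see sketch above).
    return f"{path.name}: done"

if __name__ == "__main__":
    videos = sorted(Path("corpus").glob("*.mp4"))
    # Scaling the worker count (here 8 local processes; in principle
    # 100 cloud instances) is what turns the 2-hours-per-hour-of-video
    # cost into high aggregate throughput.
    with Pool(processes=8) as pool:
        for result in pool.imap_unordered(synthesize_radar_from_video, videos):
            print(result)
```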

And while RF signals do reflect, and do so to different degrees off of different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low-end CPUs (no deep learning or anything).”
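
His point about the signal being compact is easy to quantify. Standard doppler processing turns each short window of a radar’s complex baseband samples into a single velocity profile via an FFT, so a second of sensing collapses into a few thousand values rather than the millions of pixels in a camera frame. A generic sketch (textbook doppler processing, not the paper’s specific pipeline):

```python
# Generic doppler spectrogram from complex baseband radar samples
# (standard signal processing; not the paper's specific pipeline).

import numpy as np

def doppler_spectrogram(samples, win=256, hop=128):
    """samples: 1-D complex baseband signal. Returns a [frames, win]
    array of per-window FFT magnitudes; the FFT axis maps to doppler
    shift (fftshift centers zero velocity)."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(samples) - win + 1, hop):
        chunk = samples[start:start + win] * window
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(chunk))))
    return np.array(frames)

# One second at 2 kHz -> ~14 frames x 256 bins ≈ 3.6k values,
# versus ~2 million pixels for a single 1080p camera frame.
fs = 2000
rng = np.random.default_rng(0)
tone = np.exp(2j * np.pi * 200 * np.arange(fs) / fs)  # fake 200 Hz doppler tone
noise = 0.1 * (rng.standard_normal(fs) + 1j * rng.standard_normal(fs))
print(doppler_spectrogram(tone + noise).shape)  # (14, 256)
```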

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.

Source: https://techcrunch.com/2021/05/11/cmu-researchers-show-potential-of-privacy-preserving-activity-tracking-using-radar/

Fintech and retail banking firms urged to get involved in Water Breakthrough Challenge

- The £40 million Water Breakthrough Challenge aims to spark ambitious innovation and enable new approaches and ways of working to address the big challenges facing the water sector.

- Entries are now open until Thursday 3 June 2021, with successful partnerships winning up to £10 million to develop and implement their initiatives.

- The Breakthrough Challenge is run by Ofwat and Nesta Challenges, supported by Arup, and is the second in a series of competitions funded through Ofwat’s Innovation Fund.

- The winners of Ofwat’s first competition – the Innovation in Water Challenge – include projects that turn ammonia in wastewater into green energy and use artificial intelligence (AI) and unexploited telecoms cables to detect leaks in the water network.

A £40 million innovation competition – the Water Breakthrough Challenge – launches today (Thursday 6 May) to spark ambitious innovation and new ways of working in the water sector – and companies in the fintech and retail banking space are being urged to get involved.   

The Water Breakthrough Challenge aims to equip the water sector to address the big challenges it faces, driving far-reaching and long-lasting benefits to customers, society and the environment across England and Wales now and into the future. It encourages collaborative entries from other sectors and worldwide partners, and aims to fund initiatives which water companies would otherwise have been unable to invest in or explore.

Entries must demonstrate how solutions help the water sector deliver for customers, society and the environment, such as by achieving net zero, protecting natural ecosystems and reducing the impact of extreme weather, or using open data to improve customer service. 

The winners of Ofwat’s first innovation competition – the £2m Innovation in Water Challenge – were revealed last month and include green initiatives such as planting and restoring seagrass meadows on the Essex and Suffolk coastlines, a scheme to turn ammonia in wastewater into green hydrogen gas, and software that can monitor the degradation of wildlife habitats.

Other ideas focus on the prevention of leaks in the water network through the use of AI, CCTV, and unexploited optical fiber strands in telecoms networks, as well as using behavioral science to better support vulnerable customers.

John Russell, Senior Director at Ofwat, said: “Our innovation competitions are now in full swing and we are beginning to see a wave of innovation across the sector. Within the Breakthrough Challenge we are looking forward to seeing continued collaboration outside of the sector from a wide range of industries, and even more cutting-edge projects that tackle the greatest challenges facing our sector, and society as a whole.” 

The Water Breakthrough Challenge is funded through Ofwat’s £200 million Innovation Fund, as part of the regulator’s goal to drive innovation and collaboration in the water sector, supporting it to meet the needs of customers, society and the environment in the years to come. It is being delivered by Ofwat and Nesta Challenges, supported by Arup. 

Arlene Goode, Associate at Arup, added: “This is a great opportunity for water companies and project partners. We’re excited to see the transformative projects which can move the water sector towards meeting its long-term ambitions.”

Entries must be submitted by water companies in England and Wales, but they can enter in partnership with organizations outside the water sector – including in the fintech and retail banking space.

Chris Gorst, Director of Challenges at Nesta Challenges, commented: “The winning innovations from the first Innovation in Water Challenge show that the sector is ready to address the major challenges facing the industry, and society. A new approach is needed, including new ways of working and greater collaboration, but we have already seen the sector can rise to the challenge and deliver ground-breaking initiatives that change the status quo. We are very excited to see the trailblazing projects that the water companies, and their partners, put forward for the latest competition.” 

After a first assessment period following entries received by 3 June, selected entrants will be invited to submit more details from 28 June, with the winners announced in September. Winning entries will receive between £1 million and £10 million to support their initiatives.  

Source: https://www.fintechnews.org/fintech-and-retail-banking-firms-urged-to-get-involved-in-water-breakthrough-challenge/
