AI

AlphaFold: Using AI for scientific discovery

The second method optimised scores through gradient descent—a mathematical technique commonly used in machine learning for making small, incremental improvements—which resulted in highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.
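The optimisation loop described above can be sketched in a few lines. This is a toy illustration only: AlphaFold's actual score is produced by a neural network over predicted inter-residue distances, whereas here a simple quadratic stands in for it, and `toy_score`, `optimise_chain` and all parameters are illustrative names, not the paper's code.

```python
import numpy as np

def toy_score(angles, target):
    # Lower is better: a stand-in "score" measuring how far the current
    # torsion angles are from an (unknown, here synthetic) optimum.
    return np.sum((angles - target) ** 2)

def toy_grad(angles, target):
    # Analytic gradient of the quadratic stand-in score.
    return 2 * (angles - target)

def optimise_chain(n_residues, steps=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Two torsion angles (phi/psi) per residue, for the WHOLE chain at once:
    # no fragments to fold separately and assemble afterwards.
    target = rng.uniform(-np.pi, np.pi, size=2 * n_residues)
    angles = rng.uniform(-np.pi, np.pi, size=2 * n_residues)
    for _ in range(steps):
        angles -= lr * toy_grad(angles, target)  # small incremental improvement
    return toy_score(angles, target)

final = optimise_chain(n_residues=50)
```

Because the whole chain is optimised jointly, each gradient step improves the structure globally rather than patching locally folded pieces together.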

What happens next?

The success of our first foray into protein folding is indicative of how machine learning systems can integrate diverse sources of information to help scientists come up with creative solutions to complex problems at speed. Just as we’ve seen how AI can help people master complex games through systems like AlphaGo and AlphaZero, we similarly hope that one day, AI breakthroughs will help us master fundamental scientific problems, too.

It’s exciting to see these early signs of progress in protein folding, demonstrating the utility of AI for scientific discovery. Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous. With a dedicated team focused on delving into how machine learning can advance the world of science, we’re looking forward to seeing the many ways our technology can make a difference.


Until we have published a paper on this work, please cite it as:

De novo structure prediction with deep-learning based scoring

R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. F. G. Green, C. Qin, A. Zidek, A. Nelson, A. Bridgland, H. Penedones, S. Petersen, K. Simonyan, S. Crossan, D. T. Jones, D. Silver, K. Kavukcuoglu, D. Hassabis, A. W. Senior

In Thirteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstracts), 1–4 December 2018.


This work was done in collaboration with Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Zidek, Sandy Nelson, Alex Bridgland, Hugo Penedones, Stig Petersen, Karen Simonyan, Steve Crossan, David Jones, David Silver, Koray Kavukcuoglu, Demis Hassabis, and Andrew Senior.

Source: https://deepmind.com/blog/article/alphafold-casp13

Artificial Intelligence

CMU researchers show potential of privacy-preserving activity tracking using radar


Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room last got cleaned or whether the bins already got taken out?

Or — for an altogether healthier use-case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin cycles on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras plugged in around your home.

Another fascinating piece of research from Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is of course a horrible privacy risk. Which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras”, as they put it, datasets for training AI models to recognize different human activities as RF noise are not readily available (as visual data for training other types of AI models is).

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.

The results can be seen in this video — where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats. Purely from its ability to interpret the mmWave signal the movements generate — and purely having been trained on public video data. 
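In heavily simplified form, the cross-domain idea can be sketched as: render synthetic Doppler-style signatures for each activity, then train a classifier purely on those signatures. Everything below — the velocity centres, the nearest-centroid “model”, the function names — is a hypothetical stand-in for the paper’s actual video-to-radar pipeline and deep model.

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_doppler(activity, n_frames=64, n_bins=32):
    """Fake a Doppler velocity-histogram sequence for an activity.

    A real pipeline would derive radial velocities from 3D joint tracks
    extracted from public video; here each activity just gets a
    characteristic oscillating velocity bin plus noise.
    """
    centre = {"clapping": 20, "waving": 12, "squats": 5}[activity]
    sig = np.zeros((n_frames, n_bins))
    for t in range(n_frames):
        bin_idx = int(centre + 4 * np.sin(t / 4.0))
        sig[t, bin_idx % n_bins] = 1.0
    return sig + 0.05 * rng.standard_normal(sig.shape)

activities = ["clapping", "waving", "squats"]

# "Train" on synthetic data only: average signature per class
# (a nearest-centroid stand-in for the paper's learned model).
centroids = {a: np.mean([synth_doppler(a) for _ in range(20)], axis=0)
             for a in activities}

def classify(signal):
    # Predict the class whose average signature is closest.
    return min(activities, key=lambda a: np.linalg.norm(signal - centroids[a]))

pred = classify(synth_doppler("waving"))
```

The point mirrored here is the paper’s: the classifier never sees real radar captures at training time, only signatures synthesized from motion descriptions, yet can still label an unseen noisy signal.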

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such as human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course. But things are already moving on that front: Google has been dipping its toe in via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sensor to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use-cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in building to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worry-some than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the B2B market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance that could be a better fit for privacy-risking consumer connected devices such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. Albeit it’s hard to argue that the data radar generates would be anywhere near as sensitive as equivalent visual data, were it to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well… ”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like Youtube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”
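Harrison’s parallelism point is easy to sketch: because each video is processed independently, the synthesis stage scales almost linearly with the number of workers, so wall-clock time is roughly (2 hours of processing per hour of footage) divided by worker count. A minimal illustration — `synthesise` and the video IDs are hypothetical placeholders, not the lab’s pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def synthesise(video_id):
    # Stand-in for the real per-video work: download footage,
    # extract motion, render a synthetic Doppler signature.
    return f"radar_{video_id}"

def process_corpus(video_ids, workers=8):
    # Videos are independent, so workers never need to coordinate;
    # the same map scales out to a cloud fleet processing 100 at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(synthesise, video_ids))

results = process_corpus([f"vid{i:03d}" for i in range(100)])
```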

And while RF signal does reflect, and do so to different degrees off of different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low end CPUs (no deep learning or anything).”

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.

Source: https://techcrunch.com/2021/05/11/cmu-researchers-show-potential-of-privacy-preserving-activity-tracking-using-radar/

Artificial Intelligence

Fintech and retail banking firms urged to get involved in Water Breakthrough Challenge

- The £40 million Water Breakthrough Challenge aims to spark ambitious innovation and enable new approaches and ways of working to address the big challenges facing the water sector.

- Entries are now open until Thursday 3 June 2021, with successful partnerships winning up to £10 million to develop and implement their initiatives.

- The Breakthrough Challenge is run by Ofwat and Nesta Challenges, supported by Arup, and is the second in a series of competitions funded through Ofwat’s Innovation Fund.

- The winners of Ofwat’s first competition – the Innovation in Water Challenge – include projects that turn ammonia in wastewater into green energy and use artificial intelligence (AI) and unexploited telecoms cables to detect leaks in the water network.

A £40 million innovation competition – the Water Breakthrough Challenge – launches today (Thursday 6 May) to spark ambitious innovation and new ways of working in the water sector – and companies in the fintech and retail banking space are being urged to get involved.   

The Water Breakthrough Challenge aims to equip the water sector to address the big challenges facing the sector, driving far-reaching and long-lasting benefits to customers, society and the environment across England and Wales now and into the future.  It encourages collaborative entries from other sectors and worldwide partners, and aims to fund initiatives which water companies would otherwise have been unable to invest in or explore. 

Entries must demonstrate how solutions help the water sector deliver for customers, society and the environment, such as by achieving net zero, protecting natural ecosystems and reducing the impact of extreme weather, or using open data to improve customer service. 

The winners of Ofwat’s first innovation competition – the £2m Innovation in Water Challenge – were revealed last month and include green initiatives such as planting and restoring seagrass meadows on the Essex and Suffolk coastlines, a scheme to turn ammonia in wastewater into green hydrogen gas, and software that can monitor the degradation of wildlife habitats.

Other ideas focus on the prevention of leaks in the water network through the use of AI, CCTV, and unexploited optical fiber strands in telecoms networks, as well as using behavioral science to better support vulnerable customers.

John Russell, Senior Director at Ofwat, said: “Our innovation competitions are now in full swing and we are beginning to see a wave of innovation across the sector. Within the Breakthrough Challenge we are looking forward to seeing continued collaboration outside of the sector from a wide range of industries, and even more cutting-edge projects that tackle the greatest challenges facing our sector, and society as a whole.” 

The Water Breakthrough Challenge is funded through Ofwat’s £200 million Innovation Fund, as part of the regulator’s goal to drive innovation and collaboration in the water sector, supporting it to meet the needs of customers, society and the environment in the years to come. It is being delivered by Ofwat and Nesta Challenges, supported by Arup. 

Arlene Goode, Associate at Arup, added: “This is a great opportunity for water companies and project partners. We’re excited to see the transformative projects which can move the water sector towards meeting its long-term ambitions.”

Entries must be submitted by water companies in England and Wales, but they can enter in partnership with organizations outside the water sector – including in the fintech and retail banking space.

Chris Gorst, Director of Challenges at Nesta Challenges, commented: “The winning innovations from the first Innovation in Water Challenge show that the sector is ready to address the major challenges facing the industry, and society. A new approach is needed, including new ways of working and greater collaboration, but we have already seen the sector can rise to the challenge and deliver ground-breaking initiatives that change the status quo. We are very excited to see the trailblazing projects that the water companies, and their partners, put forward for the latest competition.” 

After a first assessment period following entries received by 3 June, selected entrants will be invited to submit more details from 28 June, with the winners announced in September. Winning entries will receive between £1 million and £10 million to support their initiatives.  

Source: https://www.fintechnews.org/fintech-and-retail-banking-firms-urged-to-get-involved-in-water-breakthrough-challenge/
