

Confidential Computing Is Coming To AI Autonomous Vehicles 



The use of confidential computing for the AI self-driving car fleet cloud could make it more difficult for hackers to launch a cyberattack. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

Imagine a scenario involving a sly bit of spycraft.

A friend of yours wants to write down a secret and pass along the note to you. There is dire concern that an undesirable interloper might intercept the note. As such, the secret is first encrypted before being written down, and thus will be inscrutable to anyone who intercepts it. All told, the message will look scrambled and seem like gobbledygook.

You have the password or key needed to decrypt the message. 

After the note has passed through many hands, it finally reaches you. The fact that many others saw and ostensibly were able to read the note is of no consequence. They could make neither heads nor tails of what it said.

Upon receiving the encrypted message, you decrypt it. Voilà, you can now see what it says. The message has been appropriately received and deciphered. The world is saved and everyone can rejoice.   

But wait a second: when you decrypted the message, you wrote it down, and meanwhile, a dastardly spy was looking over your shoulder. The snoop has now seen the entire message. The jig is up. Sadly, after the note went from hand to hand and was protected that entire time, at this last moment the secret was revealed.

Maybe worse still, you are the one that revealed it (i.e., you, the intended receiver).

What went wrong? 

Some might refer to this as the last-mile problem, or perhaps more aptly coin it as the last-step problem in this instance. 

You see, the catchphrases of last-mile or last-step are often used when describing a situation that has a kind of gap or arduous challenge at the very end of a task or activity. For example, in the telecommunications industry, there is the notion that the hardest and most costly part of providing high-speed networking to homes is the so-called last mile from the main trunk to the actual home of the consumer. 

Envision a cable that runs down the middle of a neighborhood street; the last mile consists of all the offshoot branches that need to extend from that centerline to each specific domicile. The number of such branches is high. It is one thing to simply lay down the central cable, and quite another, hugely costly effort to then string lines out to each house. Even though the actual distance from the center to each house is not a mile, the notion is that you've reached the proverbial last mile or last step in the process.

This last-mile or last-step can be the weak link in a long chain of efforts.    

Three Major Ways Data is Protected 

In the cybersecurity field, there are three major ways that data, such as the message on the note, is usually protected or secured:

  • Data at rest (standing still data)
  • Data in transit (flowing data)
  • Data in use (when being read or utilized)   
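To make the three states concrete, here is a toy sketch in Python. The XOR scrambler is purely for illustration and is not real cryptography; the point is simply that the data stays encrypted while stored and while traveling, and is only decrypted at the moment of use.

```python
import base64

# Toy XOR "cipher" for illustration only -- NOT real cryptography.
KEY = b"secret-key"

def toy_encrypt(plaintext: bytes) -> bytes:
    xored = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(plaintext))
    return base64.b64encode(xored)

def toy_decrypt(ciphertext: bytes) -> bytes:
    xored = base64.b64decode(ciphertext)
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(xored))

# Data at rest: stored encrypted on disk or in a database.
stored = toy_encrypt(b"meet at midnight")

# Data in transit: the same ciphertext travels over the wire.
received = stored

# Data in use: decrypted only at the last step, when actually needed.
message = toy_decrypt(received)
print(message)  # b'meet at midnight'
```

Notice that the "data in use" step is the only place the plaintext exists, which is exactly the exposure that confidential computing aims to narrow.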

Your friend’s note to you was in transit when it was being passed along from person to person on its way to you. At some point, perhaps the note was sitting on someone’s desk for a while, waiting for them to pick it up and continue the journey of the note to you. That would be data at rest.  

When you opted to decrypt the note and take a look at what it said, the data was considered in use at that time. Per the saga, this is when things went awry and a spy saw the message. Up until that moment, the message was relatively secret and secure. The last mile or last step exposed it.   

I bring this up to highlight a hot new trend known as confidential computing.   

We will use the tale of the encrypted note to help explore the nature of confidential computing. Admittedly, the parable per se is not precisely on-target with the topic, but you will soon see that it does offer a semblance of insightful parallels.   

Confidential computing is usually associated with making use of cloud computing. 

Cloud computing is the now familiar notion of using unseen computing resources that are available via remote access. Referring to this as cloud computing is an easy way to envision the matter and has been an extremely catchy way to denote various computers as being “in the cloud” and available for use.   

When your data is placed into a cloud-based computer, you likely want to feel comfortable that the data is well-protected. If the data is sitting in a database, perhaps a fiendish hacker might try to access the data. You want to prevent the cybercrook from being able to see your precious data, and ergo there are typically cybersecurity locks that seek to keep the bad hackers out of the database. 

Suppose though the evildoer cracks through the locks. Aha, by encrypting the data, which is sitting at rest, the ability to do anything untoward with the data is greatly lessened. Though the hacker might be able to see the data, it is scrambled and generally unusable.   

Imagine that there is a need to share the data and thus copy it to another database on a different computer. While the data is in this transit from one database to another, it is potentially vulnerable to prying eyes. If the data is encrypted while in transit, the interloper will presumably not gain much since the data is inscrutable. 

We now are heading to the last mile or last step. 

Assume that at some point the data will be needed for making calculations. The database with the encrypted data is accessed and the data while still encrypted is copied over to a computer that is doing the computations. All’s good so far.   

Upon the encrypted data being brought into the CPU (Central Processing Unit) of the computer, at this last mile or last step, it is now necessary to decrypt it; otherwise, the data won't be of much use for making the desired calculations.

Here is the potential loophole in all of this series of carefully encrypted steps. Now that the data is momentarily decrypted for use while inside the CPU, it becomes open for a wrongdoer to peek at it.   

Your first thought might be that the idea of a cybercriminal hacking all the way into the inner guts of the CPU while it is processing seems nearly unimaginable. 

Can they really do that? The answer is yes, it is possible.

That being said, it is generally a quite difficult trick to pull off. Numerous system protections would have to be overcome. Nonetheless, a very determined and crafty cyber hacker could devise such a scheme (especially when you include nation-state cybersecurity operations).

This last-mile cybersecurity concern is being partially mitigated by the use of confidential computing. 

Within the CPU of a computer arranged for confidential computing, there is a special, highly secure enclave. This is usually implemented via a hardware-based environment that governs the execution of tasks running on the CPU. In industry parlance, this is known as a Trusted Execution Environment (TEE). Keys or passwords are kept under added protection and used only when the last step occurs.

The enclave tries to hide from any other resources what is going on inside it. Remember how you inadvertently allowed a spy to look over your shoulder? That is the type of intrusion that the enclave is constructed, fortress-like, to keep at bay.
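As a rough mental model (not an actual TEE API; real enclaves such as those in Intel SGX or AMD SEV are enforced by hardware, and all names below are hypothetical), you can picture the enclave as an object that seals the key inside, decrypts only within its own boundary, and refuses to operate once an intrusion is flagged:

```python
def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Trivial XOR scrambler, for illustration only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

class ToyEnclave:
    """A toy model of a TEE enclave: the key and plaintext never leave it."""

    def __init__(self, key: bytes):
        self._key = key          # sealed inside; no accessor is exposed
        self._tampered = False

    def _decrypt(self, ciphertext: bytes) -> bytes:
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(ciphertext))

    def compute_sum(self, encrypted_numbers):
        # Decryption happens only inside this method (the "enclave").
        if self._tampered:
            raise RuntimeError("intrusion detected: operation canceled")
        return sum(int(self._decrypt(c)) for c in encrypted_numbers)

    def signal_intrusion(self):
        # Like spotting the spy over your shoulder: stop everything.
        self._tampered = True

key = b"sealed-key"
enclave = ToyEnclave(key)
encrypted = [toy_encrypt(str(n).encode(), key) for n in (2, 3, 5)]
total = enclave.compute_sum(encrypted)  # plaintext exists only inside
print(total)  # 10
```

The caller receives only the computed result; the decrypted values and the key are never handed back out, which mirrors (in miniature) what the hardware enclave enforces.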

Here’s why this is especially relevant to cloud computing. 

Suppose the cloud computer being used has somehow gotten malware on it. Without a provision for confidential computing, the risks of the malware peeking at the CPU and catching the data in an unencrypted format are heightened. Likewise, the cloud computer's operating system (OS) could potentially take a peek (and perhaps leak the data elsewhere), and even (sadly) there is a possibility that employees of the cloud provider might have access to take a look.

Using the TEE and the enclave, potential interlopers cannot see what is going on inside the CPU during the computational efforts. Furthermore, there is typically a feature of confidential computing whereby, upon detecting an intrusion, any operations are canceled and an alert is raised.

This could be likened to you noticing that a person was spying over your shoulder. You would stop the decryption of the secret message. Of course, you might have already started to decrypt it, in which case maybe the interloper saw some of it, but at least you would curtail your activities at that juncture. Plus, you likely would have called for the cops to come and bust the reprehensible spy.   

Most of the major cloud providers have made available various flavors of confidential computing, including the biggies such as IBM Cloud, Amazon AWS, Microsoft Azure, Google Cloud, Oracle Cloud, and others. The makers of CPUs are also integral to the confidential computing architecture and thus companies such as Intel, AMD, and the like are involved.   

As eloquently stated in a paper by IBM Fellow and CTO for Cloud Security, Nataraj Nagaratnam: “As companies rely more and more on public and hybrid cloud services, data privacy in the cloud is imperative. The primary goal of confidential computing is to provide greater assurance to companies that their data in the cloud is protected and confidential, and to encourage them to move more of their sensitive data and computing workloads to public cloud services.”   

There is a well-known group called the Confidential Computing Consortium (CCC) that has banded together numerous cloud providers, hardware vendors, and software development outfits to focus on confidential computing. Per the posted CCC remarks of Stephen Walli, Governing Board Chair: “The Confidential Computing Consortium is a community focused on open source licensed projects securing data in use and accelerating the adoption of confidential computing through open collaboration.”   

For those readers who are adept at programming, you likely know that your encrypted data sitting in a database is usually decrypted once you bring it into the internal memory of the computer system. This is done so that the CPU can readily use the data for computations. In the confidential computing arrangement, the data is not decrypted until the final moment, the last mile or last step, of being placed into the CPU for use. Therefore, even while sitting in internal memory, the data is encrypted and less vulnerable to cyberattack.

One additional quick point is that this scheme for confidential computing does not guarantee that no one can ever hack it. The cybersecurity field is an ongoing game of cat and mouse. Each new protection that is devised will be deviously picked apart until some unforeseen hole or gotcha is discovered. The hole will usually get plugged. Meanwhile, the gambit continues as the cybercrooks try to find a means to undo or overcome the plug, or look for other ways to break in.

This is a never-ending cycle.   

In that sense, the confidential computing approach is another added layer of cybersecurity. The more layers you have, the harder it becomes for someone to crack through. At your home, you might have a gated fence around your property (a layer of protection), locks on your doors and windows (another layer of protection), and a motion detector inside the house (yet an additional layer). The belief is that by placing numerous hurdles in the way of a robber, they will be rebuffed in their intrusion efforts.

Having those added layers is not cost-free. For each layer, you need to ascertain the cost of the added protection versus the risks and consequences of someone breaking in. This is of course the same for confidential computing. Whether you require confidential computing is contingent on the type of computing activities you are undertaking, the magnitude of cybersecurity you are desirous of achieving, the risks and adverse consequences if a cyber breach occurs, etc. 

Your car might also have various layers of security protection. There are locks on the car doors. The windows are made of materials that are hard to smash. Any motion immediately next to the vehicle might be detected and cause the horn to sound. And so on. 

Speaking of cars, the future of cars consists of AI-based true self-driving cars.   

Allow me to briefly elaborate on this point and then tie things to the topic of confidential computing.   

There isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that is worth pondering: Will confidential computing be useful for the advent of AI systems all told, and particularly for the advent of AI-based true self-driving cars?   

Before jumping into the details, I’d like to further clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here:   

Why this is a moonshot effort, see my explanation here: 

For more about the levels as a type of Richter scale, see my discussion here:   

For the argument about bifurcating the levels, see my explanation here:   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here:   

To be wary of fake news about self-driving cars, see my tips here: 

The ethical implications of AI driving systems are significant, see my indication here:   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: 

AI And Self-Driving Cars And Confidential Computing   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

One overarching point worthy of particular attention is that any AI system, and especially one running in the cloud, should potentially be making use of confidential computing.

This is regrettably not a top-of-mind consideration for many AI developers. 

The typical focus for AI software engineers is primarily on the underlying AI capabilities such as employing advanced uses of Machine Learning (ML) and Deep Learning (DL). Once the AI system is ready to be fielded, the AI builders tend to be less attentive to what happens when the program is placed into operational use. The assumption is that whatever existing cybersecurity is already available in the execution environment will probably be sufficient. 

The average AI developer usually wants to get back to their AI bag-of-tricks and continue tweaking the AI-related elements of the system, or perhaps move onward to some other new development that requires their honed skills at crafting AI systems. Concerns about whether the prevailing execution environment for their budding AI system is highly secure do not explicitly enter into their mindset, nor are they found in their usual toolset.

Some will exhort: hey, I'm not a darned cybersecurity expert, I'm an AI developer (that line is a heartfelt homage to the classic Star Trek protest: I'm a doctor, not an engineer).

The thing is, the best AI systems can be readily brought to their knees if the cybersecurity is not topnotch and using all available layers of protection. Up until recently, many AI systems were not necessarily aimed at domains that entailed potentially high risks and pronounced adverse consequences if the AI was undermined at execution.   

Nowadays, with AI becoming pervasive across all manner of applications, the idea of treating AI systems as merely experimental or prototypes is long gone. 

Simply stated, any AI developer worth their salt should be giving due consideration to how their AI systems will be deployed, including what kinds of cyberattacks might be launched to undercut the AI system's processing. Since the AI developer ought to know where the especially vulnerable weaknesses lie in their AI while it is executing, they should take a close look at confidential computing as a potential countermeasure and gauge whether this added layer of security is warranted.

I’m not saying that it will always be a necessity, just that with AI systems of a sensitive nature running in the cloud, it is prudent and nearly obligatory to consider which of the numerous potential cybersecurity precautions should be undertaken. 

Hopefully, that will be a useful call to arms for those AI developers that haven’t yet taken into account the utility of confidential computing. And perhaps a startling wake-up blaring of trumpets for some.   

Moving beyond the overall notion of all types of AI systems that are running in the cloud, let’s next take a gander at the use of the cloud for the specific advent of AI-based true self-driving cars. The most commonly anticipated use of the cloud for self-driving cars encompasses the use of OTA (Over-The-Air) electronic communications capabilities.   

Via OTA, various patches and updates stored in the cloud for a fleet of self-driving cars can be downloaded into each autonomous vehicle and accordingly installed, doing so automatically. This is handy to be able to remotely push out new features for the AI driving system or possibly provide bug fixes, plus avoiding having to bring the vehicles to a dealer site or some repair shop merely to do needed software updates.   

The OTA will also enable the ease of uploading data from the self-driving cars into the fleet-provided cloud. Self-driving cars will have a sensor suite that includes video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and other such devices. The data they collect can be usefully analyzed by gathering it across an entire fleet of self-driving cars and aggregating it in the cloud.

So, you might be wondering, what does this have to do with confidential computing?   

Think of it this way: if there are programs and data in the cloud that are going to potentially be downloaded and installed into the AI driving systems, this becomes a handy and sneaky path for a cyber attacker to get their malware into the self-driving cars. The cybercrook merely plants the evil-doing elements into the cloud and then patiently waits until the OTA mechanism does the rest of the work for the wrongdoer by broadcasting it out into the fleet.

Whereas most people tend to be thinking about how an AI driving system might get corrupted or undermined by someone physically accessing the autonomous vehicle, the likely greater threat comes from using the OTA to do so. The innocent beauty of the OTA is that it is a trusted avenue to directly get something inserted into the AI driving system, and this will happen across an entire fleet of self-driving cars. Imagine that there were hundreds, maybe thousands, perhaps hundreds of thousands of self-driving cars and all of them were using an OTA to get updates from a fleet cloud.   

Okay, so we might want to put some devoted attention to what is happening in the fleet cloud.   

The more cybersecurity we put there, the lesser the chance that the OTA will become a specter of doom. It could be that the judicious use of confidential computing for the fleet cloud will curtail, or at least make much harder, the possibility of launching a cyberattack that might otherwise get carried into the AI driving systems of the fleet.
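One concrete precaution on the fleet-cloud side is to have each vehicle refuse any OTA payload whose authenticity cannot be verified. The sketch below is simplified and hypothetical: production OTA systems typically use asymmetric signatures plus secure boot rather than a shared symmetric key, and the key and payload names here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical shared key, provisioned into each vehicle at manufacture.
FLEET_KEY = b"per-fleet-provisioned-secret"

def sign_update(payload: bytes) -> bytes:
    """Fleet cloud signs the OTA payload before broadcasting it."""
    return hmac.new(FLEET_KEY, payload, hashlib.sha256).digest()

def install_if_authentic(payload: bytes, signature: bytes) -> bool:
    """Vehicle verifies the signature before installing anything."""
    expected = hmac.new(FLEET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject tampered or unsigned updates
    # ... proceed with installation ...
    return True

update = b"ai-driving-system-patch-v42"
sig = sign_update(update)
print(install_if_authentic(update, sig))            # True
print(install_if_authentic(update + b"evil", sig))  # False
```

The constant-time `hmac.compare_digest` comparison matters here: a naive `==` check could leak timing information that helps an attacker forge signatures byte by byte.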

For more details about ODDs, see my indication at this link here: 

On the topic of off-road self-driving cars, here’s my details elicitation: 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: 


Another potential use of confidential computing would be for the execution or processing that takes place inside self-driving cars.   

When the AI driving system is being executed on the onboard computer processors, this execution obviously needs to be highly secure too. The tough tradeoff is that confidential computing tends to incur a performance hit on the processors and thus presents a somewhat complicated consideration when dealing with real-time systems. Keep in mind that real-time processing is controlling the actions of the self-driving car. Any substantive delay in processing times can be problematic. 
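To see why that overhead matters in a control loop, consider this toy comparison, again using a trivial XOR stand-in for real decryption. The protected pipeline produces the same answer as the plaintext one but pays a per-use decryption cost on every sensor reading, which accumulates in a hard real-time loop.

```python
import time

KEY = b"k3y"

def toy_decrypt(ciphertext: bytes) -> bytes:
    # Trivial XOR scrambler, for illustration only -- not real cryptography.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(ciphertext))

def toy_encrypt(plaintext: bytes) -> bytes:
    return toy_decrypt(plaintext)  # XOR is its own inverse

readings = [str(n).encode() for n in range(1000)]   # mock sensor readings
encrypted = [toy_encrypt(r) for r in readings]

# Plaintext pipeline: just parse each reading.
t0 = time.perf_counter()
plain_total = sum(int(r) for r in readings)
t_plain = time.perf_counter() - t0

# Protected pipeline: decrypt each reading at the moment of use.
t0 = time.perf_counter()
secure_total = sum(int(toy_decrypt(c)) for c in encrypted)
t_secure = time.perf_counter() - t0

assert plain_total == secure_total  # same answer, extra per-use work
print(f"plain: {t_plain:.6f}s, decrypt-per-use: {t_secure:.6f}s")
```

In a cloud application, that gap is usually tolerable; in a vehicle deciding when to brake, every added microsecond per sensor reading has to be budgeted for.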

Self-driving cars are real-time machines that also just so happen to involve life-or-death matters.   

You typically do not have that same life-or-death concern for an everyday cloud-based application. If the cloud processing has any modicum of delay, this might be of little consequence. In addition, because a cloud-based application resides in the cloud, you can readily toss more processors at the application or reallocate to using faster processors available in the cloud.   

For a self-driving car, the processors installed into the autonomous vehicle are generally not as readily switched out, since that can be a very physical effort and logistically costly to undertake. Automakers and self-driving tech firms are pretty much stuck once they’ve decided which processors to put into their self-driving cars. They’ve got to hope that the choice will last a while.   

Overall, a handy insight that arises from confidential computing is that we need to be on our toes for any and all kinds of cyberattacks. What you don’t want to do is establish a series of tightly secure steps and then neglect to consider what will happen in that last mile or last step. 

Make sure that what you start, finishes properly and securely. 

Those are wise words to live by.  

Copyright 2021 Dr. Lance Eliot 

PlatoAi. Web3 Reimagined. Data Intelligence Amplified.






Praemium’s machine learning takes platform accuracy to a new level



Praemium has expanded its machine learning and artificial intelligence capabilities to benefit users of its non-custodial Virtual Managed Account solution, by reducing human errors in data entry and improving data integrity.

Using machine learning across a range of data sets, Praemium has been able to identify transactions that may have been incorrectly entered or categorised by administrators.

Praemium’s Chief Technology Officer, Adam Pointon said, “When managing large volumes of data, human errors happen. Through machine learning we have been able to identify which transactions may be incorrect and predict the correct classification. For example, a buy transaction might be incorrectly entered as a withdrawal, or income as a deposit. These errors could provide incorrect portfolio performance information or have tax implications for investors.”

“This functionality allows for errors to be detected at scale and rectified quickly and is already being used successfully with several of Praemium’s institutional clients,” Pointon added.

The functionality expands upon Praemium’s existing machine learning capability Insights, launched in 2019, that is able to provide highly accurate predictive analytics that a client is demonstrating behaviours that indicate they are needing advice.

“Praemium’s non-custodial solution is recognised as the market-leader and we continue to enhance our technology with these exciting innovations.” Pointon continues.

Recent research undertaken by Praemium with Investment Trends showed that almost 60% of advisers are managing non-custodial client assets off-platform. Typically, these assets are managed manually via spreadsheets, consuming two extra hours of adviser resource per client.

Praemium’s Chief Commercial Officer Mat Walker also commented, “Praemium’s non-custodial solution has $140bn in assets under administration and offers advisers and wealth managers the benefit of managing both custodial and non-custodial assets on a single platform. Our research indicates that advisers are feeling the burden of administering these assets and our technology not only does this efficiently but also more accurately. We also offer the option to remove the administration burden completely by outsourcing to Praemium’s Administration Service who also utilise this functionality for large volume data processing.”




Will AI Developments Help Open Banking Take Off?



Artificial intelligence has become a gamechanger in the banking industry in recent years. The global market for AI in Fintech was valued at nearly $8 billion last year. It is projected to be worth nearly $27 billion by 2026.

AI is becoming an integral part of the banking industry for a number of reasons. One is that it is driving process automation, and it is starting to show potential for even more complicated automation challenges.

AI has made open banking possible. New advances in AI could help open banking become even more popular in the near future.

AI Drives the Future of Open Banking

Open banking is the technical process that allows financial providers to dip in and see the banking history and activity of a customer before they apply. It has been made possible through new developments in AI technology.

The process was recently introduced in the UK and many suggest that it could be the future of underwriting and eligibility for products such as credit cards, loans and mortgages. Antonio Tinto wrote an article about the evolution of open banking in the context of AI in fintech in his LinkedIn post Open Banking and AI – The Rise of Cognitive Banking.

Customers must agree for lenders to see their transactional history and financial information during the application process – but this gives lenders a better understanding of the customer’s spending as a borrower, with machine learning algorithms able to highlight any gambling or debt problems.

For lenders this offers a very insightful look into a customer’s spending habits and should provide much better decisions in terms of loan approvals, credit limits, loan amounts and more.

Budget planning programs also fall under the umbrella of open banking. These machine learning programs compile data from multiple sources, such as credit cards and bank accounts, to provide a full picture of spending habits.
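As a toy illustration of the kind of signal extraction described above (merchant names, thresholds and flags are all invented for this sketch; real open-banking analysis works over standardised API data and far richer models):

```python
# Hypothetical sketch: summarise risk signals a lender might look for in a
# customer's open-banking transaction history, such as recurring gambling
# spend or the account dipping into overdraft.

GAMBLING_MERCHANTS = {"betmax", "luckyspin"}  # invented merchant codes

def assess_transactions(transactions, opening_balance=0.0):
    """Walk the history once, tracking gambling spend and the lowest balance."""
    balance = opening_balance
    min_balance = opening_balance
    gambling_spend = 0.0
    for tx in transactions:
        balance += tx["amount"]
        min_balance = min(min_balance, balance)
        if tx["merchant"] in GAMBLING_MERCHANTS and tx["amount"] < 0:
            gambling_spend += -tx["amount"]
    return {
        "gambling_spend": gambling_spend,
        "min_balance": min_balance,
        "flag_gambling": gambling_spend > 100.0,   # arbitrary threshold
        "flag_overdraft": min_balance < 0.0,
    }

history = [
    {"merchant": "acme-grocer", "amount": -80.0},
    {"merchant": "betmax", "amount": -150.0},
    {"merchant": "payroll", "amount": 2000.0},
]
print(assess_transactions(history, opening_balance=200.0))
```

In this example the £150 bet pushes the running balance below zero, so both the gambling and overdraft flags are raised – exactly the kind of pattern the article says basic credit checks miss.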

What Are the Benefits of Open Banking with AI?

Open banking gives lenders a better picture of a spender’s habits, allowing them to make an informed decision regarding potential loan and credit applications. Lenders use complex data-driven algorithms to make these analyses.

Currently, lenders rely heavily on customer credit scoring and other metrics including income checks and affordability checks, but for the average personal loan or credit card, there is no real delving into someone’s banking activity or machine learning analysis.

Open banking, by contrast, allows lenders to find concrete evidence of recurring gambling issues, multiple outstanding loans or large overdrafts – problems that typically go unnoticed in basic checks.

Beyond this, lenders and credit providers can use these findings to improve their underwriting and build models to determine eligibility patterns – and thus approve better customers and increase their repayment rates.
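As a minimal sketch of such an eligibility model (the features and weights below are invented for illustration, not fitted to any real data), a logistic score over open-banking features might look like:

```python
import math

# Hypothetical sketch: a tiny logistic scoring model of the kind a lender
# might fit on open-banking features to predict repayment likelihood.
# Weights and bias are made up for illustration only.

WEIGHTS = {
    "income_monthly": 0.002,    # higher income raises the score
    "gambling_spend": -0.01,    # gambling spend lowers it
    "months_overdrawn": -0.8,   # time spent overdrawn lowers it sharply
}
BIAS = -1.0

def approval_probability(features):
    """Logistic score: sigmoid of a weighted sum of the applicant's features."""
    z = BIAS + sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"income_monthly": 3000.0, "gambling_spend": 50.0, "months_overdrawn": 1}
p = approval_probability(applicant)
print(f"approval probability: {p:.2f}")
```

A real underwriting model would be trained on repayment outcomes across many applicants, but the shape is the same: open-banking signals go in, a repayment probability comes out, and heavier gambling spend or overdraft use pushes the score down.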

What Are the Risks Associated with Open Banking?

Risks associated with open banking tend to centre on privacy and data protection. Financial data from various sources is merged and analyzed alongside other datasets to build predictive algorithms that can forecast future spending habits.

This requires access to private financial data: with the customer’s consent, lenders can see every transaction taking place, and what they find could lead them to decline a loan.

The Difference Between Open Banking and Credit Scoring

Open banking can potentially offer a more accurate reflection of a person’s financial situation, and it can be combined with existing credit scores to make lending decisions even stronger.

As open banking increases in popularity, more types of loans will be able to use it to give lenders clear insights into borrowers’ financial habits. Mortgages and other loan products have the potential to operate this way as open banking is adopted by more and more businesses.

Will Open Banking Take Off as AI Becomes More Widely Used in the Financial Sector?

AI technology has made open banking possible. Banking institutions are relying more heavily than ever on machine learning algorithms.

David Beard, founder of the price comparison site Lending Expert, commented:

“Open banking is certainly revolutionary and will definitely help lenders to better understand their applicants. Being able to see a customer’s bank statement history can highlight potential risks such as gambling debts or if they are starting with huge debts to begin with. This could help lenders steer clear of troubled customers or approve those that look more appealing.”

“The only challenge is that people have to opt into open banking, which not every customer will want to do – and ideally you need real volumes to make a difference to your bottom line and to build future models.”

“If lenders and providers can present this in a smart way that is data compliant and abides by regulation, open banking could be transformative.”


