Confidential Computing Is Coming To AI Autonomous Vehicles 

The use of confidential computing for the AI self-driving car fleet cloud could make it more difficult for hackers to launch a cyberattack. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

Imagine a scenario involving a coy bit of spy craft.   

A friend of yours wants to write down a secret and pass along the note to you. There is dire concern that an undesirable interloper might intercept the note. As such, the secret is first encrypted before being written down, and thus will be inscrutable to anyone who intercepts it. All told, the message will look scrambled or seem like gobbledygook.

You have the password or key needed to decrypt the message. 

After the note has passed through many hands, it finally reaches you. The fact that many others saw and ostensibly were able to read the note is of no consequence. They could not make heads or tails of what it said.

Upon receiving the encrypted message, you decrypt it. Voilà, you can now see what it says. The message has been appropriately received and deciphered. The world is saved and everyone can rejoice.   

But wait a second: when you decrypted the message, you wrote it down, and meanwhile a dastardly spy was looking over your shoulder. The snoop has now seen the entire message. The jig is up. Sadly, after the note had gone from hand to hand and been protected that entire time, at this last moment the secret was revealed.

Maybe worse still, you are the one that revealed it (i.e., you being the intended recipient).

What went wrong? 

Some might refer to this as the last-mile problem, or perhaps more aptly coin it as the last-step problem in this instance. 

You see, the catchphrases of last-mile or last-step are often used when describing a situation that has a kind of gap or arduous challenge at the very end of a task or activity. For example, in the telecommunications industry, there is the notion that the hardest and most costly part of providing high-speed networking to homes is the so-called last mile from the main trunk to the actual home of the consumer. 

Envision a cable that runs down the middle of a neighborhood street; the last mile consists of all the offshoot branches that need to extend from that centerline to each specific domicile. The number of such branches is high. It is one thing to simply lay down the central cable, and quite another, hugely costly effort to then string a line out to each house. Even though the actual distance from the center to each house is not a mile, the notion is that you've reached the proverbial last mile or last step of the process.

This last-mile or last-step can be the weak link in a long chain of efforts.    

Three Major Ways Data is Protected 

In the cybersecurity field, there are three major ways that data, such as the message on the note, is usually intended to be protected or secured:

  • Data at rest (standing still data)
  • Data in transit (flowing data)
  • Data in use (when being read or utilized)   

Your friend’s note to you was in transit when it was being passed along from person to person on its way to you. At some point, perhaps the note was sitting on someone’s desk for a while, waiting for them to pick it up and continue the journey of the note to you. That would be data at rest.  

When you opted to decrypt the note and take a look at what it said, the data was considered in use at that time. Per the saga, this is when things went awry and a spy saw the message. Up until that moment, the message was relatively secret and secure. The last mile or last step exposed it.   
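
For the programmers among you, here is a minimal sketch of those three states in Python code. It uses the third-party cryptography package (pip install cryptography); the filename and the message are illustrative assumptions of mine, purely for the sake of the tale.

```python
# A minimal sketch of the three data states, using the third-party Python
# "cryptography" package. The filename and message are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held only by the intended recipient
cipher = Fernet(key)

# Data at rest: the encrypted note sits on disk, unreadable without the key.
token = cipher.encrypt(b"meet at the old bridge at midnight")
with open("note.enc", "wb") as f:
    f.write(token)

# Data in transit: the same ciphertext is passed from hand to hand (or over
# a network); any interceptor sees only scrambled bytes.
with open("note.enc", "rb") as f:
    in_transit = f.read()

# Data in use: the last step. Decrypting produces plaintext in ordinary
# memory -- the moment a shoulder-surfing spy (or memory-scraping malware)
# could read it. This exposure window is what confidential computing targets.
plaintext = cipher.decrypt(in_transit)
print(plaintext.decode())
```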

I bring this up to highlight a hot new trend known as confidential computing.   

We will use the tale of the encrypted note to help explore the nature of confidential computing. Admittedly, the parable per se is not precisely on-target with the topic, but you will soon see that it does offer a semblance of insightful parallels.   

Confidential computing is usually associated with making use of cloud computing. 

Cloud computing is the now familiar notion of using unseen computing resources that are available via remote access. Referring to this as cloud computing is an easy way to envision the matter and has been an extremely catchy way to denote various computers as being “in the cloud” and available for use.   

When your data is placed into a cloud-based computer, you likely want to feel comfortable that the data is well-protected. If the data is sitting in a database, perhaps a fiendish hacker might try to access the data. You want to prevent the cybercrook from being able to see your precious data, and ergo there are typically cybersecurity locks that seek to keep the bad hackers out of the database. 

Suppose though the evildoer cracks through the locks. Aha, by encrypting the data, which is sitting at rest, the ability to do anything untoward with the data is greatly lessened. Though the hacker might be able to see the data, it is scrambled and generally unusable.   

Imagine that there is a need to share the data and thus copy it to another database on a different computer. While the data is in this transit from one database to another, it is potentially vulnerable to prying eyes. If the data is encrypted while in transit, the interloper will presumably not gain much since the data is inscrutable. 

We now are heading to the last mile or last step. 

Assume that at some point the data will be needed for making calculations. The database with the encrypted data is accessed and the data while still encrypted is copied over to a computer that is doing the computations. All’s good so far.   

Upon the encrypted data being brought into the CPU (Central Processing Unit) of the computer, at this last mile or last step, it becomes necessary to decrypt it, since the data won't be of much use for making the desired calculations if it remains in an encrypted format.

Here is the potential loophole in all of this series of carefully encrypted steps. Now that the data is momentarily decrypted for use while inside the CPU, it becomes open for a wrongdoer to peek at it.   

Your first thought might be that the idea of a cybercriminal hacking all the way into the inner guts of the CPU while it is processing seems nearly unimaginable. 

Can they really do that? The answer is yes, it is possible.

That being said, it is generally quite a difficult trick to pull off. Numerous system protections would have to be overcome. Nonetheless, a very determined and crafty cyber hacker could devise such a scheme (especially when you include nation-state cybersecurity capabilities).

This last-mile cybersecurity concern is being partially mitigated by the use of confidential computing. 

Within the CPU of a computer arranged for confidential computing, there is a special, highly secure enclave. This is usually achieved via a hardware-based environment that governs the execution of tasks running on the CPU. In industry parlance, this is known as a Trusted Execution Environment (TEE). Some keys or passwords are kept under added protection and used only when the last step occurs.

The enclave tries to hide from any other resources what is going on inside it. Remember how you inadvertently allowed a spy to look over your shoulder? That is the type of intrusion the enclave is constructed, fortress-like, to keep at bay.

Here’s why this is especially relevant to cloud computing. 

Suppose the cloud computer being used has somehow gotten malware on it. Without a provision for confidential computing, the risk of the malware peeking at the CPU and catching the data in an unencrypted format is heightened. Likewise, the OS or operating system of the cloud computer could potentially take a peek (and perhaps then leak it elsewhere), and there is even (sadly) a possibility that employees of the cloud provider might have access to take a look.

Using the TEE and the enclave, potential interlopers cannot see what is going on inside the CPU during the computational efforts. Furthermore, confidential computing typically includes a feature whereby, upon detecting an intrusion, any operations are canceled and an alert is raised.
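
To make the mechanics concrete, here is a conceptual simulation in plain Python. Be aware that real TEEs, such as those built atop Intel SGX or AMD SEV, are hardware mechanisms accessed through vendor SDKs; the class and checks below are hypothetical stand-ins of mine that merely mimic the pattern, namely decrypt only inside the guarded scope, verify the workload before running it, and abort with an alert if the integrity check fails.

```python
# A conceptual simulation of enclave behavior -- NOT a real TEE API. The
# names and the attestation-style check are hypothetical illustrations.
import hashlib
from cryptography.fernet import Fernet

class SimulatedEnclave:
    def __init__(self, key: bytes, expected_code_hash: str):
        self._cipher = Fernet(key)           # the key never leaves the enclave
        self._expected = expected_code_hash  # attestation-style measurement

    def run(self, encrypted_data: bytes, workload) -> bytes:
        # "Attestation": confirm the workload is the code we expect to run.
        measurement = hashlib.sha256(workload.__code__.co_code).hexdigest()
        if measurement != self._expected:
            raise RuntimeError("ALERT: integrity check failed; operation canceled")
        plaintext = self._cipher.decrypt(encrypted_data)  # decrypt inside only
        try:
            result = workload(plaintext)
        finally:
            del plaintext                    # limit the plaintext's lifetime
        return self._cipher.encrypt(result)  # results leave the enclave encrypted

# Usage: the plaintext exists only within the enclave's run() call.
key = Fernet.generate_key()

def word_count(data: bytes) -> bytes:
    return str(len(data.split())).encode()

expected = hashlib.sha256(word_count.__code__.co_code).hexdigest()
enclave = SimulatedEnclave(key, expected)
secret = Fernet(key).encrypt(b"the quick brown fox")
print(Fernet(key).decrypt(enclave.run(secret, word_count)))  # b'4'
```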

This could be likened to you noticing that a person was spying over your shoulder. You would stop the decryption of the secret message. Of course, you might have already started to decrypt it, in which case maybe the interloper saw some of it, but at least you would curtail your activities at that juncture. Plus, you likely would have called for the cops to come and bust the reprehensible spy.   

Most of the major cloud providers have made available various flavors of confidential computing, including the biggies such as IBM Cloud, Amazon AWS, Microsoft Azure, Google Cloud, Oracle Cloud, and others. The makers of CPUs are also integral to the confidential computing architecture and thus companies such as Intel, AMD, and the like are involved.   

As eloquently stated in a paper by IBM Fellow and CTO for Cloud Security, Nataraj Nagaratnam: “As companies rely more and more on public and hybrid cloud services, data privacy in the cloud is imperative. The primary goal of confidential computing is to provide greater assurance to companies that their data in the cloud is protected and confidential, and to encourage them to move more of their sensitive data and computing workloads to public cloud services.”   

There is a well-known group called the Confidential Computing Consortium (CCC) that has banded together numerous cloud providers, hardware vendors, and software development outfits to focus on confidential computing. Per the posted CCC remarks of Stephen Walli, Governing Board Chair: “The Confidential Computing Consortium is a community focused on open source licensed projects securing data in use and accelerating the adoption of confidential computing through open collaboration.”   

For those readers who are adept at programming, you likely know that your encrypted data, while sitting in a database, is usually decrypted once you bring the data into the internal memory of the computer system. This is done so that the CPU can readily use the data for doing computations. In the confidential computing arrangement, the data is not decrypted until the final moment, the last mile or last step, of being placed into the CPU for use. Therefore, even while sitting in internal memory, the data is encrypted and less vulnerable to cyberattack.
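
Here is a short runnable contrast of the two flows, continuing the same illustrative Fernet setup as in the earlier sketch. The comments carry the point: in the conventional flow, the plaintext occupies ordinary process memory from the instant it is loaded, which is precisely the exposure that confidential computing is meant to close off.

```python
# Conventional flow versus confidential-computing flow (conceptually).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)
stored = cipher.encrypt(b"42,17,99")  # encrypted at rest in the "database"

# Conventional flow: decrypt as soon as the data is loaded into memory.
in_memory = cipher.decrypt(stored)    # plaintext now sits in ordinary RAM
values = [int(v) for v in in_memory.split(b",")]
print(sum(values))                    # 158

# Confidential-computing flow (conceptually): in_memory would remain
# ciphertext here, and decryption would occur only at the last step,
# inside the CPU's protected enclave at computation time.
```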

One additional quick point is that this scheme for confidential computing does not guarantee that no one can ever hack it. The cybersecurity field is an ongoing game of cat and mouse. Each new protection that is devised will be deviously picked apart until some unforeseen hole or gotcha is discovered. The hole will usually get plugged. Meanwhile, the gambit continues as the cybercrooks try to find a means to undo or overcome the plug or look for other ways to break in.

This is a never-ending cycle.   

In that sense, the confidential computing approach is another added layer of cybersecurity. The more layers you have, the harder it generally becomes for someone to crack through. At your home, you might have a gated fence around your property (a layer of protection), locks on your doors and windows (another layer of protection), and a motion detector inside the house (yet an additional layer). The belief is that by placing numerous hurdles in the way of a robber, they will be rebuffed in their intrusion efforts.

Having those added layers is not cost-free. For each layer, you need to ascertain the cost of the added protection versus the risks and consequences of someone breaking in. This is of course the same for confidential computing. Whether you require confidential computing is contingent on the type of computing activities you are undertaking, the magnitude of cybersecurity you are desirous of achieving, the risks and adverse consequences if a cyber breach occurs, etc. 

Your car might also have various layers of security protection. There are locks on the car doors. The windows are made of materials that are hard to smash. Any motion immediately next to the vehicle might be detected and cause the horn to sound. And so on. 

Speaking of cars, the future of cars consists of AI-based true self-driving cars.   

Allow me to briefly elaborate on this point and then tie things to the topic of confidential computing.   

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, and nor is there a provision for a human to drive the vehicle.   

Here’s an intriguing question that is worth pondering: Will confidential computing be useful for the advent of AI systems all told, and particularly for the advent of AI-based true self-driving cars?   

Before jumping into the details, I’d like to further clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

AI And Self-Driving Cars And Confidential Computing   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

One overarching point that is worthy of particular attention is that any AI system, and especially one running in the cloud, should potentially be making use of confidential computing.

This is regrettably not a top-of-mind consideration for many AI developers. 

The typical focus for AI software engineers is primarily on the underlying AI capabilities such as employing advanced uses of Machine Learning (ML) and Deep Learning (DL). Once the AI system is ready to be fielded, the AI builders tend to be less attentive to what happens when the program is placed into operational use. The assumption is that whatever existing cybersecurity is already available in the execution environment will probably be sufficient. 

The average AI developer usually wants to get back to their AI bag of tricks and continue tweaking the AI-related elements of the system, or perhaps move onward to some other new development that requires their honed skills at crafting AI systems. Concerns about whether the prevailing execution environment for their budding AI system is highly secure do not explicitly enter into their mindset, nor are such concerns found in their usual toolset.

Some will exhort, hey, I'm not a darned cybersecurity expert, I'm an AI developer (that line is a heartfelt homage to the classic Star Trek refrain: I'm a doctor, darn it, not an engineer).

The thing is, the best AI systems can be readily brought to their knees if the cybersecurity is not topnotch and using all available layers of protection. Up until recently, many AI systems were not necessarily aimed at domains that entailed potentially high risks and pronounced adverse consequences if the AI was undermined at execution.   

Nowadays, with AI becoming pervasive across all manner of applications, the idea of treating AI systems as merely experimental or prototypes is long gone. 

Simply stated, any AI developer worth their salt should be giving due consideration to how their AI systems will be deployed, including what kinds of cyberattacks might be launched to undercut the AI system's processing. Since the AI developer ought to know where the especially vulnerable weaknesses lie in their AI while it is executing, they should take a close look at confidential computing as a potential countermeasure and gauge whether this added layer of security is warranted.

I’m not saying that it will always be a necessity, just that with AI systems of a sensitive nature running in the cloud, it is prudent and nearly obligatory to consider which of the numerous potential cybersecurity precautions should be undertaken. 

Hopefully, that will be a useful call to arms for those AI developers that haven’t yet taken into account the utility of confidential computing. And perhaps a startling wake-up blaring of trumpets for some.   

Moving beyond the overall notion of all types of AI systems that are running in the cloud, let’s next take a gander at the use of the cloud for the specific advent of AI-based true self-driving cars. The most commonly anticipated use of the cloud for self-driving cars encompasses the use of OTA (Over-The-Air) electronic communications capabilities.   

Via OTA, various patches and updates stored in the cloud for a fleet of self-driving cars can be downloaded into each autonomous vehicle and automatically installed. This is handy for remotely pushing out new features for the AI driving system or providing bug fixes, plus it avoids having to bring the vehicles to a dealer site or some repair shop merely to do needed software updates.

The OTA will also enable the easy uploading of data from the self-driving cars into the fleet-provided cloud. Self-driving cars will have a sensor suite that includes video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and other such devices. The data they collect can be usefully analyzed by gathering it together across an entire fleet of self-driving cars and conglomerating it in the cloud.

So, you might be wondering, what does this have to do with confidential computing?   

Think of it this way: if there are programs and data in the cloud that are going to potentially be downloaded and installed into the AI driving systems, this becomes a handy and sneaky path for a cyber attacker to get their malware into the self-driving cars. The cybercrook merely plants the evil-doing elements into the cloud and then patiently waits until the OTA mechanism does the rest of the work for the wrongdoer by broadcasting it out into the fleet.

Whereas most people tend to be thinking about how an AI driving system might get corrupted or undermined by someone physically accessing the autonomous vehicle, the likely greater threat comes from using the OTA to do so. The innocent beauty of the OTA is that it is a trusted avenue to directly get something inserted into the AI driving system, and this will happen across an entire fleet of self-driving cars. Imagine that there were hundreds, maybe thousands, perhaps hundreds of thousands of self-driving cars and all of them were using an OTA to get updates from a fleet cloud.   
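
One standard safeguard that this OTA concern points toward, and which I'll sketch here for illustration, is to have the fleet cloud cryptographically sign each update package so that a vehicle installs only what verifies. This is a minimal sketch using Ed25519 signatures from the Python cryptography package; the key handling, package bytes, and function names are my illustrative assumptions rather than any particular automaker's mechanism.

```python
# A minimal sketch of signed OTA updates. Key handling and names are
# illustrative assumptions, not any particular automaker's mechanism.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Fleet cloud side: sign the update package with the fleet's private key.
fleet_private_key = Ed25519PrivateKey.generate()
fleet_public_key = fleet_private_key.public_key()  # baked into each vehicle

update_package = b"patched-driving-module-v2"      # placeholder bytes
signature = fleet_private_key.sign(update_package)

# Vehicle side: verify before installing; a planted or tampered package fails.
def install_ota_update(package: bytes, sig: bytes) -> None:
    try:
        fleet_public_key.verify(sig, package)
    except InvalidSignature:
        print("ALERT: OTA package failed verification; update rejected")
        return
    print("Signature valid; proceeding with installation")  # stand-in action

install_ota_update(update_package, signature)                # accepted
install_ota_update(update_package + b"-malware", signature)  # rejected
```

Notice that signing merely relocates the prize: the attacker now covets the signing key held in the fleet cloud, which is exactly the kind of secret that a confidential computing enclave is designed to shield.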

Okay, so we might want to put some devoted attention to what is happening in the fleet cloud.   

The more cybersecurity we put there, the lesser the chance that the OTA will become a specter of doom. It could be that the judicious use of confidential computing for the fleet cloud will curtail, or at least make much harder, the launching of a cyberattack that might otherwise get carried out into the AI driving systems of the fleet.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Another potential use of confidential computing would be for the execution or processing that takes place inside self-driving cars.   

When the AI driving system is being executed on the onboard computer processors, this execution obviously needs to be highly secure too. The tough tradeoff is that confidential computing tends to incur a performance hit on the processors and thus presents a somewhat complicated consideration when dealing with real-time systems. Keep in mind that real-time processing is controlling the actions of the self-driving car. Any substantive delay in processing times can be problematic. 
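
As a back-of-the-envelope illustration of that tradeoff, consider measuring per-cycle cryptographic overhead against a control-loop deadline. The 10-millisecond budget and the payload size below are made-up figures for illustration only; a genuine evaluation would measure the actual TEE's overhead on the target onboard hardware.

```python
# A toy measurement of crypto overhead against a (hypothetical) real-time
# control-loop budget. The figures are illustrative, not from any vehicle.
import time
from cryptography.fernet import Fernet

CYCLE_BUDGET_MS = 10.0                 # hypothetical control-loop deadline
cipher = Fernet(Fernet.generate_key())
sensor_frame = bytes(64_000)           # hypothetical 64 KB sensor payload
encrypted_frame = cipher.encrypt(sensor_frame)

start = time.perf_counter()
_ = cipher.decrypt(encrypted_frame)    # stand-in for protected processing
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"crypto overhead: {elapsed_ms:.2f} ms of a {CYCLE_BUDGET_MS} ms budget")
if elapsed_ms > CYCLE_BUDGET_MS:
    print("WARNING: security overhead alone exceeds the real-time deadline")
```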

Self-driving cars are real-time machines that also just so happen to involve life-or-death matters.   

You typically do not have that same life-or-death concern for an everyday cloud-based application. If the cloud processing has any modicum of delay, this might be of little consequence. In addition, because a cloud-based application resides in the cloud, you can readily toss more processors at the application or reallocate to using faster processors available in the cloud.   

For a self-driving car, the processors installed into the autonomous vehicle are generally not as readily switched out, since that can be a very physical effort and logistically costly to undertake. Automakers and self-driving tech firms are pretty much stuck once they’ve decided which processors to put into their self-driving cars. They’ve got to hope that the choice will last a while.   

Overall, a handy insight that arises from confidential computing is that we need to be on our toes for any and all kinds of cyberattacks. What you don’t want to do is establish a series of tightly secure steps and then neglect to consider what will happen in that last mile or last step. 

Make sure that what you start, finishes properly and securely. 

Those are wise words to live by.  

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 


Source: https://www.aitrends.com/ai-insider/confidential-computing-is-coming-to-ai-autonomous-vehicles/
