
Computational Omnipresence And Bird’s-Eye View Are Aiding AI Autonomous Cars 


To successfully leverage a bird’s-eye view perspective of a driving scene, a view from above, a self-driving car would have to contain the needed software to receive the information. (Credit: Getty Images)  

By Lance Eliot, the AI Trends Insider   

When driving a car, you could really benefit from having a bird’s-eye view of the driving scene. Let’s explore why.  

Imagine that you are driving in a crowded and altogether hectic downtown area. There are humongous skyscraper buildings towering over you and ostensibly blocking any chance of seeing beyond an extremely narrow tunnel-vision perspective of the roadway. Amid this visual obscurity, you cannot see anything on the streets that intersect with the road that you are currently driving on. Until you get directly into an intersection, you pretty much have no idea what is taking place on any of those perpendicular avenues to the left and right of you.

You come to a corner that is packed with pedestrians and signposts, once again blocking your view, and decide to engage a rapid and sharp right turn. Just as you poke forward into the turn, you’ll have a very brief chance to glimpse whatever lies beyond. In that split second, you have to visually scan the entire driving scene and hope that you can mentally ascertain the considerations and contortions of whatever unknown menaces are looming ahead of you.   

For example, as you make the right turn, you might suddenly come upon a car that is unlawfully parked in the active lane. You didn’t see the halted car until making your turn and could ram directly into the back of this reckless driver.

Your mind races as you consider your options. 

You could hit the brakes, but this might get you violently rear-ended by a car that is closely following you into the turn. Another possibility would be to swing wide, going into the lane to the left of the illegally stopped car. But other traffic is using that lane, and your attempt to dart into their path could be catastrophic. You will either sideswipe one of those innocent cars or possibly disrupt their steady flow and produce a series of automotive-screeching cascading collisions.

Sadly, neither option is satisfactory.   

This is the nature of driving. You are always on the edge of your seat because you are in the midst of continually making life-or-death decisions. Most people who sit down behind the steering wheel are not actively thinking about the life-or-death matters involved in driving a car. Until they get themselves into a dicey driving situation, they take for granted the grim magnitude of the driving task.

All it takes is for you to make the wrong decision, and you can end up striking other cars (or they could ram into you). Besides the likely damage to the vehicles, there is a viable chance of you getting injured, plus your passengers getting injured. There is also the likely chance of injuring the driver of the other car and the passengers in that vehicle. Regrettably, there is also the real chance of producing fatalities. The startling statistics are that about 40,000 car crash-related fatalities occur in the United States annually, along with approximately 2.3 million related injuries.   

Driving a car is dangerous, and yet we generally tend to downplay the risks. It sure would be handy if there were ways to reduce those risks. The example of making the right turn especially highlights the dismal conditions of driving when you are only able to see a small part of the overarching puzzle. Had you somehow been able to see or know that there was a car parked in the active turn lane, you would have been able to take proactive steps to avoid the crisis.

What could you have done differently if you had a better semblance of the roadway situation? 

You might have come to a gradual stop before making the turn, which then presumably would have coaxed the car behind you to also slow down, thus reducing the risk of getting struck from behind. Alternatively, you might have chosen to not make the right turn at all, perhaps waiting to do so when you had driven down another block or two. In short, you would have had many more options available and been able to better make those life-or-death decisions if you had a macroscopic picture of the driving scene.   

Voila, the bird’s-eye view.

Suppose that you had some kind of extending periscope that was attached to your car. This oddball contraption could allow you to look down those intersecting streets, giving you a brief heads-up before reaching the turn. That might work, but it doesn’t seem particularly practical.   

Imagine instead that you could somehow fly above the driving scene. There you are, sitting in your car at the steering wheel, simultaneously looking down upon the driving scene. Now that’s some kind of driving. 

Rather than relying upon a farfetched notion, we can be much more down-to-earth and consider everyday options that are viable right now. Assume that we mounted a camera above the intersection where you were aiming to make that right turn. The camera could be doing real-time streaming and send the video to your in-car display.

As such, you might glance at your in-car display and observe that a motionless car is sitting smack dab in the lane that you are expecting to use when you complete your right turn. Akin to the earlier emphasis about having crucial and timely beforehand options, you can make a wiser choice with this added vantage point.   

The idea of having a camera that overlooks an intersection is altogether practical and doable today. No magic is required. We might further increase the sensing capability by including other kinds of sensory devices. For example, we could include radar, LIDAR, thermal imaging, and so on. This array of sensors would allow for detecting the driving scene in a wide variety of conditions, such as even when it is foggy, raining, snowing, etc.   

That certainly seems tempting and a valuable way to give drivers an enhanced perspective about the driving scene. There are some potential downsides. 

A human driver has limitations on how much input they can absorb at once. Furthermore, their eyes can usually only be looking at one thing at any given point in time. Thus, if you are looking down at an in-car display to see what is beyond the upcoming corner, the odds are that you’ve now taken your eyes off the road directly ahead of you. At that moment, you might not notice that a pedestrian has suddenly stepped into the street and you are about to run them over.

The question arises as to how a human driver can take in extra information. This would have to be arranged in a fashion that would somehow keep your attention still riveted to the straight-ahead driving, and meanwhile allow for glimpsing what is beyond your ordinary viewpoint.   

Even if this could be arranged (perhaps via some kind of HUD or heads-up display), there is also the issue of making sense of this additional perspective. When you glance at the in-car display, you need to mentally analyze the added perspective and combine it with whatever you already have in your noggin about the driving scene.   

The odds are that people would have difficulty doing this, certainly at first try. Unless you had some specialized training or a lot of experience using this kind of perspective-augmenting facility, you would undoubtedly struggle. You might entirely ignore the secondary perspective. You might fail to spot the key elements in the secondary perspective that apply to your existing driving efforts. And so on.   

Some people would readily take to the feature, while others might never make use of it (they could presumably disengage the feature and avoid using it).

You can imagine how this would play out in societal terms. A person gets into a car crash and if they had used the augmented perspective, they perhaps would have been able to avoid the collision. They are then held culpable for not having used the capability. Likewise, somebody using the augmented perspective gets into a car crash, and the claim is made that the additional info was confusing or led the driver to somehow make a worse choice than if they had not been using the secondary perspective.    

Shifting gears, the future of cars consists of self-driving cars. Self-driving cars are going to be using AI-based driving systems and there won’t be a human driver at the wheel. Here is an intriguing question: Could AI-based true self-driving cars successfully leverage a bird’s-eye view perspective of the driving scene? 

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Bird’s-Eye View

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

The AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about how to leverage a bird’s-eye view perspective of the driving scene. This is an aspect that needs to be programmed as part of the hardware and software of the self-driving car (I’ve referred to this as part of the “computational omnipresence” that self-driving cars can potentially attain).   

Let’s dive into the myriad aspects that come into play on this topic.

First, the AI driving system would have to contain some device or equipment that would receive the secondary perspective. As mentioned earlier, the bird’s-eye view electronics might contain a multitude of sensory devices. In addition, there would need to be an electronic communications capability to transmit the data being collected. In turn, any vehicle wishing to make use of the transmission would need to be equipped with an appropriate receiving device.   

The point is that the self-driving car would have to contain some form of hardware and software to receive the bird’s-eye view data. This might be undertaken by communications devices already on-board the vehicle, or there might need to be an added component installed into the self-driving car. Some might argue that this potentially adds an additional cost to the self-driving car. In that case, there would need to be an appropriate ROI (Return on Investment) calculated.   
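To make this a bit more tangible, here’s a minimal sketch in Python of what the receiving side might look like. Be forewarned that the port number and the JSON message schema are purely my own illustrative assumptions; an actual deployment would rely on standardized V2X communications gear rather than this toy listener.

```python
import json
import socket

# Hypothetical UDP listener for bird's-eye view broadcasts. The port
# and the JSON message schema are illustrative assumptions, not any
# actual V2X standard.
BIRDS_EYE_PORT = 47800

def listen_for_birds_eye_data() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", BIRDS_EYE_PORT))
    while True:
        raw, sender = sock.recvfrom(65535)
        message = json.loads(raw)
        # Each message is assumed to carry a capture timestamp plus a
        # list of objects detected in the intersection's frame.
        print(f"From {sender}: {len(message['objects'])} objects, "
              f"captured at t={message['timestamp']:.3f}")
```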

Via a back-of-the-envelope kind of hunch, the odds are quite high that this ROI would be worthwhile since if the capability is well-implemented, it could boost the AI driving system safety and reduce the chances of self-driving cars getting into calamitous issues (something that I’ve discussed at length in my columns).   

Okay, so let’s assume that a self-driving car is equipped to receive the data streaming from the bird’s-eye view. We next need to consider the timeliness of the data.   

Suppose the data is time-delayed in being sent. As the vehicle comes up to make a right turn, the data provided to the AI driving system might be, let’s suppose, several seconds behind real-time. In that case, the somewhat outdated data is problematic. The driving scene might have already changed, and the AI driving system would be trying to use stale data.

That’s bound to create difficulties.   

In that sense, the data coming from the bird’s-eye view has to be timely to be considered especially valuable. Latency is a factor. That being said, let’s clarify that a timing delay of a split-second might not fully undermine the value of the data, and likewise even a delay of several seconds would not necessarily obviate its value. The point is that though timing is vital, as long as the data is time-stamped, the AI driving system would presumably be programmed to take into account the recency of the data and weigh what is being received accordingly. Some are suggesting that the use of 5G will be significant to this timing aspect.
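As a rough illustration of that recency weighting, consider the small Python sketch below. The exponential decay and the one-second half-life are my own illustrative assumptions, not any established production scheme.

```python
import time

def recency_weight(data_timestamp: float, half_life_s: float = 1.0) -> float:
    """Weight an observation by its age: 1.0 for brand-new data,
    halving every half_life_s seconds. The exponential decay and the
    one-second half-life are illustrative assumptions."""
    age = max(0.0, time.time() - data_timestamp)
    return 0.5 ** (age / half_life_s)

# Example: a frame stamped two seconds ago gets a weight of about 0.25,
# so it still informs planning but cannot override fresh onboard data.
print(recency_weight(time.time() - 2.0))  # ~0.25
```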

The next aspect for the AI driving system consists of programmatically intertwining the bird’s-eye view with the existent sensory data from the in-car onboard sensory suite. In essence, the self-driving car already has its own set of sensory devices. There is a computational analysis that takes place and is commonly referred to as Multi-Sensor Data Fusion (MSDF). You can consider the bird’s-eye view to be an added set of sensors that are now being applied to the otherwise customary MSDF computational effort of the AI driving system.   
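Here’s a highly simplified Python sketch of how an overhead detection might be folded into that fusion step. The confidence-weighted averaging is a placeholder for what is, in practice, a far more elaborate MSDF pipeline involving object association, coordinate transforms between the intersection’s frame and the vehicle’s frame, and filtering over time.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # position in the vehicle's frame (meters)
    y: float
    confidence: float  # 0.0 to 1.0

def fuse(onboard: Detection, overhead: Detection) -> Detection:
    """Confidence-weighted average of two detections believed to be the
    same object. Real MSDF involves object association, coordinate
    transforms, and filtering; this is only an illustrative stand-in."""
    total = onboard.confidence + overhead.confidence
    return Detection(
        x=(onboard.x * onboard.confidence + overhead.x * overhead.confidence) / total,
        y=(onboard.y * onboard.confidence + overhead.y * overhead.confidence) / total,
        confidence=min(1.0, total),
    )

print(fuse(Detection(12.0, 3.1, 0.6), Detection(12.4, 2.9, 0.8)))
```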

Here’s something else to ponder. Suppose the bird’s-eye view provides the entirety of the raw data being collected. This could amount to extraordinarily voluminous transmissions that take precious time to send. Also, the self-driving car has to have sufficient computational processing capabilities to crunch through the data, once it has been received. All told, within the self-driving car, there have to be a lot of added or presumably available computing resources for undertaking this calculation-intensive effort. Plus, it will take time for that kind of data-related interpretation and processing to occur.

You might be thinking, well, in that case, just transmit a shorthand version. Perhaps the bird’s-eye system ought to have its own computing capabilities and pre-crunch the data. Instead of sending the streaming video of that car parked in the active lane, the bird’s-eye capability might just send a text message stating that a car is going to be blocking that upcoming turn.

This seems readily usable, but it turns out that there are thorny consequential issues that arise.  

For example, is the car completely blocking the lane or only partially intruding into the lane? Is the car truly at a complete standstill or perhaps rolling forward slowly? A zillion questions can be imagined. Without the raw data, the AI driving system will be getting an incomplete semblance of what issues might be awaiting the self-driving car. There is a challenging tradeoff of whether to only transmit a shorthand notation versus the full set of data.   
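To see how much nuance a shorthand message inevitably discards, consider a hypothetical schema like the one below. Every field name here is an assumption for illustration only; even a richer shorthand such as this still cannot let the AI driving system re-examine the raw scene.

```python
from dataclasses import dataclass
from enum import Enum

class Blockage(Enum):
    FULL = "full"          # lane entirely blocked
    PARTIAL = "partial"    # vehicle intruding only partway into the lane
    CLEARING = "clearing"  # vehicle slowly rolling forward

@dataclass
class LaneAlert:
    """Hypothetical shorthand alert sent in place of raw sensor streams."""
    lane_id: str
    blockage: Blockage
    speed_mps: float   # 0.0 means the vehicle is truly at a standstill
    timestamp: float

# Even with these extra fields answering some of the questions above,
# the AI driving system still cannot re-inspect the raw scene itself.
alert = LaneAlert("NB-right-turn-lane", Blockage.PARTIAL, 0.4, 1625097600.0)
print(alert)
```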

I’ll throw you another curveball. Should the AI driving system entirely trust the data coming from the bird’s-eye view? 

On the one hand, this bird’s-eye view could be a tremendous asset. At the same time, suppose the data is delayed and is no longer viably accurate about the driving scene. Worse, suppose that the data coming from the sensors is corrupted or perhaps has been hacked. The key is that the AI driving system is considered “responsible” for the driving of the vehicle. As such, whatever the bird’s-eye view provides is not as vital as the aspects of what the AI driving system is going to do when undertaking the driving task. 
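A minimal sanity-checking layer for such incoming data might look like the sketch below. The particular plausibility checks (timestamp freshness, physically plausible speeds) and their thresholds are illustrative assumptions; a real system would presumably add cryptographic signing, cross-checks against the onboard sensors, and anomaly detection.

```python
import time

MAX_AGE_S = 3.0       # reject data older than this (assumed threshold)
MAX_SPEED_MPS = 90.0  # anything faster is physically implausible

def is_trustworthy(message: dict) -> bool:
    """Reject bird's-eye data that is stale or physically implausible.
    These checks are only a sketch; a real system would add signing,
    cross-checks against onboard sensors, and anomaly detection."""
    if time.time() - message.get("timestamp", 0.0) > MAX_AGE_S:
        return False  # stale: the scene may have already changed
    for obj in message.get("objects", []):
        if abs(obj.get("speed_mps", 0.0)) > MAX_SPEED_MPS:
            return False  # implausible: possibly corrupted or spoofed
    return True
```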

Without seeking to create an anthropomorphic analogy, recall that earlier I mentioned that human drivers might struggle to combine the bird’s-eye view with their own recognition of the driving scene. You could suggest that the AI driving system is in the same boat.   

That being said, the beauty of the AI driving system is that, unlike a human that can only provide attention to one thing at a time (i.e., looking at the roadway versus looking at an in-car display), the AI system can be doing those types of actions simultaneously. Assuming that there is sufficient processing speed available and that the software is well-written, the AI driving system ought to be able to analyze the totality of the perspectives that are available via the amalgamated data from the widened sensory indications.   

We can add more icing to that cake.   

It is expected that self-driving cars will be equipped with V2V (vehicle-to-vehicle) electronic communications. This allows a self-driving car to send electronic messages to other nearby self-driving cars. For example, a self-driving car that perhaps made the right turn at the corner could send out a V2V cautioning that a car is parked just beyond that turn. Other nearby self-driving cars could receive the message, and the AI driving systems would accordingly (hopefully) be programmed to consider that added info.   

There is also going to be V2I (vehicle-to-infrastructure). For example, traffic signals will be beaming out electronic messages. Thus, rather than having to rely solely on a visual indication of whether a traffic signal is green-yellow-red, this can be transmitted electronically. 
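Here’s a small sketch of how such V2V and V2I messages might be dispatched within the AI driving system. The message types and fields are hypothetical stand-ins of my own devising; real vehicles would parse standardized V2X message sets rather than this illustrative pair of classes.

```python
from dataclasses import dataclass

@dataclass
class V2VHazard:
    """Hypothetical V2V warning, e.g., 'car parked just beyond the turn'."""
    hazard_type: str
    lat: float
    lon: float

@dataclass
class V2ISignalPhase:
    """Hypothetical V2I message carrying the current traffic-signal state."""
    intersection_id: str
    phase: str              # "green", "yellow", or "red"
    seconds_remaining: float

def handle_message(msg) -> None:
    # Dispatch on message type; a real stack would use standardized
    # V2X message sets rather than these illustrative classes.
    if isinstance(msg, V2VHazard):
        print(f"V2V hazard: {msg.hazard_type} near ({msg.lat}, {msg.lon})")
    elif isinstance(msg, V2ISignalPhase):
        print(f"V2I signal: {msg.intersection_id} is {msg.phase}, "
              f"{msg.seconds_remaining:.0f}s remaining")

handle_message(V2VHazard("parked_vehicle_in_lane", 25.790, -80.140))
handle_message(V2ISignalPhase("LincolnRd-LenoxAve", "red", 12.0))
```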

All told, a self-driving car is apt to have a plethora of outside info that will be flowing into the AI driving system. This data needs to be computationally examined. There needs to be a fusing of the data to try and determine what the driving scene consists of. Plus, the AI driving system cannot necessarily assume that all the data is valid or truthful. Some data will be, while perhaps other parts of the data might be noisy, corrupted, or otherwise have misleading indications. 

Various experimental or pilot uses of bird’s-eye view capabilities are taking place today.

For example, Ford has devised a bird’s-eye view sensory setup in Miami Beach, doing so as part of the joint effort of Ford and Argo AI’s self-driving vehicles. Specifically, a busy intersection in South Beach that is known for being especially hectic has been selected for the tryout (the corner of Lincoln Road and Lenox Avenue). This selection makes sense since it will provide a plentiful opportunity to test the bird’s-eye capability (versus if a quiet and otherwise seldom-used intersection were chosen).

The intersection is near an outdoor mall and lots of popular stores and eateries, so there is a bustling and ongoing flow of bicyclists, pedestrians, and human-driven cars at that intersection. As with any of these efforts, it is prudent to do so in conjunction with roadway and related authorities. This effort includes the Florida Department of Transportation, the City of Miami Beach, and Miami-Dade County.   

Per an indication by Scott Griffith, CEO of Ford Autonomous Vehicles and Mobility Businesses, this effort brings together the added pieces of the puzzle that underlie the emergence of self-driving cars, including the use of state-of-the-art infrastructure capabilities (see his LinkedIn coverage): “Bringing together the future of self-driving requires us to think about every piece of the puzzle. One part of this is researching emerging technologies, like smart infrastructure, to explore how we can provide our self-driving vehicles with as much information as possible to navigate complex urban areas.”

Bryan Salesky, CEO of Argo AI, whom I’ve previously covered in my columns, is well-known for his focused mission of seeking to develop and deploy self-driving cars to make getting around cities a safer, easier, and more thoroughly enjoyable experience for all. By actively participating in these kinds of bird’s-eye view efforts, Argo AI is taking a synergistic and undoubtedly incremental step along that commendable path.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my detailed elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Overall, the bird’s-eye view provides a handy add-on for the advent of self-driving cars.  

I mention that this should be considered an add-on since the philosophical bent must be that an AI-based true self-driving car will still operate effectively without having a bird’s-eye view available. This is important due to the obvious aspect that many locales won’t have a bird’s-eye setup in place. In addition, there is always the chance that any such equipment might suddenly be disrupted or experience troubles, in which case the AI driving system has to be programmed to work as though the bird’s-eye view is no longer functional.

There are other twists and turns to be considered. 

For example, assume that the bird’s-eye view is being operated 24×7, naturally so since cars can be coming through an intersection at any time of the day. The concern by some is that this is essentially a spying type of capability, one that can record the coming and going of people in whatever locale has the bird’s-eye view established. Yes, this is ostensibly the case, though keep in mind that we already have lots and lots of video cameras set up in many areas and the trend toward doing so continues to expand (partially due to the low-cost nature of today’s surveillance-style technologies). 

The key here is that the aspect of having a bird’s-eye view for car traffic is presumably not markedly different than if there were conventional video cameras put in place. The same kind of debate and qualms would ensue. You might try to argue that the added array of sensors makes the bird’s-eye capability somewhat different, such as including perhaps radar, LIDAR, and the like, though this seems not demonstrably different with respect to the overarching qualms involved.

One last quick point for the moment on this topic: There are other means to gain a bird’s-eye view.   

For example, I’ve covered the use of autonomous drones that would work in concert with self-driving cars (see my coverage in my columns). A self-driving car might have a launchpad that can send a drone into the air, which would then fly around at the command of the AI driving system and provide sensory data from a bird’s-eye perspective. This could be done via the self-driving car’s own capabilities, or there might be drones that are provided by others, such as a local roadway authority that has autonomous drones.

All of the same issues earlier stated are likely to be encompassed by a drone-based bird’s-eye view. Will the AI driving system be programmed to make use of the drone-added perspective? Will the AI be programmed to deal with any faulty data coming from a drone (which could happen)? Etc.   

The privacy aspects also come into play, which can be worsened or lessened via the use of a drone. On the one hand, a drone is likely to be only temporarily in the skies, while fixed-in-place bird’s-eye sensory equipment is likely to be somewhat permanently installed. At the same time, realize that the drone could wander around and glean a likely wider swath of data. On and on this matter goes.

The other futuristic consideration: how many of these bird’s-eye views will we end up having?

Suppose that an intersection has several bird’s-eye views that have been put in place. You could argue that this is fine since the more the merrier; on the other hand, the counterbalancing argument is that things are getting out of hand, and too much can be overbearing and overwhelming.

Similarly, when you consider the use of drones for a bird’s-eye view, just imagine if all the cars in a given area opted to launch their respective drones; you would seemingly have a sky utterly cluttered with a massive flock of such mechanical beasts. Anyway, we can certainly aim to start someplace, seeking to crawl before we walk, walk before we run, and run before we fly, so to speak.

Speaking of birds and flying, when I was a youngster, I had a canary as a beloved pet. I used to dream about what the canary could see when it took flight (we would let it out of the cage in our house and allow the bird to cheerfully fly throughout).   

Maybe I could get another canary now and train it to carry around some sensory devices, proffering yet another means of getting a bird’s-eye view for self-driving cars. Besides toting around the sensors, perhaps it would sing those delightful warbling bird songs. And the costs to keep the canary on duty would be appealingly low, only requiring everyday birdseed.   

You could cheekily assert that the canary could forewarn about impending car collisions, serving as a veritable canary in a coal mine. 

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 

