

Are drone swarms the future of aerial warfare?



The technology of deploying drones in squadrons is in its infancy, but armed forces are investing millions in its development

As evening fell on Russia's Khmeimim airbase in western Syria, the first drones appeared. Then more, until 13 were flashing on radars, speeding towards the airbase and a nearby naval facility.

The explosives-armed aircraft were no trouble for Russian air defences, which shot down seven and jammed the remaining six, according to the country's defence ministry. But the failed attack in January last year was disturbing to close observers of drone warfare.

"It was the first instance of a mass-drone attack and the highest number of drones that I believe we've seen non-state actors use simultaneously in a combat operation," says Paul Scharre, a defence analyst and author who studies the weaponisation of artificial intelligence.

The attempted attacks continued, and in September the Russian army said it had downed nearly 60 drones around the Khmeimim base that year.

A Russian general presents what he says are drones that were intercepted near the Khmeimim base. Photograph: Maxime Popov/AFP via Getty Images

For now, military drone use is dominated by lightweight surveillance unmanned aerial vehicles (UAVs) and larger attack UAVs. This situation is unlikely to change in the near future: according to defence experts at the information group Janes, orders for both types of device are expected to increase dramatically in the decade ahead.

But the assaults on Khmeimim, as well as September's successful strike on oil facilities in Saudi Arabia, were early flashes of a possible future for aerial warfare: drone swarming.

The technology of swarming (drones deployed in squadrons, able to think independently and operate as a pack) is in its infancy, but armed forces around the world, including in the UK, are investing millions of pounds in its development.

Smoke rises from Saudi Aramco's Abqaiq oil processing facility on 14 September. Photograph: AP

The drones used to attack Khmeimim and the Saudi facilities were likely to have been programmed with the GPS coordinates of their targets and then launched in their direction. Israel is already using hordes of drones to overwhelm Syrian air defences, saturating areas with more targets than anti-aircraft systems can handle.

According to analysts, drone swarms of the future could have the capacity to assess targets, divide up tasks and execute them with limited human interaction.

"The real leap forward is swarming, where a human says 'Go accomplish this task' and the robots in the swarm communicate amongst each other about how to divvy it up," Scharre says.

A test at China Lake, California, shows drone swarms forming an attack orbit. Photograph: US Department of Defence

Analysts predict we might see rudimentary versions of the technology in use within a decade. That might include swarms of drones operating on multiple different frequencies, so they are more resistant to jamming, or swarms that can block or shoot down multiple threats more quickly than the human brain can process.

"Two fielders running to catch a ball can [usually] coordinate amongst themselves," Scharre says. "But imagine a world where you have 50 fielders and 50 balls. Humans couldn't handle the complexity of that degree of coordination. Robots could handle that with precision."

Advances in swarming technology are mostly classified, though governments have given glimpses of their progress.

In 2016, the US released video of more than 100 micro-drones over a lake in California manoeuvring "as a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature", an air force scientist said.


In tests last year, the Defense Advanced Research Projects Agency claimed a small squadron of its drones had successfully shared information, allocated jobs and made coordinated tactical decisions against both pre-programmed and pop-up threats.

The US navy has already announced breakthroughs in autonomous boats that could sweep for mines, or serve effectively as bodyguards for larger, manned vessels.

"If you look back at the USS Cole bombing, that boat was just sitting as an open target at that port in Yemen," says Dan Gettinger, a co-director at the Center for the Study of the Drone at Bard College, referring to the October 2000 attack by two boat-borne al-Qaida suicide bombers that killed 17 American sailors.

"If you had a protective shield of unmanned surface vehicles, they could intercept that before it happens," he says.

The idea of autonomous, intelligent drones empowered to kill understandably sparks concern. António Guterres, the UN secretary-general, said in a speech last year: "The prospect of machines with the discretion and power to take human life is morally repugnant."

In 2017, advocates of a ban against autonomous weapons released a short film, Slaughterbots, depicting a dystopian future where terrorists could unleash swarms of tiny drones capable of identifying and killing specific people.

Some analysts are sceptical of these nightmare scenarios. Drones may one day develop the capacity to carry out targeted killings in swarms. But militaries are not certain to adopt such technology, says Jack Watling, a senior fellow at the Royal United Services Institute.

Their reluctance would be more about expense than ethics. "If you think about the logistics of having a lot of sophisticated drones that can pick out individuals, process the data, communicate with each other, navigate a city, there's a lot of moving parts to that and it's very expensive," Watling says.

More affordable, and therefore more likely to be procured, he says, will be drone swarms that perform relatively simple tasks such as cluttering radar systems to distract and confuse enemy sensors.

Part of what makes drones so attractive is their low cost, Scharre adds. Western military inventories have drastically shrunk in recent years, as ships and aircraft have become more sophisticated and too expensive to purchase in large quantities (which, in turn, raises the cost of each vessel or plane).

Drones are a cheap way to boost the sheer size of a force. "Western militaries are trying to find ways to add numbers to the equation, to complement these expensive, bespoke aircraft and ships with cheaper systems that can augment them," Scharre says.

Ultimately, he adds, it may be fruitless to try to predict the future of swarming technology from the vantage point of 2019. "Imagine someone looking at an airplane in 1912," he says. "They might be thinking, 'This will be useful, but nobody really knows yet what it can do.'"



Uber's self-driving unit starts mapping Washington, D.C. ahead of testing




Uber Advanced Technologies Group will start mapping Washington, D.C., ahead of plans to begin testing its self-driving vehicles in the city this year.

Initially, there will be three Uber vehicles mapping the area, a company spokesperson said. These vehicles, which will be manually driven and have two trained employees inside, will collect sensor data using a top-mounted sensor wing equipped with cameras and a spinning lidar. The data will be used to build high-definition maps. The data will also be used for Uber’s virtual simulation and test track testing scenarios.

Uber intends to launch autonomous vehicles in Washington, D.C., before the end of 2020.

At least one other company is already testing self-driving cars in Washington, D.C.: Ford announced plans in October 2018 to test its autonomous vehicles in the city. Argo AI is developing the virtual driver system and high-definition maps for Ford's self-driving vehicles.

Argo, which is backed by Ford and Volkswagen, started mapping the city in 2018. Testing was expected to begin in the first quarter of 2019.

Uber ATG has kept a low profile ever since one of its human-supervised test vehicles struck and killed a pedestrian in Tempe, Ariz. in March 2018. The company halted its entire autonomous vehicle operation immediately following the incident.

Nine months later, Uber ATG resumed on-road testing of its self-driving vehicles in Pittsburgh, following a Pennsylvania Department of Transportation decision to authorize the company to put its autonomous vehicles on public roads. The company hasn’t resumed testing in other markets, such as San Francisco.

Uber is collecting data and mapping in three other cities: Dallas, San Francisco and Toronto. In those cities, just like in Washington, D.C., Uber manually drives its test vehicles.

Uber spun out the self-driving car business in April 2019 after closing $1 billion in funding from Toyota, auto-parts maker Denso and SoftBank’s Vision Fund. The deal valued Uber ATG at $7.25 billion at the time of the announcement. Under the deal, Toyota and Denso are providing $667 million, with the Vision Fund throwing in the remaining $333 million.




Community of AI Artists Exploring Creativity with Technology




AI artists explore the use of AI as a creative medium to produce original works, often exploring themes around the relationship of humans and machines. (GETTY IMAGES)

By AI Trends Staff

Artists are using AI to explore original work in new mediums.

Refik Anadol, for example, creates art installations using pools of data to create what he calls a new kind of “sculpture.” His “Machine Hallucination” installation ran in Chelsea Market, New York City, last fall.

The Turkish artist used machine learning algorithms on a dataset of more than three million images, to create a synthetic reality experiment. The model generates “a data universe of architectural hallucinations in 512 dimensions,” according to an account of the exhibit in designboom.

The exhibit was installed in the boiler room in the basement of Chelsea Market, a 6,000 square-foot space newly opened with the Anadol exhibit. He commented on being selected, "I'm especially proud to be the first to reimagine this historic building, which is more than 100 years old, by employing machine intelligence to help narrate the hybrid relationship between architecture and our perception of time and space. Machine Hallucination offers the audience a glimpse into the future of architecture itself."

"Machine Hallucination" was shown on giant screens or projected onto walls, floors, ceilings or entire buildings, using data to produce a kind of AI pointillism, in an immersive experience.

One theme of Anadol’s work is the symbiosis and tension between people and machines, according to an account in Wired.  The artist says his work is an example of how AI—like other technologies—will have a broad range of uses. “When we found fire, we cooked with it, we created communities; with the same technology we kill each other or destroy,” Anadol stated. “Clearly AI is a discovery of humanity that has the potential to make communities, or destroy each other.”

Artists working with AI as a medium have come together to form a community site that curates works by pioneering AI artists and acts as the world's first clearinghouse for AI's impact on art and culture. The site was founded by Marnie Benney, an independent, contemporary art curator, and features the community of AI artists and the works they are investigating.

The artists are exploring themes around our relationship with technology. Will AI be the greatest invention or the last one? How can AI expand human creativity? Can AI be autonomously creative in a meaningful way? Can AI help us learn about our collective imagination? How can artists build creative and improvisational partnerships with AI? Can AI write poetry and screenplays? What does a machine see when it looks at the depth and breadth of our human experience?

Lauren McCarthy, LA-based AI artist who examines social relationships

These are fun questions to consider. The site offers resources for AI artists, a timeline of AI art history and a compilation of unanswered questions about AI.

Among the artists listed is Lauren McCarthy, an LA-based artist who examines social relationships in the midst of surveillance, automation and algorithmic living. She is the creator of p5.js, an open-source JavaScript library for learning creative expression through code online. It has over 1.5 million users. She is co-director of the Processing Foundation, a nonprofit with a mission to promote software literacy within the visual arts, and an assistant professor at UCLA Design Media Arts.

See the source articles in designboom and Wired.




Self-Imposed Undue AI Constraints and AI Autonomous Cars




Constraints coded into AI self-driving cars need to be flexible enough to allow adjustments when, for example, rain floods the street and it might be best to drive on the median. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider


They are everywhere.

Seems like whichever direction you want to move or proceed, there is some constraint either blocking your way or at least impeding your progress.

In his famous 1762 book "The Social Contract," Jean-Jacques Rousseau proclaimed that man is born free and yet everywhere he is in chains.

Though it might seem gloomy to have constraints, I'd dare say we probably all welcome the societal constraint that inhibits arbitrarily deciding to murder someone. Movies like "The Purge" give us insight into what might happen if we removed the criminal constraints or repercussions of murder: if you've not seen the movie, let's just say that a 12-hour period in which any crime can be committed without legal ramifications makes for a rather sordid result.

Anarchy, some might say.

There are thus some constraints that we like and some that we don’t like.

In the case of our laws, we as a society have gotten together and formed a set of constraints that governs our societal behaviors.

One might contend, though, that some constraints are beyond our ability to overcome, imposed upon us by nature or some other force.

Icarus, according to Greek mythology, tried to fly using wings made of wax, flew too close to the sun, and fell into the sea and drowned. Some interpreted this to mean that mankind was not meant to fly. Biologically, our bodies are certainly not made to fly on their own, and this is indeed a constraint, yet we have overcome it through the Wright brothers' invention of artificial flight (I'm writing this right now at 30,000 feet, flying across the United States in a modern-day commercial jet, even though I was not made to fly per se).

In computer science and AI, we deal with constraints in a multitude of ways.

When you are mathematically calculating something, there are constraints you might apply to the formulas you are using. Optimization is a commonly imposed constraint: you want to figure something out, and among the ways of doing so, the most optimal version is preferred. One person might develop a computer program that takes hours to calculate pi to thousands of digits, while someone else writes a program that can do so in minutes, and the more optimal one is perhaps preferred.
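As a concrete illustration of that optimality preference, here is a small sketch contrasting a slow and a fast way to approximate pi: the Leibniz series needs huge numbers of terms for a handful of digits, while Machin's formula reaches dozens of digits in a few dozen terms. The digit counts in the comments are approximate.

```python
from decimal import Decimal, getcontext

def pi_leibniz(terms):
    # pi/4 = 1 - 1/3 + 1/5 - ... : converges painfully slowly
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def pi_machin(digits):
    # pi = 16*arctan(1/5) - 4*arctan(1/239), via Decimal arithmetic
    getcontext().prec = digits + 10
    eps = Decimal(10) ** -(digits + 10)

    def arctan_inv(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        total, term, n = Decimal(0), Decimal(1) / x, 1
        while term > eps:
            total += term / n if n % 4 == 1 else -term / n
            term /= x * x
            n += 2
        return total

    return +(4 * (4 * arctan_inv(5) - arctan_inv(239)))  # unary + rounds to context

print(pi_leibniz(100_000))  # ~3.1415..., only about 5 digits correct
print(pi_machin(50))        # ~50 correct digits in a fraction of the work
```

The second program does vastly less work per correct digit, which is exactly the kind of difference the optimality constraint rewards.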

When my children were young, I'd look in their crayon box and pull out four of the crayons, say yellow, red, blue, and green, thus choosing four different colors. I'd then give them a printed map of the world and ask them to use the four colors to colorize the countries and their states or sub-entities as shown on the map. They could use whichever of the four colors they liked, in whatever manner they desired.

They might opt to color all of the North American countries and their sub-entities in green, and perhaps all of Europe's in blue. This would be an easy way to colorize the map, and it wouldn't take them very long. They might or might not choose to use all four colors; for example, they could scrawl in the entire map and all of its countries and sub-entities with just the red crayon. The only constraint was that they had to use one or more of the four colors I had selected.

Hard Versus Soft Constraints

Let’s recast the map coloring problem.

I would add an additional constraint to my children’s effort to color the printed map.

I would tell them that they were to use the four selected crayons and could not color two entities that share a border with the same color. For those of you versed in computer science or mathematics, you might recognize this as the infamous four-color conjecture problem first promulgated by Francis Guthrie (he mentioned it to his brother, his brother mentioned it to a college mathematics professor, and eventually it caught the attention of the London Mathematical Society and became a grand problem to be solved).

Coloring maps is interesting, but even more so is the realization that the four-color problem can be recast as a problem about graphs.

You might say that map coloring helped spur attention to algorithms that can do nifty things with graphs. With the development of chromatic polynomials, you can count how many ways a graph can be colored, using as a parameter the number of distinct colors you have in hand.
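To make the constraint concrete, here is a small backtracking sketch of map coloring as graph coloring. The regions and their borders below are made up for illustration, not taken from any real map: each region is a node, and an edge means two regions share a border and must differ in color.

```python
# Hypothetical region graph: each key borders the regions in its list.
REGIONS = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}
COLORS = ["yellow", "red", "blue", "green"]

def color_map(regions, colors):
    """Assign a color to every region so no two neighbors match."""
    assignment = {}
    order = list(regions)

    def backtrack(i):
        if i == len(order):
            return True          # every region colored
        region = order[i]
        for c in colors:
            # hard constraint: no neighbor may share this color
            if all(assignment.get(n) != c for n in regions[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]   # undo and try the next color
        return False                     # dead end, backtrack

    return assignment if backtrack(0) else None

print(color_map(REGIONS, COLORS))
```

Shrinking `COLORS` to three, or growing it to five, shows directly how the number of allowed colors tightens or loosens the search, which is the lever my children tried to pull.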

Anyway, my children delighted in my adding the four-color constraint, in the sense that it made the map coloring problem more challenging.

I suppose when I say they were delighted, I should add that they expressed frustration too, since the problem went from very easy to quite hard. Furthermore, they at first assumed the problem would be easy: it had been easy to use the colors in whatever fashion they desired, and they figured that with four crayons the four-color constraint would likewise be simple. They discovered otherwise as they used up many copies of the printed map, trying to arrive at a solution that met the constraint.

There are so-called "hard" constraints and "soft" constraints. Some people assume that if a constraint makes the problem itself hard, then that constraint is a "hard" constraint. That's not what is meant by the proper definition of "hard" and "soft" constraints.

A "hard" constraint is inflexible. It is imperative: you cannot shake it off or bend it to become softer. A "soft" constraint is flexible; you can bend it, and it is not considered mandatory.

For my children and their coloring of the map, when I added the four-color constraint, I tried to make it seem like a fun game and wanted to see how they might do. After some trial and error using just four colors, and getting stuck trying to color the map under the constraint that no two bordering entities could share a color, one of them opted to reach into the crayon bin and pull out another crayon. When I asked what was up with this, I was told that the problem would be easier to solve if it allowed five colors instead of four.

This was interesting: they accepted the constraint that no two bordering entities could share a color, but had opted to see if they could loosen the constraint on how many crayon colors could be used. I appreciated the outside-the-box thinking, but said that four colors was the only option and that using five was not allowed in this case. It was a "hard" constraint in that it wasn't flexible and could not be altered. Though, I did urge that they might try using five colors as an initial exploration, seeking to figure out how to ultimately reduce things down to just four.

From a cognition viewpoint, notice that they accepted one of the "hard" constraints, namely the rule about bordering entities, but tried to stretch one of the other constraints, the number of colors allowed. Since I had not emphasized that the map must be colored with only four colors, it was handy that they tested the waters to make sure that the number of colors allowed was indeed a firm or hard constraint. In other words, I had handed them only four colors, and one might assume they could therefore only use four colors, but it was certainly worthwhile asking about it, since solving the map problem with just four colors is a lot harder than with five.

This brings us to the topic of self-imposed constraints, and particularly ones that might be undue.

Self-Imposed And Undue Constraints

When I was a professor teaching AI and computer science classes, I used to have my students try to solve the classic problem of getting items across a river. You've probably heard or seen the problem in various variants. It goes something like this: you are on one side of the river with a fox, a chicken, and some corn. They are currently supervised by you and remain separated from each other. The fox would like to eat the chicken, and the chicken would like to eat the corn.

You have a boat that you can use to get to the other side of the river, but it can carry only one item per trip. When you reach the other side, you can leave the item there, and any item on that side can also be taken back with you. You want to end up with all three items intact on the other side.

Here's the dilemma. If you take the fox across first, the chicken will gladly eat the corn the moment you leave. Fail! If you take the corn first, the fox will gleefully eat the chicken. Fail! If you take the chicken first, you are safe for the moment, but whichever item you ferry across next will be left alone with the chicken while you go back for the last one. And so on.

How do you solve this problem?

I won’t be a spoiler and tell you how it is solved, and only offer the hint that it involves multiple trips. The reason I bring up the problem is that nearly every time I presented this problem to the students, they had great difficulty solving it because they made an assumption that the least number of trips was a requirement or constraint.

I never said that the number of trips was a constraint. I never said that the boat couldn’t go back-and-forth as many times as you desired. This was not a constraint that I had placed on the solution to the problem. I tell you this because if you try to simultaneously solve the problem and also add a self-imposed constraint that the number of trips must be a minimal number, you get yourself into quite a bind trying to solve the problem.
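For readers who want to check the hint without spoiling the moves, the state space of the classic one-item-per-trip puzzle is tiny, and a breadth-first search can report the minimal number of crossings directly. This is a sketch, with names chosen purely for illustration; it prints only the trip count, not the solution itself.

```python
from collections import deque

ITEMS = {"fox", "chicken", "corn"}
# Pairs that must never be left together without you supervising them.
UNSAFE = [{"fox", "chicken"}, {"chicken", "corn"}]

def safe(bank):
    return not any(pair <= bank for pair in UNSAFE)

def min_crossings():
    # A state is (items on the near bank, which bank you are on).
    start = (frozenset(ITEMS), "near")
    goal = (frozenset(), "far")
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (near, side), trips = queue.popleft()
        if (near, side) == goal:
            return trips
        here = near if side == "near" else ITEMS - near
        # You may cross alone (None) or with one item from your bank.
        for cargo in [None] + sorted(here):
            new_near = set(near)
            if cargo:
                (new_near.discard if side == "near" else new_near.add)(cargo)
            # The bank you leave behind must be safe without you.
            left_behind = new_near if side == "near" else ITEMS - new_near
            if not safe(left_behind):
                continue
            nxt = (frozenset(new_near), "far" if side == "near" else "near")
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trips + 1))

print(min_crossings())
```

Notice that nothing in the code constrains the number of trips; the breadth-first search simply discovers the minimum, which is the healthy order of operations: solve first, then observe the cost.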

It is not surprising that computer science students would make such an assumption, since they are continually confronted with having to find the most optimal way to solve things. In their beginning algorithm theory and programming classes, they are usually asked to write a sorting program, which is supposed to sort some set of data elements, perhaps a collection of words to be put into alphabetical order. They are likely graded on how efficient their sorting program is: the fastest version, taking the fewest sorting steps, is often given the higher grade. This gets them into the mindset that optimality is desired.

Don't get me wrong, I am not eschewing optimality. Love it. I'm just saying that it can lead to a kind of cognitive blindness in solving problems. If you approach each new problem with the mindset that you must always arrive at optimality on the first shot, you are going to have a tough road in life, I'd wager. There are times when you should try to solve a problem in whatever way possible, and afterwards try to pare it down to make it optimal. Trying to do two things at once, solving a problem and doing so optimally, can be too big a chunk of food to swallow in one bite.

Problems of interest to computer scientists and AI specialists are often labeled Constraint Satisfaction Problems (CSPs).

These are problems for which there are some number of constraints that need to be abided by, or satisfied, as part of the solution you are seeking.

For my children, the constraints were that they had to use the map I provided, they could not give the same color to two entities sharing a border, and they had to use only the four colors. Notice there were multiple constraints, all of them "hard" in that I wouldn't let them flex any of them.

This is classic CSP.

I did somewhat flex the number of colors, but only in the sense that I urged them to try with five colors to get used to the problem (after they had broached the subject). This is in keeping with my point above that it is sometimes good to solve a problem by loosening a constraint. You can then tighten the constraint after you've come up with some strategy or tactic discovered while the constraint was flexed.

Some refer to a CSP that contains "soft" constraints as a Flexible CSP. The classic version of a CSP usually states that all of the given constraints are hard, or inflexible. If you are faced with a problem that does allow some of the constraints to be flexible, it is referred to as an FCSP (Flexible CSP), meaning there is flexibility allowed in one or more of the constraints. It does not necessarily mean that all of the constraints are flexible or soft, just that some of them are.
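The hard-versus-soft distinction can be sketched in a few lines: hard constraints filter candidate assignments outright, while soft constraints merely add a weighted penalty, and we keep the lowest-penalty assignment. The variables, domains, and weights below are hypothetical, chosen only to illustrate the mechanics.

```python
import itertools

# Two toy variables with small domains.
variables = {"x": [1, 2, 3], "y": [1, 2, 3]}

# Hard constraints must hold; an assignment violating one is discarded.
hard = [lambda a: a["x"] != a["y"]]

# Soft constraints are (weight, predicate): violating one just costs points.
soft = [(2, lambda a: a["x"] < a["y"]),
        (1, lambda a: a["y"] == 3)]

best, best_cost = None, float("inf")
for values in itertools.product(*variables.values()):
    a = dict(zip(variables, values))
    if not all(c(a) for c in hard):
        continue                                  # hard constraint violated
    cost = sum(w for w, c in soft if not c(a))    # penalty for broken softs
    if cost < best_cost:
        best, best_cost = a, cost

print(best, best_cost)
```

Turning a soft constraint into a hard one is just a matter of moving it from the penalty list into the filter, which is the formal version of my refusing to let the fifth crayon into the game.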

Autonomous Cars And Self-Imposed Undue Constraints

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that deserves apt attention is the self-imposed undue constraints that some AI developers are putting into their AI systems for self-driving cars.

Allow me to elaborate.

I'd like first to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
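As a toy illustration only, the five steps above might be wired together like this; every class, method name, and threshold below is hypothetical, invented for the sketch, and not any actual self-driving stack's API.

```python
class WorldModel:
    def __init__(self):
        self.obstacles = []          # fused obstacle distances, in metres

    def update(self, fused):
        self.obstacles = fused       # step 3: virtual world model updating


class DrivingPipeline:
    def __init__(self):
        self.world_model = WorldModel()

    def interpret(self, frames):
        # step 1: each raw sensor frame reduces to its nearest obstacle distance
        return [min(frame) for frame in frames]

    def fuse(self, detections):
        # step 2: sensor fusion; here, trust the most pessimistic reading
        return [min(detections)]

    def plan(self, model):
        # step 4: AI action planning; brake if anything is within 10 m
        return "brake" if any(d < 10 for d in model.obstacles) else "cruise"

    def issue_controls(self, action):
        # step 5: car controls command issuance
        if action == "brake":
            return {"throttle": 0.0, "brake": 1.0}
        return {"throttle": 0.3, "brake": 0.0}

    def step(self, raw_sensor_frames):
        detections = self.interpret(raw_sensor_frames)
        self.world_model.update(self.fuse(detections))
        return self.issue_controls(self.plan(self.world_model))


pipeline = DrivingPipeline()
print(pipeline.step([[42.0, 7.5], [40.0, 9.1]]))  # nearest reading is 7.5 m
```

The point of the sketch is the shape of the loop, not the toy logic: each cycle flows from raw sensor data through fusion and the world model to a plan and, finally, actuator commands.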

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. Some pundits of AI self-driving cars continually refer to a utopian world in which only AI self-driving cars are on public roads. Currently there are about 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of self-imposed undue constraints, let’s consider how this applies to AI self-driving cars.

I’ll provide some examples of driving behavior that exhibit the self-imposed undue constraints phenomena.

The Tale Of The Flooded Street

Keep in mind my earlier story about the computer science students that attempted to solve the river crossing problem and did so with the notion of optimality permeating their minds, which made it much harder to solve the problem.

It was a rainy day and I was trying to get home before the rain completely flooded the streets around my domicile.

Though in Southern California we don't get much rain, maybe a dozen inches a year, whenever we do it seems our gutters and flood-control systems are not built to handle it. Plus, the drivers here go nuts when there is rain. In most other rain-familiar cities, drivers take rain in stride. Here, drivers get freaked out. You would think they would drive more slowly and carefully; it seems to be the opposite, namely that in rain they drive more recklessly and with abandon.

I was driving down a street that definitely was somewhat flooded.

The water was gushing around the sides of my car as I proceeded forward. I slowed down quite a bit. Unfortunately, I found myself almost driving into a kind of watery quicksand. As I proceeded forward, the water got deeper and deeper. I realized too late that the water was now nearly up to the doors of my car. I wondered what would happen once the water was up to the engine and whether it might conk out the engine. I also was worried that the water would seep into the car and I’d have a nightmare of a flooded interior to deal with.

I looked in my rear-view mirror and considered trying to back out of the situation by going in reverse. Unfortunately, other cars had followed me and were blocking me from behind. As I rounded a bend, I could see that several cars had gotten completely stranded in the water up ahead. This was a sure sign that I was heading into deeper water and would likely also get stuck.

Meanwhile, one of those pick-up trucks with high clearance went past me fast, splashing a torrent of water onto my car. He was gunning it to make it through the deep water. He was probably the type that had gotten ribbed about buying such a monster for the suburbs, and here was his one moment to relish the purchase.

Yippee, I think he was exclaiming.

I then saw one car ahead of me do something I would likely never have considered. The driver went up onto the median of the road. There was a raised, grassy median dividing the northbound and southbound lanes, elevated to about the height of the sidewalk, maybe an inch or two higher. By driving up onto the median, the driver ahead of me had gotten almost entirely out of the water, though some parts of the median were flooded and underwater. In any case, it was a viable means of escape.

I had just enough traction left on the road surface to urge my car up onto the median. I then drove on the median until I reached a point that would allow me to come off it and head down a cross-street that was not so flooded. As I did so, I looked back at the other cars that were mired in the flooded street that I had just left. They were getting out of their cars and I could see water pouring from the interiors. What a mess!

Why do I tell this tale of woe and survival? Well, okay, not real survival in that I wasn’t facing the grim reaper, just a potentially stranded car that would be flooded and require lots of effort to get out of the water, plus a flooded interior to deal with.

As a law-abiding driver, I would never have considered driving up on the median of a road.

It just wouldn’t occur to me. In my mind, it was verboten.

The median is off-limits.

You could get a ticket for driving on the median.

It was something only scofflaws would do.

It was a constraint that was part of my driving mindset.

Never drive on a median.

In that sense, it was a “hard” constraint. If you had asked me before the flooding situation whether I would ever drive on a median, I am pretty sure I would have said no. I considered it inviolate. It was so ingrained in my mind that even when I saw another driver ahead of me do it, for a split second I rejected the approach, merely due to my conditioning that driving on the median was wrong and was never to be undertaken.

I look back at it now and realize that I should have classified the constraint as a “soft” constraint.

Most of the time, you probably should not be driving on the median. That seems a relatively fair notion. There might, though, be conditions under which you can flex the constraint and drive on the median. My flooding situation seemed to be such a moment.
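To make the hard-versus-soft distinction concrete, here is a minimal Python sketch of how such a constraint might be represented. The `DrivingConstraint` class, its fields, and the situation flags (`road_flooded`, `escape_route_available`) are all my own illustrative assumptions, not any production system’s API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DrivingConstraint:
    """A driving rule that is either hard (inviolable) or soft (context-dependent)."""
    name: str
    hard: bool
    # For soft constraints, a predicate deciding whether the current
    # situation justifies flexing the rule (hypothetical signature).
    may_flex: Callable[[Dict], bool] = lambda situation: False

    def permits(self, situation: Dict) -> bool:
        """A hard constraint never permits the forbidden act; a soft one may."""
        if self.hard:
            return False
        return bool(self.may_flex(situation))

# "Never drive on the median" modeled as a soft constraint that flexes
# only when the road is flooded and no lawful escape route exists.
median_rule = DrivingConstraint(
    name="no_median_driving",
    hard=False,
    may_flex=lambda s: s.get("road_flooded", False)
                       and not s.get("escape_route_available", False),
)

print(median_rule.permits({"road_flooded": True, "escape_route_available": False}))  # True
print(median_rule.permits({"road_flooded": False, "escape_route_available": True}))  # False
```

The design point is that the judgment lives in the predicate, not in the rule itself: a hard constraint has no predicate worth consulting, while a soft one delegates to the situation at hand.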

AI Dealing With Constraints

Let’s now recast this constraint in light of AI self-driving cars.

Should an AI self-driving car ever be allowed to drive up onto the median and drive on the median?

I’ve inspected and reviewed some of the AI software being used in open source for self-driving cars and it contains constraints that prohibit such a driving act from ever occurring. It is verboten by the software.

I would say it is a self-imposed undue constraint.

Sure, we don’t want AI self-driving cars willy-nilly driving on medians.

That would be dangerous and potentially horrific.

Does this mean, though, that the constraint must be “hard” and inflexible?

Does it mean that there might never be a circumstance in which an AI system would “rightfully” opt to drive on the median?

I’m sure that in addition to my escape of flooding, we could come up with other bona fide reasons that a car might want or need to drive on a median.

I realize that you might be concerned that driving on the median should be a human judgment call and not be made by some kind of automation such as the AI system that’s driving an AI self-driving car. This raises other thorny elements. If a human passenger commands the AI self-driving car to drive on a median, does that mean the AI should abide by such a command? I doubt we want that to occur, since a wacko passenger could command their AI self-driving car to drive onto a median for no reason at all or for a nefarious one.

For my article about open source and AI self-driving cars, see:

For my article about pranking of AI self-driving cars, see:

For the Natural Language Processing (NLP) interaction of AI and humans, see my article:

For safety aspects of AI self-driving cars, see my article:

I assert that there are lots of these kinds of currently hidden constraints in many of the AI self-driving cars that are being experimented with in trials today on our public roadways.

The question will be whether ultimately these self-imposed undue or “hard” constraints will limit the advent of true AI self-driving cars.

To me, an AI self-driving car that cannot figure out how to get out of a flooded street by driving up onto the median is not a true AI self-driving car.

I realize this sets a pretty high bar.

I mention this too because there were many other human drivers on that street that either did not think of the possibility or thought of the possibility after it was too late to try to maneuver onto the median. If some humans cannot come up with a solution, are we asking too much for the AI to come up with a solution?

In my case, I freely admit that it was not my own idea to drive up on the median. I saw someone else do it and then weighed whether I should do the same. In that manner, you could suggest that I had in that moment learned something new about driving. After all these many years of driving, when perhaps I thought I had learned it all, in that flooded street I was suddenly shocked awake into the realization that I could drive on the median. Of course, I had always known it was possible; the thing stopping me was the mindset that it was out-of-bounds and never to be considered a viable place to drive my car.

Machine Learning And Deep Learning Aspects

For AI self-driving cars, it is anticipated that via Machine Learning (ML) and Deep Learning (DL) they will gradually develop their driving skills more and more over time.

You might say that I learned that driving on the median was a possibility and viable in an emergency situation such as a flooded street.

Would the AI of an AI self-driving car be able to learn the same kind of aspect?

The “hard” constraints inside much of the AI systems for self-driving cars are embodied in a manner that typically does not allow them to be revised.

The ML and DL take place for other aspects of the self-driving car, such as “learning” about new roads or new paths to take when driving. Doing ML or DL on the AI action-planning portions is still relatively untouched territory. It would pretty much require a human AI developer to go into the AI system and soften the constraint about driving on a median, rather than the AI itself doing some kind of introspective analysis and changing itself accordingly.
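A tiny sketch may make the difference plainer. Assume, purely for illustration, that a hard-coded prohibition lives in the planner’s control flow while a parameterized one lives in data; the function names and the penalty-weight scheme below are my own hypothetical constructs, not drawn from any actual codebase.

```python
# Hard-coded: the prohibition lives in the control flow itself.
# Only a human developer editing the source can ever change it.
def plan_hardcoded(maneuver):
    if maneuver == "drive_on_median":
        return "rejected"
    return "considered"

# Parameterized: the prohibition lives in data that a learning
# component could, in principle, revise at run-time.
constraint_weights = {"drive_on_median": float("inf")}  # inf = effectively hard

def plan_parameterized(maneuver, weights):
    penalty = weights.get(maneuver, 0.0)
    return "rejected" if penalty == float("inf") else "considered"

# A (hypothetical) learning step softens the constraint in place:
# a large but finite cost, no longer an absolute prohibition.
constraint_weights["drive_on_median"] = 1000.0
print(plan_parameterized("drive_on_median", constraint_weights))  # considered
```

The hard-coded version can only be altered by shipping new code; the parameterized version at least exposes a surface that introspective learning could act upon.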

There’s another aspect of much of today’s state-of-the-art ML and DL that would make it difficult to do what I did in driving up onto the median. For most ML and DL, you need lots and lots of examples available for the ML or DL to pattern-match on. After examining thousands or maybe millions of pictures of road signs, the ML or DL can somewhat differentiate stop signs versus, say, yield signs.

When I was on the flooded street, it took only one instance for me to learn to overcome my prior constraint about not driving on the median. I saw one car do it. I then generalized that if one car could do so, perhaps other cars could. I then figured out that my car could do the same. I then enacted this.

All of that took place based on just one example.

And in a split second of time.

And within the confines of my car.

It happened based on one example and occurred within my car, which is significant to highlight. For the Machine Learning of AI self-driving cars, most of the automakers and tech firms are currently restricting any ML to occur in the cloud. Via OTA (Over-The-Air) electronic communications, an AI self-driving car sends data it has collected on the streets up to the cloud. The automaker or tech firm does some amount of ML or DL on the cloud-based data, and then creates updates or patches that are pushed down into the AI self-driving car via the OTA.
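The cycle just described could be caricatured as follows. The `Vehicle` and `Cloud` classes and their methods are purely hypothetical stand-ins for whatever proprietary pipeline an automaker actually runs; the point is only to show where the learning happens (in the cloud) and where it does not (on-board).

```python
class Vehicle:
    """Minimal stand-in for a self-driving car's OTA client (illustrative)."""
    def __init__(self):
        self.model_version = 1
        self.logged_data = ["flooded_street_scene", "median_escape_observed"]

    def collect_sensor_data(self):
        return list(self.logged_data)

    def apply_update(self, version):
        self.model_version = version

class Cloud:
    """Minimal stand-in for the automaker's cloud back end (illustrative)."""
    def __init__(self):
        self.fleet_data = []
        self.model_version = 1

    def ingest(self, batch):
        self.fleet_data.extend(batch)

    def retrain(self):
        # Pretend each retraining pass over fleet data yields a new model.
        self.model_version += 1
        return self.model_version

def ota_learning_cycle(vehicle, cloud):
    cloud.ingest(vehicle.collect_sensor_data())   # 1. upload road data
    new_version = cloud.retrain()                 # 2. cloud-side ML/DL
    vehicle.apply_update(new_version)             # 3. push patch via OTA

car, backend = Vehicle(), Cloud()
ota_learning_cycle(car, backend)
print(car.model_version)  # 2
```

Notice that every step of the loop round-trips through the cloud, which is exactly why the flooded-street scenario defeats it: the car needed a revised behavior in seconds, on-board, without waiting for a fleet-wide retraining pass.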

In the case of my being on the flooded street, suppose that I was in an AI self-driving car. Suppose that the AI via its sensors could detect that a car up ahead went up onto the median. And assume too that the sensors detected that the street was getting flooded. Would the on-board AI have been able to make the same kind of mental leap, learning from the one instance, and adjust itself, all on-board the AI of the self-driving car? Today, likely no.

I’m sure some AI developers are saying that if the self-driving car had OTA it could have pushed the data up to the cloud and then a patch or update might have been pumped back into the self-driving car, allowing the AI to then go up onto the median. Really?

Consider that I would have to be in a place that allowed the OTA to function (since it is electronic communication, it won’t always have a clear signal). Consider that the cloud system would have to be dealing with this data along with tons of other data coming from many other self-driving cars. Consider that the pushing down of the patch would have to be done and put into use immediately, since time was a crucial element. And so on. Not likely.

For more about OTA, see my article:

For aspects of Machine Learning see my article:

For the importance of plasticity in Deep Learning, see my article:

For AI developers and an egocentric mindset, see my article:

At this juncture, you might be tempted to say that I’ve only given one example of a “hard” constraint in the driving task and that it is maybe rather obscure.

So what if an AI self-driving car could not discern the value of driving onto the median?

This might happen once in a blue moon, and you might say that it would be safer to have the “hard” constraint than to not have it in place (I’m not saying that such a constraint should not be in place, and instead arguing that it needs to be a “soft” constraint that can be flexed in the right way at the right time for the right reasons).

More Telling Examples

Here’s another driving story for you that might help.

I was driving on a highway that was in an area prone to wildfires. Here in Southern California (SoCal) we occasionally have wildfires, especially during the summer months when the brush is dry and there is a lot of tinder ready to go up in flames. The mass media news often makes it seem as though all of SoCal gets caught up in such wildfires. The reality is that it tends to be localized. That being said, the air can get pretty brutal once the wildfires get going and large plumes of smoke can be seen for miles.

Driving along on this highway in an area known for wildfires, I could see up ahead that smoke was filling the air. I hoped the highway would skirt around the wildfires and I could just keep driving until I got past them. I neared a tunnel and some smoke was drifting into it. There weren’t any nearby exits to get off the highway. The tunnel still had enough visibility that I thought I could zip through and pop out the other side safely. I’d driven through this tunnel on many other trips and knew that at my speed of 65 miles per hour it would not take long to traverse it.

Upon entering the tunnel, I realized it was a mistake to do so.

Besides the smoke, as I neared the other end of the tunnel, there were flames reaching across the highway and essentially blocking it up ahead. This tunnel was one-way, so presumably I could not go back the way I had come. If I tried to exit the tunnel, it looked like my car might get caught in the fire.

Fortunately, all of the cars that had entered the tunnel had come to a near halt, or at least a very low speed. The drivers all realized the danger of trying to dash out of the tunnel. I saw one or two cars make the attempt; I later found out they got some scorching from the fire. Other cars ended up stopped on the highway, and their drivers abandoned them due to the flames. Several of those cars got burned to a crisp.

In any case, we all made U-turns there in the tunnel and drove against its normal direction so that we could get back out to the safer side.

Would an AI self-driving car be able and willing to drive the wrong way on a highway?

Again, most of the AI self-driving cars today would not allow it.

They are coded to prevent such a thing from happening.

We can all agree that having an AI system drive a self-driving car the wrong way on a road is generally undesirable.

Should it though be a “hard” constraint that is never allowed to soften? I think not.

As another story, and I’ll make this quick, I was driving on the freeway when a dog happened to scamper onto the road.

The odds were high that a car was going to ram into the dog.

The dog was frightened out of its wits and was running back and forth wildly. Some drivers didn’t seem to care and just wanted to drive past the dog, perhaps rushing on their way to work or to get their morning Starbucks coffee.

I then saw something that was heartwarming.

Maybe a bit dangerous, but nonetheless heartwarming.

Several cars appeared to coordinate with each other to slow the traffic (they slowed down and got the cars behind them to do the same) and brought it to a halt. They then maneuvered their cars to form a kind of fence or kennel surrounding the dog, preventing it from readily running away. Some of the drivers then got out of their cars; one had a leash (presumably a dog owner), leashed the dog, got the dog into their car, and drove away, with the rest of the traffic then resuming.

Would an AI self-driving car have been able to do this same kind of act, specifically coming to a halt on the freeway and turning the car kitty-corner to help form the virtual fence?

This would likely violate other self-imposed constraints that the AI has embodied into it.

Doubtful that today’s AI could have aided in this rescue effort.

In case you still think these are all oddball edge cases, let’s consider other kinds of potential AI constraints that likely exist for self-driving cars, as put in place by the AI developers involved. What about going faster than the speed limit? I’ve had some AI developers say that they’ve set up the system so that the self-driving car will never go faster than the posted speed limit. I’d say we can come up with lots of reasons why at some point a self-driving car might want or need to exceed the posted speed limit.
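As a sketch of what softening that particular constraint might look like, consider the following. The `emergency_evasion` flag and the 20 percent bounded overage are arbitrary assumptions chosen for illustration, not a recommendation of any actual threshold.

```python
def max_allowed_speed(posted_limit_mph, situation):
    """Soft speed-limit constraint (illustrative sketch).
    Normally the posted limit holds; in a labeled emergency, a bounded
    overage is permitted rather than an unlimited one."""
    if situation.get("emergency_evasion", False):
        # e.g., briefly accelerating clear of an out-of-control vehicle
        return posted_limit_mph * 1.2
    return posted_limit_mph

print(max_allowed_speed(65, {}))                           # 65
print(max_allowed_speed(50, {"emergency_evasion": True}))  # 60.0
```

Even this softened rule keeps a ceiling on the flexing, which matches the point made throughout: the constraint bends for the right reasons, at the right time, by a bounded amount, rather than vanishing altogether.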

Indeed, I’ve said and written many times that the notion that an AI self-driving car is never going to do any kind of “illegal” driving is nonsense.

It is a simplistic viewpoint that defies what actually driving consists of.

For my article about the illegal driving needs of AI self-driving cars, see:

For my article about the importance of edge cases, see:

For the aspects of AI boundaries, see my article:

For the Turing test as applied to AI self-driving cars, see my article:


The nature of constraints is that we could not live without them, nor at times can we live with them, or at least that’s what many profess. For AI systems, it is important to be aware of the kinds of constraints that are hidden or hard-coded into them, along with understanding which of the constraints are hard and inflexible, and which are soft and flexible.

It is a dicey proposition to have soft constraints. I say this because for each of my earlier examples in which a constraint was flexed, the flexing was considered appropriate. Suppose, though, that the AI is poorly able to discern when to flex a soft constraint and when not to do so? Today’s AI is so brittle and incapable that we are likely better off having hard constraints and dealing with their consequences, rather than having soft constraints that could be handy in some instances but disastrous in others.

To achieve a true AI self-driving car, I claim that the constraints must nearly all be “soft” and that the AI needs to discern when to appropriately bend them. This does not mean that the AI can do so arbitrarily. This also takes us into the realm of the ethics of AI self-driving cars. Who is to decide when the AI can and cannot flex those soft constraints?

For my article on ethics boards and AI self-driving cars, see:

For my article about reframing the levels of AI self-driving cars, see:

For the crossing of the Rubicon and AI self-driving cars, see my article:

For my article about starting over with AI, see:

For common sense reasoning advances, see my article:

My children have long moved on from the four-color crayon mapping problem and they are faced nowadays with the daily reality of constraints all around them as adults.

The AI of today that is driving self-driving cars has at best the capability of a young child (though not in any true “thinking” manner), which is well below where we need to be in terms of having AI systems that are responsible for multi-ton cars that can wreak havoc and cause damage and injury.

Let’s at least make sure that we are aware of the internal self-imposed constraints embedded in AI systems and whether the AI might be blind to taking appropriate action while driving on our roads.

That’s the kind of undue constraint that we need to undo before it is too late.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]

