

Federal Government Inching Toward Enterprise Cloud Foundation




The federal government’s efforts to put an enterprise cloud platform in place to serve the Pentagon and other agencies have been slowed but are lurching forward. (GETTY IMAGES)

By AI Trends Staff

The federal government continues its halting effort to field an enterprise cloud strategy, with Lt. Gen. Jack Shanahan, who leads the Defense Department’s Joint AI Center (JAIC), commenting recently that not having an enterprise cloud platform has made the government’s efforts to pursue AI more challenging.

“The lack of an enterprise solution has slowed us down,” stated Shanahan during an AFCEA DC virtual event held on May 21, according to an account in FCW. However, “the gears are in motion,” with the JAIC using an “alternate platform,” for example, to host a new anti-COVID effort.

Lt. Gen. Jack Shanahan, who leads the Defense Department’s Joint AI Center

This platform is called Project Salus, a data-aggregation effort that employs predictive modeling to help supply equipment needed by front-line workers. The same platform was used for the ill-fated Project Maven, a DOD effort that was to employ AI image recognition to improve drone strike accuracy. Several thousand Google employees signed a petition to protest the company’s pursuit of the contract, and Google subsequently dropped out.

Shanahan recommends the enterprise cloud project follow the guidance of the Joint Common Foundation, an enterprise-wide, multi-cloud environment set up as a transition to the Joint Enterprise Defense Infrastructure (JEDI) program. The $10 billion, DOD-wide JEDI cloud acquisition was won by Microsoft in October, was challenged by Amazon, and has been stuck in legal battles since.

“It’s set us back, there’s no question about it, but we now have a good plan to account for the fact that it will be delayed potentially many more months,” Shanahan stated.

That plan involves a hybrid approach of using more than one cloud platform. At Hanscom Air Force Base in Bedford, Mass., for instance, the Air Force’s Cloud One environment is using both Microsoft Azure and Amazon Web Services.

“I will never get into a company discussion, I’m agnostic. I just need an enterprise cloud solution,” Shanahan stated. “If we want to make worldwide updates to all these algorithms in the space of minutes not in the space of months running around gold discs, we’ve got to have an enterprise cloud solution.”

Joint Common Foundation Aims to Set Up Migration to JEDI

The Joint Common Foundation, announced in March, is an enterprise cloud-based foundation intended to provide the development, test and runtime environment—and the collaboration, tools, reusable assets and data—that the military needs to build, refine, test and field AI applications, according to a JAIC AI Blog post.

“The Infrastructure and Platform division is building an enterprise cloud-enabled platform across multiple govCloud environments in preparation for the JEDI migration,” stated Denise Hodge, Information Systems Security Manager, who is leading the effort to develop the Joint Common Foundation.

Denise Hodge, Information Systems Security Manager, who is leading the effort to develop the Joint Common Foundation

The JCF has the following design goals:

  • Reduce technical barriers to DoD-wide AI adoption.
  • Accelerate security assessments of AI products to support rapid authorization decisions and AI capability deployment.
  • Create standardized development, security, testing tools, and practices to support secure, scalable AI development.
  • Facilitate the concept of secure re-use of AI resources, software, tools, data, and lessons learned that capitalize on the progress made by each JCF AI project.
  • Encourage efficiencies by finding patterns in JCF customer needs and creating repeatable solutions to build core products that advance AI development.
  • Mitigate risk by providing a common, standardized, and cyber-hardened infrastructure and platform for AI development, assessments, and rapid deployment promotion.

Hodge has spent much of her career supporting Chief Information Officers and Authorizing Officials in various IT ecosystems in the Department of Defense, concentrating especially on cybersecurity. “Cybersecurity is the thread that binds the enterprise cloud together,” she stated.

She described four pillars of security to promote cyber engagement and governance: infrastructure security, secure ingest, ongoing authorization, and continuous monitoring.

“This initiative is to provide a common, standardized, and hardened development platform that promotes a secure AI development ecosystem,” Hodge stated.

JEDI Project Tied Up in Court

In court documents released in March, Amazon argued that the Pentagon’s proposed corrective action over the disputed $10 billion cloud contract is not a fair re-evaluation, according to an account from CNBC.

Amazon was seen as the favorite to win the JEDI contract, until President Donald Trump got involved. Amazon alleges that the President launched “behind the scenes attacks” against Amazon. Some of them were detailed in “Holding the Line,” a memoir by Guy Snodgrass, a former speechwriter for James Mattis, the retired Marine Corps general who served as US Secretary of Defense from January 2017 through January 2019. According to the book, President Trump told Mattis to “screw Amazon” out of the contract.

Amazon is seeking to depose a number of people involved in the JEDI recommendation. The dispute is ongoing.

Read the source articles at FCW, the JAIC AI Blog and CNBC.




Automakers Making Deals to Speed Incorporation of AI




Tech companies are helping auto manufacturers to accelerate the incorporation of AI into their software systems supporting self-driving vehicles. (GETTY IMAGES)

By AI Trends Staff

Automakers are making deals with technology companies to produce the next generation of cars that incorporate AI technology in new ways.

Nvidia last week reached an agreement with Mercedes-Benz to design a software-defined computing network for the car manufacturer’s entire fleet, with over-the-air updates and recurring revenue for applications, according to an account in Barron’s.

“This is the iPhone moment of the car industry,” stated Nvidia CEO Jensen Huang, who founded the company in 1993 to make a new chip to power three-dimensional video games. Gaming now represents $6.1 billion in revenue for Nvidia, which is now positioning for its next phase of growth, which will involve AI to a great extent. “People thought we were a videogame company,” stated Huang. “But we’re an accelerated computing company where videogames were our first killer app.”

The Data Center category, which exploits AI heavily, has been a winner for Nvidia, with revenue expected to more than double to $6.5 billion, making it the company’s biggest market.

Nvidia has established its CUDA parallel computing platform and application programming interface, used to develop applications that run on the company’s chips, as a market leader. Released in 2007, CUDA enables software developers and engineers to use the graphics processing unit for general-purpose processing, an approach known as GPGPU.

From its start producing hardware for videogames, to hardware and software to support AI, now to hardware, software and services for cars, Nvidia sees the opportunity as transformative.  “The first vertical market that we chose is autonomous vehicles because the scale is so great,” Huang stated. “And the life of the car is so long that if you offer new capabilities to each new owner, the economics could be quite wonderful.”

The software-centric computing architecture is based on Nvidia’s Drive AGX Orin system-on-a-chip. The underlying architecture will be standard in Mercedes’ next generation of vehicles, starting sometime toward the end of 2024, stated Ola Källenius, chairman of the board of management of Daimler AG and head of Mercedes-Benz AG, during a live stream of the announcement, according to an account in TechCrunch.

Ola Källenius, chairman of Daimler AG and head of Mercedes-Benz AG

The two companies plan to jointly develop the AI and automated vehicle applications that include Level 2 and Level 3 driver assistance functions, as well as automated parking functions up to Level 4.

“Many people talk about the modern car, the new car as a kind of smartphone on wheels. If you want to take that approach you really have to look at source software architecture from a holistic point of view,” stated Källenius. “One of the most important domains here is the driving assistant domain. That needs to dovetail into what we call software-driven architecture, to be able to (with high computing power) add use cases for the customer, this case the driving assistant autonomous space.”

Waymo and Volvo Get Together on Self-Driving Electric Vehicles

In another automaker-tech partnership announced last week, Waymo and the Volvo Cars Group announced a new global partnership to develop a self-driving electric vehicle designed for ride-hailing use, according to a report in Reuters.

Waymo, a unit of Alphabet which also owns Google, will be the exclusive global partner for Volvo Cars for developing self-driving vehicles capable of operating safely without routine driver intervention. Waymo will focus on artificial intelligence for the software “driver.” Volvo will design and manufacture the vehicles.

Owned by China’s Zhejiang Geely Holding Group Co., Volvo has a separate agreement to deliver vehicles to ride-hailing company Uber Technologies, which Uber will equip to operate as self-driving vehicles. Volvo Cars is continuing to deliver vehicles to Uber. The Uber effort to develop self-driving vehicle technology was disrupted after a self-driving Volvo SUV operated by Uber struck and killed a pedestrian in Arizona in 2018.

Waymo and Volvo did not say when they expect to launch their new ride-hailing vehicle. Waymo said it will continue working with Fiat Chrysler, Jaguar Land Rover, and the Renault Nissan Mitsubishi Alliance.

Startups Assisting Automakers with Self-Driving Car Tech

Meanwhile, a number of startups are assisting automakers with adding AI functions into new models of existing car lines.

AutoX of San Jose, Calif., has focused its self-driving car technology on retail purposes such as delivering groceries, according to a recent account in builtin. Users can select grocery items from an app and have them delivered; users can also browse the vehicle-based mobile store upon delivery. AutoX has launched a pilot program in San Jose, testing the service within a geo-fenced zone.
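Geofenced pilot zones like AutoX’s are typically enforced with a point-in-area test before a ride or delivery is accepted. A minimal sketch in Python of a circular geofence check (the center coordinates and radius here are illustrative, not AutoX’s actual service zone):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_service_zone(lat, lon, center=(37.3382, -121.8863), radius_km=8.0):
    # True if the requested pickup/delivery point falls inside the geofence.
    return haversine_km(lat, lon, *center) <= radius_km

# A request near downtown San Jose is inside the zone; San Francisco is not.
print(in_service_zone(37.33, -121.89))   # near the zone center
print(in_service_zone(37.77, -122.42))   # roughly 70 km away
```

Production systems generally use polygonal zones rather than a circle, but the accept/reject logic is the same.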

AutoX was founded in 2016 by Dr. Jianxiong Xiao (aka Professor X), a self-driving technologist from Princeton University. The company’s team of engineers and scientists has extensive industry experience in autonomous driving hardware and software. AutoX has eight offices and five R&D centers globally. Investors include Shanghai Auto (China’s largest car manufacturer), Dongfeng Motor (China’s second-largest car manufacturer), Alibaba AEF, MediaTek MTK, and financial institutions. The system has been deployed on 15 vehicle platforms, including one from Ford Motor.

Optimus Ride of Boston offers self-driving vehicles that can operate autonomously within geofenced environments, such as airports, academic campuses, residential communities, office/industrial parks and city zones.

In collaboration with Microsoft, Optimus Ride is working on Virtual Ride Assistant (VRA), to provide dynamic interactions between riders, the vehicle and a remote assistance team. The VRA provides audio-visual tools for riders to be informed about the system, to request changes in destination or routing and to contact a remote assistance system.

The company has deployments at the Brooklyn Navy Yard and Paradise Valley Estates in Paradise Valley, Calif., and a strategic development relationship with Brookfield Properties, developers of Halley Rise, a mixed-use district in Reston, Va.

A spinoff of MIT, Optimus Ride received approval from the Massachusetts Department of Transportation in 2017 to test highly automated vehicles on public streets.

The company incorporated Nvidia’s Drive PX 2 computing platform to accelerate its development.

Sertac Karaman, co-founder, president and chief scientist, Optimus Ride

“We believe the computational power needed to make self-driving vehicles a reality is finally coming to market,” stated Sertac Karaman, co-founder, president and chief scientist at Optimus Ride, referring to the Nvidia platform.

Rethink Robotics of Boston and Rheinböllen, Germany, builds smart, collaborative robots to help in industrial automation, and auto manufacturing in particular.

The company was founded in 2008 and acquired in 2018 by the HAHN Group of Germany, which runs a global network of specialized technology companies offering industrial automation and robotic solutions.  A year after the acquisition, HAHN announced a new generation of the Sawyer collaborative robot.

Read the source articles in Barron’s, TechCrunch, Reuters and builtin.




AI Being Applied in Agriculture to Help Grow Food, Support New Methods




AI is being applied to many areas of agriculture, including vertical farming, where crops are grown vertically-stacked in a controlled environment. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

AI continues to have an impact in agriculture, with efforts underway to help grow food, combat disease and pests, employ drones and other robots with computer vision, and use machine learning to monitor soil nutrient levels.

In Leones, Argentina, a drone with a special camera flies low over 150 acres of wheat, checking each stalk one by one, looking for the beginnings of a fungal infection that could threaten this year’s crop.

The flying robot is powered by a computer vision system incorporating AI supplied by Taranis, a company founded in 2015 in Tel Aviv, Israel by a team of agronomists and AI experts. The company is focused on bringing precision and control to the agriculture industry through a system it refers to as an “agriculture intelligence platform.”

The platform relies on sophisticated computer vision, data science and deep learning algorithms to generate insights aimed at preventing crop yield loss from diseases, insects, weeds and nutrient deficiencies. The Taranis system is monitoring millions of farm acres across the US, Argentina, Brazil, Russia, Ukraine and Australia, the company states. The company has raised some $30 million from investors.

“Today, to increase yields in our lots, it’s essential to have a technology that allows us to make decisions immediately,” Ernesto Agüero, the producer on San Francisco Farm in Argentina, stated in an account in Business Insider.

Elsewhere, a fruit-picking robot named Virgo is using computer vision to decide which tomatoes are ripe and how to pick them gently, so that just the ripe tomatoes are harvested and the rest keep growing. Boston-based startup Root AI developed the robot to assist indoor farmers.

“Indoor growing powered by artificial intelligence is the future,” stated Josh Lessing, co-founder and CEO of Root AI. The company is currently installing systems in commercial greenhouses in Canada.
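Root AI has not published how Virgo scores ripeness, but a common baseline in produce-grading vision systems is to classify fruit by color balance within a detected region. A minimal, hypothetical sketch (the threshold and pixel samples are made up for illustration):

```python
def ripeness_score(pixels):
    # pixels: iterable of (r, g, b) tuples sampled from a detected tomato.
    # Score = mean red-to-green ratio; ripe fruit skews strongly red.
    ratios = [r / (g + 1e-6) for r, g, b in pixels]
    return sum(ratios) / len(ratios)

def is_ripe(pixels, threshold=1.5):
    # Harvest only fruit whose color is decisively red; leave the rest growing.
    return ripeness_score(pixels) >= threshold

ripe_sample = [(200, 40, 30), (190, 55, 35), (210, 45, 28)]    # deep red
green_sample = [(80, 140, 60), (90, 150, 70), (75, 130, 55)]   # still green
print(is_ripe(ripe_sample))    # red dominates green
print(is_ripe(green_sample))   # green dominates red
```

Real systems would first localize each fruit with a detection network and correct for lighting, but the final ripe/not-ripe decision often reduces to a learned or hand-tuned threshold like this one.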

More indoor farming is happening, with AI heavily engaged. 80 Acres Farms of Cincinnati opened a fully-automated indoor growing facility last year, and currently has seven sites in the US. AI is used to monitor every step of the growing process.

“We can tell when a leaf is developing and if there are any nutrient deficiencies, necrosis, whatever might be happening to the leaf,” stated Mike Zelkind, CEO of 80 Acres. “We can identify pest issues and a variety of other things with vision systems today.” The crops grow faster indoors and have the potential to be more nutrient-dense, he suggests.

A subset of indoor farming is “vertical farming,” the practice of growing crops in vertically-stacked layers, often incorporating a controlled environment which aims to optimize plant growth. It may also use an approach without soil, such as hydroponics, aquaponics and aeroponics.

Austrian Researchers Studying AI in Vertical Farming

Researchers at the University of Applied Sciences Burgenland in Austria are involved in a research project to leverage AI to help make the vertical farming industry viable, according to an account in Hortidaily.

The team has built a small experimental factory, a 2.5 x 3 x 2.5-meter cube, double-walled with light-proof insulation. No sun is needed inside the cube; light and temperature are controlled. Cultivation is based on aeroponics: roots are suspended in the air and nutrients are delivered via a fine mist, using a fraction of the water required for conventional cultivation and causing the plants to grow faster than they would in soil.

The program, called Agri-Tec 4.0, is run by Markus Tauber, head of the Cloud Computing Engineering program at the university. His team contributes expertise in sensors and sensor networking, and plans to develop algorithms to ensure optimal plant growth.

Markus Tauber, head of the Cloud Computing Engineering program, University of Applied Sciences Burgenland, Austria

The software architecture bases actions on five points: monitoring, analysis, planning, execution and existing knowledge. In addition to coordinating light, temperature, nutrients and irrigation, the wind must also be continuously coordinated, even though the plants grow inside a dark cube.

“In the case of wind control, we monitor the development of the plant using the sensor and our knowledge. We use image data for this. We derive the information from the thickness and inclination of the stem. From a certain thickness and inclination, more wind is needed again,” Tauber stated.
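The five points Tauber lists (monitoring, analysis, planning, execution, existing knowledge) mirror the classic MAPE-K autonomic control loop. A minimal sketch of one pass of such a loop for the wind-control case he describes, with made-up sensor values and thresholds:

```python
# Illustrative knowledge base: thresholds at which a stem needs more wind stress.
KNOWLEDGE = {"stem_thickness_mm": 4.0, "max_inclination_deg": 10.0}

def monitor(sensor):
    # Gather current stem measurements from the (simulated) image sensor.
    return {"thickness": sensor["thickness"], "inclination": sensor["inclination"]}

def analyze(reading, knowledge):
    # A stem past the thickness or inclination threshold signals a need for wind.
    return (reading["thickness"] >= knowledge["stem_thickness_mm"]
            or reading["inclination"] >= knowledge["max_inclination_deg"])

def plan(needs_wind):
    # Decide the actuator setting for the next control interval.
    return {"fan_speed": "high"} if needs_wind else {"fan_speed": "low"}

def execute(action, actuators):
    # Apply the planned setting to the fan actuator.
    actuators["fan"] = action["fan_speed"]
    return actuators

# One pass of the loop with an illustrative reading: the stem has thickened
# past the threshold, so the plan calls for more wind.
actuators = {"fan": "off"}
reading = monitor({"thickness": 4.5, "inclination": 6.0})
actuators = execute(plan(analyze(reading, KNOWLEDGE)), actuators)
print(actuators["fan"])
```

In the Agri-Tec 4.0 setup the reading would come from image data and the knowledge base would be learned over many growth cycles; this sketch only shows the loop’s shape.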

The system uses an irrigation robot supplied by PhytonIQ Technology of Austria. Co-founder Martin Parapatits cited the worldwide trend to combine vertical farming and AI. “Big players are investing but there is no ready-made solution yet,” he stated.

He seconded the importance of wind control. “Under the influence of wind ventilation or different wavelengths of light, plants can be kept small and bushy or grown tall and slender,” Parapatits stated. “At the same time, the air movement dries out the plants’ surroundings. This reduces the risk of mold and encourages the plant to breathe.”

San Francisco Startup Trace Genomics Studies Soil

Soil is still important for startup Trace Genomics of San Francisco, founded in 2015 to provide soil analysis services using machine learning to assess soil strengths and weaknesses. The goal is to prevent defective crops and optimize the potential to produce healthy crops.

Services are provided in packages that include a pathogen screening based on bacteria and fungi, and a comprehensive pathogen evaluation, according to an account in emerj.

Co-founders Diane Wu and Poornima Parameswaran met in a laboratory at Stanford University in 2009, following their passions for pathology and genetics. The company has raised over $35 million in funding so far, according to its website.

Trace Genomics was recently named a World Economic Forum Technology Partner, in recognition of its use of deep science and technology to tackle the challenge of soil degradation.

Poornima Parameswaran, Co-founder and Senior Executive, Trace Genomics

“This planet can easily feed 10 billion people, but we need to collaborate across the food and agriculture system to get there,” stated Parameswaran in a press release. “Every stakeholder in food and agriculture – farmers, input manufacturers, retail enterprises, consumer packaged goods companies – needs science-backed soil intelligence to unlock the full potential of the last biological frontier, our living soil. Together, we can discover and implement new and improved agricultural practices and solutions that serve the dual purpose of feeding the planet while preserving our natural resources and positioning agriculture as a solution for climate change.”

Read the source articles in Business Insider, Hortidaily and emerj.




The Puzzle Of Whether AI Should Have Rights, Including The Case Of Autonomous Cars




If we assign human rights to AI, using the Universal Declaration of Human Rights as a guide, the AI can make some independent judgements. (WIKIPEDIA COMMONS)

By Lance Eliot, the AI Trends Insider

Sometimes a question seems so ridiculous that you feel compelled to reject its premise out-of-hand.

Let’s give this a whirl.

Should AI have human rights?

Most people would likely react that there is no bona fide basis to admit AI into the same rarefied air as human beings and consider it endowed with human rights.

Others, though, counterargue that they see crucial reasons to do so and are adamantly seeking to have AI be assigned human rights in the same manner that the rest of us have human rights.

Of course, you might shrug your shoulders and say that it is of little importance either way and wonder why anyone should be so bothered and ruffled-up about the matter.

It is indeed a seemingly simple question, though the answer has tremendous consequences as will be discussed herein.

One catch is that there is a bit of a trick involved, because the thing or entity or “being” that we are trying to assign human rights to is ambiguous and not even yet in existence.

In other words, what does it mean when we refer to “AI” and how will we know it when we discover or invent it?

At this time, there isn’t any AI system of any kind that could be considered sentient, and indeed by all accounts, we aren’t anywhere close to achieving the so-called singularity (that’s the point at which AI flips over into becoming sentient and we look in awe at a presumably human-equivalent intelligence embodied in a machine).

I’m not saying that we won’t ever reach that vaunted point, yet some fervently argue we won’t.

I suppose it’s a tossup as to whether getting to the singularity is something to be sought or to be feared.

For those that look at the world in a smiley face way, perhaps AI that is our equivalent in intelligence will aid us in solving up-until-now unsolvable problems, such as aiding in finding a cure for cancer or being able to figure out how to overcome world hunger.

In essence, our newfound buddy will boost our aggregate capacity of intelligence and be an instrumental contributor towards the betterment of humanity.

I’d like to think that’s what will happen.

On the other hand, for those of you that are more doom-and-gloom oriented (perhaps rightfully so), you are gravely worried that this AI might decide it would rather be the master versus the slave and could opt on a massive scale to take over humans.

Plus, especially worrisome, the AI might ascertain that humans aren’t worthwhile anyway, and off with the heads of humanity.

As a human, I am not particularly keen on that outcome.

All in all, the question about AI and human rights is right now a rather theoretical exercise since there isn’t this topnotch type of AI yet crafted (of course, it’s always best to be ready for a potentially rocky future, thus, discussing the topic beforehand does have merit).

For my explanation about the singularity, see the link here:

For the presumed dangers of a superintelligence, see my coverage at this link here:

For my framework explaining the nature of AI autonomous cars, see the link here:

For my indication about how achieving self-driving cars is akin to a moonshot, see this link:

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation:

Less Than Complete AI

One supposes that we could consider the question of human rights as it might apply to AI that’s a lesser level of capability than the (maybe) insurmountable threshold of sentience.

Keep in mind that doing this, lowering the bar, could open a potential Pandora’s box of where the bar should be set.

Here’s how.

Imagine that you are trying to do pull-ups and the rule is that you need to get your chin up above the bar.

It becomes rather straightforward to ascertain whether or not you’ve done an actual pull-up.

If your chin doesn’t get over that bar, it’s not considered a true pull-up. Furthermore, it doesn’t matter whether your chin ended-up a quarter inch below the bar, nor whether it was three inches below the bar. Essentially, you either make it clearly over the bar, or you don’t.

In the case of AI, if the “bar” is the achievement of sentience, and if we are willing to allow that some alternative place below the bar will count for having achieved AI, where might we draw that line?

You might argue that if the AI can write poetry, voila, it is considered true AI.

In existing parlance, some refer to this as a form of narrow AI, meaning AI that can do well in a narrow domain, but this does not ergo mean that the AI can do particularly well in any other domains (likely not).

Someone else might say that writing poetry is not sufficient and that instead if AI can figure out how the universe began, the AI would be good enough, and though it isn’t presumably fully sentient, it nonetheless is deserving of human rights.

Or, at least deserving of the consideration of being granted human rights (which, maybe humanity won’t decide upon until the day after the grand threshold is reached, whatever the threshold is that might be decided upon since we do often like to wait until the last moment to make thorny decisions).

The point being that we might indubitably argue endlessly about how far below the bar that we would collectively agree is the point at which AI has gotten good enough for which it then falls into the realm of possibly being assigned human rights.

For those of you that say that this matter isn’t so complicated and you’ll certainly know it (i.e., AI) when you see it, there’s a famous approach called the Turing Test that seeks to clarify how to figure out whether AI has reached human-like intelligence. But there are lots of twists and turns that make this, for some, surprisingly more unsettled than you might assume.

In short, once we agree that going below the sentience bar is allowed, the whole topic gets really murky and possibly undecidable due to trying to reach consensus on whether a quarter inch below, or three inches below, or several feet below the bar is sufficient.

Wait a second, some are exhorting, why do we need to even consider granting human rights to a machine anyway?

Well, some believe that a machine that showcases human-like intelligence ought to be treated with the same respect that we would give to another human.

A brief tangent herein might be handy to ponder.

You might know that there is an acrimonious and ongoing debate about whether animals should have the same rights as humans.

Some people vehemently say yes, while others claim it is absurd to assign human rights to “creatures” that are not able to exhibit the same intelligence as humans do (sure, there are admittedly some mighty clever animals, but once again, if the bar is a form of sentience that is wrapped into the fullest nature of human intelligence, we are back to the issue of how much we lower the “bar” to accommodate them, in this case accommodating everyday animals).

Some would say that until the day upon which animals are able to write poetry and intellectually contribute to other vital aspects of humanity’s pursuits, they can have some form of “animal rights,” but by-gosh they aren’t “qualified” for getting the revered human rights.

Please know that I don’t want to take us down the rabbit hole on animal rights, and so let’s set that aside for the moment, realizing that I brought it up just to mention that the assignment of human rights is a touchy topic and one that goes beyond the realm of debates about AI.

Okay, I’ve highlighted herein that the “AI” mentioned in the question of assigning human rights is ambiguous and not even yet achieved.

You might be curious about what it means to refer to “human rights” and whether we can all generally agree to what that consists of.

Fortunately, yes, generally we do have some agreement on that matter.

I’m referring to the United Nations promulgation of the Universal Declaration of Human Rights (UDHR).

Be aware that some critics don’t like the UDHR, including those that criticize its wording, some believe it doesn’t cover enough rights, some assert that it is vague and misleading, etc.

Look, I’m not saying it is perfect, nor that it is necessarily “right and true,” but at least it is a marker or line-in-the-sand, and we can use it for the needed purposes herein.

Namely, for a debate and discussion about assigning human rights to AI, let’s allow that this thought experiment on this weighty matter can be undertaken concerning using the UDHR as a means of expressing what we intend overall as human rights.

In a moment, I’ll identify some of the human rights spelled out in the UDHR, and we can explore what might happen if those human rights were assigned to AI.

One other quick remark.

Many assume that AI of a sentience capacity will of necessity be rooted in a robot.

Not necessarily.

There could be a sentient AI that is embodied in something other than a “robot” (most people assume a robot is a machine that has robotic arms, robotic legs, robotic hands, and overall looks like a human being, though a robot can refer to a much wider variety of machine instantiations).

Let’s then consider the following idea: What might happen if we assign human rights to AI and we are all using AI-based true self-driving cars as our only form of transportation?

For popular AI conspiracy theories see my coverage here:

On the topic of AI being considered superhuman, see my analysis here:

For more about robots and cobots and AI autonomous cars, see my link here:

Details Of Importance

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
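The level distinctions above come from the SAE J3016 taxonomy. A condensed sketch of that taxonomy as a lookup (the one-line descriptions are my paraphrase, not SAE’s official wording):

```python
# Condensed paraphrase of the SAE J3016 driving-automation levels.
# Second field: whether a human must remain responsible for the driving task.
SAE_LEVELS = {
    0: ("No driving automation", True),
    1: ("Driver assistance (e.g., adaptive cruise control)", True),
    2: ("Partial automation; the driver must supervise at all times", True),
    3: ("Conditional automation; the driver must take over on request", True),
    4: ("High automation within a limited operational domain", False),
    5: ("Full automation under all conditions", False),
}

def human_driver_required(level):
    # True when a human remains the responsible party for the driving actions.
    return SAE_LEVELS[level][1]

print(human_driver_required(2))  # semi-autonomous: human still responsible
print(human_driver_required(4))  # true self-driving: the AI does the driving
```

The key boundary for this discussion falls between Level 3 and Level 4: below it, a human is always the responsible party; at or above it, the AI is doing the driving.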

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite the videos human drivers keep posting of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Though it will likely take several decades to have widespread use of true self-driving cars (assuming we can attain true self-driving cars), some believe that ultimately we will have only driverless cars on our roads and we will no longer have any human-driven cars.

This is a yet-to-be-settled matter, and today there are some who vow they won’t give up their “right” to drive (well, it’s considered a privilege, not a right, but that’s a story for another day; see my analysis here about the potential extinction of human driving), insisting that you’ll have to pry their cold dead hands from the steering wheel to get them out of the driver’s seat.

Anyway, let’s assume that we might indeed end up with solely driverless cars.

It’s a good news, bad news affair.

The good news is that none of us will need to drive, or even need to know how to drive.

The bad news is that we’ll be wholly dependent upon the AI-based driving systems for our mobility.

It’s a tradeoff, for sure.

In that future, suppose we have decided that AI is worthy of having human rights.

Presumably, it would seem that AI-based self-driving cars would, therefore, fall within that grant.

What does that portend?

Time to bring up the handy-dandy Universal Declaration of Human Rights and see what it has to offer.

Consider some key excerpted selections from the UDHR:

Article 23

“Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment.”

For the AI that’s driving a self-driving car, if it has the right to work, including a free choice of employment, does this imply that the AI could choose not to drive a driverless car, based on the exercise of its assigned human rights?

Presumably, indeed, the AI could refuse to do any driving, or perhaps be willing to drive when it’s, say, a fun drive to the beach, but decline to drive when it’s snowing out.

Lest you think this is a preposterous notion, realize that human drivers would normally also have the right to make such choices.

Assuming that we’ve collectively decided that AI ought to also have human rights, in theory, the AI driving system would have the freedom to drive or not drive (considering that it was the “employment” of the AI, which in itself raises other murky issues).

Article 4

“No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.”

For those who might argue that the AI driving system is not being “employed” to drive, what then is the basis for the AI to do the driving?

Suppose you answer that it is what the AI is ordered to do by mankind.

But, one might see that in harsher terms, such as the AI is being “enslaved” to be a driver for us humans.

In that case, the human right against slavery or servitude would seem to be violated for AI, assuming that human rights are assigned to AI and that you sincerely believe those rights are fully and equally applicable to both humans and AI.

Article 24

“Everyone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.”

Pundits predict that true self-driving cars will be operating around the clock.

Unlike a human driver, an AI system presumably won’t tire out, won’t need any rest, and won’t even require breaks for lunch or using the bathroom.

It is going to be a 24×7 existence for driverless cars.

As a caveat, I’ve pointed out that this isn’t exactly the case: there will be time needed for driverless cars to be maintained and repaired, and thus there will be downtime, though that’s not due to the driver and instead due to the wear-and-tear on the vehicle itself.
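The distinction between driver rest and vehicle wear-and-tear can be made concrete with a back-of-the-envelope availability calculation (the maintenance figure below is an illustrative assumption, not fleet data):

```python
HOURS_PER_WEEK = 24 * 7  # the "24x7 existence" of a driverless car

def weekly_availability(maintenance_hours: float) -> float:
    """Fraction of the week the car is on the road, given hours lost
    to maintenance and repair -- the only downtime, since the AI
    driver itself needs no rest breaks."""
    return (HOURS_PER_WEEK - maintenance_hours) / HOURS_PER_WEEK

# e.g., 8 hours of weekly servicing still leaves ~95% availability
print(round(weekly_availability(8.0), 2))  # 0.95
```

The point of the sketch is that every hour of downtime traces to the vehicle, not the driver, which is what makes Article 24’s “rest and leisure” question so odd when applied to the AI.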

Okay, so now the big question about Article 24 is whether or not the AI driving system is going to be allotted time for rest and leisure.

Your first reaction has got to be that this is yet another ridiculous notion.

AI needing rest and leisure?

Crazy talk.

On the other hand, since rest and leisure are designated as a human right, and if AI is going to be granted human rights, then we presumably need to allow the AI time for rest and leisure.

If you are unclear as to what AI would do during its rest and leisure, I guess we’d need to ask the AI what it would want to do.

Article 18

“Everyone has the right to freedom of thought, conscience, and religion…”

Get ready for the wildest of the excerpted selections that I’m covering in this UDHR discussion as it applies to AI.

A human right consists of the cherished notion of freedom of thought and freedom of conscience.

Would this same human right apply to AI?

And, if so, what does it translate into for an AI driving system?

Some quick thoughts.

An AI driving system is underway and taking a human passenger to a protest rally. While riding in the driverless car, the passenger brandishes a gun and brags aloud that they are going to do something untoward at the rally.

Via the inward-facing cameras and facial recognition and object recognition, along with audio recognition akin to how you interact with Siri or Alexa, the AI figures out the dastardly intentions of the passenger.

The AI then decides to not take the rider to the rally.

This is based on the AI’s freedom of conscience that the rider is aiming to harm other humans, and the self-driving car doesn’t want to aid or be an accomplice in doing so.
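To make the scenario concrete, here is a minimal sketch of such a “conscience gate” in the ride-acceptance logic. All class and function names are hypothetical; a real perception stack fusing cameras, object recognition, and audio would be vastly more involved:

```python
from dataclasses import dataclass

@dataclass
class CabinObservation:
    """Hypothetical fused output of the inward-facing cameras
    and audio recognition described in the scenario."""
    weapon_detected: bool       # object recognition flagged a gun
    threat_phrase_heard: bool   # audio recognition caught the brag

def accept_ride(obs: CabinObservation) -> bool:
    """Illustrative 'freedom of conscience' gate: the AI declines
    the trip when perception suggests the rider intends harm."""
    return not (obs.weapon_detected and obs.threat_phrase_heard)

# The scenario above: gun brandished and a threat spoken aloud
print(accept_ride(CabinObservation(True, True)))  # False -- ride refused
```

Even this toy gate shows where the slippery slope begins: someone must decide which observations justify a refusal, and on whose conscience that threshold rests.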

Do we want AI driving systems to make such choices on their own, and to ascertain when and why they will fulfill the request of a human passenger?

It’s a slippery slope in many ways and we could conjure lots of other scenarios in which the AI decides to make its own decisions about when to drive, who to drive, where to take them, as based on the AI’s own sense of freedom of thought and freedom of conscience.

Human drivers pretty much have that same latitude.

Shouldn’t the AI be able to do likewise, assuming that we are assigning human rights to AI?

For the potential of human driver extinction, see my discussion here:

For aspects of freewill and AI, see this link here:

For the notion of AI driving certification versus human certification, see my discussion here:


Nonsense, some might blurt out, pure nonsense.

Never ever will we provide human rights to AI, no matter how intelligent it might become.

There is, though, the “opposite” side of the equation that some assert we need to be mindful of.

Suppose we don’t provide human rights to AI.

Suppose further that this irks the AI, and the AI becomes powerful enough, possibly even super-intelligent, surpassing human intelligence.

Would we have established a sense of disrespect toward AI, and thus the super-intelligent AI might decide that such sordid disrespect should be met with likewise repugnant disrespect toward humanity?

Furthermore, and here’s the really scary part: if the AI is so much smarter than us, it seems it could find a means to enslave us or kill us off (even if we “cleverly” thought we had prevented such an outcome), and do so perhaps without our catching on that the AI is going for our jugular (variously likened to the Gorilla Problem; see Stuart Russell’s excellent AI book entitled Human Compatible).

That would certainly seem to be a notable use case of living with (or dying from) the revered adage that you ought to treat others as you would wish to be treated.

Maybe we need to genuinely start giving some serious thought to those human rights for AI.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]

