Waymo’s Anca Dragan and Ike Robotics CTO Jur van den Berg are coming to TC Sessions: Robotics+AI


The road to “solving” self-driving cars is riddled with challenges, from perception and decision making to figuring out the interaction between humans and robots.

Today we’re announcing that joining us at TC Sessions: Robotics+AI on March 3 at UC Berkeley are two experts who play important roles in the development and deployment of autonomous vehicle technology: Anca Dragan and Jur van den Berg.

Dragan is an assistant professor in UC Berkeley’s electrical engineering and computer sciences department, as well as a senior research scientist and consultant for Waymo, the former Google self-driving project that is now a business under Alphabet. She runs the InterACT Lab at UC Berkeley, which focuses on algorithms for human-robot interaction. Dragan also helped found, and serves on, the steering committee for the Berkeley AI Research Lab, and is co-PI of the Center for Human-Compatible AI.

Last year, Dragan was awarded the Presidential Early Career Award for Scientists and Engineers.

Van den Berg is the co-founder and CTO of Ike Robotics, a self-driving truck startup that last year raised $52 million in a Series A funding round led by Bain Capital Ventures. Van den Berg has been part of some of the most important, secretive and even controversial companies in the autonomous vehicle technology industry. He was a senior researcher and developer in Apple’s special projects group before jumping to self-driving trucks startup Otto. He became a senior autonomy engineer at Uber after the ride-hailing company acquired Otto.

All of this led to Ike, which was founded in 2018 with Nancy Sun and Alden Woodrow, who were also veterans of Apple, Google and Uber Advanced Technologies Group’s self-driving truck program.

TC Sessions: Robotics+AI returns to Berkeley on March 3. Make sure to grab your early-bird tickets today for $275 before prices go up by $100. Students, grab your tickets for just $50 here.

Startups, book a demo table right here and get in front of 1,000+ of Robotics/AI’s best and brightest — each table comes with four attendee tickets.

Read more: https://techcrunch.com/2020/01/10/waymos-anca-dragan-and-ike-robotics-cto-jur-van-den-berg-are-coming-to-tc-sessions-robotics-ai/

Data Science is Where to Find the Most AI Jobs and Highest Salaries


AI is a hot job market and the hottest jobs in AI are in data science. And data science jobs also pay the highest salaries. (Credit: Getty Images)

By John P. Desmond, AI Trends Editor

Jobs in data science grew nearly 46% in 2020, with salaries in the range of $100,000 to $130,000 annually, according to a recent account in TechRepublic based on information from LinkedIn and LHH, formerly Lee Hecht Harrison, a global provider of talent and leadership development.

Related job titles include data science specialist and data management analyst. Companies hiring were called out in the TechRepublic account, including:

Paul Anderson, CEO, Novacoast

Novacoast, which helps organizations build a cybersecurity posture through engineering, development, and managed services. Founded in 1996 in Santa Barbara, the company has many remote employees and a presence in the UK, Canada, Mexico, and Guatemala.

The company offers a security operations center (SOC) cloud offering called novaSOC that analyzes emerging challenges. “We work to have an answer ready before we’ve been asked,” stated CEO Paul Anderson in a press release issued on the company’s inclusion on a list of the top 250 Managed Service Providers from MSSP Alert. novaSOC automatically collects endpoint data and correlates it with threat intelligence sources, adding analysis and reporting to make a responsive security monitoring service. Novacoast is planning to hire 60 employees to open a new SOC in Wichita, Kansas.

Pendo is an information technology services company that provides step-by-step guides to help workers master new software packages. The software aims to boost employee proficiency through personalized training and automated support. Founded in 2013 in Raleigh, N.C., the company has raised $209.5 million to date, according to Crunchbase. Demand for the company’s services soared in 2020 as schools shifted to online teaching and many companies permitted employees to work from home.

“More people are using digital products. Many had planned to go digital but they could not afford to wait. That created opportunities for us,” stated Todd Olson, cofounder and CEO, in an account in Newsweek. The company now has about 2,000 customers, including Verizon, RE/MAX, Health AB, John Wiley & Sons, LabCorp, Mercury Insurance, OpenTable, Okta, Salesforce and Zendesk. The company plans to hire 400 more employees this year to fuel its growth as it invests in its presence overseas in an effort to win more large customers. The company recently had 169 open positions.

Ravi Kumar, President, Infosys

Infosys is a multinational IT services company headquartered in India that is expanding its workforce in North America. The company recently announced it would be hiring 500 people in Calgary, Alberta, Canada over the next three years, which would double its Canadian workforce to 4,000 employees. “Calgary is a natural next step of our Canadian expansion. The city is home to a thriving talent pool. We will tap into this talent and offer skills and opportunities that will build on the city’s economic strengths,” stated Ravi Kumar, President of Infosys, in a press release.

Over the last two years, Infosys has created 2,000 jobs across Toronto, Vancouver, Ottawa, and Montreal. The Calgary expansion will enable Infosys to scale work with clients in Western Canada, Pacific Northwest, and the Central United States across various industries, including natural resources, energy, media, retail, and communications. The company will hire tech talent from fourteen educational institutions across the country, including the University of Calgary, University of Alberta, Southern Alberta Institute of Technology, University of British Columbia, University of Toronto, and Waterloo. Infosys also plans to hire 300 workers in Pennsylvania as part of its US hiring strategy, recruiting for a range of opportunities across technology and digital services, administration and operations.

AI is Where the Money Is

In an analysis of millions of job postings across the US, the labor market information provider Burning Glass wanted to see which professions had the highest percentage of job postings requesting AI skills, according to an account from Dice. Data science was requested by 22.4% of the postings, by far the highest. Next was data engineer at 5.5%, database architect at 4.6% and network engineer/architect at 3.1%.

Burning Glass sees machine learning as a “defining skill” among data scientists, needed for day-to-day work. Overall, jobs requiring AI skills are expected to grow 43.4% over the next decade. The current median salary for jobs heavily using AI skills is $105,000, good compared to many other professions.

Hiring managers will test for knowledge of fundamental concepts and ability to execute. A portfolio of AI-related projects can help a candidate’s prospects.

Burning Glass recently announced an expansion and update of its CyberSeek source of information on America’s cybersecurity workforce. “These updates are timely as the National Initiative for Cybersecurity Education (NICE) Strategic Plan aims to promote the discovery of cybersecurity careers and multiple pathways to build and sustain a diverse and skilled workforce,” stated Rodney Petersen, Director of NICE, in a Burning Glass press release.

NICE is a partnership between government, academia, and the private sector focused on supporting the country’s ability to address current and future cybersecurity education and workforce challenges.

Trends for AI in 2021, heading into the latter stages of the global pandemic, were highlighted in a recent account in VentureBeat as:

  • Hyperautomation, the application of AI and machine learning to augment workers and automate processes to a higher degree;
  • Ethical AI, because consumers and employees expect companies to adopt AI in a responsible manner; companies will choose to do business with partners that commit to data ethics and data handling practices that reflect appropriate values;
  • And Workplace AI, to help with transitions to new models of work, especially with knowledge workers at home; AI will be used to augment customer services agents, to track employee health and for intelligent document extraction.

Read the source articles and information in TechRepublic, in a press release from Novacoast, in Newsweek, in a press release from Infosys, in an account from Dice, in a Burning Glass press release and in an account in VentureBeat.

Source: https://www.aitrends.com/data-science/data-science-is-where-to-find-the-most-ai-jobs-and-highest-salaries/

Pandemic Has Spurred CIOs to Crystallize the IT Strategy


The pandemic has spurred an accelerated digital transformation for many companies, challenging CIOs to crystallize AI strategies to achieve synergy. (Credit: Getty Images)

By AI Trends Staff

CIOs have tested many emerging technologies during the pandemic, including Internet of Things sensors, low-orbit satellites, and augmented reality. Now the challenge is to get the technologies to work together to reach big business goals.

Adriana Karaboutis, Group Chief Information and Digital Officer, National Grid

This was the message from Adriana Karaboutis, group chief information and digital officer at National Grid, speaking at the 2021 MIT CIO Symposium in a session on Accelerated Digital Transformation, held virtually recently.

The effects of the crisis made organizations “double down on that focus and crystallization for what we need to do,” she stated in an account in CIO Dive.

To pursue IoT, standardization is a must, suggested Harmeen Mehta, chief digital and innovation officer at BT, the British multinational telecommunications firm. “If the world can consolidate a bit on standardization, it will help pick up speed,” stated Mehta. “What we’ve not done well as an industry is truly come together and make some hard choices” in converging around specific types of IoT technologies.

Technologies and data streams playing off each other can lead to new outcomes, in the experience of David Neitz, CIO at engineering and construction company CDM Smith. Using NVIDIA’s Jetson Nano device, for example, CDM Smith is able to train an AI computer vision model to detect wrong-way drivers, he stated. The technology use case combines capabilities of sensors with the computing power of AI.

“You have someone sitting in a command center trying to monitor 300 screens,” stated Neitz, speaking on the panel. “Why rely on a human to be observant and alert?” Instead, the company relies on a mix of sensors and algorithms to monitor and track when drivers are using lanes in an erratic way, or how traffic is behaving around a construction zone.

Another application finds CDM Smith combining IoT soil sensors, data analytics and weather information to predict when a potential landslide could impact a railroad track.

Pandemic Has Accelerated the Move to Cloud Computing

The pandemic seems to have accelerated the move to cloud computing. A recent survey of 750 cloud decision-makers found that 92% of represented enterprises have a multi-cloud strategy, and 90% have a hybrid cloud strategy. The results are contained in the 2021 State of the Cloud Report from Flexera, an IT asset management software company.

Jim Ryan, President and CEO, Flexera

“COVID-19 has accelerated the migration to cloud computing,” stated Jim Ryan, President and CEO of Flexera, in a press release. “Still, cloud isn’t magic or the land of milk and honey. Companies are moving fast, facing challenges, and trying to connect cloud computing to business outcomes. The appetite for digital transformation is high, but real-world challenges—such as managing security and optimizing cloud spend—still must be addressed.”

Cloud adoption among the respondents was as follows:

  • AWS grew to 77% (from 76% last year)
  • Azure grew to 73% (from 63% last year)
  • Google Cloud grew to 47% (from 35% last year)
  • VMware Cloud on AWS grew to 24% (from 17% last year)
  • Oracle Infrastructure Cloud grew to 29% (from 17% last year)
  • IBM Public Cloud grew to 24% (from 13% last year)
  • Alibaba Cloud grew to 12% (from 7% last year)

The key considerations CIOs are advised to take into account as they move more of their IT operations to the cloud touch on infrastructure, processes, and culture, advised a recent account in Forbes. They include:

  • Putting Customers and Employees First. In the past year, organizations have transformed and served customers in new ways. Restaurants, for example, had to enable better ordering of food online to be picked up or delivered and paid for electronically. CIOs have transitioned from being “enablers” of digital transformation to being “drivers” of business change. The trend is seen as so pronounced that Gartner predicts that 25% of large-enterprise CIOs could become “COO by proxy” by 2024.
  • Developing ‘Enterprise Agility.’ Each company needs to determine its own path to competitive advantage, including which approach to take for the digital and data journey: whether to emulate the infrastructure choices of digital-native competitors or to invent a new way as the company looks for a unique enterprise identity.

IT Shifts Within Scaled, Agile Organizations

Research from McKinsey & Co. on enterprise agility identifies five core IT shifts within scaled agile organizations. These include:

  • Speed. Information needs to be relevant, actionable, and timely, whether for real-time scenarios such as fraud detection that triggers machine intervention, or monitoring that requires human judgment. “The notion of speed and pace is key,” stated the author of the Forbes account, Bruno Aziza, a technology entrepreneur who is the current Head of Data & Analytics for Google Cloud.
  • Scale. The world is predicted to store 200 zettabytes [a zettabyte is 10²¹ bytes] of data by 2025, and 50% of all data will be in the cloud. “The question is: are your teams building for a world where capacity could be limitless? Or are they living in a world where only some data deserves to be stored?” Aziza queried. He suggests that team members look at more data, since they don’t know which data will become more valuable over time.
  • Security. The number one concern of companies of all sizes in the 2021 State of the Cloud Report is security. The higher the company’s cloud maturity, the higher the concern. In a world where more data comes from a range of sources and is used by more people across more use cases, security and data governance need to be taken into account on day one, the author suggested.
  • Human Intelligence. AI is important to digital transformation efforts; the AI Specialist is now one of the fastest-growing jobs in LinkedIn’s 2020 Emerging Jobs Report. Many organizations are looking to infuse AI to optimize their technical infrastructure and focus on the adoption of machine learning as part of intelligent application initiatives. “But many struggle to marry human intelligence with machine intelligence,” the author stated. The best guidance he has found so far on when humans are better equipped than machines is from the book Only Humans Need Apply by Tom Davenport, released in 2016.

Aziza recommends working to identify the use cases where machine intelligence is most appropriate and the ones where humans do a better job “augmenting” the machine’s capabilities.

Read the source articles and information from sessions of the 2021 MIT CIO Symposium, in CIO Dive, in a press release from Flexera, in Forbes and from the book Only Humans Need Apply by Tom Davenport.

Source: https://www.aitrends.com/ai-and-business-strategy/pandemic-has-spurred-cios-to-crystallize-the-it-strategy/

AI in Construction Scenario and Workforce Planning Seen Lowering Costs 


Construction managers are using AI to help run simulations, decide on the number of workers and length of shifts, in an effort to lower costs and improve safety. (Credit: Getty Images) 

By AI Trends Staff   

AI is starting to become useful in the construction industry as firms have built the data lakes and analytics systems necessary for AI to provide useful advice on how to plan, schedule, and execute projects. 

In some cases, the AI advisors have become a standard ingredient of project delivery methods, and in other cases, it is a challenge to convince construction professionals to listen to the AI advisors, according to a recent account in the Engineering News-Record.  

Alice Technologies, offering an AI-powered construction simulation platform, was founded in 2015 based on research from Stanford University. The company has raised $38.3 million to date, according to Crunchbase. The goal of Alice is to have its customers optimize project schedules and thereby reduce project duration and save on labor and equipment.  

René Morkos, Founder and CEO, Alice Technologies

“What I always hear from people [in the industry] is that ‘I really like scheduling, but the number crunching is the boring part,’” stated René Morkos, founder and CEO. “Why would anyone in their right mind want to spend time crunching all the constraints on a project? It’s mind-numbingly boring.” In his view, the construction industry is approaching a tipping point of AI adoption. 

Alice is used to run simulations of a project’s building information model, the digital representation of physical and functional space that spans architecture, engineering and construction, used to plan, design, construct and manage buildings. Users of Alice can adjust inputs, and the software shows the impact on the construction schedule, helpful in generating alternatives. 

“The fundamental value proposition of the general contractor is changing,” Morkos stated. “This new ecosystem will be all about integrated data systems.”  

Project Manager for New San Francisco High-Rise Likes Advice From AI 

While planning out staging for structural concrete on a $150 million, 20-story residential tower development in San Francisco, project director Michael MacBean of Pacific Structures uses Alice for its informed second opinion.   

“We used it on pre-construction for that project to validate our approach and check our productivity,” he stated, while noting that his own experience as a project superintendent is most important. “The algorithm is awesome. Its ability to calculate every which way to skin the cat, if you will, gets that much better if you also have human expertise in construction,” stated MacBean.  

The Alice software helps him make decisions such as where to place a crane, whether to have workers put in eight-hour or 10-hour days, and whether to recruit 50 workers or 20. MacBean could have made the calculations on his own, but using the software was more efficient. “Alice does some pretty simple math, but it does it very quickly,” he stated. 

DPR Construction, an engineering company based in Redwood City, Calif., is developing its own AI-assisted build management program, relying on years of its own project data. “Some of the machine-learning projects we are working on right now, we’re not calling them AI. We’re calling them ‘AI assist’ or ‘human assist,’ ” stated Hrishi Maha, DPR data analytics leader. The idea is to augment the decision-making of human users, to offer insights based on the past performance of DPR projects.  

The automation can also be used in bid preparation and project planning. “The goal is to help our business development, operations, and scheduling folks make more informed decisions based on historical data so everything is more scientific, rather than someone’s bad feeling about something,” Maha stated. 

Niran Shrestha, CEO and Cofounder, Kwant.ai

To get more usable site data, DPR has also been trying out outfitting workers with wearables containing Internet of Things sensors from Kwant.ai, a New York City-based software supplier focused on jobsite intelligence. The system helps with worker location and scheduling, and the company is working on applying machine learning to its datasets. “We never try to sell this by saying it will solve all your problems, but if you input all the data it will provide insights for you to take action,” stated Niran Shrestha, CEO and Cofounder of Kwant.

As part of a panel on AI in Construction held by the Ontario General Contractors Association last year, Shrestha offered some insight into how AI can help estimate the needed manpower on a construction site.   

“We still don’t know how much manpower is required if you want to build an airport, a railroad or commercial building,” he stated in an account in ConstructConnect. “When you are making a cost and schedule for a new project what do you do? You look at your older schedule, and you try to compare the schedule and see if you can use that as a reference for the new project, for estimation, for manpower and for cost. Now imagine with AI you are not comparing one or two schedules… but you’re looking at thousands and thousands of data points that you’ve collected historically for years and years.”

A project manager with Cambria Design Build, Ltd., Milad Khalili, also on the panel, stated that he saw the advantage of being able to access historical data quickly. “With the help of machine learning, AI and automation, you will have it so much faster and a few clicks away,” he stated. “It’s also going to help us eliminate the repetitive tasks that project managers, project coordinators, and different people and trades are doing at construction sites. You are saving a lot of time and money on labor and materials.”  

Drones Increasingly Used for Site Surveys, Construction Planning  

Drones, or unmanned aerial vehicles (UAVs), are increasingly being employed on construction sites to monitor progress and safety and to survey sites prior to the first dirt being shoveled. Drones can provide detailed, high-resolution images, enabling engineers to pinpoint potential issues and allow for effective deployment of equipment during construction, according to a recent account in ForConstructionPros.com. Using drones to perform inspections also avoids the need to place workers at risk.

“Simply put, drones enable us to provide needed views that are inaccessible, or otherwise too risky and expensive to capture by any other means,” stated Ryan Holmes, program manager of unmanned aircraft systems (UAS) for Multivista of Newton, Mass., which provides UAV/drone services with remote pilots on staff. “We are using drones to help anywhere, from assessing land clearing and earthwork, insurance coverage, inspections, through to project completion and maintenance thereafter,” he stated.  

The data gathered through the drone can usually be accessed on any platform, be it desktop computer, laptop, tablet, or smartphone, giving project data flexibility. Many companies optimize raw drone data to produce clearer drone images. Real Time Kinematic (RTK) drones, for example, use a GPS-correction technology that provides real-time location data corrections when capturing photos of a site.

“The rise of RTK drones has provided a major step forward in providing accurate, repeatable results in a straightforward workflow, reducing one of the largest potential error sources in the placement and processing of Ground Control Points (GCPs) on earthwork projects,” stated Matthew Desmond, president of Agtek of Livermore, Calif., a supplier of survey, analysis and control software for the heavy construction industry. 

Read the source articles and information in the Engineering News-Record, in ConstructConnect and in ForConstructionPros.com.

Source: https://www.aitrends.com/ai-in-industry/ai-in-construction-scenario-and-workforce-planning-seen-lowering-costs/

Computational Omnipresence And Bird’s-Eye View Are Aiding AI Autonomous Cars 


To successfully leverage a bird’s-eye view perspective of a driving scene, a view from above, a self-driving car would have to contain the needed software to receive the information. (Credit: Getty Images)  

By Lance Eliot, the AI Trends Insider   

When driving a car, you could really benefit from having a bird’s-eye view of the driving scene. Let’s explore why.  

Imagine that you are driving in a crowded and altogether hectic downtown area. There are humongous skyscrapers towering over you, ostensibly blocking any chance of seeing beyond an extremely narrow tunnel-vision perspective of the roadway. Amid the visual obscurity, you cannot see anything on the streets that intersect with the road you are currently driving on. Until you get directly into an intersection, you pretty much have no idea what is taking place on any of those perpendicular avenues to the left and right of you.

You come to a corner that is packed with pedestrians and signposts, once again blocking your view, and decide to engage a rapid and sharp right turn. Just as you poke forward into the turn, you’ll have a very brief chance to glimpse whatever lies beyond. In that split second, you have to visually scan the entire driving scene and hope that you can mentally ascertain the considerations and contortions of whatever unknown menaces are looming ahead of you.   

For example, as you make the right turn, you might suddenly come upon a car that is unlawfully parked in the active lane. You didn’t see the halted car until making your turn and could ram directly into the back of this reckless driver.

Your mind races as you consider your options. 

You could hit the brakes, but this might get you violently rear-ended by a car that is closely following you. Another possibility would be to swing wide, going into the lane to the left of the illegally stopped car. But other traffic is using that lane, and your attempt to dart into their path could be catastrophic. You will either sideswipe one of those innocent cars or possibly disrupt their steady flow and produce a series of automotive-screeching cascading collisions.

Sadly, neither option is satisfactory.   

This is the nature of driving. You are always on the edge of your seat because you are in the midst of continually making life-or-death decisions. Most people that sit down at the steering wheel are not actively thinking about the life-or-death matters involved in driving a car. Until they get themselves into a dicey driving situation, they take for granted the grim magnitude of the driving task.   

All it takes is for you to make the wrong decision, and you can end up striking other cars (or they could ram into you). Besides the likely damage to the vehicles, there is a viable chance of you getting injured, plus your passengers getting injured. There is also the likely chance of injuring the driver of the other car and the passengers in that vehicle. Regrettably, there is also the real chance of producing fatalities. The startling statistics are that about 40,000 car crash-related fatalities occur in the United States annually, along with approximately 2.3 million related injuries.   

Driving a car is dangerous, and yet we generally tend to downplay the risks. It sure would be handy if there were ways to reduce those risks. The example of making the right turn highlights especially the dismal conditions of driving when you are only able to see a small part of the overarching puzzle. Had you somehow been able to see or know that there was a car parked in the active turn lane, you would have been able to take proactive steps to avoid the crisis.   

What could you have done differently if you had a better semblance of the roadway situation? 

You might have come to a gradual stop before making the turn, which then presumably would have coaxed the car behind you to also slow down, thus reducing the risk of getting struck from behind. Alternatively, you might have chosen to not make the right turn at all, perhaps waiting to do so when you had driven down another block or two. In short, you would have had many more options available and been able to better make those life-or-death decisions if you had a macroscopic picture of the driving scene.   

Voila, the bird’s-eye view.

Suppose that you had some kind of extending periscope that was attached to your car. This oddball contraption could allow you to look down those intersecting streets, giving you a brief heads-up before reaching the turn. That might work, but it doesn’t seem particularly practical.   

Imagine instead that you could somehow fly above the driving scene. There you are, sitting in your car at the steering wheel, simultaneously looking down upon the driving scene. Now that’s some kind of driving. 

Rather than relying upon a farfetched notion, we can be much more down-to-earth and consider everyday options that are viable right now. Assume that we mounted a camera above the intersection that you were aiming to make that right turn at. The camera could be doing real-time streaming and send the video to your in-car display.   

As such, you might glance at your in-car display and observe that a motionless car is sitting smack dab in the lane that you are expecting to use when you complete your right turn. Akin to the earlier emphasis about having crucial and timely beforehand options, you can make a wiser choice with this added vantage point.   

The idea of having a camera that overlooks an intersection is altogether practical and doable today. No magic is required. We might further increase the sensing capability by including other kinds of sensory devices. For example, we could include radar, LIDAR, thermal imaging, and so on. This array of sensors would allow for detecting the driving scene in a wide variety of conditions, such as even when it is foggy, raining, snowing, etc.   

That certainly seems tempting and a valuable way to give drivers an enhanced perspective about the driving scene. There are some potential downsides. 

A human driver has limitations on how much input they can absorb at once. Furthermore, their eyes can usually only be looking at one thing at any given point in time. Thus, if you are looking down at an in-car display to see what is beyond the upcoming corner, the odds are that you’ve now taken your eyes off the road directly ahead of you. At that moment, you might not notice that a pedestrian has suddenly stepped into the street and you are about to run them over.

The question arises as to how a human driver can take in extra information. This would have to be arranged in a fashion that would somehow keep your attention still riveted to the straight-ahead driving, and meanwhile allow for glimpsing what is beyond your ordinary viewpoint.   

Even if this could be arranged (perhaps via some kind of HUD or heads-up display), there is also the issue of making sense of this additional perspective. When you glance at the in-car display, you need to mentally analyze the added perspective and combine it with whatever you already have in your noggin about the driving scene.   

The odds are that people would have difficulty doing this, certainly at first try. Unless you had some specialized training or a lot of experience using this kind of perspective-augmenting facility, you would undoubtedly struggle. You might entirely ignore the secondary perspective. You might fail to spot the key elements in the secondary perspective that apply to your existing driving efforts. And so on.   

Some people would readily take to the feature, others might never make use of it (they could presumably disengage the feature and avoid using it).   

You can imagine how this would play out in societal terms. A person gets into a car crash and if they had used the augmented perspective, they perhaps would have been able to avoid the collision. They are then held culpable for not having used the capability. Likewise, somebody using the augmented perspective gets into a car crash, and the claim is made that the additional info was confusing or led the driver to somehow make a worse choice than if they had not been using the secondary perspective.    

Shifting gears, the future of cars consists of self-driving cars. Self-driving cars are going to be using AI-based driving systems and there won’t be a human driver at the wheel. Here is an intriguing question: Could AI-based true self-driving cars successfully leverage a bird’s-eye view perspective of the driving scene? 

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Bird’s-Eye View

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

The AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient? Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about how to leverage a bird’s-eye view perspective of the driving scene. This is an aspect that needs to be programmed as part of the hardware and software of the self-driving car (I’ve referred to this as part of the “computational omnipresence” that self-driving cars can potentially attain).   

Let’s dive into the myriad aspects that come to play on this topic.   

First, the AI driving system would have to contain some device or equipment that would receive the secondary perspective. As mentioned earlier, the bird’s-eye view electronics might contain a multitude of sensory devices. In addition, there would need to be an electronic communications capability to transmit the data being collected. In turn, any vehicle wishing to make use of the transmission would need to be equipped with an appropriate receiving device.   

The point is that the self-driving car would have to contain some form of hardware and software to receive the bird’s-eye view data. This might be undertaken by communications devices already on-board the vehicle, or there might need to be an added component installed into the self-driving car. Some might argue that this potentially adds an additional cost to the self-driving car. In that case, there would need to be an appropriate ROI (Return on Investment) calculated.   

Via a back-of-the-envelope kind of hunch, the odds are quite high that this ROI would be worthwhile since if the capability is well-implemented, it could boost the AI driving system safety and reduce the chances of self-driving cars getting into calamitous issues (something that I’ve discussed at length in my columns).   

Okay, so let’s assume that a self-driving car is equipped to receive the data streaming from the bird’s-eye view. We next need to consider the timeliness of the data.   

Suppose the data is time-delayed in being sent. As the vehicle comes up to make a right turn, the data provided to the AI driving system might be, let’s say, several seconds behind real-time. In that case, the somewhat outdated data is problematic. The driving scene might have already changed, and the AI driving system is trying to use stale data.

That’s bound to create difficulties.   

In that sense, the data coming from the bird’s-eye view has to be timely to be especially valuable. Latency is a factor. That being said, let’s clarify that a timing delay of a split-second might not fully undermine the value of the data, and likewise even a delay of several seconds would not necessarily obviate the value. The point is that though the timing is vital, at least if the data is time-stamped, the AI driving system would presumably be programmed to take into account the recency of the data and weight what is being received accordingly. Some are suggesting that the use of 5G will be significant to this timing aspect.
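To make that recency weighting concrete, here is a minimal sketch, assuming a simple exponential decay; the names, fields, and half-life value are hypothetical choices for illustration, not drawn from any actual driving stack:

```python
import math
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class RemoteObservation:
    """A single time-stamped report from the bird's-eye view feed (hypothetical)."""
    description: str        # e.g., "vehicle stopped in right-turn lane"
    timestamp: float        # epoch seconds assigned by the roadside unit
    base_confidence: float  # confidence reported by the roadside unit (0..1)


def recency_weighted_confidence(obs: RemoteObservation,
                                now: Optional[float] = None,
                                half_life_s: float = 2.0) -> float:
    """Exponentially decay confidence as the observation ages.

    half_life_s is a tunable assumption: after that many seconds the
    observation counts for half as much as a fresh one.
    """
    now = time.time() if now is None else now
    age = max(0.0, now - obs.timestamp)
    return obs.base_confidence * math.exp(-math.log(2) * age / half_life_s)


# Example: an observation that is three seconds old is trusted far less.
obs = RemoteObservation("vehicle stopped in right-turn lane",
                        timestamp=time.time() - 3.0,
                        base_confidence=0.9)
print(round(recency_weighted_confidence(obs), 3))  # roughly 0.32
```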

The next aspect for the AI driving system consists of programmatically intertwining the bird’s-eye view with the existent sensory data from the in-car onboard sensory suite. In essence, the self-driving car already has its own set of sensory devices. There is a computational analysis that takes place and is commonly referred to as Multi-Sensor Data Fusion (MSDF). You can consider the bird’s-eye view to be an added set of sensors that are now being applied to the otherwise customary MSDF computational effort of the AI driving system.   
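As a rough illustration of folding the bird’s-eye feed into that fusion step, here is a hedged sketch; the coarse grid cells, labels, weights, and threshold are illustrative assumptions, not an actual MSDF implementation:

```python
from collections import defaultdict


def fuse_detections(onboard, remote, remote_weight=0.5, keep_threshold=0.4):
    """Merge detection lists into one simple scene model.

    Each detection is (cell, label, confidence), where `cell` is a coarse
    grid location. Remote (bird's-eye) detections are down-weighted so a
    stale or corrupted feed cannot override fresh onboard sensing.
    """
    scene = defaultdict(float)
    for cell, label, conf in onboard:
        scene[(cell, label)] += conf
    for cell, label, conf in remote:
        scene[(cell, label)] += conf * remote_weight
    # Keep only hypotheses that cross a (hypothetical) planning threshold.
    return {key: score for key, score in scene.items() if score >= keep_threshold}


onboard = [((12, 4), "pedestrian", 0.8)]
remote = [((14, 7), "stopped_vehicle", 0.9)]  # beyond the onboard field of view
print(fuse_detections(onboard, remote))
```

The down-weighting is the design choice of interest: the remote feed can add hypotheses the onboard sensors cannot see, but it should not, on its own, override fresh onboard sensing.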

Here’s something else to ponder. Suppose the bird’s-eye view provides the entirety of the raw data being collected. This could amount to extraordinarily voluminous transmissions, taking precious time to do so. Also, the self-driving car has to have sufficient computational processing capabilities to crunch through the data, once it has been received. All told, within the self-driving car, there has to be a lot of added or presumably available computing resources for undertaking this calculation-intensive effort. Plus, it will take time for that kind of data-related interpretation and processing to occur.   

You might be thinking, well, in that case, just transmit a shorthand version. Perhaps the bird’s-eye system ought to have its own computing capabilities and pre-crunch the data. Instead of sending the streaming video of that car parked in the active lane, the bird’s-eye capability might just send a text message stating that a car is going to be blocking that upcoming turn.

This seems readily usable, but it turns out that there are thorny consequential issues that arise.  

For example, is the car completely blocking the lane or only partially intruding into the lane? Is the car truly at a complete standstill or perhaps rolling forward slowly? A zillion questions can be imagined. Without the raw data, the AI driving system will be getting an incomplete semblance of what issues might be awaiting the self-driving car. There is a challenging tradeoff of whether to only transmit a shorthand notation versus the full set of data.   
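One way to picture the shorthand option is a small advisory schema carrying the handful of fields those questions suggest; this is purely a hypothetical message layout for illustration, not a standardized V2X format:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class LaneAdvisory:
    """Hypothetical compact advisory derived from the bird's-eye raw data."""
    intersection_id: str
    lane_id: str
    obstruction: str          # e.g., "stopped_vehicle"
    blockage_fraction: float  # 0.0 (clear) through 1.0 (fully blocked)
    speed_mps: float          # 0.0 means a complete standstill
    timestamp: float          # epoch seconds, so recency can still be judged


advisory = LaneAdvisory(
    intersection_id="example_intersection_01",
    lane_id="nb_right_turn",
    obstruction="stopped_vehicle",
    blockage_fraction=0.7,  # only partially intruding into the lane
    speed_mps=0.0,
    timestamp=1620000000.0,
)
# A few hundred bytes of JSON instead of a raw video stream.
print(json.dumps(asdict(advisory)))
```

The tradeoff is visible in the schema itself: whatever the roadside unit does not encode, the AI driving system cannot later recover from the message.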

I’ll throw you another curveball. Should the AI driving system entirely trust the data coming from the bird’s-eye view? 

On the one hand, this bird’s-eye view could be a tremendous asset. At the same time, suppose the data is delayed and is no longer viably accurate about the driving scene. Worse, suppose that the data coming from the sensors is corrupted or perhaps has been hacked. The key is that the AI driving system is considered “responsible” for the driving of the vehicle. As such, whatever the bird’s-eye view provides is not as vital as the aspects of what the AI driving system is going to do when undertaking the driving task. 

Without seeking to create an anthropomorphic analogy, recall that earlier I mentioned that human drivers might struggle to combine the bird’s-eye view with their own recognition of the driving scene. You could suggest that the AI driving system is in the same boat.   

That being said, the beauty of the AI driving system is that, unlike a human that can only provide attention to one thing at a time (i.e., looking at the roadway versus looking at an in-car display), the AI system can be doing those types of actions simultaneously. Assuming that there is sufficient processing speed available and that the software is well-written, the AI driving system ought to be able to analyze the totality of the perspectives that are available via the amalgamated data from the widened sensory indications.   

We can add more icing to that cake.   

It is expected that self-driving cars will be equipped with V2V (vehicle-to-vehicle) electronic communications. This allows a self-driving car to send electronic messages to other nearby self-driving cars. For example, a self-driving car that perhaps made the right turn at the corner could send out a V2V cautioning that a car is parked just beyond that turn. Other nearby self-driving cars could receive the message, and the AI driving systems would accordingly (hopefully) be programmed to consider that added info.   

There is also going to be V2I (vehicle-to-infrastructure). For example, traffic signals will be beaming out electronic messages. Thus, rather than having to rely solely on a visual indication of whether a traffic signal is green-yellow-red, this can be transmitted electronically. 
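To sketch how incoming V2V and V2I payloads might be folded into one handling path, here is a minimal illustration; the message kinds and fields are assumptions made up for this example, not taken from any published V2X standard:

```python
def handle_external_message(msg, scene_notes):
    """Route V2V / V2I payloads into a simple list of scene notes.

    `msg` is a dict with a 'kind' field; unknown or malformed messages are
    set aside rather than trusted, mirroring the point that outside data
    cannot be assumed valid or truthful.
    """
    kind = msg.get("kind")
    if kind == "v2v_caution":
        scene_notes.append(("caution", msg.get("detail", "unspecified hazard")))
    elif kind == "v2i_signal_phase":
        scene_notes.append(("signal", msg.get("phase", "unknown")))
    else:
        scene_notes.append(("ignored", kind))
    return scene_notes


notes = []
handle_external_message({"kind": "v2v_caution",
                         "detail": "car parked just beyond the right turn"}, notes)
handle_external_message({"kind": "v2i_signal_phase", "phase": "red"}, notes)
print(notes)
```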

All told, a self-driving car is apt to have a plethora of outside info that will be flowing into the AI driving system. This data needs to be computationally examined. There needs to be a fusing of the data to try and determine what the driving scene consists of. Plus, the AI driving system cannot necessarily assume that all the data is valid or truthful. Some data will be, while perhaps other parts of the data might be noisy, corrupted, or otherwise have misleading indications. 

Various experimental or pilot uses of bird’s-eye view capabilities are taking place today.

For example, Ford has devised a bird’s-eye view sensory setup in Miami Beach, doing so as part of the joint effort of Ford and Argo AI’s self-driving vehicles. Specifically, a busy intersection in South Beach that is known for being especially hectic has been selected for the tryout (the corner of Lincoln Road and Lenox Avenue). This selection makes sense since it will provide a plentiful opportunity to test the bird’s-eye capability (versus if a quiet and otherwise seldom-used intersection were chosen).

The intersection is near an outdoor mall and lots of popular stores and eateries, so there is a bustling and ongoing flow of bicyclists, pedestrians, and human-driven cars at that intersection. As with any of these efforts, it is prudent to do so in conjunction with roadway and related authorities. This effort includes the Florida Department of Transportation, the City of Miami Beach, and Miami-Dade County.   

Per an indication by Scott Griffith, CEO of Ford Autonomous Vehicles and Mobility Businesses, this effort brings together the added pieces of the puzzle that underlay the emergence of self-driving cars, including the use of state-of-the-art infrastructure capabilities (see his LinkedIn coverage): “Bringing together the future of self-driving requires us to think about every piece of the puzzle. One part of this is researching emerging technologies, like smart infrastructure, to explore how we can provide our self-driving vehicles with as much information as possible to navigate complex urban areas.”   

Bryan Salesky, CEO of Argo AI, as I’ve previously covered in my columns, is well-known for his focused mission of developing and deploying self-driving cars to make getting around cities safer, easier, and a more thoroughly enjoyable experience for all. Argo AI’s active participation in these kinds of bird’s-eye view efforts is undoubtedly an incremental step along that commendable path.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Overall, the bird’s-eye view provides a handy add-on for the advent of self-driving cars.  

I mention that this should be considered an add-on since the philosophical bent must be that an AI-based true self-driving car will still operate effectively without having a bird’s-eye view available. This is important due to the obvious aspect that many locales won’t have a bird’s-eye setup in place. In addition, there is always the chance that any such equipment might suddenly be disrupted or experience troubles, in which case the AI driving system has to be programmed to work as though the bird’s-eye view is no longer functional.

There are other twists and turns to be considered. 

For example, assume that the bird’s-eye view is being operated 24×7, naturally so since cars can be coming through an intersection at any time of the day. The concern by some is that this is essentially a spying type of capability, one that can record the coming and going of people in whatever locale has the bird’s-eye view established. Yes, this is ostensibly the case, though keep in mind that we already have lots and lots of video cameras set up in many areas and the trend toward doing so continues to expand (partially due to the low-cost nature of today’s surveillance-style technologies). 

The key here is that having a bird’s-eye view for car traffic is presumably not markedly different than if there were conventional video cameras put in place. The same kind of debate and qualms would ensue. You might try to argue that the added array of sensors makes the bird’s-eye capability somewhat different, such as including perhaps radar, LIDAR, and the like, though this does not seem demonstrably different with respect to the overarching qualms involved per se.

One last quick point for the moment on this topic: There are other means to gain a bird’s-eye view.   

For example, I’ve covered the use of autonomous drones that would work in concert with self-driving cars (see my coverage in my columns). A self-driving car might have a launchpad that can place into the air a drone, which then would fly around at the command of the AI driving system and provide sensory data from a bird’s-eye perspective. This could be done via the self-driving car capabilities, or there might be drones that are provided by others, such as a local roadway authority that has autonomous drones. 

All of the same issues earlier stated are likely to be encompassed by a drone-based bird’s-eye view. Will the AI driving system be programmed to make use of the drone-added perspective? Will the AI be programmed to deal with any faulty data coming from a drone (which could happen)? Etc.   

The privacy aspects also come to play, which can be worsened or lessened via the use of a drone. On the one hand, a drone is likely to be only temporarily in the skies, while a fixed-in-place bird’s-eye view mounted sensory equipment is likely to be somewhat permanently put in place. At the same time, realize that the drone could wander around and glean a likely wider swath of data. On and on this matter goes.   

The other futuristic consideration involves how many of these bird’s-eye views will we end up having? 

Suppose that an intersection has several bird’s-eye views that have been put in place. You could argue that this is fine, since the more the merrier; on the other hand, the counterbalancing argument is that things are getting out of hand and too much can be overbearing and overwhelming.

Similarly, when you consider the use of drones for a bird’s-eye view, just imagine if all the cars in a given area opted to launch their respective drones; you would seemingly have a sky utterly cluttered with a massive flock of such mechanical beasts. Anyway, we can certainly aim to start someplace, seeking to crawl before we walk, walk before we run, and run before we fly, so to speak.

Speaking of birds and flying, when I was a youngster, I had a canary as a beloved pet. I used to dream about what the canary could see when it took flight (we would let it out of the cage in our house and allow the bird to cheerfully fly throughout).   

Maybe I could get another canary now and train it to carry around some sensory devices, proffering yet another means of getting a bird’s-eye view for self-driving cars. Besides toting around the sensors, perhaps it would sing those delightful warbling bird songs. And the costs to keep the canary on duty would be appealingly low, only requiring everyday birdseed.   

You could cheekily assert that the canary could forewarn about impending car collisions, serving as a veritable canary in a coal mine. 

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 

Source: https://www.aitrends.com/ai-insider/computational-omnipresence-and-birds-eye-view-are-aiding-ai-autonomous-cars/
