Artificial Intelligence

Global Economic Impact of AI: Facts and Figures

By Sharmistha Chatterjee (@sharmi1206), https://www.linkedin.com/in/sharmistha-chatterjee-7a186310/

A Summary of Research Insights from Emerj, Harvard Business Review, MIT Sloan, and McKinsey

Wall Street, venture capitalists, technology executives, and data scientists all have important reasons to understand the growth and opportunity in the artificial intelligence market and to assess business growth and opportunities. It gives them insight into the funds invested in AI and analytics as well as potential revenue growth and turnover. Indeed, the growth of AI, continuing research, the development of easier open-source libraries, and applications across small to large-scale industries are sure to revolutionize industry over the next two decades, and the impact is being felt in almost all countries worldwide.

To dive deep into the growth of AI and future trends, an understanding of the type and size of the market is essential, along with (a) AI-related industry market research forecasts and (b) data from reputable research sources on AI valuation and forecasting.

The blog is structured as follows:

  • To provide a short consensus on well-researched projections of AI’s growth and market value in the coming decade.
  • To understand the per capita income and GDP of each country from businesses driven by AI and analytics.

The impact of AI is so widespread and visible that:

IBM’s CEO claims a potential $2 trillion market for “cognitive computing”.

Google co-founder Larry Page states that “Artificial intelligence would be the ultimate version of Google. The ultimate search engine is capable of understanding everything on the web. It will become so AI-driven that in the near future it would understand exactly what you wanted and give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we’re working on.”

Different sectors exhibit different dynamics in adopting and absorbing AI, leading to different levels of economic impact.

Figure: sector-by-sector AI adoption and economic impact (source link not preserved)

Comparing different industry sectors in the figure above:

High-tech industries such as telecom and media have already adopted AI relatively rapidly and are looking for transformation across all possible avenues. They are followed by consumer, financial services, and professional services.

Healthcare and the industrial sector are adopting AI more slowly, while energy and the public sector are the slowest to make this transition.

Further, the economic impact in the telecom and high-tech sector could be more than double that of healthcare in 2030. If the national average macroeconomic impact is indexed at 100, healthcare might experience a 40 percent lower impact (i.e. 60), while fast adopters such as telecom and high-tech, which are highly influenced by AI, could experience a 40 percent higher impact (i.e. 140) than the national average.

Several internal and external factors specific to a country or state are known to affect AI-driven productivity growth, including labor automation, innovation, and new competition. In addition, micro factors, such as the pace of AI adoption, and macro factors, such as a country’s global connectedness and labor-market structure, also contribute to the size of the impact.

The end result is to grow the AI value chain and boost the ICT sector, making an important contribution to the economy.

Production channels: The direct economic impact of AI comes from automating production and saving cost. It primarily considers three production dimensions. The first is the “augmentation” of labor and capital, where new AI capacity is developed, deployed, and operated by new engineers and big-data analysts. Second, investment in AI technologies saves labor as machines take over tasks that humans currently perform. Third, better AI-driven innovation saves overall cost (including infrastructure), enabling firms to produce the same output with the same or lower inputs.

Augmentation: Relates to increased use of productive AI-driven labor and capital.

Substitution: AI-driven technologies deliver better results in areas such as automation, where they have been found to be more cost-effective, and they offer ways to substitute for other factors of production. Advanced economies could gain about 10 to 15 percent of the impact from labor substitution, compared with 5 to 10 percent in developing economies.

Product and service innovation and extension: Investment in AI beyond labor substitution can produce additional economic output by expanding firms’ portfolios, increasing channels for products and services (e.g. AI-based recommendations), developing new business models, or a combination of the three.

Externality channels: The application of AI tools and techniques can also contribute to global economic flows (e.g. chatbots, news aggregation engines). Such flows happen within countries (across states and geographical boundaries) as well as between countries, facilitating more efficient cross-border commerce. Countries that are more connected and participate more in global flows clearly stand to benefit more from AI. Further, AI could boost supply-chain efficiency and reduce the complexity associated with global contracts, classification, and trade compliance.

Wealth creation and reinvestment: AI contributes to higher productivity and efficiency gains in economies. These gains flow back as higher wages for workers, profits for entrepreneurs and firms, higher consumption, and more productive investment.

Transition and implementation costs: Several costs incurred while executing the transition to AI, such as organizational restructuring, adoption of new solutions, integration costs, and associated project and consulting fees, are known to affect the transition negatively. Businesses should weigh costs against benefits and strategize their roadmap accordingly.

Negative externalities: AI could induce major negative distributional externalities affecting workers by depressing the labor share of income and potential economic growth.
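To make the interplay between these positive and negative channels concrete, here is a minimal, purely illustrative sketch in Python that nets channel contributions into an overall GDP impact. The channel names follow the list above, but every number is a made-up placeholder, not a figure from McKinsey or any other cited source.

```python
# Illustrative only: toy netting of AI's GDP impact across the channels above.
# All percentage-point values are invented placeholders, not published estimates.

channels = {
    "augmentation": 3.0,              # AI-augmented labor and capital
    "substitution": 2.0,              # labor saved by automating existing tasks
    "innovation_and_extension": 2.5,  # new products, services, business models
    "externalities": 1.0,             # gains from global flows and trade efficiency
    "transition_costs": -1.5,         # restructuring, integration, consulting fees
    "negative_externalities": -0.5,   # e.g. depressed labor share of income
}

net_impact = sum(channels.values())
for name, value in channels.items():
    print(f"{name:>26s}: {value:+.1f} pp of GDP")
print(f"{'net impact':>26s}: {net_impact:+.1f} pp of GDP")
```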

The following figure illustrates the overall economic impact resulting from the wider adoption of AI techniques and strategies by businesses.

Figure: channels contributing to AI’s overall economic impact (source link not preserved)

AI-driven businesses have had a positive impact on revenue growth over consecutive years. The statements made by renowned founders, CEOs, entrepreneurs, and visionary leaders are borne out by the figure below, which shows the impact of AI on global GDP, with the largest contribution coming from venture-backed startups.

Source: https://emerj.com/ai-sector-overviews/valuing-the-artificial-intelligence-market-graphs-and-predictions/

“Tractica forecasts that the revenue generated from the direct and indirect application of AI software is estimated to grow from $643.7 million in 2016 to $36.8 billion by 2025. This represents a significant growth curve for the 9-year period with a compound annual growth rate (CAGR) of 56.8%.”
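As a quick sanity check, the compound annual growth rate implied by those two endpoints can be recomputed directly. A minimal sketch in Python, using only the revenue figures quoted above:

```python
# Recompute the CAGR implied by Tractica's 2016 and 2025 revenue figures.
start_revenue = 643.7e6   # USD, 2016
end_revenue = 36.8e9      # USD, 2025
years = 2025 - 2016       # the 9-year period

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 56.8%, matching the quote
```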

Tractica has assumed conservative adoption of AI in the hedge fund and investment community, with roughly 50% of hedge fund assets traded by 2025 expected to be AI-driven. Under this estimate, the algorithmic trading use case remains the top use case among the 191 use cases identified by Tractica.

Further, as per reports from Tractica, the market for enterprise AI systems will increase from $202.5 million in 2015 to $11.1 billion by 2024, as depicted in the following figure.

Worldwide growth of AI revenue (Source: Tractica)

The growth forecasts over the next decade clearly show China’s dominance of the AI market, yielding a significant increase in GDP, followed by the USA, Northern Europe, and other nations.

In China, AI is projected to give the economy a 26% boost over the next 13 years, equivalent to an extra $7 trillion in GDP, helping China rise to the top. With North American companies already using AI widely and adoption accelerating, North America can expect a 14.5% increase in GDP, worth $3.7 trillion.
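These percentage and dollar figures are internally consistent, which can be verified with a back-of-the-envelope calculation. The implied baseline GDP values below are my own arithmetic, not numbers from the underlying report:

```python
# Back-of-the-envelope check: if a 26% boost equals an extra $7 trillion,
# what baseline GDP does that imply? (Derived values, not report figures.)
projections = {
    "China": {"boost": 0.26, "extra_gdp_usd": 7.0e12},
    "North America": {"boost": 0.145, "extra_gdp_usd": 3.7e12},
}

for region, p in projections.items():
    implied_baseline = p["extra_gdp_usd"] / p["boost"]
    print(f"{region}: implied baseline GDP of about ${implied_baseline / 1e12:.1f} trillion")
```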

As GDP growth varies across continents and nations, the level of AI absorption also varies significantly between the country groups with the most and the least absorption. The figure below shows economies with higher readiness to benefit from AI. Such countries could achieve absorption levels about 11 percentage points higher than those of slow adopters by 2023, and this gap looks set to widen to about 23 percentage points by 2030. This points to the digital divide that AI may create between advanced and developing economies.

Source: McKinsey

The resulting gap in net economic impact between the country groups with the highest economic gains and those with the least is likely to widen. For example, the gap between a leading economy such as Sweden and a lagging one such as Zambia could grow from three percentage points in 2025 to 19 percentage points in 2030 in terms of net GDP impact.

AI is internationally recognized as the main driver of future growth, productivity, innovation, competitiveness, and job creation for the 21st century. However, certain technical challenges remain that need to be overcome to take it to the next step. The key challenges include:

  • Labeled training data
  • Obtaining sufficiently large data sets
  • Difficulty explaining results
  • Difficulty generalizing
  • Scaling challenges
  • Risk of bias

Apart from these common technical challenges, organisations implementing AI also face evident risks and barriers.

It is now the responsibility of policymakers and business leaders to take measurable actions to address these challenges and to support the researchers, data scientists, business analysts, and everyone else in the AI ecosystem who are driving the economy forward with huge momentum.

As Stephen Hawking, the famous theoretical physicist, cosmologist, and author, rightly put it:

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

References

  1. Valuing the Artificial Intelligence Market, Graphs and Predictions: https://emerj.com/ai-sector-overviews/valuing-the-artificial-intelligence-market-graphs-and-predictions/
  2. Notes from the AI Frontier: Modeling the Impact of AI on the World Economy: https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-ISSUEPAPER-2018-1-PDF-E.pdf
  3. USA-China-EU plans for AI: where do we stand: https://ec.europa.eu/growth/tools-databases/dem/monitor/sites/default/files/DTM_AI%20USA-China-EU%20plans%20for%20AI%20v5.pdf
  4. https://hbr.org/insight-center/interacting-with-ai
  5. https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/

Source: https://hackernoon.com/global-economic-impact-of-ai-facts-and-figures-jw1n35o3?source=rss


AI

This AI Prevents Bad Hair Days


By Louis Bouchard (@whatsai). I explain Artificial Intelligence terms and news to non-experts.

Could this be the technological innovation that hairstylists have been dying for? I’m sure a majority of us have had a bad haircut or two. But hopefully, with this AI, you’ll never have to guess what a new haircut will look like ever again.

This AI can transfer a new hairstyle and/or color to a portrait so you can see how it would look before committing to the change. Learn more about it below!

Watch the video

References:

►The full article: https://www.louisbouchard.ai/barbershop/

►Peihao Zhu et al., (2021), Barbershop, https://arxiv.org/pdf/2106.01505.pdf

►Project link: https://zpdesu.github.io/Barbershop/

►Code: https://github.com/ZPdesu/Barbershop

Video Transcript

This article is not about a new technology in itself. Instead, it is about a new and exciting application of GANs. Indeed, you saw the title, and it wasn’t clickbait. This AI can transfer your hair to show how it would look before committing to the change.

We all know that it may be hard to change your hairstyle even if you’d like to. Well, at least for myself, I’m used to the same haircut for years, telling my hairdresser “same as last time” every 3 or 4 months even if I’d like a change. I just can’t commit, afraid it would look weird and unusual. Of course, this is all in our heads as we are the only ones caring about our haircut, but this tool could be a real game-changer for some of us, helping us to decide whether or not to commit to such a change by giving us great insight into how it will look on us.

Nonetheless, these moments where you can see the future before taking a guess are rare. Even if it’s not totally accurate, it’s still pretty cool to have such an excellent approximation of how something like a new haircut could look, relieving us of some of the stress of trying something new while keeping the exciting part. Of course, haircuts are very superficial compared to more useful applications. Still, it is a step forward towards “seeing into the future” using AI, which is pretty cool. Indeed, this new technique sort of enables us to predict the future, even if it’s just the future of our haircut. But before diving into how it works, I am curious to know what you think about this. In any other field, what other application(s) would you like to see using AI to “see into the future”?

It can change not only the style of your hair but also the color from multiple image examples. You can basically give three things to the algorithm:

  • a picture of yourself
  • a picture of someone with the hairstyle you would like to have
  • another picture (or the same one) of the hair color you would like to try

and it merges everything onto yourself realistically. The results are seriously impressive. If you do not trust my judgment, as I would completely understand based on my artistic skill level, they also conducted a user study on 396 participants. Their solution was preferred 95 percent of the time! Of course, you can find more details about this study in the references below if this seems too hard to believe.

As you may suspect, we are playing with faces here, so it is using a very similar process to the past papers I covered, changing the face into cartoons or other styles, all using GANs. Since it is extremely similar, I’ll let you watch my other videos where I explained how GANs work in-depth, and I’ll focus on what is new with this method here and why it works so well.

A GAN architecture can learn to transpose specific features or styles of an image onto another. The problem is that the results often look unrealistic because of lighting differences, occlusions, or even simply the position of the head, which differ between the two pictures. All of these small details make this problem very challenging, causing artifacts in the generated image. Here’s a simple example to better visualize this problem: if you take the hair of someone from a picture taken in a dark room and try to put it on yourself outside in daylight, even if it is transposed perfectly on your head, it will still look weird.

Typically, other techniques using GANs try to encode the pictures’ information and explicitly identify the region associated with the hair attributes in this encoding to switch them. That works well when the two pictures are taken in similar conditions, but it won’t look real most of the time for the reasons I just mentioned. Then, they had to use another network to fix the relighting, holes, and other weird artifacts caused by the merging.

So the goal here was to transpose the hairstyle and color of a specific picture onto your own picture while adapting the result to follow the lighting and properties of your picture, making it convincing and realistic all at once and reducing the steps and sources of errors. If this last paragraph was unclear, I strongly recommend watching the video at the end of this article, as there are more visual examples to help you understand.

To achieve that, Peihao Zhu et al. added a missing but essential alignment step to GANs. Indeed, instead of simply encoding the images and merging them, it slightly alters the encoding following a different segmentation mask to make the latent codes from the two images more similar. As I mentioned, they can edit both the structure and the style, or appearance, of the hair. Here, the structure is, of course, the geometry of the hair, telling us if it’s curly, wavy, or straight.

If you’ve seen my other videos, you already know that GANs encode the information using convolutions. This means they use kernels to downscale the information at each layer, making it smaller and smaller and thus iteratively removing spatial details while giving more and more weight to general information in the resulting output. This structural information is obtained, as always, from the early layers of the GAN, before the encoding becomes too general and, well, too encoded to represent spatial features. Appearance refers to the deeply encoded information, including hair color, texture, and lighting.
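To make the early-versus-deep split concrete, here is a minimal hypothetical sketch of how a StyleGAN2-style W+ latent code (one style vector per generator layer) could be divided into a coarse "structure" part and a deeper "appearance" part and then recombined. The layer cutoff and shapes are illustrative assumptions, not the exact values used in the Barbershop paper.

```python
import numpy as np

# Hypothetical W+ latent: 18 layers x 512 dims, typical for StyleGAN2 at 1024x1024.
NUM_LAYERS, LATENT_DIM = 18, 512
STRUCTURE_LAYERS = 7  # assumption: early layers carry coarse geometry (hair shape),
                      # deeper layers carry appearance (color, texture, lighting)

rng = np.random.default_rng(0)
w_reference = rng.normal(size=(NUM_LAYERS, LATENT_DIM))  # stand-in for your portrait's code
w_target = rng.normal(size=(NUM_LAYERS, LATENT_DIM))     # stand-in for the hairstyle photo's code

def mix_structure_and_appearance(structure_src, appearance_src, cutoff=STRUCTURE_LAYERS):
    """Take coarse/structural layers from one latent code and appearance layers from another."""
    mixed = appearance_src.copy()
    mixed[:cutoff] = structure_src[:cutoff]
    return mixed

# Borrow the target's hair structure while keeping the reference's appearance.
w_mixed = mix_structure_and_appearance(structure_src=w_target, appearance_src=w_reference)
print(w_mixed.shape)  # (18, 512): one blended style vector per generator layer
```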

You know where the information is taken from in the different images, but now, how do they merge this information and make it look more realistic than previous approaches? This is done using segmentation maps from the images, and more precisely, by generating the desired new image based on an aligned version of our target and reference images. The reference image is our own image, and the target image is the hairstyle we want to apply. These segmentation maps tell us what the image contains and where it is: hair, skin, eyes, nose, etc.

Using this information from the different images, they can align the heads following the target image’s structure before sending the images to the network for encoding, using a modified StyleGAN2-based architecture, one that I already covered numerous times. This alignment makes the encoded information much more easily comparable and reconstructable. Then, for the appearance and illumination problem, they find an appropriate mixture ratio of the appearance encodings from the target and reference images for the same segmented regions, making it look as real as possible. Here’s what the results look like without the alignment on the left column and their approach on the right.
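Putting these steps together, here is a rough, hypothetical outline of the segment, align, encode, and blend flow described above. Every helper function below is a dummy stand-in so that the sketch runs end to end; none of these names come from the official Barbershop code.

```python
import numpy as np

def segment_face(img):
    # Stand-in segmentation: label each pixel (0 background, 1 hair, 2 face, ...).
    return np.zeros(img.shape[:2], dtype=np.int64)

def align_to_mask(img, source_mask, target_mask):
    # Stand-in alignment: a real system would warp the head to match target_mask.
    return img

def encode(img):
    # Stand-in encoder: returns a (structure_code, appearance_code) pair.
    return np.zeros(512), np.zeros(512)

def generate(structure_code, appearance_code):
    # Stand-in generator: a real system would decode the latents into a portrait.
    return np.zeros((256, 256, 3))

def transfer_hairstyle(reference_img, target_img, appearance_weight=0.5):
    ref_mask = segment_face(reference_img)                 # 1. segment both images
    tgt_mask = segment_face(target_img)
    aligned_ref = align_to_mask(reference_img,             # 2. align the reference head
                                source_mask=ref_mask,      #    to the target's layout
                                target_mask=tgt_mask)
    _, appearance_ref = encode(aligned_ref)                # 3. encode into latent space
    structure_tgt, appearance_tgt = encode(target_img)
    blended = (appearance_weight * appearance_tgt          # 4. blend appearance codes
               + (1 - appearance_weight) * appearance_ref) #    per the mixture ratio
    return generate(structure_tgt, blended)                # 5. decode the new portrait

result = transfer_hairstyle(np.zeros((256, 256, 3)), np.zeros((256, 256, 3)))
print(result.shape)  # (256, 256, 3)
```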

Of course, this process is a bit more complicated, and all the details can be found in the paper linked in the references. Note that, just like most GAN implementations, their architecture needed to be trained. Here, they used a StyleGAN2-based network trained on the FFHQ dataset. Then, since they made many modifications, as we just discussed, they retrained their modified StyleGAN2 network using 198 pairs of images as hairstyle transfer examples to optimize the model’s decisions for both the appearance mixture ratio and the structural encodings.

Also, as you may expect, there are still some imperfections, such as cases where their approach fails to align the segmentation masks or to reconstruct the face. Still, the results are extremely impressive, and it is great that they are openly sharing the limitations. As they state in the paper, the source code for their method will be made public after an eventual publication of the paper. The link to the official GitHub repo is in the references below, hoping that it will be released soon.

Thank you for watching!

Source: https://hackernoon.com/this-ai-prevents-bad-hair-days-uu6c37ei?source=rss


Artificial Intelligence

The rising importance of Fintech innovation in the new age


The rise of fintech has opened an array of opportunities for smart cities to develop and thrive. Its importance has actually increased in the age of the pandemic that calls for social distancing or contactless transactions.

The leading global payment solutions provider Visa recently indicated the increasing role of digital payments. Thanks to the expanding role of fintech, digital payments are expected to enter different smart city sectors.

Reportedly, fintech applications are going to be instrumental in the transportation sector, reaching people in different forms of contactless payments. They will also ease the process of paying for parking or hiring bikes and scooters.

More than that, whether it’s loans, money transfers, investments, accounting and bookkeeping, airtime, or fundraising, smart cities and businesses are going to rely hugely on fintech in the coming future.

Going ahead, we delve into the fintech situation in three smart cities. All three are important fintech hubs that the entire world looks to.

London

In smart city culture, London has the reputation of being the ‘fintech capital’ of the world. A number of fintech giants in the city are valued at more than $1 billion.

However, the pandemic has caused a number of businesses to shut down. At the same time, it has also catalysed the shift to digital and contactless. Businesses are now adopting new ways to support their customers.

Even in this time of crisis, London is at the forefront of producing the next generation of fintech leaders, according to Ed Lane, VP of Sales for the EMEA region at nCino, a US-based cloud banking provider.

Remote work is becoming a necessity due to COVID-19. Hence, investments in different technologies and solutions by financial organisations and service providers are “more important than ever.” Lane claims this has increased the adoption of the cloud-based banking software developed by his firm.

The UK recently introduced the Bounce Back Loan Scheme and the Coronavirus Business Interruption Loan Scheme (CBILS). This is helping Lane’s company nCino and others. They are offering a Bank Operating System to aid SMEs with effective processing of loan applications. 

Fintech companies are surviving and tapping into benefits in the COVID-19 age thanks to their disruptive mindset. The dot-com crash of 2001 and the financial crash of 2008 are drivers that led them to become proactive.

Innovatively, fintech companies started offering mobile banking, online money management tools, and other personalised solutions. Today, the same is enabling them to prevail during this pandemic. Besides this, partnerships have proven to be key strategies in achieving even the impossible, as experts say.

Singapore

Singapore is showcasing a pioneering move in the fintech industry. Fintech is at the core of Singapore’s vision to become a ‘Smart Nation’ with a “Smart Financial Centre.”

To achieve the dream, the city-state has been showing constant efforts by using innovative technology. With this, it intends to pave the way for new opportunities, enhance efficiency and improve national management of financial risks.

By 2019, Singapore was already home to over 600 fintech firms. These companies attracted more than half of the total funding for that year. And amidst the COVID-19 pandemic, the Monetary Authority of Singapore (MAS) introduced two major support packages.

First, on April 8, 2020, it announced a S$125 million COVID-19 care package for the financial and fintech sectors. This package aims at helping the sectors fight the challenges of the COVID-19 health crisis by supporting workers, accelerating digitalisation, and improving operational readiness and resilience.

Second, on May 13, 2020, MAS, the Singapore Fintech Association (SFA) and AMTD Foundation launched the MAS-SFA-AMTD Fintech Solidarity Grant. The S$6 million grant proposes to support Singapore-based fintech firms.

A specific focus is on managing cash flow, producing new sales and seeking growth strategies. At the individual level, many industry participants have launched their own initiatives to support the sector.

Hong Kong

Hong Kong’s fintech startup sector tells a different story, one that involves the role of blockchain. Blockchain-based companies are dominating the city’s startup sector.

In 2019, enterprise DLT and crypto-assets exchanges earned rankings as the most popular sectors in Hong Kong’s fintech industry. The report comes from the Financial Services and Treasury Bureau. It confirms that blockchain startups make up 40% of the 57 Fintech firms established in the city in 2019.

As per reports, 45% of the new companies are focused on developing applications for large businesses, which is why enterprise blockchain firms were the most popular. Another 27% of blockchain-related firms in Hong Kong are involved in digital currency.

The increase in the number of blockchain-based fintech startups is due to the government of the Hong Kong Special Administrative Region, which introduced new policies that make blockchain tech development a priority.

Blockchain is thriving in Hong Kong due to a number of reasons. The city has laid down clear regulatory guidelines for blockchain-related businesses. Many have leveraged the benefits of the QMAS program. It enables applicants to settle down in the region before having to look for employment. This has immensely encouraged several blockchain specialists to move to Hong Kong.

The city government is also entering partnerships to expand its fintech footprint in the right direction. For example, in November 2019, the government collaborated with Thailand’s officials to explore the development of Central Bank Digital Currencies (CBDCs). Blockchain is a promising technology for the fintech industry. It supports quick, secure and cost-effective transaction-related services.

More importantly, it provides transparency that traditional technologies were not capable of, thanks to the use of encrypted distributed ledgers. These enable real-time verification of transactions without the need for intermediaries such as correspondent banks.

Why Is Fintech Innovation Important For The Development Of Smart Cities?

Fintech Boosting Business And Growth Opportunities In Smart Cities

Advanced cities that are now smart cities have been using fintech for their development. With that, they are also leading the way for others to follow. Many experts confirm that innovation in fintech is a must for any city to become a ‘smart city.’

It enables easy national as well as international business. For the residents, it makes life more convenient by encouraging contactless, economical, sustainable and efficient payment-related operations. 

One important aspect that smart city development and fintech innovation have in common is their determination to cut bureaucracy. A city that manages to enable speedy and inexpensive international transfers will also give its citizens greater access to the global market, as Hans W. Winterhoff of KPMG notes in one of his articles.

Furthermore, fintech innovations of the past have demonstrated their success. Some fintech applications have simplified procedures that became unnecessarily complex over time. Traditional banking services are one of the biggest examples. 

The innovative fintech services opened doors for online shopping and easy international money transfers. Fintech is able to provide the same product or service to consumers. But that’s happening in less time, with fewer steps, and at more affordable rates.

Besides, transparency is another important factor that is allowing consumers to have faith in fintech services. With the current potential of fintech, we can now say that it is one of the essential pillars of successful smart city development. The results are already here in the age of this pandemic.

Source: https://www.fintechnews.org/the-rising-importance-of-fintech-innovation-in-the-new-age-2/


AI

What Waabi’s launch means for the self-driving car industry



It is not the best of times for self-driving car startups. The past year has seen large tech companies acquire startups that were running out of cash and ride-hailing companies shutter costly self-driving car projects with no prospect of becoming production-ready anytime soon.

Yet, in the midst of this downturn, Waabi, a Toronto-based self-driving car startup, has just come out of stealth with an insane amount of $83.5 million in a Series A funding round led by Khosla Ventures, with additional participation from Uber, 8VC, Radical Ventures, OMERS Ventures, BDC, and Aurora Innovation. The company’s financial backers also include Geoffrey Hinton, Fei-Fei Li, Pieter Abbeel, and Sanja Fidler, artificial intelligence scientists with great influence in the academic and applied AI community.

What makes Waabi qualified for such support? According to the company’s press release, Waabi aims to solve the “scale” challenge of self-driving car research and “bring commercially viable self-driving technology to society.” Those are two key challenges of the self-driving car industry and are mentioned numerous times in the release.

What Waabi describes as its “next generation of self-driving technology” has yet to pass the test of time. But its execution plan provides hints at what directions the self-driving car industry could be headed.

Better machine learning algorithms and simulations

According to Waabi’s press release: “The traditional approach to engineering self-driving vehicles results in a software stack that does not take full advantage of the power of AI, and that requires complex and time-consuming manual tuning. This makes scaling costly and technically challenging, especially when it comes to solving for less frequent and more unpredictable driving scenarios.”

Leading self-driving car companies have driven their cars on real roads for millions of miles to train their deep learning models. Real-road training is costly both in terms of logistics and human resources. It is also fraught with legal challenges as the laws surrounding self-driving car tests vary in different jurisdictions. Yet despite all the training, self-driving car technology struggles to handle corner cases, rare situations that are not included in the training data. These mounting challenges speak to the limits of current self-driving car technology.

Here’s how Waabi claims to solve these challenges (emphasis mine): “The company’s breakthrough, AI-first approach, developed by a team of world leading technologists, leverages deep learning, probabilistic inference and complex optimization to create software that is end-to-end trainable, interpretable and capable of very complex reasoning. This, together with a revolutionary closed loop simulator that has an unprecedented level of fidelity, enables testing at scale of both common driving scenarios and safety-critical edge cases. This approach significantly reduces the need to drive testing miles in the real world and results in a safer, more affordable, solution.”

There’s a lot of jargon in there (a lot of which is probably marketing lingo) that needs to be clarified. I reached out to Waabi for more details and will update this post if I hear back from them.

By “AI-first approach,” I suppose they mean that they will put more emphasis on creating better machine learning models and less on complementary technology such as lidars, radars, and mapping data. The benefit of having a software-heavy stack is the very low costs of updating the technology. And there will be a lot of updating in the coming years as scientists continue to find ways to circumvent the limits of self-driving AI.

The combination of “deep learning, probabilistic reasoning, and complex optimization” is interesting, albeit not a breakthrough. Most deep learning systems use non-probabilistic inference. They provide an output, say a category or a predicted value, without giving the level of uncertainty on the result. Probabilistic deep learning, on the other hand, also provides the reliability of its inferences, which can be very useful in critical applications such as driving.
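To illustrate the difference in plain terms, here is a minimal sketch (in Python with NumPy, and not Waabi's code) of a probabilistic regression output: the model emits a mean and a variance rather than a single value, and the Gaussian negative log-likelihood penalizes predictions that are both wrong and overconfident.

```python
import numpy as np

def gaussian_nll(y_true, mean, var):
    """Negative log-likelihood of y_true under a Gaussian N(mean, var)."""
    return 0.5 * (np.log(2 * np.pi * var) + (y_true - mean) ** 2 / var)

# A deterministic model would output only `mean`; a probabilistic one also outputs `var`.
y_true = 2.0                  # e.g. a made-up time-to-collision in seconds
confident_wrong = (5.0, 0.1)  # (mean, variance): badly wrong and very sure of itself
cautious_wrong = (5.0, 4.0)   # same error, but the model admits high uncertainty

for label, (mean, var) in [("confident but wrong", confident_wrong),
                           ("wrong but uncertain", cautious_wrong)]:
    print(f"{label}: NLL = {gaussian_nll(y_true, mean, var):.2f}")
# The confidently wrong prediction incurs a far larger loss, which is why propagating
# uncertainty matters in safety-critical applications such as driving.
```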

“End-to-end trainable” machine learning models require no manual-engineered features. This means once you have developed the architecture and determined the loss and optimization functions, all you need to do is provide the machine learning model with training examples. Most deep learning models are end-to-end trainable. Some of the more complicated architectures require a combination of hand-engineered features and knowledge along with trainable components.

Finally, “interpretability” and “reasoning” are two of the key challenges of deep learning. Deep neural networks are composed of millions and billions of parameters. This makes it hard to troubleshoot them when something goes wrong (or find problems before something bad happens), which can be a real challenge in critical scenarios such as driving cars. On the other hand, the lack of reasoning power and causal understanding makes it very difficult for deep learning models to handle situations they haven’t seen before.

According to TechCrunch’s coverage of Waabi’s launch, Raquel Urtasun, the company’s CEO, described the AI system the company uses as a “family of algorithms.”

“When combined, the developer can trace back the decision process of the AI system and incorporate prior knowledge so they don’t have to teach the AI system everything from scratch,” TechCrunch wrote.

Above: Simulation is an important component of training deep learning models for self-driving cars. (credit: CARLA)

The closed-loop simulation environment is a replacement for sending real cars on real roads. In an interview with The Verge, Urtasun said that Waabi can “test the entire system” in simulation. “We can train an entire system to learn in simulation, and we can produce the simulations with an incredible level of fidelity, such that we can really correlate what happens in simulation with what is happening in the real world.”

I’m a bit on the fence on the simulation component. Most self-driving car companies are using simulations as part of the training regime of their deep learning models. But creating simulation environments that are exact replications of the real world is virtually impossible, which is why self-driving car companies continue to use heavy road testing.
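For readers unfamiliar with the term, "closed loop" means the simulated world reacts to the agent's own decisions rather than replaying a fixed log. The toy loop below is a generic illustration of that idea only; it has nothing to do with Waabi's actual simulator.

```python
import random

class ToyDrivingSim:
    """Generic closed-loop skeleton: the next state depends on the agent's action."""

    def __init__(self):
        self.gap_to_lead_car = 30.0  # meters, made-up initial state

    def step(self, acceleration):
        # The world reacts to the agent: accelerating shrinks the gap.
        lead_car_noise = random.uniform(-0.5, 0.5)
        self.gap_to_lead_car += lead_car_noise - acceleration
        collided = self.gap_to_lead_car <= 0.0
        return self.gap_to_lead_car, collided

def policy(gap):
    # Toy stand-in for the driving model: brake when close, otherwise speed up slightly.
    return 0.5 if gap > 20.0 else -0.5

sim = ToyDrivingSim()
for t in range(100):
    gap, collided = sim.step(policy(sim.gap_to_lead_car))
    if collided:
        print(f"Collision at step {t}: the policy's own choices led here.")
        break
else:
    print(f"Completed 100 steps with a final gap of {gap:.1f} m.")
# In open-loop log replay, by contrast, the recorded scene unfolds the same way
# no matter what the agent decides, so compounding errors are never exercised.
```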

Waymo has at least 20 billion miles of simulated driving to go with its 20 million miles of real-road testing, which is a record in the industry. And I’m not sure how a startup with $83.5 million in funding can outmatch the talent, data, compute, and financial resources of a self-driving company with more than a decade of history and the backing of Alphabet, one of the wealthiest companies in the world.

More hints of the system can be found in the work that Urtasun, who is also a professor in the Department of Computer Science at the University of Toronto, does in academic research. Urtasun’s name appears on many papers about autonomous driving. But one in particular, uploaded on the arXiv preprint server in January, is interesting.

Titled “MP3: A Unified Model to Map, Perceive, Predict and Plan,” the paper discusses an approach to self-driving that is very close to the description in Waabi’s launch press release.

Above: MP3 is a deep learning model that uses probabilistic inference to create scenic representations and perform motion planning for self-driving cars.

The researchers describe MP3 as “an end-to-end approach to mapless driving that is interpretable, does not incur any information loss, and reasons about uncertainty in the intermediate representations.” In the paper, the researchers also discuss the use of “probabilistic spatial layers to model the static and dynamic parts of the environment.”

MP3 is end-to-end trainable and uses lidar input to create scene representations, predict future states, and plan trajectories. The machine learning model obviates the need for finely detailed mapping data that companies like Waymo use in their self-driving vehicles.
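Based only on that high-level description, a mapless perceive-predict-plan pipeline could be sketched as the interface below. The class, method names, and shapes are hypothetical placeholders for illustration; this is not the MP3 implementation.

```python
import numpy as np

class MaplessDriver:
    """Hypothetical sketch of a perceive -> predict -> plan pipeline (not MP3's code)."""

    def perceive(self, lidar_points):
        # Build a probabilistic bird's-eye-view occupancy grid from raw lidar:
        # each cell holds P(occupied) instead of a hard 0/1 label.
        return np.full((200, 200), 0.05)

    def predict(self, occupancy_now):
        # Forecast occupancy for a few future timesteps (stub: three identical frames).
        return np.stack([occupancy_now] * 3)

    def plan(self, future_occupancy):
        # Score candidate trajectories by the occupancy probability they cross and
        # return the safest one (stub: a straight line of (x, y) waypoints).
        return [(0.0, float(t)) for t in range(10)]

driver = MaplessDriver()
lidar = np.random.rand(100_000, 3)        # fake point cloud
occupancy = driver.perceive(lidar)
future = driver.predict(occupancy)
trajectory = driver.plan(future)
print(len(trajectory), "waypoints planned")
```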

Urtasun posted a video on her YouTube channel that provides a brief explanation of how MP3 works. It’s fascinating work, though many researchers will point out that it is not so much a breakthrough as a clever combination of existing techniques.

There’s also a sizeable gap between academic AI research and applied AI. It remains to be seen if MP3 or a variation of it is the model that Waabi is using and how it will perform in practical settings.

A more conservative approach to commercialization

Waabi’s first application will not be passenger cars that you can order with your Lyft or Uber app.

“The team will initially focus on deploying Waabi’s software in logistics, specifically long-haul trucking, an industry where self-driving technology stands to make the biggest and swiftest impact due to a chronic driver shortage and pervasive safety issues,” Waabi’s press release states.

What the release doesn’t mention, however, is that highway settings are an easier problem to solve because they are much more predictable than urban areas. This makes them less prone to edge cases (such as a pedestrian running in front of the car) and easier to simulate. Self-driving trucks can transport cargo between cities, while human drivers take care of delivery inside cities.

With Lyft and Uber failing to launch their own robo-taxi services, and with Waymo still far from turning Waymo One, its fully driverless ride-hailing service, into a scalable and profitable business, Waabi’s approach seems well thought out.

With more complex applications still being beyond reach, we can expect self-driving technology to make inroads into more specialized settings such as trucking and industrial complexes and factories.

Waabi also doesn’t make any mention of a timeline in the press release. This also seems to reflect the failures of the self-driving car industry in the past few years. Top executives of automotive and self-driving car companies have constantly made bold statements and given deadlines about the delivery of fully driverless technology. None of those deadlines have been met.

Whether Waabi becomes independently successful or ends up joining the acquisition portfolio of one of the tech giants, its plan seems to be a reality check on the self-driving car industry. The industry needs companies that can develop and test new technologies without much fanfare, embrace change as they learn from their mistakes, make incremental improvements, and save their cash for a long race.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Source: https://venturebeat.com/2021/06/12/what-waabis-launch-means-for-the-self-driving-car-industry/
