
AI

Skeptics Leery of the Billions Being Invested in Autonomous Vehicle Software 

By AI Trends Staff  

While billions of dollars are being invested in self-driving car software systems, skeptics are saying it’s a bottomless pit and a new approach is needed.  

Estimates of how big the market opportunity is vary widely. Lux Research is estimating the potential opportunity of the self-driving car market to be $87 billion by 2030, according to a recent report from GreyB Services, a technology research company based in India. Another estimate from Allied Market Research sizes the market at $557 billion by 2026.  

Missy Cummings, director, Humans and Autonomy Laboratory, Duke University

More questions are being raised as to whether these are good investments that will pay off. One skeptic is Missy Cummings, the director of the Humans and Autonomy Laboratory at Duke University. In a recent interview in Marketplace Tech, she stated, “You are starting to see all the mergers across the automotive industry where companies are either teaming up with each other or with software companies, because they realize that they just cannot keep hemorrhaging money the way they are. But that pit still has no bottom. And I don’t see this becoming a viable commercial set of operations in terms of self-driving cars for anyone anywhere, ever, until we address this problem.” 

The problem she refers to is the basic approach to autonomous driving software: she does not believe that neural nets, including convolutional neural nets, are capable of the learning required to ensure safe driving. She describes three camps of developers working on self-driving cars, robotics, and AI in general:

“There’s the camp of people like me who know the reality. We recognize it for what it is, we’ve recognized it for some time, and we know that unless we change fundamentally the way that we’re approaching this problem, it is not solvable with our current approach,” stated Cummings, who has a PhD in systems engineering from the University of Virginia, and is a veteran naval officer and military pilot.   

“There’s another larger group of people who recognize that there are some problems but feel like with enough money and enough time, we can solve it. And then there’s a third group of people that—no matter what you tell them—they believe that we can solve this problem. And you can’t talk them off that platform,” she stated.  

She sees the departure of John Krafcik as CEO of Waymo in April as a sign. 

John Krafcik stepped down as CEO of Waymo in April

Krafcik had been running Waymo since 2015, when it was still a unit of Google known as the Google Self-Driving Car Project, according to an account in Motor Authority. He oversaw Waymo’s transition into a standalone company in 2016 and the launch of the Waymo One self-driving taxi service in Phoenix, Arizona, in 2018.

While it was not stated this way by Waymo, Cummings saw the Krafcik departure as a type of surrender. “I have been trying to tell people… that we just can’t solve this problem in the way that you think we’re going to. We need to completely clean this sheet and start over.”  

She sees the exits of Uber and Lyft from the self-driving car software business as additional acknowledgements that the investments are far from paying off. In December, Uber sold its self-driving car software unit to startup Aurora, and in April, Lyft sold its self-driving technology unit to Toyota for $550 million, according to an account in Business Insider. 

Over 250 Companies Working on Autonomous Driving 

Meanwhile, the spending and investments continue at a torrid pace. The GreyB report states that over 250 companies are working on autonomous driving technology, including automakers, technology providers, services providers, and tech startups.

When a startup is perceived to have an innovation, the big players look at it to try to get an edge. For example, Amazon acquired the six-year-old startup Zoox in June 2020 for $1.2 billion, eyeing it for use in its logistics network. The founders of Zoox included Tim Kentley-Klay, who had developed self-driving technology at Stanford University.

Argo AI, based in Pittsburgh, is another autonomous driving startup the analysts cited. It was founded in 2016 by Bryan Salesky and Peter Rander, veterans of the automated driving programs at Google and Uber, respectively. Its investors include Volkswagen, in for $1 billion, and Ford Motor, which invested $1 billion over five years beginning in 2017 and continues its partnership.

The founders of startup Aurora had a different idea. Chris Urmson, Sterling Anderson, and Drew Bagnell had worked on Google’s Waymo, Tesla’s Autopilot, and Uber’s autonomy projects, respectively. Aurora chose to make software and hardware that can be custom-fitted to non-autonomous vehicles to make them driverless. In July 2020, the company also announced plans for an autonomous truck, saying in a statement that it saw the trucking market as having the best economics and the “most accommodating” level-of-service requirements.

Late last year, Aurora acquired Uber’s Advanced Technology Group in a deal that also brought an investment of $400 million into Aurora.   

Motional Has Deal with Eversource to Collect Data  

An autonomous driving startup in Boston has a different approach to generating revenue while waiting for its ship to come in. Motional, a joint venture between Hyundai Motor Group and technology company Aptiv, is running a pilot program with Eversource, New England’s largest utility, according to an account in Modern Shipper.

The program is using Motional vehicles operating within Eversource’s service territory of Massachusetts, New Hampshire, and Connecticut to collect data and information on Eversource’s utility infrastructure and report that data back to the utility. 

“We believed the sensor technology on our vehicles could serve multiple purposes, capture real-time data on energy infrastructure, and ultimately, lead to fewer outages and better service for customers,” Motional stated in a blog posting. 

A Motional spokesperson told Modern Shipper the company’s vehicles are collecting information on electric poles and wires. The insights will be used in the utility’s preventive maintenance program, as well as to monitor ongoing repairs and assess damage from severe weather events.

“At Eversource, we’re focused every day on innovative solutions to lower costs, enhance reliability and advance clean energy for our customers and communities throughout New England,” stated Jaydeep Deshpande, Eversource program manager for substation analytics. “With Motional, we have one of the leaders in the autonomous vehicle industry right in our backyard. This partnership will be focused on developing future inspection solutions by combining Motional’s state-of-the-art vehicle platform with our in-house machine learning tools.” 

 

Read the source articles and information from GreyB Services, in Marketplace Tech, in Motor Authority, in Business Insider and in Modern Shipper.


Source: https://www.aitrends.com/selfdrivingcars/skeptics-leery-of-the-billions-being-invested-in-autonomous-vehicle-software/

Artificial Intelligence

Deep learning helps predict new drug combinations to fight Covid-19

The existential threat of Covid-19 has highlighted an acute need to develop working therapeutics against emerging health concerns. One of the luxuries deep learning has afforded us is the ability to modify the landscape as it unfolds — so long as we can keep up with the viral threat, and access the right data. 

As with all new medical maladies, the data often need time to catch up, while the virus takes no time to slow down, posing a difficult challenge as it can quickly mutate and become resistant to existing drugs. This led scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Jameel Clinic for Machine Learning in Health to ask: How can we identify the right synergistic drug combinations for the rapidly spreading SARS-CoV-2?

Typically, data scientists use deep learning to pick out drug combinations from large existing datasets for things like cancer and cardiovascular disease, but, understandably, those methods can’t be used for new illnesses with limited data.

Without the necessary facts and figures, the team needed a new approach: a neural network that wears two hats. Since drug synergy often occurs through inhibition of biological targets (like proteins or nucleic acids), the model jointly learns drug-target interaction and drug-drug synergy to mine new combinations. The drug-target predictor models the interaction between a drug and a set of known biological targets that are related to the chosen disease. The target-disease association predictor learns to understand a drug’s antiviral activity, which means determining the virus yield in infected tissue cultures. Together, they can predict the synergy of two drugs. 
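
To make the two-hat design concrete, here is a minimal sketch (in PyTorch, not the researchers’ actual code) of a model that jointly scores drug-target interaction and pairwise synergy. The fingerprint size, layer widths, and the rule for combining per-target hits into a synergy score are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class DrugTargetSynergy(nn.Module):
    """Toy two-headed model: per-target hit probabilities feed a synergy score."""
    def __init__(self, fp_dim=2048, n_targets=64, hidden=256):
        super().__init__()
        # Drug-target head: molecular fingerprint -> per-target hit logits
        self.target_head = nn.Sequential(
            nn.Linear(fp_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_targets),
        )
        # Target-disease head: learned weighting of how much inhibiting
        # each target matters for antiviral activity (virus yield)
        self.disease_weights = nn.Linear(n_targets, 1)

    def interaction(self, fingerprint):
        return torch.sigmoid(self.target_head(fingerprint))

    def synergy(self, fp_a, fp_b):
        hits_a, hits_b = self.interaction(fp_a), self.interaction(fp_b)
        # Probability each target is inhibited by at least one of the two drugs
        combined = 1 - (1 - hits_a) * (1 - hits_b)
        return self.disease_weights(combined).squeeze(-1)

model = DrugTargetSynergy()
fp_a, fp_b = torch.rand(1, 2048), torch.rand(1, 2048)
print(model.synergy(fp_a, fp_b))  # unconstrained synergy score for one drug pair
```

Because the interaction head is learned from molecular structure, the synergy head needs far less combination data, which is the advantage Jin describes below.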

Two new drug combinations were found using this approach: remdesivir (currently approved by the FDA to treat Covid-19) and reserpine, as well as remdesivir and IQ-1S, which, in biological assays, proved powerful against the virus. The study has been published in the Proceedings of the National Academy of Sciences.

“By modeling interactions between drugs and biological targets, we can significantly decrease the dependence on combination synergy data,” says Wengong Jin SM ’18, a postdoc at the Broad Institute of MIT and Harvard who recently completed his doctoral work in CSAIL, and who is the lead author on a new paper about the research. “In contrast to previous approaches using drug-target interaction as fixed descriptors, our method learns to predict drug-target interaction from molecular structures. This is advantageous since a large proportion of compounds have incomplete drug-target interaction information.” 

Using multiple medications to maximize potency while also decreasing side effects is practically ubiquitous for the aforementioned cancer and cardiovascular disease, as well as a host of others such as tuberculosis, leprosy, and malaria. Specialized drug cocktails can, quite importantly, reduce the grave and sometimes public threat of resistance (think methicillin-resistant Staphylococcus aureus, known as “MRSA”), since many drug-resistant mutations are mutually exclusive. It’s much harder for a virus to develop two mutations at the same time and become resistant to two drugs in a combination therapy.
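
The arithmetic behind that intuition is simple; the sketch below uses an assumed mutation rate purely for illustration (it is not a figure from the study).

```python
# If each resistance mutation arises independently in roughly 1 in 10^6
# replication events (an assumed, illustrative rate), simultaneous
# resistance to both drugs in a combination is the product of the two.
p_single = 1e-6
p_double = p_single * p_single  # independence assumption
print(f"single-drug resistance: {p_single:.0e}, dual resistance: {p_double:.0e}")
```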

Importantly, the model isn’t limited to just one SARS-CoV-2 strain — it could also potentially be used for the increasingly contagious Delta variant or other variants of concern that may arise. To extend the model’s efficacy against these strains, you’d only need additional drug combination synergy data for the relevant mutation(s). In addition, the team applied their approach to HIV and pancreatic cancer.

To further refine their biological modeling down the line, the team plans to incorporate additional information such as protein-protein interaction and gene regulatory networks. 

Another direction for future work they’re exploring is something called “active learning.” Many drug combination models are biased toward certain chemical spaces due to their limited size, so there’s high uncertainty in predictions. Active learning helps guide the data collection process and improve accuracy in a wider chemical space. 
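
A hedged sketch of what that loop could look like in practice: train a small ensemble, score untested drug pairs by the ensemble’s disagreement, and send the most uncertain pairs to the lab. The featurization, models, and data below are stand-ins, not the team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_labeled = rng.random((100, 32))   # featurized drug pairs with measured synergy
y_labeled = rng.random(100)         # measured synergy scores
X_pool = rng.random((1000, 32))     # untested candidate pairs

# Ensemble disagreement is a cheap proxy for predictive uncertainty.
ensemble = [
    RandomForestRegressor(n_estimators=50, random_state=s).fit(X_labeled, y_labeled)
    for s in range(5)
]
preds = np.stack([m.predict(X_pool) for m in ensemble])  # shape (5, 1000)
uncertainty = preds.std(axis=0)

# Query the assays the model would learn the most from.
to_test = np.argsort(uncertainty)[-10:]
print("next pairs to assay:", to_test)
```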

Jin wrote the paper alongside Jonathan M. Stokes, Banting Fellow at the Broad Institute of MIT and Harvard; Richard T. Eastman, a scientist at the National Center for Advancing Translational Sciences (NCATS); Zina Itkin, a scientist at the National Institutes of Health; Alexey V. Zakharov, informatics lead at NCATS; James J. Collins, professor of biological engineering at MIT; and Tommi S. Jaakkola and Regina Barzilay, professors of electrical engineering and computer science at MIT.

This project is supported by the Abdul Latif Jameel Clinic for Machine Learning in Health; the Defense Threat Reduction Agency; Patrick J. McGovern Foundation; the DARPA Accelerated Molecular Discovery program; and in part by the Intramural/Extramural Research Program of the National Center for Advancing Translational Sciences within the National Institutes of Health.


Source: https://news.mit.edu/2021/deep-learning-helps-predict-new-drug-combinations-fight-covid-19-0924


AI

AI tradeoffs: Balancing powerful models and potential biases

As developers unlock new AI tools, the risk for perpetuating harmful biases becomes increasingly high — especially on the heels of a year like 2020, which reimagined many of our social and cultural norms upon which AI algorithms have long been trained.

A handful of foundational models are emerging that rely on training data of such magnitude that they are inherently powerful, but that power does not come without the risk of harmful biases, and we need to collectively acknowledge that fact.

Recognition in itself is easy; understanding is much harder, as is mitigating future risks. We must first take steps to understand the roots of these biases so we can better judge the risks involved in developing AI models.

The sneaky origins of bias

Today’s AI models are often pre-trained and open source, which allows researchers and companies alike to implement AI quickly and tailor it to their specific needs.

While this approach makes AI more commercially available, there’s a real downside — namely, that a handful of models now underpin the majority of AI applications across industries and continents. These systems are burdened by undetected or unknown biases, meaning developers who adapt them for their applications are working from a fragile foundation.

According to a recent study by Stanford’s Center for Research on Foundation Models, any biases within these foundational models or the data upon which they’re built are inherited by those using them, creating potential for amplification.

For example, YFCC100M is a publicly available data set from Flickr that is commonly used to train models. When you examine the images of people within this data set, you’ll see that the distribution of images around the world is heavily skewed toward the U.S., meaning there’s a lack of representation of people from other regions and cultures.

These types of skews in training data result in AI models that have under- or overrepresentation biases in their output — i.e., an output that is more dominant for white or Western cultures. When multiple data sets are combined to create large sets of training data, there is a lack of transparency, and it can become increasingly difficult to know if you have a balanced mix of people, regions and cultures. It’s no surprise that the resulting AI models are published with egregious biases contained therein.
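
An audit of this kind can start from nothing more than a tally over the dataset’s metadata. The sketch below is illustrative: the records, the country field, and the flagging threshold are assumptions, not YFCC100M’s actual schema.

```python
from collections import Counter

# Toy metadata records; a real audit would stream millions of rows.
records = [
    {"image_id": 1, "country": "US"},
    {"image_id": 2, "country": "US"},
    {"image_id": 3, "country": "BR"},
    {"image_id": 4, "country": None},  # missing metadata is itself a signal
]

counts = Counter(r["country"] or "unknown" for r in records)
total = sum(counts.values())
for country, n in counts.most_common():
    share = n / total
    flag = "  <-- overrepresented?" if share > 0.4 else ""
    print(f"{country}: {share:.0%}{flag}")
```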

Further, when foundational AI models are published, there is typically little to no information provided around their limitations. Uncovering potential issues is left to the end user to test — a step that is often overlooked. Without transparency and a complete understanding of a particular data set, it’s challenging to detect the limitations of an AI model, such as lower performance for women, children or developing nations.

At Getty Images, we evaluate whether bias is present in our computer vision models with a series of tests that include images of real, lived experiences, including people with varying levels of abilities, gender fluidity and health conditions. While we can’t catch all biases, we recognize the importance of visualizing an inclusive world and feel it’s important to understand the ones that may exist and confront them when we can.

Leveraging metadata to mitigate biases

So, how do we do this? When working with AI at Getty Images, we start by reviewing the breakdown of people across a training data set, including age, gender and ethnicity.

Fortunately, we’re able to do this because we require a model release for the creative content that we license. This allows us to include self-identified information in our metadata (i.e., a set of data that describes other data), which enables our AI team to automatically search across millions of images and quickly identify skews in the data. Open source data sets are often limited by a lack of metadata, a problem that is exacerbated when combining data sets from multiple sources to create a larger pool.

But let’s be realistic: Not all AI teams have access to expansive metadata, and ours isn’t perfect either. An inherent tradeoff exists: larger training data leads to more powerful models at the expense of understanding the skews and biases in that data.

As an AI industry, it’s crucial that we find a way to overcome this tradeoff given that industries and people globally depend upon it. The key is increasing our focus on data-centric AI models, a movement beginning to take stronger hold.

Where do we go from here?

Confronting biases in AI is no small feat and will take collaboration across the tech industry in the coming years. However, there are precautionary steps that practitioners can take now to make small but notable changes.

For example, when foundational models are published, we could release the corresponding data sheet describing the underlying training data, providing descriptive statistics of what is in the data set. Doing so would provide subsequent users with a sense of a model’s strengths and limitations, empowering them to make informed decisions. The impact could be huge.
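
One lightweight form such a data sheet could take is a machine-readable summary of the training data’s distributions, shipped with the model release. The field names below are hypothetical; a real data sheet would cover far more dimensions.

```python
import json
import pandas as pd

# Toy self-identified metadata for a training set.
meta = pd.DataFrame({
    "age_band": ["18-29", "30-44", "30-44", "45-64", "18-29"],
    "gender":   ["f", "m", "f", "m", "nonbinary"],
    "region":   ["NA", "NA", "EU", "APAC", "NA"],
})

datasheet = {
    "n_examples": len(meta),
    "distributions": {
        col: {k: round(float(v), 2)
              for k, v in meta[col].value_counts(normalize=True).items()}
        for col in meta.columns
    },
}
print(json.dumps(datasheet, indent=2))  # publish this alongside the model
```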

The aforementioned study on foundational models poses the question, “What is the right set of statistics over the data to provide adequate documentation, without being too costly or difficult to obtain?” For visual data specifically, researchers would ideally provide the distributions of age, gender, race, religion, region, abilities, sexual orientation, health conditions and more. But, this metadata is costly and difficult to obtain on large data sets from multiple sources.

A complementary approach would be for AI developers to have access to a running list of known biases and common limitations for foundational models. This could include developing a database of easily accessible tests for biases that AI researchers could regularly contribute to, especially given how people use these models.

For example, Twitter recently facilitated a competition that challenged AI experts to expose biases in their algorithms (Remember when I said that recognition and awareness are key toward mitigation?). We need more of this, everywhere. Practicing crowdsourcing like this on a regular basis could help reduce the burden on individual practitioners.

We don’t have all of the answers yet, but as an industry, we need to take a hard look at the data we are relying on as the route to more powerful models. Doing so comes at a cost, amplifying biases, and we need to accept the role we play in the solution. We need to look for ways to more deeply understand the training data we are using, especially when AI systems are used to represent or interact with real people.

This shift in thinking will help companies of all types and sizes quickly spot skews and counteract them in the development stage, dampening the biases.


Source: https://techcrunch.com/2021/09/24/ai-tradeoffs-balancing-powerful-models-and-potential-biases/


Artificial Intelligence

UK’s AI strategy is ‘ambitious’ but needs funding to match, says Faculty’s Marc Warner

The U.K. published its first-ever national AI strategy this week. The decade-long commitment by the government to levelling up domestic artificial intelligence capabilities — by directing resource and attention toward skills, talent, compute power and data access — has been broadly welcomed by the country’s tech ecosystem, as you’d expect.

But there is a question mark over how serious government is about turning the U.K. into a “global AI superpower” given the lack of a funding announcement to accompany the publication.

A better hint is likely to come shortly, with the spending review tabled for October 27 — which will set out public spending plans for the next three years.

Ahead of that, TechCrunch spoke to Marc Warner, CEO of U.K. AI startup Faculty, who said government needs to show it’s serious about providing long-term support to develop the U.K.’s capabilities and global competitiveness with an appropriate level of funding — while welcoming the “genuine ambition” he believes the government is showing to support AI.

Warner’s startup, which closed a $42 million growth round of funding earlier this year, has started its own internal education program to attract PhDs into the company and turn out future data scientists. Warner himself is also a member of the U.K.’s AI Council, an expert advisory group that provides advice to government and which was consulted on the strategy.

“I think this is a really pretty good strategy,” he told TechCrunch. “There’s a genuine ambition in it which is relatively rare for government and they recognize some of the most important things that we need to fix.

“The problem — and it’s a huge problem — is that there are currently no numbers attached to this.

“So while in principle there’s lots of great stuff in there, in practice it’s totally critical that it’s actually backed by the funding that it needs — and has the commitment of the wider government to the high-quality execution that doing some of these things is going to require.”

Warner warned of the risk of the promising potential of a “pretty serious strategy” fading away if it’s not matched with an appropriate — dare we say it, “world beating” — level of funding.

“That’s a question for the spending review but it seems to me very easy now that having done — what looks like a really pretty serious strategy — then… it fades into a much more generic strategy off the back of not really willing to make the funding commitments, not really willing to push through on the execution side and actually make these things happen.”

Asked what level of funding he’d like to see government putting behind the strategy to deliver on its long-term ambitions, Warner said the U.K. needs to aim high — and do so on a global stage.

“We can look around the world and look at the commitments that other countries are making to their AI strategies, which are in the hundreds of millions to low billions,” he suggested. “And if we are serious about being globally competitive — which the strategy is, and I think we should be — then we’re talking at least matching the funding of other countries, if not exceeding it.”

“Ultimately it comes down to where does this rank in their priority list and if they want to deliver on an ambitious strategy it’s got to be high,” he added.

Access to talent

Discussing the broad detail of what the strategy says is needed for the U.K. to up its AI game, Warner highlighted talent as a key component.

“For a technical field like AI talent is a huge deal. There’s a global competition for that talent. And it seems like the government is taking that seriously and hopefully going to take actions to make sure the U.K. has all the talent it needs for this kind of stuff — from a skills perspective and training up people but also from a visa perspective.”

“From our perspective it’s just wonderful to be able to access some of the most talented people from across the world to come and work on important problems and so the easier that it can be made for those people — or for organizations, whether it’s universities or charities or companies like us or even government departments to start to be able to hire those people it’s just a massive step forward,” he added.

“It’s nice that they’re taking computing and data seriously,” he went on, discussing other elements of the strategy. “Obviously those are the two fuels for the set of techniques of machine learning that are sort of the foundation of modern AI. And having the government think about how we can make that more accessible is clearly a great thing.”

“I think the fact that they’re thinking about the long-term risks of AI is novel and basically important,” he also said.

“Then I think they’re relatively honest that our adoption is weaker than we’d like, as a country, as a set of businesses. And hopefully recognizing that and thinking seriously about how we might go about fixing it — so, all in all, from a strategy perspective it’s actually very good.”

The strategy also talks about the need to establish “clear rules, applied ethical principles and a pro-innovation regulatory environment” for AI. But the U.K. is already lagging on that front — with the European Union proposing an AI Regulation earlier this year.

Asked for his views on AI regulation, Warner advocated for domain-specific rules.

Domain specific AI rules

“We think it would be a big mistake to regulate at the level of just artificial intelligence. Because that’s sort of equivalent to regulating steel where you don’t know whether the steel is going to be used in girders or in a knife or a gun,” he suggested. 

“Either you pick the kind of legislation that we have around girders and it becomes incredibly lax around the people who are using the steel to make guns or you pick the kind of legislation that we have around guns and it becomes almost impossible to make the girders.

“So while it’s totally critical that we regulate AI effectively that is almost certainly done in a domain-specific fashion.”

He gave the example of AIs used in health contexts, such as for diagnosis, as a domain that would naturally require tighter regulation — whereas a use-case like e-commerce would likely not need such guardrails, he suggested.

“I think the government recognizes this in the strategy,” he added. “It does talk about making sure the regulation is really thoughtfully attuned to the domain. And that just seems very sensible to me.

“We think it’s extremely important that AI is done well and safely and for the benefit of society.”

The EU’s proposed risk-based framework for regulating applications of AI does have a focus on certain domains and use cases — which are classified as higher or lower risk, with regulatory requirements varying accordingly. But Warner said he hasn’t yet studied the EU proposal in enough detail to have a view on their approach.  

TechCrunch also asked the Faculty CEO for his views on the U.K. government’s simultaneous push to “reform” the current data protection framework — which includes consulting on changes that could weaken protections for people’s information.

Critics of the reform plan suggest it risks a race to the bottom on privacy standards.

“My view would be that it’s absolutely critical that uses of AI are both legal and legitimate,” said Warner. “As in, if people knew what was being done with their data they would be completely comfortable with what’s going on.”

Data legitimacy

Faculty’s AI business was in existence (albeit under a different name) before the U.K.’s version of the EU General Data Protection Regulation (GDPR) was transposed into national law — although the prior regime was broadly similar. So existing rules don’t appear to have harmed its prospects as a high value and growing U.K. AI business.

Given that, might the government’s appetite to reduce the level of data protection that U.K. citizens enjoy — with the claim that doing so would somehow be “good for innovation” — actually be rather counterproductive for AI businesses which need the trust of users to flourish? (Plus, of course, if any U.K. AI businesses want to do business in the European Union they would need to comply with the GDPR.)

“GDPR is not perfect,” argued Warner. “If you speak to anyone I think that’s widely recognized — so I don’t think that the way it’s being framed as a choice between one or other, I think we can do better than both and I think that’s what we should aim for.

“I think there are lots of ways that we can — over time — be better at regulating these things. So that we maintain the absolute best in class for legitimacy around the use of these technologies which is obviously totally critical for companies like us that want to do business in a way that’s widely accepted and even encouraged in society.

“Basically I don’t think we should compromise but I don’t think it’s a choice between just following GDPR or not. It’s more complicated than that.”

It’s also worth noting there have been a number of high-profile data scandals emanating from the U.K. in recent years.

And Faculty — in its pre-rebranding guise as ASI Data Science — was intimately involved in controversial use of data for targeting ads at voters during the U.K.’s Brexit vote, for example.

Although it has since said it will never do political work again.

Political campaigning

ASI Data Science’s corporate rebranding followed revelations around the data-mining activities of the now defunct and disgraced data company, Cambridge Analytica — which broke into a global scandal in 2018, and led to parliamentarians around the world asking awkward questions about the role of data and predictive modelling to try to sway voters.

The U.K.’s information commissioner even called for an “ethical pause” on the use of data and AI tools for political ad targeting, warning that trust in democracy was being undermined by big data techniques opaquely targeting voters with custom political messaging.

During the Brexit referendum, Warner worked with the U.K. government’s former special advisor, Dominic Cummings, who was a director for the Vote Leave campaign. And Cummings has written extensively that data scientists played a crucial role in winning the Brexit vote — writing, for instance, in a 2016 blog post on how data science and AI was used in the referendum, that:

One of our central ideas was that the campaign had to do things in the field of data that have never been done before. This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback, and some new things we tried such as a new way to do polling… and b) having experts in physics and machine learning do proper data science in the way only they can – i.e. far beyond the normal skills applied in political campaigns. We were the first campaign in the UK to put almost all our money into digital communication then have it partly controlled by people whose normal work was subjects like quantum information (combined with political input from Paul Stephenson and Henry de Zoete, and digital specialists AIQ). We could only do this properly if we had proper canvassing software. We built it partly in-house and partly using an external engineer who we sat in our office for months.

Given this infamous episode in his company’s history, we asked Warner whether he would support AI rules that limit how the technology can be used for political campaigning.

The U.K. government has not made such a proposal, but it is eyeing changes to election law, such as disclosure labels for online political campaigning.

“Faculty as an organization is not interested in politics anymore — it’s not something we’re thinking about,” was Warner’s response on this. 

Pushed again on whether he would support limits on AI in the political campaigning domain, he added: “From Faculty’s perspective we don’t do politics anymore. I think it’s up to the government what they think it’s best around that area.”


Source: https://techcrunch.com/2021/09/24/uks-ai-strategy-is-ambitious-but-needs-funding-to-match-says-facultys-marc-warner/


Artificial Intelligence

Operations observability platform Avenue launches with $4M

Avenue launched Friday to give operations teams their own monitoring tools, and is building a “command center” for this area of business that is often forgotten, co-founder and CEO Justin Bleuel told TechCrunch.

In addition to the launch, the company is announcing $4 million in seed funding, led by Accel, with participation from Flexport and a group of individual investors from companies like Coinbase, Uber, Stripe and Thumbtack.

Bleuel and his co-founder, Jeff Barg, grew up building iOS apps together and then went their separate ways, Bleuel to Uber and Barg to Amazon. While at Uber, Bleuel was working on observability — passive or proactive monitoring — building a lot of the tools in-house to monitor the marketplace for data like rider experience.

Both saw an opening to build these tools themselves for operations teams, and Avenue was born. The technology enables business teams to set up alerts to observe when there is a problem with data, act on it correctly and improve how the overall team functions, think “Datadog or PagerDuty for operations teams,” Bleuel said.

“Data is now all centralized in data warehouses, so you can build on top of them in a way you could not before, like Fivetran, and activate off of it,” he added. “You used to have to build one-to-one alerts for each tool, but now we can actively direct them from the warehouse.”
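
In spirit, the pattern Bleuel describes is a scheduled check run against the warehouse that routes failures to a notifier. The sketch below is an assumption-laden illustration, not Avenue’s product or API: the table, query, and threshold are invented, and sqlite3 stands in for a real warehouse connection.

```python
import sqlite3  # stand-in for a warehouse client (Snowflake, BigQuery, etc.)

def check_offline_stores(conn, threshold=0.05):
    # Compute the share of stores currently offline straight from the warehouse.
    row = conn.execute(
        "SELECT AVG(CASE WHEN online = 0 THEN 1.0 ELSE 0 END) FROM stores"
    ).fetchone()
    offline_rate = row[0] or 0.0
    if offline_rate > threshold:
        notify(f"ALERT: {offline_rate:.0%} of stores are offline")

def notify(message):
    print(message)  # in practice: a Slack webhook, pager, or ticket

# Demo with an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stores (id INTEGER, online INTEGER)")
conn.executemany("INSERT INTO stores VALUES (?, ?)", [(1, 1), (2, 0), (3, 0)])
check_offline_stores(conn)  # fires the alert: 67% of stores offline
```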

Avenue dashboard. Image Credits: Avenue

The company, founded in 2020, came out of the Y Combinator winter 2021 cohort, and one of its early customers is food pickup service Snackpass, which is using Avenue to monitor uptime and receive notifications when restaurant partners, for example, have an ordering tablet battery die or lose Wi-Fi. Snackpass is able to contact the location and help them figure out why they went offline. As a result, the company was able to cut the percentage of offline stores in half, Bleuel said.

Avenue’s customer sweet spot is marketplace companies or warehouses for monitoring stock. However, the co-founders are also seeing their technology being used by other companies, like furniture delivery companies, to monitor for reliability or know their inventory levels. Customers are also packaging up reports and sharing them with other internal teams on how to improve operations.

The company intends to use the funding to build out its small team of three, especially in engineering, to be able to go to market with new products, Barg said.

Avenue is working with more than 50 companies and since April has sent out over 200,000 alerts. The company’s model bills customers per alert per month, and the team is looking at a freemium model as well as enterprise levels.

Meanwhile, Amit Kumar, partner at Accel, said via email that the firm is “very thesis-driven,” one of its theses being the modern data stack. Accel made early investments into companies like Airbyte, Monte Carlo and Privacera, and saw opportunity for new downstream applications built on that innovation, among which Avenue stood out.

The combination of Bleuel and Barg was “particularly compelling because of their fluency with the problem space” due to their backgrounds at Uber and Amazon and experiencing firsthand how “poorly served” operations teams are and how that can affect the overall business.

He believes the current approach of ops teams in the market today relies heavily on dashboarding, periodic sweeps and color-coded Excel sheets — a process that is “often inaccurate and disorganized.” At the same time, product engineers are flush with well-established tools for observability and incident response.

“Given the rise of ‘atoms’ startups powered by a cohort of ex-Uber and ex-Amazon operators, Justin and Jeff were uniquely positioned to find early design partners and customers among their peer networks,” Kumar added. “As ops teams become both increasingly commonplace and essential to business outcomes, I expect their processes to mature and benefit from similar tooling. This is the thesis behind Avenue, and early traction indicates that next-gen leading ops-heavy companies agree.”


Source: https://techcrunch.com/2021/09/24/operations-observability-platform-avenue-launches-with-4m/
