

Canadian Government Collaborates with Startups on Autonomous Trucking 


The government of Ontario is supporting private industry to foster the industry around autonomous driving systems, especially focused on autonomous trucking. (Credit: Getty Images) 

By AI Trends Staff 

Several Canadian startups are making a play in autonomous truck technology, with support from the government of Ontario as well as venture capitalists.  

Autonomous trucking in Canada has been in the spotlight since the June 8 launch of Raquel Urtasun’s company Waabi, with $83.5 million in backing. Waabi’s autonomous driving software relies heavily on simulation, and its initial focus will be autonomous trucking. (See AI Trends, June 24, 2021)  

Partnerships pursuing autonomous truck technology in Canada, described in a recent account in trucknews, include:  

  • BlackBerry, partnering with Amazon Web Services to develop the BlackBerry IVY platform, aimed at pulling together data from vehicle sensors; 
  • Pitstop, offering a cloud-based analytics platform focused on predictive maintenance;  
  • Bluewire, which uses data analysis to help trucking companies protect their reputations against false narratives; and 
  • NuPort Robotics, a company retrofitting highly autonomous trucks that will move freight along the so-called “middle mile” connecting warehouses and distribution centers. 

The developments follow the evolution of telematics systems, the convergence of telecommunications and information processing, to incorporate artificial intelligence.   

Mike Branch, vice-president, data and analytics, Geotab

“There is AI for an individual truck, which could involve anything from vision systems and autonomous driving, to predictive maintenance. And then there is AI related to fleet operations, which could involve everything from route optimization to an electric vehicle suitability assessment,” stated Mike Branch, vice-president, data and analytics at Geotab, headquartered in Oakville, Ont. Geotab provides telematics services for commercial fleet management via a software-as-a-service model. In 2015, the company had one data scientist; today it has 80 of them.  

Telematics Has Evolved to Now Include AI 

Telematics systems originally focused on data to identify vehicle location and whether deliveries or pickups were made on schedule. They have since evolved to use data picked up from vehicle sensors, which can predict that a component might fail and needs maintenance before it breaks down.  
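
As a rough illustration of how such sensor data might feed a failure predictor, here is a minimal sketch using scikit-learn. The sensor features, thresholds, and data are hypothetical stand-ins, not anything Geotab or the fleets described actually use.

```python
# Minimal predictive-maintenance sketch: train a classifier on
# telematics sensor readings to flag components at risk of failure.
# Feature names, thresholds, and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Columns: engine_temp_c, oil_pressure_kpa, vibration_rms, mileage_km
X = rng.normal(loc=[90, 300, 1.0, 200_000],
               scale=[10, 40, 0.3, 50_000], size=(1000, 4))
# Toy label: overheating combined with high vibration precedes failure.
y = ((X[:, 0] > 100) & (X[:, 2] > 1.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability that each truck needs maintenance before its next run.
print(model.predict_proba(X_test[:3])[:, 1])
```

In practice the value comes from the fleet-wide telematics pipeline feeding such a model, not from the classifier itself.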

Sandeep Kar, chief strategy officer, Fleet Complete, Toronto

“We can create certain data models, and on that data model, apply artificial intelligence and machine learning techniques and technologies to predict an outcome,” stated Sandeep Kar, chief strategy officer at Fleet Complete of Toronto. The company, founded in 2000, offers GPS tracking and asset management software.   

He sees the move to autonomous driving as having a big impact on telematics. “Autonomous driving will change everything,” Kar stated. “The moment you bring that technology and make it mainstream, what you’re doing is you’re creating some really profound changes … The advent of autonomous driving is going to be that inflection point for so many technologies and business models in trucking.”  

Trucking companies will face potential challenges in incorporating AI into their solutions. “Part of the limitation is customers being able to effectively implement an AI solution,” stated Christopher Plaat, senior vice-president and general manager of BlackBerry Radar, a trailer tracking and monitoring system. “There are struggles in trucking being able to take in very complex solutions that are difficult to implement.”  

Ontario Government Partners with NuPort Robotics  

The government of Ontario in March announced a $3 million partnership to bolster the province’s contribution to the development of autonomous vehicle technology, according to an account in The Globe and Mail.  

The investment includes $1 million contributions each from Ontario’s Autonomous Vehicle Innovation Network (AVIN), Canadian Tire, and NuPort Robotics, a Toronto-based AI startup working to test its autonomous truck software system. 

“The goal is to demonstrate automated technology in a predetermined set of routes, which will enable us to showcase quantitatively that these systems are more efficient, safer, and enhance the driver experience,” stated Raghavender Sahdev, the CEO of NuPort Robotics. “Our goal is to put Ontario and Canada on the world map when it comes to automated technologies.” 

The project will focus on the short-haul route trucks take between distribution centers such as warehouses and terminals every day, in what are called the “middle miles.” Hoped-for benefits include improved safety, reduced carbon emissions, and lower maintenance and repair costs. “Because we know what the route looks like, we can deploy state-of-the-art machine-learning and autonomous-driving algorithms, which then allow us to improve performance,” Sahdev stated.   

Two semi-tractor trailer trucks have been outfitted with high-tech sensors and controls, including obstacle and collision avoidance systems.  

“The main purpose for all of this technology is increased safety,” stated Raed Kadri, the head of AVIN, which is backed by the government of Ontario with the goal of fostering growth of the transportation and infrastructure technology in the region. “Developing and deploying these technologies—pushing them into being commercially ready—is important, because the more automation you have, the more potential for increased safety,” Kadri stated. 

The Canadians see an opportunity in focusing on the trucking industry. Some 70% of domestic freight is transported by truck, according to Statistics Canada, but the industry is facing an imminent talent shortage. Vacancies in the industry reached close to seven percent in 2019, more than double the national average, according to the Canadian Trucking Alliance. The industry is looking to autonomous solutions to help make up for the shortage. 

“There is a pretty good business case in the sense that there’s a huge shortage of truck drivers,” stated University of Waterloo electrical and computer engineering professor Dr. Krzysztof Czarnecki. “This technology is coming no matter what, it’s just a question of time,” he added. “Maybe it’s not coming as fast as some would like, but it’s coming, and we need companies to innovate in this space, and make sure Canada is not just watching from the sidelines.” 

For the Canadians, the model of government support for private industry has proved effective for working toward a number of worthwhile goals, including addressing environmental concerns. “This project applies unique and made-in-Ontario Artificial Intelligence technology that offers increased safety and efficiency, with a reduced carbon footprint, to the goods supply chains on which we all rely,” stated Vic Fedeli, Ontario minister of economic development, job creation and trade, in an account in SmartCitiesWorld. 

“This is the latest example of how Ontario’s Autonomous Vehicle Innovation Network acts as a catalyst, fostering partnerships between ambitious technology start-ups and industry to develop and commercialize next generation transportation technologies that strengthen our economy and benefit society.”  

Read the source articles and information in trucknews, in AI Trends (June 24, 2021), in The Globe and Mail, and in SmartCitiesWorld. 


Source: https://www.aitrends.com/startups/canadian-government-collaborates-with-startups-on-autonomous-trucking/


Funding wrap: AI-driven Asian fintech closes $400M series D


It was a diverse week for investment in fintech automations, headlined by the largest-ever funding round targeting AI-driven tech from Asia. Here in the U.S., investors threw their support behind an intelligent document processing technology firm and an altruistic startup aiming to help millennials and others progress toward homeownership.

Advance Intelligence Group

In the biggest […]


Source: https://bankautomationnews.com/allposts/retail/funding-wrap-ai-driven-asian-fintech-closes-400m-series-d/



Deep learning helps predict new drug combinations to fight Covid-19


The existential threat of Covid-19 has highlighted an acute need to develop working therapeutics against emerging health concerns. One of the luxuries deep learning has afforded us is the ability to modify the landscape as it unfolds — so long as we can keep up with the viral threat, and access the right data. 

As with all new medical maladies, oftentimes the data need time to catch up, and the virus takes no time to slow down, posing a difficult challenge as it can quickly mutate and become resistant to existing drugs. This led scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Jameel Clinic for Machine Learning in Health to ask: How can we identify the right synergistic drug combinations for the rapidly spreading SARS-CoV-2? 

Typically, data scientists use deep learning to pick out drug combinations with large existing datasets for things like cancer and cardiovascular disease, but, understandably, they can’t be used for new illnesses with limited data.

Without the necessary facts and figures, the team needed a new approach: a neural network that wears two hats. Since drug synergy often occurs through inhibition of biological targets (like proteins or nucleic acids), the model jointly learns drug-target interaction and drug-drug synergy to mine new combinations. The drug-target predictor models the interaction between a drug and a set of known biological targets that are related to the chosen disease. The target-disease association predictor learns to understand a drug’s antiviral activity, which means determining the virus yield in infected tissue cultures. Together, they can predict the synergy of two drugs. 
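
As a highly simplified sketch of that two-headed structure, here is a toy PyTorch model that predicts per-target interaction from molecular features and scores a pair’s synergy from the combined target profile. The dimensions, the fingerprint input, and the combination rule are illustrative assumptions, not the authors’ published architecture.

```python
# Toy joint model: a shared head predicts drug-target interaction
# from molecular features; synergy of a pair is scored from the
# combined target-inhibition profile. All sizes are made-up.
import torch
import torch.nn as nn

N_FEATURES = 2048   # e.g., a molecular fingerprint (assumed input)
N_TARGETS = 64      # biological targets tied to the disease

class JointSynergyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Drug-target head: structure -> per-target interaction scores.
        self.target_head = nn.Sequential(
            nn.Linear(N_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, N_TARGETS), nn.Sigmoid(),
        )
        # Synergy head: combined target profile -> synergy score.
        self.synergy_head = nn.Linear(N_TARGETS, 1)

    def forward(self, drug_a, drug_b):
        ta, tb = self.target_head(drug_a), self.target_head(drug_b)
        # Probabilistic OR: a target is hit if either drug inhibits it.
        combined = ta + tb - ta * tb
        return self.synergy_head(combined)

model = JointSynergyModel()
a = torch.rand(8, N_FEATURES)  # batch of 8 hypothetical drug pairs
b = torch.rand(8, N_FEATURES)
print(model(a, b).shape)  # torch.Size([8, 1])
```

Because the drug-target head can be trained on relatively abundant single-drug data, the synergy head needs far fewer of the scarce combination measurements, which is the leverage Jin describes below.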

Two new drug combinations were found using this approach: remdesivir (currently approved by the FDA to treat Covid-19) and reserpine, as well as remdesivir and IQ-1S, which, in biological assays, proved powerful against the virus. The study has been published in the Proceedings of the National Academy of Sciences.

“By modeling interactions between drugs and biological targets, we can significantly decrease the dependence on combination synergy data,” says Wengong Jin SM ’18, a postdoc at the Broad Institute of MIT and Harvard who recently completed his doctoral work in CSAIL, and who is the lead author on a new paper about the research. “In contrast to previous approaches using drug-target interaction as fixed descriptors, our method learns to predict drug-target interaction from molecular structures. This is advantageous since a large proportion of compounds have incomplete drug-target interaction information.” 

Using multiple medications to maximize potency, while also decreasing side effects, is practically ubiquitous for the aforementioned cancer and cardiovascular disease, as well as a host of others such as tuberculosis, leprosy, and malaria. Using specialized drug cocktails can, quite importantly, reduce the grave and sometimes public threat of resistance (think methicillin-resistant Staphylococcus aureus, known as “MRSA”), since many drug-resistant mutations are mutually exclusive. It’s much harder for a virus to develop two mutations at the same time and then become resistant to two drugs in a combination therapy. 
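
The mutual-exclusivity argument can be made concrete with back-of-the-envelope arithmetic; the mutation probabilities below are invented purely for illustration.

```python
# If resistance mutations arise independently, the chance of a virus
# acquiring both at once is the product of the individual chances.
# Rates here are illustrative, not measured values.
p_resist_drug_a = 1e-6   # per-replication chance of resisting drug A
p_resist_drug_b = 1e-6   # per-replication chance of resisting drug B

p_single = p_resist_drug_a                     # monotherapy escape
p_double = p_resist_drug_a * p_resist_drug_b   # combination escape

print(f"monotherapy escape: {p_single:.0e}")   # 1e-06
print(f"combination escape: {p_double:.0e}")   # 1e-12
```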

Importantly, the model isn’t limited to just one SARS-CoV-2 strain — it could also potentially be used for the increasingly contagious Delta variant or other variants of concern that may arise. To extend the model’s efficacy against these strains, you’d only need additional drug combination synergy data for the relevant mutation(s). In addition, the team applied their approach to HIV and pancreatic cancer.

To further refine their biological modeling down the line, the team plans to incorporate additional information such as protein-protein interaction and gene regulatory networks. 

Another direction for future work they’re exploring is something called “active learning.” Many drug combination models are biased toward certain chemical spaces due to their limited size, so there’s high uncertainty in predictions. Active learning helps guide the data collection process and improve accuracy in a wider chemical space. 
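
A minimal sketch of such an uncertainty-driven loop, using disagreement across a random forest’s trees as the acquisition signal; the featurization and data here are generic placeholders, not the group’s actual pipeline.

```python
# Active learning sketch: query the drug pairs an ensemble is most
# uncertain about, so labeling effort widens chemical-space coverage.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_labeled = rng.random((50, 16))   # featurized drug pairs (placeholder)
y_labeled = rng.random(50)         # measured synergy scores (placeholder)
X_pool = rng.random((500, 16))     # unlabeled candidate pairs

model = RandomForestRegressor(n_estimators=50, random_state=1)
model.fit(X_labeled, y_labeled)

# Disagreement across trees approximates predictive uncertainty.
per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
uncertainty = per_tree.std(axis=0)

# Send the 10 most uncertain pairs to the lab next.
query_idx = np.argsort(uncertainty)[-10:]
print(query_idx)
```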

Jin wrote the paper alongside Jonathan M. Stokes, Banting Fellow at the Broad Institute of MIT and Harvard; Richard T. Eastman, a scientist at the National Center for Advancing Translational Sciences (NCATS); Zina Itkin, a scientist at the National Institutes of Health; Alexey V. Zakharov, informatics lead at NCATS; James J. Collins, professor of biological engineering at MIT; and Tommi S. Jaakkola and Regina Barzilay, professors of electrical engineering and computer science at MIT.

This project is supported by the Abdul Latif Jameel Clinic for Machine Learning in Health; the Defense Threat Reduction Agency; Patrick J. McGovern Foundation; the DARPA Accelerated Molecular Discovery program; and in part by the Intramural/Extramural Research Program of the National Center for Advancing Translational Sciences within the National Institutes of Health.


Source: https://news.mit.edu/2021/deep-learning-helps-predict-new-drug-combinations-fight-covid-19-0924



AI tradeoffs: Balancing powerful models and potential biases


As developers unlock new AI tools, the risk for perpetuating harmful biases becomes increasingly high — especially on the heels of a year like 2020, which reimagined many of our social and cultural norms upon which AI algorithms have long been trained.

A handful of foundational models are emerging that rely on training data of a magnitude that makes them inherently powerful, but that power does not come without the risk of harmful biases — and we need to collectively acknowledge that fact.

Recognition in itself is easy. Understanding is much harder, as is mitigation against future risks. Which is to say that we must first take steps to ensure that we understand the roots of these biases in an effort to better understand the risks involved with developing AI models.

The sneaky origins of bias

Today’s AI models are often pre-trained and open source, which allows researchers and companies alike to implement AI quickly and tailor it to their specific needs.

While this approach makes AI more commercially available, there’s a real downside — namely, that a handful of models now underpin the majority of AI applications across industries and continents. These systems are burdened by undetected or unknown biases, meaning developers who adapt them for their applications are working from a fragile foundation.

According to a recent study by Stanford’s Center for Research on Foundation Models, any biases within these foundational models or the data upon which they’re built are inherited by those using them, creating potential for amplification.

For example, YFCC100M is a publicly available data set from Flickr that is commonly used to train models. When you examine the images of people within this data set, you’ll see that the distribution of images around the world is heavily skewed toward the U.S., meaning there’s a lack of representation of people from other regions and cultures.

These types of skews in training data result in AI models that have under- or overrepresentation biases in their output — i.e., an output that is more dominant for white or Western cultures. When multiple data sets are combined to create large sets of training data, there is a lack of transparency, and it can become increasingly difficult to know if you have a balanced mix of people, regions and cultures. It’s no surprise that the resulting AI models are published with egregious biases contained therein.

Further, when foundational AI models are published, there is typically little to no information provided around their limitations. Uncovering potential issues is left to the end user to test — a step that is often overlooked. Without transparency and a complete understanding of a particular data set, it’s challenging to detect the limitations of an AI model, such as lower performance for women, children or developing nations.

At Getty Images, we evaluate whether bias is present in our computer vision models with a series of tests that include images of real, lived experiences, including people with varying levels of abilities, gender fluidity and health conditions. While we can’t catch all biases, we recognize the importance of visualizing an inclusive world and feel it’s important to understand the ones that may exist and confront them when we can.

Leveraging metadata to mitigate biases

So, how do we do this? When working with AI at Getty Images, we start by reviewing the breakdown of people across a training data set, including age, gender and ethnicity.

Fortunately, we’re able to do this because we require a model release for the creative content that we license. This allows us to include self-identified information in our metadata (i.e., a set of data that describes other data), which enables our AI team to automatically search across millions of images and quickly identify skews in the data. Open source data sets are often limited by a lack of metadata, a problem that is exacerbated when combining data sets from multiple sources to create a larger pool.
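
As a sketch of what such an automated skew check might look like, here is a minimal pandas example; the column names, values, and threshold are invented for illustration and are not Getty Images’ actual schema.

```python
# Sketch: surface demographic skews in a training set's metadata.
# Columns, values, and the 50% threshold are hypothetical.
import pandas as pd

metadata = pd.DataFrame({
    "image_id": range(6),
    "region": ["US", "US", "US", "US", "EU", "Asia"],
    "gender": ["f", "m", "m", "m", "f", "m"],
})

for column in ["region", "gender"]:
    shares = metadata[column].value_counts(normalize=True)
    print(f"\n{column} distribution:\n{shares}")
    # Flag any group that dominates the data set.
    dominant = shares[shares > 0.5]
    if not dominant.empty:
        print(f"possible skew in '{column}': {dominant.index.tolist()}")
```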

But let’s be realistic: Not all AI teams have access to expansive metadata, and ours isn’t perfect either. An inherent tradeoff exists — larger training sets lead to more powerful models, at the expense of understanding the skews and biases in that data.

As an AI industry, it’s crucial that we find a way to overcome this tradeoff given that industries and people globally depend upon it. The key is increasing our focus on data-centric AI models, a movement beginning to take stronger hold.

Where do we go from here?

Confronting biases in AI is no small feat and will take collaboration across the tech industry in the coming years. However, there are precautionary steps that practitioners can take now to make small but notable changes.

For example, when foundational models are published, we could release the corresponding data sheet describing the underlying training data, providing descriptive statistics of what is in the data set. Doing so would provide subsequent users with a sense of a model’s strengths and limitations, empowering them to make informed decisions. The impact could be huge.

The aforementioned study on foundational models poses the question, “What is the right set of statistics over the data to provide adequate documentation, without being too costly or difficult to obtain?” For visual data specifically, researchers would ideally provide the distributions of age, gender, race, religion, region, abilities, sexual orientation, health conditions and more. But, this metadata is costly and difficult to obtain on large data sets from multiple sources.
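
One answer to that question could be a machine-readable summary published alongside the model. As a sketch, the snippet below emits such a data sheet; the field names and example data are invented for illustration.

```python
# Sketch: emit a minimal machine-readable data sheet describing a
# training set's demographic distributions. Fields are illustrative.
import json

import pandas as pd

metadata = pd.DataFrame({
    "region": ["US", "US", "US", "EU", "Asia", "Africa"],
    "age_band": ["18-30", "18-30", "31-50", "31-50", "18-30", "51+"],
})

datasheet = {
    "dataset": "example-images-v1",  # hypothetical name
    "n_records": len(metadata),
    "distributions": {
        col: {k: float(v) for k, v in
              metadata[col].value_counts(normalize=True).round(2).items()}
        for col in metadata.columns
    },
    "known_limitations": ["regions outside the US underrepresented"],
}
print(json.dumps(datasheet, indent=2))
```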

A complementary approach would be for AI developers to have access to a running list of known biases and common limitations for foundational models. This could include developing a database of easily accessible tests for biases that AI researchers could regularly contribute to, especially given how people use these models.

For example, Twitter recently facilitated a competition that challenged AI experts to expose biases in their algorithms (Remember when I said that recognition and awareness are key toward mitigation?). We need more of this, everywhere. Practicing crowdsourcing like this on a regular basis could help reduce the burden on individual practitioners.

We don’t have all of the answers yet, but as an industry, we need to take a hard look at the data we are using as the solution to more powerful models. Doing so comes at a cost — amplifying biases — and we need to accept the role we play within the solution. We need to look for ways to more deeply understand the training data we are using, especially when AI systems are used to represent or interact with real people.

This shift in thinking will help companies of all types and sizes quickly spot skews and counteract them in the development stage, dampening the biases.


Source: https://techcrunch.com/2021/09/24/ai-tradeoffs-balancing-powerful-models-and-potential-biases/



UK’s AI strategy is ‘ambitious’ but needs funding to match, says Faculty’s Marc Warner


The U.K. published its first-ever national AI strategy this week. The decade-long commitment by the government to levelling up domestic artificial intelligence capabilities — by directing resource and attention toward skills, talent, compute power and data access — has been broadly welcomed by the country’s tech ecosystem, as you’d expect.

But there is a question mark over how serious government is about turning the U.K. into a “global AI superpower” given the lack of a funding announcement to accompany the publication.

A better hint is likely to come shortly, with the spending review tabled for October 27 — which will set out public spending plans for the next three years.

Ahead of that, TechCrunch spoke to Marc Warner, CEO of U.K. AI startup Faculty. While welcoming the “genuine ambition” he believes the government is showing to support AI, Warner said the government needs to demonstrate that it is serious about long-term support for the U.K.’s capabilities and global competitiveness by backing the strategy with an appropriate level of funding.

Warner’s startup, which closed a $42 million growth round of funding earlier this year, has started its own internal education program to attract PhDs into the company and turn out future data scientists. Warner himself is also a member of the U.K.’s AI Council, an expert advisory group that advises the government and was consulted on the strategy.

“I think this is a really pretty good strategy,” he told TechCrunch. “There’s a genuine ambition in it which is relatively rare for government and they recognize some of the most important things that we need to fix.

“The problem — and it’s a huge problem — is that there are currently no numbers attached to this.

“So while in principle there’s lots of great stuff in there, in practice it’s totally critical that it’s actually backed by the funding that it needs — and has the commitment of the wider government to the high-quality execution that doing some of these things is going to require.”

Warner warned of the risk of the promising potential of a “pretty serious strategy” fading away if it’s not matched with an appropriate — dare we say it, “world beating” — level of funding.

“That’s a question for the spending review but it seems to me very easy now that having done — what looks like a really pretty serious strategy — then… it fades into a much more generic strategy off the back of not really willing to make the funding commitments, not really willing to push through on the execution side and actually make these things happen.”

Asked what level of funding he’d like to see government putting behind the strategy to deliver on its long-term ambitions, Warner said the U.K. needs to aim high — and do so on a global stage.

“We can look around the world and look at the commitments that other countries are making to their AI strategies, which are in the hundreds of millions to low billions,” he suggested. “And if we are serious about being globally competitive — which the strategy is, and I think we should be — then we’re talking at least matching the funding of other countries, if not exceeding it.”

“Ultimately it comes down to where does this rank in their priority list and if they want to deliver on an ambitious strategy it’s got to be high,” he added.

Access to talent

Discussing the broad detail of what the strategy says is needed for the U.K. to up its AI game, Warner highlighted talent as a key component.

“For a technical field like AI, talent is a huge deal. There’s a global competition for that talent. And it seems like the government is taking that seriously and hopefully going to take actions to make sure the U.K. has all the talent it needs for this kind of stuff — from a skills perspective and training up people, but also from a visa perspective.”

“From our perspective it’s just wonderful to be able to access some of the most talented people from across the world to come and work on important problems and so the easier that it can be made for those people — or for organizations, whether it’s universities or charities or companies like us or even government departments to start to be able to hire those people it’s just a massive step forward,” he added.

“It’s nice that they’re taking computing and data seriously,” he went on, discussing other elements of the strategy. “Obviously those are the two fuels for the set of techniques of machine learning that are sort of the foundation of modern AI. And having the government think about how we can make that more accessible is clearly a great thing.”

“I think the fact that they’re thinking about the long-term risks of AI is novel and basically important,” he also said.

“Then I think they’re relatively honest that our adoption is weaker than we’d like, as a country, as a set of businesses. And hopefully recognizing that and thinking seriously about how we might go about fixing it — so, all in all, from a strategy perspective it’s actually very good.”

The strategy also talks about the need to establish “clear rules, applied ethical principles and a pro-innovation regulatory environment” for AI. But the U.K. is already lagging on that front — with the European Union proposing an AI Regulation earlier this year.

Asked for his views on AI regulation Warner advocated for domain-specific rules.

Domain specific AI rules

“We think it would be a big mistake to regulate at the level of just artificial intelligence. Because that’s sort of equivalent to regulating steel where you don’t know whether the steel is going to be used in girders or in a knife or a gun,” he suggested. 

“Either you pick the kind of legislation that we have around girders and it becomes incredibly lax around the people who are using the steel to make guns or you pick the kind of legislation that we have around guns and it becomes almost impossible to make the girders.

“So while it’s totally critical that we regulate AI effectively that is almost certainly done in a domain-specific fashion.”

He gave the example of AIs used in health contexts, such as for diagnosis, as a domain that would naturally require tighter regulation — whereas a use-case like e-commerce would likely not need such guardrails, he suggested.

“I think the government recognizes this in the strategy,” he added. “It does talk about making sure the regulation is really thoughtfully attuned to the domain. And that just seems very sensible to me.

“We think it’s extremely important that AI is done well and safely and for the benefit of society.”

The EU’s proposed risk-based framework for regulating applications of AI does have a focus on certain domains and use cases — which are classified as higher or lower risk, with regulatory requirements varying accordingly. But Warner said he hasn’t yet studied the EU proposal in enough detail to have a view on their approach.  

TechCrunch also asked the Faculty CEO for his views on the U.K. government’s simultaneous push to “reform” the current data protection framework — which includes consulting on changes that could weaken protections for people’s information.

Critics of the reform plan suggest it risks a race to the bottom on privacy standards.

“My view would be that it’s absolutely critical that uses of AI are both legal and legitimate,” said Warner. “As in, if people knew what was being done with their data they would be completely comfortable with what’s going on.”

Data legitimacy

Faculty’s AI business was in existence (albeit under a different name) before the U.K.’s version of the EU General Data Protection Regulation (GDPR) was transposed into national law — although the prior regime was broadly similar. So existing rules don’t appear to have harmed its prospects as a high value and growing U.K. AI business.

Given that, might the government’s appetite to reduce the level of data protection that U.K. citizens enjoy — with the claim that doing so would somehow be “good for innovation” — actually be rather counterproductive for AI businesses which need the trust of users to flourish? (Plus, of course, if any U.K. AI businesses want to do business in the European Union they would need to comply with the GDPR.)

“GDPR is not perfect,” argued Warner. “If you speak to anyone I think that’s widely recognized — so I don’t think that the way it’s being framed as a choice between one or other, I think we can do better than both and I think that’s what we should aim for.

“I think there are lots of ways that we can — over time — be better at regulating these things. So that we maintain the absolute best in class for legitimacy around the use of these technologies which is obviously totally critical for companies like us that want to do business in a way that’s widely accepted and even encouraged in society.

“Basically I don’t think we should compromise but I don’t think it’s a choice between just following GDPR or not. It’s more complicated than that.”

It’s also worth noting there have been a number of high-profile data scandals emanating from the U.K. in recent years.

And Faculty — in its pre-rebranding guise as ASI Data Science — was intimately involved in controversial use of data for targeting ads at voters during the U.K.’s Brexit vote, for example.

Although it has since said it will never do political work again.

Political campaigning

ASI Data Science’s corporate rebranding followed revelations around the data-mining activities of the now defunct and disgraced data company, Cambridge Analytica — which broke into a global scandal in 2018, and led to parliamentarians around the world asking awkward questions about the role of data and predictive modelling to try to sway voters.

The U.K.’s information commissioner even called for an “ethical pause” on the use of data and AI tools for political ad targeting, warning that trust in democracy was being undermined by big data techniques opaquely targeting voters with custom political messaging.

During the Brexit referendum, Warner worked with the U.K. government’s former special advisor, Dominic Cummings, who was a director for the Vote Leave campaign. And Cummings has written extensively that data scientists played a crucial role in winning the Brexit vote — writing, for instance, in a 2016 blog post on how data science and AI was used in the referendum, that:

One of our central ideas was that the campaign had to do things in the field of data that have never been done before. This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback, and some new things we tried such as a new way to do polling… and b) having experts in physics and machine learning do proper data science in the way only they can – i.e. far beyond the normal skills applied in political campaigns. We were the first campaign in the UK to put almost all our money into digital communication then have it partly controlled by people whose normal work was subjects like quantum information (combined with political input from Paul Stephenson and Henry de Zoete, and digital specialists AIQ). We could only do this properly if we had proper canvassing software. We built it partly in-house and partly using an external engineer who we sat in our office for months.

Given this infamous episode in his company’s history, we asked Warner whether he would support AI rules that limit how the technology can be used for political campaigning.

The U.K. government has not made such a proposal, but it is eyeing changes to election law, such as disclosure labels for online political campaigning.

“Faculty as an organization is not interested in politics anymore — it’s not something we’re thinking about,” was Warner’s response on this. 

Pushed again on whether he would support limits on AI in the political campaigning domain, he added: “From Faculty’s perspective we don’t do politics anymore. I think it’s up to the government what they think it’s best around that area.”


Source: https://techcrunch.com/2021/09/24/uks-ai-strategy-is-ambitious-but-needs-funding-to-match-says-facultys-marc-warner/
