

NeurIPS competition tackles climate data challenges



The Earth’s climate is a highly complex, dynamic system. It is difficult to understand and predict how different climate variables interact. Finding causal relations in climate research today relies mostly on expensive and time-consuming model simulations. Fortunately, with the explosion in the availability of large-scale climate data and increasing computational power via the cloud, there are new, complementary ways to use machine learning (ML) and causal inference to understand relationships in climate data, like rainfall and ocean temperatures. This understanding can help improve weather forecasting and identify the causes of extreme events, like hurricanes and tornadoes. To help accelerate progress, AWS sponsored the Causality for Climate (C4C) competition at NeurIPS in 2019. This competition, which focused on the causal discovery and development of new methodologies to understand climate data, was one of 12 accepted NeurIPS 2019 competitions. It was organized by Jakob Runge and colleagues of the German Aerospace Center with collaborators at the University of Valencia.

Machine learning offers flexible methods that learn and adapt to the characteristics of climate data rather than assuming a rigid statistical model. This is important given the complex nature of climate data, which exhibits interdependencies between multiple subcomponents. Even with an unprecedented increase in the volume of climate observations, it is difficult to find patterns and identify complex relationships amongst the data. The goal of the competition was to develop new benchmarks and find new methods that can be applied to real-world challenges in climate. Participants were provided time series datasets featuring climate data (such as precipitation, humidity, and temperature) and AWS credits, with the aim to discover novel methodologies and open up new interdisciplinary research for climate data analysis.

The top prize went to a team of Ph.D.s and postdocs from the Copenhagen Causality Lab in the Department of Mathematical Sciences at the University of Copenhagen. They worked with 34 different datasets, with the aim of identifying the causal relationships among the variables. The team started with simple baseline approaches and closely monitored the results as they introduced new variations, to identify the methods that performed best across the competition track. Because the datasets were blinded and participants did not know which measurements the different time series corresponded to, they could optimize for the best methodologies without being influenced by preconceived hypotheses or assumptions. For more information, see their GitHub repo.
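To give a flavor of what a "simple baseline approach" to causal discovery on time series can look like, here is a minimal sketch that scores each ordered pair of variables by the strength of lagged correlation. This is only an illustration of the general idea, not the winning team's actual method; the function name, lag window, and synthetic data are all invented for this example.

```python
import numpy as np

def lagged_corr_scores(data, max_lag=3):
    """Score each ordered pair (i -> j) by the largest absolute
    correlation between past values of variable i and current
    values of variable j. A crude baseline for causal discovery:
    a high score suggests i may drive j."""
    n_time, n_vars = data.shape
    scores = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(n_vars):
            if i == j:
                continue
            best = 0.0
            for lag in range(1, max_lag + 1):
                past_i = data[:-lag, i]   # values of i, shifted back by `lag`
                now_j = data[lag:, j]     # aligned current values of j
                c = abs(np.corrcoef(past_i, now_j)[0, 1])
                best = max(best, c)
            scores[i, j] = best
    return scores

# Synthetic example: X drives Y with a one-step lag.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = rng.normal()
y[1:] = 0.9 * x[:-1] + 0.1 * rng.normal(size=499)

scores = lagged_corr_scores(np.column_stack([x, y]))
print(scores[0, 1] > scores[1, 0])  # X -> Y dominates Y -> X
```

On the synthetic data the X-to-Y score is large while the reverse direction stays near zero, which is the asymmetry such baselines exploit. Real competition entries have to cope with confounders, nonlinearity, and autocorrelation, which is exactly where methods beyond this baseline earn their keep.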

“Raising awareness in the community to focus on one of the most pressing issues of our time is a benefit of competitions like this,” said Sebastian Weichwald, a postdoc from the University of Copenhagen. “We are excited to see what was behind the competition data. As a next step, we would like to further investigate why the methods we employed performed well in this competition and continue to iterate and learn to ultimately make an impact on sustainability and climate science.”

The winning team from University of Copenhagen. From left to right: Lasse Petersen, Nikolaj Thams, Phillip Bredahl Mogensen, Sebastian Weichwald, Gherardo Varando and Martin Emil Jakobsen.

A second winning team, made up of professors and Ph.D.s from the University of Ghent (Belgium), the University of Palermo (Italy), the University of Bari (Italy), and the University of Rome La Sapienza (Italy), focused on the nonlinear nature of climate interactions. Their method was inspired by the theory of chaotic systems, which originated from the study of weather itself: a chaotic system that cannot be forecast more than a few days out. The team used an approach that better captures those dynamics, which is why it succeeded in the categories with chaotic nonlinear datasets. Developing better weather-forecasting tools can, in turn, help us understand climate change and extreme weather events. For more information, see their GitHub repo.

The winners were announced at NeurIPS on December 14, 2019. With 146 different methods and over 6,500 submitted results, the teams used AWS credits to iterate, experiment, and learn what methods delivered the best results. Their experimentation will help close the gap in understanding climate interactions and causality and raise awareness in a variety of communities, from physics to ML to statistics, to spur new innovation to improve upon our understanding of global climate.

About the author

Rebecca Wolff is a senior product marketing manager for AWS AI/ML. She is passionate about the power of machine learning to benefit society and improve lives. Out of the office, she likes to try new recipes and explore Seattle neighborhoods with her family.



Google gets woke on gender in Vision API, Amazon happy to sell its facial recognition code to foreigners, and more



Elon Musk roasts OpenAI, says it should be more open

Roundup Hello readers. If you’re struggling to keep up with all the AI-related news spewed out and have already read what we’ve covered this week, then here’s more.

Me, sexist? No! What’s gender anyway?: Google’s Vision API, a service that offers pre-trained computer vision models for image recognition, will no longer identify gender in photos.

If an image of a person is fed into the API, Google will now label them as a ‘person’ rather than ‘male’ or ‘man,’ or ‘female’ or ‘woman’. The move to scrap “gendered labels” was to reduce the chances of unfair biases, apparently.

“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias,” a spokesperson told Business Insider.

The classification of male and female doesn’t apply to everyone. Training machine learning models on these two labels means that they can fail when given pictures of transgender or non-binary people. To avoid such mistakes, Google’s Vision API will now just label someone as a person.

The change only affects Google’s Vision API, and doesn’t apply to its AutoML Vision service. AutoML Vision is more flexible, and users can train models on their own custom labels, so they can include gendered labels if they want.

Deepfakes in India’s politics: Fake videos of politician Manoj Tiwari, who is running in India’s current State Legislative Assembly elections, began surfacing this week.

In one of the clips, Tiwari criticises his opponent, Arvind Kejriwal of the Aam Aadmi Party, in English for not sticking to his promises of opening more schools and installing more CCTV cameras.

[YouTube video]

In another clip, he’s positioned against the same background, wearing the same clothes, and making another speech. But this time, he’s speaking in Haryanvi, a dialect of Hindi.

[YouTube video]

If that’s not suspicious enough, here’s a third video that’s very similar to the first two – except now Tiwari is speaking in a completely different language.

[YouTube video]

When viewed together, it certainly looks like the clips were altered using machine learning algorithms. Fake content of this kind, known as deepfakes, lets people paste one person’s face onto another person’s body. It’s possible that Tiwari’s appearance from the shoulders up was mapped onto other people’s bodies, and that these people were the ones who delivered his message in English and Haryanvi.

These deepfake videos were then spread across 5,800 WhatsApp groups, reaching up to 15 million people, as first reported by Vice.

The majority of deepfakes – about 96 per cent – are for pornographic content. Internet perverts have a penchant for swapping the faces of adult actresses for their favorite female celebrities.

But the creation of deepfakes for political reasons seems to be rising. Suspected fake videos of politicians from other countries, like Malaysia and Gabon, have cropped up too.

Hell yeah, we sell our facial recognition to police departments. And we’d probably sell it to foreign governments too: The head of Amazon’s AWS cloud service, Andy Jassy, said he was happy to offer its facial recognition technology to law enforcement and would sell it to foreign governments too.

Facial recognition is among the most controversial applications of modern AI. It’s well documented that the vast majority of models struggle to identify women and people with darker skin as accurately as they do white men. The technology, therefore, is likely to carry racial and gender biases, possibly leading to things like false arrests from incorrect matches.

Despite these issues, however, Amazon continues to sell the technology to law enforcement departments across the US. In a documentary, Amazon Empire: The Rise and Reign of Jeff Bezos, produced by Frontline, the investigative journalism arm of America’s Public Broadcasting Service, Jassy states that he would sell Amazon’s Rekognition technology to foreign governments.

“There’s a number of governments that are against the law for U.S. companies to do business with,” he said. “We would not sell it to those people or those governments.”

When pressed with the fact that some countries that the US can trade freely with are known for enforcing oppressive regimes and human rights abuses, Jassy said: “Yeah, again, if we have documented cases where customers of any sort are using the technology in a way that’s against the law or that we think is impinging people’s civil liberties, then we won’t allow them to use the platform” — meaning all of AWS, not just Rekognition.

So, erm, that’s all okay then.

Algorithms inspecting visa applications: An architect’s visa to travel to the US was revoked after a computer algorithm flagged him as a potential security threat.

Eyal Weizman, director of Forensic Architecture, a London-based research group that analyses and investigates videos of violent conflicts and human rights abuses around the world, was told he could no longer enter the US for a trip planned this month. Weizman had had no previous problems crossing the US border, and had flown to America in December.

But this time, his visa was revoked. When he went to the US embassy in London to apply for it again, he was told that his name had been flagged by an algorithm. The computers had “identified a security threat that was related to him,” according to The New York Times. The embassy told him that the algorithm may have singled him out for interacting with certain people or staying in certain hotels.

He was asked to provide travel details over the last 15 years, including whether he had visited Syria, Iran, Iraq, Yemen, or Somalia. Weizman has passports from the United Kingdom and Israel.

Not much is known about how the algorithm works. A spokesperson from the US Customs and Border Protection refused to discuss the issue further and said that visa records were confidential under US law.

OpenAI not so open, after all: Here’s this week’s long read: OpenAI, the San Francisco research lab known for its very public quest to develop artificial general intelligence, has changed over the years.

Its reputation as a friendlier, more transparent outfit than the bigger Silicon Valley tech corps has slowly eroded over time. OpenAI now appears to operate much like any other upstart: there is a strong incentive to develop technology for profit, a culture of corporate secrecy, and an aggressive PR strategy.

All of it seems to stem from OpenAI transforming from a nonprofit into a startup accepting cash from investors.

MIT Tech Review’s Karen Hao discovered this when she was given limited access to interview some of the company’s most prominent employees. On the surface they appeared open, talking about their grand visions of AGI, but behind closed doors other employees were told to notify the internal communications team whenever Hao contacted them without explicit permission to talk. It’s a common tactic employed by companies to prevent employees leaking to the press.

Read her story to find out more about the internal politics of what goes on inside OpenAI.

After the article was published, Elon Musk, who left OpenAI’s board last year, criticized the company for its lack of transparency and said he had little confidence in its safety strategy. Ouch. ®





Petnet’s smart pet feeder system is back after a week-long outage, but customers are still waiting for answers



Petnet, the smart pet feeder backed by investors including Petco, recently experienced a week-long system outage affecting its second-generation SmartFeeders. While the startup’s customer service recently tweeted that its SmartFeeders and app’s functionality have been restored, Petnet’s lack of responsiveness continues to leave many customers frustrated and confused.

Petnet first announced on Feb. 14 that it was investigating a system outage affecting its second-generation SmartFeeders that made the feeders appear to be offline. The company said in a tweet that the SmartFeeders were still able to dispense on schedule, but several customers replied that their devices had also stopped dispensing food or weren’t dispensing it on schedule.

On Feb. 19, the company said it was “working closely with our third-party service provider in regards to the outage,” before announcing on Feb. 22 that the SmartFeeders were coming back online.

During that time, customers voiced frustration at the company’s lack of responses to their questions on Twitter and Facebook. Messages to the company’s support email and CEO Carlos Herrera were undeliverable.

TechCrunch’s emails to the company bounced with delivery-failure notices, and a message sent to its Twitter account also went unanswered. We have contacted the company again for comment.

Petnet also experienced a similar system outage last month.

According to Crunchbase, Petnet has raised $14.9 million since it was founded in 2012, including a Series A led by Petco.

In a statement sent to TechCrunch over the weekend before Petnet said the outage was resolved, a Petco representative said: “Petco is a minor and passive investor in Petnet, but we do not have any involvement in the company’s operations nor insight into the system outage they are currently experiencing.”




Here’s our pick of the top six startups from Pause Fest



We’ve been dropping into the Australian startup scene more and more over the years as the ecosystem has grown at an ever-faster pace, most notably at our own TechCrunch Battlefield Australia in 2017. Further evidence that the scene is growing has come recently in the shape of the Pause Fest conference in Melbourne. The event has gone from strength to strength in recent years, and it is fast becoming a must-attend for Aussie startups seeking both national and international attention.

I was able to drop in virtually to interview a number of those showcased in the Startup Pitch Competition, so here’s a run-down of some of the stand-out companies.

Medinet Australia

Medinet Australia is a health-tech startup aiming to make healthcare more convenient and accessible to Australians by allowing doctors to do consultations with patients via an app. Somewhat similar to apps like Babylon Health, Medinet’s telehealth app allows patients to obtain clinical advice from a GP remotely; access prescriptions and have medications delivered; access pathology results; directly email their medical certificate to their employer; and access specialist referrals along with upfront information about specialists such as their fees, waitlist, and patient experience. They’ve raised $3M in Angel financing and are looking for institutional funding in due course. Given Australia’s vast distances, Medinet is well-placed to capitalize on the shift of the population towards much more convenient telehealth apps. (1st Place Winner)


Everty

Everty allows companies to easily manage, monitor, and monetize electric vehicle (EV) charging stations. But this isn’t about infrastructure. Instead, the company links workplaces and accounting systems to the EV charging network, making it more like a “Salesforce for EV charging.” It’s available for both commercial and home charging tracking. It has also raised an Angel round and is poised to raise further funding. (2nd Place Winner)

AI On Spectrum

It’s a sad fact that people with autism statistically tend to die younger, and the suicide rate is much higher among autistic people. AI On Spectrum takes an accessible approach to helping autistic kids and their families find supportive environments and feel empowered. The game encourages autistic players to explore their emotional side and arms them with coping strategies for when times get tough, applying AI and machine learning in the process to assist the user. (3rd Place Winner)


Professional beekeepers need a fast, reliable, easy-to-use record keeper for their bees, and this startup does just that. It is also developing software and sensor technology to give beekeepers more accurate analytics, allowing them to get early warnings about issues and problems. The technology could even, in the future, be used to warn of approaching bushfires by sensing changed behavior in the bees. (Hacker Exchange Additional Winner)


Relectrify

Rechargeable batteries from things like cars can be reused, but the key to employing them is extending their lives. Relectrify says its battery control software can unlock the full performance of every cell, increasing battery cycle life. It also reduces storage costs by providing AC output without needing a battery inverter, for both new and second-life batteries. Its advanced battery management system combines power and electronic monitoring to rapidly check which cells are stronger and which are weaker, making it possible to get as much as 30% more battery life, as well as deploying “second-life storage.” So far, the company has a project with Nissan and American Electric Power and has raised a Series A of $4.5 million. (SingularityU Additional Winner)


Gabriel

Sadly, seniors and patients can develop bedsores if left in one position for too long. People can even die from bedsores, and hospitals can end up in litigation over the issue. What’s needed is technology that can prevent this, as well as predict which parts of a patient’s body are most at risk. That’s what Gabriel has come up with: multi-modal technology to prevent and detect both falls and bedsores. Its passive monitoring technology, for use at home or in hospitals, consists of a resistive sheet with sensors connected to a system that can interpret the pressure on a bed. It has FDA approval, is patent-pending, and is already working in some Hawaiian hospitals. It has so far raised $2 million in Angel funding and is now raising further capital.


