AI

How AI can empower communities and strengthen democracy

Each Fourth of July for the past five years I’ve written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good.

This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes. AI literacy is also, as Microsoft CTO Kevin Scott asserted, a critical part of being an informed citizen in the 21st century.

This year, I posed the question on Twitter to gather a broader range of insights. Thank you to everyone who contributed.

This selection is not meant to be comprehensive, and some ideas included here may be in the early stages, but they represent ways AI might enable the development of more free and just societies.

Machine learning for open source intelligence 

Open source intelligence, or OSINT, is the collection and analysis of freely available public material. This can power solutions for cryptology and security, but it can also be used to hold governments accountable.

Crowdsourced efforts by groups like Bellingcat were once looked upon as interesting side projects. But findings based on open source evidence from combat zones — like the downing of Malaysia Airlines Flight MH17 over Ukraine and a 2013 sarin gas attack in Syria — have proved valuable to investigative authorities.

Groups like the International Consortium of Investigative Journalists (ICIJ) are using machine learning in their collaborative work. Last year, the ICIJ’s Marina Walker Guevara detailed lessons drawn from the Machine Learning for Investigations reporting process, conducted in partnership with Stanford AI Lab.

In May, researchers from Universidade Nove de Julho in São Paulo, Brazil published a systematic review of AI for open source intelligence that identified nearly 250 works applying AI to OSINT, published between 1990 and 2019. Topics range from AI for crawling web text and documents to applications for social media, business, and — increasingly — cybersecurity.
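To give a flavor of the simplest of these applications, here is a minimal sketch of OSINT-style text mining: fetch a public page and count the people, organizations, and places it names. The URL and library choices (requests, BeautifulSoup, spaCy) are illustrative assumptions, not tools named in the review.

```python
from collections import Counter

import requests
import spacy
from bs4 import BeautifulSoup

# Requires: pip install requests beautifulsoup4 spacy
#           python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_entities(url: str) -> Counter:
    """Fetch a public page and tally the named entities it mentions."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    doc = nlp(text[:100_000])  # cap input length for the demo
    return Counter(
        (ent.text, ent.label_)
        for ent in doc.ents
        if ent.label_ in {"PERSON", "ORG", "GPE"}  # people, orgs, places
    )

print(extract_entities("https://example.com").most_common(10))
```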

Along similar lines, an open source initiative out of Swansea University is currently using machine learning to investigate alleged war crimes happening in Yemen.

AI for emancipation 

Last month, shortly after some of the largest protests in U.S. history engulfed American cities and spread around the world, I wrote about an analysis of AI bias in language models. Although I did not raise the point in that piece, the study stood out as the first time I’ve come across the word “emancipation” in AI research. The term came up in the researchers’ best practice recommendation that NLP bias analysis be grounded in the field of sociolinguistics.

I asked lead author Su Lin Blodgett to speak more about this idea, which would treat marginalized people as coequal researchers or producers of knowledge. Blodgett said she’s not aware of any AI system today that can be defined as emancipatory in its design, but she is excited by the work of groups like the Indigenous Protocol and Artificial Intelligence Working Group.

Blodgett said AI that touches on emancipation includes NLP projects to help revitalize or reclaim languages and projects for creating natural language processing for low-resource languages. She also cited AI aimed at helping people resist censorship and hold government officials accountable.

Chelsea Barabas explored similar themes in an ACM FAccT conference presentation earlier this year. Barabas drew on the work of anthropologist Laura Nader, who finds that anthropologists tend to study disadvantaged groups in ways that perpetuate stereotypes. Instead, Nader called for anthropologists to expand their fields of inquiry to include “study of the colonizers rather than the colonized, the culture of power rather than the culture of the powerless, the culture of affluence rather than the culture of poverty.”

In her presentation, Barabas likewise urged data scientists to redirect their critical gaze in the interests of fairness. As an example, both Barabas and Blodgett endorsed research that scrutinizes “white collar” crimes with the level of attention typically reserved for other offenses.

In Race After Technology, Princeton University professor Ruha Benjamin also champions the notion of abolitionist tools in tech. Catherine D’Ignazio and Lauren F. Klein’s Data Feminism and Sasha Costanza-Chock’s Design Justice offer further examples of data sets that can be used to challenge power.

Racial bias detection for police officers

Taking advantage of NLP’s ability to process data at scale, Stanford University researchers examined recordings of conversations between police officers and people stopped for traffic violations. Using computational linguistics, the researchers were able to demonstrate that officers paid less respect to Black citizens during traffic stops. Part of the focus of the work, published in the Proceedings of the National Academy of Sciences in 2017, was to highlight ways police body camera footage can be used to build trust between communities and law enforcement agencies. The analysis drew on recordings collected over the course of years, drawing conclusions from the data in aggregate rather than parsing instances one by one.
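The study derived respectfulness measures from linguistic features of officers' utterances. As a toy illustration of that general idea only (the marker lists below are invented for the example and far cruder than the study's statistical models):

```python
# Invented politeness markers for illustration; the PNAS study fit
# statistical models over far richer linguistic features.
POLITE_MARKERS = {
    "apology": ("sorry", "apologize"),
    "formal_title": ("sir", "ma'am"),
    "reassurance": ("no problem", "don't worry"),
}

def respect_score(utterance: str) -> int:
    """Count how many politeness categories an utterance touches."""
    text = utterance.lower()
    return sum(
        any(marker in text for marker in markers)
        for markers in POLITE_MARKERS.values()
    )

print(respect_score("Sorry to stop you, sir, this will only take a minute"))  # 2
```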

An algorithmic bill of rights

The idea of an algorithmic bill of rights recently came up in a conversation with a Black roboticist about building better AI. The notion was introduced in the 2019 book A Human’s Guide to Machine Intelligence and further fleshed out by Vox staff writer Sigal Samuel.

A core tenet of the idea is transparency, meaning each person has the right to know when an algorithm is making a decision and any factors considered in that process. An algorithmic bill of rights would also include freedom from bias, data portability, freedom to grant or refuse consent, and a right to dispute algorithmic results with human review.

As Samuel points out in her reporting, some of these notions, such as freedom from bias, have appeared in laws proposed in Congress, such as the 2019 Algorithmic Accountability Act.

Fact-checking and fighting misinformation

Beyond bots that provide citizen services or promote public accountability, AI can be used to fight deepfakes and misinformation. Examples include Full Fact’s work with Africa Check, Chequeado, and the Open Data Institute to automate fact-checking as part of the Google AI Impact Challenge.
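A common first step in automated fact-checking is matching an incoming claim against claims that have already been checked. Here is a minimal sketch of that matching step, with invented claims and a TF-IDF similarity measure chosen for simplicity; it is not Full Fact's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A stand-in database of claims that fact-checkers have already reviewed.
checked_claims = [
    "Crime rose 10% in the city last year",
    "The national budget doubled since 2015",
]

vectorizer = TfidfVectorizer().fit(checked_claims)
claim_matrix = vectorizer.transform(checked_claims)

def closest_factcheck(claim: str):
    """Return the most similar previously checked claim and its score."""
    sims = cosine_similarity(vectorizer.transform([claim]), claim_matrix)[0]
    best = sims.argmax()
    return checked_claims[best], float(sims[best])

print(closest_factcheck("Crime in the city went up ten percent last year"))
```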

Deepfakes are a major concern heading into the U.S. election season this fall. In a fall 2019 report about upcoming elections, the New York University Stern Center for Business and Human Rights warned of domestic forms of disinformation, as well as potential external interference from China, Iran, or Russia. The Deepfake Detection Challenge aims to help counter such deceptive videos, and Facebook has introduced a data set of videos for training and benchmarking deepfake detection systems.

Pol.is

Recommendation algorithms from companies like Facebook and YouTube — with documented histories of stoking division to boost user engagement — have been identified as another threat to democratic society.

Pol.is uses machine learning to achieve the opposite aim, gamifying consensus and grouping citizens on a vector map. Participants revise their answers until the group reaches agreement. Pol.is has been used to help draft legislation in Taiwan and Spain.
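Public descriptions of Pol.is suggest the core mechanics: participants vote agree, disagree, or pass on short statements, and the system projects the vote matrix into two dimensions and clusters it to reveal opinion groups. A minimal sketch of that idea follows; the PCA-plus-k-means pipeline is an assumption for illustration, not Pol.is's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows are participants, columns are statements:
# 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)             # the "vector map"
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)  # opinion groups
print(coords.round(2))
print(groups)  # e.g., [0 0 1 1]: two camps emerge from the votes
```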

Algorithmic bias and housing

In Los Angeles County, individuals who are homeless and White exit homelessness at a rate 1.4 times greater than people of color, a fact that could be related to housing policy or discrimination. Citing structural racism, a homeless population count for Los Angeles released last month found that Black people make up only 8% of the county population but nearly 34% of its homeless population.

To redress this injustice, the University of Southern California Center for AI in Society will explore how artificial intelligence can help ensure housing is fairly distributed. Last month, USC announced $1.5 million in funding to advance the effort together with the Los Angeles Homeless Services Authority.

The University of Southern California’s school for social work and the Center of AI in Society have been investigating ways to reduce bias in the allocation of housing resources since 2017. Homelessness is a major problem in California and could worsen in the months ahead as more people face evictions due to pandemic-related job losses. 

Putting AI ethics principles into practice

Putting ethical AI principles into practice is not just an urgent matter for tech companies, which have virtually all released vague statements about their ethical intentions in recent years. As a study from the UC Berkeley Center for Long-Term Cybersecurity found earlier this year, it’s also increasingly important that governments establish ethical guidelines for their own use of the technology.

Through the Organization for Economic Co-operation and Development (OECD) and G20, many of the world’s democratic governments have committed to AI ethics principles. But deciding what constitutes ethical use of AI is meaningless without implementation. Accordingly, in February the OECD launched its AI Policy Observatory to help nations put these principles into practice.

At the same time, governments around the world are formulating their own ethical parameters. Trump administration officials introduced ethical guidelines for federal agencies in January that, among other things, encourage public participation in establishing AI regulation. However, the guidelines also reject regulation the White House considers overly burdensome, such as bans on facial recognition technology.

One recent analysis found that governments need more AI expertise. A joint Stanford-NYU study released in February examines the idea of “algorithmic governance,” or AI playing an increasing role in government. The study found that more than 40% of U.S. federal agencies have experimented with AI, but only 15% of those solutions can be considered highly sophisticated. The researchers implore the federal government to hire more in-house AI talent for vetting AI systems, and they warn that algorithmic governance could widen the public-private technology gap and, if poorly implemented, erode public trust or give major corporations an advantage over small businesses.

Another crucial part of the equation is how governments choose to award contracts to AI startups and tech giants. In what was believed to be a first, last fall the World Economic Forum, U.K. government, and businesses like Salesforce worked together to produce a set of rules and guidelines for government employees in charge of procuring services or awarding contracts.

Such government contracts are an important space to watch as businesses with ties to far-right or white supremacist groups — like Clearview AI and Banjo — sell surveillance software to governments and law enforcement agencies. Peter Thiel’s Palantir has also collected a number of lucrative government contracts in recent months. Earlier this week, Palmer Luckey’s Anduril, also backed by Thiel, raised $200 million and was awarded a contract to build a digital border wall using surveillance hardware and AI.

Ethics documents like those mentioned above invariably espouse the importance of “trustworthy AI.” If you roll your eyes at the phrase, I certainly don’t blame you. It’s a favorite of governments and businesses peddling principles to push through their agendas. The White House uses it, the European Commission uses it, and tech giants and groups advising the U.S. military on ethics use it, but efforts to put ethics principles into action could someday give the term some meaning and weight.

Protection against ransomware attacks

Before local governments began scrambling to respond to the coronavirus and structural racism, ransomware attacks had established themselves as another growing threat to stability and city finances.

In 2019, ransomware attacks on public-facing institutions like hospitals, schools, and governments were rising at unprecedented rates, siphoning off public funds to pay ransoms, recover files, or replace hardware.

Security companies working with cities told VentureBeat earlier this year that machine learning is being used to combat these attacks through approaches like anomaly detection and quickly isolating infected devices.
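As a concrete (and heavily simplified) illustration of the anomaly-detection approach, the sketch below flags a host whose file activity departs sharply from the baseline; the feature choices are assumptions for the example, not any vendor's product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed per-host features: [KB written per minute, files modified per minute].
# The last row mimics the mass file encryption typical of ransomware.
activity = np.array([
    [120, 3],
    [90, 2],
    [150, 4],
    [110, 3],
    [50_000, 800],  # suspicious burst of writes and file changes
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(activity)
flags = detector.predict(activity)  # -1 marks hosts to isolate for review
print(flags)  # e.g., [ 1  1  1  1 -1]
```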

Robot fish in city pipes

Beyond averting ransomware attacks, AI can help municipal governments avoid catastrophic financial burdens by monitoring infrastructure, catching leaks or vulnerable city pipes before they burst.

Engineers at the University of Southern California built a robot for pipe inspections to address these costly issues. Named Pipefish, it can swim into city pipe systems through fire hydrants and collect imagery and other data.

Facial recognition protection with AI

When it comes to shielding people from facial recognition systems, efforts range from shirts to face paint to full-on face projections.

EqualAIs was developed at MIT’s Media Lab in 2018 to make it tough for facial recognition systems to identify subjects in photographs, project manager Daniel Pedraza told VentureBeat. The tool uses adversarial machine learning to modify images so they evade facial recognition while preserving privacy. EqualAIs was developed as a prototype to demonstrate the technical feasibility of attacking facial recognition algorithms, creating a layer of protection around images uploaded to public forums like Facebook or Twitter. Open source code and other resources from the project are available online.
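EqualAIs' exact method isn't detailed here, but the family of techniques it draws on is well known. Below is a minimal sketch of one such technique, the fast gradient sign method (FGSM), which nudges an image in the direction that most degrades a classifier's prediction; the stand-in model is a placeholder, not a real face recognition system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(image, model, label, eps=0.03):
    """Return a copy of `image` shifted in the gradient direction that
    most increases the model's loss, degrading its prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Demo with a toy classifier; a real attack would target an actual
# face recognition model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
img = torch.rand(1, 3, 64, 64)
adversarial = fgsm_perturb(img, model, torch.tensor([0]))
```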

Other apps and AI can recognize and remove people from photos or blur faces to protect individuals’ identity. University of North Carolina at Charlotte assistant professor Liyue Fan published work that applies differential privacy to images for added protection when using pixelization to hide a face. Should tech like EqualAIs be widely adopted, it may give a glimmer of hope to privacy advocates who call Clearview AI the end of privacy.
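Fan's approach combines two ideas: pixelization (averaging blocks of pixels) and differential privacy (adding calibrated noise). A rough sketch of how the two compose, with parameter values that are illustrative assumptions rather than the published ones:

```python
import numpy as np

def dp_pixelize(img: np.ndarray, block: int = 8, eps: float = 0.5, m: int = 16):
    """Average each block x block cell of a grayscale image, then add
    Laplace noise. The sensitivity assumes neighboring images differ in
    at most m pixels (an illustrative modeling choice)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    scale = (255.0 * m) / (block * block * eps)  # Laplace scale per block mean
    for i in range(0, h, block):
        for j in range(0, w, block):
            cell = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = cell.mean() + np.random.laplace(0.0, scale)
    return np.clip(out, 0, 255).astype(np.uint8)

face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
print(dp_pixelize(face)[:2, :2])
```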

Legislators in Congress are currently considering a bill that would prohibit facial recognition use by federal officials and withhold some funding to state or local governments that choose to use the technology.

Whether you favor the idea of a permanent ban, a temporary moratorium, or minimal regulation, facial recognition legislation is an imperative issue for democratic societies. Racial bias and false identification of crime suspects are major reasons people across the political landscape are beginning to agree that facial recognition is unfit for public use today.

ACM, one of the largest groups for computer scientists in the world, this week urged governments and businesses to stop using the technology. Members of Congress have also voiced concern about use of the tech at protests or political rallies. Experts testifying before Congress have warned that if facial recognition becomes commonplace in these settings, it has the potential to dampen people’s constitutional right to free speech.

Protestors and others might have used face masks to evade detection in the past, but in the COVID-19 era, facial recognition systems are getting better at recognizing people wearing masks.

Final thoughts

This story is written with the clear understanding that techno-solutionism is no panacea and AI can be used for both positive and negative purposes. And the series is published on an annual basis because we all deserve to keep dreaming about ways AI can empower people and help build stronger communities and a more just society.

We hope you enjoyed this year’s selection. If you have additional ideas, please feel free to comment on the tweet or email khari@venturebeat.com to share suggestions for stories on this or related topics.

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/-NXtQAwz4u4/

AI

Is It Worth Investing in a Website Builder?

Avatar

Published

on

There are many different ways to build a website these days. There’s the timeless method of building your site code in Adobe Dreamweaver and exporting it to the web.

You can build a site in WordPress with a bit of CSS knowledge, or you can just outsource everything to a website design agency. Then there’s also the option of using a website builder, which is perhaps the easiest solution of all.

“Website builders are a popular way for people to easily and quickly set up a website with as little hassle as possible.” 

They’re great for small retail businesses, whether you’re selling handmade crafts or drop shipping products from Amazon, but larger companies can effectively use website builders as well. They certainly aren’t for everyone, but let’s take a look at whether or not investing in a website builder is the right choice for you.

How much do website builders actually cost?

Website builders are always going to be cheaper than custom website design, and to an extent WordPress, but there are some variables. Design agencies like to point out that website builders cost a little more in things like domain hosting, SSL certificates, and other small monthly fees compared to DIY hosting or a WordPress domain host.

So it becomes a question of upfront costs versus long-term costs in monthly fees, but there are several catches people don’t like to mention. Let me try to explain it succinctly.

Cost of a website builder

If you use a website builder to create, for example, a small eCommerce website, you’re probably going to pay around $200 to $500 upfront. This will include your domain name, any premium themes and add-ons (like a shopping cart module), and monthly hosting (which you’ll probably pay as an annual subscription upfront). It’s kind of like an “all-inclusive” vacation package, where everything is included in the total upfront cost.

So you’ll pay a small upfront fee which is mostly the annual hosting subscription, followed by monthly fees for the additional customizations you add to your website. Hosting plans on website builder platforms average around $9 to $75 per month, depending on your plan.

Again, it really depends on your plan, as website builders aren’t just for eCommerce websites. For example, there are a number of platforms built for specific industries, such as real estate, as this guide describes. Ultimately, if you are going to use a website builder, it’s best to find one suited to the industry you operate in.

Cost of a WordPress website

If you use WordPress, you can expect to pay around $500 – $1,000 upfront for a similar small eCommerce website, with lower monthly fees. This is because you can shop around for a domain name and domain hosting from sites like HostGator, BlueHost, etc. to get the best subscription-based pricing available, but you’ll also pay separately for WordPress themes, mobile design plug-ins, shopping cart plug-ins, etc.

Using the vacation package analogy again, WordPress is like you’re paying for your own drinks, meals, and WiFi access at the resort.

This means that you’ll be spending a bit more upfront on piecing together the different elements of your website, but you’ll pay on average around $11 – $40 per month for domain hosting. Of course, you could also pay monthly for plug-in subscriptions, website maintenance, etc.

Cost of custom website design

So in the vacation package analogy, custom website design is like flying first-class to a resort, and you own the resort. Custom website design is going to cost a minimum of around $5,000 and could go much higher, depending on your web project.

“Website designers are paid around $50 – $100 per hour, and custom website design takes around 14 weeks on average, from beginning to launch.” 

Now some website design agencies are going to be mad at me for saying this, but when they like to point out the “higher monthly costs” of a website builder, take a look at their fine print. Many website design agencies can lock you into monthly maintenance contracts, which can range from an additional $500 up to $3,000 per month or more, depending on the size of your site.

It’s kind of like if you have a contract with a car mechanic to inflate your tires and change your oil every month, except they keep billing you for a clutch assembly replacement. I’m not saying that website design agencies are dishonest, but you do need to be aware of what kind of monthly maintenance your website actually needs.
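To make the trade-off concrete, here's a quick back-of-the-envelope comparison using rough midpoints of the figures quoted above; plug in your own quotes before deciding.

```python
def total_cost(upfront: float, monthly: float, years: int = 3) -> float:
    """Upfront cost plus monthly fees over a planning horizon."""
    return upfront + monthly * 12 * years

# Midpoints of the ranges discussed above (your quotes will vary).
print(f"Website builder: ${total_cost(350, 40):,.0f}")     # ~$200-500 + ~$9-75/mo
print(f"WordPress:       ${total_cost(750, 25):,.0f}")     # ~$500-1,000 + ~$11-40/mo
print(f"Custom design:   ${total_cost(5000, 1750):,.0f}")  # $5,000+ + maintenance
```

Notice that with these midpoints, WordPress actually edges ahead of the builder after a few years, which is exactly the upfront-versus-monthly trade-off described above.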

Conclusion

When we compare all three options (website builder, WordPress, and custom website design), it’s quite clear that website builders are the most affordable option. However, you’ll also be limited in customization options with a website builder, as you’re really piecing together templates and blocks, so you won’t get the exclusive customization and brand appeal you would with custom web design or a WordPress website. So you’ll have to consider what’s best for your long-term business plan.

Source: https://www.aiiottalk.com/business/investing-in-a-website-builder/

AI

Amazon EC2 Inf1 instances featuring AWS Inferentia chips now available in five new Regions and with improved performance

Following strong customer demand, AWS has expanded the availability of Amazon EC2 Inf1 instances to five new Regions: US East (Ohio), Asia Pacific (Sydney, Tokyo), and Europe (Frankfurt, Ireland). Inf1 instances are powered by AWS Inferentia chips, which Amazon custom-designed to provide you with the lowest cost per inference in the cloud and lower barriers for everyday developers to use machine learning (ML) at scale.

As you scale your use of deep learning across new applications, you may be bound by the high cost of running trained ML models in production. In many cases, up to 90% of the infrastructure spend for developing and running an ML application goes to inference, making the need for high-performance, cost-effective ML inference infrastructure critical. Inf1 instances are built from the ground up to support ML inference applications and deliver up to 30% higher throughput and up to 45% lower cost per inference than comparable GPU-based instances. This gives you the performance and cost structure you need to confidently deploy your deep learning models across a broad set of applications.

Customers and Amazon services adopting Inf1 instances

Since the launch of Inf1 instances, a broad spectrum of customers, from large enterprises to startups, as well as Amazon services, have begun using them to run production workloads. Amazon’s Alexa team is in the process of migrating its Text-To-Speech workload from GPUs to Inf1 instances. INGA Technologies, a startup focused on advanced text summarization, got started with Inf1 instances quickly and saw immediate gains.

“We quickly ramped up on AWS Inferentia-based Amazon EC2 Inf1 instances and integrated them in our development pipeline,” says Yaroslav Shakula, Chief Business Development Officer at INGA Technologies. “The impact was immediate and significant. The Inf1 instances provide high performance, which enables us to improve the efficiency and effectiveness of our inference model pipelines. Out of the box, we have experienced four times higher throughput, and 30% lower overall pipeline costs compared to our previous GPU-based pipeline.”

SkyWatch provides you with the tools you need to cost-effectively add Earth observation data into your applications. They use deep learning to process hundreds of trillions of pixels of Earth observation data captured from space every day.

“Adopting the new AWS Inferentia-based Inf1 instances using Amazon SageMaker for real-time cloud detection and image quality scoring was quick and easy,” says Adler Santos, Engineering Manager at SkyWatch. “It was all a matter of switching the instance type in our deployment configuration. By switching instance types to AWS Inferentia-based Inf1, we improved performance by 40% and decreased overall costs by 23%. This is a big win. It has enabled us to lower our overall operational costs while continuing to deliver high-quality satellite imagery to our customers, with minimal engineering overhead.”

AWS Neuron SDK performance and support for new ML models

You can deploy your ML models to Inf1 instances using the AWS Neuron SDK, which is integrated with popular ML frameworks such as TensorFlow, PyTorch, and MXNet. Because Neuron is integrated with ML frameworks, you can deploy your existing models to Amazon EC2 Inf1 instances with minimal code changes. This gives you the freedom to maintain hardware portability and take advantage of the latest technologies without being tied to vendor-specific software libraries.
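For a sense of what "minimal code changes" means in practice, here is a short PyTorch example of compiling a model for Inf1 with the Neuron SDK. The torch.neuron.trace call follows AWS's documented pattern, though exact package versions and options may differ from what's shown.

```python
import torch
import torch_neuron  # registers the Neuron compiler with PyTorch
from torchvision import models

model = models.resnet50(pretrained=True).eval()
example = torch.zeros(1, 3, 224, 224)  # example input fixes the shape

# Compile for Inferentia; operators Neuron can't handle fall back to CPU.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")

# On an Inf1 instance, reload and run like any TorchScript model:
# restored = torch.jit.load("resnet50_neuron.pt")
# predictions = restored(example)
```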

Since its launch, the Neuron SDK has seen dramatic improvement in performance, delivering throughput up to two times higher for image classification models and up to 60% improvement for natural language processing models. The most recent launch of Neuron added support for OpenPose, a model for multi-person keypoint detection, providing 72% lower cost per inference than GPU instances.

Getting started

The easiest and quickest way to get started with Inf1 instances is via Amazon SageMaker, a fully managed service for building, training, and deploying ML models. If you prefer to manage your own ML application development platforms, you can get started either by launching Inf1 instances with AWS Deep Learning AMIs, which include the Neuron SDK, or by using Inf1 instances via Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized ML applications.

For more information, see Amazon EC2 Inf1 Instances.


About the Author

Michal Skiba is a Senior Product Manager at AWS and passionate about enabling developers to leverage innovative hardware. Over the past ten years he has managed various cloud computing infrastructure products at Silicon Valley companies, large and small.

Source: https://aws.amazon.com/blogs/machine-learning/amazon-ec2-inf1-instances-featuring-aws-inferentia-chips-now-available-in-five-new-regions-and-with-improved-performance/

AI

Argonne National Labs Using AI To Predict Battery Cycles

Researchers at the Argonne National Laboratory are exploring the use of AI to decrease the testing time of batteries for demanding grid applications. (GETTY IMAGES)

By Allison Proffitt, Editorial Director, AI Trends

Thanks to the cost reductions that have come from global electric vehicle adoption, lithium ion batteries now have an important role to play in grid storage, Susan Babinec, Argonne National Laboratory, told audiences last week at the International Battery Virtual Seminar and Exhibit. But making full use of them is going to require a bit of help from artificial intelligence.

While EVs prize high energy density and only need to last about eight years, grid applications require more cycles, more calendar life—20 to 30 years—and more safety at a lower cost.

“Grid economics requires precise life data, which is very time and resource intensive to generate,” Babinec said. “We are using approximations that create risk, limit our design creativity, and increase cost.” The solution? Of course, in today’s day and age the solution is always artificial intelligence, Babinec quipped. “In this case, we’re going to use AI to massively reduce time to cycle life prediction.”

Sue Babinec, Program Lead, Grid Storage at Argonne National Laboratory

Babinec’s team categorized the variables impacting lithium ion batteries for grid applications—acknowledging that adjusting any one variable will always mean changes in others. “For grid storage, first and foremost, low cost is always the most important,” Babinec said. But others include state-of-charge swing, C-rate, average state-of-charge, and temperature.

“Today we handle this variability by estimating the cycle life, but those estimates do not really allow us to push these cells to the limits of what they can really do,” Babinec said. “We just simply don’t have enough information on the cycle life and we are limited by the information that is provided by the cell manufacturer, which is really all about them making sure they can live up to their warranty.”

Babinec is prioritizing overall cost per cycle (levelized cost of storage, LCOS). This is a better metric than capital cost because grid storage batteries are durable goods, she explained. The Department of Energy’s target for LCOS is $0.02/kWh, a target we currently fall far short of.

“No matter how you look at it, we are not there today with any combination of capital and cycles,” Babinec said. “We need to bring the capital down, but right here and now we need to bring the number of cycles up.”

Looking to AI to Decrease Testing Time from Two Years to Two Weeks

Argonne is applying artificial intelligence to the problem. Babinec’s group is developing rapid cycle life evaluations using AI to decrease testing from the current two years to a goal of two weeks. Argonne is the right spot for this research, Babinec argues. As the DOE’s battery hub, Argonne has plenty of data, a team of AI experts, and a new supercomputer up to the task. Aurora, created in partnership with Argonne, Cray and the DOE, will be the first exascale computer in the U.S.

The scope of the project is broad. They are using several AI approaches—from physics-based tools to deep neural nets. “We want to see which AI approach is the best for this problem,” Babinec said. All of the Li-ion chemistries will be tested deliberately and sequentially, and the current, voltage, and time will be recorded every second of every cycle for every cell.

Babinec describes the basic AI process as encoding data from one cell running one cycle. Each cell cycle generates 150 features. Narrowing in on one feature from many cells, you determine correlations and relationships and decode for one behavior: cycles to failure.

To test their plan, the group used public data published last year in Nature Energy (DOI: 10.1038/s41560-019-0356-8). They compared the capacity at a certain voltage in cycle one to the capacity at the same voltage in cycle 20, generated correlations and relationships, and made predictions from there. The results: the experimental and predicted cycles to failure aligned.
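A minimal sketch of that style of early-prediction model appears below. The single delta-capacity feature and log-linear regression are assumptions in the spirit of the Nature Energy paper, not Argonne's actual pipeline, and the numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One invented feature per cell: variance of the capacity-vs-voltage
# difference between an early cycle and cycle 20. Target: cycles to failure.
delta_q_var = np.array([1e-4, 5e-4, 2e-3, 8e-3])
cycles_to_failure = np.array([2300, 1500, 900, 400])

X = np.log10(delta_q_var).reshape(-1, 1)  # log features linearize the trend
y = np.log10(cycles_to_failure)

reg = LinearRegression().fit(X, y)
predicted = 10 ** reg.predict(X)  # back to cycle counts
print(predicted.round())
```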

Her presentation at Florida Battery was the first public airing of Argonne’s experimental results, and Babinec shared that the approach seems to be working. When testing many chemistries, similar cells self-organize by chemistry and cycles to failure. When run on real cells, predictions matched observations. So far, Babinec says, it looks like it will take as few as 40 to 60 cycles to predict cycle life: more for high cycle life, less for low cycle life.

The key to a high-quality prediction, she emphasized, is using training data from cells with a cycle life that is similar to your goal cycle life. For example, cells that failed at 150 cycles will not accurately train an algorithm to predict 2,000 cycles.

While work on the cycle life predictions continues, Babinec says Argonne is also focused on cleaning up more than 20 years’ worth of spreadsheets, databases, and machine files containing battery data. “The data is wonderful, but it has to be cleaned up. It’s a major effort, which we are working on,” she said. The team is working toward machine learning-ready training data including, for example, capacity vs. cycle comparisons and discharge curves. Some data are available on Github: https://github.com/materials-data-facility/battery-data-toolkit

“There is promise for this,” Babinec said. Testing timelines will decrease, which she says may open up assessments of complex and changing use scenarios, eventually enhancing deployment flexibility while minimizing risk.

Learn more at Nature Energy (DOI: 10.1038/s41560-019-0356-8).

Source: https://www.aitrends.com/ai-research/argonne-national-labs-using-ai-to-predict-battery-cycles/
