

I’m the Google whistleblower. The medical data of millions of Americans is at risk | Anonymous




When I learned that Google was acquiring the intimate medical records of 50 million patients, I couldn't stay silent

I didn't decide to blow the whistle on Google's deal, known internally as the Nightingale Project, lightly. The decision came to me slowly, creeping up on me through my day-to-day work as one of about 250 people in Google and Ascension working on the project.

When I first joined Nightingale I was excited to be at the forefront of medical innovation. Google has staked its claim to be a major player in the healthcare sector, using its phenomenal artificial intelligence (AI) and machine learning tools to predict patterns of illness in ways that might some day lead to new treatments and, who knows, even cures.

Here I was working with senior management teams on both sides, Google and Ascension, creating the future. That chimed with my overall conviction that technology really does have the potential to change healthcare for the better.

But over time I grew increasingly concerned about the security and privacy aspects of the deal. It became obvious that many around me in the Nightingale team also shared those anxieties.

After a while I reached a point that I suspect is familiar to most whistleblowers, where what I was witnessing was too important for me to remain silent. Two simple questions kept hounding me: did patients know about the transfer of their data to the tech giant? Should they be informed and given a chance to opt in or out?

The answer to the first question quickly became apparent: no. The answer to the second I became increasingly convinced about: yes. Put the two together, and how could I say nothing?

So much is at stake. Data security is important in any field, but when that data relates to the personal details of an individual's health, it is of the utmost importance: this is the last frontier of data privacy.

With a deal as sensitive as the transfer of the personal data of more than 50 million Americans to Google, the oversight should be extensive. Every aspect needed to be pored over to ensure that it complied with federal rules controlling the confidential handling of protected health information under the 1996 HIPAA legislation.

Working with a team of 150 Google employees and 100 or so Ascension staff was eye-opening. But I kept being struck by how little context and information we had to work with.

What AI algorithms were at work in real time as the data was being transferred across from hospital groups to the search giant? What was Google planning to do with the data they were being given access to? No one seemed to know.

Above all: why was the information being handed over in a form that had not been de-identified, the term the industry uses for removing all personal details so that a patient's medical record cannot be directly linked back to them? And why had no patients or doctors been told what was happening?
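To make the de-identification concept concrete, here is a minimal sketch. The field names and record are hypothetical; HIPAA's Safe Harbor method requires removing direct identifiers such as names, contact details, record numbers, and dates more specific than a year.

```python
# Minimal de-identification sketch (hypothetical field names).
# HIPAA's Safe Harbor method requires stripping direct identifiers --
# names, addresses, phone numbers, record numbers -- and coarsening
# dates to the year.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def de_identify(record: dict) -> dict:
    """Drop direct identifiers and keep only the birth year."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "birth_date": "1961-04-12", "diagnosis": "hypertension"}
print(de_identify(record))  # {'diagnosis': 'hypertension', 'birth_year': '1961'}
```

Even a record processed this way can sometimes be linked back to a patient through rare combinations of the remaining fields, which is why de-identification is a floor, not a guarantee.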

I was worried too about the security aspect of placing vast amounts of medical data in the digital cloud. Think about the recent hacks on banks or the 2013 data breach suffered by the retail giant Target, and now imagine a similar event inflicted on the healthcare data of millions.

I am proud that I brought this story to public attention. Since it broke on Monday, several members of Congress have expressed concerns, including the Democratic presidential candidate Senator Amy Klobuchar of Minnesota, who said the deal raised serious privacy concerns.

A federal inquiry has been launched into whether HIPAA protections have been fully followed.

I can see the advantages of unleashing Google's huge computing power on medical data. Applications will be faster; data will be more accessible to doctors; new channels will be opened that might in time find cures for certain conditions.

But the disadvantages prey on my mind. Employees at big tech companies having access to personal information; data potentially being handed on to third parties; adverts one day being targeted at patients according to their medical histories.

I'd like to hope that the result of my lifting the lid on this issue will be open debate leading to concrete change. Transfers of healthcare data to big tech companies need to be made public and fully transparent, with monitoring by an independent watchdog.

Patients must have the right to opt in or out. The uses of the data must be clearly defined for all to see, not just for now but for 10 or 20 years into the future.

Full HIPAA compliance must be enforced, and boundaries must be put in place to prevent third parties gaining access to the data without public consent.

In short, patients and the public have a right to know what's happening to their personal health information at every step along the way. To quote one of my role models, Luke Skywalker: May the force be with you.



How AI can empower communities and strengthen democracy




Each Fourth of July for the past five years I’ve written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good.

This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes. AI literacy is also, as Microsoft CTO Kevin Scott asserted, a critical part of being an informed citizen in the 21st century.

This year, I posed the question on Twitter to gather a broader range of insights. Thank you to everyone who contributed.


This selection is not meant to be comprehensive, and some ideas included here may be in the early stages, but they represent ways AI might enable the development of more free and just societies.

Machine learning for open source intelligence 

Open source intelligence, or OSINT, is the collection and analysis of freely available public material. This can power solutions for cryptology and security, but it can also be used to hold governments accountable.

Crowdsourced efforts by groups like Bellingcat were once looked upon as interesting side projects. But findings based on open source evidence from combat zones — like the downing of flight MH17 over Ukraine and a 2013 sarin gas attack in Syria — have proved valuable to investigative authorities.

Groups like the International Consortium of Investigative Journalists (ICIJ) are using machine learning in their collaborative work. Last year, the ICIJ’s Marina Walker Guevara detailed lessons drawn from the Machine Learning for Investigations reporting process, conducted in partnership with Stanford AI Lab.

In May, researchers from Universidade Nove de Julho in São Paulo, Brazil, published a systematic review of AI for open source intelligence that found nearly 250 examples of AI-assisted OSINT in works published between 1990 and 2019. Topics range from AI for crawling web text and documents to applications for social media, business, and — increasingly — cybersecurity.

Along similar lines, an open source initiative out of Swansea University is currently using machine learning to investigate alleged war crimes happening in Yemen.

AI for emancipation 

Last month, shortly after some of the largest protests in U.S. history engulfed American cities and spread around the world, I wrote about an analysis of AI bias in language models. Although I did not raise the point in that piece, the study stood out as the first time I’ve come across the word “emancipation” in AI research. The term came up in relation to the researchers’ best-practice recommendations for grounding NLP bias analysis in sociolinguistics.

I asked lead author Su Lin Blodgett to speak more about this idea, which would treat marginalized people as coequal researchers or producers of knowledge. Blodgett said she’s not aware of any AI system today that can be defined as emancipatory in its design, but she is excited by the work of groups like the Indigenous Protocol and Artificial Intelligence Working Group.

Blodgett said AI that touches on emancipation includes NLP projects to help revitalize or reclaim languages and projects for creating natural language processing for low-resource languages. She also cited AI aimed at helping people resist censorship and hold government officials accountable.

Chelsea Barabas explored similar themes in an ACM FAccT conference presentation earlier this year. Barabas drew on the work of anthropologist Laura Nader, who finds that anthropologists tend to study disadvantaged groups in ways that perpetuate stereotypes. Instead, Nader called for anthropologists to expand their fields of inquiry to include “study of the colonizers rather than the colonized, the culture of power rather than the culture of the powerless, the culture of affluence rather than the culture of poverty.”

In her presentation, Barabas likewise urged data scientists to redirect their critical gaze in the interests of fairness. As an example, both Barabas and Blodgett endorsed research that scrutinizes “white collar” crimes with the level of attention typically reserved for other offenses.

In Race After Technology, Princeton University professor Ruha Benjamin also champions the notion of abolitionist tools in tech. Catherine D’Ignazio and Lauren F. Klein’s Data Feminism and Sasha Costanza-Chock’s Design Justice offer further examples of data sets that can be used to challenge power.

Racial bias detection for police officers

Taking advantage of NLP’s ability to process data at scale, Stanford University researchers examined recordings of conversations between police officers and people stopped for traffic violations. Using computational linguistics, the researchers were able to demonstrate that officers paid less respect to Black citizens during traffic stops. Part of the focus of the work, published in the Proceedings of the National Academy of Sciences in 2017, was to highlight ways police body camera footage can be used to build trust between communities and law enforcement agencies. The analysis used recordings gathered over the course of years, drawing conclusions from a batch of data instead of parsing instances one by one.
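The batch-over-instances approach can be illustrated with a toy sketch (this is not the study's validated respect model): score each utterance for simple politeness markers, then compare group averages across many transcripts at once.

```python
# Toy sketch of aggregate transcript analysis (not the study's actual
# model): count simple politeness markers per utterance, then compare
# group means across a whole batch of stops rather than reading
# transcripts one by one.

POLITE_MARKERS = {"sir", "ma'am", "please", "thanks", "thank"}

def politeness_score(utterance: str) -> int:
    """Count politeness-marker words in one utterance."""
    words = utterance.lower().replace(",", "").replace(".", "").split()
    return sum(w in POLITE_MARKERS for w in words)

def mean_score(transcripts: list) -> float:
    """Average score over a batch of utterances."""
    return sum(map(politeness_score, transcripts)) / len(transcripts)

group_a = ["License and registration, please.", "Thank you, sir."]
group_b = ["Hands on the wheel.", "Step out of the car."]
print(mean_score(group_a), mean_score(group_b))  # 1.5 0.0
```

The real research used far richer, linguistically validated models of respect, but the aggregation principle is the same: conclusions come from distributions over many interactions, not cherry-picked examples.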

An algorithmic bill of rights

The idea of an algorithmic bill of rights recently came up in a conversation with a Black roboticist about building better AI. The notion was introduced in the 2019 book A Human’s Guide to Machine Intelligence and further fleshed out by Vox staff writer Sigal Samuel.

A core tenet of the idea is transparency, meaning each person has the right to know when an algorithm is making a decision and any factors considered in that process. An algorithmic bill of rights would also include freedom from bias, data portability, freedom to grant or refuse consent, and a right to dispute algorithmic results with human review.

As Samuel points out in her reporting, some of these notions, such as freedom from bias, have appeared in laws proposed in Congress, such as the 2019 Algorithmic Accountability Act.

Fact-checking and fighting misinformation

Beyond bots that provide citizen services or promote public accountability, AI can be used to fight deepfakes and misinformation. Examples include Full Fact’s work with Africa Check, Chequeado, and the Open Data Institute to automate fact-checking as part of the Google AI Impact Challenge.

Deepfakes are a major concern heading into the U.S. election season this fall. In a fall 2019 report about the upcoming elections, the New York University Stern Center for Business and Human Rights warned of domestic forms of disinformation, as well as potential external interference from China, Iran, or Russia. The Deepfake Detection Challenge aims to help counter such deceptive videos, and Facebook has introduced a data set of videos for training and benchmarking deepfake detection systems.

Recommendation algorithms from companies like Facebook and YouTube — with documented histories of stoking division to boost user engagement — have been identified as another threat to democratic society. Pol.is uses machine learning to achieve the opposite aim, gamifying consensus and grouping citizens on a vector map. To reach consensus, participants need to revise their answers until they reach agreement. Pol.is has been used to help draft legislation in Taiwan and Spain.
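The "vector map" mechanic described above can be sketched roughly as follows; the vote representation and grouping rule here are illustrative assumptions, not the platform's actual implementation.

```python
# Sketch of the "vector map" idea (assumed mechanics): each
# participant's agree(+1)/disagree(-1) votes on a set of statements
# form a vector; clustering those vectors surfaces opinion groups.

def similarity(a, b):
    """Dot product: higher means the two vote patterns agree more."""
    return sum(x * y for x, y in zip(a, b))

def group(votes, seeds):
    """Assign each participant to the most similar seed vector."""
    return [max(range(len(seeds)), key=lambda i: similarity(v, seeds[i]))
            for v in votes]

votes = [[1, 1, -1], [1, 1, 1], [-1, -1, 1], [-1, 1, 1]]
seeds = [votes[0], votes[2]]  # two provisional cluster centers
print(group(votes, seeds))  # [0, 0, 1, 1]
```

Once participants are grouped this way, a platform can highlight statements that win agreement across groups rather than within them, which is the consensus-seeking behavior described above.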

Algorithmic bias and housing

In Los Angeles County, individuals who are homeless and White exit homelessness at a rate 1.4 times greater than people of color, a fact that could be related to housing policy or discrimination. Citing structural racism, a homeless population count for Los Angeles released last month found that Black people make up only 8% of the county population but nearly 34% of its homeless population.

To redress this injustice, the University of Southern California Center for AI in Society will explore how artificial intelligence can help ensure housing is fairly distributed. Last month, USC announced $1.5 million in funding to advance the effort together with the Los Angeles Homeless Services Authority.

The University of Southern California’s school for social work and the Center of AI in Society have been investigating ways to reduce bias in the allocation of housing resources since 2017. Homelessness is a major problem in California and could worsen in the months ahead as more people face evictions due to pandemic-related job losses. 

Putting AI ethics principles into practice

Putting ethical AI principles into practice is not just an urgent matter for tech companies, which have virtually all released vague statements about their ethical intentions in recent years. As a study from the UC Berkeley Center for Long-Term Cybersecurity found earlier this year, it’s also increasingly important that governments establish ethical guidelines for their own use of the technology.

Through the Organization for Economic Co-operation and Development (OECD) and G20, many of the world’s democratic governments have committed to AI ethics principles. But deciding what constitutes ethical use of AI is meaningless without implementation. Accordingly, in February the OECD launched its AI Policy Observatory to help nations put these principles into practice.

At the same time, governments around the world are formulating their own ethical parameters. Trump administration officials introduced ethical guidelines for federal agencies in January that, among other things, encourage public participation in establishing AI regulation. However, the guidelines also reject regulation the White House considers overly burdensome, such as bans on facial recognition technology.

One recent analysis underscored the need for more AI expertise in government. A joint Stanford-NYU study released in February examines the idea of “algorithmic governance,” or AI playing an increasing role in government. Its analysis of AI used by the U.S. federal government today found that more than 40% of agencies have experimented with AI but only 15% of those solutions can be considered highly sophisticated. The researchers implore the federal government to hire more in-house AI talent for vetting AI systems and warn that algorithmic governance could widen the public-private technology gap and, if poorly implemented, erode public trust or give major corporations an advantage over small businesses.

Another crucial part of the equation is how governments choose to award contracts to AI startups and tech giants. In what was believed to be a first, last fall the World Economic Forum, U.K. government, and businesses like Salesforce worked together to produce a set of rules and guidelines for government employees in charge of procuring services or awarding contracts.

Such government contracts are an important space to watch as businesses with ties to far-right or white supremacist groups — like Clearview AI and Banjo — sell surveillance software to governments and law enforcement agencies. Peter Thiel’s Palantir has also collected a number of lucrative government contracts in recent months. Earlier this week, Palmer Luckey’s Anduril, also backed by Thiel, raised $200 million and was awarded a contract to build a digital border wall using surveillance hardware and AI.

Ethics documents like those mentioned above invariably espouse the importance of “trustworthy AI.” If you roll your eyes at the phrase, I certainly don’t blame you. It’s a favorite of governments and businesses peddling principles to push through their agendas. The White House uses it, the European Commission uses it, and tech giants and groups advising the U.S. military on ethics use it, but efforts to put ethics principles into action could someday give the term some meaning and weight.

Protection against ransomware attacks

Before local governments began scrambling to respond to the coronavirus and structural racism, ransomware attacks had established themselves as another growing threat to stability and city finances.

In 2019, ransomware attacks on public-facing institutions like hospitals, schools, and governments were rising at unprecedented rates, siphoning off public funds to pay ransoms, recover files, or replace hardware.

Security companies working with cities told VentureBeat earlier this year that machine learning is being used to combat these attacks through approaches like anomaly detection and quickly isolating infected devices.
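One of those approaches, anomaly detection, can be sketched in a few lines; the telemetry numbers and threshold below are hypothetical, not any vendor's actual product.

```python
# Minimal sketch of the anomaly-detection idea (hypothetical telemetry):
# ransomware encrypting files produces a burst of writes far outside a
# machine's normal baseline, which a simple z-score test can flag.

import statistics

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` std devs above the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return (observed - mean) / stdev > threshold

writes_per_minute = [12, 9, 15, 11, 10, 14, 13]  # normal activity
print(is_anomalous(writes_per_minute, 480))  # True: likely mass encryption
print(is_anomalous(writes_per_minute, 16))   # False: within normal range
```

Production systems model many signals at once (process lineage, entropy of written files, network beacons), but the core move is the same: learn a baseline, then flag and isolate machines that deviate sharply from it.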

Robot fish in city pipes

Beyond averting ransomware attacks, AI can help municipal governments avoid catastrophic financial burdens by monitoring infrastructure, catching leaks or vulnerable city pipes before they burst.

Engineers at the University of Southern California built a robot for pipe inspections to address these costly issues. Named Pipefish, it can swim into city pipe systems through fire hydrants and collect imagery and other data.

Facial recognition protection with AI

When it comes to shielding people from facial recognition systems, efforts range from shirts to face paint to full-on face projections.

EqualAIs was developed at MIT’s Media Lab in 2018 to make it tough for facial recognition to identify subjects in photographs, project manager Daniel Pedraza told VentureBeat. The tool uses adversarial machine learning to modify images in order to evade facial recognition detection and preserve privacy. EqualAIs was developed as a prototype to show the technical feasibility of attacking facial recognition algorithms, creating a layer of protection around images uploaded in public forums like Facebook or Twitter. Open source code and other resources from the project are available online.
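The adversarial-machine-learning idea behind such tools can be illustrated with a toy example. The "detector" below is a made-up linear model, not EqualAIs's actual target, and the perturbation is a bare-bones FGSM-style step.

```python
# Toy sketch of adversarial perturbation (not EqualAIs's actual method):
# given a differentiable face detector, nudge each pixel a small step
# epsilon against the detector's gradient so the score drops below the
# decision threshold while the image barely changes.

weights = [0.6, -0.2, 0.8, 0.4]  # stand-in for model parameters

def detector_score(pixels):
    """Toy linear 'face detector': dot product of pixels and weights."""
    return sum(w * p for w, p in zip(weights, pixels))

def adversarial(pixels, eps=0.3):
    """FGSM-style step: move each pixel against the gradient's sign."""
    # for a linear model, the gradient w.r.t. each pixel is its weight
    return [p - eps * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

face = [0.9, 0.1, 0.8, 0.7]
print(detector_score(face) > 1.0)               # True: "face" detected
print(detector_score(adversarial(face)) > 1.0)  # False: evades detection
```

Real attacks do the same thing against deep networks, where the gradient must be computed by backpropagation and the perturbation kept small enough to be visually negligible.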

Other apps and AI can recognize and remove people from photos or blur faces to protect individuals’ identity. University of North Carolina at Charlotte assistant professor Liyue Fan published work that applies differential privacy to images for added protection when using pixelization to hide a face. Should tech like EqualAIs be widely adopted, it may give a glimmer of hope to privacy advocates who call Clearview AI the end of privacy.
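A rough sketch of differentially private pixelization follows. It is inspired by, but not identical to, the published approach; the block size, epsilon, and toy "image" are assumptions for illustration.

```python
# Rough sketch of differentially private pixelization: average each
# block of pixels, then add Laplace noise calibrated to epsilon so the
# blurred face also carries a formal privacy guarantee.
# Block size, epsilon, and the toy image are illustrative assumptions.

import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_pixelate(image, block=2, epsilon=0.5, max_pixel=255):
    """Average block x block regions and add noise to each average."""
    # one pixel changing shifts a block average by at most max_pixel / block^2
    scale = max_pixel / (epsilon * block * block)
    out = []
    for r in range(0, len(image), block):
        row = []
        for c in range(0, len(image[0]), block):
            cells = [image[r + i][c + j]
                     for i in range(block) for j in range(block)]
            row.append(sum(cells) / len(cells) + laplace_noise(scale))
        out.append(row)
    return out

face = [[200, 210, 40, 35], [190, 205, 45, 50],
        [60, 65, 220, 215], [55, 70, 210, 225]]
blurred = dp_pixelate(face)
print(len(blurred), len(blurred[0]))  # 2 2
```

The privacy/utility trade-off lives in epsilon: smaller values add more noise and hide faces more reliably, at the cost of a less recognizable image overall.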

Legislators in Congress are currently considering a bill that would prohibit facial recognition use by federal officials and withhold some funding to state or local governments that choose to use the technology.

Whether you favor the idea of a permanent ban, a temporary moratorium, or minimal regulation, facial recognition legislation is an imperative issue for democratic societies. Racial bias and false identification of crime suspects are major reasons people across the political landscape are beginning to agree that facial recognition is unfit for public use today.

ACM, one of the largest groups for computer scientists in the world, this week urged governments and businesses to stop using the technology. Members of Congress have also voiced concern about use of the tech at protests or political rallies. Experts testifying before Congress have warned that if facial recognition becomes commonplace in these settings, it has the potential to dampen people’s constitutional right to free speech.

Protestors and others might have used face masks to evade detection in the past, but in the COVID-19 era, facial recognition systems are getting better at recognizing people wearing masks.

Final thoughts

This story is written with the clear understanding that techno-solutionism is no panacea and AI can be used for both positive and negative purposes. And the series is published on an annual basis because we all deserve to keep dreaming about ways AI can empower people and help build stronger communities and a more just society.

We hope you enjoyed this year’s selection. If you have additional ideas, please feel free to comment on the tweet or email to share suggestions for stories on this or related topics.




An AI founder’s struggle to be seen in the age of Black Lives Matter 




I’m founder of AI4US. We build high-performing AI teams with majority Black women scientists to help companies overcome the tech gender and diversity talent shortage.

Our flagship program is RAPIDS. Here’s how it works: In fall 2020, students will be placed in virtual cohorts of 20-25 students based on computing proficiency. These students take Nvidia’s Fundamentals of Accelerated Data Science, offered as a three-credit semester course. (Our first cohort went through the program last year, but they were taught by White male instructors. 2020 is our first cohort with Black women instructors.)

We are uniquely using culturally responsive instruction; that’s “a pedagogy that empowers students intellectually, socially, and emotionally.” This allows us to incorporate Breonna Taylor into a class on Logistic Regression.

The goal is not for students to just complete the class; they commit to becoming certified to teach it (and other courses in the NVIDIA Deep Learning Institute curriculum). So we can offer training to the federal government, or provide diverse on-site employee development training, or training for other potential customers. Our students can also move on to fill data-science/AI related roles, as interns or employees, giving companies access to a new talent pool.


The goal

I’ve been doing substantial outreach for the program across the tech and business community in an effort to create job-pipeline partnerships for our alumni and to obtain funding for our teachers so that we can keep tuition free for students. (The plan is to become self-funding via subcontracting opportunities with these companies.) But I’m having a hard time being heard.

Google announced their “Google for Startups Accelerator: Black Founders” this month as part of their commitment to racial equity. It’s a three-month digital accelerator program for high potential Seed to Series A tech startups based in the U.S. “The accelerator program is designed to bring the best of Google’s programs, products, people, and technology to Black founder communities across the U.S. In addition to mentorship and technical project support.” I also found their initiative “Google for Startups Accelerator for Women Founders.” I thought I had access to two perfect opportunities! They both are interested in startups that leverage AI/ML technology in their product. Unfortunately, they both require companies to have a minimum of 10 employees. I have a team of three. (As entrepreneur James Norman recently explained, most Black entrepreneurs don’t match the expected pattern of having several technical cofounders they’ve known for years and so frequently miss out on accelerator opportunities.)

I attended Spelman College. Spelman, Howard, and Hampton (all HBCUs) are not producing more than 10 Black CS, math, or stats graduates a year (according to 2015 figures) and certainly do not produce 10 Black women in computing in a given year (let’s not add in what happens to those numbers when we see the full impact of Covid on HBCUs).

The purpose of founding AI4US was so that I — and all the women who go through our training program — can name 10 Black women, descendants of the brutal institution of chattel slavery, working in AI. I am a powerful promising Black woman entrepreneur. Yet I am locked out of the very opportunities created to advance racial equity for Black communities.

In 2018, Melinda Gates partnered with McKinsey to collect data directly from tech companies to understand their philanthropic and CSR initiatives. The 32 tech companies collectively spent more than $500 million on philanthropic giving in 2017, but only around 5% of that went toward programs aimed at correcting the gender imbalance. Less than 0.1% of philanthropic investing —  $335,000 — was directed at removing barriers hindering women of color from pursuing careers in tech.

What does that $335,000 represent? It includes both programs that are exclusive to women of color and programs open to a larger group of students that make a deliberate and successful effort to attract and serve women of color.

Struggling to be heard

Since the death of George Floyd unleashed protests across the United States, tech executives have been speaking out forcefully against racial violence in the U.S., with some promising millions of dollars in contributions to organizations pursuing justice. Facebook CEO Mark Zuckerberg announced that the company will contribute $10 million to “groups working on racial justice,” and told followers that he’s working with advisers and employees to find organizations that “could most effectively use this right now.”

I answered their call! I shared the mission of AI4US with the 32 tech companies from the Gates/McKinsey report. I wrote Facebook, Google, Clarifai, Duolingo, Netflix, Zoom, Hulu, AWS, Uber, Lyft, Amazon, Asana, Github, Salesforce, PagerDuty, YouTube, Fastly, Kleiner Perkins, Sequoia, Away, Twilio, Square, Twitter, Medium, Box, Shopify, Intel, The ChanZuckerberg Initiative, BAE, Cisco, LinkedIn, The Gates Foundation, Snap, Dropbox, Omnisci, Redhat, Walmart, Mercury, Pure storage, Carahsoft, GDIT, Talentseer, Elementai, Splunk, Zoox, Lockheed Martin, Alion, Modzy, Goldman Sachs, Verizon, Pinnacle, Niantic, Apple, IBM, Go Daddy, Dell, NetApp, EA, Adobe, PayPal, Best Buy, Workday, Chase, Charles Schwab, Vista Equity, The Business Roundtable, AT&T, Bosch, T-Mobile, Fitbit, Capital One, Accenture, Hyundai, Subaru, Booz Allen Hamilton, General Dynamics, Bank of America, BP,  Chevron, PepsiCo, Associate Resource Group, Comcast, IBM, OmniSci, The Foundry Group, ServiceNow, the Fourmation ERG, Sofi, Synnex, SoftBank, Hypergiant, Andreessen Horowitz, Okta, Humu  … should I go on? And then I emailed them again!

I have sent hundreds of letters. 99% were ignored. All of these companies have positions for data scientists and machine learning engineers. And most have few (if any) Black women occupying these roles. I kept reading that corporate America had pledged over a billion dollars for Justice. Not only have I not been able to tap into the less than 0.1% of 2017 philanthropic spending, I can barely get a meeting.

When I contacted the Chief Diversity and Inclusion officer at Intel, who was included in the Gates/McKinsey report, I was ignored. I secured a meeting with Intel only after contacting the CEO! This was also the case with Bank of America. The CEO of Bank of America read my letter and connected me with the right people! I have not yet created a partnership with BofA, but I am hopeful. I have also been able to obtain one hour-long meeting with a tech firm, but that was because I had already volunteered my services to the company for free.

I am discouraged. But I will persist. My resilience comes from the spirit of my ancestors.

Hagar Murrell founded Garnett Training School around the year 1900 in Pollocksville, North Carolina. At the time, there were four schools for White students, but no school for Black students. So, Hagar stepped up and stepped in!

Black students were educated there until integration allowed students to attend the White school in 1968. My letters to the CEO of Intel — my letter to you — are inspired by Hagar’s letters and legacy.

She probably raised at least half a million dollars by today’s standards. You can find her funding requests in newspapers across the country. I have found articles from Texas to New York, from 1888 to 1928 in which she attempted to raise money for teachers and dorms.

Her story is that of a woman born into slavery who is STILL freeing generations of students. One of them is me, Andrea Roberson, descendant of Garnett students and teachers, who became the first Black woman to receive a PhD in Applied Math from Stony Brook University. My writing this today bears witness to the truth that her program worked. The success of AI4US is encoded in my DNA.

If tech companies give me a chance, a meeting, a portion of that less than 0.1%, imagine what we could do together. How many students can we free to realize their wildest dreams?

Will you respond? I struggle with the anxiety of being ignored, unheard and unseen. Invisible. But Grandma Hagar already broke ground for me, and I will persevere to break ground for other Black women.

Andrea Roberson is CEO of AI4US. She was previously a Machine Learning Researcher in the Economic Statistical Methods Division (ESMD) of the U.S. Census Bureau for over a decade.  Her work includes a variety of design strategies to build and operationalize predictive text analytics solutions. She has authored papers and presentations for conferences including the Association for Computational Linguistics (ACL), the Symposium on Data Science & Statistics (SDSS), New Techniques and Technologies for Statistics (NTTS), and FCSM. 




This Week’s Awesome Tech Stories From Around the Web (Through July 4)





How Holographic Tech Is Shrinking VR Displays to the Size of Sunglasses
Kyle Orland | Ars Technica
“…researchers at Facebook Reality Labs are using holographic film to create a prototype VR display that looks less like ski goggles and more like lightweight sunglasses. With a total thickness less than 9mm—and without significant compromises on field of view or resolution—these displays could one day make today’s bulky VR headset designs completely obsolete.”


Stock Surge Makes Tesla the World’s Most Valuable Automaker
Timothy B. Lee | Ars Technica
“It’s a remarkable milestone for a company that sells far fewer cars than its leading rivals. …But Wall Street is apparently very optimistic about Tesla’s prospects for future growth and profits. Many experts expect a global shift to battery electric vehicles over the next decade or two, and Tesla is leading that revolution.”


These Plant-Based Steaks Come Out of a 3D Printer
Adele Peters | Fast Company
“The startup, launched by cofounders who met while developing digital printers at HP, created custom 3D printers that aim to replicate meat by printing layers of what they call ‘alt-muscle,’ ‘alt-fat,’ and ‘alt-blood,’ forming a complex 3D model.”


The US Air Force Is Turning Old F-16s Into AI-Powered Fighters
Amit Katwala | Wired UK
“Maverick’s days are numbered. The long-awaited sequel to Top Gun is due to hit cinemas in December, but the virtuoso fighter pilots at its heart could soon be a thing of the past. The trustworthy wingman will soon be replaced by artificial intelligence, built into a drone, or an existing fighter jet with no one in the cockpit.”


NASA Wants to Build a Steam-Powered Hopping Robot to Explore Icy Worlds
Georgina Torbet | Digital Trends
“A bouncing, ball-like robot that’s powered by steam sounds like something out of a steampunk fantasy, but it could be the ideal way to explore some of the distant, icy environments of our solar system. …This round robot would be the size of a soccer ball, with instruments held in the center of a metal cage, and it would use steam-powered thrusters to make jumps from one area of terrain to the next.”


Could Teleporting Ever Work?
Daniel Kolitz | Gizmodo
“Have the major airlines spent decades suppressing teleportation research? Have a number of renowned scientists in the field of teleportation studies disappeared under mysterious circumstances? Is there a cork board at the FBI linking Delta Airlines, shady foreign security firms, and dozens of murdered research professors? …No. None of that is the case. Which begs the question: why doesn’t teleportation exist yet?”


Nuclear ‘Power Balls’ Could Make Meltdowns a Thing of the Past
Daniel Oberhaus | Wired
“Not only will these reactors be smaller and more efficient than current nuclear power plants, but their designers claim they’ll be virtually meltdown-proof. Their secret? Millions of submillimeter-size grains of uranium individually wrapped in protective shells. It’s called triso fuel, and it’s like a radioactive gobstopper.”


A Plan to Redesign the Internet Could Make Apps That No One Controls
Will Douglas Heaven | MIT Technology Review
“[John Perry] Barlow’s ‘home of Mind’ is ruled today by the likes of Google, Facebook, Amazon, Alibaba, Tencent, and Baidu—a small handful of the biggest companies on earth. Yet listening to the mix of computer scientists and tech investors speak at an online event on June 30 hosted by the Dfinity Foundation…it is clear that a desire for revolution is brewing.”


To Save the World, the UN Is Turning It Into a Computer Simulation
Will Bedingfield | Wired
“The UN has now announced its new secret recipe to achieve [its 17 sustainable development goals or SDGs]: a computer simulation called Policy Priority Inference (PPI). …PPI is a budgeting software—it simulates a government and its bureaucrats as they allocate money on projects that might move a country closer to an SDG.”

Image credit: Benjamin Suter / Unsplash

