When Apple announced its Security Bounty Program last year, researchers lined up to locate potentially dangerous bugs, keeping them secret in exchange for potentially large payouts. Developer Jeff Johnson promptly told Apple about a zero-day exploit that gives malicious actors access to a Safari browser user’s private files — an issue affecting even the beta version of macOS Big Sur. But he claims the company left the flaw unpatched for over six months, leading Johnson to give up on the bounty program and describe the company’s efforts as “security theater.”
The exploit is troubling: A Safari user tricked into downloading a seemingly innocuous file from a website can allow an attacker to create a dangerously modified clone of Safari, which macOS then treats as the original app. “Any restricted file that is accessible to Safari” then becomes accessible to the attacker, who can automate the sending of what should have been protected files to the attacker’s server.
As Johnson explains, this exploit is possible because Apple’s Transparency, Consent, and Control (TCC) privacy protection system allows exceptions that only look at the app’s identifier, not where the file is being run from, and “only superficially checks the code signature of the app.” Consequently, a modified copy of Safari can be run from the wrong directory without triggering TCC protection, a problem that spans macOS 10.14 (Mojave), 10.15 (Catalina), and 11 (Big Sur), exposing untold millions of consumers and businesses to unauthorized sharing of their supposedly secure private data.
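The logic Johnson describes can be pictured with a toy access-control check (a hypothetical Python illustration of the flaw, not Apple's actual TCC code): an exception keyed only to the app's bundle identifier lets a modified clone running from any directory inherit the original app's privileges, while a stricter check would also pin the install location and verify the full code signature.

```python
from dataclasses import dataclass

@dataclass
class App:
    bundle_id: str         # e.g. "com.apple.Safari"
    path: str              # where the binary actually lives
    signature_valid: bool  # result of a full (deep) code-signature check

TCC_EXCEPTIONS = {"com.apple.Safari"}

def tcc_allows_flawed(app: App) -> bool:
    # The flaw described in the article: only the identifier is consulted,
    # so a modified clone anywhere on disk passes the check.
    return app.bundle_id in TCC_EXCEPTIONS

def tcc_allows_strict(app: App) -> bool:
    # A stricter check also pins the location and verifies the signature.
    return (app.bundle_id in TCC_EXCEPTIONS
            and app.path == "/Applications/Safari.app"
            and app.signature_valid)

real = App("com.apple.Safari", "/Applications/Safari.app", True)
clone = App("com.apple.Safari", "/tmp/EvilSafari.app", False)

assert tcc_allows_flawed(real) and tcc_allows_flawed(clone)  # clone slips through
assert tcc_allows_strict(real) and not tcc_allows_strict(clone)
```

The toy `tcc_allows_flawed` check is all identifier, no provenance, which is exactly the gap the modified Safari clone exploits.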
Apart from the exploit itself, Johnson notes that Apple’s intermittent responses haven’t instilled confidence in either the speed or likelihood of timely payouts from the Security Bounty Program. Having reported the exploit in December 2019, on the day the company opened the Bounty Program, Johnson received a confirmation that Apple was planning to address the issue, but as of the end of June 2020, nothing has happened. That goes “well beyond the bounds” of a 90-day “reasonable disclosure” window, Johnson says, and it’s at least the second time this has happened to him personally. It’s “becoming obvious that I will never get paid a bounty by Apple for anything I’ve reported to them, or at least not within a reasonable amount of time.”
Complaints regarding Apple’s slow responses to zero-day bug reports predate the Security Bounty Program and include back-and-forth exchanges between Apple and Google’s Project Zero security teams. Johnson’s story of delayed responses and problematic payouts certainly isn’t unique, but it arrives with the warning to users that “macOS privacy protections are mainly security theater,” harming legitimate Mac developers while permitting malicious actors to weasel through cracks. “You have the right to know that the systems you rely on for protection are not actually protecting you,” Johnson says, adding that despite claims to the contrary, “Apple’s debilitating lockdown of the Mac is not justified by alleged privacy and security benefits.”
Yesterday, Apple told Johnson the company is still investigating the exploit. We’ll update this article if and when Apple patches the bug in the beta version of Big Sur, which focuses a lot of attention on improvements to Safari.
Brock Pierce, an entrepreneur, crypto venture capitalist, and former child star, announced his run for President of the United States on Twitter on July 5. His tweet stated:
“I, Brock Pierce, am running for President of the United States of America.”
Pierce’s campaign site states that he is a digital currency pioneer who has raised more than $5 billion for the companies he has founded. Pierce is the Chairman of the Bitcoin Foundation and a co-founder of EOS Alliance, Block.one, Blockchain Capital, Tether, and Mastercoin (the first ICO). His website, sparse on details, does not say whether he is seeking a political party’s nomination or running as an independent.
Facial recognition systems are a powerful AI innovation that perfectly showcase The First Law of Technology: “technology is neither good nor bad, nor is it neutral.” On one hand, law-enforcement agencies claim that facial recognition helps to effectively fight crime and identify suspects. On the other hand, civil rights groups such as the American Civil Liberties Union have long maintained that unchecked facial recognition capability in the hands of law-enforcement agencies enables mass surveillance and presents a unique threat to privacy.
Research has also shown that even mature facial recognition systems have significant racial and gender biases; that is, they tend to perform poorly when identifying women and people of color. In 2018, a researcher at MIT showed that many top image classifiers misclassify lighter-skinned male faces with error rates of 0.8% but misclassify darker-skinned female faces with error rates as high as 34.7%. More recently, the ACLU of Michigan filed a complaint in what is believed to be the first known case in the United States of a wrongful arrest caused by a false facial recognition match. These biases can make facial recognition technology particularly harmful in the context of law enforcement.
One example that has received attention recently is “Depixelizer.”
The project uses a powerful AI technique called a Generative Adversarial Network (GAN) to reconstruct blurred or pixelated images; however, machine learning researchers on Twitter found that when Depixelizer is given pixelated images of non-white faces, it reconstructs those faces to look white. For example, researchers found it reconstructed former President Barack Obama as a white man and Representative Alexandria Ocasio-Cortez as a white woman.
While the creator of the project probably didn’t intend to achieve this outcome, it likely occurred because the model was trained on a skewed dataset that lacked diversity of images, or perhaps for other reasons specific to GANs. Whatever the cause, this case illustrates how tricky it can be to create an accurate, unbiased facial recognition classifier without specifically trying.
Preventing the abuse of facial recognition systems
Currently, there are three main ways to safeguard the public interest from abusive use of facial recognition systems.
First, at a legal level, governments can implement legislation to regulate how facial recognition technology is used. Currently, there is no US federal law or regulation governing the use of facial recognition by law enforcement. Many local governments are passing laws that either completely ban or heavily regulate the use of facial recognition systems by law enforcement; however, this progress is slow and may result in a patchwork of differing regulations.
Second, at a corporate level, companies can take a stand. Tech giants are currently evaluating the implications of their facial recognition technology. In response to the recent momentum of the Black Lives Matter movement, IBM has stopped development of new facial recognition technology, and Amazon and Microsoft have temporarily paused their collaborations with law enforcement agencies. However, facial recognition is no longer a domain limited to large tech firms. Many facial recognition systems are available in the open-source domain, and a number of smaller tech startups are eager to fill any gap in the market. For now, newly enacted privacy laws like the California Consumer Privacy Act (CCPA) do not appear to provide adequate defense against such companies. It remains to be seen whether future interpretations of the CCPA (and other new state laws) will ramp up legal protections against questionable collection and use of such facial data.
Lastly, at an individual level, people can attempt to take matters into their own hands and take steps to evade or confuse video surveillance systems. A number of accessories, including glasses, makeup, and t-shirts, are being created and marketed as defenses against facial recognition software. Some of these accessories, however, make the person wearing them more conspicuous. They may also not be reliable or practical. Even if they worked perfectly, it is not possible for people to wear them constantly, and law enforcement officers can still ask individuals to remove them.
What is needed is a solution that allows people to block AI from acting on their own faces. Since privacy-encroaching facial recognition companies rely on social media platforms to scrape and collect user facial data, we envision adding a “DO NOT TRACK ME” (DNT-ME) flag to images uploaded to social networking and image-hosting platforms. When platforms see an image uploaded with this flag, they respect it by adding adversarial perturbations to the image before making it available to the public for download or scraping.
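As a sketch of how a platform might implement this flow (hypothetical function names and a random-noise stand-in for a real adversarial perturbation, which would instead be crafted against face-matching models):

```python
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 2.0) -> np.ndarray:
    """Stand-in for an adversarial perturbation: a small, bounded change
    (here, random noise clipped to +/- epsilon per pixel channel)."""
    noise = np.random.uniform(-epsilon, epsilon, image.shape)
    return np.clip(image.astype(float) + noise, 0, 255).astype(image.dtype)

def handle_upload(image: np.ndarray, dnt_me: bool):
    """Hypothetical platform-side flow: keep a clean internal copy for the
    platform's own AI tasks, but serve a perturbed copy publicly whenever
    the DO NOT TRACK ME flag is set on the upload."""
    internal_copy = image.copy()
    public_copy = perturb(image) if dnt_me else image
    return internal_copy, public_copy

img = np.full((4, 4, 3), 128, dtype=np.uint8)
internal, public = handle_upload(img, dnt_me=True)
assert np.array_equal(internal, img)                 # platform keeps clean pixels
assert np.abs(public.astype(int) - 128).max() <= 2   # public copy barely changed
```

The key design point is that the clean and public copies diverge only when the user opts in, so the platform loses nothing by honoring the flag.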
Facial recognition, like many AI systems, is vulnerable to small-but-targeted perturbations which, when added to an image, force a misclassification. Adding adversarial perturbations to facial recognition systems can stop them from linking two different images of the same person. Unlike physical accessories, these digital perturbations are nearly invisible to the human eye and maintain an image’s original visual appearance.
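The mechanics can be sketched with a toy example (our own illustration in NumPy; real attacks target deep face-matching models, not a linear scorer like this one). The idea of an FGSM-style perturbation is to nudge each input feature slightly against the gradient of the model's score, flipping the decision while keeping every individual change small:

```python
import numpy as np

# Toy linear "matcher": score = w . x, positive score -> "match".
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # model weights
x = rng.normal(size=100)   # feature vector of an image
if w @ x < 0:
    x = -x                 # ensure the clean input scores as a "match"

def classify(v: np.ndarray) -> str:
    return "match" if w @ v > 0 else "no match"

# FGSM-style step: move against the gradient of the score. For a linear
# model that gradient is just w, so we step by -epsilon * sign(w).
# Epsilon is chosen just large enough to flip this example while keeping
# each per-feature change tiny relative to the input's scale.
epsilon = 1.1 * (w @ x) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

assert classify(x) == "match"
assert classify(x_adv) == "no match"  # tiny change, flipped decision
```

Because the per-feature change is bounded by a small epsilon, the perturbed input stays visually indistinguishable from the original even though the classifier's output flips.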
This approach of DO NOT TRACK ME for images is analogous to the DO NOT TRACK (DNT) approach in the context of web-browsing, which relies on websites to honor requests. Much like browser DNT, the success and effectiveness of this measure would rely on the willingness of participating platforms to endorse and implement the method – thus demonstrating their commitment to protecting user privacy. DO NOT TRACK ME would achieve the following:
Prevent abuse: Some facial recognition companies scrape social networks in order to collect large quantities of facial data, link them to individuals, and provide unvetted tracking services to law enforcement. Social networking platforms that adopt DNT-ME will be able to block such companies from abusing the platform and defend user privacy.
Integrate seamlessly: Platforms that adopt DNT-ME will still receive clean user images for their own AI-related tasks. Given the special properties of adversarial perturbations, they will not be noticeable to users and will not affect user experience of the platform negatively.
Encourage long-term adoption: In theory, users could introduce their own adversarial perturbations rather than relying on social networking platforms to do it for them. However, perturbations created in a “black-box” manner are noticeable and are likely to break the functionality of the image for the platform itself. In the long run, a black-box approach is likely to either be dropped by the user or antagonize the platforms. DNT-ME adoption by social networking platforms makes it easier to create perturbations that serve both the user and the platform.
Set precedent for other use cases: As has been the case with other privacy abuses, tech firms’ inaction in containing abuses on their platforms has led to strong, and perhaps over-reaching, government regulation. Recently, many tech companies have taken proactive steps to prevent their platforms from being used for mass surveillance. For example, Signal recently added a filter to blur any face shared using its messaging platform, and Zoom now provides end-to-end encryption on video calls. We believe DNT-ME presents another opportunity for tech companies to ensure the technology they develop respects user choice and is not used to harm people.
It’s important to note, however, that although DNT-ME would be a great start, it only addresses part of the problem. While independent researchers can audit facial recognition systems developed by companies, there is no mechanism for publicly auditing systems developed within the government. This is concerning considering these systems are used in such important cases as immigration, customs enforcement, court and bail systems, and law enforcement. It is therefore absolutely vital that mechanisms be put in place to allow outside researchers to check these systems for racial and gender bias, as well as other problems that have yet to be discovered.
It is the tech community’s responsibility to avoid harm through technology, but we should also actively create systems that repair harm caused by technology. We should be thinking outside the box about ways we can improve user privacy and security, and meet today’s challenges.
Several months ago, VR Heaven — a blog that we, Aaron Santiago (VR software engineer) and Winston Nguyen (VR marketer), run — posted an informal survey on Reddit asking participants how often they experienced VR motion sickness and their gender.
They could answer frequently, sometimes, rarely or never. These are the results:
The full data and our collection method are at VR Heaven. Note: This is not a scientific survey; it’s informal. It had 292 participants, most coming from Reddit, with some coming from Discord groups and family/friends we reached out to on Facebook.
A common explanation is that women are simply more susceptible to motion sickness in general. If you took that into account, the discrepancy in VR sickness should disappear, meaning VR itself wouldn’t have anything to do with gender-related motion sickness.
Interestingly enough, our survey showed a different result. We asked people how often they experience motion sickness in cars, boats and airplanes with the same multiple choice answers as before: frequently, sometimes, rarely, or never.
After taking into account people’s susceptibility to general motion sickness, we still found a correlation between VR motion sickness and gender:
So what is it about VR that factors into the relationship between sex and motion sickness? There are several plausible theories out there:
Men dominate the tech industry, so it’s no surprise to hear theories about how headsets were designed with specifications based on men. Whether or not this is junk science, the main argument centers on interpupillary distance:
Interpupillary distance (IPD) is the distance between one eye pupil to the other. Some experts say that if the IPD of a headset is too high, it leads to discomfort and can cause motion sickness.
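To get a rough feel for the mismatch argument (a back-of-the-envelope calculation of our own, not figures from the research): the two eyes converge by an angle of roughly 2·arctan(IPD / 2d) to fixate a point at distance d, so a headset that renders for a larger IPD than the user actually has shifts that angle for every object in the scene.

```python
import math

def vergence_deg(ipd_m: float, distance_m: float) -> float:
    """Angle (degrees) between the two eyes' lines of sight when
    fixating a point at the given distance."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

user = vergence_deg(0.060, 2.0)      # a user with a 60 mm IPD, object 2 m away
rendered = vergence_deg(0.068, 2.0)  # headset rendering as if IPD were 68 mm
mismatch = rendered - user           # persistent angular error, in degrees
```

The absolute numbers here (60 mm vs. 68 mm) are illustrative; the point is that the error is constant and compounds over a whole session, which is one proposed mechanism for the discomfort described above.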
According to a 2017 report from Statista, men play more games from genres such as 3D action games and first-person shooters, which are both types of flatscreen games that improve mental rotation.
Genres that don’t improve mental rotation, such as puzzle games and family/farm simulators, are the ones women play more often, while men gravitate toward first-person shooters. This difference between female gamers and male gamers could explain why women still get sick more often.
Compared to men, women are much less likely to call themselves “gamers.” Even though the statistics say that women play games more than men, it’s not a part of today’s culture for female gamers to take this hobby on as an identity.
Even with the advent of high-quality standalone 6-degree-of-freedom headsets, VR is still considered an enthusiast’s hobby, and software purchases in VR heavily favor games. It’s easy to see why female users might be hesitant to own and use a VR headset consistently.
Many users say that the more you use VR, the less you get sick in VR, sometimes referred to as “VR legs.” If women are less likely to use the technology often, then they also would grow their VR legs less often.
There are a couple other reasons for why women might get VR sickness more than men:
Hormonal differences. Studies have shown that women are most vulnerable to VR sickness during ovulation.
Differences in depth cue recognition between genders, although the experiments testing this hypothesis were inconclusive.
Does gender play a role in overcoming VR motion sickness?
We collected data about VR legs, asking participants if they were able to overcome VR motion sickness.
More than two-thirds of all participants who experienced VR sickness were able to grow their VR legs.
Interestingly, our female respondents overcame VR sickness much less often, at less than half.
This suggests that there is some relation between VR legs and gender. It could be that women are physically less able to overcome VR sickness, or like we suggested above, they are less likely to use VR consistently.
Direction for future research
There is little research on the effects of a mismatched IPD on VR sickness, and even less research on the effect of gaming experience. We would love to see more research on VR legs as well. Research like this could help the technology become more inclusive and more palatable for everyone, and could bring on the fabled VR revolution even sooner.