The RISKS Digest Volume 31 Issue 58

The Risks Digest

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 31 Issue 58

Saturday 15 February 2020

Contents


The Intelligence Coup of the Century: For decades, the CIA read the encrypted communications of allies and adversaries
Greg Miller

The US Fears Huawei Because It Knows How Tempting Backdoors Are
WIRED

U.S. Charges Chinese Military Officers in 2017 Equifax Hacking
NYTimes

Voatz: Ballots, Blockchains, and Boo-boos?
MIT via PGN retitling

Lax FAA oversight allowed Southwest to put millions of passengers at risk, IG says
WashPost

Pentagon ordered to halt work on Microsoft’s JEDI cloud contract after Amazon protests
WashPost

Linux is ready for the end of time
ZDNet

Google redraws the borders on maps depending on who’s looking
WashPost

Car renter paired car to FordPass, could still control car long after return
ZDNet

European Parliament urges oversight for AI
Politico Europe

AI can create new problems as it solves old ones
Fortune

AI and Ethics
NJ Tech Weekly

The future of software testing in 2020: Here’s what’s coming
Functionize

Will Past Criminals Reoffend? Humans Are Terrible at Guessing, and Computers Aren’t Much Better
Scientific American

Apple joins FIDO Alliance, commits to getting rid of passwords
ZDNet

IRS paper forms vs. COVID-19
Dan Jacobson

The Politics of Epistemic Fragmentation
Medium

Why Is Social Media So Addictive?
Mark D. Griffiths

The high cost of a free coding bootcamp
The Verge

Debunking the lone woodpecker theory
Ed Ravin

Re: Benjamin Netanyahu’s election app potentially exposed data for every Israeli voter
Amos Shapir

Re: Backhoes, squirrels, and woodpeckers as DoS vectors
Tom Russ

Re: A lazy fix 20 years ago means the Y2K bug is taking down computers, now
Martin Ward

Re: Autonomous vehicles
Stephen Mason

Info on RISKS (comp.risks)


The Intelligence Coup of the Century: For decades, the CIA read the encrypted communications of allies and adversaries (Greg Miller)

“Peter G. Neumann” <neumann@csl.sri.com>

Tue, 11 Feb 2020 08:53:12 PST

Greg Miller, *The Washington Post*, 11 Feb 2020
<https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/>

For more than half a century, governments all over the world trusted a
single company to keep the communications of their spies, soldiers and
diplomats secret. That company was secretly run by the CIA, which had the
ability to read all those communications for decades.

The company, Crypto AG, got its first break with a contract to build
code-making machines for U.S. troops during World War II. Flush with cash,
it became a dominant maker of encryption devices for decades, navigating
waves of technology from mechanical gears to electronic circuits and,
finally, silicon chips and software.

The Swiss firm made millions of dollars selling equipment to more than 120
countries well into the 21st century. Its clients included Iran, military
juntas in Latin America, nuclear rivals India and Pakistan, and even the
Vatican.

But what none of its customers ever knew was that Crypto AG was secretly
owned by the CIA in a highly classified partnership with West German
intelligence. These spy agencies rigged the company's devices so they could
easily break the codes that countries used to send encrypted messages.

The decades-long arrangement, among the most closely guarded secrets of the
Cold War, is laid bare in a classified, comprehensive CIA history of the
operation obtained by The Washington Post and ZDF, a German public
broadcaster, in a joint reporting project.

The account identifies the CIA officers who ran the program and the
company executives entrusted to execute it. It traces the origin of the
venture as well as the internal conflicts that nearly derailed it. It
describes how the U.S. and its allies exploited other nations' gullibility
for years, taking their money and stealing their secrets.

The operation, known first as `Thesaurus' and later `Rubicon', ranks among
the most audacious in CIA history.

  [Very long, but remarkably illuminating item abridged for RISKS. PGN]


The US Fears Huawei Because It Knows How Tempting Backdoors Are (WIRED)

Gabe Goldberg <gabe@gabegold.com>

Thu, 13 Feb 2020 19:04:06 -0500

https://www.wired.com/story/huawei-backdoors-us-crypto-ag/

  [See also
  https://www.businessinsider.com/us-accuses-huawei-of-spying-through-law-enforcement-backdoors-2020-2
  PGN]


U.S. Charges Chinese Military Officers in 2017 Equifax Hacking (NYTimes)

Monty Solomon <monty@roscom.com>

Mon, 10 Feb 2020 14:17:46 -0500

https://www.nytimes.com/2020/02/10/us/politics/equifax-hack-china.html
https://www.washingtonpost.com/national-security/justice-dept-charges-four-members-of-chinese-military-in-connection-with-2017-hack-at-equifax/2020/02/10/07a1f7be-4c13-11ea-bf44-f5043eb3918a_story.html

  [Let's not forget the massive loss of personal data from the attack on the
  Office of Personnel Management, which might be even more damaging.
  Reported (for example) in RISKS-28.69,70,71,72,75,80,83,94,95,96 in 2015.
  PGN]


Voatz: Ballots, Blockchains, and Boo-boos? (MIT via PGN retitling)

“Peter G. Neumann” <neumann@csl.sri.com>

Thu, 13 Feb 2020 17:01:05 PST

This is an outstanding paper:

Michael A. Specter, James Koppel, Daniel Weitzner (MIT)
The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz,
the First Internet Voting Application Used in U.S. Federal Elections
https://internetpolicy.mit.edu/wp-content/uploads/2020/02/SecurityAnalysisOfVoatz_Public.pdf

See also some of the subsequent items:

"Their security analysis of the application, called Voatz, pinpoints a
number of weaknesses, including the opportunity for hackers to alter, stop,
or expose how an individual user has voted."
http://news.mit.edu/2020/voting-voatz-app-hack-issues-0213

Voting on Your Phone: New Elections App Ignites Security Debate,
*The New York Times*, 13 Feb 2020
https://www.nytimes.com/2020/02/13/us/politics/voting-smartphone-app.html

Kim Zetter
https://www.vice.com/en_us/article/akw7mp/sloppy-mobile-voting-app-used-in-four-states-has-elementary-security-flaws

The general consensus seems to be that Voatz's responses neither address
their criticisms nor give any reasonable assurance.
https://blog.voatz.com/?p=1209
https://www.prnewswire.com/news-releases/new-york-times-profiles-voatz-301004581.html


Lax FAA oversight allowed Southwest to put millions of passengers at risk, IG says (WashPost)

Richard Stein <rmstein@ieee.org>

Tue, 11 Feb 2020 19:33:16 -0800

https://www.washingtonpost.com/local/trafficandcommuting/lax-faa-oversight-allowed-southwest-to-put-millions-of-passengers-at-risk-ig-says/2020/02/11/a3fdb714-4d22-11ea-b721-9f4cdc90bc1c_story.html

  [That's "lax", not "LAX". PGN]

"The Federal Aviation Administration allowed Southwest Airlines to put
millions of passengers at risk by letting the airline operate planes that
did not meet U.S. aviation standards and by failing to provide its own
inspectors with the training needed to ensure the highest degree of safety,
according to a report released Tuesday by the Department of Transportation's
inspector general."

The flying public experiences elevated risk when FAA inspectors are not
qualified or are under-trained to competently fulfill mandated
assignments. Trust-but-verify rigor is required to ensure life-critical
operational readiness. Coffee-cup inspections don't cut it.

"The FAA's overreliance on industry-provided risk assessments and failure to
dig deeply into many of those assessments is a broader concern raised by
several outside experts and reviews following the crashes of two Boeing 737
Max jets that killed 346 people..."

See http://catless.ncl.ac.uk/Risks/31/17#subj2.1 for an exposé on industry
self-regulation efforts, and why the US government promotes the
practice. Alternatively, the EU's precautionary regulatory approach
might reduce the frequency of disruptive brand-outrage incidents and
declining product orders.


Pentagon ordered to halt work on Microsoft’s JEDI cloud contract after Amazon protests (WashPost)

Gabe Goldberg <gabe@gabegold.com>

Fri, 14 Feb 2020 10:17:27 -0500

A lawsuit brought by Amazon has forced the Pentagon to again pump the brakes
on an advanced cloud computing system it sought for years, prompting yet
another delay the military says will hurt U.S. troops and hinder its
national security mission.

A federal judge Thursday ordered the Pentagon to halt work on the Joint
Enterprise Defense Infrastructure cloud computing network, known as JEDI, as
the court considers allegations that President Trump improperly interfered
in the bidding process.

The order comes just one day before the Defense Department had planned to
“go live'' with what it has long argued is a crucial national defense
priority.

https://www.washingtonpost.com/business/2020/02/13/court-orders-pentagon-halt-work-microsofts-jedi-cloud-contract-after-amazon-protests/

  Halt work? ...one day before? ...a crucial national defense priority?
  Politicize technology decisions? Sounds about right.


Linux is ready for the end of time (ZDNet)

Gabe Goldberg <gabe@gabegold.com>

Fri, 14 Feb 2020 10:21:08 -0500

2038 is for Linux what Y2K was for mainframe and PC computing in 2000, but
the fixes are underway to make sure all goes well when that fatal time rolls
around. ...

But look at it this way: After we fix this, we won't have to worry about
64-bit Linux running out of seconds until 15:30:08 GMT Sunday, December 4,
292,277,026,596. Personally, I'm not going to worry about that one.

https://www.zdnet.com/article/linux-is-ready-for-the-end-of-time/
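  [The arithmetic behind both dates is easy to check. A minimal Python
  sketch, assuming a standard Unix epoch and ignoring leap seconds:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t overflows one second after this instant.
INT32_MAX = 2**31 - 1
y2038 = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(y2038.isoformat())  # 2038-01-19T03:14:07+00:00

# One tick later the counter wraps negative, landing in 1901.
wrapped = datetime.fromtimestamp(-2**31, tz=timezone.utc)
print(wrapped.isoformat())  # 1901-12-13T20:45:52+00:00

# datetime cannot represent the year a signed 64-bit counter runs out,
# so estimate it with the average Gregorian year (365.2425 days).
SECONDS_PER_YEAR = 31_556_952
approx_year = 1970 + (2**63 - 1) // SECONDS_PER_YEAR
print(approx_year)  # on the order of 292 billion
```

  Note that negative timestamps may raise an error on some platforms, and
  the 64-bit figure is only an average-calendar estimate. PGN-ed]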


Google redraws the borders on maps depending on who’s looking (WashPost)

Richard Stein <rmstein@ieee.org>

Fri, 14 Feb 2020 12:21:04 -0800

Dynamic map border revisions: a catastrophic recipe for navigation errors
and munitions deployment. 


Car renter paired car to FordPass, could still control car long after return (ZDNet)

Mary M Shaw <mary.shaw@cs.cmu.edu>

Fri, 14 Feb 2020 17:53:13 -0500

Someone rented a Ford from Enterprise and paired it with FordPass to get
remote control. Five months later he could still start and stop the engine,
lock and unlock the car, and track it—remotely. The same thing happened to
him a second time.

Recent piece in ZDNet:
https://www.zdnet.com/article/he-returned-the-rental-car-long-ago-he-can-still-turn-the-engine-on-via-an-app/
Earlier report in Ars Technica (linked below).

Text of the ZDNet article:

*He returned the rental car long ago. He can still turn the engine on via an
app*

Imagine you've parked your rental car and are walking away. Suddenly, the
car starts up, seemingly on its own. Yes, it's another day in technology
making everything better. ...

You think we're living in the end of times? No, this is just a transitional
period between relative sanity and robot
inanity.

The problem, of course, is that our deep, mindless reliance on technology is
causing severe disruption.

I'm moved to this fortune cookie thought by the tale of a man who rented a
Ford Expedition from Enterprise. He gave it back and, five months later, he
discovered that he could still start its engine, switch it off, lock and
unlock it and even track it. Remotely, that is.

You see, as Ars Technica described last October
<https://arstechnica.com/information-technology/2019/10/five-months-after-returning-rental-car-man-still-has-remote-control/>,
Masamba Sinclair had connected his rental car to FordPass, an app that's
presumably very useful. Who wouldn't want to remotely unlock the doors of a
car someone else is renting? Just to imagine their faces, you understand. It
so happened that Sinclair hadn't unpaired his app from the car. Cue the
absurdity.

At the time, I thought Sinclair's tale entertaining. But surely the app's
vulnerability would be patched, secured or whatever technical verbal emoji
you might choose.

Yet Sinclair just rented another Ford—this time, a Mustang. And what do
you know, four days after he'd returned it, he could still make the car do
things from his phone. Which could have been a touch bemusing to anyone who
happened to have subsequently rented it.
<https://arstechnica.com/information-technology/2020/02/rental-car-agency-continues-to-give-remote-control-long-after-cars-are-returned/>

It seems that Ford does offer warning notifications inside the car when it's
paired with someone's phone.

Yet if subsequent renters or, indeed, the rental company's cleaners don't
react to such notifications—or simply don't see them—a random somebody
who happens to still have an app paired to the car may incite some remote
action, like a ghostly jump start.

You might think Sinclair should have already disconnected his app from any
car he'd previously rented. Some might grunt, though, that it shouldn't be
his responsibility.

For its part, Enterprise gave Ars a statement that began: "The safety and
privacy of our customers is an important priority for us as a company." An
important priority, but not the most important priority?

The company added: "Following the outreach last fall, we updated our car
cleaning guidelines related to our master reset procedure. Additionally, we
instituted a frequent secondary audit process in coordination with Ford. We
also started working with Ford and are very near the completion of testing
software with them that will automate the prevention of FordPass pairing by
rental customers." Here's the part that always make me curl up on my sofa and offer
intermittent bleats. Why is it that when technologies such as these are
implemented, the creators don't sufficiently consider the potential
consequences and prevent them from happening?

If Sinclair could so easily keep his app paired to any Ford he'd rented --
and this surely doesn't just apply to Fords—why wasn't it easy for the
Ford and/or Enterprise to ensure it couldn't happen?

Why does it take a customer to point out the patent insecurity of the system
before companies actually do anything about it?

Perhaps one should be grateful that at least nothing grave occurred. But
imagine if someone of brittle brains realized they could be the ghost in a
machine and really scare a stranger.

Too often, tech companies place the onus on customers to work things out for
themselves and even to save themselves. Or, worse, to only discover a breach
when it's too late.

Wouldn't it be bracing if tech companies, I don't know, showed a little
responsibility in advance? 


European Parliament urges oversight for AI (Politico Europe)

“Peter G. Neumann” <neumann@csl.sri.com>

Thu, 13 Feb 2020 10:08:23 PST

Lawmakers in Strasbourg adopted a resolution calling for strong oversight of
artificial intelligence technology, approving the text by hand vote while
rejecting six potential amendments.
<https://www.europarl.europa.eu/doceo/document/B-9-2020-0094_EN.pdf>

The document, which was adopted by the Parliament's Committee on Internal
Market and Consumer Protection (IMCO) late last month, marks the first time
since new lawmakers were elected last year that the assembly takes a
position on what kind of safeguards are needed for automated decision-making
processes. It comes as political leaders at the European Commission, the
EU's executive body, are set to initiate far-reaching legislation on
artificial intelligence next week. 


AI can create new problems as it solves old ones (Fortune)

Gabe Goldberg <gabe@gabegold.com>

Fri, 14 Feb 2020 18:51:07 -0500

Some of the world's biggest companies are relying on AI to build a better
workforce. But be warned: The tech can create new problems even as it
solves old ones. [...]

In his Amsterdam offices, about an hour's drive from his company's largest
non-American ketchup factory, Pieter Schalkwijk spends his days crunching
data about his colleagues. And trying to recruit more: As head of Kraft
Heinz's talent acquisition for Europe, the Middle East, and Africa,
Schalkwijk is responsible for finding the right additions to his region's
5,600-person team. It's a high-volume task. Recently, for an entry-level trainee program,
Schalkwijk received 12,000 applications—for 40 to 50 openings. Which is
why, starting in the fall of 2018, thousands of recent university graduates
each spent half an hour playing video games. “I think the younger
generation is a bit more open to this way of recruiting,'' Schalkwijk says.

The games were cognitive and behavioral tests developed by startup
Pymetrics, which uses artificial intelligence to assess the personality
traits of job candidates. One game asked players to inflate balloons by
tapping their keyboard space bar, collecting (fake) money for each hit until
they chose to cash in, or until the balloon burst, destroying the
payoff. (Traits evaluated: appetite for and approach to risk.) Another
measured memory and concentration, asking players to remember and repeat
increasingly long sequences of numbers. Other games registered how generous
and trusting (or skeptical) applicants might be, giving them more fake money
and asking whether they wanted to share any with virtual partners. [...]

Still, he too is proceeding cautiously. For example, Kraft Heinz will likely
never make all potential hires play the Pymetrics games. “For generations
that haven't grown up gaming, there's still a risk'' of age discrimination,
Schalkwijk says.

He's reserving judgment on the effectiveness of Pymetrics until this
summer's performance reviews, when he'll get the first full assessment of
whether this machine-assisted class of recruits is better or worse than
previous, human-hired ones. The performance reviews will be data-driven but
conducted by managers with recent training in avoiding unconscious
bias. There's a limit to what the company will delegate to the machines. AI “can help us and it will help us, but we need to keep checking that it's
doing the right thing, Humans will still be involved for quite some time to
come.'' https://fortune.com/longform/hr-technology-ai-hiring-recruitment/ But ... how can it work without quantum computing hosted blockchain? 


AI and Ethics (NJ Tech Weekly)

DrM <notable@mindspring.com>

Thu, 13 Feb 2020 07:06:49 -0500

https://njtechweekly.com/ai-and-ethics-part-1-will-vulnerable-ai-disrupt-the-2020-elections/

  [We're doomed... Rebecca Mercuri]


The future of software testing in 2020: Here’s what’s coming (Functionize)

Gabe Goldberg <gabe@gabegold.com>

Wed, 12 Feb 2020 18:12:27 -0500

Artificial intelligence and machine learning aren't the only changes to
expect in QA, but they're a big part of it.

https://www.functionize.com/blog/the-future-of-software-testing-in-2020-heres-whats-coming/


Will Past Criminals Reoffend? Humans Are Terrible at Guessing, and Computers Aren’t Much Better (Scientific American)

Richard Stein <rmstein@ieee.org>

Fri, 14 Feb 2020 15:21:07 -0800

https://www.scientificamerican.com/article/will-past-criminals-reoffend-humans-are-terrible-at-guessing-and-computers-arent-much-better/

"Although all of the researchers agreed that algorithms should be applied
cautiously and not blindly trusted, tools such as COMPAS and LSI-R are
already widely used in the criminal justice system. 'I call it techno
utopia, this idea that technology just solves our problems,' Farid says. 'If
the past 20 years have taught us anything, [they] should have taught us that
that is simply not true.'" In "Talking to Strangers: What We Should Know about the People We Don't
Know," Malcolm Gladwell discusses judges during an arraignment hearing to
determine "own recognizance release," or to imprison a suspect based on
numerous factors. What tips a judge's decision to release or hold? Judges study prior criminal history, the crime, eyeball the suspect, etc. Do
they always make a correct determination? No. News reports tragically
document instances in which a judge misread a suspect's public-safety
assessment and the suspect, released on bail, committed another crime.

https://www.govtech.com/public-safety/Civil-Rights-Groups-Call-for-Reforms-on-Use-of-Algorithms-to-Determine-Bail-Risk.html
discusses algorithmic public safety assessments that can assist judicial
bail decisions.

Risk: State or Federal legislation that establishes algorithmic priority
over human judicial ruling. 


Apple joins FIDO Alliance, commits to getting rid of passwords (ZDNet)

Gabe Goldberg <gabe@gabegold.com>

Wed, 12 Feb 2020 18:07:46 -0500

Passwords are a notorious security mess. The FIDO Alliance wants to replace
them with better, more secure technology and now Apple is with them in this
effort.

https://www.zdnet.com/article/apple-joins-fido-alliance-commits-to-getting-rid-of-passwords/

  ...I wonder about non-tech people reacting to and adopting this...


IRS paper forms vs. COVID-19

Dan Jacobson <jidanni@jidanni.org>

Fri, 14 Feb 2020 12:50:16 +0800

In some cases* the US IRS still accepts only paper tax forms. Compare this
to the government's FBAR form, which can be filed only electronically.
But in some COVID-19 areas, paper mail is no longer an option...
* E.g., Form 5329, when filed separately. 


The Politics of Epistemic Fragmentation (Medium)

John Ohno <john.ohno@gmail.com>

Wed, 12 Feb 2020 18:49:02 -0500

https://medium.com/the-weird-politics-review/the-politics-of-epistemic-fragmentation-175d6bbb98a4?source=friends_link&sk=eaa79383d2d43444507d0053f9803e1b

Over the past few years, it has seemed as though the only thing real news
outlets can agree on is the danger of *fake news*.

Foreign powers or domestic traitors are accused of engineering political
divisions, creating *polarization*, and seeding arbitrary disinformation for
the sole purpose of making it impossible for people from different
subcultures to communicate. This is blamed on the Internet (and, more
specifically, social media)—and there is some truth to this accusation.
However, as is often the case with new communication technologies, social
media has not accelerated this tendency towards disinformation so much as it
has made it more visible and legible.
<https://modernmythology.net/contra-ovadya-on-post-truth-83bb15acce7c?source=friends_link&sk=c0aed65c0f5befe2a1e241efd8d695e3>

When widespread Internet access broke down our sense of a collective
reality, what it was toppling was not the legacy of the Enlightenment, but
instead an approximately 100-year bubble in media centralization. Current
norms around meaning-making cannot survive the slow collision with
widespread private ownership of duplication & broadcast technologies.

These norms are built around an assumption that consensus is normal and
desirable among people who communicate with each other—in other words,
that whenever people calmly and rationally communicate, they will come to an
understanding about base reality. This ignores the role of power relations
in communication: in modern, liberal contexts, the party that can perform
calm diplomatic rationality the best will win, and the best way to remain
calm and diplomatic is to know that if you fail in your attempts at
diplomacy, a technically advanced army will continue that diplomacy through
more direct means. It also ignores the potential value of ideas (including
myths) to people who do not fully understand their mechanism of action --
what the rationalist community calls *Chesterton's Fence*.

Just as we benefit from medical innovations like SSRIs and anesthesia
without knowing how or why they work, many cultures benefit from beliefs
that aren't grounded in observation, deduction, or strong evidence that
they correspond to base reality—but, rather, by the fact that everybody
who didn't hold those beliefs eventually died for reasons that remain
obscure.

In situations of extreme cosmopolitanism, where people from different
cultures and environments communicate on equal terms, there will be
disagreements that cannot be dismissed as merely aesthetic preferences or
historical relics—but that nevertheless cannot be worked out through
debate or discussion, simply because discovering their material bases is a
project of immense complexity.

Epistemic fragmentation—the tendency for different people to have
different sources of knowledge and different, often conflicting,
understandings—is irreducible, and epistemic centralization—the
centralized control of shared sources of information—cannot provide a
universally-applicable shared understanding of the world.

We should be wary of attempts to solve this problem through `trust in
institutions'—in other words, through a return to the epistemic
centralization that characterized the twentieth century.

This epistemic centralization was produced by tight control over broadcast
communication—organizations were `trusted' because they had the power
(through reserves of capital, ownership of expensive equipment, and/or
explicit government support) to reach many people with the same messages,
but they were not `trustworthy' in the sense that they did not (and could
not) accurately report on reality. While plenty of these organizations
worked in good faith to be responsible and accurate, no handful of
organizations has the manpower to report upon and fact check everything
important. Organizational or institutional meaning-making is a slightly scaled-up form
of individual meaning-making.

An institution provides a structure for organizing individual work, and
this structure organizes flows of resources and information.

These flows control what information can be expressed externally by
enforcing broadcast norms, house style, determining what sections are
allocated to what topics and determining what counts as newsworthy based on
whether or not it fits into any of these topics, and so on; they control
what information can be expressed internally, based on norms about
professional communication, expectations about shared spaces (like DC
reporters socializing after-hours in particular bars, or tech and culture
beat journalists socializing on twitter where strict character counts force
a terse style), and social hierarchy and stigma around covering particular
topics; they control what material can even be effectively researched
through the control of resources like travel expenses, deadline length, and
materials for stunt-reporting.

All of these actions are essentially filters: they prevent journalists from
researching and reporting on a wide variety of things they would like to
cover, while producing incentives to cover a handful of specific things.
Because of this, no institution can produce better-quality meaning (i.e.,
meaning formed by serious consideration of a wider variety of sources) than
the individuals working for it could produce under a looser confederation,
assuming the resources necessary for access remained available.

Consensus reality is merely a side effect of ignoring or erasing the pieces
that cannot be made legible and cannot be made to fit any narrative or
model—and this erasure is political, in the sense that it shapes what can
be imagined and what can be spoken about.

We cannot effectively consider topics we are not allowed to discuss; we
cannot make good personal decisions about topics we cannot effectively
consider; we cannot make good collective decisions on topics about which we
cannot make good personal decisions; therefore, the soft-censorship
necessitated by the limited resources of the centralized meaning-making
that engineers the illusion of consensus reality prevents politics from
effectively addressing problems that affect only a few but that require
mass action and solidarity to solve
<https://medium.com/the-weird-politics-review/my-revolution-was-never-a-possibility-notes-on-adhd-anarchism-and-accelerationism-ed9d5113f9e0>
.

The private supplementation of centralized shared knowledge is insufficient.

The twentieth century model of broadcast media is an extension of earlier
models of (print-based) publishing: in the beginning, printing presses and
radio stations are expensive and a handful of early experimenters create
content for a handful of early adopters; as equipment costs drop, more
people get into the market, leading to a push to regulate and
re-centralize, helmed on one side by the biggest players in the market and
on the other by folks who are concerned about signal-to-noise ratio.

This leads to self-enforced standards—rules for journalism, for
instance—along with state-enforced measures to create a `legitimate' class and
separate it from `illegitimate' amateurs—copyright, spectrum
subdivisions, broadcast content rules.

Broadcast mechanisms have typically remained expensive, regardless of how
technology has progressed: prior to widespread Internet access, the
cheapest broadcast medium (in terms of the ability of an individual of
modest means to reach many people) was the production of xerox pamphlets --
tens of cents per copy, plus postage.

With the Internet, copying has a much lower cost & can be performed without
the direct, intentional involvement of recipients -- what costs do exist can
be automatically distributed more evenly, rather than being concentrated in
the hands of some central node. (Because of a historical mistake, the web concentrates costs centrally, but
peer to peer communications technologies do not.)
<https://medium.com/@enkiv2/there-was-never-an-open-web-69194f9b1cf1?source=friends_link&sk=7aed9c67a373e1334f671be2b0b78afc>

This breaks the economic justification for a tendency toward
re-centralization in the distribution of information—a justification that
had previously made the institutionalization of consensus-making
unavoidable.

Prior to widespread literacy and widespread access to oral mass broadcast
media, consensus-making and meaning-making was a social process rather than
a parasocial one. Time-binding technologies—mechanisms to permanently record and retrieve
information, so that information that originated long ago or far away could
be transmitted without distortion—were limited to print.

Writing, in the absence of mass-production technologies, had more of an
oral aspect to it: while Babylonian kings would manufacture negative molds
for exactly reprinting laws, manuscripts were largely transcribed by
students in lecture halls who included the lecturer's asides in their
transcriptions alongside their own notes, and these modified manuscripts
would be the basis for later lectures or would be copied by hand. In other
words, before print, it was rare for even writing to be `broadcast' in the
sense of a large number of people receiving exactly the same information,
and before radio, this kind of standardization was not available to the
illiterate at all.

In the print age, the intelligentsia got their ideas from the canon of
great literature and so were capable of groupthink at scale, but their
illiterate or semi-literate peers were exploring epistemic space together
in a more organic fashion, without powerful time-binding technology
tethering them to any baseline.

The sharedness of their realities mirrored their social connectedness --
almost always bidirectional, if not even or equitable—and their social
graphs mirrored geography (because transport technologies, though they
could warp transit-space, did not flatten it—it may be easier to go 100
miles by train than 10 miles by horse, but it has never become equally easy
to physically transport oneself to anywhere on earth).

What the Internet did was to make visible the already-existing alien
realities of the outgroup and allow faster mutation through the
cross-pollination of fringe groups.

Telephony could have done to these oral cultures some of what social media
has done to our literate culture, had it become common and affordable a
decade earlier and had party lines remained normal, but charging by the
minute (with a multiplier for long-distance calls) prevented the telephone
network from being the basis for the kind of multi-continent perpetual
hangouts that make the Internet so cosmopolitan—with the exception of
phone phreaks, who used exploits to create exactly this kind of community
behind Bell's back. 


Why Is Social Media So Addictive? (Mark D. Griffiths)

the keyboard of geoff goodfellow <geoff@iconia.com>

Tue, 11 Feb 2020 09:33:13 -0700

Social media is awful and whatever pleasures it confers in the form of
mildly amusing memes or a fleeting sense of community/belonging are
massively outweighed by its well-documented downsides. Their psychic
consequences are of interest to its owners only in the sense that, past a
certain threshold, people might turn away from their platforms and cut off
the endless stream of monetizable private data that sustain their business
models and corrode conventional ideas about privacy, self-determination,
etc. [...]

I guess this is something I believe, though even typing it out is
embarrassing—because at this point it's so obvious/trite, and because its
obviousness/triteness hasn't stopped me or anyone I know from using it.
Some vague comfort is extractable from the fact that these platforms were
designed to foster just this kind of behavior, but it might be nice to know
how, exactly, that end was/is achieved. To that end, for this week's Giz
Asks <https://gizmodo.com/c/giz-asks> we've reached out to a number of
experts to find out why social media is so addictive.

Mark D. Griffiths, Distinguished Professor, Behavioural Addiction,
Nottingham Trent University
<https://www.ntu.ac.uk/staff-profiles/social-sciences/mark-griffiths>

https://gizmodo.com/why-is-social-media-so-addictive-1841261494


The high cost of a free coding bootcamp (The Verge)

Monty Solomon <monty@roscom.com>

Tue, 11 Feb 2020 12:14:19 -0500

https://www.theverge.com/2020/2/11/21131848/lambda-school-coding-bootcamp-isa-tuition-cost-free 


Debunking the lone woodpecker theory

Ed Ravin <eravin@panix.com>

Mon, 10 Feb 2020 22:23:48 -0500

Looking up more information about that acorn woodpecker stash, according to
a couple of sources (especially the Nat Geo article below), an entire family
of woodpeckers generally works as a team to build their stash, and it might
have taken them as long as five years to squirrel away that 300-pound load:

https://www.nationalgeographic.com/news/2015/11/151113-antenna-cache-acorn-woodpecker-california/

Even more interestingly, that video is from 2009, leading to yet another
RISK of finding things on the Internet - thinking something you've
discovered is new just because it's new to you and the source conveniently
didn't mention any dates on it.

The "first woodpecker" quote is attributed to Gerald Weinberg; I remember
that because I checked for a canonical version before putting it in my post
in RISKS-28.21. At the time I thought I was being novel, but I see now that
I was merely the third person to have that same great idea.

https://en.wikiquote.org/wiki/Gerald_Weinberg


Re: Benjamin Netanyahu’s election app potentially exposed data for every Israeli voter (RISKS-31.57)

Amos Shapir <amos083@gmail.com>

Wed, 12 Feb 2020 17:06:31 +0200

While the WP article is technically correct when saying that the app had
exposed every "registered voter" in Israel, it makes the fault seem a bit
less severe than it really is. The fact is, voters in Israel do not have to
register; every citizen over 18 can vote, and is listed automatically. This
means there is no opting out, everyone is on the exposed list, voting or
not. 


Re: Backhoes, squirrels, and woodpeckers as DoS vectors (R 31 57)

Tom Russ <taruss@google.com>

Wed, 12 Feb 2020 16:09:59 -0800

A colleague points out that a longer video of this was uploaded to YouTube
in 2009: https://www.youtube.com/watch?v=cZkAP-CQlhA It identifies the
location as the Bear Creek Road microwave site. The video is mis-titled
"Squirrel [sic] fills Antenna with Acorns", but the comments identify a
woodpecker as the culprit.


Re: A lazy fix 20 years ago means the Y2K bug is taking down computers, now (New Scientist)

Martin Ward <martin@gkc.org.uk>

Wed, 12 Feb 2020 10:57:21 +0000

The two options were: extend the year field from 2 digits to 4 digits (which
might have knock-on effects all over the system), or use a "sliding window"
which would treat all dates whose 2-digit year could be interpreted as up
to, say, 20 years in the future as actually in the future and not the past.
In 2020 the sliding window should treat dates "20" to "40" as 2020 to 2040,
while "41" would be interpreted as 1941.

Another option is simply to pick the closest date to the current date:
this is approximately equivalent to a 50-year sliding window.

Implementing a Y2K "fix" which is guaranteed to fail in a few years seems
insane given that this is exactly the kind of short-sightedness which
created the Y2K mess in the first place! (Unless it was a cunning plan for
the programmers to give themselves extra business in 20 years' time: like the
programmer who was implementing a payroll system and programmed the system
to crash if his name was not found on the payroll!)

  [And there won't be any COBOL programmers around when we hit Year 2100. PGN]
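  [The sliding-window rule described above is easy to sketch in code. The
  Python below is illustrative only: the function name `expand_year` and its
  default 20-year window are inventions for this example, not taken from any
  deployed Y2K fix.]

```python
def expand_year(yy, current_year=2020, window_future=20):
    """Expand a 2-digit year using a sliding window.

    A candidate year more than `window_future` years ahead of
    `current_year` is assumed to lie in the previous century.
    """
    century = current_year - current_year % 100  # e.g., 2000 in 2020
    candidate = century + yy
    if candidate > current_year + window_future:
        candidate -= 100  # too far in the future: push back a century
    return candidate

# In 2020 with a 20-year window:
#   expand_year(20) -> 2020
#   expand_year(40) -> 2040
#   expand_year(41) -> 1941
```

  [Setting `window_future=50` roughly reproduces the "closest date to the
  current date" variant, since a candidate is then pushed back a century
  exactly when the past-century reading is at least as close.]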


Re: Autonomous vehicles (RISKS-31.57)

Stephen Mason <stephenmason@stephenmason.co.uk>

Tue, 11 Feb 2020 16:42:09 +0000

Reading through the latest RISKS, I think your readers might be interested
in the article by Professor Roger Kemp, "Autonomous vehicles, who will be
liable for accidents?"—not quite a legal analysis, but an excellent
overview of some of the practical issues that do not get discussed very
often: https://journals.sas.ac.uk/deeslr/issue/view/528

The books listed below are published on paper and available in open-access
form from: https://ials.sas.ac.uk/about/about-us/people/stephen-mason

Stephen Mason and Daniel Seng, editors, Electronic Evidence (4th edn,
Institute of Advanced Legal Studies for the SAS Humanities Digital Library,
School of Advanced Study, University of London, 2017)

Electronic Signatures in Law (4th edn, Institute of Advanced Legal Studies
for the SAS Humanities Digital Library, School of Advanced Study, University
of London, 2016)

Open-access journal: Digital Evidence and Electronic Signature Law Review
http://dev-ials.sas.ac.uk/digital/ials-open-access-journals/digital-evidence-and-electronic-signature-law-review
(also available via the HeinOnline subscription service)


Source: https://catless.ncl.ac.uk/Risks/31/58/#subj1
