
Cyber Security

Arron Banks’ private messages leaked by hacker




The Twitter account of Arron Banks, the founder of the pro-Brexit campaign Leave.EU, has been hacked.

The perpetrator has leaked thousands of his private messages to and from dozens of other people spanning several years.

In a statement, Mr Banks accused Twitter of taking too long to tackle the issue and said the social network had “deliberately chosen” to leave his personal information online.

Twitter said it had “taken steps to secure the compromised account”.

“We will continue to take firm enforcement action in line with our policy which strictly prohibits the distribution on our service of materials obtained through hacking,” Twitter said in a statement.


It is not known who carried out the attack.

The data was made available by the hacker in the form of a link to a download. The original file is no longer online.

One expert said the hacker, if caught, could be prosecuted under the Computer Misuse Act, and that others who made use of the material would be walking into a legal minefield.

“Even if Arron Banks was using Twitter in a private capacity rather than as Leave.EU, the data was misappropriated from Twitter and that likely engages the Data Protection Act,” commented Tim Turner, a data protection consultant.

“There are public interest defences for using unlawfully obtained data, but that requires a journalist or other person to gamble that they can successfully argue that the public interest supports whatever use they make of it.

“You cannot know for certain that the public interest will back up any particular course of action; a person would have to act first, and see what follows.”

Avon and Somerset Police has confirmed that it is investigating the matter.

“We’re investigating whether any offences have been committed under the Computer Misuse Act after we received a report a Twitter account was compromised,” said a spokesman.

In February 2019, Leave.EU and an insurance company owned by Mr Banks were fined £120,000 by the Information Commissioner’s Office for breaching data protection laws.

“Arron Banks has shown extraordinary contempt for the ICO and British data laws and so this is a moment for him to reflect on the need for those laws and a regulator to enforce them,” said the journalist Carole Cadwalladr.

Ms Cadwalladr and Mr Banks have had many battles over her investigations into his affairs.

She said in a tweet that she had been sent some direct messages, said to be from the hacked account.

They were “pretty explosive”, she tweeted.

Ms Cadwalladr told the BBC she had not downloaded any data.

Mr Banks’ Twitter account was suspended following the breach but is now working again.


The Annoying MacOS Threat That Won’t Go Away




In two years, the adware-dropping Shlayer Trojan has spread to infect one in 10 MacOS systems, Kaspersky says.

Mac users tend to be better protected against malware and other online threats than Windows users. That doesn’t mean they are immune, however.

Shlayer, a malware tool for distributing unwanted advertisements on MacOS systems, is a case in point. Since first surfacing in February 2018, the malware has emerged as the most widely distributed threat on the MacOS platform. Among those most impacted by the malware are MacOS users in the US, Germany, France, and the UK.

Kaspersky, which has been tracking Shlayer for some time, this week described it as currently infecting at least one in 10 Mac users. Though the malware has little to separate it from other malicious software from a purely technical standpoint, it remains as active as when it first surfaced.

According to Kaspersky, in 2019 Shlayer-related attacks accounted for nearly 30% of all attacks on MacOS devices protected by the company’s products. Worse, almost all of the other remaining top 10 MacOS threats were adware products distributed by Shlayer. Among them were AdWare.OSX.Bnodlero, AdWare.OSX.Geonei, AdWare.OSX.Pirrit, and AdWare.OSX.Cimpli, the security vendor noted.

One reason for Shlayer’s continuing prevalence is the manner in which it is being distributed. Currently, over 1,000 “partner” websites distribute Shlayer on behalf of the malware’s authors. Unsuspecting users who arrive on these sites — many of which hawk pirated content — are typically redirected to fake Flash Player update pages from where the malware gets downloaded on MacOS systems. The partner sites get paid for each download.

“The affiliate network is an intermediate link between the creators of the Trojan and those who are willing to distribute it for a fee,” says Vladimir Kuskov, head of advanced threat research and software classification at Kaspersky. “The role of partner sites is to attract users to their resource and instill the need to download and run a malicious file.” 

Shlayer is being distributed in a variety of other ways, including malicious links to fake Adobe Flash update sites embedded in article references on Wikipedia and video descriptions on YouTube. Kaspersky researchers have so far found links to at least 700 malicious domains for downloading Shlayer hidden in a variety of legitimate sites.

Users looking for pirated content are more likely to get infected, Kuskov says. At the same time, even those clicking on links below a YouTube video or while searching for something on Wikipedia are at risk, he notes.

Annoying but Less Harmful
Shlayer is distributed under the guise of a Flash Player installer and, at first sight, looks pretty legitimate. Like other installers, the malware installs software, except that in this case it installs adware instead of legitimate software.

One alleviating fact is that Shlayer does not load on its own. Users have to actively click and download the installer for it to load on a system. But those distributing the malware have employed a variety of social engineering tricks to funnel users to fake Flash Player update sites and persuade them to download it, Kuskov notes.

Shlayer itself is also not persistent on an infected system. A user who discovers the malware can simply delete the installation file to get rid of it, he says.

The real problem is the adware it installs. “It’s important to understand that Shlayer itself performs only the initial stage of the attack — it penetrates the system, loads the main payload, and runs it,” Kuskov says. The installed adware is not easy for the average user to remove. It can be especially challenging because of the multiple adware families Shlayer can install on a single system.

Also, some adware like AdWare.OSX.Cimpli can intercept a user’s HTTP and HTTPS traffic and inject code into the Web pages requested by the user. “In theory, that means that Cimpli can steal any data entered by the user on the Web page,” Kuskov said.

Even so, Shlayer is relatively innocuous compared to other more destructive malware. It is also an example of how attackers are constantly looking for ways to earn money by attacking MacOS systems.

The threat landscape for Apple devices is changing, and the amount of malicious and unwanted software is growing, Kaspersky said. Since at least 2012, the volume of malicious and potentially unwanted files targeted at MacOS has been doubling each year. 

“But instead of full-fledged malware, MacOS users increasingly receive annoying, but less harmful, adware,” Kuskov says. “It seems that this way of monetizing an infection allows attackers to make more profit and save on expenses.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.



High-quality Deepfake Videos Made with AI Seen as a National Security Threat




Deepfake videos so realistic that they cannot be detected as fakes have the FBI concerned that they pose a national security threat.

By AI Trends Staff

The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.

The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. “What we’re concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning systems,” stated Chris Piehota, executive assistant director of the FBI’s science and technology division, in an account in WSJPro.

The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned national security could be compromised by fraudulent videos created to mimic public figures. “As the AI continues to improve and evolve, we’re going to get to a point where there’s no discernible difference between an AI-generated video and an actual video,” Piehota stated.


The word ‘deepfake’ is a portmanteau of “deep learning” and “fake.” It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person’s likeness.

The FBI has created its own deepfakes in a test lab, producing artificial personas that can pass some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3-D printers powered with AI models can be used to copy someone’s fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.

Threat to US Elections Seen

Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitude of voters. The AI-enhanced deepfakes can undermine the public’s confidence in democratic institutions, even if proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.

“It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that is deadly for democracy,” she stated in the WSJ Pro account.


Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple, according to an account in Live Science.

A GAN algorithm pits two AI models against each other: a generator that produces content such as photo images, and an adversary, the discriminator, that tries to guess whether the images are real or fake. The generator starts off at a disadvantage, and its adversary can easily distinguish real from fake photos. But over time the generator improves and begins producing content that looks lifelike.
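The adversarial game described above can be reduced to a toy sketch. This is an illustration only: real deepfake models use deep convolutional networks, while here a one-dimensional linear “generator” learns to reshape Gaussian noise so that a logistic “discriminator” can no longer tell it apart from real samples. Every parameter and value below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator g(z) = w*z + b reshapes
# standard-normal noise; the discriminator d(x) = sigmoid(a*x + c) scores
# how "real" a sample looks.
w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

for _ in range(1500):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(size=32)
    fake = w * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # i.e. learn to score real samples high and fakes low.
    dr = sigmoid(a * real + c)
    df = sigmoid(a * fake + c)
    a += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: ascend log d(fake), i.e. try to fool the discriminator.
    df = sigmoid(a * fake + c)
    w += lr * np.mean((1 - df) * a * z)
    b += lr * np.mean((1 - df) * a)

print(f"mean of generated samples is about {b:.2f} (real mean is 4.0)")
```

Over training, the generated mean should drift toward the real mean of 4 as the discriminator’s feedback sharpens; in production GANs the same tug-of-war plays out over millions of image parameters.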

For an example, see NVIDIA’s project which uses a GAN to create completely fake—and completely lifelike—photos of people.

Example material is starting to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN to alter a video of former President Barack Obama so that his lips moved in sync with the words of a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, a deepfake generated realistic movies of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files, splicing new words into a video of a person talking to make it appear they said something they never said.

All this will cause attentive viewers to be more wary of content on the internet.

The tech industry is trying to field defenses against deepfakes.

Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.

The clips Google released were created in collaboration with Alphabet subsidiary Jigsaw. They focused on technology and politics, featuring paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.

Some researchers are skeptical this approach will be effective. “The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” stated Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the State University of New York at Albany, according to an account in VentureBeat.

Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose; and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.

“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” noted Facebook CTO Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”

The data set contains 100,000-plus videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer. The data does not include any personal user identification and features only participants who have agreed to have their images used. Access to the dataset is gated so that only teams with a license can access it.

The Deepfake Detection Challenge is overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.

Read the source articles in WSJ Pro, Live Science, Wired and VentureBeat.



Privacy takes a hit as storage bucket leaks cannabis dispensary POS data




A misconfigured Amazon Web Services S3 storage bucket was discovered leaking data that had been collected by a point-of-sale system used by multiple cannabis dispensaries, researchers from vpnMentor reported on Wednesday.

The exposed bucket, which was discovered on Christmas Eve and closed by Jan. 14, contained more than 85,000 files. These included scanned government and employee photo IDs of over 30,000 individuals, the signatures of dispensary visitors and patients, and customer attestations acknowledging state cannabis laws, according to a vpnMentor company blog post.

vpnMentor researchers Noam Rotem and Ran Locar spotted the open database while conducting their ongoing web mapping project, and determined that it belonged to THSuite, a Seattle-based software supplier to the cannabis industry.

The records found within the storage bucket correspond to the customer sales data of various marijuana dispensaries using THSuite’s POS solution. The researchers specifically named three of the affected dispensaries: Amedicanna Dispensary in Maryland, Bloom Medicinals in Ohio (with corporate headquarters in Florida) and Colorado Grow Company in Colorado.

Depending on the dispensary, the exposed order and inventory data at times also included names, phone numbers, email addresses, birthdates, street addresses, medical/state ID numbers and expiration dates, date of first purchase, cannabis varieties purchased, quantities of purchase, cannabis gram limits, transaction cost, date received, and whether or not a customer requested financial assistance. Additionally, the researchers observed Bloom Medicinals’ monthly sales, discounts, returns and taxes paid, and Colorado Grow Company’s gross sales, discounts, taxes, net sales, totals for each payment type, employee names and the number of hours employees worked.

SC Media reached out to all three named dispensaries, as well as to THSuite, for comment.

“We have been made aware that our third-party technology provider, THSuite, experienced a data breach which may have affected some of our patients’ data,” said RJ Starr, head of compliance and regulatory affairs at Bloom Medicinals, in a statement. “… We are working closely with our technology vendor to identify which, if any, of Bloom Medicinals patients have been affected. Once we have identified any affected patients, we will notify each individual, and follow all state and federal breach notification requirements.”

“As a result of this data breach, sensitive personal information was exposed for medical marijuana patients, and possibly for recreational marijuana users as well. This raises some serious privacy concerns,” vpnMentor stated in its blog post, noting that the incident could very well constitute a HIPAA violation. “Medical patients have a legal right to keep their medical information private for good reason. Patients whose personal information was leaked may face negative consequences both personally and professionally.”

“Many workplaces have specific policies prohibiting cannabis use. Customers and patients may face consequences at work due to their cannabis use being exposed. Some could even lose their jobs, especially if they work for a federal agency,” the blog post continued. Additionally, customers could be subject to targeted phishing scams that leverage the vast amount of information gleaned from the leaky storage bucket.

It is not known if any malicious parties accessed any of the leaked data. vpnMentor said it alerted THSuite of the problem on Dec. 26, and subsequently contacted Amazon AWS on Jan. 7. It is not clear whether it was THSuite or Amazon that resolved the issue on Jan. 14.

“It seems like every week we hear about another company that’s left an AWS bucket unprotected, leaving sensitive data exposed. We will continue to see an escalation in these types of incidents because of the complexity of gaining visibility and managing over privileged identities in a multi-cloud enterprise environment,” said CloudKnox CEO Raj Mallempati. “Enterprises need to proactively address these security risks by understanding their cloud infrastructure risk posture and delivering continuous detection and remediation of over-privileged human and machine identities.”

“No matter what industry you’re in, if you collect customer data and use cloud storage, you absolutely must ensure that storage is protected from exposure,” added Tim Erlin, VP of product management and strategy at Tripwire. “Unsecured Amazon S3 buckets are not a new phenomenon. There are tools, from Amazon and from other vendors, to help with this problem.”
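Tools of the kind Erlin mentions generally boil down to auditing each bucket’s configuration. As a minimal sketch (no real AWS calls are made; the dict shape below is modeled on the PublicAccessBlockConfiguration structure that AWS S3 APIs return, and the function name is invented for illustration), a check like this flags buckets whose Block Public Access settings leave them exposed:

```python
def is_locked_down(pab: dict) -> bool:
    """Return True only if every S3 Block Public Access setting is enabled.

    `pab` mirrors the PublicAccessBlockConfiguration dict that an API call
    such as boto3's get_public_access_block() returns for a bucket. Any
    missing or False flag is treated as a potential exposure.
    """
    required = (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    )
    return all(pab.get(flag, False) for flag in required)

# A bucket with even one setting disabled is treated as exposed.
exposed = {"BlockPublicAcls": True, "IgnorePublicAcls": False,
           "BlockPublicPolicy": True, "RestrictPublicBuckets": True}
locked = {flag: True for flag in ("BlockPublicAcls", "IgnorePublicAcls",
                                  "BlockPublicPolicy", "RestrictPublicBuckets")}
```

In practice a scanner would run such a check against every bucket in an account and report any that fail, which is essentially what both Amazon’s own tooling and third-party cloud posture products automate.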

