Detectify, the Sweden-born cybersecurity startup that offers a website vulnerability scanner powered by the crowd, has raised €21 million in further funding.
Leading the round is London-based VC firm Balderton Capital, with participation from existing investors Paua Ventures, Inventure and Insight Partners.
Detectify says the new funding will be used to continue to hire “world-class” talent to further accelerate the company’s growth and deliver on its mission to reduce internet security vulnerabilities.
Founded in late 2013 by a self-described group of “elite hackers” from Sweden, the company offers a website security tool that uses automation to scan websites for vulnerabilities, helping customers (i.e. developers) stay on top of security. What sets the service apart, however, is that it is partly maintained, or rather kept up to date, via the crowd in the form of Detectify’s “ethical hacker network.”
As we explained when the startup raised its €5 million Series A round, this sees top-ranked security researchers submit vulnerabilities that are then built into the Detectify scanner and used in customers’ security tests. The clever part is that researchers get paid every time their submitted module identifies a vulnerability on a customer’s website. In other words, incentives are kept aligned, giving Detectify a potential advantage and greater scale compared to similar website security automation tools.
Detectify co-founder and CEO Rickard Carlsson tells me the company has made a lot of progress in the past 12 months, including building out the crowdsourcing part of its proposition in order to grow the number of known vulnerabilities.
“Modules from crowdsourcing hackers have now generated 110,000-plus vulnerabilities in our customer base,” he says. “And the community is about 2.5 times as large now.”
In the last year, Detectify has also expanded its client base in the U.S., and says it now counts leading software companies such as Trello, Spotify and King as customers.
The young startup seems to be scoring well on the gender diversity front, too. It says that almost half (45%) of its 83 employees are female, including 50% at C-level. In addition, there are close to 30 nationalities across Detectify’s Stockholm and Boston offices.
Adds James Wise, partner at Balderton Capital, in a statement: “Detectify brings together the power of human ingenuity, the immense scalability of software, and a strong culture of transparency and integrity to provide world-class security to everyone. This is a fundamentally new approach to protecting businesses from new cyber security threats, and alongside our other cyber security investments, including Darktrace, Recorded Future & Tessian, we see Detectify as part of a new wave of solutions to make the web safer for everyone.”
In two years, the adware-dropping Shlayer Trojan has spread to infect one in 10 MacOS systems, Kaspersky says.
Mac users tend to be better protected against malware and other online threats than Windows users. That doesn’t mean they are immune, however.
Shlayer, a malware tool for distributing unwanted advertisements on MacOS systems, is a case in point. Since first surfacing in February 2018, the malware has emerged as the most widely distributed threat on the MacOS platform. Among those most impacted by the malware are MacOS users in the US, Germany, France, and the UK.
Kaspersky, which has been tracking Shlayer for some time, this week described it as currently infecting at least one in 10 Mac users. Though the malware has little to separate it from other malicious software from a purely technical standpoint, it remains as active as when it first surfaced.
According to Kaspersky, in 2019 Shlayer-related attacks accounted for nearly 30% of all attacks on MacOS devices protected by the company’s products. Worse, almost all of the other remaining top 10 MacOS threats were adware products distributed by Shlayer. Among them were AdWare.OSX.Bnodlero, AdWare.OSX.Geonei, AdWare.OSX.Pirrit, and AdWare.OSX.Cimpli, the security vendor noted.
One reason for Shlayer’s continuing prevalence is the manner in which it is being distributed. Currently, over 1,000 “partner” websites distribute Shlayer on behalf of the malware’s authors. Unsuspecting users who arrive on these sites — many of which hawk pirated content — are typically redirected to fake Flash Player update pages from where the malware gets downloaded on MacOS systems. The partner sites get paid for each download.
“The affiliate network is an intermediate link between the creators of the Trojan and those who are willing to distribute it for a fee,” says Vladimir Kuskov, head of advanced threat research and software classification at Kaspersky. “The role of partner sites is to attract users to their resource and instill the need to download and run a malicious file.”
Shlayer is being distributed in a variety of other ways, including malicious links to fake Adobe Flash update sites embedded in article references on Wikipedia and video descriptions on YouTube. Kaspersky researchers have so far found links to at least 700 malicious domains for downloading Shlayer hidden in a variety of legitimate sites.
Users looking for pirated content are more likely to get infected, Kuskov says. At the same time, even those clicking on links below a YouTube video or while searching for something on Wikipedia are at risk, he notes.
Annoying but Less Harmful
Shlayer is distributed under the guise of a Flash Player installer and, at first sight, looks pretty legitimate. Like other installers, the malware installs software; in this case, however, it installs adware instead of legitimate software.
One mitigating factor is that Shlayer does not load on its own. Users have to actively click and download the installer for it to load on a system. But those distributing the malware have employed a variety of social engineering tricks to redirect users to fake Flash Player update sites and get them to download the malware, Kuskov notes.
Shlayer itself is also not persistent on an infected system. A user who discovers the malware can simply delete the installation file to get rid of it, he says.
The real problem is the adware it installs. “It’s important to understand that Shlayer itself performs only the initial stage of the attack — it penetrates the system, loads the main payload, and runs it,” Kuskov says. The installed adware is not easy for the average user to remove. It can be especially challenging because of the multiple adware families Shlayer can install on a single system.
Also, some adware like AdWare.OSX.Cimpli can intercept a user’s HTTP and HTTPS traffic and inject code into the Web pages requested by the user. “In theory, that means that Cimpli can steal any data entered by the user on the Web page,” Kuskov says.
Even so, Shlayer is relatively innocuous compared to other more destructive malware. It is also an example of how attackers are constantly looking for ways to earn money by attacking MacOS systems.
The threat landscape for Apple devices is changing, and the amount of malicious and unwanted software is growing, Kaspersky said. Since at least 2012, the volume of malicious and potentially unwanted files targeted at MacOS has been doubling each year.
“But instead of full-fledged malware, MacOS users increasingly receive annoying, but less harmful, adware,” Kuskov says. “It seems that this way of monetizing an infection allows attackers to make more profit and save on expenses.”
Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.
The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.
The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. “What we’re concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning systems,” stated Chris Piehota, executive assistant director of the FBI’s science and technology division, in an account in WSJ Pro.
The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned national security could be compromised by fraudulent videos created to mimic public figures. “As the AI continues to improve and evolve, we’re going to get to a point where there’s no discernible difference between an AI-generated video and an actual video,” Piehota stated.
The word ‘deepfake’ is a portmanteau of “deep learning” and “fake.” It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person’s likeness.
The FBI has created its own deepfakes in a test lab, producing artificial personas that can pass some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3-D printers powered with AI models can be used to copy someone’s fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.
Threat to US Elections Seen
Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitude of voters. The AI-enhanced deepfakes can undermine the public’s confidence in democratic institutions, even if proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.
“It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that is deadly for democracy,” she stated in the WSJ Pro account.
Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple, according to an account in Live Science.
A GAN algorithm pairs two AI models, one that generates content such as photo images, and an adversary that tries to guess whether the images are real or fake. The generator starts off at a disadvantage, meaning its adversary can easily distinguish real photos from fake ones. But over time, the generator gets better and begins producing content that looks lifelike.
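The two-player dynamic described above can be sketched in a few lines of code. The toy example below, a minimal illustration rather than any real deepfake system, trains a one-parameter generator against a logistic-regression discriminator on 1-D data instead of images; all names, sizes and learning rates are illustrative choices.

```python
# Minimal GAN sketch in NumPy on toy 1-D data: the generator learns to
# mimic samples drawn near 4.0 while the discriminator tries to tell
# real samples from generated ones.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z to a sample via one affine map (g_w * z + g_b).
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression scoring "probability the input is real".
d_w, d_b = 0.0, 0.0

lr = 0.05
for step in range(3000):
    real = rng.normal(loc=4.0, scale=1.0, size=64)   # "real" data near 4
    fake = g_w * rng.normal(size=64) + g_b           # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                  # d(cross-entropy)/d(logit)
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=64)
    fake = g_w * z + g_b
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w                # chain rule through D into G
    g_w -= lr * np.mean(grad * z)
    g_b -= lr * np.mean(grad)

# After training, the generator's output mean (g_b) has drifted away from 0
# toward the real data's mean, exactly the "chase" the article describes.
```

Real deepfake tools follow the same adversarial recipe but replace the affine generator and logistic discriminator with deep convolutional networks operating on images.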
For an example, see www.thispersondoesnotexist.com, which uses a GAN (NVIDIA’s StyleGAN) to create completely fake, and completely lifelike, photos of people.
Example material is starting to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN to alter a video of former President Barack Obama so that his lips moved in sync with the words of a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, a deepfake was able to generate realistic video of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files to splice new words into a video of a person talking, making it appear they said something they never said.
All this will cause attentive viewers to be more wary of content on the internet.
High tech is trying to field a defense against deepfakes.
Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.
The clips Google released were created in collaboration with Alphabet subsidiary Jigsaw. They focused on technology and politics, featuring paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.
Some researchers are skeptical this approach will be effective. “The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” stated Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”
Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech; MIT; the University of Oxford; UC Berkeley; the University of Maryland, College Park; and the State University of New York at Albany, according to an account in VentureBeat.
Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose; and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.
“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” noted Facebook CTO Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”
The data set contains 100,000-plus videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer. The data does not include any personal user identification and features only participants who have agreed to have their images used. Access to the dataset is gated so that only teams with a license can access it.
The Deepfake Detection Challenge is overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.
A misconfigured Amazon Web Services S3 storage bucket was discovered leaking data that had been collected by a point-of-sale system used by multiple cannabis dispensaries, researchers from vpnMentor reported on Wednesday.
The exposed bucket, discovered on Christmas Eve and closed by Jan. 14, contained more than 85,000 files. These included scanned government and employee photo IDs of over 30,000 individuals, the signatures of dispensary visitors and patients, and customer attestations acknowledging state cannabis laws, according to a vpnMentor company blog post.
vpnMentor researchers Noam Rotem and Ran Locar spotted the open database while conducting their ongoing web mapping project, and determined that it belonged to THSuite, a Seattle-based software supplier to the cannabis industry.
The records found within the storage bucket correspond to the customer sales data of various marijuana dispensaries using THSuite’s POS solution. The researchers specifically named three of the affected dispensaries: Amedicanna Dispensary in Maryland, Bloom Medicinals in Ohio (with corporate headquarters in Florida) and Colorado Grow Company in Colorado.
Depending on the dispensary, the exposed order and inventory data at times also included names, phone numbers, email addresses, birthdates, street addresses, medical/state ID numbers and expiration dates, date of first purchase, cannabis varieties purchased, quantities purchased, cannabis gram limits, transaction cost, date received, and whether or not a customer requested financial assistance. Additionally, the researchers observed Bloom Medicinals’ monthly sales, discounts, returns and taxes paid, and Colorado Grow Company’s gross sales, discounts, taxes, net sales, totals for each payment type, employee names and the number of hours employees worked.
SC Media reached out to all three named dispensaries, as well as to THSuite, for comment.
“We have been made aware that our third-party technology provider, THSuite, experienced a data breach which may have affected some of our patients’ data,” said RJ Starr, head of compliance and regulatory affairs at Bloom Medicinals, in a statement. “… We are working closely with our technology vendor to identify which, if any, of Bloom Medicinals patients have been affected. Once we have identified any affected patients, we will notify each individual, and follow all state and federal breach notification requirements.”
“As a result of this data breach, sensitive personal information was exposed for medical marijuana patients, and possibly for recreational marijuana users as well. This raises some serious privacy concerns,” vpnMentor stated in its blog post, noting that the incident could very well constitute a HIPAA violation. “Medical patients have a legal right to keep their medical information private for good reason. Patients whose personal information was leaked may face negative consequences both personally and professionally.”
“Many workplaces have specific policies prohibiting cannabis use. Customers and patients may face consequences at work due to their cannabis use being exposed. Some could even lose their jobs, especially if they work for a federal agency,” the blog post continued. Additionally, customers could be subject to targeted phishing scams that leverage the vast amount of information gleaned from the leaky storage bucket.
It is not known if any malicious parties accessed any of the leaked data. vpnMentor said it alerted THSuite of the problem on Dec. 26, and subsequently contacted Amazon AWS on Jan. 7. It is not clear whether it was THSuite or Amazon that resolved the issue on Jan. 14.
“It seems like every week we hear about another company that’s left an AWS bucket unprotected, leaving sensitive data exposed. We will continue to see an escalation in these types of incidents because of the complexity of gaining visibility into and managing over-privileged identities in a multi-cloud enterprise environment,” said CloudKnox CEO Raj Mallempati. “Enterprises need to proactively address these security risks by understanding their cloud infrastructure risk posture and delivering continuous detection and remediation of over-privileged human and machine identities.”
“No matter what industry you’re in, if you collect customer data and use cloud storage, you absolutely must ensure that storage is protected from exposure,” added Tim Erlin, VP of product management and strategy at Tripwire. “Unsecured Amazon S3 buckets are not a new phenomenon. There are tools, from Amazon and from other vendors, to help with this problem.”
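One of the Amazon-provided tools Erlin alludes to is S3 Block Public Access, which would have prevented the kind of open-bucket exposure described above. The sketch below shows the four settings involved; the bucket name is hypothetical, and the apply_block() helper is only defined here, since actually running it requires the boto3 package and AWS credentials.

```python
# The four S3 "Block Public Access" settings. Enabling all four prevents a
# bucket from being exposed through public ACLs or public bucket policies.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,        # reject requests that set public ACLs
    "IgnorePublicAcls": True,       # treat any existing public ACLs as private
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # cut off public and cross-account access
}

def apply_block(bucket_name: str) -> None:
    """Apply the settings to one bucket via the real S3 API call.

    Requires boto3 and configured AWS credentials; the bucket name passed
    in is whatever bucket you administer (illustrative, not from the story).
    """
    import boto3  # assumption: boto3 is installed and credentials configured
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK,
    )
```

The same settings can also be applied account-wide from the AWS console, which is generally the safer default for organizations that never intend to serve data publicly from S3.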