
Artificial Intelligence News

AI ‘reveals Shakespeare and Fletcher’s different roles in Henry VIII’

Czech academic Petr Plechac has run tiny pieces of text through a new algorithm that he says identifies their distinct contributions

When the scholar James Spedding analysed the authorship of Shakespeare’s Henry VIII in 1850, he pored over the details of the text and eventually attributed the play not only to the Bard, but to his successor at the King’s Men theatre company, John Fletcher. Now, 169 years later, an academic has used artificial intelligence to back up Spedding’s theory and pin down exactly who wrote what.

Petr Plechac from the Czech Academy of Sciences in Prague trained an algorithm on scenes from Shakespeare’s later plays Coriolanus, Cymbeline, The Winter’s Tale and The Tempest, and on Fletcher’s Valentinian, Monsieur Thomas, The Woman’s Prize and Bonduca. He also ran a selection of scenes from works by Philip Massinger, Fletcher’s successor at the King’s Men and another possible candidate for the authorship of Henry VIII, through the algorithm.

Plechac then showed the algorithm Henry VIII. Looking at the rhythms of the text, and the combination of words used, it indicated that Shakespeare had written the first two scenes of the play, with Fletcher responsible for the next four scenes. Shakespeare then picked up the pen, according to Plechac’s algorithm, with Fletcher taking over for Act II Scene III.

According to the computer, this scene was written by both authors, with Shakespeare solely responsible for the first scenes in Acts IV and V, and possibly for part of the fourth scene in Act V. “The participation of Massinger is not indicated,” Plechac concludes.

Plechac told the Guardian that his analysis looked for combinations of frequently used words and common rhythmic patterns.

“This turned out to be a very reliable discriminator for both authors’ styles. When applied to the text of Henry VIII, the result clearly indicated that both authors were involved,” he said.

The findings roughly correspond to Spedding’s analysis, which he laid out in the essay “Who Wrote Henry VIII?” The main difference of opinion is over the second scene of Act II, where Spedding supposed mixed authorship, and the first scene of Act IV, which was originally attributed to Fletcher.

“Combined versification- and word-based models trained on 17th-century English drama yield a high accuracy of authorship recognition,” writes Plechac.

He said that, since 1850, many studies have been published supporting Spedding’s theory, but many have also rebutted it, including another piece of computer research in 2000 that attributed the whole play to Shakespeare. Plechac’s approach differs, he said, because it does not classify entire scenes but rather tiny pieces of text, tracing the authorship much more precisely.
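Plechac’s actual model pairs versification features with word frequencies and is not reproduced here. Purely as an illustration of the frequent-word idea, the toy sketch below builds function-word frequency profiles for two invented samples and attributes an unknown passage to the nearer profile; the texts, the vocabulary and the nearest-centroid rule are all assumptions for the example, not the study’s method.

```python
from collections import Counter
import math

def word_freq_vector(text, vocab):
    # Relative frequencies of the chosen function words in a passage.
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    # Cosine similarity between two frequency vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(passage, profiles, vocab):
    # Nearest-centroid attribution: pick the author whose averaged
    # function-word profile is most similar to the passage's profile.
    v = word_freq_vector(passage, vocab)
    return max(profiles, key=lambda author: cosine(v, profiles[author]))

# Invented toy corpora, purely illustrative.
vocab = ["the", "and", "of", "to", "in"]
profiles = {
    "Shakespeare": word_freq_vector(
        "the quality of mercy is not strained and the rain of heaven", vocab),
    "Fletcher": word_freq_vector(
        "to the woods to the caves and to the rocks in haste", vocab),
}
unknown = "the crown of england and the weight of it"
print(attribute(unknown, profiles, vocab))
```

A production-grade attributor would use hundreds of function words, many training scenes per author and a proper classifier, but the shape of the computation is the same.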

Read more: https://www.theguardian.com/books/2019/nov/26/ai-reveals-shakespeare-and-fletchers-different-roles-in-henry-viii

Omnius CEO Sofie Quidenus-Wahlforss is joining us at Disrupt Berlin

When you think about artificial intelligence, chances are you think about anthropomorphic robots that can make decisions on their own. But artificial intelligence is already having a huge impact in the insurance space. That’s why I’m excited to announce that omni:us founder and CEO Sofie Quidenus-Wahlforss is joining us at TechCrunch Disrupt Berlin.

omni:us is an AI-driven service that can process a ton of documents (including documents with handwriting), classify them and extract relevant data. This way, omni:us customers can use the platform for automated claims handling.

The startup doesn’t want to disrupt existing insurance companies. Instead, it is working with some of the biggest insurance companies out there, such as Allianz, Baloise, AmTrust and Wefox.

Last year, omni:us raised a $22.5 million (€19.7 million) Series A funding round led by Berlin-headquartered VC firm Target Global, followed by MMC Ventures and Talis Capital. Existing investors Unbound and Anthemis also participated. Up next, omni:us wants to expand to the U.S.

omni:us is well aware that relying more heavily on artificial intelligence can create some issues. Many AI-driven platforms act as a sort of black box — you input data and get a result without really knowing why. omni:us says front and center that it wants to make fast, transparent and empathetic claims decisions.

Buy your ticket to Disrupt Berlin to listen to this discussion and many others. The conference will take place on December 11-12.

In addition to panels and fireside chats, like this one, new startups will participate in the Startup Battlefield to compete for the highly coveted Battlefield Cup.


Sofie Quidenus-Wahlforss is an experienced managing director with a strong entrepreneurial spirit. Her strategic skills, coupled with a passion for AI, led her to create omni:us with the goal of redefining the way people work and how companies handle their business operations. omni:us is an AI-based SaaS solution that massively optimizes workflows and empowers businesses to make comprehensive data-driven decisions.

Prior to omni:us, Sofie founded Qidenus Technologies, which quickly became the market leader in robotics and digitization. Sofie is also the patent owner of the Vshape scanner technology and the winner of several awards, including the Woman Technology award. omni:us is an Artificial Intelligence as a Service (AIaaS) provider for cognitive claims management. Built on a fully data-driven approach, omni:us is transforming the way insurers interact with their insured parties. It provides all the necessary tools and information to make fast, transparent and empathetic claims decisions, while improving operational efficiency and reducing loss adjustment expenses. The company is headquartered in Berlin, with research partners in Barcelona and representative offices in the UK, France and the United States. For further information visit omnius.com.

Read more: https://techcrunch.com/2019/11/21/omnius-ceo-sofie-quidenus-wahlforss-is-joining-us-at-disrupt-berlin/

Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

Image search engine Giphy bills itself as providing a “fun and safe way” to search and create animated GIFs. But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report from Israeli online child protection startup L1ght — previously AntiToxin Technologies — has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech. The report, shared exclusively with TechCrunch, also showed content encouraging viewers into unhealthy weight loss and glamorizing eating disorders.

TechCrunch verified some of the company’s findings by searching the site using certain keywords. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache images with certain keywords.

When we tested using several words associated with illicit content, Giphy sometimes showed content from its own results. When it didn’t return any banned materials, search engines often returned a stream of would-be banned results.

L1ght develops advanced solutions to combat online toxicity. In its tests, a single search for illicit material returned 195 pictures on the first results page alone. L1ght’s team then followed tags from one item to the next, uncovering networks of illegal or toxic content along the way. The tags themselves were often innocuous in order to help users escape detection, but they served as a gateway to the toxic material.
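The tag-following L1ght describes amounts to a traversal of a graph in which tags link to items and items carry further tags. A minimal breadth-first sketch is below; the data structures, names and toy data are hypothetical, not L1ght’s actual tooling.

```python
from collections import deque

def crawl_tags(start_tag, tag_to_items, item_to_tags, max_items=1000):
    # Breadth-first walk: from a seed tag, collect its items, then follow
    # each item's other tags outward to neighbouring clusters of content.
    seen_tags = {start_tag}
    found = []
    queue = deque([start_tag])
    while queue and len(found) < max_items:
        tag = queue.popleft()
        for item in tag_to_items.get(tag, []):
            if item not in found:
                found.append(item)
            for t in item_to_tags.get(item, []):
                if t not in seen_tags:
                    seen_tags.add(t)
                    queue.append(t)
    return found

# Toy graph: one seed tag leads, via a shared item, to a second cluster.
tag_to_items = {"seed": ["gif1"], "other": ["gif2"]}
item_to_tags = {"gif1": ["seed", "other"], "gif2": ["other"]}
print(crawl_tags("seed", tag_to_items, item_to_tags))  # ['gif1', 'gif2']
```

This is why a single innocuous-looking tag can act as a gateway: one shared item is enough to connect two otherwise separate networks of content.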

Despite a ban on self-harm content, researchers found numerous keywords and search terms to find the banned content. We have blurred this graphic image. (Image: TechCrunch)

Much of the more extreme content — including images of child sex abuse — is said to have been tagged using keywords associated with known child exploitation sites.

We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation.

Simon Gibson, Giphy’s head of audience, told TechCrunch that content safety was of the “utmost importance” to the company and that it employs “extensive moderation protocols.” He said that when illegal content is identified, the company works with the authorities to report and remove it.

He also expressed frustration that L1ght had not contacted Giphy with the allegations first. L1ght said that Giphy is already aware of its content moderation problems.

Gibson said Giphy’s moderation system “leverages a combination of imaging technologies and human validation,” which involves users having to “apply for verification in order for their content to appear in our searchable index.” Content is “then reviewed by a crowdsourced group of human moderators,” he said. “If a consensus for rating among moderators is not met, or if there is low confidence in the moderator’s decision, the content is escalated to Giphy’s internal trust and safety team for additional review,” he said.
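The consensus-and-escalation flow Gibson outlines can be modelled as a simple voting rule with an escalation path. The sketch below is illustrative only: the 80% threshold, the labels and the function name are assumptions, not Giphy’s actual parameters.

```python
def moderate(ratings, consensus=0.8):
    """Decide an outcome from a batch of crowd-moderator votes.

    ratings: list of "approve" / "reject" strings.
    Escalates to an internal trust-and-safety review when the crowd
    fails to reach a clear consensus, mirroring the flow described above.
    """
    if not ratings:
        return "escalate"  # no votes yet: a human team must look
    approve_share = ratings.count("approve") / len(ratings)
    if approve_share >= consensus:
        return "approve"
    if approve_share <= 1 - consensus:
        return "reject"
    return "escalate"

print(moderate(["approve"] * 4 + ["reject"]))      # 80% approval -> approve
print(moderate(["approve", "reject", "approve"]))  # no consensus -> escalate
```

The interesting design choice is the middle band: anything that is neither clearly approved nor clearly rejected is routed to humans rather than resolved automatically.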

“Giphy also conducts proactive keyword searches, within and outside of our search index, in order to find and remove content that is against our policies,” said Gibson.

L1ght researchers used their proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can find other related content, allowing them to find vast caches of illegal or banned content that would otherwise, for the most part, go unseen.

This sort of toxic content plagues online platforms, but algorithms only play a part. More tech companies are finding human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter.

Facebook, for example, has been routinely criticized for outsourcing moderation to teams of lowly paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work. Meanwhile, Google’s YouTube this year was found to have become a haven for online sex abuse rings, where criminals had used the comments section to guide one another to other videos to watch while making predatory remarks.

Giphy and other smaller platforms have largely stayed out of the limelight during the past several years. But L1ght’s new findings indicate that no platform is immune to these sorts of problems.

L1ght says the Giphy users sharing this sort of content would make their accounts private so they wouldn’t be easily searchable by outsiders or the company itself. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find. The firm also discovered that pedophiles were using Giphy as a means of spreading their materials online, including communicating with each other and exchanging materials. And they weren’t just using Giphy’s tagging system to communicate — they were also using more advanced techniques like tags placed on images through text overlays.

This same process was utilized in other communities, including those associated with white supremacy, bullying, child abuse and more.

This isn’t the first time Giphy has faced criticism for content on its site. Last year a report by The Verge described the company’s struggles to fend off illegal and banned content, and the company was booted from Instagram for letting through racist content.

Giphy is far from alone, but it is the latest example of companies not getting it right. Earlier this year and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft’s search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. When The New York Times followed up on TechCrunch’s report last week, its reporters found Bing had done little in the months that had passed to prevent child sex abuse content appearing in its search results.

It was a damning rebuke of the company’s efforts to combat child abuse in its search results, despite its pioneering PhotoDNA photo-detection tool, which the software giant built a decade ago to identify illegal images based on a huge database of hashes of known child abuse content.

Giphy’s Gibson said the company was “recently approved” to use Microsoft’s PhotoDNA but did not say if it was currently in use.
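PhotoDNA itself is proprietary and computes a robust perceptual hash. As a rough sketch of the general hash-database workflow it enables, the code below uses SHA-256 purely as a stand-in fingerprint (note that SHA-256, unlike a perceptual hash, will not match a resized or re-encoded copy; every name and byte string here is illustrative).

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in fingerprint. Real systems such as PhotoDNA use a perceptual
    # hash that survives resizing and re-encoding; SHA-256 does not.
    return hashlib.sha256(data).hexdigest()

def is_known(image_bytes: bytes, known_hashes: set) -> bool:
    # Membership test against a database of hashes of known illegal images.
    # Only hashes are stored and compared, never the images themselves.
    return fingerprint(image_bytes) in known_hashes

# Toy database seeded with one "known" image.
known_hashes = {fingerprint(b"example-known-image-bytes")}
print(is_known(b"example-known-image-bytes", known_hashes))  # True
print(is_known(b"some-other-image-bytes", known_hashes))     # False
```

Sharing a hash database this way lets platforms flag known material without ever exchanging the abusive images, which is why adoption of a tool like PhotoDNA matters even for smaller platforms.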

Where some of the richest, largest and most-resourced tech companies are failing to preemptively limit their platforms’ exposure to illegal content, startups are filling in the content moderation gaps.

L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more.

The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, previously the founder of ad-blocker Shine, after Porat’s own son experienced abuse in the online game Minecraft. The company realized the problem with these platforms was something that had outgrown users’ own ability to protect themselves, and that technology needed to come to their aid.

L1ght’s business involves deploying its technology in similar ways as it has done here with Giphy — in order to identify, analyze and predict online toxicity with near real-time accuracy.

Read more: https://techcrunch.com/2019/11/15/giphy-illegal-content/

Startups Weekly: Understanding Uber’s latest fintech play

Hello and welcome back to Startups Weekly, a weekend newsletter that dives into the week’s noteworthy startups and venture capital news. Before I jump into today’s topic, let’s catch up a bit. Last week, I wrote about how SoftBank is screwing up. Before that, I noted All Raise’s expansion, Uber the TV show and the unicorn from down under.

Remember, you can send me tips, suggestions and feedback to kate.clark@techcrunch.com or on Twitter @KateClarkTweets. If you don’t subscribe to Startups Weekly yet, you can do that here.


Uber Head of Payments Peter Hazlehurst addresses the audience during an Uber product launch event in San Francisco, California, on September 26, 2019. (Photo: Philip Pacheco/AFP/Getty Images)

“The sheer number of startup players moving into banking services is staggering,” write my Crunchbase News friends in a piece titled “Why Is Every Startup A Bank These Days.”

I’ve been asking myself the same question this year, as financial services businesses like Brex, Chime, Robinhood, Wealthfront, Betterment and more raise big rounds to build upstart digital banks. North of $13 billion in venture capital has been invested in U.S. fintech companies so far in 2019, up from $12 billion in 2018.

This week, one of the largest companies to ever emerge from the Silicon Valley tech ecosystem, Uber, introduced its team focused on developing new financial products and technologies. In a vacuum, a multibillion-dollar public company with more than 22,000 employees launching one new team is not big news. Considering investment and innovation in fintech this year, Uber’s now well-documented struggles to reach profitability and the company’s hiring efforts in New York, a hotbed for financial aficionados, the “Uber Money” team could indicate much larger fintech ambitions for the ride-hailing giant.

As it stands, the Uber Money team will be focused on developing real-time earnings for drivers accessed through the Uber debit account and debit card, which will itself see new features, like 3% or more cash back on gas. Uber Wallet, a digital wallet where drivers can more easily track their earnings, will launch in the coming weeks too, writes Peter Hazlehurst, the head of Uber Money.

This is hardly Uber’s first major foray into financial services. The company’s greatest feature has always been its frictionless payments capabilities that encourage riders and eaters to make purchases without thinking. Uber’s even launched its own consumer credit card to get riders cash back on rides. It’s no secret the company has larger goals in the fintech sphere, and with 100 million “monthly active platform consumers” via Uber, Uber Eats and more, a dedicated path toward new and better financial products may not only lead to happier, more loyal drivers but a company that’s actually, one day, able to post a profit.


VC deals


Meet me in Berlin

The TechCrunch team is heading to Berlin again this year for our annual event, TechCrunch Disrupt Berlin, which brings together entrepreneurs and investors from across the globe. We announced the agenda this week, with leading founders including Away’s Jen Rubio and UiPath’s Daniel Dines. Take a look at the full agenda.

I will be there to interview a bunch of venture capitalists, who will give tips on how to raise your first euros. Buy tickets to the event here.


Listen to Equity

This week on Equity, I was in studio while Alex was remote. We talked about a number of companies and deals, including a new startup taking on Slack, Wag’s woes and a small upstart disrupting the $8 billion nail services industry. Listen to the episode here.

Equity drops every Friday at 6:00 am PT, so subscribe to us on iTunes, Overcast and all the casts.

Read more: https://techcrunch.com/2019/11/02/startups-weekly-understanding-ubers-latest-fintech-play/
