Text-to-image models are so last month, text-to-video is here

In brief AI moves fast: just months after the most advanced text-to-image models were released, developers are already showing off text-to-video systems.

Meta announced a multimodal system named Make-A-Video that lets users type a text description of a scene and get back a short computer-generated animated clip, typically depicting what was described. Other types of data, such as an image or a video, can also be used as an input prompt. The text-to-video system was trained on public datasets, according to a non-peer-reviewed paper [PDF] describing the software.

The examples Meta has shown suggest the quality of these AI-generated videos isn't as high as some of the images created by generative models. Text-to-video, however, is more computationally intensive, since it relies on producing several images in sequence to capture motion. Make-A-Video is not yet publicly available; people interested in trying the model can sign up for access.

“We are openly sharing this generative AI research and results with the community for their feedback, and will continue to use our responsible AI framework to refine and evolve our approach to this emerging technology,” the Facebook owner said in a statement.

Bruce Willis sells image to make deepfakes

Bruce Willis has sold his image rights to Deepcake, a video-generating AI startup, allowing it to craft deepfake footage of the Die Hard superstar for any future movies.

A fake digital twin of Willis has already appeared in a commercial for the Russian telecommunications company MegaFon.

AI technology has been used before to recreate actors' voices and appearances, but Willis may be the first to officially sell the rights to his likeness for all future deepfake creations in media, according to Gizmodo. Willis retired from Hollywood after being diagnosed with aphasia, a medical condition that affects a person's ability to understand and produce language.

“I liked the precision of my character,” a statement attributed to Willis and posted on Deepcake’s website reads. “It’s a great opportunity for me to go back in time. The neural network was trained on content of ‘Die Hard’ and ‘Fifth Element,’ so my character is similar to the images of that time.”

“With the advent of the modern technology, I could communicate, work and participate in filming, even being on another continent. It’s a brand new and interesting experience for me, and I am grateful to our team.”

Using NLP to crack down on paper mills

Natural language processing algorithms can help publishers figure out whether a scientific manuscript may have been churned out by a sham scientific paper mill.

Paper mills are shady businesses that produce fake research for authors who want to appear legitimate. People are paid to ghost-write science papers, often plagiarizing existing research while changing the wording just enough to avoid detection. These fake papers are often published by less reputable journals that care more about collecting publishing fees than about a paper's quality.

Six publishers, including SAGE Publications, are now interested in testing AI-powered software to automatically flag papers that appear to be produced by a paper mill, according to Nature. Papermill Alarm, developed by Adam Day, a director and data scientist at Clear Skies, a company in the UK, uses NLP to analyze the writing style of papers.

The tool checks whether the wording of a paper's title and abstract resembles manuscripts known to come from paper mills, and assigns a score predicting how likely it is that the work came from a faker. Day ran the tool over the titles of papers that have received citations on PubMed, and found that one percent appear likely to be sham research produced by paper mills.

David Bimler, described as a research-integrity sleuth, also known by the pseudonym Smut Clyde, said the figure was “too high for comfort.” “These junk papers do get cited. People seize on them to prop up their own bad ideas and sustain dead-end research programs,” he said.
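As a rough illustration of how that kind of wording-similarity scoring can work, here is a minimal Python sketch. It is not Papermill Alarm itself: the TF-IDF-plus-cosine-similarity approach, the example mill-style texts, and the flagging threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch of wording-similarity scoring, not Papermill Alarm's actual method.
# Idea: vectorize a corpus of known paper-mill-style titles/abstracts, then score a new
# submission by its closest cosine similarity to that corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder examples standing in for a real corpus of paper-mill texts.
known_mill_texts = [
    "circRNA-1234 promotes proliferation and invasion of cancer cells via miR-56",
    "lncRNA ABCD accelerates tumor progression by sponging miR-78 in gastric cancer",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
mill_matrix = vectorizer.fit_transform(known_mill_texts)

def paper_mill_score(title_and_abstract: str) -> float:
    """Return a 0-1 score: the highest cosine similarity between the submitted
    text and any known paper-mill text. Higher means more mill-like wording."""
    query = vectorizer.transform([title_and_abstract])
    return float(cosine_similarity(query, mill_matrix).max())

if __name__ == "__main__":
    submission = "circRNA-9999 promotes invasion of cancer cells by targeting miR-12"
    score = paper_mill_score(submission)
    print(f"mill-similarity score: {score:.2f}")
    if score > 0.5:  # arbitrary threshold for illustration
        print("flag for editorial review")
```

A production system would need a much larger labeled corpus and a calibrated threshold, but the output of this kind of classifier is the same in spirit: a per-paper score that editors can use to triage submissions for human review.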

Palantir expands controversial Project Maven contract

In 2018, leaders at Google dropped a contract with the US Department of Defense for Project Maven, which uses AI technology to analyze military drone footage, paving the way for companies like Palantir to pick up where the search giant left off.

The big data analytics firm announced it was expanding its work to support the US armed services, joint staff, and special forces with AI software in a one-year contract worth $229 million. Part of that money comes from continuing Project Maven, according to Bloomberg. 

“By bringing leading AI/ML capabilities to all members of the Armed Services, the Department of Defense continues to maintain a leading edge through technology and by delivering best-in-class software to those on the frontlines,” Akash Jain, President of Palantir USG, a subsidiary unit of the company, said in a statement.

“We are proud to partner with the Army Research Lab to deliver on their critical mission to support our nation’s armed forces.”

Palantir also reportedly planned to buy its way into the UK's NHS by acquiring smaller rivals that already have contracts or links with the health service. ®
