
Papercup raises $20M for AI that automatically dubs videos – TechCrunch

Dubbing is a lucrative market, with Verified Market Research predicting that film dubbing services alone could bring in $3.6 billion annually by 2027. But it is also a tedious and costly process. On average, it can take an hour of recording studio time to produce five minutes of narration; one calculator even puts the price of a basic video at $75 per minute.

AI, in particular natural language processing, promises to speed up the task by creating human-sounding dubs in multiple languages. A British startup pursuing this, Papercup, claims its technology is being used by media giants Sky News, Discovery and Business Insider, and to translate 30 seasons of Bob Ross’ iconic show, The Joy of Painting.

CEO Jesse Shemen estimates that more than 300 million people have viewed videos translated by Papercup in the last 12 months.

“There is a significant mismatch between the demand for localization and translation and the ability to meet that demand,” Shemen said. “The popularity of [Netflix’s] ‘Squid Game’ confirms that people will watch content created anywhere and in any language if it is entertaining and interesting. That’s why the industry is so poised for growth.”

Papercup announced today that it has raised $20 million in a Series A funding round led by Octopus Ventures with participation from LocalGlobe, Sands Capital, Sky, Guardian Media Ventures, Entrepreneur First and BDMI. That brings the London-based company’s total raised to about $30.5 million, most of which will go towards researching expressive AI-generated voices and expanding Papercup’s support for foreign languages, Shemen told TechCrunch via email.

Founded in 2017 by Shemen and Jiameng Gao, Papercup provides an AI-powered dubbing solution that identifies human voices in a target movie or show and generates dubbing in a new language. Video content producers upload their videos, specify a language, wait for Papercup’s native speaker teams to check the audio for quality, and receive a translation with a synthetic voiceover.
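The article doesn’t describe Papercup’s actual interface, but the workflow it outlines (upload a video, pick a target language, wait for native-speaker quality checks, receive a synthetic voiceover) maps onto a simple pipeline. The Python sketch below models those stages with invented names (DubbingJob, JobStatus) purely for illustration; it is an assumption about how such a pipeline could be structured, not Papercup’s real API.

```python
# Hypothetical sketch of the dubbing workflow described above.
# All class, field and status names are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto


class JobStatus(Enum):
    TRANSCRIBING = auto()   # identify human voices and transcribe the speech
    TRANSLATING = auto()    # machine-translate into the target language
    HUMAN_QA = auto()       # native-speaker team checks translation and audio quality
    SYNTHESIZING = auto()   # generate the synthetic voiceover
    DELIVERED = auto()


@dataclass
class DubbingJob:
    source_video: str                       # path or URL of the uploaded video
    target_language: str                    # e.g. "es-MX"
    status: JobStatus = JobStatus.TRANSCRIBING
    output_video: str | None = None

    def advance(self) -> None:
        """Move the job one step through the (illustrative) pipeline."""
        order = list(JobStatus)
        i = order.index(self.status)
        if i < len(order) - 1:
            self.status = order[i + 1]
        if self.status is JobStatus.DELIVERED:
            self.output_video = f"{self.source_video}.{self.target_language}.dubbed.mp4"


if __name__ == "__main__":
    job = DubbingJob(source_video="bob_ross_s01e01.mp4", target_language="es-MX")
    while job.status is not JobStatus.DELIVERED:
        job.advance()
    print(job.output_video)  # bob_ross_s01e01.mp4.es-MX.dubbed.mp4
```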

Shemen claims that Papercup’s platform can generate dubs at a scale and pace that cannot be matched with manual methods. In addition to the custom translations it creates for clients, Papercup offers a catalog of voices with “realistic” tones and emotions. Many of these have been used in internal communications, company announcements, and training materials alongside movies and television, according to Shemen.

“Our ‘human-in-the-loop’ approach means that human translators do the quality control and guarantee accuracy, but need to be far less hands-on than if they were delivering the entire translation, which means they can work across more translations, faster,” said Shemen. “People watched more video content during the pandemic, which significantly increased demand for our services.”

The market for AI-generated “synthetic media” is growing. Video- and speech-focused companies like Synthesia, Respeecher, Resemble AI and Deepdub have launched AI dubbing tools for shows and films. Beyond startups, Nvidia has developed technology that alters video so that an actor’s facial expressions match the new language.

But there could also be drawbacks. As The Washington Post’s Steven Zeitchik notes, AI-dubbed content produced without attention to detail could lose its “local flair”: expressions in one language may not mean the same thing in another. In addition, AI dubs raise ethical questions, such as whether to recreate the voice of a deceased person.

Also unclear is the effect synthesized voices will have on the work of performing actors. The Wall Street Journal reports that more than one company has attempted to replicate Morgan Freeman’s voice in private demos, and that studios are increasingly including provisions in contracts aimed at using synthetic voices in place of cast members “if necessary,” for example to adjust lines of dialogue in post-production.

Shemen positions Papercup as a largely neutral platform, albeit one that monitors usage for potential abuse (like the creation of deepfakes). According to Shemen, the company is working on real-time translation of content such as news and sporting events, as well as the ability to more precisely control and refine the expressiveness of its AI-generated voices.

“The value of [dubbing] is clear: people retain 41% of the information when watching a short video that is not in their language; if it’s subtitled they retain 50%, and if it’s dubbed via Papercup they retain 70%. That’s a 40% increase on subtitling alone,” said Shemen. “With truly emotive cross-language AI dubbing, Papercup handles all forms of content, making video and audio more accessible and enjoyable for everyone.”

Papercup currently employs 38 people in London and a network of translators across three continents. The company expects its headcount to double by the end of the year.
