Brandspanking | Dynamic News

About

This tech demo features a news anchor summarizing today's news, ending with a twist: the user can have him say and show anything they want. The concept blurs the line between real and fake and raises questions about the morality and ethics of what technology now makes possible. By enabling anyone to create their own news narrative with just a few clicks, we hope to spark creativity and discussion.

Without going into too much detail, we would like to give you a glimpse of the research and technical solutions behind the video production, lip-sync, lip movement, and news sourcing we combined to create this unique newscast.

Production-wise, we set up a green screen, a simple desk and chair, some lights, and a camera in a fixed position in our office. After makeup and hair coloring, our anchor was all dolled up and ready to be directed by our intern, who had researched the subtle movements and mouth positions the anchorman would need for the animations. In a short time we recorded an introduction and a list of mouth and head movements, including the iconic news frown.

We started by manually animating all the mouth movements our actor made in front of the green screen and fetching synthesized test audio from Amazon Polly. Along with the audio comes timing data describing which sound is synthesized and when. All we had to do then was write a script that uses this data to trigger the successive mouth movements. However, it quickly became clear that the timing in the data was "off": it had to be adjusted so that the mouth movement matches the audio perfectly, and so another script was born.
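To make those two steps a bit more concrete, here is a minimal sketch in Python, assuming boto3 is configured with AWS credentials; the voice, offset, and scale values are illustrative placeholders, not the exact numbers we used.

```python
import json
import boto3

polly = boto3.client("polly")
text = "Good evening, here is today's news."

# Audio stream (MP3) for the newscast voice-over.
audio = polly.synthesize_speech(
    Text=text, OutputFormat="mp3", VoiceId="Matthew"
)

# Speech marks: newline-delimited JSON, one viseme event per line,
# each with a timestamp in milliseconds.
marks = polly.synthesize_speech(
    Text=text,
    OutputFormat="json",
    SpeechMarkTypes=["viseme"],
    VoiceId="Matthew",
)
visemes = [
    json.loads(line)
    for line in marks["AudioStream"].read().decode("utf-8").splitlines()
    if line
]

# Hypothetical correction: shift and stretch the reported times so the
# mouth movement lands on the audio (values here are placeholders).
OFFSET_MS, SCALE = -40, 1.02
for v in visemes:
    v["time"] = int(v["time"] * SCALE + OFFSET_MS)

print(visemes[:3])  # e.g. [{"time": 6, "type": "viseme", "value": "k"}, ...]
```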

 
(Images: analyse.png, forloop.JPG)

With the animation matched to the synthesized voice, we initially had the news anchor talking like the Canadians in South Park, because when you speak you often glide over certain vowels and string them together. Getting a smooth and realistic mouth movement was quite the study; thankfully an expert character animator helped out, and with his insight we were able to create yet another script that applies character-animation rules to the mouth movement.
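To give an idea of the kind of rule that script applies (a simplification for this write-up, not the production code): mouth shapes that are too brief to read on screen are absorbed into their neighbours, so strung-together vowels glide instead of snapping from pose to pose.

```python
MIN_HOLD_MS = 70  # assumed threshold; anything shorter gets absorbed

def smooth_visemes(visemes):
    """Drop viseme events that follow the previous one too quickly,
    so the mouth holds the previous shape instead of popping."""
    smoothed = []
    for v in visemes:
        if smoothed and v["time"] - smoothed[-1]["time"] < MIN_HOLD_MS:
            continue  # too fast to animate distinctly
        smoothed.append(v)
    return smoothed
```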

In terms of visual effects, we needed to figure out a way to combine the moving face and the moving mouth, a bit like a deepfake but without the heavy AI simulations. We did so by using Keentools to track the face and gather 3D data on its transformations. This data is then used to attach the mouth and make sure it travels across the screen accordingly.
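Roughly, and assuming the face track has been exported as per-frame translate / rotate / scale values (the field names below are ours, not the Keentools export format), the animated mouth is re-placed on every frame like this:

```python
import math

def place_mouth(mouth_anchor, track_frame):
    """Return position, rotation and scale for the mouth overlay on one frame."""
    x, y = mouth_anchor                 # mouth position on the neutral face
    tx, ty = track_frame["translate"]   # tracked head translation
    angle = math.radians(track_frame["rotate"])
    s = track_frame["scale"]
    # Rotate and scale the mouth offset with the tracked head,
    # then translate it into place for this frame.
    px = s * (x * math.cos(angle) - y * math.sin(angle)) + tx
    py = s * (x * math.sin(angle) + y * math.cos(angle)) + ty
    return {"position": (px, py), "rotation": track_frame["rotate"], "scale": s}
```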

With all the ingredients in place, something was still needed… a proper news source! Not a single news agency in the world allows use of, or changes to, their online content without repercussions. Obviously we did not want to get into trouble for spreading ‘fake’ news with their content, and after reading most of the news corporations’ content policies, we had a thought: if Wikipedia exists, surely a wiki for news must be part of the Wikimedia Foundation. So a BIG shout-out to Wikinews.org for providing independent, openly licensed news under a Creative Commons (CC BY) license. We use three of their articles (image and title) to generate the first headlines; the fourth and final headline and image are user input.
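Fetching those headlines comes down to a couple of calls to the standard MediaWiki API that Wikinews runs. The sketch below is our own illustration (the category name and limits are assumptions), and CC BY attribution still has to be added wherever the content is shown.

```python
import requests

API = "https://en.wikinews.org/w/api.php"

def latest_headlines(limit=3):
    """Return (title, image_url) pairs for the most recently published articles."""
    pages = requests.get(API, params={
        "action": "query", "format": "json",
        "list": "categorymembers", "cmtitle": "Category:Published",
        "cmsort": "timestamp", "cmdir": "desc", "cmlimit": limit,
    }).json()["query"]["categorymembers"]

    headlines = []
    for page in pages:
        # Lead image for the article, if one is set.
        info = requests.get(API, params={
            "action": "query", "format": "json",
            "prop": "pageimages", "piprop": "original",
            "titles": page["title"],
        }).json()["query"]["pages"]
        image = next(iter(info.values())).get("original", {}).get("source")
        headlines.append((page["title"], image))
    return headlines
```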

With all of the above combined, we set up the project in our Canvas platform and created an API call that automates the newscaster rendering, gathers the news, adds some background music, and makes the result publicly available (a rough sketch of such a call follows after the list below). Now we are able to create a new, fully automated talking head with just:

- half a day of production
- four days of post-production
- in 29 languages
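Purely as an illustration, a single automation request looks roughly like the sketch below; the endpoint, field names, and token are hypothetical placeholders, not our real Canvas API.

```python
import requests

payload = {
    "template": "dynamic-news",
    "language": "en",  # any of the 29 supported languages
    "headlines": [     # three Wikinews items plus the user's own
        {"title": "Wikinews headline 1", "image": "https://example.org/1.jpg"},
        {"title": "Wikinews headline 2", "image": "https://example.org/2.jpg"},
        {"title": "Wikinews headline 3", "image": "https://example.org/3.jpg"},
        {"title": "Your headline here", "image": None},
    ],
    "music": "newsroom-theme",
    "publish": True,
}

response = requests.post(
    "https://canvas.example.com/api/renders",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
print(response.status_code)
```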

We made this as a tech demo; there are some rough edges, but we are very happy with the results. We hope you like it as much as we do. Enjoy:

R&D: Felix Geerts
Camera: Vincent Oudendijk
Lights: Joris van Gulik
Character animation help: Jasper Kuijpers
Music: Miston Music