TFTRH #28 - Brian Dunning


Source: https://www.youtube.com/watch?v=Yk9C4XkCMD8


Brian Dunning is a prolific skeptical podcaster whose award-winning show “Skeptoid” is coming up on its 700th episode. He’s also a writer; his most recent book, Conspiracies Declassified: The Skeptoid Guide to the Truth Behind the Theories, explains the facts behind 50 different conspiracy theories. He’s also a documentary producer, currently working on Science Friction, a documentary about scientists who get misrepresented by the media. We discuss all these topics and more.

 
Really enjoyed the (too short) discussion about deepfakes, deep neural networks, AI-faked videos, and the potentially unwinnable arms race. Great stuff.

This reminded me of something sort of related.

I’m certainly aware of Mick's game dev history, and as a long-ago game artist myself, I was always asked “when are we going to see photoreal games that are indistinguishable from reality?”.

Years ago, many of my coworkers seemed to more or less agree on “10-15 years”, while I always thought we were at least several decades away.

That time has long since passed.

There are some stunningly beautiful games out there right now, but indistinguishable from photographic reality within 10-15 years? I’m still skeptical ;)
 
I think there's real-time face rendering that's getting pretty close.

Getting close, for sure.

I was going to say I probably should’ve said indistinguishable real-time 3D, since those clips are pre-rendered, but after poking around a bit I was reminded that the last two clips, “Siren” and the Andy Serkis one, are real-time in Unreal Engine. Pretty nice.

I think all those clips are still firmly in the Uncanny Valley, though.

But, when we do get to photoreal, real-time 3D video, it’s going to make Deepfakes look like child’s play.


Reality Defender 2020, Concerted Effort by The AI Foundation Nonprofit, Kicks Off to Protect US Elections from Deepfakes & Media Manipulation

The AI Foundation Nonprofit, in collaboration with top artificial intelligence (AI) researchers, industry partners, campaigns, news agencies and social media platforms, and pioneers of deepfake detection, is launching Reality Defender 2020 to help protect the 2020 elections from the threat of deepfakes. Reality Defender 2020 is part of AI Foundation’s larger Reality Defender initiative of synthetic media detection, specifically geared towards political disinformation and manipulation.
 
But, when we do get to photoreal, real-time 3D video, it’s going to make Deepfakes look like child’s play.
Maybe. Or maybe the two techniques will start to overlap more (think: deep learning algorithms that generate photorealistic 3D models and textures for rendering by conventional methods). I think "deep fake", or at least "deep learning", technology will continue to be a game changer for some time to come. I say this because of how relatively automated the process can be, and how a single individual can produce amazingly high-quality altered videos.

It makes it possible for one person to do the work relatively quickly, instead of needing a team of artists or one person working for a long time. Of course this isn't about applying it to games, but for making videos I think deep learning algorithms are here to stay.
 
Really enjoyed the (too short) discussion about deepfakes, deep neural networks, AI-faked videos, and the potentially unwinnable arms race.

I think the arms race is built into deepfakes. That's how generative adversarial networks work, by pitting the generative network against the discriminative network, and training them together to generate and discriminate fakes.
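To make the adversarial setup concrete, here's a minimal GAN training loop sketch in PyTorch, using toy 1-D data (all the names here are my own illustration, not from any particular deepfake tool). The discriminator is trained to separate real from fake while the generator is simultaneously trained to fool it, so every gain on one side pressures the other:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for 1-D "samples".
latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0    # "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, latent_dim))    # the generator's forgeries

    # Discriminator step: learn to label real as 1, fake as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: push D(fake) toward 1, i.e. learn to fool D.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

Which is also why any detector you deploy can, in principle, be folded back in as the discriminator to train a better forger.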
That said...
https://www.darpa.mil/program/semantic-forensics
However, existing automated media generation and manipulation algorithms are heavily reliant on purely data driven approaches and are prone to making semantic errors. For example, generative adversarial network (GAN)-generated faces may have semantic inconsistencies such as mismatched earrings.
[Image: DARPA example of GAN-generated faces with semantic inconsistencies such as mismatched earrings]
These semantic failures provide an opportunity for defenders to gain an asymmetric advantage. A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.
The Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These technologies include semantic detection algorithms, which will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.
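In other words (my own toy illustration, with hypothetical detector names), the defender's side of that asymmetry reduces to an OR over many independent semantic checks: the forger has to pass all of them, while the defender only needs one to fire.

```python
from typing import Callable, List

# A detector returns True if it finds a semantic inconsistency.
Detector = Callable[[bytes], bool]

def flag_as_suspect(media: bytes, detectors: List[Detector]) -> bool:
    # One hit from any detector is enough to flag the asset.
    return any(detect(media) for detect in detectors)

# e.g. detectors = [mismatched_earrings, inconsistent_lighting, odd_teeth],
# where each is a model the defender can train and ship independently.
```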
Semantic Forensics complements the Media Forensics (MediFor) program.
https://www.darpa.mil/program/media-forensics
DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform. If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.
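For a taste of what automated integrity assessment looks like, here's a classical baseline anyone can run: Error Level Analysis. To be clear, this is my own sketch of a well-known technique, not anything from the MediFor platform. The idea is that a pasted-in region often recompresses differently from the rest of a JPEG:

```python
import io
from PIL import Image, ImageChops

def ela(path: str, quality: int = 90) -> Image.Image:
    """Recompress a JPEG and amplify the per-pixel differences."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    diff = ImageChops.difference(original, Image.open(buf))
    # Differences are usually tiny; stretch them so they're visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255 // max_diff
    return diff.point(lambda px: min(255, px * scale))

# ela("suspect.jpg").save("suspect_ela.png")
```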
 

Yeah, you’re probably right. In hindsight, it’s going to have to be a combination of human artistry and machine learning.

Deepfakes’ lack of creativity and humans’ lack of unrelenting, tireless perseverance seem like a peanut butter and jelly situation.

I think the arms race is built into deepfakes.

Your post reminds me that I should probably start messing around with creating my own deepfakes before I speak with any command.

I did some hasty searches the other day for the most popular methods/software but wasn’t able to settle on anything in particular.

Anyone have any recommendations?
 
I did some hasty searches the other day for the most popular methods/software but wasn’t able to settle on anything in particular.

Anyone have any recommendations?
This is what I've played around with:
https://github.com/iperov/DeepFaceLab
My results with it are very amateurish, but for my purposes I wasn't going for realism, and the source material was low quality and limited. I'm also not a graphic artist, and my video editing experience is limited to astrophotography/videography, so just being able to automatically throw someone's "face" onto another character's face was something I could not have done without this toolset.
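For anyone curious what tools like this automate, the classic deepfake architecture (as commonly described; this is a toy sketch of the idea, not DeepFaceLab's actual code) is a shared encoder with one decoder per identity. Train decoder A on person A's faces and decoder B on person B's; the swap happens when you run A's encoded face through B's decoder:

```python
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    """Shared encoder, per-identity decoders, on flattened 64x64 RGB faces."""
    def __init__(self, dim: int = 64 * 64 * 3, latent: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, latent))
        self.decoder_a = nn.Sequential(nn.Linear(latent, 1024), nn.ReLU(),
                                       nn.Linear(1024, dim), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(latent, 1024), nn.ReLU(),
                                       nn.Linear(1024, dim), nn.Sigmoid())

    def swap_a_to_b(self, face_a: torch.Tensor) -> torch.Tensor:
        # The shared encoding captures pose/expression/lighting;
        # decoder_b re-renders it with person B's appearance.
        return self.decoder_b(self.encoder(face_a))
```

In training, each decoder only ever reconstructs its own person, so the shared encoder is forced to learn identity-independent structure; that's the part that makes the swap work with so little manual effort.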
 

Nice show, I'm going to watch more.
 