Media Forensics: What the Deepfake?

Jan 27, 2021
When: 5:00 PM - 6:30 PM
Where: ONLINE (Zoom)
Event Type: Polymathic Pizza

Event Details

What the deepfake is going on? Was that really Queen Elizabeth dancing on top of her regal desk in a TikTok video, or a cautionary production highlighting the dangers of misinformation? The word “deepfake” combines “deep learning” and “fake”: synthetic media generated with deep learning, a subset of AI. There are useful sides to this developing technology, to be sure. But with the democratization of technology, anyone, yes anyone, can follow one of the myriad deepfake tutorials on YouTube, using apps available at Walmart, to create a fictional video of a person doing or saying anything the creator wants. As fast as technology develops to detect deepfakes and protect machine learning for beneficial purposes, malicious actors work just as quickly to build more sophisticated fakes that evade detection. If this sounds like an arms race, it’s because it is: technology fighting technology, with potentially dangerous outcomes. Still, humans are the solution, so let’s have a real, face-to-face conversation about it. What the deepfake can we lose?

Speaker Information

Wael AbdAlmageed

Research Associate Professor of Electrical and Computer Engineering

Dr. AbdAlmageed is a Research Associate Professor in the Department of Electrical and Computer Engineering and a Research Team Leader and Supervising Computer Scientist at the Information Sciences Institute (ISI), both units of the USC Viterbi School of Engineering. His research focuses on applying large-scale machine learning techniques to computer vision and image processing problems. His research interests also include implementing machine learning and computer vision algorithms on modern high-performance and distributed computing platforms. Prior to joining ISI, Dr. AbdAlmageed was a research scientist at the University of Maryland, College Park, where he led several research efforts for various NSF, DARPA, and IARPA programs. He obtained his Ph.D. with Distinction from the University of New Mexico in 2003, where he was also awarded the Outstanding Graduate Student award. He holds two patents and has over 70 publications in top computer vision and high-performance computing conferences and journals.

Hao Li

Associate Professor of Computer Science

Hao Li is an Associate Professor of Computer Science at the University of Southern California, the Director of the Vision and Graphics Lab at the USC Institute for Creative Technologies, and the CEO and Co-Founder of Pinscreen, an LA-based startup that makes photorealistic avatars accessible to consumers. Hao's work in computer graphics and computer vision focuses on digitizing humans and capturing their performances for immersive communication, scalable 3D content creation, and telepresence in virtual worlds. His research involves the development of novel data-driven and deep learning algorithms for geometry processing. He is known for his seminal work in non-rigid shape alignment, real-time facial performance capture, hair digitization, and dynamic full-body capture. His work on depth sensor-driven facial animation also led to the Animoji feature on Apple’s iPhone X. As a Visiting Professor at Weta Digital, he worked on the digital reenactment technology for Paul Walker in the movie Furious 7. He was previously a research lead at Industrial Light & Magic / Lucasfilm and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013 and has received the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. In 2016, he was ranked #1 on Microsoft Academic's Top 10 Leaderboard of Computer Graphics research for the preceding five years. He won the USC Stevens Commercialization Award in 2017 and the Office of Naval Research (ONR) Young Investigator Award in 2018. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).