Jennifer Gradecki & Derek Curry, Infodemic, Neural network generated video, 6 minutes 6 seconds, 2020.

UNITED STATES
Infodemic
July 31, 2020
Infodemic is a neural network-generated video that questions the mediated narratives about the coronavirus created by social media influencers and celebrities. The speakers featured in the video are an amalgam of celebrities, influencers, politicians, and tech moguls who have contributed to the spread of misinformation about the coronavirus, either by repeating false narratives or by developing technologies that amplify untrue content. The talking heads are generated using a conditional generative adversarial network (cGAN), the technique behind some deepfake technologies. Unlike deepfake videos, where a neural network is trained on images of a single person to produce a convincing likeness of that person saying things they did not say, we trained our algorithms on a corpus of multiple individuals simultaneously. The result is a talking head that morphs between speakers, or becomes a glitchy, Frankensteinian hybrid of the people who contributed to the current infodemic, speaking the words of academics, health professionals, and news anchors who correct false narratives or explain the role of predictive content recommendation algorithms in the spread of online misinformation. The plastic, evolving, and unstable speakers in the video evoke the mutation of the coronavirus, the instability of truth, and the limits of knowledge.
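For readers curious about the mechanics, the following is a minimal, illustrative sketch of how a pix2pix-style cGAN is trained on pairs of landmark images and video frames. It is not the artists' implementation; the network sizes, the 128x128 resolution, and the hyperparameters are assumptions made for the example.

```python
# Illustrative sketch of one pix2pix-style cGAN training step (not the artists' code).
# The generator maps a facial-landmark image to a photographic frame; the
# discriminator judges (landmark, frame) pairs as real or generated.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Simplified encoder-decoder generator, standing in for the pix2pix U-Net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, landmarks):
        return self.net(landmarks)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator over concatenated (landmark, frame) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # grid of real/fake scores
        )

    def forward(self, landmarks, frame):
        return self.net(torch.cat([landmarks, frame], dim=1))

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(landmarks, real_frame, l1_weight=100.0):
    """One adversarial update on a batch of (landmark image, video frame) pairs."""
    # Discriminator: real pairs should score high, generated pairs low.
    fake_frame = gen(landmarks)
    d_real = disc(landmarks, real_frame)
    d_fake = disc(landmarks, fake_frame.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the real frame (L1 term).
    d_fake = disc(landmarks, fake_frame)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake_frame, real_frame)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with random tensors standing in for a batch of 128x128 image pairs.
landmarks = torch.randn(4, 3, 128, 128)
frames = torch.randn(4, 3, 128, 128)
print(train_step(landmarks, frames))
```

Training this single conditional objective on frames of many different people at once, rather than on a single person, is what lets the generator blend their features into the morphing, hybrid faces described above.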
Infodemic was created using an experimental Pix2Pix model that was trained on a corpus of multiple individuals simultaneously. Pix2Pix is a conditional generative adversarial network (cGAN) trained on pairs of images, where one image serves as a map for producing the other. In Infodemic, the model was trained on frames from multiple videos of different individuals, each mapped to its corresponding facial landmarks. Video frames were then generated from the facial landmarks of a new speaker, a news anchor, academic, or health professional talking about the uncertainty of the coronavirus or the role of predictive content recommendation algorithms in the spread of misinformation. Because the new speaker was not in the training corpus, the generated frames are often a hybrid of multiple speakers at once. Finally, the new frames were assembled into a video with the audio track of the new speaker.
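A sketch of that generation stage might look like the following, assuming a trained generator like the one in the earlier sketch, dlib's 68-point landmark predictor, and OpenCV for video I/O. The file paths, the model checkpoint, and the final ffmpeg audio mux are illustrative assumptions, not the artists' actual pipeline.

```python
# Illustrative generation stage (not the artists' code): extract facial landmarks
# from each frame of a new speaker's video, render them as a landmark image, and
# feed that image to a trained pix2pix-style generator to synthesize hybrid frames.
import cv2
import dlib
import numpy as np
import torch

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# Hypothetical checkpoint: a full Generator object saved earlier with torch.save(model).
generator = torch.load("trained_generator.pt")
generator.eval()

def landmark_image(frame_bgr, size=128):
    """Render the speaker's 68 facial landmarks as white points on a black canvas."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    if not faces:
        return canvas
    shape = predictor(gray, faces[0])
    h, w = gray.shape
    for i in range(68):
        x = int(shape.part(i).x * size / w)
        y = int(shape.part(i).y * size / h)
        cv2.circle(canvas, (x, y), 1, (255, 255, 255), -1)
    return canvas

def generate_video(source_path, output_path, size=128, fps=30):
    """Generate a new (silent) video frame by frame from the source speaker's landmarks."""
    reader = cv2.VideoCapture(source_path)
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (size, size))
    while True:
        ok, frame = reader.read()
        if not ok:
            break
        lm = landmark_image(frame, size)
        # Normalize to [-1, 1], run the generator, and map back to 8-bit pixels.
        x = torch.from_numpy(lm).permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0
        with torch.no_grad():
            y = generator(x)[0].permute(1, 2, 0).numpy()
        writer.write(((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8))
    reader.release()
    writer.release()
    # The new speaker's audio track would then be muxed back in, e.g. with ffmpeg:
    # ffmpeg -i generated.mp4 -i new_speaker.mp4 -map 0:v -map 1:a -c copy final.mp4

generate_video("new_speaker.mp4", "generated.mp4")
```

Because the generator has only ever seen the faces in the training corpus, the unseen speaker's landmark poses pull it toward whichever trained faces fit best, which is what produces the hybrid frames described above.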
About The Artists
Derek Curry (US) is an artist-researcher whose work critiques automated decision-making systems and addresses spaces for intervention in them. His work has addressed automated stock trading systems, Open Source Intelligence (OSINT) gathering, and algorithmic classification systems. His artworks have replicated aspects of social media surveillance systems and communicated with algorithmic trading bots. Derek earned his MFA in New Genres from UCLA's Department of Art in 2010 and his PhD in Media Study from the State University of New York at Buffalo in 2018. He is currently an Assistant Professor at Northeastern University in Boston.
Jennifer Gradecki (US) is an artist-theorist who investigates secretive and specialized socio-technical systems. Her artistic research has focused on social science techniques, financial instruments, technologies of mass surveillance, intelligence analysis, artificial intelligence, and social media misinformation. She received her MFA in New Genres from UCLA in 2010 and her PhD in Visual Studies from SUNY Buffalo in 2019. She is currently an Assistant Professor at Northeastern University in Boston.
Curry and Gradecki have presented and exhibited at venues including Ars Electronica (Linz), Media Art History (Krems), NeMe Arts Center (Cyprus), Art Machines (Hong Kong), ISEA (Vancouver), ADAF (Athens), and the Centro Cultural de España (México). Their research has been published in Big Data & Society, Visual Resources, Leonardo, and Leuven University Press. Their artwork has been funded by Science Gallery Dublin, Science Gallery Detroit, the Puffin Foundation, and the NEoN Digital Arts Festival.