Thwarting Deepfakes

Software analyzing video footage for clues that it is a deepfake.

Generative AI has lowered the barrier to fabricating convincing video, audio, and images, threatening our ability to tell whether online media are real or fake. This makes the work of information professionals, such as intelligence analysts, harder.

The DeFake project answers that threat with a user-centered, all-in-one digital media forensics platform that unites dozens of state-of-the-art analytics in a single workflow. Informed by extensive interviews with 46 journalists and 30 forensic analysts, the tool is being designed to meet their needs. Game-based training scenarios, pre- and post-explanations, and comprehensive ethical guidelines further support responsible adoption, while outreach materials and a growing network of professionals ensure the platform reaches those who need it most. The ESL Global Cybersecurity Institute project is a partnership with the School of Journalism and New Media at Ole Miss and the Department of Computer Science and Engineering (CSE) at Michigan State.
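To make the "many analytics, one workflow" idea concrete, the sketch below shows one way a platform could run several detection analytics on the same clip and report a combined score alongside the per-analytic breakdown. The detector names, scores, and averaging rule are purely hypothetical illustrations, not components of the DeFake platform itself.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical illustration: each "analytic" scores a video file for
# manipulation likelihood (0.0 = likely authentic, 1.0 = likely fake).
# The detectors and aggregation rule here are assumptions for this sketch.

@dataclass
class AnalyticResult:
    name: str
    score: float  # manipulation likelihood in [0, 1]

def run_analytics(video_path: str,
                  analytics: List[Callable[[str], AnalyticResult]]) -> List[AnalyticResult]:
    """Run every analytic on the same input and collect the results."""
    return [analytic(video_path) for analytic in analytics]

def summarize(results: List[AnalyticResult]) -> float:
    """Naive aggregation: average the scores so an analyst sees one
    headline number next to the per-analytic breakdown."""
    return sum(r.score for r in results) / len(results)

# Stand-in detectors used only to make the example runnable.
def blink_rate_check(path: str) -> AnalyticResult:
    return AnalyticResult("blink_rate", 0.72)

def face_warp_check(path: str) -> AnalyticResult:
    return AnalyticResult("face_warping", 0.65)

if __name__ == "__main__":
    results = run_analytics("clip.mp4", [blink_rate_check, face_warp_check])
    for r in results:
        print(f"{r.name}: {r.score:.2f}")
    print(f"overall manipulation score: {summarize(results):.2f}")
```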

For this pioneering fusion of AI research and newsroom practice, the DeFake project was selected as a winner of the Knight Foundation’s AI and the News Open Challenge.