Informed voting in the age of artificial intelligence
Artificial intelligence (AI) has entered the election—globally.
More than 2 billion voters in at least 50 countries are heading to the polls in 2024. This comes just as powerful AI tools, including ChatGPT, have gone mainstream.
“AI is increasing cybersecurity automation—helping detect attackers, analyze malware, and assist those who protect our critical networking systems,” said Matthew Wright, chair of RIT’s Department of Cybersecurity. “But any new technology can be used for good and bad. Malicious intruders are also taking advantage of these technologies.”
These days, digital literacy is key to being an informed voter. To help voters navigate these technologies, a few RIT experts are weighing in.
All about social media
• Slow down and evaluate what you read. Critical thinking is part of digital literacy.
• Just because an account is verified on social media doesn’t mean it can be trusted.
• Election officials with the National Association of Secretaries of State have assembled information on each step of the election process. Look for the #TrustedInfo2024 initiative at nass.org.
• Understand that disinformation campaigns are deliberately intended to mislead, while misinformation is simply getting the facts wrong—we’re all guilty of that sometimes.
Expertise: Activist scholar and expert in Black social media usage.
Katrina Overby
Assistant Professor, School of Communication
Real concerns with deepfakes
• Deepfakes are video, audio, images, and text that have been generated or manipulated using deep-learning AI models.
• People can trust news outlets with a proven track record of verifying sources. Journalists verify videos by talking to original sources and seeking comments from people in the video. Good journalism means making sure the pieces match.
• Be aware that it’s easier to create convincing audio deepfakes than video deepfakes.
• Get outside your echo chamber. Media literacy means checking multiple news sources.
Expertise: Works on developing deepfake detection tools for journalists and intelligence analysts through a federal government and Knight Foundation-funded project.
Matthew Wright
Chair of the Department of Cybersecurity
Keep an eye on image manipulation
• Question the source before you share an image on social media. Do a reverse image search to see where else it appears and in what context.
• Look beyond the main subject; that's where details might fall apart or appear mismatched in an AI-generated image. However, generative models are always improving.
• Check out the Content Authenticity Initiative, where camera companies, social media, and academia are coming together to improve digital content verification.
• An image shared out of context can be just as deceptive as a manipulated or manufactured one.
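For readers who want to script the reverse-image-search tip above, here is a minimal Python sketch that builds search links for a publicly hosted image URL. The TinEye and Google Lens URL patterns are assumptions based on their public web pages, not documented APIs, and could change at any time:

```python
from urllib.parse import quote

def reverse_search_links(image_url: str) -> dict:
    """Build reverse-image-search links for a publicly hosted image.

    The URL patterns below mirror what the TinEye and Google Lens web
    interfaces currently accept; they are assumptions, not official APIs.
    """
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {
        "tineye": f"https://tineye.com/search?url={encoded}",
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
    }

# Example: generate links you can open in a browser to see where
# else this (hypothetical) image appears and in what context.
links = reverse_search_links("https://example.com/photo.jpg")
for name, url in links.items():
    print(name, url)
```

Opening either link in a browser shows where else the image appears online, which is often enough to spot a photo being reshared out of its original context.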
Expertise: Led a Department of Defense-funded project to help develop a tool that determines how images have been manipulated.
Christye Sisson
Gannett Chair and director of the School of Photographic Arts and Sciences
Threats of cyberwarfare
• Understand that generative AI is reducing barriers to entry in cyberwarfare, making it easier for malicious actors to build targeted campaigns that seek to divide our population.
• Advertisements and news sources are susceptible to influence campaigns that can manipulate voting outcomes. The most critical voting machine is your brain. Don’t let your values get hacked.
• Be on the lookout for phishing attacks. Criminals can take advantage of the election season to exploit your data for financial or personal gain.
Expertise: U.S. Army Reserve Lieutenant Colonel with experience in cyber threat analysis and voting accessibility research.
Justin Pelletier
Director of the Cyber Range and Training Center in RIT’s ESL Global Cybersecurity Institute