Evans Data Corp. Analyst Insight Report
Fake News or Deepfake News:
Synthetic Content and Elections
Abstract
Deepfake technology can be used for political gain. This was a concern in the 2020 US Presidential Election cycle, and the technology has only advanced since then. Generative AI, video creation tools, and overconfidence in our own ability to spot fakes all contribute to the potential problem of manipulating voters with deepfakes. Successful deepfakes can discredit photo evidence, foster complacency, damage reputations, and sow seeds of doubt. Fortunately, policy and technology are combating deepfakes’ influence, and AI – the same discipline that makes deepfakes more convincing – is also being used to defend against misinformation.
This analyst insight report combines research from three of our syndicated survey report series with secondary research on the tech industry and society to highlight the challenges deepfake technology poses and the influence it can have on politics and society.
Deepfakes: Fake Problem or Underestimated Threat?
It is another election year in the United States. In many ways, we have already seen variations on the same theme unfold in the debates, political ads, and attempts to undermine opponents. For the past several election cycles, the possibility of AI-generated deepfakes has added another complication to these familiar themes: how far can you trust that the images and videos you see reflect what actually happened? Political advertisements and propaganda were once the domain of the savvy editor; now they have the potential to be driven by AI.
Developers generally believe that deepfakes exist on social media, a belief that has been common within the industry since we first explored the question in Spring 2020. In our Developer Marketing Survey Report, an annual survey of approximately 400 English-speaking software developers worldwide, we ask developers to state their level of agreement with a series of statements. For four of the past five reporting periods, these have included the statement, “Deepfakes made with AI are already being used on social media sites.” When we ask agreement questions, the intensity of agreement is often the most important factor: the responses show not only how strongly developers feel about a topic but also hint at the potential severity of a technological problem or challenge. Thus, while 83% of developers today agree that AI-made deepfakes are present on social media, the 35% of developers who strongly agree with this statement may be the more important figure to consider.