Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online content.
Generative artificial intelligence (GAI) adds a new dimension to the problem of disinformation. Freely available and largely unregulated tools make it possible for anyone to generate false information and fake content in vast quantities. These include imitating the voices of real people and creating photos and videos that are indistinguishable from real ones.
But there is also a positive side. Used smartly, GAI can provide a greater number of content consumers with trustworthy information, thereby counteracting disinformation.
To understand the positives and negatives of GAI, it is first important to understand what AI is, and what is so special about generative AI.
What do machine learning, AI and generative AI mean?
Artificial intelligence refers to a collection of ideas, technologies and techniques that relate to a computer system’s capacity to perform tasks that normally require human intelligence.
In basic terms, machine learning is the process of training a piece of software, called a model, to make useful predictions or generate content from data. The roots of machine learning are in statistics, which can also be thought of as the art of extracting knowledge from data. Machine learning uses data to answer questions. More formally, it refers to the use of algorithms that learn patterns from data and can perform tasks without being explicitly programmed to do so. In other words: they learn.
A language model (LM) is a machine learning model that aims to predict and generate plausible language (natural or human-like language). To put it very simply, it’s basically a probability model that, using a data set and algorithm, predicts the next word in a sentence based on previous words.
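The next-word idea can be sketched with a toy bigram model: count which word follows which in a training corpus, then predict the most frequent follower. The corpus and function names below are illustrative assumptions; real language models are neural networks trained on vastly larger datasets, but the underlying intuition of "predict the next word from what came before" is the same.

```python
from collections import Counter, defaultdict

# A tiny toy corpus (purely illustrative; real models train on billions of words).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    following = counts[word]
    total = sum(following.values())
    best, n = following.most_common(1)[0]
    return best, n / total

# "sat" is always followed by "on" in this corpus, so the model is certain:
print(predict_next("sat"))  # ('on', 1.0)
```

A model like this has no understanding of meaning; it only reproduces statistical regularities from its training data, which is also why larger models trained on web-scale text can reproduce both useful knowledge and misinformation.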
Such models are called generative models or generative AI, because they create new and original content and data. Traditional AI, on the other hand, focuses on performing preset tasks using preset algorithms, but doesn’t create new content.
What does generative AI mean for disinformation?
Generative AI is the first technology to enter an area that was previously reserved for humans: the autonomous production of content in any form, and the understanding and creation of language and meaning.
And this is precisely what links generative AI to the topic of disinformation — the fact that, today, it is often impossible to tell if content originates from a human or a machine, and if we can trust what we read, see or hear.
Meta AI

Recently, Meta integrated AI capabilities into WhatsApp as an additional feature to enhance the user experience and provide additional functionalities.
While this reflects technological advances in how communication is conducted, Meta does not appear to have added a comprehensive tool for countering disinformation to the bot, a gap that could contribute to the spread of disinformation, especially during democratic processes.
These AI advancements in deep learning and computer vision will also contribute to the development of deepfake technology.
Deepfakes are highly realistic but fake images, videos, and audio recordings created using AI. While these innovations hold promise for various legitimate applications, they also pose significant risks. Malicious actors can use deepfakes to spread disinformation, manipulate public opinion, and undermine trust in authentic media.

Meta generative images.
—By Adolph Muhumuza