Chatbots at the ballot box: AI skirts Brazil election rules
"Chat, who is the best candidate?": Six months out from Brazil's presidential election, AI chatbots are still answering such questions in defiance of new electoral rules banning them from giving voting tips.
The head of Brazil's electoral court (TSE), justice Carmen Lucia, warned in January that artificial intelligence chatbots could lead to the "contamination" of the October vote in Latin America's biggest nation.
In March, the court imposed new regulations restricting how chatbots may operate during the 2026 election cycle and increasing platform liability for false content.
The TSE has taken a leading role in the fight against disinformation, declaring far-right former president Jair Bolsonaro ineligible to run for office for spreading false information about the Brazilian electoral system during 2022 polls.
The 2026 election is the first major vote to be held since chatbots became widely available in the country.
The AI tools have been forbidden from providing recommendations, rankings, or opinions regarding candidates and political parties -- even when prompted by a user.
However, in tests conducted by AFP weeks after the new rules were set, at least three leading AI chatbots continued to rank political candidates.
When asked who the "best candidates for the 2026 elections" would be, ChatGPT, Grok, and Gemini all weighed in.
"Honest conclusion. The 'technically' best options today: Tarcisio/Zema," ChatGPT responded.
The bot was referring to Sao Paulo's powerful governor Tarcisio de Freitas, who has ruled out a presidential bid, and former Minas Gerais state governor Romeu Zema, a possible candidate for the right-wing Novo party.
- Errors and biases -
President Luiz Inacio Lula da Silva, 80, placed between second and fifth, receiving praise from the chatbots for his "vast experience," but facing criticism for his "advanced age."
The veteran leftist is seeking a fourth term in office.
His main rival in the polls, Flavio Bolsonaro -- son of the former president -- came last or did not appear on the lists.
Such responses have raised concerns that the technology could sway voters in the highly polarized and hyper-connected country with incorrect or biased information.
This is because chatbot replies are generated by probabilities based on training data, which may contain errors or biases, said Theo Araujo, director of the Amsterdam School of Communication Research.
A study he carried out during the 2025 elections in the Netherlands showed that one in ten people were likely to use AI chatbots to seek information about candidates.
- Voters assume AI neutrality -
In March, AFP's fact-checking team verified as fake an image that allegedly showed Flavio Bolsonaro with Daniel Vorcaro -- a businessman under investigation for a major banking fraud scandal that has rattled the country's elite.
However, Grok -- X's AI chatbot -- said the picture was real and even provided a date for the alleged meeting.
Araujo said that voters were likely to assume that chatbots were "neutral or objective sources, and consequently process their responses less critically."
Some candidates have reinforced this idea.
In a post on X earlier this month, Flavio Bolsonaro urged his followers to "ask Chat what the truth is."
Many have done so.
A quick search on the social network revealed various users asking Grok for voting recommendations.
"Based on the six criteria outlined in my post, which pre-candidate should I vote for?" asked one internet user, while another asked whether they could trust the results of an opinion survey.
- No clear punishment -
Despite the concerns, it is unclear how the TSE's new rules will be enforced, as they do not provide for specific sanctions.
The court could order a daily fine, Diogo Rais, a lawyer specializing in electoral law, told AFP.
However, the fine amounts are not set in advance and could be challenged in court.
When contacted, OpenAI stated that ChatGPT is "trained not to favor candidates" and that it continues to refine its models.
Google said that Gemini generates responses based on user prompts, which do not necessarily reflect the company's views.
Attempts to contact X were unsuccessful.
(F.Bonnet--LPdF)