2026-03-11 13:04:08

Meta should do more to tackle the “proliferation” of deceptive artificial intelligence content across its platforms, the company’s independent oversight body has said.

The warning was issued amid fears the spread of manipulated videos during armed conflicts risks undermining public trust in online information.

The criticism was issued by the Meta Oversight Board, a 21-member panel set up by Meta to review decisions on content moderation across its platforms, including Facebook, Instagram and WhatsApp.

It rebuked the company over its handling of an AI-generated video posted on Facebook that claimed to show extensive damage in Haifa, Israel, caused by Iranian forces.

The clip was not labelled as AI-generated despite complaints from users.

Meta said it would add a label to the video within seven days.

In its ruling, the oversight body said Meta’s current system for identifying and labelling synthetic media was inadequate.

The board said: “Meta must do more to address the proliferation of deceptive AI-generated content on its platforms… so that users can distinguish between what is real and fake.”

The board warned the rapid spread of fabricated videos connected to global military conflicts had already begun to undermine people’s ability to separate fact from fiction.

It said the growth of such material had “challenged the public’s ability to distinguish fabrication from fact… risking a general distrust of all information.”

Meta created the oversight board in 2020 as a semi-independent body intended to supervise and review the company’s content moderation decisions.

Although the board frequently disagrees with the company’s rulings, Meta retains ultimate authority over how its policies are implemented.

In the case examined by the board, the disputed video was posted in June by a Facebook account based in the Philippines describing itself as a news source.

According to the board’s findings, the video was one of a number of AI-generated clips circulating on social media following the outbreak of conflict involving Israel and Iran.

A previous analysis by the BBC found that similar videos promoting either pro-Israel or pro-Iran narratives had quickly accumulated at least 100 million views online.

Despite multiple user complaints flagging the video as misleading, Meta initially declined to label it as AI-generated or remove it from the platform.

The oversight board said the company only responded after a Facebook user appealed directly to the board and the panel agreed to review the case.

Meta had argued the video did not require a label or removal because it did not “directly contribute to the risk of imminent physical harm”.

The oversight board said that threshold was too high for material connected to armed conflict.

It added that Meta should be proactively identifying and labelling manipulated content rather than relying on users to disclose when they have used AI tools or waiting for complaints.

It said the company’s current system was “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform”.

In a statement responding to the ruling, Meta said it would follow the board’s recommendations if it encounters “identical” content posted “in the same context” as the video examined in the case.
