Meta has apologized for an error that caused some Instagram users’ Reels feeds to surface violent and explicit content.
Meta says it has fixed a bug that led some users to see an excessive number of violent and explicit videos in their Instagram Reels feeds. The fix came after users reported seeing violent and disturbing content even with Instagram’s “Sensitive Content Control” enabled.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson told CNBC in a statement released Thursday. “We are sorry for the error.” The spokesperson did not elaborate on the cause of the issue.
Meta’s policy prohibits “sadistic remarks towards imagery depicting the suffering of humans and animals” and “videos depicting dismemberment, visible innards, or charred bodies.” Nevertheless, users were shown videos that appeared to depict dead bodies and graphic violence against people and animals.
The apology came a day after many Instagram users complained that their Reels feeds were recommending short-form videos featuring violence and gore that were not safe for work, some of them carrying a “Sensitive Content” label.
The mistake follows Meta’s announcement that it would relax its content moderation guidelines in order to better support free expression, a move widely seen as the company repositioning itself for the Trump administration.
According to its policies, Meta works to shield users from disturbing imagery and removes particularly violent or graphic content.
But Meta says it does permit some graphic material when it helps users condemn and raise awareness of significant issues such as terrorism, armed conflict, and human rights violations. Such content may carry restrictions, such as warning labels.
The error occurred after a policy change in January in which Meta discontinued its third-party fact-checking program and switched to a community-driven approach akin to that of Elon Musk’s X social media platform.
CNBC viewed a number of Instagram Reels posts on Wednesday night in the United States that appeared to depict brutal attacks, gruesome injuries, and dead bodies. The posts were labeled “Sensitive Content.”
According to its website, Meta employs a team of more than 15,000 reviewers alongside in-house technology to help identify disturbing imagery.
That technology, which combines machine learning and artificial intelligence, helps prioritize posts and removes “the vast majority of violating content” before users even report it, the website says.
Meta also says it tries to avoid recommending content on its platforms that may be “low-quality, objectionable, sensitive or inappropriate for younger viewers.”
The Reels problem also comes shortly after Meta said it would change how it moderates content in the name of free speech.
To cut down on moderation errors that had resulted in users being wrongly censored, the company announced in a January 7 statement that it would change how it enforces some of its content rules.
As part of this, Meta said its automated systems would now focus on “illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams” rather than scanning for “all policy violations.” For less serious policy violations, the company said it would wait for users to report problems before taking action.
Meta also said its systems had been demoting too much content on predictions that it “might” violate its rules, and that it was “getting rid of most of these demotions.”
CEO Mark Zuckerberg also said the company would replace its third-party fact-checking program with a “Community Notes” model, similar to the system on Elon Musk’s platform X, and would permit more political content.
Many have seen the moves as an attempt by Zuckerberg to mend relations with US President Donald Trump, who has previously criticized Meta’s moderation practices.
The CEO visited the White House earlier this month “to discuss how Meta can help the administration defend and advance American tech leadership abroad,” a Meta representative said on X.
In 2022 and 2023, Meta laid off about 21,000 workers, roughly a quarter of its staff, in a wave of tech layoffs that hit many of its trust and safety and civic integrity teams.