Social Media Giants Tighten Political Ad Rules Amid Misinformation Concerns, But Experts Warn It May Be Too Late

In the run-up to the U.S. election, major tech companies are stepping up their efforts to curb the spread of political misinformation. Meta has banned new political ads on Facebook and Instagram, Google has imposed a similar pause on YouTube, and TikTok, which has not allowed political ads since 2019, continues to uphold its ban. The moves come as social media platforms brace for potential chaos in the wake of an election that could be decided days after polls close.

However, experts are questioning whether these last-minute actions will be enough to stem the tide of misinformation that has already flooded the internet in recent years. While the ad pauses aim to prevent political campaigns from manipulating public sentiment or declaring premature victory during what could be a drawn-out counting process, critics argue that the measures may prove ineffective given how broadly and deeply misinformation continues to spread online.

Social Media Ad Pauses: A Late Attempt to Control Misinformation

Meta’s decision to suspend political ads on its platforms—including Facebook and Instagram—was intended to reduce the risk of campaigns spreading false or misleading messages during the election period. Initially, Meta set the ad ban to expire on Election Night, but it extended the ban through the following days as ballot counting continued. Google has imposed a similar suspension on election-related ads, set to last until an unspecified time after the polls close. TikTok has not run political ads since 2019, maintaining its longstanding policy.

In stark contrast, X (formerly Twitter) lifted its ban on political ads in 2023 and has announced no ad restrictions for this election. Because X has been one of the more contentious platforms for misinformation since its acquisition by Elon Musk, this gap in policy raises concerns about how effective the industry's efforts to mitigate election-related disinformation can be.

A Band-Aid Solution to a Deeper Problem

While the ad pauses are intended to limit the potential for political campaigns or their supporters to exploit the days of uncertainty that often follow elections, experts argue that they are unlikely to meaningfully reduce the overall flow of misinformation. Disinformation has already spread so widely across social media that these temporary measures might not be enough to undo the damage already done.

“Stopping political ads will not stop misinformation. These platforms are designed to amplify sensational, polarizing content—whether it’s true or not,” said Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH). “Even without paid ads, the organic reach of false claims is vast, thanks to the algorithms that prioritize highly engaging, often controversial material.”

Moreover, many social media platforms have a history of promoting harmful content through their algorithms. Content that stirs emotions, sparks outrage, or aligns with existing biases tends to get more attention, regardless of its factual accuracy. As a result, even if political ads are paused, the algorithmic amplification of disinformation continues.

Erosion of Content Moderation Policies

Critics also point out that these ad pauses come at a time when many social media companies have weakened their content moderation practices. Following the fallout from the 2016 election and the January 6, 2021, Capitol attack, platforms initially invested in stronger measures to combat disinformation, including removing misleading posts and suspending accounts that spread harmful content. However, many of these policies have since been rolled back.

Meta, for example, reversed its stance on removing false claims that the 2020 election was “stolen,” and X under Musk’s leadership has loosened restrictions on what can be shared, allowing more controversial or misleading posts to circulate. This retreat from stronger moderation, referred to by some as the “backslide,” has allowed misinformation to spread more freely in the months leading up to this election.

The consequences of this backslide were felt earlier this year when conspiracy theories about natural disasters and political violence gained traction across platforms, even as official response efforts were hindered by misinformation. On X, Musk’s own posts about the election, immigration, and voting generated over 2 billion views in a single year, according to an analysis by the CCDH.

Misinformation’s Persistent Hold on Social Media

The challenge of combating misinformation is particularly pressing now that AI tools have made it easier to create and distribute fake images, videos, and audio recordings that can further mislead voters. Meanwhile, high-profile figures—especially former President Trump—have continued to spread false claims about election fraud, undermining confidence in the electoral process.

Despite these challenges, social media companies insist they are taking steps to mitigate misinformation. TikTok has partnered with fact-checkers and is labeling unverified claims about the election to prevent them from gaining traction. Meta says it will take down content that interferes with voting or spreads false claims, while YouTube has pledged to remove any content that encourages election interference or violence.

While these actions are important, experts warn they may not be enough given the sheer volume of misinformation that has already spread. “The misinformation problem is like a flood,” said Sacha Haworth, executive director of the Tech Oversight Project. “And once it’s out there, it’s incredibly difficult to control.”

The Bigger Picture: Platforms Still Struggling to Address Misinformation

The ongoing issue of misinformation on social media is compounded by the platforms’ inconsistent enforcement of their policies. For example, Meta’s policy does not explicitly prohibit non-ad content declaring early victory for a candidate, though such posts may be labeled as false. Similarly, X’s Civic Integrity Policy forbids misleading claims intended to manipulate elections but allows for a wide range of biased, hyperpartisan content that often perpetuates misinformation.

Critics argue that until social media platforms take more decisive action—such as reversing policy rollbacks and re-strengthening content moderation practices—the spread of misinformation will continue to undermine trust in the electoral process. Without robust moderation, platforms will remain hotbeds for false narratives that can influence voters and potentially incite violence.

Will Election Ad Pauses Make a Difference?

While the temporary ad bans by Meta, Google, and TikTok might limit some forms of political manipulation, they are unlikely to solve the larger problem of misinformation. These companies still need to address their algorithmic amplification of misleading content, inconsistent enforcement of their policies, and the growing influence of high-profile users spreading false narratives.

For many experts, the real challenge will be ensuring that social media platforms are held accountable for the content they host, not just in the final days of an election, but year-round. As long as misinformation continues to thrive, these temporary measures might only scratch the surface of the much deeper issue threatening the integrity of U.S. elections.

In the end, the efforts to block political ads may be too little, too late to combat the misinformation that has already spread across social media. Whether these actions will have a lasting impact on the election’s outcome or whether they will simply be another short-lived attempt to curtail the damage remains to be seen.