The Platform Accountability and Transparency Act (PATA) is a bill drafted by a bipartisan group of US Senators to increase transparency around social media platforms and their data.
Initially released as a discussion draft in December 2021, and formally introduced as a bill in December 2022, PATA contains several mechanisms designed to enable greater independent scrutiny of platforms’ decision-making around key issues like content moderation and newsfeed recommendations.
What does the bill do?
1. Lets researchers access important data
This bill would make it far easier for researchers to gain access to critical data they need to understand how platforms function behind the scenes. Under this Act, qualified researchers from US universities and nonprofit organisations who have been approved by the National Science Foundation can request access to privacy-protected data. If a request is approved, the platform would be legally compelled to provide the data.
If they fail to comply, they face losing immunity under Section 230 of the Communications Decency Act. This would mean that they could potentially be held liable for the actions of users on their platforms and exposed to lawsuits for content published on their sites. It would be a very serious sanction, one that would threaten their entire business model.
2. Ensures that data shared with researchers respects users’ privacy
Given the sensitive nature of data held by platforms, the bill contains data protection measures to ensure that any information provided to researchers is handled in a secure and privacy-conscious way. When sharing data, both platforms and researchers would need to comply with the privacy and security safeguards set out by the Federal Trade Commission, which could include provisions such as requiring the encryption and anonymisation of data, or obligating researchers to delete raw data after the completion of their project. The bill also excludes sensitive personal data such as private messages, biometric data and location data from the research access programme.
Provided that researchers comply with the appropriate privacy and security safeguards, the bill also shields them from legal liability when carrying out research on platform data. It also attempts to close potential loopholes for government access by explicitly preventing government entities from obtaining data shared with researchers.
3. Requires platforms to disclose more information about content moderation and recommendations
Separate to this, PATA also requires social media companies to provide greater transparency around the management of content on their sites, by sharing information about their content moderation practices and the way that content is being recommended to users. This includes information on the extent to which an advert was recommended, amplified or restricted by platform algorithms or policies. Social media sites will also need to disclose metrics on viral, highly-disseminated content, and content shared by major accounts.
What would this mean for political ads?
This act could have several implications for how we understand online political advertising. While certain platforms already share some details about political ads, such as Facebook's Ad Library, PATA would give researchers and non-profits access to more detailed data on how ads function on a wider range of platforms.
By making a comprehensive ad library compulsory for all major platforms, the act could give us valuable insights into how social media companies are responding to political ads that violate their policies. For instance, platforms would be obligated to disclose the measures they are taking (or not taking) to respond to political ads that contravene their policies, such as content removal, demonetisation, de-prioritisation or account suspension. This would prevent platforms from being able to "disappear" violating content without a proper explanation of why they've taken that decision.
They would also need to share information on how many people viewed a violating ad before it was identified and taken down, as well as wider estimates of the presence of violating material on their sites. This would help us understand how long violating content stays up, and what consequences, if any, accounts face for posting such content.
The new data access and reporting requirements for platforms may also offer insights into which political ads and advertisers are exerting the greatest influence, particularly during election periods. This would create new ways of understanding which advertisers are seeking to influence our voting behaviours and how, and of identifying coordinated political misinformation campaigns. These findings could be used to develop more tailored and effective regulatory responses in the future.
How likely is the bill to become law?
PATA was introduced in December 2022, right at the end of the Democratic-controlled 117th Congress. With the House now holding a slim Republican majority, the bill will likely struggle to pass before 2024. It also faces potential First Amendment hurdles: because the legislation would compel platforms to publish information about their internal operations, such 'forced transparency' could be construed as an infringement on free speech and ultimately struck down.
If this bill fails to become law, research focus could shift towards Europe, where the Digital Services Act has successfully passed and will impose similar transparency obligations on major tech platforms. If researchers in the US are unable to gain access to research data through US programmes directly, there may be opportunities to conduct research in partnership with Europe-based organisations and universities. This could lead to more research and funding flowing from the US into Europe.
Conclusion: The right approach, looking for the right moment
Despite the challenges of passing the bill, PATA's focus on transparency is a sensible approach. Unlike the UK's Online Safety Bill (which in our view is too broad and prescriptive), the legislation doesn't claim to have all the answers. Instead, it recognises that transparency is the best first step towards understanding how users are being influenced online, and how social media companies are dealing (or not) with actors who seek to abuse their platforms. Currently, some of the narratives around the impacts of social media on society seem to be up in the air: for every piece of research that finds a negative effect, another is published that seems to suggest little impact (and occasionally a positive one). Through transparency, we can use what we learn to develop more effective solutions that protect individuals and democracy in general.