With the arrival of TikTok’s new “commercial content library” (a.k.a. a library of ads plus whatever influencers get paid to post), the need for transcripts for video-based ads and captions for images to be provided as standard has never been more obvious.
For researchers, processing video and images is hard. The files are large, processing them without powerful hardware is slow and, quite simply, there are too many to watch. Everyone we speak to who wants to take a look at what’s in TikTok’s ad library says the same thing, with a weary, resigned “I don’t know if we’ll have the resources”.
It should therefore be seen as a stroke of luck that all of the big platforms already scan the content of the videos and images they serve to users. They do it for moderation purposes (trying to detect illegal content and things that break their rules), to improve accessibility (auto-subtitling videos and auto-generating alt-text for images) and to offer search (e.g. TikTok’s search looks at the content of videos, not just the account names, captions or hashtags).
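To make the resourcing problem concrete, here is a minimal sketch of what a researcher currently has to do for every single video ad, using the open-source Whisper speech-recognition model (our choice purely for illustration; the platforms’ internal systems will differ):

```python
# Minimal sketch: transcribing one downloaded video ad locally.
# Assumes `pip install openai-whisper` and ffmpeg on the PATH.
# "ad_video.mp4" is a hypothetical downloaded ad creative.
import whisper

# Even the small "base" model is slow on a CPU; the larger,
# more accurate models effectively require a GPU.
model = whisper.load_model("base")

# Whisper extracts the audio track via ffmpeg and transcribes it.
result = model.transcribe("ad_video.mp4")

print(result["text"])  # full transcript

# Timestamped segments, i.e. subtitles.
for seg in result["segments"]:
    print(f"{seg['start']:.1f}s to {seg['end']:.1f}s: {seg['text']}")
```

Run once, this is manageable. Run across every video in an ad library, it is exactly the resourcing problem described above, and one the platforms have already paid to solve at scale.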
To make research easier (or even possible), platform ad libraries should include this information as standard.
Every ad displayed in an ad library or available via an ad library API should come with the associated automatically generated image description, keywords or transcript/subtitles. Yes, researchers would have to trust that the automatic services platforms use are accurate (though this can be spot-checked against the original media), but the computational cost of producing this data independently is so high that, for most researchers, imperfect platform-generated data is far better than none at all.
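To sketch what that could look like, here is a hypothetical shape for a single ad library API record with the derived fields attached. Every field name below is an illustrative assumption, not any platform’s real schema and not our own standard:

```python
# Hypothetical ad library API record with automatically generated
# fields attached. All field names are illustrative assumptions.
from typing import TypedDict

class Subtitle(TypedDict):
    start_seconds: float
    end_seconds: float
    text: str

class AdRecord(TypedDict):
    ad_id: str
    advertiser: str
    media_type: str            # "video" or "image"
    media_url: str
    # Fields the platform already generates internally:
    transcript: str            # auto-generated transcript (video ads)
    subtitles: list[Subtitle]  # timestamped segments (video ads)
    image_description: str     # auto-generated alt-text (image ads)
    keywords: list[str]        # content keywords used for search
```

Even a flat, imperfect version of these fields would let researchers query an ad library as text, which is cheap, rather than as video, which is not.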
We’ve held a spot for transcripts and image descriptions in our ad transparency data standard for a long time now. It’s still unfilled.
Good regulation would bring platforms, regulators and researchers together to refine and adopt a standard like this. We hope that happens, but in the meantime, the major services should take the voluntary first step.