Originally announced last October during Adobe MAX 2019, the Content Authenticity Initiative (CAI) is an effort by Adobe, The New York Times and Twitter to develop a content attribution solution. The goal “is to increase transparency online by giving digital media artists and creators a tool to claim authorship and empowering consumers to access what they are seeking.”
Today, Adobe released further details of the CAI in a white paper, covering topics like privacy, workflows, user experience, security and more. Below are some highlights.
The importance of content attribution
The white paper starts off by introducing the CAI and its goals, drawing attention to the current limitations of metadata:
“Currently, creators who wish to include metadata about their work (for example authorship) or share details of its provenance cannot do so in a secure, tamper-evident and standard way across tools and platforms. Without this attribution information, publishers and consumers lack critical context for determining the authenticity of media.”
To that end, the CAI approaches the problem of inauthentic content through efforts in three distinct areas: detection, education and content attribution.
The white paper discusses three CAI-based workflows: photojournalism, creative professional work and human rights activism.
Photojournalists

With the growing sense of distrust in digital media, a CAI-based workflow is meant to “provide trust and transparency for photojournalists, editors and content consumers.”
In this workflow, photojournalists would use a CAI-enabled capture device that records attribution details such as authorship, geolocation, time and file storage preference. From there, they would move their files into a photo editing application, ensuring that the application has CAI functionality enabled. They would then manually add metadata about subjects and context, in addition to completing some light editing.
A photo editor would then receive the files for verification. Following that, the asset moves into the organization’s content management system, which has its own CAI implementation, and the article is published. As the post is shared on social media, the CAI metadata survives, and any modifications to the asset are captured as CAI claims.
The result is a CAI-enabled post, which provides consumers the ability to learn more details about the asset like the author, original publication, date and more.
Creative professionals

Similar to photojournalists, creative professionals capture and begin editing their image. During this time, they ensure that no detailed edit history or work-in-progress thumbnails are captured, as they are not attempting to represent a news image.
This allows the creative professional to transform the image into something that does not accurately represent a real-world event. In this case, the CAI-enabled process captures only the “before” and “after” renditions.
Human rights activists
Following the capture of the asset, the user may choose to operate online or offline. In an offline workflow, CAI data is written to the file itself, so that CAI data is embedded when the asset is shared via social media or messaging applications.
Furthermore, as the asset is discovered by other outlets, the CAI data is preserved within the asset. This enables organizations to find out details about the asset, including the original creator.
Technologies and security
The power behind the CAI
In all of the above workflows, certain technologies are utilized to ensure authenticity. This includes encoding, hashing, signing, compression and metadata.
Each CAI claim is digitally signed, ensuring its integrity and making the system tamper-evident. Each claim is also tied to a digital identity, supported through common formats such as Decentralized Identifiers (DIDs), WebIDs and OpenID.
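To make the hash-and-sign idea above concrete, here is a minimal, hypothetical sketch of a tamper-evident claim. It is not the actual CAI claim format, and it uses a symmetric HMAC purely as a dependency-free stand-in for the certificate-based asymmetric signatures the white paper describes; the field names and key are invented for the demo.

```python
# Hypothetical sketch of a tamper-evident claim -- NOT the real CAI format.
# A real implementation would use certificate-backed asymmetric signatures;
# HMAC stands in here only to keep the example dependency-free.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a certificate-backed private key

def make_claim(asset_bytes, metadata):
    """Bind metadata to the asset via its hash, then sign the whole claim."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(asset_bytes, claim):
    """Any change to the asset or the metadata invalidates the claim."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    if unsigned["asset_sha256"] != hashlib.sha256(asset_bytes).hexdigest():
        return False  # the asset itself was altered
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claim["signature"], expected)

photo = b"raw image bytes"
claim = make_claim(photo, {"author": "Jane Doe", "location": "NYC"})
assert verify_claim(photo, claim)          # untouched asset verifies
assert not verify_claim(b"edited", claim)  # edited asset fails verification
```

The key property this illustrates is tamper-evidence: because the asset’s hash and the metadata are signed together, editing either one without re-signing breaks verification.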
The CAI will also leverage standard XMP rules to ensure that the claim can be reliably retrieved using existing technology, including a variety of open source tooling.
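As a rough illustration of why XMP makes claims retrievable with existing tools, the sketch below scans a file’s bytes for the standard `<x:xmpmeta>` wrapper that XMP packets use. This is a simplification: real readers such as exiftool or Adobe’s XMP Toolkit handle many edge cases this skips, and the embedded packet here is invented for the demo.

```python
# Minimal sketch: locate an embedded XMP packet by its standard wrapper.
# Real XMP readers handle multiple/extended packets, encodings and padding.
def extract_xmp(data: bytes):
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None  # no XMP packet found
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8")

# Toy file bytes with an embedded XMP packet carrying invented claim data
fake_file = (
    b"\xff\xd8...image data..."
    b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    b"<rdf:RDF>claim data</rdf:RDF>"
    b"</x:xmpmeta>"
    b"...more image data..."
)
print(extract_xmp(fake_file))
```

Because the packet travels inside the file itself, any XMP-aware tool can recover the attribution data without needing a CAI-specific parser.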
Privacy and security concerns
A number of potential concerns arise from attacks against the certificates used to sign an asset. To address this, the ecosystem is built around its own CAI trust list, giving the CAI a model for allowing only approved signers.
What does this mean for photographers?
With today’s concerns around image manipulation and fake news, the CAI should allow photographers and other content creators to verify their content before sharing it. This protects not only the consumer but also the original creator, who can ensure they receive proper attribution.
I’ve long been against putting watermarks on photographs. Think of the CAI process as a digital “stamp,” where your attribution stays with you no matter what occurs. You’ll be able to further protect your images, while at the same time being held accountable for making sure your content is authentic.
For additional information and to read the entire white paper, visit contentauthenticity.org.