Concluding an otherwise optimistic speech about the myriad social benefits of technology at a political dinner in Washington in 2014, then-Google CEO Eric Schmidt issued a stark warning: even as we marvel at technology's benefits and limitless potential, we must never lose sight of the fact that there will always be people committed to turning that same life-improving progress toward deeply despicable ends. The technological ecosystem has delivered the goods, but in the process it has exposed us to new dangers.
The speech that night came about seven months after Edward Snowden's disclosures about the National Security Agency, but Schmidt's warning was still prescient. It came before the wider world began to prick up its ears at the escalating risk of cyber malfeasance. It came before Russian operatives mounted a coordinated campaign to manipulate American public opinion ahead of the 2016 election. It came before cable news abandoned all pretense of objectivity, before the deep politicization of social media platforms, and before "fake news," content moderation, and Section 230 of the Communications Decency Act became part of the vernacular.
Widespread access to technology and communication channels has given ordinary people the tools to generate and disseminate endless amounts of news, commentary, photos, videos, and audio. The statistics on the amount of data we generate are staggering: something like 90 percent of the world's data (from the beginning of time!) has been generated in the past two years alone.
Of course, these trends have also empowered bad actors to abuse technology to misinform and manipulate public opinion, making accomplices of millions of well-meaning Internet users who inadvertently propagate fake news with the click of a mouse. The abuse and misuse of technology, together with a deepening perception of bias in the media, have conspired to erode Americans' trust in what they see, read, and hear – especially on the Internet.
Citing the 2021 edition of the annual Edelman Trust Barometer, Axios reports: "For the first time ever, fewer than half of all Americans have trust in traditional media [and] trust in social media has hit an all-time low of 27%." Fabricated stories, doctored photographs, and fake videos designed to support false narratives have become widespread problems that breed mistrust, incite hatred, undermine democracy, and render our communications infrastructure – a public good that can dramatically improve efficiency, encourage civility, and raise living standards – unreliable, if not dangerous. Meanwhile, according to a 2020 Pew Research Center survey, roughly three-quarters of U.S. adults say technology companies have a responsibility to prevent the misuse of their platforms to influence elections, but only about a quarter are confident those companies will actually do so. The media and technology sectors have earned the public's contempt, as a Gallup survey confirms.
Skepticism can be healthy insofar as it encourages content consumers to be more discerning, but skepticism alone is not enough. Something must be done to reduce the supply of fake news and inauthentic content. Protecting the public from misinformation and disinformation, adopting measures that reduce the frequency of both, and restoring trust in the media and social media are responsibilities that should be shared by individuals, governments, technology companies, and media outlets. Popular platforms like Facebook and Twitter have established rules of the road for publishing and disseminating "manipulated media." But as data accumulation – and its potential for misuse – continues to grow by the minute, more comprehensive solutions that rely less on subjective human determinations will have to play a more significant role.
In that vein, one approach that shows promise is the Content Authenticity Initiative (CAI), a partnership among the technology company Adobe, The New York Times, Twitter, and other companies and individuals working in these spaces to help "content consumers make more informed decisions about what to believe."
Inauthentic content – whether unintentionally or intentionally misleading – is on the rise. With the rapid proliferation of digital content and of tools for creating and editing it, developing more reliable ways to ensure proper attribution and transparency is crucial to restoring and maintaining trust.
The CAI seeks to help consumers make more informed decisions about the authenticity of content and its provenance – who produced it and how, and when, where, why, and by whom it may have been altered. Content creators can already embed authorship metadata in their work, but there are no standards for transmitting attribution data in a secure, tamper-evident way across media platforms. That gap undermines the ability of publishers and consumers alike to establish the authenticity of media content.
CAI wants to solve this problem by developing a system of digital provenance using cryptographically verifiable metadata that contains information about asset creation, authorship, editing actions, capture device details, software used, and other features. This will make it easier to identify manipulated or inauthentic content and will allow content creators and editors to disclose information about who created or changed the material, what has changed, and how it has changed. The ability to provide content attribution to authors, publishers, and consumers is essential to fostering trust online.
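The core idea – binding attribution metadata to an asset so that any later tampering is detectable – can be illustrated in miniature. The sketch below is not the CAI/C2PA design (which uses standardized manifests and asymmetric X.509 certificate chains); it is a simplified, hypothetical illustration using a content hash plus a keyed signature, with all names and the demo key invented for the example:

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; real provenance systems
# use asymmetric keys tied to verifiable certificates, not a shared secret.
SIGNING_KEY = b"demo-secret-key"

def create_provenance_record(asset_bytes, author, tool, edits):
    """Bind attribution metadata to an asset via a content hash and signature."""
    record = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "author": author,
        "tool": tool,
        "edits": edits,  # e.g. ["crop", "color-correct"]
    }
    # Sign a canonical serialization of the metadata so field order is stable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(asset_bytes, record):
    """Re-hash the asset and re-check the signature; tampering with either
    the asset or the metadata causes verification to fail."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("asset_sha256") == hashlib.sha256(asset_bytes).hexdigest()
    )

photo = b"...raw image bytes..."
rec = create_provenance_record(photo, author="Jane Doe",
                               tool="ExampleEditor 1.0", edits=["crop"])
print(verify_provenance(photo, rec))         # True: asset and metadata intact
print(verify_provenance(photo + b"x", rec))  # False: asset was altered
```

A verifier who trusts the signer can thus confirm both who claims authorship and that the asset matches what was signed; changing a single byte of the image, or editing the listed author, breaks the check.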
Whether it is the CAI or some other open collaboration among content producers, publishers, and consumers that ultimately alleviates the problem of "fake news," technology – at once a blessing and a curse – has an opportunity to redeem itself and play a central role in carrying society across the trust gap.