Digital Witnessing: Devika Mohan’s Screenshot Photographs of 9/11
As an artist and designer, Devika Mohan works with digital images, ranging from collages to found photography, and experiments with digital photography. Through her work, she often invokes the anarchic possibilities of how images travel, are understood and are repurposed on the internet. Mohan uses the tropes and conventions of digital media to expose their breaking points as meaningful, representational images.
The series of images titled Good Morning, America comprises screenshot photographs taken from American news channel footage aired at the time of the 11 September 2001 attacks—commonly referred to as 9/11—and found on YouTube. The attacks on the Twin Towers of the World Trade Center in New York became one of the defining events of the twenty-first century. They considerably changed international politics and resulted in heightened Islamophobia, which in turn led to a rise in anti-Muslim hate crimes. The event also induced a public fear and hatred of extremist terror groups in the United States of America, as well as across international borders.
A landmark event in contemporary American history, this moment was visibly and extensively documented. Numerous photographic projects have attempted to capture and record it, presenting the before and after of the event, endlessly repeated on the internet since. At 9.58 am, newscasters in the United States of America were promising their audiences a sunny and beautiful September morning, right before the broadcast cut to the Twin Towers collapsing. Preserved on video-sharing platforms like YouTube, these media objects narrate a past that has since been repeatedly dramatised through cinema and referenced across American and global popular culture of the 2000s. Having grown up exposed to American popular culture and media, Mohan was provoked by the recurrence of images and references to the incident, even a decade later, to think more about it. In 2016, as a student of art and media in India, she accessed American television news reports of the event on YouTube to re-examine this encounter.
Mohan’s intervention presents to us not only a historic image of the crash, but a media-historical moment. It catches a shifting of intensities, a second of unplanned shock, when the networks of media that claim to “represent” reality in real time are disoriented by it. Describing her project, Mohan says, “…we see with our eyes and think with our minds. But technology sees with data and thinks with algorithms.” On YouTube, these videos are automatically captioned by the platform’s algorithms, which use automatic speech recognition to generate written text from speech. Like most algorithms, however, YouTube’s captioning works with varying degrees of accuracy. It sometimes misinterprets words and syllables, retroactively inscribing its own algorithmic “understanding” of the event onto the news footage.
This mediatic history is written through digital technologies that are constantly witnessing the important (as well as the ordinary) events of contemporary times. As an audience looking at the set of images, we are removed both temporally and spatially from the actual events. We “hear” what these algorithms narrate about them, in consistently lossy transmissions, until any tracing back to the original image—its message, meanings or intentions—is no longer possible. These ideas remain relevant as commercial news media today struggles with a crisis of hyperbolic representation. In the post-truth world, where fake news and “alternative” facts circulate widely, the “image” of reality continues to break at its seams.
Between the time this article was written and the time of its publication, the artist received a notice from YouTube, where she had uploaded a compiled video of this work. YouTube informed her that “they think” her video Good Morning, America violated their “violent criminal organisations policy”, as a result of which it had been taken down from the platform permanently. The letter elaborated on the policy as pertaining to “…content that glorifies violent criminal organisations or incites violent acts against individuals or a defined group of people.” One can make an informed guess that Mohan’s video was processed by another of YouTube’s algorithms—this time one filtering speech that might allude to terrorism or organised violence. Interestingly, the letter ended with the statement that “Limited exceptions are made for content with sufficient and appropriate context and where the purpose of posting is clear.” Looking at the incident through the lens of this article, it becomes apparent that it is precisely this ambiguity, or lack of clear context, that makes the video questionable to YouTube’s algorithm—even as large amounts of hateful, racist and violent rhetoric from around the world continue to be ignored or go unnoticed by such systems. Content filters have no mechanism for understanding civic intentionality, let alone the more complex intentional processes artists use to create critical commentary. In a strange (yet familiar) irony, YouTube’s algorithms become the last audience and a graveyard for Mohan’s work—as the news clips return to the platform where they were found and are rejected by it, while their original copies continue to exist.
All images from Good Morning, America by Devika Mohan. Images courtesy of the artist.