Philipp Schmitt’s Declassifier uses a computer vision algorithm trained on COCO, an image dataset developed by Microsoft in 2014. In the work, photographs from Schmitt’s series “Tunnel Vision” are run through the algorithm and overlaid with the very images it learned from in the first place. The generated collages unfold over the duration of the project, showing all COCO images that contributed to a given classification. If a car is identified in one of Schmitt’s photographs, all the cars included in the dataset that trained the algorithm surface on top of it.
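
For readers curious how such an overlay could be built, here is a minimal sketch, not Schmitt’s actual implementation: it runs a photograph through an off-the-shelf COCO-pretrained detector (torchvision’s Faster R-CNN) and then uses pycocotools to look up every COCO training image containing the detected categories. The input filename, score threshold, and annotation path are assumptions.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from pycocotools.coco import COCO

# Off-the-shelf detector pretrained on COCO (a stand-in for the model used in the work).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
class_names = weights.meta["categories"]

# Hypothetical input: one photograph from the "Tunnel Vision" series.
photo = read_image("tunnel_vision_photo.jpg")
batch = [weights.transforms()(photo)]

with torch.no_grad():
    detections = model(batch)[0]

# Keep confident detections only (0.8 is an arbitrary cutoff).
detected = {
    class_names[label]
    for label, score in zip(detections["labels"].tolist(),
                            detections["scores"].tolist())
    if score > 0.8
}

# For each detected category, list every COCO 2014 training image that
# contains it, i.e. the images that contributed to that classification.
coco = COCO("annotations/instances_train2014.json")  # assumed local path
for name in detected:
    cat_ids = coco.getCatIds(catNms=[name])
    img_ids = coco.getImgIds(catIds=cat_ids)
    print(f"{name}: {len(img_ids)} training images would be overlaid")
```

In the work itself the matching COCO photographs are composited on top of the input image over time; the sketch above only enumerates them.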

Experience the series online here.

Taking photographs to be tested by the algorithm challenges Schmitt’s way of looking: “I take photos for a computer ‘audience’ first. The human viewer is secondary. I shoot matter-of-fact, with little occlusion, as too much complexity will confuse the algorithm. It’s challenging to still take pictures that I myself find interesting.”

This process leads Schmitt to consider the underlying power relationships in algorithmic photography. Neither the photographers whose images were scraped from the internet to create COCO nor the people Schmitt photographed in the streets gave their approval for their images to be used.

Declassifier exposes the myth of magically intelligent machines: the data from which machine learning algorithms learn to make predictions is hardly ever shown, let alone credited. By doing both, Schmitt instead highlights the diverse photographic sources the algorithm draws on.
