Viktor Witkowski
AI IN CONTEMPORARY ART – BETWEEN CAPITALIST COMPLIANCE AND MODES OF RESISTANCE

Abstract

Illustration: Magdalena Lazar

This essay focuses on a range of questions regarding generative AI and its role in contemporary visual art. As a recent and emergent technology, AI has already been used by scores of artists, and in the near future we can expect that number to increase. For this contribution, I introduce the artists Alex Israel, Jordan Matthew Wolfson, Hito Steyerl and Trevor Paglen to stake out their opposing positions. The first two approach AI as just another tool at their disposal, one that opens up new ways of viewer engagement, new means of production, and a more seamless and therefore less visible alignment with corporate interests (the latter is most apparent in Israel’s work). Steyerl and Paglen, on the other hand, examine the use of AI by connecting it to a larger set of issues rooted outside the art world, raising questions about governmental and corporate use and abuse and about artists’ complicity within that system. Towards the end of my essay, I suggest that AI in art cannot and must not be considered in isolation from its ‘real‑world’ use. AI is increasingly embedded in social media apps, search engines, disinformation campaigns, and surveillance systems, which poses a challenge to artists who appropriate this technology for their purposes without acknowledging its larger implications. If artists embrace a holistic and critical approach to AI, their artworks will become more meaningful, more ethically sound and most likely better works.

AI Origins and the Artist Dilemma

On November 28th, 2023, the auto company BMW announced a collaboration with Alex Israel on his interactive AI project, titled REMEMBR. Since 1971, BMW’s aim has been, in the words of Dr. Thomas Girst, BMW Group’s Head of Cultural Engagement, “to facilitate meaningful exchanges and projects and not just throw money at artists.”1 Israel’s multi‑channel work is a slick product that bears all the marks of a project that received perhaps too much funding while failing to produce a meaningful encounter. Until July 13th, 2024, Gagosian gallery visitors in London were encouraged to connect their phones via an AI‑powered app (designed by AI engineer Yunus Saatchi) to Israel’s installation, which gained access to their “Photos & Videos” folder. The app used this content as raw material to create a 120‑second‑long, thematically edited slideshow of images and clips, paired with prerecorded music and distributed across seven screens shaped like the artist’s head. An accompanying manual explained that a filter prevented any explicit material from being shown and that none of the shared content would be stored or saved.

Israel’s description of his installation as “very pop, very much like the types of shorter content formats that I used to watch growing up, like music videos or short cartoons”2 demonstrates his disbelief in, or at least indifference to, AI’s negative implications and technological advancement. Israel’s approach to AI can be described in the words of a fellow artist, Jordan Matthew Wolfson, who has been using the artificial intelligence system DALL‑E to create realistic imagery from descriptive prompts. Wolfson says: “AI is a means to an end, it’s like a synthesizer… The most interesting part is the manifestation of the image through a machine onto the world.”3 If AI is just another machine or tool, a means to an end, how then do we address its generative potential and computational creativity? Neither music videos, short cartoons nor a synthesizer has anything in common with the computational operations carried out by AI. Each time we use a ‘text‑to‑image’ prompt, we ask the artificial intelligence to generate an outcome. If we are not pleased with the results, we can ask it to try again and produce an image closer to our expectations. It is also possible to prompt an outcome that surprises us – something we could never have imagined ourselves. Is a tool or machine that learns over time no longer just a machine or tool? Some philosophers of mind and computer scientists have dedicated themselves to asking how artificial intelligence might challenge our understanding not just of ethics, but of the self, consciousness and free will.

In 1956, my home institution, Dartmouth College, organized a conference that became the starting point of AI as a field of study. In the 1955 proposal spearheaded by computer scientist John McCarthy, which led to the Dartmouth conference a year later, McCarthy and his contributors Marvin Minsky, Nathaniel Rochester and Claude Shannon concluded: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”4 If we take this thought, that AI simulates intelligence and with it learning and imagining, one step further, we arrive at computer scientists Stuart Russell’s and Peter Norvig’s 2003 definition of what distinguishes ‘weak’ AI from ‘strong’ AI:

“[T]he assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking), is called the strong AI hypothesis.”5

When artists Alex Israel and Jordan Matthew Wolfson claim that AI is “just another tool” in the artist’s toolbox, they are mischaracterizing and downplaying what AI is and what it is capable of – independent of their own use and intentions. Both artists want to remain the sole authors of their work. They see their creative results as enhanced by AI, although AI acts as a co‑author of their work: one that follows their prompts and its own algorithmic framework, but with a degree of artificial invention that appropriates the data sets it was trained on.

Because ‘learning’ is a crucial element of AI’s generative potential, greater quantities of data and computation help to refine and improve AI outcomes.6 The more AI images are generated, the more training and learning opportunities occur. The visual outcomes become less generic and repetitive, they can produce varying modes and styles of representation, and they become less prone to glitches and ‘mistakes’, especially when the prompts ask for naturalistic outcomes (think of the six‑fingered hands we saw in the early stages of AI imagery, which have now almost completely disappeared). Between 2022 and 2023 (within 18 months, to be precise), a total of 15 billion AI images were generated from text‑to‑image prompts – roughly the number of all photographs taken between photography’s inception in 1826 and the early 1970s.7

According to “Everypixel Journal”, since the launch of the AI application DALL‑E 2, “people are creating an average of 34 million images per day.”8 It is relevant to point out that during the same period (2022–2023), the NFT trading volume was cut by more than half: from $26.3 billion in 2022 to $11.8 billion in 2023.9 It declined even further throughout 2024. One conclusion is that there is no correlation between the increase in AI‑generated images and NFT sales: most AI‑generated images were never turned into NFTs, or, if they were, they did not sell. If one were to believe the predictions of NFT‑affiliated media and cryptocurrency platforms, NFT sales will bounce back over the coming years – partially thanks to AI NFTs (two of the main AI platforms used to generate NFTs are DeepAI and Midjourney).10 The most plausible conclusion to draw from AI and NFT market data is that, even though we can expect some overlap between AI‑generated artworks and NFTs in the foreseeable future, AI will remain embedded in traditional modes of making and showing artworks. When I say ‘embedded’, I mean that the AI component of an artwork will be integrated at best, and downplayed, obscured or denied at worst.

Most artists are not interested in giving up authorship of their work, which explains the awkward comments by Israel and Wolfson: AI reminds them of “short content like music videos”, “cartoons” or “a synthesizer”. The use of AI in Alex Israel’s REMEMBR falls short of its claim to artificial intelligence: what he describes as an “AI app” is not much more than a simple algorithm that randomly selects source materials, photos and video clips, and presents them as a slideshow – very similar to Facebook’s Year in Review or the Memories feature in Apple’s Photos app. Alex Israel’s AI is not that intelligent in the end. Jordan Matthew Wolfson’s work UNTITLED, which he began in 2021, relies on AI‑generated images that are printed as conventional digital photographs before being mounted on wooden panels.11 Yet he cannot bring himself to downplay his own role in their creation and says: “Those art AI images are directly out of my mind, but they’ve had to go through an intermediary kind of device to create (them).”12 Wolfson’s statement reveals the artist’s AI dilemma: if AI is described as a mere ‘tool’, it might lose its alleged revolutionary potential for the arts, but if it is presented as a medium able to create independently of, or indistinguishably from, an artist’s vision, then it could overshadow its creator. It is therefore quite plausible to assume that AI in the hands of artists will never be allowed to outgrow or outshine them. For some artists, the need to distance themselves from AI as an artist’s tool is connected to a sense of critical awareness mixed with the urge to reject new art trends that have been unequivocally declared ‘en vogue’. This group represents the artist stereotype of the anti‑establishment, back‑to‑the‑roots skeptic we hold so dear, and their number is most likely small. Who is to tell if they are skeptical or ignorant?
Then there are artists like Alex Israel and Jordan Wolfson who want to give the art market what it is asking for and who, as professional, blue‑chip artists, have internalized over the years how to stay afloat in a market that is unpredictable, prefers to play it safe, yet always asks for the next ‘big’ thing. Artworks that engage with AI but diminish its role in the creative process, as in the case of Israel and Wolfson, seem to strike a balance that collectors are willing to support. With artists proclaiming that they are in charge while AI is merely a ‘useful tool’, the ratio between artistic ‘genius’ and groundbreaking technology is just right for a dealer and collector base that wants to be perceived as contemporary art pioneers (without losing too much money in the process). Most artists, aware of AI and its risks and shortcomings, are waiting to see how AI evolves, what opportunities or dangers it might unlock, and how it impacts the role of the artist. Will we move from being makers of objects to content creators?13 Will the physical aspect of artmaking and art‑viewing become rare and obsolete, or will it gain in relevance due to AI’s omnipresence in all spheres of art and life? Will AI art become indistinguishable and inseparable from other artworks, in which case all art museums would turn into mausoleums of collective cultural loss? Making predictions about AI comes with an abundance of possible scenarios, both favorable and unfavorable. None of these possibilities is particularly enlightening and all of them are speculative. Instead of focusing on how transformative AI might become to our everyday experience, it is worth critically examining what it currently is and how it works.

Technological Dependency, State Power and Disinformation

In a March 16th, 2023 “The Guardian” article, artist James Bridle describes the origin of data needed to render AI functional:

“The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data‑driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). (…) Far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI is entirely dependent on the uncredited and unremunerated work of generations of human artists.”14

At the end of his article, James Bridle points out the new tech oligarchy that sits at the intersection of harvested, invisible labor and a product that is intelligent and groundbreaking only in name:

“AI image and text generation is pure primitive accumulation: expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners.”

Bridle is not the only artist who is looking under the hood of AI to demystify widespread notions of a system that is about to become autonomous and then potentially murderous. This simplistic and reductive characterization of AI as an existential threat to humanity tells us more about humanity’s current state than it does about AI. German filmmaker and artist Hito Steyerl echoes a similar sentiment. In a March 2023 “Artnet” interview with Kate Brown she says on the use of AI:

“It’s a great PR move by the big corporations. The more people talk and obsess over it, the more the corporations profit. For me, these renderings—I call them “statistical renderings”—they are the NFTs of 2022 (…).”15

Steyerl continues:

“Companies try to establish some kind of quasi‑monopoly over these services and try to draft people to basically buy into their services or become dependent on them. That’s the stage we’re at. The renderings are basically the sprinklings over the cake of technological dependency.”

These characterizations of AI as a tool of exploitation and dependency, concocted by our very own tech oligarchs, do not leave much room to imagine and discuss AI as a positive technology within the context of art. But filmmakers like Hito Steyerl have made it their task to consider AI critically while also looking at ways to integrate it into their artistic practice. One such example is Steyerl’s exhibition at the MdBK (Museum of Fine Arts) Leipzig, titled LEAK. The end of the pipeline. This exhibition, which took place from April 25th to August 4th, 2024, was carried out in collaboration with Ukrainian filmmaker Oleksiy Radynski and German cultural researcher Philipp Goll and focused on the controversial Russian‑German pipeline projects Nord Stream 1 and 2. Her five‑channel video installation addressed the history of several pipeline projects and how they were framed by German and Russian politicians as ‘cultural pipelines’ from the 1980s onward. Germany’s dependency on inexpensive natural gas was rebranded as an opportunity for both countries to overcome historical and political rifts. The Russian state, and later Gazprom, the Russian state‑run operator of Nord Stream, invested in art exhibitions as a means of soft diplomacy. Two such examples are the 2012 exhibition Russians and Germans: 1000 Years of Art, History and Culture16, which took place in the Russian Embassy in Berlin, and the more recent Dreams of Freedom: Romanticism in Russia and Germany, on display at the Albertinum in Dresden from October 2021 until early February 2022. The latter was made possible with the help of the Russian state‑owned Tretyakov Gallery, which in turn is partially funded by Gazprombank (Gazprom owns a 46% stake in Gazprombank).

In her installation LEAK, Hito Steyerl reveals the various connections between individual politicians on both sides like former German chancellor Gerhard Schröder who was a key driver of the Nord Stream project and who famously called Vladimir Putin a “flawless democrat”.17 She exposes a network of governmental self‑interest, greed, exploitation of natural resources, dependency on fossil fuels and Nord Stream’s sudden and spectacular demise when unknown assailants planted and detonated explosives, effectively rendering both pipelines inoperable. LEAK was entirely created by using footage from the public domain. This included underwater shots of the destroyed pipelines and other archival materials in sound and image.

In this twenty‑one‑minute‑long video there is one exception: during his research for Steyerl’s work, her collaborator Philipp Goll came across a video recording from 1986 that shows two German journalists reporting for the public broadcaster WDR.18 In the clip, both journalists attempt to explain where the Russian natural gas that had been arriving in Germany since 1970 originates. In the 1960s, the largest natural gas deposits in the world were discovered in north‑western Siberia, around the Yamal peninsula.19 This area has traditionally been inhabited by several indigenous peoples, such as the Nenets and the Khanty. During their report, the two German journalists made several racist remarks about the peninsula’s and western Siberia’s indigenous groups. When Goll contacted the WDR with a request to use this material for LEAK, he was told that the video could not be released because its copyright is co‑owned by a now defunct Soviet‑era TV station. In the end, left with only the audio from this clip, Hito Steyerl decided to use AI to recreate the unavailable footage. While exhibition visitors could hear the archival sound recording of the conversation between the two journalists, the AI‑generated footage was a direct pathway into the uncanny valley. Typical of current AI video renditions, the panning camera remained on a relatively stable trajectory while everything else – the human figures and the space surrounding them – kept morphing and shifting in shape, appearance and texture. One individual appeared without legs at first, only to grow a pair as soon as the camera started panning. Faces melted and re‑emerged but remained without any clearly distinguishable features. Hands grew and lost fingers in an instant. These scenes were closer to a feverish nightmare than a historical reenactment. In an interview with the German “TAZ” magazine, Steyerl says about this specific scene:

“It all looks very ugly. AI‑generated video makes everything ugly and dumb. But the aesthetic fits perfectly here.”20

Unlike Alex Israel in his installation REMEMBR, Steyerl does not frame AI as a central part of LEAK. She uses her work to expose and question AI. While Israel celebrates the potential and semi‑autonomy of AI as a ‘user experience’, Steyerl emphasizes its shortcomings and failings. In her 2009 essay In Defense of the Poor Image, Steyerl argues that there is value in images circulating online that are of lesser, lower‑resolution quality, because they lack an excess of information. Her view of the ‘poor image’ gains new meaning within the context of AI, since most images used to train neural networks are scaled down to 224 x 224 pixels. In her installation LEAK, Steyerl demonstrates that not all images are equal. The ‘poor’ quality of some of the archival and AI‑generated footage stakes out the boundaries of what constitutes evidence and authenticity. Her work invites viewers to consider how archival and AI‑generated material can be used to amplify historical facts and, in the case of LEAK, historical injustice. High‑definition source materials, on the other hand, seem to preclude falsehood: if we can see and identify something clearly, it must be true even when it is not – as in the case of deepfakes.

By contrasting archival audio with an AI‑generated sequence, Steyerl points to the relation between fact and fiction, between truth and lies. How will AI shift this balance when its output becomes indistinguishable from archival or actual footage? If we look at the advance of AI‑generated images just within the past year, we can notice how much harder it has become to tell a fake image from a real one, a simulated event from one that actually happened. Recent examples include AI‑generated videos and images of the 2025 Los Angeles wildfires, made mainly to generate engagement through views, likes and re‑posts. This type of AI ‘clickbait’ leads to real‑world profits for the accounts that create and disseminate such content. One AI video circulating during the fires in early January 2025, which contributed to a false news story, showed a raging fire surrounding the iconic Hollywood sign – as if the displacement of 100,000 LA residents, around 10,000 destroyed structures and at least 24 fatalities were not tragic enough. In a perverse way, the representation of an imagined scenario – the burning of the Hollywood sign – unmasks the desire to overshadow the devastation of human loss with an inanimate object, a prominent sign in the Hollywood hills. One has to wonder if the creator of this video was aware that using AI to bemoan the imagined loss of an object, even one that carries cultural and historical value, diminishes the loss that humans, domestic animals, wildlife and the environment experienced during these catastrophic days. This machine‑object alliance, which envisions a material loss that never occurred, carries something disturbing within it. Maybe it is the thought of how this AI video led the people watching it to feel overwhelmed, exhale or gasp in horror, maybe even hold back tears over something that, on a factual level, does not exist.
Will AI in the near future make us believe that all is lost, even though it is not? Will it compel us to doubt or give up agency?

At the height of Israel’s war against Gaza and Lebanon in 2024, some AI‑generated footage and images built upon existing conditions (a mostly civilian population under siege in Gaza and entire apartment buildings being razed by Israeli airstrikes in Beirut) to enhance and spectacularize the overwhelming military power used by the IDF. Online users who already held a negative opinion of Israel found their views on its disproportionate use of force confirmed when they came across this type of AI‑generated content. At the same time, pro‑Israeli users were able to call out what they perceived as anti‑Israeli or antisemitic bias once the social media content was outed as AI‑generated. As this example shows, false and artificial content can serve either side. State‑sanctioned propaganda has worked in similar ways throughout history, except that today each citizen has a powerful tool at hand that allows them to be author, publisher and distributor all at once, with access to an enormous readership and viewership. Self‑declared ‘freedom‑of‑speech absolutists’ like the tech oligarchs Elon Musk and Mark Zuckerberg have created an environment that not only allows but encourages and rewards the spread of disinformation (e.g. clickbait) on their social media platforms. Autocratic states, perhaps for the first time in history on such a scale, have the chance to pose as individual users (through bots and fake, AI‑generated accounts) who spread disinformation with the goal of fortifying ideological bubbles, undermining facts, further estranging groups from each other, and deepening anxieties, worries and distrust. It remains to be seen how much more prevalent, advanced and indistinguishable AI images and footage will become compared to non‑AI material. The most worrisome outcome of AI‑generated content is not that we will be faced with more falsified imagery, but that truthful images will be labeled as false and the events they depict will be called into question.

The Role of the Artist

One can argue that the broader discussion outlined above about the potential risks of AI does not play much of a role in the visual arts. Most artists are not malignant individuals trying to abuse AI in their own or a state actor’s favor. Some artists might be more focused on short‑term financial success, and so they arrange themselves with whatever the new technology of the moment is. They do not worry about downsides, nor do they make work that reflects on AI’s dangers and shortcomings. Artists like Hito Steyerl, Trevor Paglen and Jess MacCormack (the latter in her more playful vignettes, Dissociative Dreams)21 manage to uphold a critical distance to AI‑generated content and its social and political implications. In his 2016 essay Invisible Images (Your Pictures Are Looking at You) for “The New Inquiry”, Trevor Paglen argues that AI is an “exercise of power”.22 Paglen mentions the example of a specific surveillance technology employed by local Texas governments that allows them – using the services of a private company called Vigilant Solutions – to scan thousands of license plates and identify owners with outstanding court fees. The results are fed into a system used by police, who can then easily match any license plate they detect against those outstanding fees. When the police pull over an identified driver, the driver is offered a choice between arrest and paying the fee (and if the fee is paid on the spot, it includes a 25 percent surcharge on behalf of Vigilant Solutions).

AI is a technology that wields state, governmental and corporate power. In order to critically examine AI‑generated content within the visual arts, AI’s underlying processes – the harvesting of user data on social media apps, the spread of disinformation, and the enormous energy resources required to run AI – have to be scrutinized and exposed. There is no scenario in which an artist can use AI and exclaim with confidence that the resulting work should only be seen for what it shows. AI is here to stay, so what else can artists do to reduce their technological dependency? Under the given circumstances, the most radical act imaginable to counter the advance of AI in the field of visual art is to take up a pencil or a brush, to use our bare hands and make a drawing, a painting or a sculpture – not because we intend to create art, but because we must practice individual agency in its purest and most tangible form.

1Y‑J. Mun‑Delsalle, The BMW Group Celebrates 50 Years Of Supporting Culture And Its Partnership With Art Basel, „Forbes” 19.05.2021, <https://www.forbes.com/sites/yjeanmundelsalle/2021/05/19/the‑bmw‑group‑celebrates‑50‑years‑of‑supporting‑culture‑and‑its‑partnership‑with‑art‑basel/> [accessed: 18.01.2025].

2L. Jebb, AI on AI: Alex Israel uses artificial intelligence to re‑engage with memory, „The Art Newspaper” 11.06.2024, <https://www.theartnewspaper.com/2024/06/11/ai‑on‑ai‑alex‑israel‑uses‑artificial‑intelligence‑to‑re‑engage‑with‑memory> [accessed: 18.01.2025].

3Instagram, David Zwirner Gallery, <https://www.instagram.com/davidzwirner/p/DA54265MldX/?img_index=1> [accessed: 18.01.2025].

4J. McCarthy et al., A proposal for the Dartmouth summer research project on artificial intelligence, „Stanford University” 03.04.1996, <http://www‑formal.stanford.edu/jmc/history/dartmouth/dartmouth.html> [accessed: 18.01.2025].

5S. Russell and P. Norvig, Artificial Intelligence – A Modern Approach (New Jersey: Pearson Education, Inc., 2003), 947.

6The non‑profits LAION and Common Crawl are an example of web organizations that collect, index and store much of the public world wide web and arrange their troves into datasets (for example, text‑image pairs) to train large AI models.

7A. Hobbs, [Stats] How Many Photos Have Ever Been Taken?, „Fstoppers” 10.03.2012, <https://fstoppers.com/other/stats‑how‑many‑photos‑have‑ever‑been‑taken‑5173> [accessed: 18.01.2025].

8A. Valyaeva, People Are Creating an Average of 34 Million Images Per Day. Statistics for 2024, „Everypixel Journal” 15.08.2023, <https://journal.everypixel.com/ai‑image‑statistics> [accessed: 18.01.2025].

9J. Kubinec, NFT volume fell $14.5B in 2023: CoinGecko, „Blockworks” 18.01.2024, <https://blockworks.co/news/nft‑trading‑volumes‑fall‑from‑2022> [accessed: 18.01.2025].

10NFTevening, AI NFT: How AI is Impacting the NFT Scene, „NFTevening” 25.11.2024, <https://nftevening.com/ai‑nft‑how‑ai‑is‑impacting‑the‑nft‑scene/> [accessed: 18.01.2025].

11Instagram, David Zwirner Gallery, <https://www.instagram.com/davidzwirner/p/DA54265MldX/?img_index=2> [accessed: 18.01.2025].

12Instagram, David Zwirner Gallery, <https://www.instagram.com/davidzwirner/p/DA54265MldX/?img_index=2> [accessed: 18.01.2025].

13According to a June 2024 YouTube poll, 65% of people born between 1997 and 2012 identify as “content creators”, which could become another term used analogously to ‘artist’.

14J. Bridle, The Stupidity of AI, „The Guardian” 16.03.2023, <https://www.theguardian.com/technology/2023/mar/16/the‑stupidity‑of‑ai‑artificial‑intelligence‑dall‑e-chatgpt> [accessed: 18.01.2025].

15K. Brown, Hito Steyerl on Why NFTs and A.I. Image Generators Are Really Just ‘Onboarding Tools’ for Tech Conglomerates, „Artnet” 10.03.2023, <https://news.artnet.com/art‑world/these‑renderings‑do‑not‑relate‑to‑reality‑hito‑steyerl‑on‑the‑ideologies‑embedded‑in‑a-i‑image‑generators‑2264692> [accessed: 18.01.2025].

16Russian Embassy in Germany, Über die Pressekonferenz zur Eröffnung der Ausstellung „Russen und Deutsche: 1000 Jahre Kunst, Geschichte und Kultur“, <https://germany.mid.ru/de/aktuelles/pressemitteilungen/de_de_2012_06_28_uber‑die‑pressekonferenz‑zur‑eroffnung‑der‑ausstellung‑russen‑und‑deutsche‑1000‑jahre‑kunst‑geschichte‑und‑kultur/> [accessed: 18.01.2025].

17S. Sarhaddi Nelson, Why Putin’s Pal, Germany’s Ex‑Chancellor Schroeder, Isn’t On A Sanctions List, „NPR” 18.04.2018, <https://www.npr.org/sections/parallels/2018/04/18/601825131/why‑putins‑pal‑germanys‑ex‑chancellor‑hasnt‑landed‑on‑a-sanctions‑list> [accessed: 18.01.2025].

18S. Jung, Eher ein fraktaler Kolonialismus, „TAZ“ 13.06.2024, <https://taz.de/Filmemacher‑Steyerl‑und‑Radynski/!6016739/> [accessed: 18.01.2025].

19A. Metz, 50 years of pipes for gas: German‑Russian century deal and German‑American economic crime novel, „Ost‑Ausschuss” 17.06.2020, <https://www.ost‑ausschuss.de/sites/default/files/pm_pdf/German‑Russian‑Energy‑Relations‑since‑1970.pdf> [accessed: 18.01.2025].

20S. Jung, Eher ein fraktaler Kolonialismus, „TAZ“ 13.06.2024, <https://taz.de/Filmemacher‑Steyerl‑und‑Radynski/!6016739/> [accessed: 18.01.2025].

21Instagram, Dissociative Dreams, <https://www.instagram.com/dissociative_dreams/> [accessed: 18.01.2025].

22T. Paglen, Invisible Images (Your Pictures Are Looking at You), „The New Inquiry” 08.12.2016, <https://thenewinquiry.com/invisible‑images‑your‑pictures‑are‑looking‑at‑you/> [accessed: 18.01.2025].

  Bibliography:

  • Bridle J., The Stupidity of AI, „The Guardian” 16.03.2023, <https://www.theguardian.com/technology/2023/mar/16/the‑stupidity‑of‑ai‑artificial‑intelligence‑dall‑e-chatgpt> [accessed: 18.01.2025].
  • Brown K., Hito Steyerl on Why NFTs and A.I. Image Generators Are Really Just ‘Onboarding Tools’ for Tech Conglomerates, „Artnet” 10.03.2023, <https://news.artnet.com/art‑world/these‑renderings‑do‑not‑relate‑to‑reality‑hito‑steyerl‑on‑the‑ideologies‑embedded‑in‑a-i‑image‑generators‑2264692> [accessed: 18.01.2025].
  • Hobbs A., [Stats] How Many Photos Have Ever Been Taken?, „Fstoppers” 10.03.2012, <https://fstoppers.com/other/stats‑how‑many‑photos‑have‑ever‑been‑taken‑5173> [accessed: 18.01.2025].
  • Instagram, David Zwirner Gallery, <https://www.instagram.com/davidzwirner/p/DA54265MldX/?img_index=1> [accessed: 18.01.2025].
  • Instagram, David Zwirner Gallery, <https://www.instagram.com/davidzwirner/p/DA54265MldX/?img_index=2> [accessed: 18.01.2025].
  • Instagram, Dissociative Dreams, <https://www.instagram.com/dissociative_dreams/> [accessed: 18.01.2025].
  • Jebb L., AI on AI: Alex Israel uses artificial intelligence to re‑engage with memory, „The Art Newspaper” 11.06.2024, <https://www.theartnewspaper.com/2024/06/11/ai‑on‑ai‑alex‑israel‑uses‑artificial‑intelligence‑to‑re‑engage‑with‑memory> [accessed: 18.01.2025].
  • Jung S., Eher ein fraktaler Kolonialismus, „TAZ“ 13.06.2024, <https://taz.de/Filmemacher‑Steyerl‑und‑Radynski/!6016739/> [accessed: 18.01.2025].
  • Kubinec J., NFT volume fell $14.5B in 2023: CoinGecko, „Blockworks” 18.01.2024, <https://blockworks.co/news/nft‑trading‑volumes‑fall‑from‑2022> [accessed: 18.01.2025].
  • McCarthy J. et al., A proposal for the Dartmouth summer research project on artificial intelligence, „Stanford University” 03.04.1996, <http://www‑formal.stanford.edu/jmc/history/dartmouth/dartmouth.html> [accessed: 18.01.2025].
  • Metz A., 50 years of pipes for gas: German‑Russian century deal and German‑American economic crime novel, „Ost‑Ausschuss” 17.06.2020, <https://www.ost‑ausschuss.de/sites/default/files/pm_pdf/German‑Russian‑Energy‑Relations‑since‑1970.pdf> [accessed: 18.01.2025].
  • Mun‑Delsalle Y‑J., The BMW Group Celebrates 50 Years Of Supporting Culture And Its Partnership With Art Basel, „Forbes” 19.05.2021, <https://www.forbes.com/sites/yjeanmundelsalle/2021/05/19/the‑bmw‑group‑celebrates‑50‑years‑of‑supporting‑culture‑and‑its‑partnership‑with‑art‑basel/> [accessed: 18.01.2025].
  • Nelson S. S., Why Putin’s Pal, Germany’s Ex‑Chancellor Schroeder, Isn’t On A Sanctions List, „NPR” 18.04.2018, <https://www.npr.org/sections/parallels/2018/04/18/601825131/why‑putins‑pal‑germanys‑ex‑chancellor‑hasnt‑landed‑on‑a-sanctions‑list> [accessed: 18.01.2025].
  • NFTevening, AI NFT: How AI is Impacting the NFT Scene, „NFTevening” 25.11.2024, <https://nftevening.com/ai‑nft‑how‑ai‑is‑impacting‑the‑nft‑scene/> [accessed: 18.01.2025].
  • Paglen T., Invisible Images (Your Pictures Are Looking at You), „The New Inquiry” 08.12.2016, <https://thenewinquiry.com/invisible‑images‑your‑pictures‑are‑looking‑at‑you/> [accessed: 18.01.2025].
  • Russell S. and Norvig P., Artificial Intelligence – A Modern Approach, Pearson Education, Inc., New Jersey 2003, p. 947.
  • Russian Embassy in Germany, Über die Pressekonferenz zur Eröffnung der Ausstellung „Russen und Deutsche: 1000 Jahre Kunst, Geschichte und Kultur“, <https://germany.mid.ru/de/aktuelles/pressemitteilungen/de_de_2012_06_28_uber‑die‑pressekonferenz‑zur‑eroffnung‑der‑ausstellung‑russen‑und‑deutsche‑1000‑jahre‑kunst‑geschichte‑und‑kultur/> [accessed: 18.01.2025].
  • Valyaeva A., People Are Creating an Average of 34 Million Images Per Day. Statistics for 2024, „Everypixel Journal” 15.08.2023, <https://journal.everypixel.com/ai‑image‑statistics> [accessed: 18.01.2025].

Viktor Witkowski

Viktor Witkowski is a painter and filmmaker. He graduated from the HBK Braunschweig, Germany, with a combined master’s degree in Studio Art, Art History and Art Education in 2006. In 2010, he earned an MFA in Visual Arts from Rutgers University. He teaches as a lecturer in Dartmouth College’s Studio Art Department in New Hampshire, USA. Witkowski’s writing and criticism have been published on Hyperallergic and the Painters’ Table, and in The Brooklyn Rail, the New Art Examiner, BLOK Magazine and MOST.