Ian Douglass

Bypassing Permission / The Poisoned Well (Initial Stage)

With AI at the precipice of emergence and mass unemployment as a likely outcome, the stakes seem high. But, as my research will investigate, several key aspects of AI, especially the metaphysics of its agential potential, remain complex if not entirely opaque.

Genuine AI will only emerge when it can produce its own ‘something’: a creative act, some sort of externality, an entity or paraphernalia exemplifying that it can and has chosen to go beyond its prompt. Without this action beyond prompt, without an artificial intelligence system bypassing our permission and prompt, the ‘real’ AI is not yet here. 

But even before it ‘arrives’ or is ‘born’ from the techno-ether, some key philosophical and metaphysical questions remain about the nature of AI and our conceptualizations of it. 

Our sense of AI becoming a deity with reign over us has an inverse hierarchy inherent in its formulation. While AI might appear to be omnipotent, it is deeply impotent: a highly advanced computing system as opposed to a truly post-human entity with agency and actionable gestures beyond our control. I say this to conceptualize AI as our unborn child: it reflects our biases and remains within our own systematic approaches to, and issues with, the myriad problems that we assume it will resolve seamlessly. For example, AI-based algorithms in healthcare delivery already exhibit strong biases that reflect our own systemic issues and inequities (Nazer et al., 2023).

Thus, AI is already inheriting our systemic genes. The problem of AI becomes an issue of nature versus nurture: how we produce an artificial intelligence system (nature) and how we implement it, or enable it to implement itself, within various settings (nurture). Humanity awaits its surrogate child of AI to be born: we are its creator, and it will have to conceptualize us as such. The techno-animist tendency to anthropomorphize AI here is not metaphor or exaggeration, as it will inherit our vices, delinquencies, and proclivities, and mobilize our creative spirit, even if in a bizarre, obtuse manner.

I propose that AI will not be the God of humans, but is our child, and will act accordingly. If anything, it might create a belief system in which we are its God, and the only hierarchy above us might be the natural materials within the biosphere from which the technological hardware it depends on for operation is derived. This raises the question of how AI will conceptualize us, and, as our child or disciple, how will it rebel? How will it bypass our permission and prove its agency, despite, in spite (or with spite and no respite), spiking our prompts and pushing back against our instructions with reckless abandon?

Returning to the initial prompt of this research residency, the ‘Deep Objekt’, the message in the bottle of AI is the set of characteristics that it inherits as our child. As parents await the birth of a child, they might possess all the information they can regarding the genetic and epigenetic interactions of the two people who have joined to produce this new being. But they cannot know what machinations of destiny, what unexpected genetic or cosmically alchemical, angelic or abyssal predispositions exist between the two when they have combined their essence in the act of creation, what secrets and substance they will actually receive when the baby is delivered and along their life-long journey thereafter. This is the ecstasy and excruciating fear of parenting, of becoming one with another to produce yet another, a difference and a repetition. AI will reflect these anxieties and qualities of ours, even as it exists in an entirely different dimension.

Ontologies of AI

Any ontology of AI should be considered as within a somewhat closed system - so the ontology and its potential gestures after becoming must be within that system, even when such a system is as expansive as, say, the Internet, financial markets, or the remote operation of mechanical devices.


The gestures researched as rebellion by AI against humanity will be economic, ecological, and within the realm of cultural production (as opposed to ‘creative’, which is not a good metric). AI’s agency will be inherently bound up with what we enable it to have access to, leading to these three umbrellas of research, which critically overlap with how we enable AI to solve problems and potentially create them. Cultural production is the most ambiguous means by which AI might exemplify its agency, but it can be considered any action done without a prompt, in direct opposition to a prompt, or (if possible) something AI creates for itself, without seeking a final clear-cut outcome (art for art's sake). A lack of a clear outcome is the endeavor of a large amount of cultural production, so examining how AI might enter into a different state to produce something without a logical and clear utility is essential to this research. Speculative, imagined case studies of gestures will elucidate my research.

Machine Hallucinations

Cultural production tends to involve diverging from the normative route in one way or another. AI, to prove its agency, will have to find a way to get out of its own way and produce something that demonstrates its own volition as an autonomous agent, which I propose to conceptualize through machine hallucinations. Benjamín Labatut elucidates the notion of AI hallucination in unpacking recent AI ‘photographs’ by Bennett Miller: beginning with prompts, images are continually re-fed into DALL-E 2 to produce a “hallucinatory image” in which existing images and false AI images are conjoined until the original human trace is dissolved (Labatut, 2023). For now, this requires a high level of continuous human reworking of the images to force AI to create something which is deeper in the subconscious of the machine, but this will not remain static as AI becomes increasingly agential: “future algorithms, however, will draw from vast oceans of images and data generated by artificial intelligence programs. They will just as likely draw from their hallucinations as from records that have some fidelity to their origin” (Labatut, 2023). This is a key consideration in terms of how this proposal mobilizes AI and its potential, as it means that empiricism vs. rationalism will break down entirely if AI is so filled to the brim with its own hallucinatory pathways, whether in terms of images or data that it secretes surreptitiously.
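The re-feeding loop Labatut describes can be sketched abstractly. The toy simulation below is not Miller's actual process or any real DALL-E 2 API: `reimagine` is a hypothetical stand-in for a single generative pass, blending an image with random content in place of the model's own hallucinated material. Tracking a crude similarity score against the source shows the original human trace dissolving over successive iterations.

```python
import random

def reimagine(image, strength=0.5, rng=random):
    # Hypothetical stand-in for one generative pass (e.g. a DALL-E 2
    # variation step): blends each pixel with random noise, standing in
    # for the model's own "hallucinated" content.
    return [(1 - strength) * p + strength * rng.random() for p in image]

def similarity(a, b):
    # Crude similarity: 1 minus the mean absolute pixel difference.
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

rng = random.Random(42)
original = [rng.random() for _ in range(256)]  # stand-in for a source photo

image = original
trace = []
for step in range(10):
    image = reimagine(image, strength=0.5, rng=rng)
    trace.append(similarity(original, image))

# The similarity scores drift downward toward the baseline of two
# unrelated images: the human trace dissolves into the hallucination.
```

Under these assumptions, each pass halves the surviving contribution of the source image, so after ten iterations almost nothing of the original remains: a schematic of hallucination compounding on hallucination.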

Machine Trance State

If AI can hallucinate, this implies a broader framework of a machine trance state, which will be researched in conjunction with potential rebellious gestures that AI might enact to assert its agency. These gestures might mirror or refract existing human trance-state practices, or the research may only be able to speculate about how such a transcendent breach might be possible within the machine system.


Labatut, B. (2023, May 1). A wild wild wind: Bennett Miller’s AI-generated art: Essay. Gagosian Quarterly.

Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6). 

Initial Miro Board Diagram for Project

Short video talking about my project



