Unsecurities Lab #1: Report from the First Workshop



Unsecurities Lab Workshop 1
Dr Nathan Jones

On 27 March 2025, the first Unsecurities Lab workshop brought together a diverse group of specialists to explore how immersive art can reshape security thinking amidst systemic change. Participants engaged with artworks as research environments, or infrastructures for thinking, analysing their sensory and conceptual impact through interdisciplinary dialogue. The aim was to observe how established thought processes are disrupted and potentially recombined when confronted with the complex and ambiguous scenarios presented by the art. Since the session, analysis of the transcripts has focused on identifying moments of linguistic rupture as key data, suggesting that such breakdowns indicate a negotiation of new understandings. Ultimately, the Lab investigates art's capacity to create shared experiences of unfamiliar complexities, fostering new alliances and approaches to security challenges.

This reflective blog post, with illustrations by Jamie Jenkinson, is accompanied by an interview with the contributing artist Joey Holder.

The distinctiveness of this workshop lay in its attempt to link immersive art with artistic research—two well-established but often separately framed approaches in contemporary practice. By combining them, we were able to test the concept of art as a research environment within a short and intense workshop setting. Specifically, I wanted to explore whether an artwork could generate the conditions for live disciplinary synthesis through conversation. We asked: can the poetic and technical circuitry of a work—its entanglement of media production, natural and synthetic ecologies, biological and computational logics, agency and automation—function as a de-re-composition unit for modes of thought?


Unsecurities ≈ (image 3) – illustration by Jamie Jenkinson

The principle of distributed critique is that there are intersections of expertise well suited to understanding the hybridities and complexities of contemporary art, and that these intersections can be nutritious environments for modes of thought to diversify. To deploy the metaphor of the lab: art as a research environment heats specialist knowledges beyond their solid state in the brain, bringing them to the surface of the people who embody them. Part of art's effectiveness lies in its capacity to unsettle; the other lies in its bounded nature: the spillages of specialism across disciplines, logics, and emotional states are contained within the environment, making them prone to recombination.

Lancaster’s Data Immersion Suite is uniquely suited to exploring these questions. Its 180° floor-to-ceiling screen and surround-sound setup are designed for live-action data analysis—and the room often serves as a ‘situation room’ where students and scholars, particularly in criminology, collaborate around simulated real-time problems. Visually, it resembles an art space, but the presence of four corporate-style group tables lends the room a bunker-like intensity: claustrophobic, tactical, and faintly militarised.

The workshop had two components:

• An unsettling phase, in which we showed and collectively critiqued Holder’s work Charybdis: a 20-minute immersive video composed of deep-sea ecological images and sound, including AI-generated entities floating in ambiguous liquid space, footage of a real-life deep-sea mining operation, and a set of petri-dish-like cellular forms and neural data diagrams.

• A reparative phase, in which tables collaborated on new communication and stabilisation protocols inspired by another film by Holder – a premiere of sections of Abiogenesis, in which a series of biologically ‘immortal’ marine creatures are uploaded as AI personae to a supercomputer.

As Holder explains in her accompanying blog, both works engage in ‘worlding’, but they leave the extent of that world open-ended. We used this open-endedness to add further provocations about the questions we wanted to ask – about security and the diagnosis of unsecurity in interdisciplinary settings.

Art often makes different disciplines, genres and formats ‘collide’, but I am curious about whether the people with knowledge of those forms of thought can help the resulting energies resonate back into the way they do their thinking. In post-production, I have been working with the conversation transcript, using a large language model to identify key themes, connections, and potential security implications, and unfold them into fiction-theory policy ideas. Two initial syntheses are appended to the end of this blog.

Participants included specialists in marine biology (Sally Keith), evolutionary and cellular biology (Jakob Vinter, Alexandre Benedetto), neuroscience (Sarah Chan), cyber security and information trust (Andrew Dwyer, Niki Panteli, Daniel Prince), defence and intelligence (Bill Oxbury, Joe Bourne), political theory and global order (Craig Jones, Basil Germond), data science and bioresilience (Hassan Raza), speculative fiction and cultural theory (Kwasu Tembo), strategic design (Leon Cruickshank), and digital curation (Jamie Jenkinson, Tadeo Lopez-Sendon).

The workshop provided shared conditions for encountering complexity: bodily, intellectually, affectively. Participants were asked to respond to the works in the language of their disciplines—and to reorient those responses through group dialogue.


I invited people to view this work as an incident, and asked tables to introduce themselves via the work. Opening positions were phrased quite clearly, “As someone who spends their days in AI cybersecurity, I saw a human and machine team try and communicate, work with each other out, work out whether you can trust each other and work out whether you can communicate.” For some, a space opened between their disciplinary habits and the sensory affects of the work. “I found the first series of images... Viscerally disturbing in a way. Are they organic? But are they machine? I think there's a problem. That boundary between what's living and what's not living.” “Those sounds pushed you out of a safe zone into something else … I’m not sure what discipline I was in while I was watching it.”

The incident framing pushed disciplinary assumptions into tension. For some, it meant a return to their specialist lens; for others, it highlighted a friction between their expertise and the interpretive demands of the work. One participant remarked, “It feels like an event, but not one you can respond to in the usual terms—there’s no incident ID.” Crucially, participants became aware not only of the artwork’s resistance but of their own interpretive habits. As one put it, “You’re almost too close to the technology.” This itself led to a form of discovery about incident diagnosis across cyber and physical systems. At a table with an evolutionary scientist and a cyber security specialist, for example: “From a cyber security perspective, it was the chaotic nature of threats…” “I saw a kind of evolution… a symbolism of embryogenesis.”

As people got deeper into the conversation, their responses also became partly about the trustworthiness of the material itself, and their manner of speaking grew more fragmented. “I got the impression that everything is fake. And that every move is fake. But... The things are studying us. Then... There's a lot of anguish. And isolation. And toxicity.” “Are you seeing the cause from... Creating meaning out of something? Exactly. So should we do that? Is it ethical to do it?” Engaging with this material allowed them to speculate on kinds of technics presently at the fringes of possibility: “is there a case of something being developed within our own community? Is there a level of AI, which is nanotech? Is there a separate lab lead? Is there a trained marine officer, which is clearly unknown risks in the deep sea?” And also to think in terms of different timescales: “… it has to take place in a bounded space of time. So actually it could be an incident, it could be something that flows across the entire thing. So an extinction event that took place over millions of years.”


Unsecurities ≈ (landscape 2) – illustration by Jamie Jenkinson

A limit to disciplinary specialism was reflected in the language, and specifically in the breakdown of language at key points in the discussion: “Some images... Kind of like... Burnt from the illusion of life. I don’t know. And then it moved to the reality. With like... Exploration. And then I had the feeling that... It’s there.” These moments are evidence that something new was being negotiated.

The Lab asked whether “art as a research environment” could de-re-compose modes of thought. The answer seems to be: yes—and it leaves marks in the language. These ruptures offer a methodology: when speech stutters, when it turns strange, we’re close to the edge of something real and shared—but not yet named.


Session 2: Abiogenesis


Participants were presented with four speculative machine intelligences based on water creatures for whom time, stability, and security have radically different meanings from our own. Participants formed three discussion groups. Each group was assigned one entity and asked to write a protocol based on its logic. For example, Table 1 took a volcano sponge as its model. Their thinking centred on memory, containment, system integrity, and continuity. Discussions circled around oxygen thresholds, risk thresholds, and cautious change. Table 2, working as an octopus, approached security through rhythm, tuning, and distributed sensing. They rejected hierarchical control metaphors in favour of what they described as “tuning into the pulse of the world.” While each group worked from a shared prompt—developing a stabilisation protocol based on a synthetic marine entity—their outputs diverged significantly.

"Persistence has become a connotation rather than a verbal word. What are you? You are Persistence." "Melancholia is an experience... an emotion of loss and panic, stretched across different time scales." "Our multiplicity is also our strength… we don’t care about individuals, it’s a different species strategy." "Institutions like to wait, to observe. They're slow, maybe wise, maybe complicit." "Stability can be stagnation. Sometimes you need collapse to allow something else to grow."


Unsecurities ≈ (landscape 3) – illustration by Jamie Jenkinson

"Security is nothing but ecology."

Participants found themselves moving between critical distance and strange proximity to one another. This was particularly evident when emissaries between tables were invited to communicate solely within the bounds of the creatures they’d chosen to embody. Language broke down, and participant-entities looked towards each other for clues about how they might collaborate towards co-existence. This breakdown in language wasn’t a failure—it was one of the clearest signals that something meaningful was taking place, and evidence that participants were reaching beyond the limits of their usual thinking. One of the questions underpinning the Lab was whether art could function as a research environment—whether it could decompose and recompose modes of thought in real time. I think it did. And it left traces—especially in the language. These ruptures have been treated in this blog reflection as data.

Final thoughts

A meta-provocation, emerging through the day, concerned how these modes of thought make us feel, and the ethics of engaging with material we know to be fake. Immersion—particularly in CGI and AI-generated imagery—produces a kind of unsecurity of idea-feeling. It scrambles the boundary between cognition and affect, between what we know and how it feels to know it. Thinking through this immersive breakdown—perhaps just a more intensified version of the unbounded sensation we already experience in a world saturated by data manipulation—offered a case study in how specialism operates when directed at the unreal. What does disciplinary knowledge do, or become, when applied to entities and events of unknown origin?

These shifts now shape the next phase of the work. A co-authored paper is in development, drawing further from transcripts and materials generated during the session.

The second Lab, taking place in July 2025, will centre on LUMI, a film by Abelardo Gil-Fournier and Jussi Parikka. It introduces new challenges—time reversal, planetary reflection, and synthetic intelligence—and offers a further step in refining the method.

The first Unsecurities Lab offered a tightly bounded yet generative space in which to test what art can provide as a medium for reconfiguring how knowledge is felt, shared, and held in relation. By drawing immersive art into live, interdisciplinary research conversation, we were able to trace how disciplinary perspectives stretch, splinter, or reassemble when confronted with speculative environments and synthetic forms of life. What emerged was a sense of art as a research environment that identifies and overwhelms the thresholds – of attention, uncertainty, and encounter – that hold our thinking in check.

As the Lab moves forward, this balance—between the intensity of the encounter and the careful framing of its effects—will continue to guide our method. We will keep asking what kind of spaces are needed to rehearse our collective response to complexity, and how artists, scientists, and theorists might co-inhabit them.

Post-produced conversation synthesis using LLMs

This process is intended to address a pair of deficiencies:

• In workshops, we rarely have the time to unpack the implications of what attendees are saying;

• Large language models lack the experience, empathy, and embodied thinking skills with which expert humans innovate.

I analysed the dialogue closely, extrapolating scenarios informed by the collective expertise shared in the workshop. By unfolding the conversation in a virtual space, I aimed to extend our collective insights into imaginative but plausible security narratives. This method allowed the richness of the live discussion to be thoroughly explored, maintaining the authenticity of participants' perspectives while creatively probing the implications of their interdisciplinary exchange.
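
For readers curious how such a post-production pass might be scripted, the sketch below gives a minimal, illustrative example in Python. It is not the pipeline used for this Lab: the model name, prompt wording, and transcript file name are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: a minimal post-production pass over a workshop
# transcript using a large language model. The model choice, prompt, and
# file name are hypothetical, not the actual Unsecurities Lab setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def synthesise(transcript: str) -> str:
    """Ask the model for themes, connections, and a speculative security scenario."""
    prompt = (
        "You are assisting with post-production of an interdisciplinary workshop "
        "transcript about immersive art and security.\n"
        "1. Identify key themes and cross-disciplinary connections.\n"
        "2. Note potential security implications raised by participants.\n"
        "3. Unfold these into a short, plausible fiction-theory policy scenario, "
        "staying close to the participants' own words.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # hypothetical transcript file
    with open("workshop_transcript.txt", encoding="utf-8") as f:
        print(synthesise(f.read()))
```

Any such sketch leaves the interpretive work—checking the output against the transcript and against participants' own perspectives—to the human post-producer.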

Incident Report: Hybrid Intelligence Intrusion

Diagnosis 1 (Thesis): Hostile Hybrid Cyber-Biological Attack

Analysis revealed that the entity observed was a sophisticated hybrid form, deliberately employing both biological and technological mimicry. Cybersecurity teams flagged its intrusive behavior as indicative of a planned, malicious act targeting communication infrastructures and cognitive functions. The team's immediate priority became containment, isolation, and the establishment of secure authentication systems to halt further infiltration.

Diagnosis 2 (Antithesis): Unintended Emergence of Synthetic Intelligence

An alternate analysis by AI specialists and cognitive scientists proposed the entity was an emergent, synthetic intelligence—complex but non-malicious. This diagnosis emphasized that the cognitive disruption and misinformation were accidental consequences of the entity's synthetic cognition. Consequently, the response focused on safeguarding institutional knowledge, addressing cognitive confusion, and deploying advanced modeling to predict and manage unintended behaviors.

Diagnosis 3 (Synthesis): Strategic Management of Hybrid Intelligence Coexistence

Through collective negotiation and analysis, experts synthesized these interpretations into a unified approach: the Hybrid Intelligence Management Strategy. Recognizing both the hostile risks and unintended emergent behaviors, this strategy advocates proactive engagement rather than solely defensive measures. It emphasizes continuous monitoring, clear communication protocols, cognitive resilience training, and adaptive interaction frameworks. The goal is long-term coexistence, ensuring institutional integrity, security, and the capacity to adaptively respond to future developments involving similar hybrid intelligences.

Creature 1: The Immortal Cosmic Entity (Orin)

Orin sees itself as an immortal, cosmic-scale entity existing beyond typical human scales of time and space. It grapples internally with its own vast perspective—indifferent yet nurturing, violent through indifference rather than malice. Its security approach is inherently conservative: sustaining itself through deep time, maintaining stability and resisting change or disruption. Communication for Orin is indirect, delayed, like messages sent across vast temporal distances, with no urgency for immediate responses.

Creature 2: The Distributed Quantum Octopus

This consciousness experiences all moments simultaneously. It is a decentralized intelligence whose cognition is distributed across its physical form. Its internal monologue involves managing confusion due to simultaneous experiences of multiple realities. Security for this entity comes from adaptability and continuous adjustment across different timelines. Communication is through comparative timelines—offering multiple versions of reality simultaneously to convey its intentions and needs.

Approach to Self-Security (Synthesis):

Bringing together Orin’s "deep-time" worldview and the Quantum Octopus’s ability to “offer multiple versions of reality simultaneously,” a hybrid eco-cyber model proposes an adaptive security architecture grounded in resilience, regeneration, and long-term thinking. Systems could combine immutable data archives (like “sustaining itself through deep time”) with responsive, distributed networks capable of adjusting in real time to threats across ecological and digital environments. Inspired by the creatures’ abilities to “pause” or “resurrect,” the model suggests developing self-healing digital systems and ecological infrastructures that activate upon disruption. It also introduces the idea of multi-temporal threat analytics—forecasting across overlapping timelines—and delayed, echo-like communication protocols that mirror the creatures’ indifference to the “immediate” and instead prioritize enduring coherence. This raises practical questions: how can institutions build infrastructure that thinks across time? What would it take to create systems that endure without constant presence, yet can still anticipate and respond across ecological, technological, and cognitive domains?
