Dates: 14 February – 24 April 2025
Location: TheGallery, AUB Campus, and a virtual exhibition space
TheGallery, The Library, and the Innovation Studio, working in partnership with the Schools of Arts, Media, and Creative Industries Management; Arts and Communications; and Design and Architecture; the Graduate School; and the AUB Outreach and Alumni Office, present Design; Disruption; Divergence – an exhibition that looks at how Generative AI is affecting artists’ practice.
This exhibition and the associated events explore identity politics in digital representation and creative AI, contributing to important and current conversations around authenticity, diversity, and ownership in digital spaces.
Overview
Design; Disruption; Divergence opens the door to the evolving relationship between imagination and innovation. This exhibition doesn’t just showcase what AI can make – it interrogates what AI means for the act of making itself. In a time when algorithms are part of our everyday world, the exhibition asks how AI collaborates with human imagination. From hyperreal avatars to surreal designs that dissolve the line between the real and the virtual, you’ll discover the many ways AI can be utilised in creative practice. AI emerges here as more than just an assistant; it’s a mirror reflecting our hopes and anxieties about the future of creativity in the digital age. These works illustrate AUB’s proactive engagement with AI and its impact on the creative industries, underscoring the need for human guidance and design thinking in shaping its role in artistic practice.
Artists across the UK are finding new uses for AI, inspiring works ranging from uncanny AI kittens at Somerset House to deepfake drag cabarets at the V&A. This exhibition builds on these advancements, showcasing how AI-assisted art is reshaping cultural discussions and redefining what's conceivable in artistic practice. However, alongside this are deeper questions about the ethical, environmental, and social perils of these tools. We must foster a realistic understanding of the limitations and risks of AI. So, while looking at these works, we also invite the question: What does this technology cost us?
So, wander through the fragmented realities, let yourself be drawn into alternate histories, and confront impossible futures. But most importantly, ask questions. This exhibition is as much about discovery as it is about dialogue – between humans, machines, and the possibilities yet to come.
This exhibition can also be viewed from the comfort of your own home, school, or college via a VR headset in our virtual exhibition space.
Exhibition development team
This exhibition was developed and made possible by Lisa Mann, Jennifer Anyan, Violet M. McClean, Tim Metcalf, Christian McLening, Suzanna Hall, Penelope Norman, Edward Ward, Jordan Cutler, Beverley Bothwell, Cameron-James Wilson, Andrew England, Millie Lake, Fiona Bavinton, James Garside and William Hernandez Abreu.
Meet the curators and artists
Curator of the exhibition at TheGallery, AUB
Jennifer Anyan
Jennifer is the Director of AUB’s School of Arts, Media, and Creative Industries Management, and is well positioned...
Curators of the virtual exhibition space
Edward Ward
Edward Ward is a multimedia artist, academic, spatial designer, maker, and creative technologist. He's drawn to the process of...
Jordan Cutler
Jordan Cutler is a Lecturer on BA (Hons) Games Art and Design and a Technician Demonstrator for AUB’s Lab for Creative Technology.
Rob Amey
Rob Amey is a photographer and filmmaker whose work explores the multifaceted relationship between spirituality, technology...
Burcu Aslan
Burcu Aslan is a multidisciplinary designer and researcher specialising in AI-driven workflows for fashion, interiors, and digital...
Libby Billings
Libby Billings is a photographer and moving image artist. Before joining Arts University Bournemouth as a Technician Demonstrator...
Clare Cahill
Clare Cahill is a creative producer and educator, a Senior Lecturer on the BA (Hons) Film Production course at AUB...
Fiona Bavinton
Fiona Bavinton is a creative technologist and storyteller with 20 years’ experience as a software engineer and technical lead...
Jessica Lowther
Jessica Lowther is a third-year BA (Hons) Photography student at AUB. Her work examines memory, authenticity, and AI by...
Lisa Moro
Lisa Moro’s practice navigates the thresholds between the absurd and the profound, the factual and the fantastical...
Richard Paul
Richard Paul is an artist based in Southeast London. His work is concerned with material transformation and subjective perception...
Anya Pearson
As a fashion design and manufacturing specialist, statue builder, published academic, and feminist campaigner, Anya's work challenges...
Ione Pinto De Matos De Sousa
Ione Pinto De Matos De Sousa is a multidisciplinary artist with a background in 3D, animation, filmmaking and game development...
Shawn Hailemariam Sobers
Dr Shawn Sobers is Professor of Cultural Interdisciplinary Practice at the University of the West of England...
Cameron-James Wilson
Cameron Wilson is a British visual artist and founder of The Diigitals, the world’s first all-digital modelling agency. Known for...
Exhibition information
Anya uses fashion to challenge contemporary surveillance systems in her project Dazzle Design.
Inspired by the visual language of historical protest movements – from World War I dazzle camouflage and the suffragette movement, through to the magenta Pussyhats of the 2017 Women’s March – these designs aim to subvert facial recognition and gait-tracking technologies, creating a form of wearable disruption.
The development process began with the digitisation of traditional design sketches and knitted swatches using Vizoo technology, followed by their integration into CLO3D. This combination of physical craft and digital simulation enabled a detailed exploration of both the aesthetic and functional impact of the garments. AI tools such as Midjourney and Runway were subsequently employed to extend the design process, generating optical illusions intended to confuse surveillance systems.
Runway further facilitated the translation of these designs into a filmic format, allowing for the visualisation of garments within a collective protest context. AI-assisted motion tracking enabled dynamic renderings, demonstrating how these garments could operate in mass movements and speculative scenarios of civil resistance.
By integrating emerging technologies with craft-based methodologies, the project critically engages with the dual nature of digital advancements—both as instruments of empowerment and mechanisms of control.
Anya says, “The most significant interaction and outcomes occurred when AI-generated imagery was manipulated to fit the intended activist message better; rather than passively accepting AI outputs, human intervention and creative eye shaped the final aesthetic, ensuring that technology served as a collaborator rather than an author.”
Burcu Aslan explores the integration of AI in fashion and spatial design, demonstrating how technology can drive innovation across multiple disciplines. Her work investigates AI's potential to support sustainability, efficiency, and precision while preserving artistic vision.
Each project began with a structured conceptual framework, guiding AI tools through layered prompts that specified textures, colours, patterns, and spatial dynamics. Generative AI tools played a crucial role in visual exploration, working alongside 3D modelling and digital prototyping to refine and develop concepts that bridge the digital and physical worlds. Finalised AI-generated concepts were then further refined using CLO3D, Twinmotion, Blender and Unreal Engine. These platforms enabled seamless integration of digital and physical workflows, optimising design precision while maintaining creative integrity.
The works on display not only highlight AI's role in streamlining design processes through visualisation and prototyping, but also its potential to reduce waste and carbon footprint. This methodology, developed during Burcu's MA collection, challenges conventional methods of creation and offers a forward-thinking perspective on the future of sustainable, technology-driven design.
She says, “AI has significantly expanded the scope of what is possible in design, offering opportunities for early-stage visualisation, rapid prototyping, and reduced waste. By refining workflows and embracing technology as a collaborator, we can push creative boundaries across disciplines.”
Jessica Lowther emphasises AI’s relationship with the image, highlighting its ability to fabricate convincing yet artificial realities and the threat this poses to trust and authenticity.
Familiar Strangers utilises a combination of image and textual prompts to guide AI in generating images that mimic personal and family photographs. Some images were created using direct reference images from the artist’s personal archive, manipulated with Photoshop’s Generative AI feature to replace human subjects while maintaining the visual integrity of the original composition. Others were entirely AI-generated from simple prompts, such as ‘generate an image of a four-year-old and a father painting the shed’ or ‘generate an image of a grandmother and child on the beach.’
The AI was prompted with instructions such as ‘replace the man in this image with another similar-looking man’ and ‘replace the girl with another two-year-old blonde girl,’ effectively erasing and reconstructing human presence within a photographic framework. By instructing the AI to emulate the aesthetic of a 2000s family photo album with stylistic choices like ‘grainy film,’ ‘nostalgic,’ and ‘modernism,’ the work deliberately creates the illusion of authenticity. The reliance on reference images in particular enhances the deceptive realism of the generated results, reinforcing the project’s exploration of AI’s ability to shape visual memory and personal history.
Through this investigation, the artist challenges the evolving role of photography in an era where AI can reconstruct pasts that never existed, questioning the ethical and emotional implications of machine-made memories.
Jessica says, “AI’s ability to reconstruct my own archival past is both eerie and incredible, simulating photographic qualities without relying entirely on human experience. The work challenges Susan Sontag’s assertion that photographs are definitive documents of truth and reality, deconstructing the image to create an alternative reality and fabricated memories.”
Jordan Cutler investigates the intersection of artificial intelligence, traditional photographic processes, and natural imagery in his latest project.
The project began as an experiment in AI-generated imagery, with initial outputs consisting of digitally rendered trees. These images were then subjected to traditional darkroom techniques, printed as cyanotypes, and toned using strawberry leaf. By integrating digital tools with hands-on processes, the final form underscores the project’s core inquiry: the tensions between digital and analogue and between technology and nature.
Created for a gallery exhibition in the New Forest, the work invites viewers to reflect on their own perceptions of AI-generated art and the realism of machine-made imagery. By employing photographic techniques without a camera, the project provokes discussion on the role of technology in contemporary artistic practice. The absence of direct photographic capture invites a critical examination of AI’s legitimacy as a creative tool and redefines the materiality of photographic art.
Jordan says, “The project invites viewers to question their own feelings about AI-generated art and the authenticity of machine-made imagery. By using photographic techniques without a camera, the work aims to be somewhat confrontational, challenging ideas of authorship and artistic process.”
In response to the loss of her brother, Libby Billings draws on AI to visualise the profound rupture of reality caused by grief. By envisioning flora, fauna, and fungi that have never existed—or may yet come into existence—she navigates the liminal space between life, death, and the unknown.
otherwhere was created through a process of close collaboration between human and machine, where DALL-E 2 served as an expansive yet unpredictable creative partner. Through iterative refinement, the artist guided the AI to generate forms that felt both unfamiliar and believable, testing its ability to extrapolate beyond pre-existing biological taxonomy. This tension between the known and the imagined became an integral part of the creative process, reflecting on the ways in which grief reshapes one’s perception of reality.
Throughout this work, imagination becomes a tool for exploring the gap between the unknowable nature of death and the way life emerges from nothingness. The resulting series estranges viewers with its speculative life forms – uncanny creatures that echo the disquieting qualities of early surrealist photography.
Libby says, “Working with a system trained on datasets of existing images presented challenges. The AI outputs were inevitably shaped by the visual language of the known, so prompting it to imagine something beyond its training data required careful experimentation and persistence.”
Through her works The English Dragon Snake and Exquisite Cords, Lisa Moro demonstrates how machine intelligence mediates meaning and understanding in artistic practice, revealing tensions between authorship, translation, and control.
The English Dragon Snake translates childhood games remembered by Moro’s Vietnamese friends into an AI-driven animation and sound piece, filtered through her own English interpretation. Using AI as a tool to reimagine these memories, the work reveals how meaning shifts as language and cultural nuances are negotiated.
Using Gravity Sketch in VR, she visualised AI-generated imagery derived from translated descriptions of these games. The resulting distortions—figures floating or morphing unexpectedly—expose the slippages in cultural translation and the surreal impact of machine interpretation. AI-generated lyrics in both Vietnamese and English were transformed into song using word-to-vocal software, reinforcing the work’s exploration of linguistic mediation. Rather than serving as a neutral translator, AI becomes an unpredictable collaborator, reshaping personal and collective memory through its own logic.
Exquisite Cords, by contrast, interrogates AI’s potential to dictate an optimised existence, raising questions about autonomy and algorithmic control. The installation, drawing on themes of DNA, genetics, and perfection, was created by blindly following AI-generated directives. Through AI-generated imagery and found objects, the artist materialised the machine’s vision of an ideal world – at times harmonious, at others unsettlingly sterile. By surrendering decision-making to AI, the work confronts the paradox of algorithmic perfection, exposing the friction between human intuition and machine logic.
Lisa says, “Together, these works reveal AI’s dual role: an interpreter that reshapes cultural narratives and an authority that challenges human autonomy. Rather than neutral tools, AI systems introduce new layers of abstraction, disruption, and control, raising critical questions about authorship, agency, and the shifting boundaries between human and machine creativity.”
Error 404: Humanity Not Found is an AI-driven short film about AI, directed by Fiona Bavinton and produced by Clare Cahill. The film serves as both a conceptual investigation and a practice-based experiment in Explainable AI (XAI), transforming abstract computational processes into tangible, accessible narratives.
XAI seeks to make machine learning systems transparent and understandable, yet technical explanations alone often fail to engage wider audiences. Arts practice offers an alternative approach – rather than describing AI in purely technical terms, it embodies AI concepts through visual storytelling, sound, and narrative structures. By engaging audiences emotionally and aesthetically, Error 404 provides a creative entry point into discussions about AI's purpose, autonomy, and potential for sentience.
The filmmaking process itself became an exploration of human-AI collaboration. The project began with ChatGPT-4, which was prompted to generate five film ideas exploring the concept of the AI singularity. These initial ideas informed thematic directions that were visually developed through Midjourney, producing concept art and stylistic references. AI-driven sequences were generated using RunwayML, while the film’s score was composed with AIVA and later arranged by Fiona to shape the emotional arc. The film raises questions about authorship, obsolescence, and the shifting boundaries between human and artificial cognition, asking its audience, “Would you like to know what it feels like to be obsolete?”
Fiona and Clare say, “A critical moment in the project was the curation and refinement process, where human intervention became essential. AI provided outputs, but these required human selection, interpretation, and restructuring to form a coherent narrative. This mirrors the challenges of XAI: interpretability is not just about exposing AI’s inner workings, but about making those workings meaningful to humans.”
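As an illustration of the ideation step described above – prompting a language model for five film ideas about the AI singularity – the sketch below shows how such a request might look in code. The OpenAI Python client, model name, prompt wording, and parameters are assumptions made for illustration; this is not the filmmakers’ actual setup.

```python
# Hypothetical sketch of the ideation step: asking a GPT-4-class model for
# five short-film ideas about the AI singularity. Prompt wording, model name,
# and parameters are illustrative assumptions, not the production workflow.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # the film's team used ChatGPT-4; exact model version assumed here
    messages=[
        {"role": "system", "content": "You are a screenwriter developing short-film concepts."},
        {"role": "user", "content": (
            "Generate five distinct short-film ideas exploring the concept of the AI "
            "singularity. For each, give a title and a two-sentence logline."
        )},
    ],
    temperature=0.9,  # a higher temperature encourages varied, divergent ideas
)

print(response.choices[0].message.content)
```

In a workflow like the one described, outputs of this kind would then be curated, interpreted, and restructured by the human collaborators before any visual development begins.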
Rob Amey’s work draws on the aesthetics of British folk horror, speculative post-humanism, and the uncanny weight of ancient landscapes. Told through the lens of AI, Altered Lands functions as a dialogue between history, mythology, and contemporary digital storytelling.
This series of images envisions landscapes as sentient, imbued with residual memory, where figures and artefacts merge with their surroundings, dissolving the boundaries between the organic and the spectral. The work leans into hauntology—the idea that the past lingers within the present—transforming the digital medium into a haunted space where folklore is manipulated and reanimated. AI becomes an instrument of spectral storytelling, conjuring visions that feel both ancient and speculative, familiar yet unsettling.
Through digital processes that generate and distort these visions, the work reflects on how contemporary media reinvents myths, forging new narratives from fragments of the past. The landscapes depicted aren't passive backdrops but active participants in these emergent fictions – sites of unease and transformation. From the eerie symbiosis of flesh and earth to ritualistic presences haunting liminal spaces, these images exist in the threshold between the known and the unknowable.
Rob says, “Through this series, I continue an ongoing exploration of how folklore, horror, and digital aesthetics converge, creating a visual language that speaks to both our deep past and our uncertain future.”
Type Your Idea explores the relationship between artificial intelligence, traditional wood carving, and spiritual practice. By combining AI-generated design with human craftsmanship, the project demonstrates how digital tools can be integrated into traditional artistic processes while maintaining cultural and spiritual significance.
The process began with an AI-generated image based on the prompt ‘Ethiopian Orthodox Church wooden carving design.’ Using the DaVinci app, the AI produced a pattern inspired by Ethiopian artistic traditions. This design was refined in Photoshop, incorporating an Ethiopian cross (meskel) and two Lion of Judah symbols, which are deeply connected to Ethiopian Orthodox and Rastafari traditions.
Once finalised, the digital design was prepared for laser etching. A 30 x 30 cm plywood piece was etched with the pattern, and a second, thicker base was cut to the same dimensions and glued together for added depth.
The laser etching provided a guide for the next phase: hand carving. Using traditional carving tools, the etched lines were deepened, shaped, and refined with careful attention to detail. This phase was undertaken as a deliberate and meditative process. While carving, films, documentaries, and discussions on Ethiopian culture, biblical themes, and Rastafari philosophy played in the background. This created an environment where carving wasn't just a physical task but also a reflective experience. After completion, the wood was varnished, sanded, and treated with wood dye to enhance its depth and texture.
By integrating AI, craftsmanship, and meditative practice, Type Your Idea explores how digital tools and human creativity can coexist, reinforcing artistic and cultural heritage.
Shawn says, “This project suggests how AI-generated designs can serve as a foundation for human craftsmanship rather than replacing it. While AI contributed to the initial concept, the hand-carving process ensured that the final artwork retained a personal and human touch.”
Richard Paul’s work occupies the uneasy intersection of collaboration and distortion – though not the seamless, polished collaboration of corporate culture, but rather an unpredictable, unruly exchange. His practice embraces friction, allowing jagged inputs to collide and leave their mark. Positioning himself as a kind of stenographer, the artist transcribes the murmurs of AI, feeding them back into the system to disrupt and destabilise it. By pushing the machine into moments of unpredictability, he provokes it to generate forms that are uncanny – lingering at the threshold between dreaming and waking.
The process begins with an image – something seemingly familiar yet inherently fractured. A fragment, perhaps an optician’s advertisement distorted beyond recognition, or a model’s face disassembled and reconfigured, its features redundantly stacked like an administrative error. These images act as a kind of Rosetta Stone, though their translation is left to AI. It's the machine, rather than the artist, that breathes life into these distorted forms, absorbing absurdity and exhaling creations that feel less constructed than unearthed.
The lineage of these distorted figures is erratic and non-linear, evoking references that range from the artist’s own concrete sculptures to ancient Sumerian votive figures and Francis Picabia’s Monster Series, but not, crucially, Mr. Potato Head. The resulting works resemble relics from a speculative past-future, excavated from a world where physiognomy follows rules adjacent to, but not quite the same as, our own.
Richard says, "When translated into 3D, the image gains an alibi, or perhaps just a greater believability. The shift makes the strangeness more persuasive, an illusionistic argument for the impossible.”
Cameron’s work explores the intersection of digital fashion and technology, with a focus on virtual representation and inclusivity. Central to this exploration is Shudu, the world’s first digital supermodel. A fusion of hyperrealism and creative imagination, Shudu challenges conventional beauty standards across fashion and media.
The project emerged from a desire to innovate beyond traditional photography and storytelling, embracing artificial intelligence as an integral part of the creative process. From concept generation to aesthetic refinement, AI played a key role in shaping Shudu’s design. Using tools such as Midjourney and custom generative models, the artist then experimented with AI-generated prompts to explore character development and visual composition.
Shudu highlights the evolving dialogue between human and machine-driven creativity, redefining the boundaries between physical and digital identity. She highlights AI’s potential not merely as a tool but as an active collaborator, and how it can be used to query notions of authorship, authenticity and representation in the digital age.
Cameron says, “Generative algorithms allow for experimentation, introducing unexpected creative tangents that enhance the authenticity and depth of the characters I create. This collaboration with AI often leads to new aesthetic possibilities, transforming how I approach digital art and fashion design.”
terraPen is a digital drawing machine that allows users to experiment with multimedia drawing utensils to create digital artworks. Throughout the exhibition, a computer-controlled pen plotter will actively produce a range of drawings, translating digital collaborations into physical form in real time. This interactive system explores the evolving relationship between human agency and AI-driven automation, inviting participants to engage in a dynamic process of co-creation. As the machine translates digital inputs into physical drawings, it raises fundamental questions about authorship, control, and artistic intent.
For decades, artists and researchers have grappled with similar questions about authorship and agency in computer-generated art. Pioneers such as Georg Nees and Harold Cohen experimented with algorithmic creation, exploring the extent to which machines could act as creative partners. Manfred Mohr’s generative art further blurred the line between programmed logic and artistic intent. Today, AI offers new tools that extend this lineage, providing novel ways to bridge digital and physical, tangible and intangible.
This collection of work will continue that dialogue, using contemporary AI models and workflows to engage in a process of co-creation. Each drawing reveals how the balance of control can shift, producing works that neither human nor machine could create alone.
Edward says, “By witnessing these pieces take shape in real time, viewers are invited to reflect on the role of tools and instruments in artistic practice and the ever-evolving boundaries between human intention and machine intelligence.”
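For readers curious about the algorithmic-drawing lineage mentioned above, here is a minimal, hypothetical sketch in the spirit of Georg Nees’ early grid works: it generates a plotter-ready SVG file in which a grid of squares grows progressively more disordered down the page. It is illustrative only and is not terraPen’s own software.

```python
# Hypothetical generative-drawing sketch: a grid of squares whose rotation and
# offset grow more disordered towards the bottom, saved as an SVG that a pen
# plotter could trace. Illustrative only; not terraPen's actual system.
import math
import random

COLS, ROWS, SIZE = 12, 20, 30  # grid dimensions and square size in SVG units

def square(cx, cy, size, angle):
    """Return the four corners of a square centred at (cx, cy), rotated by angle."""
    half = size / 2
    pts = []
    for dx, dy in [(-half, -half), (half, -half), (half, half), (-half, half)]:
        x = cx + dx * math.cos(angle) - dy * math.sin(angle)
        y = cy + dx * math.sin(angle) + dy * math.cos(angle)
        pts.append(f"{x:.1f},{y:.1f}")
    return pts

shapes = []
for row in range(ROWS):
    disorder = row / ROWS  # disorder increases down the page
    for col in range(COLS):
        cx = (col + 0.5) * SIZE + random.uniform(-1, 1) * disorder * SIZE * 0.4
        cy = (row + 0.5) * SIZE + random.uniform(-1, 1) * disorder * SIZE * 0.4
        angle = random.uniform(-1, 1) * disorder * math.pi / 4
        shapes.append(
            f'<polygon points="{" ".join(square(cx, cy, SIZE, angle))}" '
            'fill="none" stroke="black"/>'
        )

svg = (
    f'<svg xmlns="http://www.w3.org/2000/svg" width="{COLS * SIZE}" height="{ROWS * SIZE}">\n'
    + "\n".join(shapes)
    + "\n</svg>"
)

with open("grid_drawing.svg", "w") as f:
    f.write(svg)
```

A file like this can be traced by most pen plotters, shifting part of the creative decision-making from the hand to the rules encoded in the program.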
Ione Pinto De Matos De Sousa’s Bout 2 Bloom (Visual Album) is a digital fashion collection presented as a series of music videos, each outfit designed in direct response to a track from the mixtape of the same name. In collaboration with musician Leeves, the work explores the intersection of digital fashion, sound, and moving image, creating a visually dynamic project that transforms garments into expressions of rhythm.
AI played a significant role in shaping the creative process, serving as a tool for ideation, material exploration, and conceptual refinement. During the initial development phase, ChatGPT was used to generate ideas, providing a foundation for the artistic direction. Krea.ai was employed to produce AI-generated visual references, influencing fabric textures and garment structures. Additionally, experimental image-to-video AI tools were tested to explore movement and digital animation possibilities. However, the final work remains distinctly human-led.
By leveraging AI as a means of exploration rather than an end result, Bout 2 Bloom (Visual Album) demonstrates how AI can be integrated into digital fashion workflows, offering new possibilities for creative collaboration and innovation. The process revealed AI’s capacity to enhance experimentation, introduce unexpected design pathways, and streamline workflow efficiencies.
Ione says, “AI is not evident in the final product. However, it formed a big part of the process, particularly in the exploratory stage where image to video was tested.”
Artificial Intelligence: in detail
1950: Alan Turing introduces the concept of "machine intelligence" and the Turing Test to evaluate if a machine can mimic human intelligence.
1956: John McCarthy coins the term "Artificial Intelligence".
1964: Joseph Weizenbaum develops ELIZA, the first chatbot, which simulates human conversation using a simple text-response algorithm.
1974: The first "AI winter" sets in, triggered by James Lighthill's critical report on AI's overhyped promises, slashing funding and interest.
1980: AI research rebounds as IBM pioneers machine learning models for probability-based decisions.
1982: John Hopfield develops the Hopfield Network that can learn and remember patterns, revolutionising our understanding of memory.
1987: AI faces another setback with the second "AI winter" caused by failures and budget cuts.
1997: IBM's Deep Blue defeats world chess champion Garry Kasparov, marking a milestone in AI ability.
2011: Apple launches Siri, bringing AI-driven voice assistants to mainstream smartphones.
2014: Ian Goodfellow introduces GANs (Generative Adversarial Networks), which can generate realistic synthetic images and videos, enabling the creation of "deepfakes."
2015: Google releases DeepDream, an algorithm that produces surreal, dream-like images using neural networks.
2016: Microsoft’s chatbot Tay, designed to learn from Twitter users, gets corrupted by inappropriate interactions, showcasing the risks of AI in uncontrolled environments.
2016: Sophia the robot is activated, combining robotics and AI; she later becomes the first robot to be granted legal citizenship.
2017: Researchers at Google introduce the Transformer architecture, capable of reading all words in a sequence simultaneously. First applied to machine translation, it laid the groundwork for generative text models.
2018: OpenAI unveils GPT (Generative Pre-trained Transformer), capable of performing a wide range of language tasks.
2020: GPT-3, OpenAI’s groundbreaking language model, debuts, demonstrating exceptional versatility in text generation.
2021: OpenAI launches DALL·E, an AI model that creates images from textual descriptions.
2022: Stability AI releases Stable Diffusion, an open-source model that, alongside tools like Midjourney, brings text-to-image generation to a mass audience.
2022: AI goes mainstream. ChatGPT, launched by OpenAI, reaches a staggering one million users in five days.
2023: The Generative AI race heats up: Microsoft integrates ChatGPT into Bing. Google counters with Bard (later renamed Gemini), its generative AI chatbot.
2023: OpenAI releases GPT-4, offering key advancements such as the ability to use images as well as text as input prompts and, later in the year, to access the internet in real time.
2024: Meta introduces AI-generated accounts on Facebook and Instagram, but later deletes them after facing backlash.
“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke
AI might feel like a recent breakthrough, but the journey to replicate human intelligence with machines has been unfolding for over 75 years. What once lived purely in the realm of science fiction—machines that can generate text, code, images, video, and sound in seconds—is now a reality. It feels like magic, doesn’t it? But look behind the curtain, and you’ll see it’s anything but.
Reaching this point has taken decades of research, colossal amounts of training data, and endless experimentation. And even now, AI doesn’t work alone. Human creativity and judgement are woven into every step – writing the prompts, curating the training datasets, and critically assessing outputs. The fingerprints of human intervention are everywhere, reminding us that AI isn’t a replacement for human effort but an extension of it.
So, while AI might look like magic, it’s really a collaboration – a fascinating blend of technology and human ingenuity working together to reshape what’s possible.
“AI is whatever hasn't been done yet.” – Larry Tesler
In 2025 and beyond, artificial intelligence remains a driving force in cultural, technological, and economic transformations. Once an invisible enabler of innovation, AI now dominates the global conversation, reshaping industries and redefining possibilities. Groundbreaking applications are emerging daily, from revolutionising personalised medicine to advancing creative fields, empowering both experts and everyday users to tackle problems that once seemed insurmountable.
But with great power comes even greater responsibility. Questions about privacy, data ownership, and copyright have taken centre stage as industries and governments race to shape the future of AI development. While the potential benefits are boundless, the need for responsible innovation has never been clearer.
From chatbots to personalised recommendations, AI tools have long quietly transformed how we work and live, often without being recognised as “intelligent.” As AI continues to push boundaries, the challenge isn’t just about advancing technology – it’s about ensuring that progress is guided by accountability and ethics. By balancing innovation with responsibility, we can shape AI to amplify human potential and serve as a force for collective good.
Looking backwards into the future: the history of AI
- Boden, M.A. (2018). Artificial intelligence: a very short introduction. Oxford: Oxford University Press.
- Burgess, M. (2021). Artificial intelligence: how machine learning will shape the next decade. London: Random House Business.
- Cave, S., Dihal, K. and Dillon, S. (2020). AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford: Oxford University Press.
- Mitchell, M. (2020). Artificial intelligence: a guide for thinking humans. London: Pelican.
- Wooldridge, M.J. (2021). The road to conscious machines: the story of AI. London: Pelican.
The artist in the machine: AI-assisted creativity
- Armstrong, H. and Dixon, K.D. (2021). Big data, big design: why designers should care about artificial intelligence. New York: Princeton Architectural Press.
- Bernstein, P. (2022). Machine learning: architecture in the age of artificial intelligence. London: RIBA.
- Del Campo, M. and Leach, N. (2022). Machine hallucinations: architecture and artificial intelligence. Oxford: Wiley & Sons.
- Du Sautoy, M. (2019). The Creativity Code: how AI is learning to write, paint and think. London: 4th Estate.
- Grba, D. (2022). Deep Else: A Critical Framework for AI Art. Digital. Vol. 2 No. 1. pp. 1–32. https://doi.org/10.3390/digita....
- Hageback, N. and Hedblom, D. (2022). AI for arts. Boca Raton: CRC Press.
- Luce, L. (2019). Artificial intelligence for fashion: how AI is revolutionizing the fashion industry. Berkeley: Apress.
- Magrini, B. (2017). Confronting the Machine: An Enquiry into the Subversive Drives of Computer-Generated Art. Berlin: Walter de Gruyter.
- Miller, A.I. (2019). The Artist in the Machine: The World of AI-Powered Creativity. Cambridge: MIT Press.
- Navas, E. (2023). The Rise of Metacreativity: AI Aesthetics After Remix. New York: Routledge.
- Pasquero, C. and Poletto, M. (2022). Biodesign in the age of artificial intelligence: deep green. New York: Routledge.
- Ploin, A., Eynon, R., Hjorth I. & Osborne, M.A. (2022). AI and the Arts: How Machine Learning is Changing Artistic Work. Report from the Creative Algorithmic Intelligence Research Project. Oxford Internet Institute: University of Oxford.
- Thiel, S. and Bernhardt, J.C. (2024). AI in Museums: Reflections, Perspectives and Applications. Bielefeld: Verlag.
- Voigts, E., Auer, R.M., Elflein, D., Kunas, S., Röhnert, J. and Seelinger, C. (2024). Artificial Intelligence - Intelligent Art?: Human-Machine Interaction and Creative Practice. Bielefeld: Verlag.
Reasons for robophobia: Confronting the big issues with AI
- Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Newark: Polity Press.
- Borg, J.S., Sinnott-Armstrong, W. and Conitzer, V. (2024). Moral AI: and how we get there. London: Pelican.
- Bridle, J. (2018). New dark age: technology, knowledge and the end of the future. London: Verso.
- Broussard, M. (2023). More than a glitch: confronting race, gender, and ability bias in tech. Cambridge: The MIT Press.
- Crawford, K. (2021). Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press.
Design; Disruption; Divergence – Formal opening and private view
Join us for the formal opening of Design; Disruption; Divergence.
Representation and Digital Spaces
AUB would like to invite you to our panel event "Representation and Digital Spaces", featuring fashion photographer Cameron-James Wilson...
Troubling AI: Navigating the ethical, social, and existential implications of Artificial Intelligence
AUB would like to invite you to our panel event "Troubling AI", as part of our "Design; Disruption; Divergence" exhibition...
Design; Disruption; Divergence – schools competition
We challenge you to create a digital or physical artefact inspired by the themes or artworks in "Design; Disruption; Divergence".