ANP002: Weirdcore
We catch up with the enigmatic AV experimenter to find out how a scene he in no small part helped define has changed, as well as the effects this has had on both his work and his audience.
It’s a testament to his skill as an artist and image-maker that, though best known for his collaboration with one of the most infamously recognisable figures in electronic music, the work of Weirdcore is both totally singular and instantly recognisable. Beyond Aphex Twin, Weirdcore has created imagery for some of the most exciting music of the last decade. Musicians across various genres and scenes, from resolutely underground producers like Slikback, Gooooose and Le Dom, to some of the biggest artists in the world, like The 1975, Miley Cyrus and The Smile, have been drawn into the strange orbit of Weirdcore.
Resonant with the surreal aesthetic style with which the artist shares a name, pre-digital visual references, the pixellated detritus of image manipulation and painterly swathes of glitch are instrumentalised to subvert, confuse and stimulate. Priding himself not on making what his collaborators want, but on transcending their expectations to create visuals they never knew they needed, the Weirdcore signature is an image so entangled within the sound or identity of an artist as to become forever part of the material of their world.
In recent years, Weirdcore has been working across installations and events at the grandest possible scales, presenting his first solo exhibition ORIENT FLUX across the entire fourth floor of a department store in Beijing, accompanying Aphex Twin on his world-devouring tours and working with H&M on their colossal ‘&’ events, creating immersive visuals for music by Mimi Xu, Caterina Barbieri and Charli XCX. Regardless of scope, Weirdcore approaches everything he does with a conceptual agility, buoyed by a first-principles attitude towards technical experimentation. With a career spanning over two decades, we caught up with the artist to find out how the space he in no small part helped define has changed, as well as the effects this has had on both his work and his audience.
You have spoken about the impact Photoshop had on you at the very start of your career. Has your adoption of different tools and technologies been a natural progression ever since?
I always think back to that. I was at college in the mid ‘90s and I saw someone using the ‘Select’ tool. I was never particularly into computers, but seeing that selection thing blew my mind. Pretty much everything I do has been self-taught; nothing that I learned at college really led to anything I do now. Once I discovered Photoshop and Illustrator it all just went from there. I learnt how to use After Effects by myself, then I was using Flash for work because I was doing web design. In ‘99, when I first moved to London, I started to VJ. Back then, the main thing VJs used was Motion Dive, but I started using Composite Station, which was quite similar to VJamm, the software created by Coldcut, where you could assign a different clip or effect to different bits of the keyboard. That was quite fun, though you’d get to a point in the night where you’d have to put your foot on the keyboard to trigger a new clip to play.
After a while I started doing things in a more minimal way, because I found just using pre-rendered clips meant that, if you’re doing visuals all night, you eventually run out of stuff. I started using super simple things, but then layering loads of different effects in a certain order so that even with just a few images I could build a whole live set that was really modular and versatile. Then I discovered Max/MSP and I’ve not really moved on from there! Doing live visuals has never really been my main job, but if I was learning live visuals now, I’d probably learn TouchDesigner, because that seems really powerful. I just can’t be bothered really. Loads of people use DaVinci Resolve and Blender now, but I'm sticking with After Effects and Cinema 4D. Plus, I’m such a plugin addict.
So you’re in the camp of: it’s not what you use, it’s how you use it?
Exactly. But it’s quite hard to do that with AI, you really have to kind of keep up with what AI is doing at the moment. It’s moving so fast and there’s so many new things that are coming out.
Does using AI in 2025 feel similar to how using Photoshop did back in the ‘90s?
There are so many things that would take me forever to do normally that AI is just so much quicker at doing. I’ve been using it for a few years in different ways. I was using early versions of AI for style transfer, which I used in the Aphex Twin video for ‘T69 Collapse.’ When I did the visuals for Arca’s collaboration with BronzeAI, where they trained a model on one track so it could keep playing generatively, I used machine learning to train models on some of Andrew Benson’s pictures to then recreate in Max/MSP. People loved it in 2021 and it’s really exploded since. Last year I was really excited about it, but now the novelty has worn off. It was the same with Photoshop, but when Adobe started implementing generative AI stuff into the software, that was really magic.
From our perspective, over the last year, it can sometimes feel like AI is uncontrollably moving at such a speed that it’s difficult to keep up. Is it the same for you?
On ‘Blackbox Life Recorder 21f’ I was trying to implement loads of AI. The thing you couldn’t quite do back then was AI video generation, which has completely exploded now. The videos that I'm seeing look so realistic, but everyone’s using it. It’s very saturated. The way that I’m using AI now is a bit like how big companies are implementing AI, where it’s often used for rotoscoping. The ML implementation in Mocha by Boris FX is amazing, you select a person and it completely tracks them instantly, which would have taken hours, even five years ago.
The thing that I really don’t like about AI is the backlash to it. I did loads of stuff for The Smile last year and pretty much all of it was using AI at some point in the process. There were loads of people in the comments criticising my use of it. I find it so short-sighted! I’m just using it as a tool, it massively speeds things up and enables me to use certain styles that I wouldn’t normally use. I think a lot of people don’t seem to realise that they’re probably using some form of AI every day, from accountants using spreadsheets to the tools on our phones. In Dune: Part Two, AI is used in most shots because they trained it to make the actors’ eyes blue. In the David Lynch version in the ‘80s, someone had to painstakingly select the eyes from every shot. You can’t tell me that took away someone’s job; no one really wanted to do that.
We can sometimes spend weeks doing clothing simulations or motion capture, only for someone to now assume it’s done with AI tools. Do you care whether people think that you haven’t spent as much time on something as you did?
I rarely put out something straight out of ComfyUI. It’s just one element that I’ve taken and then manipulated. Some people probably don’t put the effort in, they just put a prompt into Midjourney and then deliver that. I would get upset if people thought that what I’ve delivered was just a prompt, when they don’t realise that, to even get an output that I’m satisfied with, there’s so much trial and error.
What most people seem to be critical of, from the artist’s point of view, is lazy use of the tool. That’s the charge made in the open letter to Christie’s describing the sale of AI-assisted art as “mass theft.” How do you feel as an artist about what constitutes training data and what’s fair use?
I think the best thing is to just train your own data. I rarely use models trained on other people’s work, but it’s hard to draw the line. What about if you train your own data set, or train it on images that don’t come from an artist? If I wanted to make something that looks like a tree, does anyone have the license of what a tree looks like? If you feed it loads of images of trees that you’ve taken, or that you found online, is that okay? Is it because it’s the work of an artist, as opposed to anyone else?
How would you feel about your images being used by other people in their data sets?
Not great, but in a way it pushes me to make new stuff. Whatever they’ve trained it on is my old stuff, so who cares. That’s motivation for me to keep on it.
Do you have any anxiety about AI?
Not so much within art, but more with regards to disinformation, especially at the moment, because everything has gone so batshit crazy. If it’s used by people to fabricate so-called evidence, to spread disinformation, that would be scary. When I was at college I was really interested in the art of propaganda, and using AI for those purposes could be so effective. That’s my main concern. As for the way it’s used in art, I don’t think anyone’s going to die. I think the worry is we’re getting to the point where you can’t really tell what’s AI and what isn’t.
How do you see AI enhancing live visuals, or audio reactive visuals, for live performance?
I was a bit disappointed a few years ago. I was really keen to try using the face filters you’d see on Instagram or Snapchat in a live setting for one of the Aphex Twin tours, as I thought doing that on a computer would be so much more powerful and amazing. After looking into it for a few months, it became obvious that it was kind of impossible to incorporate that technology into a laptop or desktop, because all that stuff was proprietary to those platforms. I’ve had to be more creative and do something more stylish and lo-fi.
The AI that excites me most at the moment is using it to turn a 2D image into something 3D. For the H&M events, I was trying to make infinity cubes in which to place images provided by H&M to then reflect them from loads of different angles. I’ve also been really into Gaussian Splats, which, from my understanding, is a bit like photogrammetry, but AI is used to do it more accurately. That’s really exciting, I’m exploring the use of that in Max/MSP at the moment.
How can you picture using Gaussian Splats in the context of your work?
It’s actually technically very similar to the stuff I did for Arca, which were these splatters of colours that, when they overlap in a complex way, recreate an image. A Gaussian Splat is a 3D scan that is not made of geometry, it’s made of these little flickers of colours that, seen from far away, resolve into an image. You can generate it from something you film on your iPhone. Because it’s not using geometry or polygons it’s really quick to render, so you could have it on a website. It’s a bit like when you see sketches by Renaissance painters of really big, unfinished works, where portions of it are almost photo-real, but then the rest is all sketchy. It’s really messy, but in a really beautiful way.
Are there any other specific periods from the history of image making that you find yourself going back to and referencing with your work?
In this day and age you can see something online and try to recreate it straight away, you have all the tools. I find it’s better to copy something that is from before the digital age with newer techniques. One of my favourite artists is Victor Vasarely. I grew up in the north of France in the ‘80s and you would see his work on the backs of billboards. It’s super pop art, all these pre-CG spheres. I’m really into old sci-fi comics, like Moebius, all the old French sci-fi like Les Humanoïdes Associés and Enki Bilal. The making-of material for certain films, like the illustrations used for Star Wars and Dune, is really inspiring.
I also remember music videos in the ‘80s as being so good, but it’s funny, sometimes I’ll want to do something in the style of a video and then I try and find it on YouTube and it’s so much more rubbish than I remembered. My memory is so rose-tinted. Back then music videos were so much more creative, it’s the same for sci-fi films, like all the stuff by Cronenberg or Carpenter. It was pre-CG, so they really had to plan things in creative ways. With digital stuff you can do whatever, which results in stuff that is not that exciting.
It’s the same thing with old rave visuals, some of which have dated very badly.
A lot of rave flyers were actually using old ‘70s stuff. I’ve got loads of books from this guy, Patrick Woodroffe. My dad actually gave me one. Loads of his stuff got used; I’ve got other books of rave flyers and his work is all over them. They’re more timeless, they’ve aged well. The stuff I used to really love back then was by Future Sound of London and all the stuff that Mark McLean and Colin Scott did as Stakker.
What was it about that visual style in particular that captured your imagination at that time?
In the same way that the selection tool in Photoshop amazed me, or how the new AI stuff is amazing us now, or even how The Lawnmower Man amazed me in the early ‘90s, it was just so new. I met one of the guys from Stakker Humanoid. He worked at a post production place in Soho and he had to render his visuals over several weekends. It’s the kind of thing you could just do on your phone now. You look at all those late ‘80s videos by Technotronic and they’re just so bad. It’s hard for me to describe now what I liked about it then.
What are your thoughts on music videos in 2025?
For a full-length music video to be interesting for its entirety, it needs to have someone singing. That’s why I really struggled with the Aphex Twin ones. I had to get Richard to trim ‘T69 Collapse’ down because, apart from the most hardened Aphex Twin fans, most people wouldn’t stay interested. I had to make it in steps, there are different phases to the video to keep it interesting. With pop stuff, people are keen to see a visualisation of the person singing the track. If I was to do more music videos, it would be for pop stuff. I didn’t post any of it online, but I did some graphics for the ‘How Sweet’ video for NewJeans last year. I really like their stuff.
I’ve always been more keen to do shortform work, to focus on the whole campaign. When I work with Aphex Twin, the music videos are not the most exciting thing for me. I like doing something that works for the posters and the artwork, for it all to fit in the same ecosystem. When I did stuff with The Smile last year, I was keen to make something that was minimal and spanned multiple videos in a very loose narrative way. I’d much rather do one or two minute videos for all the tracks of a release, rather than one full music video. I think people consume music videos in a much more bitesize way. Even myself, I can’t be bothered to watch a whole video. People lose interest! I did loads of stuff for The Caretaker a while back. He uploaded stuff that I had made for the album, but there were more views on the videos of just the artwork.
How do you feel about how audiences for music videos have changed over time?
It’s just the way people consume stuff. You need to adapt to the market. When I was doing the last Aphex Twin video I was really pushing towards doing something for each track rather than a full thing, but the label wanted a music video. I tried to explain: that’s not how it works these days. Maybe we can understand the music video as a legacy format, like a vinyl record. Established cultural institutions still have difficulty adapting to the new media environments in which things are consumed, which are constantly and rapidly changing.
The thing to understand about music videos is that, pre-digital, they were promotional. A music video was to promote a track, to get people to go to the store and buy the single or the album. Now, you have the full album on the next link, you can just stream it straight away. I think music videos now should be more like teasers, to get you hooked to then go check out something. Why make the whole video when the budget is so much more limited? It’s counterproductive. I find it so much more fun to make something that works as a music video, as vignettes for the socials and as an animation for streaming platforms, that carries across the artwork and all these other different formats. It’s so much more interesting than just one music video.
That goes back to using AI to make more things, more quickly. The way in which your work is being consumed is changing, so the way in which you have to produce it also has to change. How do you approach maintaining consistency across different formats and media?
With any of my work I tend to get bored really quickly, so I try to get into the mindset of anyone viewing it for the first time. If I’m getting bored, they’re probably getting bored. For me, it’s more about visual impact.
Your work has evolved alongside the technology that you use. What comes first for you, an idea about the kind of visuals that you want to achieve, or what you can achieve with the tools that you want to use?
Sometimes a project comes up and I will go through the things that I’ve wanted to try out, but it depends on what tools are available at that point in time and the different techniques that I want to explore. I pride myself on trying to deliver what the client or project wants in a way they don’t particularly expect. I get really into what they want psychologically, but then deliver it from a perspective they weren’t anticipating.
Would you say that it’s the mind of Richard D. James [Aphex Twin] you find easiest to access?
I’ve got similar tastes to Richard. I know exactly what he likes, it’s pretty much the same things he liked when I first started working for him. I’ve been an Aphex Twin fan since the early '90s, so I know what an Aphex Twin fan, or a Warp fan, would like. If I only had one client, it would be Aphex Twin. It’s just so much fun. The only downside is that he can go into hibernation for years and no one knows, probably including himself, when he’s going to come out of this hibernation.
Interview by Henry Bruce Jones and Tom Wandrag.