ANP004: Laura Herman
For the last seven years, researcher, technologist and curator Laura Herman has been figuring out how artists actually use their digital tools. As Head of AI User Experience Research at Adobe, Herman corrals the team of user researchers behind Adobe Firefly, the creative software giant’s move into the world of generative AI. These are the user researchers behind not only the Firefly app but also its underlying generative models, as well as the generative functionality within an array of other Creative Cloud tools. Adobe trains its models solely on content owned and licensed by the company, a position that affords the Firefly team a unique perspective on a new frontier of technology, one roiling with tension between those who hail AI as a revolutionary next step in the history of image making and those who fear it signals the end of human creativity.
For Herman, however, the truth lies somewhere in the noise of the middle. Drawing from her experiences working across Adobe’s Creative Cloud suite and a background in neuropsychology, she understands generative AI models as simply exciting new tools in the artist’s toolbox. By centring the tens of millions of people who actually use Adobe software in a conversation often crowded by anxiety, she has steadily built an understanding of the real impact AI is having on both consumers and creators. Far from seeking to replace human creativity with automation, Herman’s interest is in how this technology shifts and shapes how we understand creative labour, and in how our responses to and rejections of these changes within the creative industries define the role of the artist in 2025.
Beyond her acute insight into the impact of new image making technologies on culture, Herman also has an intricate understanding of the networks and algorithms by which those images are distributed. During her PhD studies at the Oxford Internet Institute, Herman orchestrated ‘The Algorithmic Pedestal’, an exhibition that pitted artist Fabienne Hess against Instagram’s algorithm, tasking both with curating a show selected from the Metropolitan Museum of Art’s Open Access collection. The gesture was both a provocation and a prompt. Not only was the show a powerful example of just how essential the human curator is to any creative act, it was also a reminder that, if we don’t build AI systems with intentionality, around notions of artistic quality rooted in communication, empathy and criticality, we risk sliding further into the world of slop currently rising up around us.
What led you to wanting to work at Adobe in the first place?
I originally studied neuroscience and psychology, but even then I was focused on the neuropsychology of artists: how art shapes perception and how perception shapes art. I’ve always had this very creative focus in my scientific inquiry. When I started doing user research at Adobe, working on Photoshop, it became clear there were so many emerging technologies that were going to influence what and how people create. I started to see AI emerging and wanted to find out how the technology was going to shift how creatives work, so I decided to do a PhD on how machine learning was influencing creative practices. When I started that PhD, everyone in the department, at work, even friends and family, thought of creativity as the one thing that AI could not do, and that I was wasting my time. By the time I finished, the conversation had completely changed: generative AI was at the centre of everything, and I had inadvertently positioned myself as a researcher in this new field. At the same time, Adobe was forming the Firefly team, so I took up the helm of leading the team of researchers thinking about these topics.
Did you see the role of the creator changing in real time as you were writing the PhD?
Yes. A PhD is supposed to be a very in-depth study of a static thing, so the entire field changing throughout the course of it made things a little tricky. Who knows, my entire PhD might already be moot! I was working at Adobe alongside the PhD, so trying to be a translator between the conversations happening in industry and those in academia, which are unfortunately often worlds apart, was a fun challenge. A lot of the work that I was doing, both in the PhD and at Adobe, was asking: how is the creative process changing? At Adobe we had a pretty good sense of how people used our tools, in what order, for what reasons and for which core use cases. Generative AI was going to change a lot of that, so I worked on predicting how this technology was going to shape our users’ processes going forward. Some of that has come true, some of it hasn’t. I got to sit down with creatives and hear how they were using new tools, looking to the people really paving the way as examples of how the rest of the creative community might start working. I also heard a lot of really strong emotions around this. In some ways, the emotional reaction to the tool was influencing creative processes more than the actual tool itself.
What were some of those predictions?
One of the big things that I was trying to emphasise to the team at that point was the sheer volume of content that creators will be able to make, to get them to think about the role of the creative more as a creative director and a curator. In the early stages of the process the creative is really responsible for coming up with the ideas and directing the AI system, aiming a lens at a certain subset of ideas that they think are interesting and then using the generative tools to flesh those ideas out. In that process a ton of content is developed and generated, so it’s then a matter of sorting through it. Though it’s maybe not as sexy, we need to think about how to deal with the mass of content our users are drowning in. We wanted to give them interfaces that help them really clearly and easily cull through this material because, at least for now, people don’t fully trust the generative tools to make stuff that goes straight to the consumer. A big focus was keeping humans in the loop, making sure there are checks and balances that are easy to access, and striking the right balance of giving them the control and precision that we know creative professionals crave, while also helping them be more efficient in leveraging this technology.
What we don’t want to do is recreate all of the complexity of Photoshop just for generative AI; instead, we want to think more intentionally about what controls and features are really needed to make the best work possible. I’ve started to see this kind of curation more and more, but what has most blatantly come true is the use of generative AI for creative direction. Our tools have historically focused on production, the moment of making the thing, and on editing. Those are the processes that generative AI is going to disrupt the most. Where humans are going to be focused is earlier in the process, around ideation, gathering inspiration and brainstorming in that direction, and we haven’t historically built specific tools for that. Project Concept is a new interface leveraging generative technology for ideation, inspiration and creative direction workflows, giving that same level of creative power to those early stages of the process, where we know people are going to be more and more focused.
In what ways are you seeing the scale of content increasing, and how can AI help us keep up?
Within a single campaign you need to create a ton of different versions: not only different sizes for different platforms, but versions for different markets, personalised for individual users. There’s also the speed with which someone might need to create multiple discrete campaigns. Because there is just so much content, for a lot of brands looking to stay competitive in such a fast moving landscape it’s not enough to have one big push on TikTok and YouTube per month; you need multiple per day. It becomes this exponentially enormous amount of stuff that’s being made. A lot of that right now is done through circuitous processes, where a big brand will hire an agency, who will then hire another agency, so my team is trying to chase down the people actually doing this stuff at the end of the chain. As far as we can see, a lot of the storage and organisation is just spreadsheets and Dropbox folders. There’s not really a great solution for this, the way Lightroom is for photography, so trying to think through what that looks like for the generative age has been our provocation lately.
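To make that multiplication concrete, here is a minimal sketch of how versioning compounds for a single campaign. The dimensions and counts below are illustrative assumptions, not figures from the interview:

```python
from itertools import product

# Illustrative axes a single campaign might be versioned across.
sizes = ["1:1", "9:16", "16:9", "4:5"]          # aspect ratios per placement
platforms = ["instagram", "tiktok", "youtube"]  # distribution channels
markets = ["us", "uk", "de", "jp", "br"]        # localised markets
segments = ["gen-z", "millennial", "parent"]    # personalisation segments

variants = list(product(sizes, platforms, markets, segments))
print(len(variants))  # 4 * 3 * 5 * 3 = 180 assets for one campaign

# Multiply by several campaigns per day and the spreadsheets-and-Dropbox
# approach described above stops scaling very quickly.
```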
So, as far as the human using AI is concerned, creativity is tied even more closely to taste, in the sense that it’s their job to select images, rather than to produce them?
To me, curation is essentially distinguishing between things using taste. That’s where we need human creatives to have that curatorial role. We should really be honing and training those curatorial skills, both in the context of generative AI, but also in the context of algorithmically driven sites like TikTok, where young people are just being fed content without any sort of explicit agency over what they’re seeing on a daily basis. They’re not exercising any sort of muscle that we maybe did to seek out certain forms of inspiration, to find things online or in libraries; it’s all just being served to them. That scares me a bit. Will they just accept generative AI serving content to them on the strength of it being a good output, or will they realise that when things are shit, we should change them and create new things? That’s where I hope we can encourage that curatorial sensibility. I think a core mistake being made across the board with AI technologies is in the evaluation of what is ‘good’. That’s something that my team has been spending a ton of time on, thinking through how we evaluate the success of these models. It’s very easy to say: we’ll put two images in front of people, and whichever one they like better is good. The bigger challenge is trying to think of ways to measure artistic quality. If you talk to many humanities scholars they will tell you that you just can’t, but, while I appreciate we can never do it perfectly, if we don’t do something it’s just going to end up being all about the resolution of the image, which isn’t good enough. So what can we try to do to approximate artistic quality?
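The side-by-side comparison Herman describes is how preference data is commonly turned into model rankings, typically via an Elo- or Bradley-Terry-style update. A minimal sketch of the Elo variant (the update rule is standard; the comparison data is invented for illustration) makes plain why this measures preference rather than artistic quality:

```python
def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Standard Elo update after one pairwise image comparison."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two models start level; raters prefer model A in three of four trials.
r_a, r_b = 1000.0, 1000.0
for a_wins in [True, True, False, True]:
    r_a, r_b = elo_update(r_a, r_b, a_wins)

# A ends up rated above B - a pure 'liked it better' signal that says
# nothing about communication, empathy or criticality.
print(round(r_a), round(r_b))
```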
What were some of the emotional responses to generative AI you encountered in speaking with creatives?
I think a lot of it comes down to the media coverage of this and the cultural response to it. So many things right now are extremely culturally divisive, everyone seems to have very strong opinions at both ends of the spectrum, and AI really hasn’t been much different. Something that I liked about the Weirdcore interview was him reiterating that AI is just another tool. That is not an attention grabbing headline, but it’s very much the perspective that I take. This is a new tool in the artistic toolbox. We want to provide that tool in a way that is thoughtful and considered, but it is not trying to replace artists. It’s not the end of creativity. It’s also not an entirely new, revolutionary art form. Artists might use it to create new art that is revolutionary, but the tool itself is not. When photography came about, impressionism emerged as a reaction to it. It wasn’t just that photography was the new art form in and of itself; there was also this reactionary gesture. I’m really curious about what the reactionary art form in response to generative AI is going to be. With regard to creative labour, I remember seeing a meme that said something along the lines of: creatives, stop freaking out; for your clients to use generative AI, they’d have to be able to describe what they want. As a creative you have the power to clearly articulate a vision and then see when that vision has been met or not - that’s what people are hiring you to do. It’s very hard to imagine a CEO typing out some complex prompt when they don’t even have the thing in their mind for the prompt to match.
Can you imagine a world in which every advertising image is AI generated?
On one hand, while I don’t know if I can imagine every single image being 100% generated, I can imagine generative technology playing a role in every advertising image. There are going to be so many tools with generative technology inserted into them in different places that it’s hard to imagine something being created completely without them. Whether the initial idea came about through someone talking back and forth with ChatGPT for a brief, or the final tweaks were made inside Photoshop using Generative Fill, I think AI will play a role in a lot of campaigns going forward. On the other hand, part of the reaction to generative AI is this obsession with authenticity, which is particularly cultivated through some of the social media platforms in this parasocial sense. So many brands find that what’s good for business is being hyper authentic and very rough around the edges, so when people see generative AI they assume it’s the antithesis of that. There’s an assumption that, if something is generated, it is the opposite of authentic, which philosophically is a very interesting argument to have. I see a lot of big brands fearing it not only from a legal perspective, but also in terms of how it performs for their audience.
There was a huge conversation around the authenticity of images when Photoshop emerged, so are we seeing the same thing?
It feels very similar. At Adobe, we never wanted our tools to be used to make deepfakes, but because generative AI created much more societal awareness around the possibility of them, and because they can be made more efficiently with less skill, it accelerated conversations that were already happening. If before you couldn’t believe anything you saw because it could have been Photoshopped, now you can’t believe anything you see because it might be AI-generated - and that you can’t believe anything you see has been a refrain since the Renaissance. Maybe we should just all stop believing what we see, then this whole problem would be solved! It’s always about questioning the entire context within which an image sits and how it’s been situated, rather than only how the image has been edited. If an image has been entirely generated, that is important information for a viewer to have, but there’s so much more as well. Who made it? Why did they make it? Where was it made? The Content Authenticity Initiative, which predates generative technology and began as a reaction to how Photoshop was being used for mis- and disinformation, is trying to clearly and explicitly show people how an image was made or manipulated. Now that there’s generative technology, for better or worse, user-facing platforms are working with us on exposing this information to users. We’re partnering with Meta, Google, Microsoft and OpenAI, as well as the BBC and the New York Times - all these organisations are collaborating on a solution for how to transparently label AI-generated content, which is wonderful.
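Under the hood, the Content Authenticity Initiative’s labelling rests on signed provenance manifests (C2PA Content Credentials) that travel with an asset and record how it was made and edited. As a rough, hypothetical sketch only - the dict below is a simplified stand-in, not the real C2PA schema, and summarise_provenance is an invented helper - surfacing that record to a viewer might look like:

```python
# Hypothetical, simplified provenance record - the real C2PA manifest is a
# signed, binary-encoded structure, not a plain dict like this.
manifest = {
    "claim_generator": "Adobe Photoshop",
    "assertions": [
        {"action": "created", "when": "2024-05-01T10:00:00Z"},
        {"action": "edited", "tool": "Generative Fill"},
    ],
    "signature_valid": True,  # assume the cryptographic check already ran
}

def summarise_provenance(m: dict) -> str:
    """Answer the viewer-facing questions raised above:
    who made it, how, and were generative tools involved?"""
    if not m["signature_valid"]:
        return "Provenance could not be verified."
    used_ai = any("Generative" in a.get("tool", "") for a in m["assertions"])
    label = "generative AI tools were used" if used_ai else "no generative tools recorded"
    return f"Made with {m['claim_generator']}; {label}."

# Prints: Made with Adobe Photoshop; generative AI tools were used.
print(summarise_provenance(manifest))
```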
How has the shift from chronological feeds to algorithmic feeds on Instagram changed our relationship to the image?
In order to get the visibility they need to sustain their careers, many creatives feel an increasing amount of pressure to pander to the algorithmic system that has the power to grant that visibility. You could argue that that’s very similar to the curators of big art institutions in the past, or various other tastemakers throughout history, but the difference is that this is now in a black box. It’s a technology system run by a big tech company that is deciding which creatives get visibility, have financial success and can therefore keep creating art. There’s a huge downstream influence on what is created and on what is shown about what is created. We’ve seen this big rise of process videos, and I think that’s partially a reaction to some of these concerns: showing that you sat at your computer for 300 hours and made something is a rebuttal to the idea of removing humans from the process. But it’s also an aesthetic that seems to be prioritised by algorithmic feeds. I talked to some creatives who explicitly said they didn’t make process videos, but now do, and, more terrifyingly, that they changed their process so that it was more suitable to being captured, which is wild!
Our relationship to these images that are being fed to us is really not critical. It’s so easy to just passively consume this stuff being thrown at you. I wish there was as much of a societal backlash against that as there has been against generative AI, because it arguably has a bigger impact on our visual ecology and our culture. Suddenly our feeds went from being chronological, to being based on what you engage with (though what engagement even means is unclear), to bombarding you with stuff from people you didn’t choose to follow. That shift is us losing agency and power over what we see and the images that we consume, which we all know have a huge downstream influence on our views of the world, our cultural engagement and the ideas we come up with as creatives. We’re not questioning that enough. Meta recently released an advertising suite offering generative tools built into their algorithmic feed. Rather than you trying to come up with content that they could feed to other people, they’ll actually create the content for you, because they know exactly what they will feed to your audience. If they create it for you, then it’s AI-generated to perfectly match what the algorithm will select for each person to engage with, and the human is completely cut out of the process. As a consumer of images, that’s what I’m frightened of. I just saw a TikTok ad for TikTok’s algorithmic advertising system, and I’m pretty sure the person I was watching wasn’t real. There was no way for me to tell. It’s all beginning, right now.
Now that we’re questioning images and videos of humans, how does that change the role of the fashion model?
A lot of people still don’t understand how generative tools work in the context of human identity. I saw that Levi’s was using generative AI to produce a more inclusive array of models for their campaign. I’m all for the sentiment of having more inclusive representation, but the whole point is that those real people are included. The idea of generating an image of someone from the global majority sitting in a wheelchair and then saying that you’re being inclusive, when in fact you’re erasing the existence of a very real model, who must exist somewhere and whom you didn’t hire, is just completely backwards to me. I will also say that generative video models, which are all the rage right now, are not good at depicting humans. They’re really great at a lot of other things, but humans are really hard, particularly humans over a period of time, moving in space. Identity is not understood by these systems, but it is central to human viewers and human creators. That AI models are missing such a core thing really centres the importance of having actual models as part of this conversation.
Recent shifts in our media environments have relocated where artistic intervention happens within the process of cultural production. How have you seen this new media environment changing the roles of both creators and consumers?
The conflation of AI-generated media with AI distribution systems creates this really thorny reality for both creatives and consumers. For creatives, it essentially pits them against a system that is speaking to itself in two different permutations: an AI generator speaking to an AI curator. Even though they are created by very different people for very different purposes, they’re both AI systems. It worries me how much of a disadvantage humans might be at in that context. When we’re creating media for other humans to consume we should have an advantage, but we are being kept apart by this wall of AI systems. What I’ve been seeing crop up more and more is people looking for ways to directly control what they see and to speak directly to other humans. A good example of that is MUBI, a movie platform curated by film experts, versus Netflix. The tricky part in any human curation system is why certain humans get to curate instead of others. A bunch of educated people from the Global North who share the same tastes and sensibilities deciding what gets put on display is also problematic, but you’re essentially choosing between them and big tech.
From the consumer perspective, I think there is starting to be more of a sense of rebellion against predictability, where the images we see are no longer as exciting or soothing as they once were. We need to wake up and realise that we have our heads down in the trough of the feed. Maybe it’s getting so bad and so boring that people are just gonna start realising anyway: the feed is soiled! Unfortunately, there’s a lot of research showing that if you infer what the public truly wants from their engagement, it becomes a lot of sex and drugs and cars, and not actually very interesting artistic work. Part of this is a byproduct of making decisions purely based on engagement, which comes back to the question of metrics. What should we have been optimising for instead of engagement? How can we give the consumer a voice to say: I know I might watch this video of a car more than I would like to, but what I’d really like to be seeing is this interesting commentary on artwork. Right now, they don’t have a way of conveying that to the system.
How do you envision the future relationship between creativity and AI?
I hope that AI systems become more and more capable, but that their capability is in the service of supporting humans, and that those humans are always in the loop in developing, testing and utilising those capabilities. I think the biggest tension we’re currently grappling with is how much control you put in the hands of the human. Do we empower this technology to make things more efficient, or will that result in a human craving for creative control? This is the big challenge that my team and I are working on now: finding the human-centric balance between efficient innovation and creative control.
Interview by Henry Bruce Jones and Tom Wandrag.