VERO: Joel, here we are in the early days of Machine Learning meets Art. We are just witnessing the first glimpses of how this powerful technology might impact the future of human creativity and expression. Most people experimenting in this space are AI experts, but you've gone and built these incredible interfaces so that anyone with a laptop can get their feet wet. Why is making these tools accessible so important?
JOEL: Well, thank you! There are a couple of reasons. First, many higher-end creative tools today are built for professionals. I think machine learning is distinct from past software advances because it doesn't just enable this kind of specialized content creation; it actually makes the whole creation process itself more accessible…to anyone! Machine learning will eventually enable someone to make a whole film by themselves! Second, Artbreeder users are able to understand the technology with very different intuitions than the researchers who made it. It's a valid and valuable form of understanding, one which often ends up informing the research as well.
VERO: For Artbreeder, you've borrowed an interesting (and controversial) concept from the biotech domain: "gene-editing." Can you tell me more about that choice?
JOEL: The central premise of Artbreeder is to use biological metaphors, treating each image as a biological entity, to abstract away the complexity of a machine learning system. "Latent space" and "class vector" are not terms most people know, but we all have an understanding of mating and genes. Points in the latent space can be mixed together (mating) or have transformations applied to them (genes), so the parallels just make sense. The metaphors also abstract things in a way that doesn't really hide what's going on, so they remain clear to anyone with a technical understanding as well.
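(For technically-minded readers, here is a minimal sketch, not Artbreeder's actual code, of how "mating" and "genes" might map onto latent-vector operations, assuming a StyleGAN/BigGAN-style generator that turns a latent vector into an image. The names `crossbreed`, `apply_gene`, and the commented-out `generate_image` call are hypothetical illustrations.)

```python
import numpy as np

LATENT_DIM = 512  # typical latent size for a StyleGAN-like generator (assumption)

def crossbreed(parent_a: np.ndarray, parent_b: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """'Mating': interpolate between two parents' latent vectors."""
    return (1.0 - mix) * parent_a + mix * parent_b

def apply_gene(latent: np.ndarray, gene_direction: np.ndarray, strength: float) -> np.ndarray:
    """'Gene': nudge a latent vector along a learned direction (e.g. 'age' or 'smile')."""
    return latent + strength * gene_direction

# Two random "parents" and a random "gene" direction, purely for illustration.
rng = np.random.default_rng(0)
parent_a = rng.standard_normal(LATENT_DIM)
parent_b = rng.standard_normal(LATENT_DIM)
smile_direction = rng.standard_normal(LATENT_DIM)

child = apply_gene(crossbreed(parent_a, parent_b, mix=0.3), smile_direction, strength=0.8)
# image = generate_image(child)  # hypothetical generator call
```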
VERO: Machine Learning is a telescope that can be pointed in many directions. Watching Artbreeder grow over the past few years, it's safe to say that the kinds of images the users create (or perhaps I should say "co-create^n") have been deeply influential for the evolution of the interface itself. What is it like to develop a tool in tandem with its user base?
JOEL: It's been great to have them co-develop! There are two levels this happens on. First, the whole corpus of images on the site is constantly growing as users make new images, so every image made by someone helps everyone else! The idea is premised on the fact that the way images are created with machine learning, by sampling vast quantities of possibilities (i.e., high-dimensional spaces), is impossible for one person to comprehend and explore alone, so crowdsourcing it is helpful for everyone. Second, the community is very involved in the design process. As an indie developer for the first two years of Artbreeder, I found it helpful and rewarding to always have a group of people to bounce ideas off of and talk to.
VERO: Let’s talk about Prose Painter. You call it a “sibling” to Artbreeder…why? And are there going to be more kids in this family?
JOEL: So Prose Painter is an AI-driven, open-source tool which allows humans to “paint with words” by incorporating guidable text-to-image generation into a traditional digital painting interface.
I think that Prose Painter and Artbreeder are both the same kind of thing: a new interface applied to powerful technology to make it playful and accessible. But they also work very well together. Artbreeder is great for open-ended exploration but has limits to what can be done. Editing an Artbreeder image in Prose Painter is a great way to synergize the two! There is another app in the family coming soon; it is based on taking the images made in Artbreeder and Prose Painter and bringing them to life with animation! So it's a happy family / ecosystem of apps that all work together.
Explore the tools: Artbreeder, Prose Painter