Biomechanics and Cognition

Stochastic team leaps...and lands on the cover of Science!

Olympic gymnasts stun the world with their performances on the uneven bars. Fortunately they don’t have to compete with squirrels.

Suppose instead of the uneven bars, human gymnasts had to fly through the canopies of trees, leaping gaps of varying distances, from branches of varying thicknesses, some stiff, some springy. And every landing would be different, on everything from trunks to twigs.

Oh, and then there are the hawks to watch out for.

What makes squirrels so good?

Lucia F. Jacobs, a cognitive psychologist who has studied squirrels extensively and was one of the authors of a report on the work in the journal Science, said, “In some ways as a squirrel biologist, none of this is very surprising. If we were going to have a squirrel Olympics, this would not even be the qualifying meet.”

But the meeting of cognitive and biomechanical minds to do a joint investigation was unusual…


Read James Gorman’s full article in The New York Times

Machine Learning + Art

OpenAI's first
artist-in-residence

At a reception for OpenAI’s first developer conference in San Francisco last month, a crowd mingled, wine in hand, as withering criticism of art created with artificial intelligence flashed on a blue wall at the front of the room. “I’ve seen more engaging art from a malfunctioning printer,” one critic jabbed. “The fine-art equivalent of elevator music,” huffed another. “Inoffensive, unmemorable and terminally dull.”

It might seem an odd strategy for OpenAI, the company behind widely used generative A.I. tools like ChatGPT and DALL-E, to promote scorn of A.I. art, until you catch the twist: A.I. itself wrote the criticism.  Alexander Reben, the MIT-educated artist behind the presentation, combined his own custom code with GPT-4, a version of the large language model that powers the ChatGPT online chatbot.

Mr. Reben, a Stochastic Labs artist, is OpenAI’s first artist in residence.


Read Leslie Katz’s full article in The New York Times

Biotech + Art

CRISPR decoded.
Creative contributions.

The popular dialogue around CRISPR to date has focused largely on its exceptional potential to cure challenging diseases such as HIV and malaria, or on doomsday scenarios of epic proportions. Meanwhile, the scientists developing this radical and compelling technology face a much more nuanced set of investigative and social concerns. A select group of artists and creative practitioners will be given an unprecedented opportunity to work alongside these scientists as they explore the groundbreaking topics that will shape the future of this critical field – and the world as we know it.

Stochastic Labs and the Innovative Genomics Institute at UC Berkeley announce:

CRISPR (un)commons: creative considerations and genetic innovation

Residents will attend a weekly seminar in five program areas (biomedicine, technology, agriculture, microbiology, and society) featuring top international scientists, and a monthly meeting with cross-disciplinary scholars interested in the ethics surrounding CRISPR applications and the regulation of this pioneering technology.

Watch the TED talk by Innovative Genomics Institute founder, Jennifer Doudna.

Congratulations to Andy Cavatorta, Alison Irvine, Kate Nichols, Sheng-Ying Pao, and Dorothy Santos! More information on the artists and their projects is available here.

Silicon Valley's "Paul Reveres"

Where is technology taking
democracy (and humanity)?

Pat Morrison of the Los Angeles Times caught up with Stochastic resident/Center for Humane Technology co-founder Aza Raskin, and asked…

Aza: Early on in the internet, there was Section 230 of the 1996 Telecommunications Act, which said that internet companies were not responsible for content that the users posted, which is a way of saying the internet and software were creating a space of deregulation where there were no protections for users. At the beginning that felt like a great thing. The web was this wild new world where creativity could be unleashed. You could connect with people and groups that could exist here that couldn’t exist elsewhere…

There is a [former Google engineer and] YouTube researcher, Guillaume Chaslot, who’s worked on the recommendation engine for what YouTube videos get played next, and what he’s discovered is that no matter where you start on YouTube, if you just let the recommended videos play four or five times, you always get pushed further and further down conspiracy roads, further and further toward radicalization. If you start on things about vegetarians or vegetarian food, you end up in chemtrails [conspiracy sites].

LA Times: So like Capt. Renault in “Casablanca,” are we shocked, shocked that this is happening, when this was in fact part of the business model all along?

Aza: Many times, when we talk to people, it’s like, Oh this is nothing, we’ve had advertising for a long time, we’ve had propaganda for a long time. What’s different this time — and that it’s hard to see when you’re inside of the machine — is for the very first time, our connections with our friends are intermediated.

Listen to Aza Raskin on The Joe Rogan Experience 

Read Pat Morrison’s full interview in The Los Angeles Times

Body as interface

Beyond Cyborgs.
Wearables that go skin-deep.

When 33-year-old quadriplegic Felipe Esteves saw Stochastic resident Katia Vega levitate a small drone just by blinking at it, he knew that was the kind of superhero he wanted to be. Wearing a white wig to keep her secret identity intact, channeling X-Men’s Storm, Vega was demonstrating her superhero tech at an expo. Each time she blinked with purpose, a tiny circuit, nearly invisible to onlookers, was completed and instructed a controller to move the drone. That circuit was hidden under her wig and was completed every time a pair of metallic false eyelashes met for long enough and connected to the conductive eyeliner Vega was wearing. Signals were transmitted to a Zigbee radio, with the receiver kept in the superhero’s handbag.
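
For readers curious how such a trigger might work, here is a minimal, hypothetical sketch of the blink-hold logic (this is not Vega’s firmware): read_contact() stands in for the eyelash-and-eyeliner circuit and send_drone_command() for the Zigbee link.

```python
import time

BLINK_HOLD_SECONDS = 0.5  # assumed threshold separating deliberate blinks from involuntary ones


def read_contact() -> bool:
    """Hypothetical: return True while the conductive eyelashes/eyeliner circuit is closed."""
    raise NotImplementedError


def send_drone_command(command: str) -> None:
    """Hypothetical: forward a command over the Zigbee radio to the drone controller."""
    raise NotImplementedError


def watch_for_deliberate_blinks() -> None:
    closed_since = None
    fired = False
    while True:
        if read_contact():
            if closed_since is None:
                closed_since = time.monotonic()   # contact just closed
            elif not fired and time.monotonic() - closed_since >= BLINK_HOLD_SECONDS:
                send_drone_command("takeoff")     # held long enough: treat as deliberate
                fired = True                      # fire once per long blink
        else:
            closed_since = None                   # eyes open again; short blinks are ignored
            fired = False
        time.sleep(0.01)                          # simple polling loop
```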

Sometimes, her blinks instructed animated images that read “POW, Bam, Zap” to pop up.

The system builds on Vega’s existing body of Beauty Technology prototypes, many of which take inspiration from special effects makeup used in movies. There are even characters, many of whom would not look out of place on the set of The Fifth Element: Arcana, for instance, is a futuristic messenger who changes the world around her with each blink, controlling the proliferation of music and images. Then there’s the hauntingly beautiful Kinisi. “Kinisi could change the world with a smile, a wink, raising her eyebrow or closing her lips,” says Vega. “Each of these muscle movements will trigger different light patterns.” Here, Vega employed the skills of FX makeup artist Larca Meicap, who combined her traditional tools with sensors applied to muscles and LEDs hidden in the skin and hair in patterns that came to life every time signals from the sensors activated a microcontroller.

“Wearable computing has changed the way individuals interact with computers, intertwining natural capabilities of the human body with processing apparatus,” Vega explains. “Beauty Technology transforms our body into an interactive platform by hiding technology in beauty products to create muscle-based interfaces that don’t give the wearer a cyborg look.” Vega lists some example products her company is working on: Conductive Makeup, Beauty Tech Nails, FX e-makeup and Hairware (“a new prototype I am working on in order to make your hair interactive”).

Read Liat Clark’s full article in Wired

AI + Animal Intelligence

AI-enabled, interspecies
climate conversation.

As the number of AI tools increases, providing researchers with new ways to tune into the animal kingdom, goals that once seemed fantastical may now be within reach.

Stochastic Labs’ Aza Raskin got the idea for Earth Species Project 10 years ago while listening to the radio. A scientist was describing her process of recording and transcribing what she believed to be one of the richest languages of primates — the sounds of gelada monkeys.

“Could we use machine learning and microphone arrays to understand a language we’ve never understood before?” Raskin remembers pondering.

In 2013, machine learning was not advanced enough to translate a language where no prior examples existed. But that started to change four years later, when researchers at Google published “Attention Is All You Need” — the paper that paved the way for large language models (LLMs) and generative AI. Suddenly, Raskin’s idea of translating animal languages “without the need for a Rosetta Stone” seemed possible.

The core insight of LLMs, explains Raskin, is that they treat everything as a language whose semantic relationships can be transcribed as geometric relationships. It is through this framework that Raskin conceptualises how generative AI could “translate” animals for humans.
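
To make “semantic relationships as geometric relationships” concrete, here is a minimal sketch (not from the article): a hypothetical embed() encoder maps items to vectors, and closeness is measured geometrically with cosine similarity. A real encoder would be a trained neural network; aligning two such spaces is what would allow “translation” without a Rosetta Stone.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Angle-based closeness of two embedding vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def embed(item: str) -> np.ndarray:
    """Stand-in encoder: returns a dummy vector derived from the item (same item, same vector)."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    return rng.normal(size=512)


# Semantically related items would land close together in a real embedding space.
print(cosine_similarity(embed("alarm call"), embed("alarm call")))    # identical item -> 1.0
print(cosine_similarity(embed("alarm call"), embed("feeding call")))  # unrelated dummy vectors -> near 0
```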


Read the full article at The Financial Times

In Conversation

Artbreeder/Morphogen
founder Joel Simon
with Vero Bollow

VERO: Joel, you released the first version of Artbreeder in 2018…years before OpenAI’s DALL-E, Midjourney, or Stability appeared on the scene. Most people had no idea what it meant to “generate” an image back then. Can you speak a little about what those early days were like?

JOEL: In the early days, the field of machine learning meets art was mostly inhabited by researchers and enthusiasts pursuing academic or niche artistic interests. There were no user-friendly interfaces or widely recognized AI art models. Terms like “latent space” or “prompting” were unfamiliar to the general public and required explanation. The excitement surrounding the technology was based on its potential rather than its practical applications.

But…it was also less frenzied; the generations felt less threatening and more experimental, due both to being more primitive and to not directly referencing living artists. This was before deep fakes, the fears of automation, and the extreme valuations which changed the incentives to share research. The meaning and value of these images was still being explored.

VERO: So, enter Artbreeder, and voilà, anyone with an internet connection could generate images! But from the very start, your interest wasn’t in “prompting”; instead you borrowed an interesting (and controversial) concept from the biotech domain: “gene-editing.”

JOEL: Back then, GANs (generative adversarial networks) were the dominant method of image generation and there was no intuitive method of controlling them. So I proposed a unifying biological metaphor to abstract away the technical complexity. Since I’ve always felt that the creative process is based on exploration and playful discovery, the parallel biological and creative metaphors really resonated for me.
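
As an editorial illustration of the “breeding” metaphor (this is not Artbreeder’s actual code): in a GAN, every image corresponds to a latent vector, so two images can be “crossed” by blending their latent “genes,” nudged by mutation, and re-rendered by a generator. The generator G below is a hypothetical stand-in for a pretrained model.

```python
import numpy as np

LATENT_DIM = 512  # a typical GAN latent size; these numbers are an image's "genes"


def crossover(parent_a: np.ndarray, parent_b: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """Blend two latent vectors; mix=0 returns parent_a, mix=1 returns parent_b."""
    return (1.0 - mix) * parent_a + mix * parent_b


def mutate(latent: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """Nudge the "genes" at random to explore images near the parent."""
    noise = np.random.default_rng().normal(scale=strength, size=latent.shape)
    return latent + noise


# Hypothetical usage with a pretrained generator G that maps latents to images:
#   child_latent = mutate(crossover(latent_a, latent_b, mix=0.3))
#   child_image  = G(child_latent)
```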

VERO: The first time I saw Artbreeder I remember thinking “wow, Joel has taken this really abstract technical concept from ML (i.e., the high-dimensional “space” of images) and made it not only totally understandable and accessible, but a fully palpable, explorable playground for…artists!” And then, well, along came the collective authorship aspect…

JOEL: Yes. The central motivation of Artbreeder was to use this new form of image representation to unleash the creative capacity of crowds. Images having a shared “space” enabled everything on the site to be remixable and shareable in this totally new, weird way. A unique discovery by one user could be shared and then evolved in new directions. Rather than focus on resolving the value of any one image, Artbreeder focused on forming a collective biological superorganism of discovery with images in the public domain.

VERO: What are your hopes for the future of this medium?

JOEL: I feel that this field has been focused on technical improvements and pursuing visual quality at the expense of authorship and expression. This has been further exacerbated by the way that these models are trained: learning from community feedback, which leads to a kind of regression to the mean of bland aesthetics and generic outputs. There is a craft to painting or sculpting that gives everyone room to carve out their own style, which is what makes it such a deeply satisfying way to express ourselves – without that we are only playing with technical demos. I think this medium will be at its best when we figure out how to combine this new accessibility of authorship with an enhanced (rather than reduced) capacity for unique expression and development of true craft. I hope to be able to view works that are both personal and that would be otherwise unimaginable.

Explore the tool: Artbreeder

Intellectual Property (deauthorized)...

Un-patenting algorithmic
bias, profiling, and addiction.

For years, Stochastic resident Paolo Cirio has been turning data into digital activist art in inventive ways. His Obscurity social justice project, for instance, took on the predatory online mugshot industry that charges people with even minor arrest records exorbitant fees to have their pictures removed. Cirio cloned the sites and shuffled their data, obfuscating the records.

The Italian artist’s latest, Sociality, is no less impressive–and no less eye-opening.

Cirio aggregates and sorts 20,000 social media and other tech patents into a searchable database that reveals just how invasive our digital devices have become. Patents with names like:

  • Method of advertising by user psychosocial profiling.
  • Mental state analysis of voters.
  • Predicting user posting behavior in social media applications.

“We [understand] the power of mass media, like television, advertising, etc.–they teach this even at school,” Cirio tells Fast Company. “However, it’s not common knowledge how the media of algorithms, user interfaces, and personal devices are much more powerful and sophisticated in manipulating people. This should be an educational issue but also a legislative one.”

Read DJ Pangburn’s full article in Fast Company

Technology and Privacy

Alexa, Siri, and Cortana have a new competitor. Meet Lauren.

Eleven million Amazon Echoes sit on kitchen counters today. Most people who own one–or any other smart home speaker–probably don’t spend a lot of time questioning the fact that this always-listening device records data about them and then ferrets it away in a server, where it is used in ways they may never know about. But would we question that arrangement if Alexa were a real person, rather than a device?

That’s the idea Stochastic artist and UCLA assistant professor Lauren McCarthy is putting to the test. This week, McCarthy launched a project called Lauren in which the Los Angeles-based artist embodies an eponymous smart home assistant. For three days, she acts as the brains behind a willing volunteer’s smart home, doing everything from turning on lights to giving advice to just chatting, like a living, breathing Alexa, Cortana, or Siri.

“I’m thinking of myself like a learning algorithm,” she says. “The first day is rough–an early prototype of Lauren–and the future [Lauren] has learned and is more skilled and effective.” To carry out the project, McCarthy installs smart home appliances and cameras all over the home of the willing user. That means she has full control over the lights, music, and temperature, as well as locks, faucets, and even tea kettles and hair dryers…

Read Katharine Schwab’s full article in Fast Company

Autonomous Drones

Introducing Icarus 2.
Fly too close to the sun.

Autonomous drone technology in the military sphere is challenging structures of accountability and responsibility. Stochastic artist and creative technologist Troy Lumpkin uses drone technology to create art – his graffiti drone, which he hopes will soon be capable of autonomously creating its own artworks, challenges our notions of authorship, creativity and power.

Stochastic: Tell us about the Icarus drone project. What do you hope to achieve?

Troy: The Icarus drone is an ongoing experiment in examining automated painting systems as well as collaborative open source hardware initiatives. The drone is made of easily accessible materials. It’s a consumer-grade camera quadcopter and a micro Arduino with a 3D-printed robotic spray system (which allows it to spray work that’s larger and more far-reaching than anything that could be achieved with any other tool currently available on the market). Ultimately, I’m looking to expand the creative reach of the human body, and to raise questions like whether artificial intelligence and computer systems are capable of creating art that humans will appreciate.

Stochastic: What does this mean for artists?

Troy: Aside from reaching previously unreachable surfaces, drone-painting technology begins to examine how the actual labor of art fabrication can be outsourced to autonomous systems. What if the things we created could create art? Would they create art? And if so, who is the author? At the moment, I have little control over the aesthetic with drone paintings, but technically, I retain the underlying authorship.

Exhibition

Stochastic @ Ars Electronica
"Strange Temporalities"

Can we continue to distinguish the future from the present? Should we? The rapidly accelerating impact of technology on our society, environment, and selves has, in recent years, left us questioning the boundaries between science and science-fiction, optimism and hindsight, the authentic and the fabricated, the familiar and the unimaginable. But what about the less perceptible boundaries, those strange delineations we draw unaware?

Stochastic convened a unique group of artists, engineers, scientists, thought leaders, and entrepreneurs to consider these questions through the production of artworks, prototypes, and social provocations. Drawing on the Bay Area’s longtime culture of innovation, deep sustainability focus, and multi-generational commitment to independent thinking, these works ask the viewer to be present and future at once —a useful strategy, perhaps, for anyone navigating temporalities mediated by technology.

The exhibition includes work by past and current Stochastic residents including Ars Electronica Golden Nica recipients Paolo Cirio and Lauren Lee McCarthy as well as pieces from the CRISPR (un)commons residency, which places Stochastic Labs artists alongside the world’s leading genomics pioneers at the Innovative Genomics Institute at UC Berkeley.

Read full exhibition catalogue at Ars Electronica

In Conversation

Advadnoun (Ryan Murdock)
with Vero Bollow

VERO: Ryan, let’s go back in time for a minute. The year is 2021, OpenAI has just open-sourced CLIP, and then you go and publish this totally groundbreaking open-source Colab notebook which allows people to…actually use it! You fueled this whole emergent ecosystem of independent, open-source innovation in an entirely new domain. Can you speak a little about your experience?

RYAN: When CLIP came out, I wanted to know what it was “focusing” on, so I started to probe neurons using a method similar to what DeepDream did where you optimize an image to be “exciting” to specific parts of a neural network. Eventually, I realized I could use the approach to optimize the match between the CLIP image and text encoders’ outputs, which allows us to generate images from essentially any text with CLIP. Back in the day most models had very specific domains of what they could generate (like just faces or just certain classes from ImageNet) so this was pretty radical, considering that all we’d seen at that point were just a few demo images from the original DALL-E!
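
A rough sketch of the technique Murdock describes, using OpenAI’s open-source clip package and PyTorch. For simplicity it optimizes raw pixels, whereas the actual notebooks optimized the latent input of a separate image generator and used augmentations and regularizers omitted here.

```python
import torch
import clip  # OpenAI's open-source CLIP package (github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()                       # keep everything in fp32 for a simple example
for p in model.parameters():
    p.requires_grad_(False)                        # CLIP stays frozen; only the image is optimized

# Encode the target text once, outside the optimization loop.
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(["a horse with four eyes"]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Start from noise and nudge the pixels so CLIP "sees" the text in the image.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = -(image_features * text_features).sum()  # maximize image-text cosine similarity
    loss.backward()
    optimizer.step()
```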

I open sourced the notebook for that and several others down the line, and it was a really special time for me. I loved getting to see how people were using it and creating their own tools in what really felt like an explosive few years. The community back then (and still in some places now, but it’s different when things become an industry — in some areas at least) was so happy to share, and there was so much excitement about the potential. I’m glad that this open spirit still exists in a lot of strains and places. And I really do think there was some phenomenal art being made — a lot of it by people who had been doing ML for years but also a fair amount by people who had a background in the humanities or writing who could leverage their expertise into a whole new modality-crossing medium.

VERO: Any fun anecdotes to share?

RYAN: One random anecdote that I always come back to was when I wanted to see what the notebooks would do with an impossible or unlikely image (I was prompted by Janelle Shane asking GPT-2 to identify how many eyes a horse has — and GPT-2 had no idea, saying everything from one to ten eyes), so I typed in “a horse with four eyes” expecting some kind of monstrosity. Instead the model produced an image of a horse wearing glasses, which I thought was delightful.

But it really nailed home to me that these models (as Ted Underwood likes to say) don’t just model text or images in a vacuum; they model culture. So I think pretty often about what these models know and what opinions they advance — in ways that can be charming or insidious.

VERO: Let’s talk a bit about your exploration last summer at Stochastic. How would you describe what you were/are working on? What motivates this project for you?

RYAN: Last summer at Stochastic I looked at a few projects, but my favorite right now was focused on personalized preference learning for image generation — I’m actually planning to share a little blog post summing up that thread soon! The general idea in its current form is to synthesize work in generative ML and recommendation systems to create a system that can take in user interactions with media at scale and generate new media for specific users based on those interactions. This is similar to what Joel Simon talks about in some ways: trying to avoid this sort of one-size-fits-all approach to model aesthetics in favor of fitting niches of people with shared interests and stylistic senses.

I’m imagining some of these types of systems can and will look a lot like TikTok (though they could exist for images, text, music, etc.) but instead of allowing for just algorithmic distribution, they’d also allow for algorithmic generation as well. Which all frankly looks fairly bleak & dystopic! Maybe it’s a bit fatalistic, but I think that getting ahead of ideas like this before they’re deployed (if that does happen) and providing some openness is probably preferable to the alternative of them still rolling out but in the form of corporate black boxes.

I’ve done some explorations in my own practice where I’ve focused on being in-the-loop in a system that takes in interactions (usually with a yes/no or 1-to-10 score) and produces images based on those interactions, which are then interacted with and fed back in, over and over, and it’s a kind-of weird experience. I feel like I’ve genuinely made some images that are specifically dazzling to me, and I’m still digesting whether I think the process is artistic and fulfilling or just wireheading.
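
A minimal sketch of that kind of human-in-the-loop preference cycle (not Murdock’s actual system): generate_image() and ask_user_score() are hypothetical stand-ins for a generative model and the yes/no or 1-to-10 feedback he describes, and a simple regression over latent vectors biases each new batch toward what the user has liked so far.

```python
import numpy as np
from sklearn.linear_model import Ridge

LATENT_DIM = 256
rng = np.random.default_rng(0)


def generate_image(latent: np.ndarray):
    """Hypothetical: decode a latent vector into an image with some generative model."""
    raise NotImplementedError


def ask_user_score(image) -> float:
    """Hypothetical: show the image and collect a 1-to-10 rating (or yes/no mapped to 10/1)."""
    raise NotImplementedError


latents, scores = [], []
preference_model = Ridge(alpha=1.0)

for round_idx in range(20):
    # Propose candidates: random early on, then biased toward latents predicted to score well.
    candidates = rng.normal(size=(64, LATENT_DIM))
    if len(scores) >= 8:
        preference_model.fit(np.stack(latents), np.array(scores))
        predicted = preference_model.predict(candidates)
        candidates = candidates[np.argsort(predicted)[-8:]]  # keep the most promising
    else:
        candidates = candidates[:8]

    # Collect feedback and fold it back into the training set for the next round.
    for latent in candidates:
        score = ask_user_score(generate_image(latent))
        latents.append(latent)
        scores.append(score)
```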

VERO: You’ve mentioned previously that this kind of personalized preference learning for image generation really needs to be done thoughtfully in order to avoid ending up with an incredibly narrow system in which you can essentially only create what you’ve already liked  (you have a fun metaphor for this: “the artistic equivalent of drinking sugar water”) and moreover, to actually empower our deeper sense of exploration. Do you have any specific instincts or insights about what doing this work “thoughtfully” might entail?

RYAN: I think that doing it thoughtfully really requires the right incentive structures, mostly! If a company does it, they will probably try to maximize engagement, and it’ll be a time-suck at best. But I think if people do it for themselves, there’s a good chance it could be really interesting.

VERO: What has been the value of open source in the evolution of this stuff, and what role might the open source community fill in the future?

RYAN: I think that one way to approach it is considering what the area would look like without open source. In my opinion we’d have pretty much all of the same or similar concerns over economics, social impacts, etc. — all of which I think should be taken seriously — and we’d also be paying $22.99 per month for them.

I also think we’d have much less performative tech with worse biases (academic labs, for example, have done so much important work here that isn’t really possible behind an API.) People really underestimate how important accessibility is. So I think that the role here has been huge for shaping what this technology is and what it means, and I hope that we’ll continue to value that going into the future!

VERO: On a personal level, what has open source meant to you?

RYAN: I think that what’s really beautiful to me is seeing people work on something because they find it intrinsically interesting or engaging without any guarantee of personal gain. I feel really lucky that I was in a place where I had time and energy and space to do something that I wasn’t sure would ever come back. Getting to do that in an in-person community setting like Stochastic is just a joy.

Exhibition

An AI dreams up imaginary artworks...then the artist creates them

In one of the starkest pieces in Alexander Reben’s AI Am I? (The New Aesthetic), a series of plungers of varying lengths sit before a white wall, their descending pattern hearkening to cell phone bars. The description for the piece, titled “A Short History of Plungers and Other Things That Go Plunge in the Night,” reads: “The sculpture contains a plunger, a toilet plunger, a plunger, a plunger, a plunger, a plunger, each of which has been modified.” It states that the piece was created by a collective of anonymous artists founded in 1972 known as “The Plungers” (quotes theirs), who were dedicated to “the conceptualization and promotion of a new art form called Plungism.” The work apparently made such a splash that it became a “landmark of conceptual art and one of the most famous artworks of the late 20th century, and it was even featured on an episode of Seinfeld in 1997.”

None of the above, unfortunately, is historically accurate. The entire description—art, artist, history, even the title of the exhibition and the majority of the artist statement—was produced by the third generation of the language-predicting deep learning model created by OpenAI…

Read Jesse Damiani’s full article in Forbes