Biomechanics and Cognition

Stochastic team leaps...and lands on the cover of Science!

Olympic gymnasts stun the world with their performances on the uneven bars. Fortunately they don’t have to compete with squirrels.

Suppose instead of the uneven bars, human gymnasts had to fly through the canopies of trees, leaping gaps of varying distances, from branches of varying thicknesses, some stiff, some springy. And every landing would be different, on everything from trunks to twigs.

Oh, and then there are the hawks to watch out for.

What makes squirrels so good?

Lucia F. Jacobs, a cognitive psychologist who has studied squirrels extensively and was one of the authors of a report on the work in the journal Science, said, “In some ways as a squirrel biologist, none of this is very surprising. If we were going to have a squirrel Olympics, this would not even be the qualifying meet.”

But the meeting of cognitive and biomechanical minds to do a joint investigation was unusual…


Read James Gorman’s full article in The New York Times

Machine Learning

"Mind-painting" with AI,
EEG, and amalGAN

Stochastic artist, provocateur, and technologist Alexander Reben has produced another series of visually arresting paintings — this time without picking up a brush or a tube of paint.

Reben’s technique begins with a computer model that combines words together to generate a single — often unsettling — image. The artificially intelligent program might combine the words “clock” and “jellyfish,” for example, creating a melted visualization that looks like the spawn of a psychedelic art exhibit and a nightmarish dream. The results are paired with other images, creating a “child” image that somehow enhances the visual chaos, according to Reben.

When Reben views those images, an AI measures his brain waves and body signals and selects the image he likes best. An image that passes the likability test undergoes several more rounds of AI retouching to enhance its resolution. The finished images are then sent to China, where they are painted by hand.
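
For readers curious about the mechanics, here is a toy Python sketch of that generate–breed–select–retouch loop. None of it is Reben's actual system: every function is a dummy stand-in, and the extra word prompts, scoring, and upscaling are invented for illustration. Only the shape of the pipeline follows the description above.

```python
import random

# Toy stand-ins for each stage of the pipeline described above.
# Nothing here is Reben's actual system; the functions are dummies
# so the control flow runs end to end.

def generate_image(prompt):            # stand-in for a text-to-image model
    return {"prompt": prompt, "pixels": random.random()}

def breed(parent_a, parent_b):         # stand-in for pairing images into a "child"
    return {"prompt": parent_a["prompt"] + " + " + parent_b["prompt"],
            "pixels": (parent_a["pixels"] + parent_b["pixels"]) / 2}

def read_biosignals(image):            # stand-in for EEG / body-signal capture
    return [random.gauss(image["pixels"], 0.1) for _ in range(8)]

def likability(signals):               # stand-in for a learned preference model
    return sum(signals) / len(signals)

def upscale(image):                    # stand-in for AI retouching / super-resolution
    image["pixels"] = min(1.0, image["pixels"] * 1.1)
    return image

seed = generate_image("clock jellyfish")      # the unsettling word-pair stage
candidates = [breed(seed, generate_image(w)) for w in ("cloud", "cathedral")]
favorite = max(candidates, key=lambda im: likability(read_biosignals(im)))
for _ in range(3):                            # several rounds of retouching
    favorite = upscale(favorite)
print(favorite["prompt"])                     # off to the painter
```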

“It’s going to be their best effort and interpretation of what they see,” Reben said, referring to the painter’s final product. “That whole loop from my brain waves to an anonymous painter is a really interesting chain.”

Read Peter Holley’s full article in The Washington Post

Biotech + Art

CRISPR decoded.
Creative contributions.

The popular dialogue around CRISPR to date has focused largely on its exceptional potential to cure challenging diseases such as HIV and malaria, or on doomsday scenarios of epic proportions. Meanwhile, the scientists developing this radical and compelling technology face a much more nuanced set of investigative and social concerns. A select group of artists and creative practitioners will be given an unprecedented opportunity to work alongside these scientists as they explore the groundbreaking topics that will shape the future of this critical field – and the world as we know it.

Stochastic Labs and the Innovative Genomics Institute at UC Berkeley announce:

CRISPR (un)commons: creative considerations and genetic innovation

Residents will attend a weekly seminar in five program areas (biomedicine, technology, agriculture, microbiology, and society) featuring top international scientists, and a monthly meeting with cross-disciplinary scholars interested in the ethics surrounding CRISPR applications and the regulation of this pioneering technology.

Watch the TED talk by Innovative Genomics Institute founder Jennifer Doudna.

Congratulations to Andy Cavatorta, Alison Irvine, Kate Nichols, Sheng-Ying Pao, and Dorothy Santos! More information on the artists and their projects is available here.

Body as interface

Beyond Cyborgs.
Wearables that go skin-deep.

When 33-year-old quadriplegic Felipe Esteves saw Stochastic resident Katia Vega levitate a small drone just by blinking at it, he knew that was the kind of superhero he wanted to be. Wearing a white wig to keep her secret identity intact and channeling X-Men’s Storm, Vega was demonstrating her superhero tech at an expo. Each time she blinked with purpose, a tiny circuit, nearly invisible to onlookers, closed and instructed a controller to move the drone. The rest of the circuit was hidden under her wig, and it was completed every time her metallic false eyelashes met for long enough, connecting through the conductive eyeliner she was wearing. Signals were transmitted to a Zigbee radio, with the receiver kept in the superhero’s handbag.
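
For the electronics-minded, the blink switch amounts to a contact closure plus a hold-time check. The MicroPython-style sketch below is our own illustration, not Vega's firmware; the pin numbers, the 400 ms hold threshold, and the "UP" command byte are all assumptions.

```python
# Illustrative MicroPython sketch of a blink-completed circuit (not Vega's
# firmware). Pin numbers, the hold threshold, and the command byte are assumed.
from machine import Pin, UART
import time

lash_contact = Pin(4, Pin.IN, Pin.PULL_UP)   # lashes + conductive eyeliner pull low
radio = UART(1, baudrate=9600)               # e.g. a Zigbee module on UART1

HOLD_MS = 400                                # a deliberate blink, not a reflex
closed_since = None

while True:
    if lash_contact.value() == 0:            # touching lashes close the circuit
        if closed_since is None:
            closed_since = time.ticks_ms()
        elif time.ticks_diff(time.ticks_ms(), closed_since) >= HOLD_MS:
            radio.write(b"UP")               # tell the receiver to lift the drone
            while lash_contact.value() == 0: # wait for release to avoid repeats
                time.sleep_ms(10)
            closed_since = None
    else:
        closed_since = None                  # an ordinary blink: too short to count
    time.sleep_ms(10)
```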

Sometimes, her blinks made animated images reading “POW, Bam, Zap” pop up.

The system builds on Vega’s existing body of Beauty Technology prototypes, many of which take inspiration from special effects makeup used in movies. There are even characters, many of whom would not look out of place on the set of The Fifth Element: Arcana, for instance, is a futuristic messenger who changes the world around her with each blink, controlling the proliferation of music and images. Then there’s the hauntingly beautiful Kinisi. “Kinisi could change the world with a smile, a wink, raising her eyebrow or closing her lips,” says Vega. “Each of these muscle movements will trigger different light patterns.” Here, Vega employed the skills of FX makeup artist Larca Meicap, who combined her traditional tools with sensors applied to muscles and LEDs hidden in the skin and hair in patterns that came to life every time signals from the sensors activated a microcontroller.

“Wearable computing has changed the way individuals interact with computers, intertwining natural capabilities of the human body with processing apparatus,” Vega explains. “Beauty Technology transforms our body into an interactive platform by hiding technology in beauty products to create muscle-based interfaces that don’t give the wearer a cyborg look.” Vega lists some example products her company is working on: Conductive Makeup, Beauty Tech Nails, FX e-makeup and Hairware (“a new prototype I am working on in order to make your hair interactive”).

Read Liat Clark’s full article in Wired

Silicon Valley's "Paul Reveres"

Where is technology taking
democracy (and humanity)?

Pat Morrison of the Los Angeles Times caught up with Stochastic resident/Center for Humane Tech co-founder Aza Raskin, and asked…

Aza: Early on in the internet, there was Section 230 of the 1996 Telecommunications Act, which said that internet companies were not responsible for content that the users posted, which is a way of saying the internet and software were creating a space of deregulation where there were no protections for users. At the beginning that felt like a great thing. The web was this wild new world where creativity could be unleashed. You could connect with people, and groups could exist here that couldn’t exist elsewhere…

There is a [former Google engineer and] YouTube researcher, Guillaume Chaslot, who’s worked on the recommendation engine that decides which YouTube videos get played next, and what he’s discovered is that no matter where you start on YouTube, if you just let the recommended videos play four or five times, you always get pushed further and further down conspiracy roads, further and further toward radicalization. If you start on things about vegetarians or vegetarian food, you end up in chemtrails [conspiracy sites].

LA Times: So like Capt. Renault in “Casablanca,” are we shocked, shocked that this is happening, when this was in fact part of the business model all along?

Aza: Many times, when we talk to people, it’s like, Oh this is nothing, we’ve had advertising for a long time, we’ve had propaganda for a long time. What’s different this time, and it’s hard to see when you’re inside the machine, is that for the very first time our connections with our friends are intermediated.

LA Times: What remedies does your group suggest?

Aza: There are a couple of simple things that individual users can do, just to fight against the effects of digital addiction. One of our favorite ones is to turn your phone to black and white mode. What we’ve found is that just reducing the sugariness of the colorfulness of your icons makes it a little easier for you to put down your phone. Another one is to turn off all notifications from non-humans. So, no apps, no likes, just stuff that real people said. And that immediately reduces the amount of buzzing in your pocket and reduces tech addiction.

Read Pat Morrison’s full interview in The Los Angeles Times

Intellectual Property (deauthorized)...

Un-patenting algorithmic
bias, profiling, and addiction.

For years, Stochastic resident Paolo Cirio has been turning data into digital activist art in inventive ways. His Obscurity social justice project, for instance, took on the predatory online mugshot industry, which charges people, even those with only minor arrests, exorbitant fees to have their pictures removed. Cirio cloned the sites and shuffled their data, obfuscating the records.

The Italian artist’s latest, Sociality, is no less impressive–and no less eye-opening.

Cirio aggregates and sorts 20,000 social media and other tech patents into a searchable database that reveals just how invasive our digital devices have become. Patents with names like:

  • Method of advertising by user psychosocial profiling.
  • Mental state analysis of voters.
  • Predicting user posting behavior in social media applications.
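
As a hedged illustration of what "searchable" means here, the Python sketch below builds a tiny inverted index over the three titles quoted above. It is a toy, not Cirio's actual database; the query at the end is our own example.

```python
# A toy inverted index over patent titles -- an illustration of a searchable
# database in the spirit of Sociality, not Cirio's actual code.
from collections import defaultdict

titles = [
    "Method of advertising by user psychosocial profiling",
    "Mental state analysis of voters",
    "Predicting user posting behavior in social media applications",
]

index = defaultdict(set)                  # word -> ids of titles containing it
for i, title in enumerate(titles):
    for word in title.lower().split():
        index[word].add(i)

def search(query):
    """Return every title containing all words in the query."""
    words = query.lower().split()
    if not words:
        return []
    hits = set.intersection(*(index[w] for w in words))
    return [titles[i] for i in sorted(hits)]

print(search("user profiling"))           # -> the psychosocial-profiling patent
```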

“We [understand] the power of mass media, like television, advertising, etc.–they teach this even at school,” Cirio tells Fast Company. “However, it’s not common knowledge how the media of algorithms, user interfaces, and personal devices are much more powerful and sophisticated in manipulating people. This should be an educational issue but also a legislative one.”

Read DJ Pangburn’s full article in Fast Company

New Boundaries for Computation and Art

An AI dreams up imaginary artworks...then the artist creates them

In one of the starkest pieces in Alexander Reben’s AI Am I? (The New Aesthetic), a series of plungers of varying lengths sit before a white wall, their descending pattern hearkening back to cell phone signal bars. The description for the piece, titled “A Short History of Plungers and Other Things That Go Plunge in the Night,” reads: “The sculpture contains a plunger, a toilet plunger, a plunger, a plunger, a plunger, a plunger, each of which has been modified.” It states that the piece was created by a collective of anonymous artists founded in 1972 known as “The Plungers” (quotes theirs), who were dedicated to “the conceptualization and promotion of a new art form called Plungism.” The work apparently made such a splash that it became a “landmark of conceptual art and one of the most famous artworks of the late 20th century, and it was even featured on an episode of Seinfeld in 1997.”

None of the above, unfortunately, is historically accurate. The entire description—art, artist, history, even the title of the exhibition and the majority of the artist statement—was produced by GPT-3, the third generation of the language-predicting deep learning model created by OpenAI…

Read Jesse Damiani’s full article in Forbes

Technology and Privacy

Alexa, Siri, and Cortana have a new competitor. Meet Lauren.

Eleven million Amazon Echoes sit on kitchen counters today. Most people who own one–or any other smart home speaker–probably don’t spend a lot of time questioning the fact that this always-listening device records data about them and then ferrets it away in a server, where it is used in ways they may never know about. But would we question that arrangement if Alexa were a real person, rather than a device?

That’s the idea Stochastic artist and UCLA assistant professor Lauren McCarthy is putting to the test. This week, McCarthy launched a project called Lauren in which the Los Angeles-based artist embodies an eponymous smart home assistant. For three days, she acts as the brains behind a willing volunteer’s smart home, doing everything from turning on lights to giving advice to just chatting, like a living, breathing Alexa, Cortana, or Siri.

“I’m thinking of myself like a learning algorithm,” she says. “The first day is rough–an early prototype of Lauren–and the future [Lauren] has learned and is more skilled and effective.” To carry out the project, McCarthy installs smart home appliances and cameras all over the home of the willing user. That means she has full control over the lights, music, and temperature, as well as locks, faucets, and even tea kettles and hair dryers…

Read Katharine Schwab’s full article in Fast Company

Autonomous Drones

Introducing Icarus 2.
Fly too close to the sun.

Autonomous drone technology in the military sphere is challenging structures of accountability and responsibility. Stochastic artist and creative technologist Troy Lumpkin uses drone technology to create art – his graffiti drone, which he hopes will soon be capable of autonomously creating its own artworks, challenges our notions of authorship, creativity and power: Who is the artist, the human or the machine?

Stochastic: Tell us about the Icarus drone project. What do you hope to achieve?

Troy: The Icarus drone is an ongoing experiment in examining automated painting systems as well as collaborative open source hardware initiatives. The drone is made of easily accessible materials: a consumer-grade camera quadcopter and a micro Arduino with a 3D-printed robotic spray system (which allows it to spray work that’s larger and more far-reaching than anything that could be achieved with any other tool currently on the market). Ultimately, I’m looking to expand the creative reach of the human body, and to raise questions like whether artificial intelligence and computer systems are capable of creating art that humans will appreciate.
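
To make the hardware description concrete, here is a MicroPython-style sketch of the sort of glue logic that could sit between a flight controller and a servo-driven spray nozzle. It is our illustration, not Troy's firmware; the pins, pulse widths, and trigger signal are all assumptions.

```python
# Illustrative sketch of a servo-actuated spray trigger (not the Icarus
# firmware). Pin choices, duty values, and the trigger input are assumed.
from machine import Pin, PWM
import time

servo = PWM(Pin(15), freq=50)                # hobby servo pressing the nozzle
trigger = Pin(14, Pin.IN, Pin.PULL_DOWN)     # raised by the flight controller

REST, PRESS = 40, 77                         # ~1.0 ms and ~1.5 ms pulses at 50 Hz

servo.duty(REST)
while True:
    servo.duty(PRESS if trigger.value() else REST)  # spray only while commanded
    time.sleep_ms(20)                        # one 50 Hz servo frame
```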

Stochastic: What does this mean for artists?

Troy: Aside from reaching previously unreachable surfaces, drone-painting technology begins to examine how the actual labor of art fabrication can be outsourced to autonomous systems. What if the things we created could create art? Would they create art? And if so, who is the author? At the moment, I have little control over the aesthetic with drone paintings, but technically, I retain the underlying authorship.

Exhibition

Stochastic @ Ars Electronica
"Strange Temporalities"

Can we continue to distinguish the future from the present? Should we? The rapidly accelerating impact of technology on our society, environment, and selves has, in recent years, left us questioning the boundaries between science and science-fiction, optimism and hindsight, the authentic and the fabricated, the familiar and the unimaginable. But what about the less perceptible boundaries, those strange delineations we draw unaware?

Stochastic convened a unique group of artists, engineers, scientists, thought leaders, and entrepreneurs to consider these questions through the production of artworks, prototypes, and social provocations. Drawing on the Bay Area’s longtime culture of innovation, deep sustainability focus, and multi-generational commitment to independent thinking, these works ask the viewer to be present and future at once – a useful strategy, perhaps, for anyone navigating temporalities mediated by technology.

The exhibition includes work by past and current Stochastic residents, including Ars Electronica Golden Nica recipients Paolo Cirio and Lauren Lee McCarthy, as well as pieces from the CRISPR (un)commons residency, which places Stochastic Labs artists alongside the world’s leading genomics pioneers at the Innovative Genomics Institute at UC Berkeley.

Read the full exhibition catalogue at Ars Electronica

In Conversation

Artbreeder/Morphogen
founder Joel Simon
with Vero Bollow

VERO: Joel, here we are in the early days of Machine Learning meets Art. We are just witnessing the first glimpses of how this powerful technology might impact the future of human creativity and expression. Most people experimenting in this space are AI experts–but you’ve gone and built these incredible interfaces such that anyone with a laptop can get their feet wet. Why is making these tools accessible so important?

JOEL: Well, thank you! There are a couple reasons. First, many higher-end creative tools today are built for professionals. I think machine learning is distinct from past software advances because it doesn’t just enable this kind of specialized content creation, it actually makes the whole creation process itself more accessible…to anyone! Machine learning will eventually enable someone to make a whole film by themselves! Second, Artbreeder users are able to understand the technology with very different intuitions than the researchers who made it. It’s a valid and valuable form of understanding which often ends up informing the research as well.

VERO: For Artbreeder, you’ve borrowed an interesting (and controversial) concept from the biotech domain: “gene-editing.” Can you tell me more about that choice?

JOEL: The central premise of Artbreeder is to use biological metaphors–each image as a biological entity–to abstract away the complexity of a machine learning system. “Latent space” and “class vector” are not terms that most people know, but we all have an understanding of mating and genes. The latent space can have points mixed (mating) or transformations applied (genes), so the parallels just make sense. They also abstract it in a way that doesn’t really hide what’s going on and is also clear to anyone with a technical understanding.
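
In latent-variable models like GANs, that metaphor maps onto simple vector arithmetic. The numpy sketch below is schematic, not Artbreeder's code: `generator` stands in for a trained image generator, and the "age" direction is invented for illustration (in a real system it would be learned).

```python
# Schematic numpy version of "mating" and "genes" in a GAN's latent space.
# Not Artbreeder's code; the generator and gene directions are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
DIM = 512                                   # a typical GAN latent dimensionality

parent_a = rng.standard_normal(DIM)         # each image is a point in latent space
parent_b = rng.standard_normal(DIM)

def mate(a, b, t=0.5):
    """'Mating': a child interpolates between its parents."""
    return (1 - t) * a + t * b

def edit_gene(latent, direction, amount):
    """'Gene': nudge the latent along a learned attribute direction."""
    return latent + amount * direction

child = mate(parent_a, parent_b, t=0.3)     # 30% parent_b, 70% parent_a
age_direction = rng.standard_normal(DIM)    # in practice learned, not random
older_child = edit_gene(child, age_direction, amount=2.0)
# image = generator(older_child)            # a trained generator decodes to pixels
```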

VERO: Machine Learning is a telescope that can be pointed in many directions. Watching Artbreeder grow over the past few years, it’s safe to say that the kinds of images the users create (or perhaps I should say “co-create^n”) have been deeply influential for the evolution of the interface itself. What is it like to develop a tool in tandem with its user base?

JOEL: It’s been great to have them co-develop! There are two levels this happens on. First, the whole corpus of images on the site is constantly being grown as users make new images. So every image made by someone helps everyone else! The idea is premised on the fact that the way images are created with machine learning–by sampling from vast quantities of possibilities (i.e., high-dimensional spaces)–makes them impossible for one person to comprehend and explore, so crowdsourcing is helpful for everyone. Second, the community is very involved in the design process. As an indie developer for the first two years of Artbreeder, it was helpful and rewarding to always have a group of people to bounce ideas off of and talk to.

VERO: Let’s talk about Prose Painter. You call it a “sibling” to Artbreeder…why? And are there going to be more kids in this family?

JOEL: So Prose Painter is an AI-driven, open-source tool which allows humans to “paint with words” by incorporating guidable text-to-image generation into a traditional digital painting interface.

I think that Prose Painter and Artbreeder are both the same kind of thing–a new interface applied to powerful technology to make it playful and accessible. But they also work very well together. Artbreeder is great for open-ended exploration but has limits to what can be done. Editing an Artbreeder image in Prose Painter is a great way to synergize the two! There is another app in the family coming soon. It is based on taking the images made in Artbreeder and Prose Painter and bringing them to life with animation! So it’s a happy family/ecosystem of apps that all work together.
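
For a sense of the interface idea, the sketch below shows the one piece of "painting with words" that is easy to state in a few lines: confining a text-guided generation step to the region the user brushed. It is schematic, not Prose Painter's code; the generation step is a dummy stand-in, and the mask, prompt, and step count are invented for illustration.

```python
# Schematic sketch of masked, text-guided editing in the spirit of Prose
# Painter (not the project's code). Only the masking logic is meaningful;
# the generation step is a noise-adding dummy.
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
canvas = np.ones((H, W, 3))                   # the user's existing painting
mask = np.zeros((H, W, 1))
mask[64:192, 64:192] = 1.0                    # the region the user brushed

def guided_step(image, prompt):
    """Dummy stand-in for one guided text-to-image update toward `prompt`."""
    return np.clip(image + 0.01 * rng.standard_normal(image.shape), 0.0, 1.0)

edited = canvas.copy()
for _ in range(50):                           # iterative guidance
    proposal = guided_step(edited, "a tide pool at dusk")
    edited = mask * proposal + (1 - mask) * canvas   # edit only inside the mask
```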

Explore the tools: Artbreeder, Prose Painter