Dr. Rebecca Fiebrink is a Lecturer at Goldsmiths, University of London. She designs new ways for humans to interact with computers in creative practice, and she is the developer of the Wekinator software for interactive machine learning. This software has been downloaded thousands of times and used by world-renowned composers and artists including Laetitia Sonami, Phoenix Perry, Dan Trueman, Michelle Nagai, and Anne Hege to make new musical instruments and interactive experiences. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped build the #1 iTunes app “I am T-Pain.” She is the creator of a massive open online course (MOOC) titled “Machine Learning for Musicians and Artists,” offered in 2016 by Kadenze. She holds a PhD in Computer Science from Princeton University. Prior to moving to Goldsmiths, she was an Assistant Professor at Princeton, where she co-directed the Princeton Laptop Orchestra.

Creating Interactions with Machine Learning

Systems like Deep Dream that use machine learning to autonomously generate new media content have received a lot of attention recently. But how can machine learning actually function as a tool that helps human artists, musicians, and interaction designers achieve their own aims? In this talk, I'll discuss my own work using machine learning to enable new types of creative interactions between people and machines.

Building Creative Interactions with Machine Learning


Are you interested in creating real-time interactions using sensors, cameras, depth sensors, game controllers, or microphones? Machine learning can be a great tool for giving such inputs control over animation, sound, robots, game engines, or other systems you’ve built. Machine learning makes it possible to build complex interactions that are difficult or impossible to create with programming alone; it also makes it possible for non-programmers to build and customize systems, and for programmers to build things more quickly.

In this workshop, you'll get a hands-on introduction to using machine learning for designing new interactive art, music, games, and other real-time systems. We’ll be using the Wekinator, a free and cross-platform software tool that connects to a wide variety of existing hardware and software (e.g., Arduino, Unity 3D, Max/MSP, PD, Ableton, openFrameworks, Processing, Kinect, Bitalino, …). We’ll teach you the basics of a few standard machine learning techniques and help you get started hacking with machine learning on your own projects.
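Under the hood, Wekinator talks to all of these tools using Open Sound Control (OSC) messages sent over UDP. As a rough illustration — assuming Wekinator's default setup of listening on port 6448 for messages addressed to /wek/inputs (check the Wekinator GUI if you've changed these) — here is a minimal Python sketch that hand-encodes an OSC message carrying two feature values and sends it to Wekinator:

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes.
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, values) -> bytes:
    # Minimal OSC encoder: address string, type-tag string (one 'f' per
    # float argument), then each argument as a big-endian 32-bit float.
    tags = "," + "f" * len(values)
    packet = osc_pad(address.encode()) + osc_pad(tags.encode())
    for v in values:
        packet += struct.pack(">f", v)
    return packet

# /wek/inputs on port 6448 is Wekinator's default input address and port;
# adjust to match your own Wekinator configuration.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/wek/inputs", [0.25, 0.75]), ("127.0.0.1", 6448))
```

In practice you'd more likely use an existing OSC library (e.g., oscP5 in Processing, or a Max/MSP udpsend object), but hand-rolling the packet shows how simple the wire format is.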

We'll talk about how to use machine learning to work more effectively with sensor, audio, and video data, and to build expressive, embodied interactions. You don't need any prior machine learning knowledge (though you'll still learn a lot even if you've previously studied machine learning in a more conventional context!). We'll combine lectures and discussion with plenty of hands-on hacking, using the Wekinator software to hook up game controllers, sensors, webcams, and microphones to sound, animation, game engines, actuators, and other creative gear.
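On the output side, Wekinator sends its trained model's values back out as OSC messages too (by default to port 12000, addressed to /wek/outputs — again configurable in the GUI). As a hedged sketch of what receiving code has to do, here is a stdlib-only Python function that decodes the float arguments from such a packet:

```python
import struct

def parse_osc_floats(packet: bytes):
    # Skip the null-padded address and type-tag strings at the start of an
    # OSC message, then decode the remainder as big-endian 32-bit floats.
    def after_padded_string(buf: bytes, start: int) -> int:
        end = buf.index(b"\x00", start)  # first null terminator
        return (end // 4 + 1) * 4        # round up to a 4-byte boundary
    i = after_padded_string(packet, 0)   # skip address (e.g. /wek/outputs)
    i = after_padded_string(packet, i)   # skip type tags (e.g. ,ff)
    count = (len(packet) - i) // 4
    return list(struct.unpack(">" + "f" * count, packet[i:]))

# Example packet like Wekinator might send for two continuous outputs:
packet = b"/wek/outputs\x00\x00\x00\x00" + b",ff\x00" + struct.pack(">ff", 0.5, 1.0)
print(parse_osc_floats(packet))  # [0.5, 1.0]
```

Any environment with an OSC library (Processing, Max/MSP, Unity with an OSC plugin, etc.) does this parsing for you; the point is that whatever you want to control just needs to listen for those float messages.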

SKILL LEVEL: Intro / Intermediate / Advanced
The workshop will be most useful for people who can do a bit of coding in some environment (e.g., Processing, openFrameworks). But people who don't do any programming will still be able to fully participate, as we have plenty of off-the-shelf examples which can be run without coding.

• Introduction: What is machine learning?
• Intro to classification + hands-on experimentation with classification for gesture and activity recognition
• Intro to regression + hands-on experimentation with regression for creating expressive, continuous controllers
• Intro to temporal modeling + hands-on experimentation with temporal modeling for spotting actions and events
• Wrap-up: open-ended experimentation, hacking, and discussion

• All attendees should bring a laptop (any operating system).
• Optionally, attendees can also bring input devices such as those listed at Wekinator.org/examples (e.g., Leap Motion, Arduino + sensors, joysticks, mobile phone with touchOSC, ...).
• Attendees may also want to bring software or hardware they'd like to control with machine learning (e.g., Arduino with motors; Max/MSP, Unity, Processing, openFrameworks, ...).

• Install Wekinator from wekinator.org/downloads
• Make sure it runs! If not, install the most recent version of Java for your operating system.
• If you're a Processing programmer, install the Processing "Quick Start Pack" from Wekinator.org/examples/#Quick_Start_Pack. Follow the instructions in the YouTube video "How to Run Wekinator Examples in Processing" to install the Processing libraries for OSC and video, if you don't already have them.
• Or if you're not a Processing programmer, install the "Quick Start Pack" for your operating system at Wekinator.org/examples/#Quick_Start_Pack. Run the executable in Inputs/Simple_Mouse_DraggedObject_2Inputs/ and make sure you see a green box on a black screen. If you don't, please download the "last resort" examples from Wekinator.org/examples/#Quick_Start_Pack.