Shape

A group speculative project in collaboration with LAD Design and AEG Presents x BST Hyde Park 2020

Date
Feb 2020
Role
Prototyping
Concept Dev
Communication
Animation
Project Type
Speculative Design
Storytelling
Communication Design
Overview

A three-week design sprint in which we worked in small teams to explore the future of music: listening, performances, and music experiences. The goal was to research, ideate, test new technologies, and present a new vision of the future of music.

Design Brief

Design and develop an experiential solution for the future of music. The proposition should be part of a broader service, product or experience - but you will focus primarily on the design and build of the human-music interaction at the centre of your proposition.

Approach

How might we enable people to connect through the expression of their musical persona and instinct? By helping them create a musical identity through instinct & history.

Project Outcome

Shape enables people to create an expression of their musical persona, combining their music listening history and their physiological reactions to curate how they listen in the future.

Final Concept

Shape is the manifestation of our concept, joining your music listening history with your instinctual reactions, in real time. We believe that combining these two areas of expression will create a more intimate and accurate representation of your identity, which can be visualised, utilised as a tool, and communicated to others.

In this future space, we imagine a world where wearables are monitoring our daily life and can be easily integrated into the Shape platform—essentially introducing a new metric to drive your daily listening.

Behind the Scenes

Highlights of the collaboration. Video by Lawrence Azerrad and AEG

What kind of music do you listen to?

As we dove into our explorations, we found ourselves coming back to one simple question: what kind of music do you listen to? Broken down, your music taste is defined by two things: what you already know and listen to, and how you feel in the moment while listening. This prompted our main driving question:

How might we enable people to connect through the expression of their musical persona and instinct? By helping them to create a musical identity through instinct & history.

Measuring Instinctual Reaction

We quickly found that a reliable and testable way of measuring someone’s reaction to music is through heart rate and galvanic skin response. Building our own set of sensors from Arduino components, we were able to gather reliable* readings from classmates for use in our experiment.

* As reliable as we could as design students, and from a few days’ work.

Team building sensors with Arduino components

Understanding Listening History

Our goal was to access Spotify’s API to see whether we could identify patterns between the songs people listen to and the underlying traits of those songs. Spotify categorises songs by certain metrics (history, genre, artist, listening time, skip rate, BPM, frequencies, variety) and serves you weekly songs that match those metrics.

We wanted to know whether we could manipulate that process: instead of using your listening history, we wanted emotion and instinctual response to be the catalyst, resulting in the framework below.

A simplified diagram showing our plan to replicate the Spotify algorithm in real-time with users.

Our hypothesis was that we could play the user a specific song (based on their listening history) and then, based on their physical response, select and play them another song that would increase or decrease their mood, then replicate the step a second time immediately afterwards.

Faking the Algorithm

Once Seetha and Christian got access to the Spotify API, we set to work on our experiment: playing our classmates a song they liked, taking a reading, then essentially faking the Spotify algorithm by playing new songs they should also like (songs with similar back-end readings) and checking whether the readings matched. Most often, they did.
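The "faking" step can be sketched as a nearest-neighbour pick over audio features. This is a reconstruction for illustration only, not the code we ran; the track names and feature values (energy, valence) are made up:

```python
# Minimal sketch of the faked recommendation step: given a seed song's
# audio features, pick the candidate whose feature vector is closest
# (Euclidean distance). All values below are illustrative.

import math

def closest_song(seed, candidates):
    """Return the candidate track whose features are nearest the seed's."""
    def distance(track):
        return math.sqrt(sum((track["features"][k] - seed["features"][k]) ** 2
                             for k in seed["features"]))
    return min(candidates, key=distance)

seed = {"title": "Seed Track", "features": {"energy": 0.8, "valence": 0.6}}
candidates = [
    {"title": "Mellow Cut", "features": {"energy": 0.2, "valence": 0.4}},
    {"title": "Upbeat Cut", "features": {"energy": 0.75, "valence": 0.65}},
]

print(closest_song(seed, candidates)["title"])  # → Upbeat Cut
```

In the real experiment the feature vectors came from Spotify's back end rather than hand-written dictionaries, but the selection logic was this simple in spirit.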

Results

By measuring each person’s GSR (galvanic skin response), using a custom Python script developed by Seetha, we were able to see a connection between the song choices and the results of their instinctual reaction.* We were able to both positively and negatively influence their arousal with the following songs.

* Of course, the reliability of the results was limited as it was only as good as our tools and training. However, it was enough of a correlation to be optimistic about our proposal.

An example of results from one of our users in the testing process; the X axis measures time in seconds and the Y axis measures GSR response.
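The kind of comparison the analysis made can be sketched as a baseline-versus-stimulus check over a GSR time series. This is an illustrative reconstruction, not Seetha's actual script; the readings and window sizes are invented:

```python
# Sketch: compare mean GSR during a baseline window against mean GSR
# while a song plays, to judge whether arousal rose or fell.
# Sample values are made up.

from statistics import mean

def arousal_shift(readings, baseline_secs, sample_rate=1):
    """Difference between mean GSR during the stimulus and the baseline."""
    split = baseline_secs * sample_rate
    baseline, stimulus = readings[:split], readings[split:]
    return mean(stimulus) - mean(baseline)

gsr = [2.1, 2.0, 2.2, 2.1, 2.6, 2.9, 3.1, 3.0]  # one reading per second
shift = arousal_shift(gsr, baseline_secs=4)
print(f"{shift:+.2f}")  # positive → arousal increased
```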

Concept Definition

User Agency & Control

Automated:

If you give Shape full control, it will run seamlessly in the background with your music listening platform of choice, monitoring your real-time reactions and adjusting your music accordingly.

Suggested:

If you would like a bit more control, allow Shape to monitor how you’re feeling and it will provide suggestions for music that fits your response. If you’re feeling down, Shape can provide suggestions to lift you up, or provide suggestions to let you stay in whatever funk you’re in, helping you process whatever it is you’re going through.

Self-controlled:

Finally, when it comes down to it, your music is exactly that — your music. Turn off Shape and listen to the songs that you want to listen to, no questions asked.
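The three levels of agency above could be modelled as a simple policy switch. This sketch is purely illustrative, with hypothetical names, not product code:

```python
# Illustrative model of Shape's three agency modes as a policy switch.

from enum import Enum

class Mode(Enum):
    AUTOMATED = "automated"        # Shape adjusts music in the background
    SUGGESTED = "suggested"        # Shape proposes, the listener decides
    SELF_CONTROLLED = "self"       # Shape is off; the listener chooses

def next_action(mode, suggestion):
    if mode is Mode.AUTOMATED:
        return f"play {suggestion}"
    if mode is Mode.SUGGESTED:
        return f"suggest {suggestion}"
    return "do nothing"            # self-controlled: no intervention

print(next_action(Mode.SUGGESTED, "a calmer track"))  # → suggest a calmer track
```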

Visualising & Sharing

Shape is not only a system to help you connect more intimately with your music, but also one that helps you connect with other listeners, compare individual Shapes, and discover new music. We’ve broken it down into a few key parts, all based on your interactions with your music listening platform of choice.

Colour:

Colour simply represents individual music genres.

Form:

The changing form represents the amount of time you spend listening to a genre: the bigger it gets, the more time you’ve spent listening.

Texture:

Indicates the variety of artists in a genre. A more densely packed texture represents a wider variety of artists you have listened to in that specific genre.

Orientation:

The placement of the forms on the map (inspired by Silvan Tomkins’ work on emotional categorisation) indicates the emotion you have associated with each genre, with the understanding that it will grow and change as time passes.
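The four building blocks above can be sketched as a small data structure. The palette, scales, and caps here are hypothetical, purely to illustrate how listening stats become visual attributes:

```python
# Illustrative sketch (not the real renderer): each genre's listening
# stats become the visual attributes of one form in the Shape.

from dataclasses import dataclass

@dataclass
class GenreForm:
    colour: str      # one colour per genre
    size: float      # grows with listening time
    density: float   # grows with artist variety
    emotion: str     # placement on the emotion map

def build_form(genre, hours_listened, artist_count, emotion):
    palette = {"jazz": "orange", "techno": "blue"}      # illustrative palette
    return GenreForm(
        colour=palette.get(genre, "grey"),
        size=min(hours_listened / 100, 1.0),            # capped 0-1 scale
        density=min(artist_count / 50, 1.0),
        emotion=emotion,
    )

form = build_form("jazz", hours_listened=40, artist_count=10, emotion="joy")
print(form.colour, form.size, form.density)  # → orange 0.4 0.2
```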

Defining Your Shape

Above are the building blocks that define how your shape takes form, each being informed by the inflow of data to your personal device/monitoring device, and then brought to life. Our hope is that Shape will become a new abstract representation of your musical identity, one that changes with you, as your music taste evolves. That makes it unique and entirely yours, and we see Shape becoming a tool for curation, an excuse for immersion, and an aid for expression.

Three core principles of Shape:
Curation:

Shape becomes a tool for curation: by combining your listening history with your real-time reactions, it can drive or suggest what you hear next, growing and changing as your taste evolves.

Immersion:

In working with AEG and the BST Festival, we imagined Shape being used to create immersive experiences that engage people through a personalised and interactive music booth, leaving you with a gift to take away and share. Alternatively, it could use bio-feedback from festival crowds to create an evolving visualisation reflecting the mood of the festival.

Expression:

Lastly, Shape will enable you to communicate and express your musical identity to your friends and family. Sharing and accessing your friends' individual Shapes will allow you to see what they listen to, how they’re listening, and how it makes them feel - but only as much as they want you to know.

Storytelling & Video

To deliver this project we created a 1-minute pitch video, followed by a 5-minute pitch presentation. Our goal was to convey the key points as simply and straightforwardly as possible. We worked as a team to refine our message and then split into smaller teams to produce the content. Seetha and Maraid refined the script, wrote the narrative, recorded the voice-over, and finished the presentations. I created all the animations, learning After Effects from scratch, and helped Christian with filming interviews, editing footage, and producing the final video, which you can watch at the top of the page.