The aim of this project is to improve the user experience of wireless controllers by enhancing them with physical feedback. We believe that the force feedback provided by acoustic instruments is one of the missing components in many Digital Musical Instruments (DMIs), and we want to build an interface that gives the user something similar to that force feedback. The interface we plan to build is not haptic in the strict sense, but it will respond to the user's actions with encoded feedback provided by a vibrating motor. To create a simple DMI we design a complete system in which the interface is the main component, but it cannot work without supporting software hosted on a connected computer. The software component is divided into three main parts: the mapping interface, the gesture recognition system and the sound synthesis.

1. Introduction

We want to explore some of the key concepts of the interface design task, which can be summarized with a few keywords:

1.1 Embodied
Stands for an interface controlled by natural hand movements: the user controls the behavior by changing the hand's position in a virtual space. To achieve this, a glove with sensors will be built, allowing smoother control based on the movements (i.e., postures and gestures) made by the hand within that virtual space.

1.2 User-centered experience (feedback)

1.3 Augmented senses
Augmented senses is to be understood as the additional tactile sense that gives the user important information on how the 'blind' exploration of the 3D space is proceeding.

Note that the last two concepts clearly overlap, but they relate to two distinct objectives/research problems. Together, they would allow the user to, in a sense, "feel" the sound and thus have a better idea of where they are located within the 3D space.

2. The interface design

The interface is a glove equipped with flex sensors to read the finger movements, an accelerometer and a gyroscope whose combined readings are used to compute the hand position, and a motor that vibrates according to the user's behavior. All these components are controlled by a single Arduino connected to a host computer through Bluetooth. The Arduino handles the sensor data preprocessing: at this early stage, each flex sensor reading is filtered to smooth out unintended changes and sent over the wireless link to the host computer, while the accelerometer data is preprocessed on the board and sent in a human-readable format. The Arduino also encodes and drives the vibrating motor according to the feedback received from the host computer. A simple schema of the interface is given in figure 1.
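As a concrete illustration of the flex sensor smoothing step, here is a minimal sketch of a one-pole exponential moving average, the kind of filter that could run on the Arduino for each sensor. The `alpha` value and the `Smoother` name are illustrative choices, not the final implementation:

```cpp
// One-pole exponential moving average, a sketch of the per-sensor
// smoothing applied to each raw flex sensor reading before it is
// sent to the host. alpha is a hypothetical tuning value:
// smaller alpha = smoother output but slower response.
struct Smoother {
    float alpha;          // 0 < alpha <= 1
    float state = 0.0f;   // last filtered value
    bool primed = false;  // first sample initializes the state

    float filter(float raw) {
        if (!primed) { state = raw; primed = true; }
        else         { state += alpha * (raw - state); }
        return state;
    }
};
```

With `alpha = 0.2`, a sudden spike in the 10-bit ADC reading only moves the output 20% of the way toward it, which suppresses the unintended changes mentioned above.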

3. Hardware components

- 1 Arduino Board
- 5 flex sensors
- 1 accelerometer + gyro
- 1 motor
- 1 Bluetooth transmitter/receiver
- 1 XBee wireless transmitter/receiver
- 1 glove with conductive fingers
- 1 battery

4. The host computer

The host computer will run software that processes all the data preprocessed on the Arduino board for gesture recognition, sound synthesis, 3D space management and visualization.
Note that this can be expanded with other modules triggering Pure Data, external DAWs, etc.
We also explored the idea of sending these messages to other peripherals that can receive OSC messages (e.g., mobile apps), but that is beyond the scope of this project and might be implemented in the future.

5. Communication

Communication between the microcontroller and the host computer is done through the OSC protocol, using the CNMAT/OSC library. At present the link uses Bluetooth. Below is the code we implemented for sending the messages via serial.
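To make the serial link concrete, here is a sketch of the OSC wire format those messages follow: a null-terminated, 4-byte-padded address, a ",f" type-tag string, and one big-endian 32-bit float. In practice the CNMAT/OSC library builds this for us; the address `/glove/flex1` is a hypothetical example, not the project's actual namespace:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Pad the buffer with zero bytes up to the next 4-byte boundary,
// as the OSC 1.0 encoding requires.
static void pad4(std::vector<uint8_t>& buf) {
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Encode a single-float OSC message: address, ",f" tags, big-endian value.
std::vector<uint8_t> oscFloatMessage(const std::string& address, float value) {
    std::vector<uint8_t> buf(address.begin(), address.end());
    buf.push_back(0);                        // terminate the address string
    pad4(buf);
    const char tags[] = ",f";
    buf.insert(buf.end(), tags, tags + 3);   // ",f" plus its '\0'
    pad4(buf);
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    for (int shift = 24; shift >= 0; shift -= 8)  // network (big-endian) order
        buf.push_back(static_cast<uint8_t>(bits >> shift));
    return buf;
}
```

A message like `oscFloatMessage("/glove/flex1", 1.0f)` comes out as a 24-byte packet, which is what travels over the serial/Bluetooth link to the host.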

Link to the code developed for the encoded messages:

5.1 I2C
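The gyro/accelerometer board is read over I2C, where each 16-bit axis value arrives as two separate register bytes that must be recombined into a signed integer. The following sketch shows that recombination, assuming an MPU-6050-style register layout and its +/-2 g full-scale setting; the exact register map and scale factor depend on the actual IMU used:

```cpp
#include <cstdint>

// Combine the high and low register bytes of one accelerometer axis,
// as read over I2C, into a signed 16-bit raw value.
int16_t combineAxis(uint8_t high, uint8_t low) {
    return static_cast<int16_t>((static_cast<uint16_t>(high) << 8) | low);
}

// Convert the raw value to g, assuming the +/-2 g full-scale range
// (16384 LSB per g on an MPU-6050-style part).
float rawToG(int16_t raw) {
    return raw / 16384.0f;
}
```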

6. Gesture Recognition

We plan to use software for gesture recognition to process the data received from the Arduino board. The tool we want to use is Wekinator, a machine learning framework designed for building new musical instruments, creating systems for gesture analysis and feedback, and other related tasks. It seems the perfect candidate for the goal defined in this project. In the prototype version we plan to encode only a few gestures, expressed as combinations of the interface's sensors (e.g., the finger movements will define the speed of the movements, while the accelerometer data will be used in conjunction with the gyroscope to choose the direction followed during the movements).
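Wekinator expects a fixed-length list of floats per frame as its input (delivered as an OSC message, `/wek/inputs` on port 6448 by default). A sketch of how the glove's readings could be packed into such a feature vector, with illustrative ranges (10-bit ADC for the flex sensors, an assumed +/-2 g accelerometer range):

```cpp
#include <array>

// Hypothetical feature vector for Wekinator: five flex sensor
// readings plus three accelerometer axes, each normalized to [0, 1].
constexpr int kNumInputs = 8;

// Map value from [lo, hi] to [0, 1], clamping out-of-range readings.
float normalize(float value, float lo, float hi) {
    float n = (value - lo) / (hi - lo);
    return n < 0.0f ? 0.0f : (n > 1.0f ? 1.0f : n);
}

std::array<float, kNumInputs> buildInputs(const float flex[5], const float accel[3]) {
    std::array<float, kNumInputs> in{};
    for (int i = 0; i < 5; ++i)
        in[i] = normalize(flex[i], 0.0f, 1023.0f);     // 10-bit ADC range
    for (int i = 0; i < 3; ++i)
        in[5 + i] = normalize(accel[i], -2.0f, 2.0f);  // assumed +/-2 g range
    return in;
}
```

Normalizing every input to the same range keeps any one sensor from dominating the trained model.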

The encoded gestures are mapped onto the movement of a geometric shape that explores a virtual 3D space to control the timbre of a synthesis engine. Each dimension of the 3D space represents a timbral characteristic of the sound, encoded as a low-level feature of a timbre space, as in McAdams et al. [1]. The moving shape defines the values of the three features according to its position in the 3D space. The dimensions/features define the 3D space boundaries according to their value ranges. When the shape reaches one of the boundaries, the host computer sends a message to the Arduino board, which translates the received information into physical feedback (the vibrating motor) for the user. The goal of this part of the interface is to provide the user with augmented senses that give information on the position in the 3D space without visual cues, so that the performer can concentrate on the performance without being distracted by the visualization of the timbre space.
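The boundary check on the host side can be sketched as follows: the shape's position is clamped to a unit cube, and whenever an axis hits a wall a feedback intensity is computed from how far past the wall the gesture pushed, ready to be sent to the Arduino as a PWM value for the motor. The names, the unit-cube range and the overshoot-to-PWM mapping are illustrative, not the final design:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Clamp the shape's position to the unit cube and return a vibration
// intensity (0..255, i.e. an 8-bit PWM duty cycle) proportional to
// the largest overshoot past any boundary. 0 means no wall contact.
int boundaryFeedback(Vec3& p) {
    float overshoot = 0.0f;
    float* axes[] = {&p.x, &p.y, &p.z};
    for (float* a : axes) {
        if (*a < 0.0f) { overshoot = std::max(overshoot, -*a);       *a = 0.0f; }
        if (*a > 1.0f) { overshoot = std::max(overshoot, *a - 1.0f); *a = 1.0f; }
    }
    int pwm = static_cast<int>(overshoot * 1020.0f);  // saturating gain
    return pwm > 255 ? 255 : pwm;
}
```

The harder the user pushes against a boundary, the stronger the vibration, which is the tactile cue that replaces the visual one.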

7. Synthesis Engine

The synthesis engine will use the position of the shape in the 3D space to model the sound's timbral characteristics and change them over time. A basic implementation of additive synthesis can encode the sound features in terms of frequency, wave shape and harmonic content, as well as spectrum shape. A more complex synthesis engine could be controlled through the same interface, but that is outside the scope of this project. For prototyping, a Pure Data patch was developed to confirm functionality (see under files).
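To illustrate how one timbre-space axis can drive the harmonic content, here is a sketch of an additive oscillator whose spectral rolloff is controlled by a single "brightness" parameter in [0, 1]. The actual prototype is the Pure Data patch; the parameter name and the 1/n rolloff are illustrative choices:

```cpp
#include <cmath>

constexpr int kHarmonics = 8;
constexpr float kTwoPi = 6.28318530718f;

// One sample of an additive waveform. phase is in cycles [0, 1).
// brightness = 0 keeps only the fundamental (pure sine);
// brightness = 1 adds all partials with a 1/n amplitude rolloff.
float additiveSample(float phase, float brightness) {
    float sample = 0.0f, norm = 0.0f;
    for (int n = 1; n <= kHarmonics; ++n) {
        float amp = (n == 1) ? 1.0f : brightness / n;
        sample += amp * std::sin(n * phase * kTwoPi);
        norm += amp;
    }
    return sample / norm;  // normalize so the output stays within [-1, 1]
}
```

Moving the shape along the corresponding axis would sweep `brightness`, morphing the tone from a pure sine toward a buzzier, harmonically rich spectrum.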

8. References

[1] McAdams, S., Winsberg, S., Donnadieu, S., De Soete, G., & Krimphoff, J. (1995). Perceptual scaling of synthesized musical timbres: Common dimensions, specificities, and latent subject classes. Psychological Research, 58(3), 177–192.

9. Diagram


10. Roadmap

The general tasks we want to address each week are detailed in the diagram below; all of them are to be completed before the deadline on April 16.

  • Week 1. Project design, basic tests of the hardware (sensors and motor)
  • Week 2. Communication: serial, Bluetooth
  • Week 3. Gesture recognition (Wekinator), serial protocol (serial to OSC) and I2C communication (gyro, accelerometer)
  • Week 4. Software model (3D space) TBC. Basic mapping for sound synthesis made with Pure Data (patches attached under files) to confirm functionality and behavior of the system.
  • Week 5. Refine, map, test, build the final prototype (hardware)

Note that some detailed tasks will be addressed during the project development.


11. Software

WEKINATOR: Machine learning tool used for mapping.
PROCESSING: Used for protocol communication.
ARDUINO: Microcontroller programming.
FRITZING: For making hardware diagrams and PCBs.
PURE DATA: Audio synthesis (prototype patch).
WEKA: Data mining, used for analysis and QA.

12. Resources

13. Related Courses

Machine learning for artists: Wekinator online course given by the tool's author at University of London:

Machine Learning, given by Andrew Ng (Stanford):


14. Run Readme
Here are the instructions for connecting and configuring the glove.

15. Links and other relevant material:


Other interesting links

Paper reference provided by QMUL:
Navigation of Pitch Space on a Digital Musical Instrument with Dynamic Tactile Feedback

16. Related shows and Conferences

This project was made as part of the Advanced Interface Design course of the MSc in Sound and Music Computing at Pompeu Fabra University.

17. Authors

Daniele Scarano
Pedro Gonzalez

Kosmas Kritsis
Javier Arredondo

18. Acknowledgments

Special thanks to Martí Sanchez from UPF-SPECS, our professor, whose help was invaluable and allowed us to build and show the prototype in a timely fashion. Thanks to SPECS as well for providing some of the components and tools we used.
We would also like to thank the members of UPF-MTG (Music Technology Group): Sergi Jorda and his team members Angel Faraldo and Daniel Gomez, whose lessons were invaluable for designing the whole system around NIME principles; and Rafael Ramirez, who introduced us to the (hard) topic of machine learning and allowed us to take full advantage of the algorithms used for training and data analysis.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License