What? Named after the Greek root syn, meaning “with” or “together”, Syng is a synesthesia-inspired ear-training web application designed to improve its users’ singing and note recognition by visualizing pitch.
How? Syng uses an open-source machine learning model, ml5.js pitch detection, to identify pitch from microphone input. The model estimates the frequency of incoming sound in hertz, and Syng maps that frequency to the nearest musical note (within a margin of error). The application guides singers through a three-part experience.
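A minimal sketch of that detection-and-matching step, assuming ml5.js’s pitchDetection API (which wraps the CREPE model) and equal temperament with A4 = 440 Hz; noteFromFrequency and gotPitch are hypothetical names, not Syng’s actual code:

```js
// Runs inside a module or async setup function (for the await).
const audioContext = new AudioContext();
const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });

const A4 = 440;
const NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

// Round the detected frequency to the nearest MIDI note, then keep only
// the pitch class (the note name, regardless of octave).
function noteFromFrequency(hz) {
  const midi = Math.round(69 + 12 * Math.log2(hz / A4));
  return NOTE_NAMES[((midi % 12) + 12) % 12];
}

// Load the pitch model, then poll the microphone for frequency estimates.
const pitch = ml5.pitchDetection('./model/', audioContext, micStream,
  () => pitch.getPitch(gotPitch));

function gotPitch(err, frequency) {
  if (frequency) console.log(noteFromFrequency(frequency)); // e.g. "C"
  pitch.getPitch(gotPitch); // keep listening
}
```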
Part 1: Intro. The intro pairs each note, regardless of octave, with a particular colored circle. Its objective is to show the user that sounds have a pitch frequency and therefore correspond to musical notes. These note-color pairings stay the same throughout the entire experience for consistency; for example, the note C is always red.
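In code, the pairing can be as simple as a lookup table keyed by pitch class. Only C = red is stated above; the remaining colors below are illustrative placeholders, not Syng’s actual palette:

```js
// One fixed color per pitch class, shared by all three parts.
// Only C = red is documented; the rest are placeholders.
const NOTE_COLORS = {
  'C': 'red',     'C#': 'crimson', 'D': 'orange',  'D#': 'gold',
  'E': 'yellow',  'F': 'green',    'F#': 'teal',   'G': 'blue',
  'G#': 'indigo', 'A': 'purple',   'A#': 'violet', 'B': 'magenta',
};
```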
Part 2: Pitch Match. Pitch Match is the ear-training mode. Users play a tone and sing it back, with a visual cue showing whether they are sharp, flat, or perfectly matched.
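One common way to compute that cue is to compare the sung and target frequencies in cents (hundredths of a semitone). A sketch, where the ±25-cent margin is an assumed threshold rather than Syng’s actual value:

```js
// Signed offset in cents between the sung note and the target tone:
// positive means sharp, negative means flat.
function pitchFeedback(sungHz, targetHz, marginCents = 25) {
  const cents = 1200 * Math.log2(sungHz / targetHz);
  if (Math.abs(cents) <= marginCents) return 'matched';
  return cents > 0 ? 'sharp' : 'flat';
}

pitchFeedback(446, 440); // 'matched' (~23 cents sharp, inside the margin)
pitchFeedback(466, 440); // 'sharp' (~99 cents, nearly a semitone high)
```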
Part 3: Perform. Perform is the final, freestyle mode, which provides a more elaborate and expressive visual while the user sings. The color follows the note-color mapping used throughout the experience, and the opacity changes with the vocalist’s volume.
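A sketch of the volume-to-opacity mapping using a Web Audio AnalyserNode; the RMS loudness proxy and the scale factor are assumptions, not Syng’s actual tuning:

```js
// `analyser` is assumed to be an AnalyserNode connected to the microphone.
// Loudness is estimated as the RMS of the time-domain waveform and mapped
// to an opacity in [0, 1]: louder singing yields a more solid visual.
function opacityFromVolume(analyser) {
  const samples = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(samples);
  const rms = Math.sqrt(samples.reduce((sum, s) => sum + s * s, 0) / samples.length);
  return Math.min(1, rms * 4); // scale factor chosen by eye (assumption)
}
```

The visual’s fill color could then come from the same note-color lookup sketched in Part 1.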
Why? A common hurdle for novice singers, and even advanced vocalists, is learning to stay on pitch. An inexperienced singer may hear a note but be unable to reproduce it precisely. And if singers are not familiar with musical notes, or even with hearing themselves sing, how can they identify their mistakes? Syng was created to provide approachable music education that uses the combination of sound and visualization to enhance learning.
Technically, the experience incorporates dynamic web development, machine learning, JavaScript, and HTML/CSS.
See more about the research and design process: