Verplank’s Design Framework:
Idea: Create a voice-based music tool for learning and performance
metaphor: Synesthesia, a perceptual phenomenon in which stimulating one sense triggers another (e.g., hearing a sound and seeing a color)
model: a web-based program (JavaScript), for now
display: projection or monitor
error: off-pitch singing
scenario 1: beginner singers may not know they are singing off key because they cannot hear their own errors
scenario 2: singers in an ensemble have trouble blending
scenario 3: singers want a live performance tool to add visuals to their sound
task 1: sing with a recording to match pitch
task 2: sing with a recording or with others to create harmonies
task 3: sing freeform
control: voice controls visuals on the screen
7-Degree Axis Amounts:
*I feel like these axes will shift drastically depending on who uses my project, and I kind of like that. I'm just not sure how much of that is a natural outcome of the project versus something I'm crafting. For example, a professional could make the sound quite expressive in the freeform version, while a beginner would be using it to learn. I think it would be cool to play with more inputs somehow; perhaps volume, or in a way "confidence," could be visualized as well (a rough sketch of this idea follows below).
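To make that idea concrete, here is a minimal p5.js + p5.sound sketch of how volume could be layered in as a second input: the microphone level drives the opacity and stroke weight of a circle, so a louder (more "confident") voice reads as a bolder shape. This is only an illustration of the idea, not part of the prototype, and the mapping ranges are arbitrary.

```js
// Minimal sketch (p5.js + p5.sound): voice volume as visual "confidence".
// Assumes the p5.sound library is loaded alongside p5.js.
let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone access
}

function draw() {
  background(20);
  // getLevel() returns the current input amplitude between 0.0 and 1.0
  let level = mic.getLevel();
  let alpha = map(level, 0, 0.3, 40, 255, true);  // quiet = faint, loud = solid
  let weight = map(level, 0, 0.3, 1, 12, true);   // quiet = thin, loud = bold
  stroke(255, alpha);
  strokeWeight(weight);
  noFill();
  circle(width / 2, height / 2, 200);
}

// Some browsers keep audio suspended until a user gesture.
function mousePressed() {
  userStartAudio();
}
```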
Do, Feel, Know interaction 1: PITCH MATCH
The user sings and sees/feels the circle's diameter change; they know the pitch is matched when the circle their voice controls lines up with the original (target) circle. A rough sketch of this mapping appears after the prototype links below.
Prototype 1: https://editor.p5js.org/mpruitt/full/IOJ3ao-Wa
Video of user test - password: prototype1
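As a rough illustration of the pitch-match mapping (not the prototype's actual code), the p5.js + p5.sound sketch below approximates the sung pitch with the loudest FFT bin, which is crude compared to a real pitch detector (autocorrelation, or ml5's CREPE model), and maps both the target frequency and the sung frequency to circle diameters. The target note (targetFreq = 440 Hz) and the frequency-to-diameter range are arbitrary placeholders.

```js
// Rough sketch of the PITCH MATCH interaction (p5.js + p5.sound).
let mic, fft;
const targetFreq = 440; // A4, placeholder target pitch

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(0.8, 1024);
  fft.setInput(mic);
}

function draw() {
  background(20);
  noFill();

  // Loudest frequency bin as a rough estimate of the sung pitch.
  let spectrum = fft.analyze();
  let peakIndex = 0;
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peakIndex]) peakIndex = i;
  }
  let nyquist = sampleRate() / 2;
  let sungFreq = (peakIndex / spectrum.length) * nyquist;

  // Map frequencies to circle diameters; matched pitch = matched circles.
  let freqToDiameter = f => map(f, 100, 1000, 50, 350, true);
  stroke(255);         // target circle
  circle(width / 2, height / 2, freqToDiameter(targetFreq));
  stroke(0, 255, 180); // circle controlled by the voice
  circle(width / 2, height / 2, freqToDiameter(sungFreq));
}

function mousePressed() {
  userStartAudio();
}
```

When the two circles coincide, the sung pitch is (approximately) on target; the same structure could take the target frequency from a recording instead of a constant.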
Do, Feel, Know interaction 2: BLENDING
The user sings and sees the color change; they know the pitches have blended when the colors mix. A rough sketch of this mapping follows below.
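Under the same assumptions as the pitch-match sketch (loudest FFT bin as a stand-in for pitch, placeholder target frequency), a minimal version of the blending visual could map each pitch to a hue and show the mix in a center band; when the voice's hue converges on the target hue, the band looks uniform and the pitches have blended.

```js
// Rough sketch of the BLENDING interaction (p5.js + p5.sound).
let mic, fft;
const targetFreq = 440; // placeholder target pitch

function setup() {
  createCanvas(400, 400);
  colorMode(HSB, 360, 100, 100);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(0.8, 1024);
  fft.setInput(mic);
}

function freqToHue(f) {
  return map(f, 100, 1000, 0, 360, true);
}

function draw() {
  // Estimate the sung frequency from the loudest FFT bin.
  let spectrum = fft.analyze();
  let peakIndex = 0;
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peakIndex]) peakIndex = i;
  }
  let sungFreq = (peakIndex / spectrum.length) * (sampleRate() / 2);

  let targetColor = color(freqToHue(targetFreq), 80, 90);
  let voiceColor = color(freqToHue(sungFreq), 80, 90);

  // Left half: target color. Right half: the voice's color.
  // The center band shows the mix; it looks uniform when the pitches blend.
  noStroke();
  fill(targetColor);
  rect(0, 0, width / 2, height);
  fill(voiceColor);
  rect(width / 2, 0, width / 2, height);
  fill(lerpColor(targetColor, voiceColor, 0.5));
  rect(width / 4, 0, width / 2, height);
}

function mousePressed() {
  userStartAudio();
}
```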
Here is where I'm getting a little stuck. So far I'm looking at the project as quite modular, but it would be cool to give the user more control over the sounds they can make. I'm not sure how to implement this. Do I do it by visualizing more nuances of the voice to encourage them to sing in different ways? Should there be physical controls on the application that alter the sound?