Final Project Progress: Details by Maya Pruitt

While developing the UFO, I remembered Mark saying that textures can be super effective. Rather than trying to do everything in 3D, I worked on creating a realistic texture to map onto the UFO ground model. The ground needs to resemble the final location of the scene, Union Square Park, and make it appear as if the hexagonal tiles have really been uprooted. The texture is created from my own photographs combined with researched imagery of sinkholes.

Texture as flat UV map

Texture applied to UFO ground plane

I created the visual screen designs for the UI from scratch as well. In a future iteration, this would be fully interactive, which is why I created icons for a messages window and a map feature. The parallelograms on the right-hand side would serve as a progress bar, filling one at a time as the user discovers strange occurrences in the story.



Data Art - Maps & Publics: 40x Creation Process by Maya Pruitt

RESEARCH

As New Yorkers ourselves, Cara and I are no strangers to the cost of living in NYC. We began looking at articles covering rent costs and income levels of different populations throughout the city. We were especially inspired by a Curbed article entitled “New York’s most and least affordable neighborhoods” and its supplemental map visualization. We were particularly struck by how wealthy and low-income neighborhoods can exist right next to each other with such a disparity in income.

Areas like Long Island City, Queens; Williamsburg, Brooklyn; and the Lower East Side ranked among the highest for unaffordability. We decided to delve deeper into the Lower East Side, as we are both Manhattanites living near the LES and felt a more immediate personal connection to it.


From maps like these it was fascinating to see that the neighborhoods considered “affordable” are actually those that are more affluent. The rents are still high, but because median incomes are higher, those living in these neighborhoods can more easily cover their rent payments. This led us to research the 40x rule, which requires renters to have an annual income of 40 times the monthly rent in order to lease an apartment in NYC. If an individual cannot meet this requirement, they are expected to have a guarantor who makes 80 times the monthly rent.
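The arithmetic behind the rule is simple multiplication. As a quick illustration, here is a minimal Python sketch (the function names are ours for illustration only, not part of the project code):

```python
def required_income(monthly_rent, multiplier=40):
    """Annual income a renter must show under NYC's 40x rule."""
    return monthly_rent * multiplier

def required_guarantor_income(monthly_rent, multiplier=80):
    """Annual income a guarantor must show if the renter falls short."""
    return monthly_rent * multiplier

# Using the average LES two-bedroom rent from our data set:
print(required_income(3952))            # 158080
print(required_guarantor_income(3952))  # 316160
```

That $158,080 figure is the projected income requirement listed in our conclusions below.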

We compiled population, income, and rent cost data from multiple sources and averaged them to create the data set we would use for this project.

Table of our calculations.

Annual household income data from censusreporter.org.

Chinatown & LES statistics from Data USA.

Our conclusions:

Median household income: $42,985

Average rent (2-bedroom apt.): $3,952

40x rule projected income: $158,080

Only 8% of the total LES population falls into an income bracket that meets the $158,080 requirement.

Household income brackets by population from Statistical Atlas.

PROCESS

Cara and I were interested in creating a piece to intervene in public space. However, it was important to us that our intervention be legal, unobtrusive to the residents of the LES, and in a form not easily destroyed. We felt that augmented reality met all of these considerations.

40x is built in Unity. In our design we wanted to use low-poly 3D models to create the sensation of a crowd while keeping them visually distinct from people passing by. Our decision to have the people fade out slowly over time was meant to give visual impact to the statistics we had researched. We also wanted to include a written explanation of our research, presented in a familiar form and in a way that could be toggled through at the user’s own pace.

Cara developed a wireframe of the design. I worked in Unity to build the AR and UI elements. Below is a test of the crowd animation using an image target.

Documentation of 40x was shot in the Lower East Side.




Final Project Progress: Demo Video by Maya Pruitt

This demo video shows scene one of the final video and the setup for the narrative. The first AR component is introduced in the context of the story. I also wanted to begin playing with the subtitle mechanism. It is important to me that the main character is the user, so this feels like the best way to give them their own voice. The demo was shown to Mark, classmates, and guest critics for feedback.


Final Project Progress: Testing ARKit by Maya Pruitt

I took a deeper look into Unity and ARKit this week. These are some tests of different functionality that may be incorporated into the final version.

Tracking multiple image targets:

Testing animated UFO:
So far the UFO can be placed in the scene with a tap, and the animation does run, but the lighting is off and the model seems to spawn over and over. Goal: have it fly in.

UI test (how HQ voice and subtitles would be displayed):


Hedwig & The Angry Inch: Color Model by Maya Pruitt

DIGITAL 3D MODEL:

We added textures to our digital model to get a sense of things. We decided we wanted to use the classic cloud wallpaper of Andy’s room, a wooden floor to represent a shelf, and colored boxes that emulate the palette of the Toy Story logo. We took elements from Al’s Woody collection that fit Hedwig, such as the record player. We found a great little Pixar-style lamp from Flying Tiger, and it even lights up! A 3D-printed hand is meant to establish scale and serve as a prop for Hedwig to cuddle with. We decided to give full detail to any boxes we wanted to stand out, such as boxes for Woody and Buzz Lightyear, as well as boxes for characters from Hedwig’s world, like her band, and her very own Barbie box. Our idea is that the actor can quick-change inside her box, and that the outside can have projected imagery to emphasize the personas she calls out in “Wig in a Box”.


COLOR MODEL FABRICATION:

Lots of cutting out printed materials we made ourselves. For the floor we found this awesome wood tape that has a very realistic look! We used clear plastic (paper-protector sleeves) to create the windows of our toy boxes.


Final Project Progress by Maya Pruitt

This week I did some visual research to get an idea of the aesthetics of my project. I want the interactions with the HQ voice over to feel very futuristic and spy-like.


I started working with ARKit and boy was I having a LOT of trouble. My main accomplishment was getting a successful build to the phone and getting the camera to open. For several tries the app would crash on launch.
Unfortunately I am having trouble with image tracking now. I tried a script from a tutorial I watched on YouTube, but it produces no results. This video just shows the app opening (celebrate the small victories!), but I'm sad it isn't responding to the image. If anyone has resources on how to get a cube to appear when triggered by an image target in ARKit, please let me know!

Subsequent builds I tried kept changing my bundle identifier to "Unity-Target-New" when I didn't tell it to, and it says my iPhone's architecture is unsupported. An hour before, this wasn't the case.

This is the error (still appearing after following its directions to change the architecture settings). I'm very confused about what's going on.


That's all for now! My goal is to get image targets working before getting into the detailed animations. I also want to test the responsiveness of image targets: do PNGs work? Can one image represent all versions of an object that would appear in real life?

Data Art: Archive Conceptual Project by Maya Pruitt

Can data predict the next top pop song?

By exploring language processing and text analysis of lyric data, I hypothesize that there could be enough information to craft new music.

RESEARCH:

Billboard is an American entertainment media brand founded in 1894 originally as an advertising company. The brand began focusing on music in the 1920s and has famously been tracking the top 100 hit songs every year since 1940. This is decided by a combination of sales, radio play, and streaming popularity.

Pop music, or popular music, is exactly as it sounds. It is music that is popular for its time, songs that become ubiquitous in a way because of their presence on the charts. Pop music has ebbed and flowed over time and can take influence from many different musical genres and styles.

But although music is an art form, are there ways in which pop music becomes formulaic?

PARSING PROCESS:

How could we begin to make sense of what makes a hit pop song?

I began by looking at an existing dataset I found here. Compiled with a scraping program, this dataset included the Billboard Top 100 songs from 1965 to 2015. This felt quite overwhelming, so I decided to focus on the most recent year first and create parsing programs that would look at 100 songs instead of 5,000.

To start, my goal was to find the most common words, the most common phrases, and lyrics that rhyme within the dataset. I used a combination of Python and JavaScript to create parsers with different functions. My Python code returns the most common words and n-grams (sequences of words). The JavaScript code uses RiTa.js to find keywords in context as well as rhyming words in their context.

I noticed that the original dataset had quite a few flaws (e.g., words strung together without spaces, non-lyrical information included), and they greatly affected the outcomes I was getting. I decided to go back and change my dataset: I selected the lyrics from the number-one hit song for each year of the 2010s. With a new dataset I could ensure there were no issues like words being strung together or extraneous information like song credits appearing as part of the lyric data. However, this is now a very small dataset.

The results of the n-gram parser helped me identify the words that occurred most among the lyrics. Since songs are often very repetitive, I didn’t want this to skew the results, so I made sure a word was only added to the count if it appeared again in a different song. I kept common words because they are often important to song lyrics and would increase the chance of finding grammatically correct phrases.
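To illustrate the idea, here is a minimal Python sketch of this kind of per-song n-gram counting; it is not my actual parser, and the names are only illustrative:

```python
from collections import Counter

def ngrams(words, n):
    """Return all n-word sequences in a list of words."""
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def cross_song_ngram_counts(songs, n):
    """Count each n-gram at most once per song, so repetition within a
    single song doesn't skew the totals."""
    counts = Counter()
    for lyrics in songs:
        counts.update(set(ngrams(lyrics.lower().split(), n)))
    # keep only n-grams shared by more than one song
    return Counter({g: c for g, c in counts.items() if c > 1})

# songs would be a list of lyric strings, one per song:
# print(cross_song_ngram_counts(songs, 4).most_common(10))
```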

These are the results of the n-gram parsing:


Interestingly, commonality got up to four words in sequence with the phrase “admit that I was”. This appears in “Love Yourself” by Justin Bieber and “Somebody That I Used to Know” by Gotye featuring Kimbra.

Next I used my other program to search for how the word “admit” appeared in the context of the whole song and for what words rhymed with “admit”.

*Key words in context shows a chosen number of words before and after the originally searched word.
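My keyword-in-context search is written in JavaScript with RiTa.js; a rough Python equivalent of the same idea might look like this (the function and variable names are hypothetical):

```python
def keyword_in_context(lyrics, keyword, window=4):
    """Return each occurrence of keyword with `window` words of
    context on either side."""
    words = lyrics.lower().split()
    hits = []
    for i, w in enumerate(words):
        if w.strip(".,!?") == keyword:
            left = words[max(0, i - window):i]
            right = words[i + 1:i + 1 + window]
            hits.append(" ".join(left + [w] + right))
    return hits

# e.g. keyword_in_context(love_yourself_lyrics, "admit", window=3)
```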

Rhyming appears in the same way:

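RiTa.js handles the rhyme lookup in my code; in Python, a similar lookup could be sketched with the third-party pronouncing library (a CMU Pronouncing Dictionary wrapper). This is only an assumed equivalent, with illustrative names:

```python
import pronouncing  # pip install pronouncing

def rhymes_in_dataset(keyword, lyric_words):
    """Return the words from the dataset that rhyme with keyword,
    according to the CMU pronouncing dictionary."""
    rhyme_set = set(pronouncing.rhymes(keyword))
    return sorted({w.lower().strip(".,!?") for w in lyric_words} & rhyme_set)

# e.g. rhymes_in_dataset("admit", all_words_in_dataset)
```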

For the final product, the goal would be to answer the question: can data predict the next hit song? However, I realized that this is a really challenging endeavor. I felt like I got back interesting information but didn’t quite know how to make sense of it. Below is how I began to link phrases across songs to each other.


key:

CAPITAL LETTERS = N-GRAM phrase

bold letters = keyword in context

colored = rhyming words

Perhaps a revealing visual would be to showcase where lyrics are pulled from.

CODE

I added a chord progression column to my dataset as a way to guide the song-making process. A really cool find is that although none of the songs in the dataset follow the resulting chord progression themselves, the parser still identified C F G Am as the most common chords in the dataset, and this progression is one of the most common in pop music generally.
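A minimal sketch of how such a chord tally might work in Python, assuming the chord-progression column holds one space-separated chord string per song (names here are illustrative, not my actual code):

```python
from collections import Counter

def most_common_chords(progressions, k=4):
    """Tally chords across every song's progression string and return
    the k most frequent ones."""
    counts = Counter()
    for prog in progressions:
        counts.update(prog.split())
    return [chord for chord, _ in counts.most_common(k)]

# e.g. most_common_chords(["C G Am F", "Em C G D"]) -> ['C', 'G', 'Am', 'F']
```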

NEXT STEPS:

While I was hoping that data could make the songwriting process easier, I found that it was just as hard. I feel like I was given puzzle pieces that don’t quite fit together. In a more realized version, there are different ways this could go:

1) With a larger dataset, like thousands of songs, I suspect there would be more interesting n-gram results.

2) If lyrics could be parsed by phoneme or sentence structure, and an algorithm could produce all the lyrics that match a given structure, we could possibly obtain more words/phrases with which to create the rhythm and melody of a new song. Could an algorithm even identify these structures for each part of a song, like verse, chorus, and bridge?

3) While I was determined that the song lyrics should be produced only from existing data, there is also the option to use generative models. Perhaps the computer could create new lyrics based on what it learns about the dataset.

PROPOSAL: Ultimately, the final data art project would be an actual song that follows a musical & lyrical structure it learned from a dataset of past top hits. It would also be interesting to visualize how the song came to be. I imagine maps that show links between the existing songs in the database, or perhaps visuals that take a more literal approach of representing equations or formulas.

Hedwig & The Angry Inch: White Model by Maya Pruitt

Team: ChenShan Gao, Jenny Lin, Maya Pruitt, Raaziq Brown

CONCEPT SENTENCES:
(1) Hedwig and the Angry Inch is a story about overcoming insecurity.


(2) Hedwig and the Angry Inch is about the transformation of a belief that insecurities about one’s identity can be overcome through romantic relationships.


(3) Hedwig and the Angry Inch is a story about genderqueer singer Hedwig, who searches for her other half in order to overcome insecurities about her identity (her body, her gender, the lack of respect for her music). Over time, she struggles with trauma, hurts those around her, and believes wholeheartedly that her own happiness lies with others. By the end, her belief transforms when she realizes that she is complete as an individual.


DESIGN:

In the Broadway production of Hedwig and the Angry Inch, the set was designed to look like the set of “Hurt Locker: The Musical”, which in actuality does not exist. For our model, we chose to design our set as Toy Story 2, inspired by the film’s similar themes of identity and self-understanding. Toy Story 2 is when Andy’s toys discover they have monetary value as opposed to just sentimental value. Woody and Buzz learn about their “worth” by discovering a greater world outside Andy’s room. We felt this metaphor fit well with Hedwig’s journey and perceptions of self. Although the idea is that Hedwig simply takes over the set of another musical, we wanted the movie we chose to hold symbolism within the Hedwig story.

TOY STORY 2 VISUAL RESEARCH:


DIGITAL 3D MODEL:

WHITE MODEL SET FABRICATION:

Week 5: Final Project Proposal by Maya Pruitt

Working title: Among Us

An AR narrative about alien contact with Earth. Using the theme of revealing the unseen, users would be prompted to find different locations in their reality and watch them be altered before their eyes. These scenes connect to reveal a final climax: extraterrestrials have already landed on Earth.

PRESENTATION DECK