Our concept for the final assignment was similar to our original plan for the first BCI assignment, which we could not get to work. The new plan also included adding a physical component to the project. Determined to make it work, we continued to push our skills.
For our first prototype, we created an openFrameworks program that played a sound file whenever the meditation level reached a certain threshold. This prototype did not work well and seemed glitchy.
Next we tried the toneAC library in Arduino. We successfully got the code to run, but it too was spotty and very electronic-sounding. Arduino Prototype
Moving forward, we looked back into Processing libraries, as Processing was simpler to code in. Eventually we agreed it would be easier, and we would have more room to grow, if we wrote our own code in openFrameworks. We began by building a simple line that adjusted with the tone and frequency of the wave. A sample video can be found here, and the code here. We could not use this for our project because we could not make multiple lines that connected tonally and linearly to the other neural oscillations.
The next step we took to incorporate all the brainwaves, each with its own sound frequency, was to map their motion to a bouncing ball that plays the equivalent note at each peak. This gave us more of an effect, and we were able to take multiple headset inputs so people could play together. Unfortunately, most of the time it just sounded like dissonant melodies. The code can be found here, and the video here.
I took this project further in my Major Studio collaboration with Gabrielle Patacsil. Gabrielle took the code even further while I created hardware hacks to emphasize the mindscape. Here is our documentation and video.
Last week I was lucky enough to attend the NeuroGaming conference in San Francisco. Not only was I able to listen to the panel discussions, but I spent a large portion of my time hooked up to OpenBCI.
First, I attended my very first hackathon. I assisted in making the training software for a classifier, as well as hardware-hacking a hex robot for the team to use. The goal was to have the robot move as the user thought of moving right/left/forward… but the classifier was not completed in time. The end result was the robot moving on EMG readings through the OpenBCI. The team received an honorable mention. Here is some video documentation taken during the second hackathon at Berkeley. http://youtu.be/Adpo_ZuyIBI
The conference itself was not limited to neurogaming. Representatives from Oculus and many well-known game designers attended. I was particularly interested in the therapy-based side of the conference, as much of my work revolves around this topic. I took notes and brought back pamphlets of all the cool stuff there. They will be accessible on my Google Drive here.
Below are some pictures and a video of some of my own brainwaves from the conference/demos. Note how I have a hard time producing an alpha wave.
Music has a profound effect on one’s body and psyche. As with any physiological rhythm, a strong beat or counter-rhythm can stimulate a change in its tempo. Brainwave Music Therapy (BMT) uses tones that prompt the brain to synchronize its electrical pulses with the pitch of the tones. These changes in electrical pulses can shift the user into meditative, calm, alert, or more sharply concentrated states.
The therapeutic experience relies on repetition and awareness of self. Through creating a physical artifact of a person’s brainwave composition, we create an opportunity to revisit the mind and give an instance of reflection. The object magnifies the alpha-theta waves in auditory and physical form to influence a state of relaxation and awareness.
“The neocortex is responsible for sensory perception, recognition of everything from visual objects to abstract concepts, controlling movement, reasoning from spatial orientation to rational thought, and language…” Kurzweil works with this notion of hierarchical patterns (written, spoken, and visual) when explaining the ways of thinking. He is shaping one’s visual and perceptual understanding of brain activity, and essentially creating a form for thought – “…the goal of the effort is to refine our model to account for how the brain processes information to produce cognitive meaning.” Therefore, it is important to read the text with an understanding of how one must abstract reality in order to explain it.
This idea goes hand in hand with how he relates to pattern recognition in memory and language. He notes that our memories are organized as patterns of lists. These patterns are generated through experience and given a certain weight according to individual values. However, these memories are not necessarily visual. When recalling a memory and relaying it to another, these patterns aggregate and reconstruct a memory through a visual or linguistic imagination – “you will essentially be reconstructing…images in your mind, because the actual images do not exist.”
This acquisition of knowledge creates a fundamental understanding of how the neocortex operates on one’s reality. Its potential to shape one’s understanding of the self parallels its potential to alter one’s reality. Over time, we will see how the evolution of understanding the brain may change our associations with each other and with ourselves. It is the ongoing conversation between scientists and technologists that is most beneficial in understanding our brains. It is this dialogue that will keep us from overly formalizing the brain, as it will open up a space for abstract thinking.
Using ofSplitString and ofToInt, we were able to parse the Mindflex data into integers. Unfortunately, at first we were unable to get the numbers to update. After reworking the code and modifying the order, we got a successful data input stream.
Using an if/else-if chain, we broke the data down into the individual channels. Using the meditation channel, we created a program to encourage meditation levels to rise: as the level increases, calming sounds and imagery are added. (See some screen grabs below.)
Our first attempt to work with the Mindflex did not turn out well: there was no power feeding into the headset. Our second attempt did not relay any data. After spending some time struggling, it became clear that there was a loose wire connection to the Mindflex board. On inspecting the cables, the white data wire inside was found hanging by a thread. Re-soldering this wire fixed the data input issue.
With the brainwave data feeding into Processing (as seen in this test video), the code could then be broken down. The sound libraries were added to the Processing file, along with a simple WAV file that plays when there is no data input. This is still a work in progress. The code can be viewed here.
Currently, the OF version is connecting and parsing, but when printed to the console only two digits come through from the string. This code is also a work in progress and can be found here.
The Raspberry Pi arrived with a 7″ LCD and LCD controller; at that point, the next steps were taken to create the “realistic” screen effect.
Step 5: Load NOOBS onto the Raspberry Pi. This task proved challenging for a beginner. First the software had to be downloaded from the Pi website’s downloads page. After downloading the disc image, an 8 GB SD card was formatted using SDFormatter. Formatting the card with this tool allowed the Pi to boot off of the system copied to it.
Step 6: Configure NOOBS. After placing the SD card in the Pi, the Pi was connected to a laptop’s shared network connection over Ethernet and to a TV set over HDMI. Upon boot (see image below), the screen loads a minimal menu. From this screen, the time zone and SD card configuration are set.
Step 7: Install XBMC. Once configured, XBMC was installed onto the SD card using “sudo” commands in the terminal. Once installed, the SD card was placed back in the Pi. After booting to make sure the media center loaded, a screen recording was taken from a working desktop and loaded onto the Pi with a jump drive. The test video playback can be seen below. http://youtu.be/FTWUWZ_TwIg
Step 8: Installing the Raspberry Pi in the laptop LCD. The monitor was then taken apart, stripping the shell away from the LCD. The inverter and LCD cables were pulled out and found to be incompatible with the controller purchased. An extensive search for an adapter turned up nothing. http://youtu.be/-pqjWFVj0h0
Step 9: External monitor fix. Since no fix could be made to get a functioning screen working within the laptop, an HDMI-to-DVI converter was used to create the effect on an external monitor. The Pi was then placed in the laptop. Wires were routed into the laptop in their usual places to ensure all needed connections were made and to give the appearance that the computer was actually running. (See image below.)
Step 10: Close casing. The casing was then closed and the machine was tested again.
Since the iMac was unable to hold power for more than a few minutes, the workaround was to “fake it,” using the shell and a Raspberry Pi to replicate a screen. The iMac was also heavy and clunky to carry around. After a trip to the Parsons Electronic Green Space, a G4, model A1106 (see pictures below), was acquired. This made an easier and more compact shell for transportation. It is also more personal to the creative: it is rare today to find someone who does not have an intimate relationship with their laptop.
Step 1: Gut the computer insides. The first step taken was dismantling the laptop, removing all of the components that were not needed.
Before & After
Step 2: Mount Arduino and breadboard. The next step was to mount the Arduino and breadboard into the casing using some double-sided foam cushion tape (see image below). To do this, the framing also needed to be removed.
Step 3: Adding simple components and code tests. The first component added was the small LED that lights the latch (with a 330 Ω resistor). This was programmed to fade in and out using sample code from the LED brightness tutorial (see video 1). The next component added was the fan. The internal fans used four wires and were hard to adapt to our needs, so an additional fan was added with temporary placement. The sample code from the blink tutorial was modified and incorporated into the LED breathing code (see video 2). LED Test Video 1 http://youtu.be/eNmoN0S8Z9o
Step 4: Adding interactivity to the breath. To add the interactivity of the chest expanding on inhale, a stretch sensor (conductive rubber) was used. One end was plugged into power; the other was connected to analog pin 1 and, through a 330 Ω resistor, fed into ground. The code was then modified again to include the analog read from the stretch sensor and a stretch limit (the exhale start value). The fan was moved to a permanent placement (so the casing could close), and the wires for the stretch sensor were fed through the USB slot.
Here is a link to our (Roula & Barb) revised project and the progress made. The project, code-named HAL, has expanded to project a human function onto a computer. We are exploring how to link the breath of a human to the “breath” of a computer through its fan.
I am a 2015 MFA DT candidate and the Facilities Manager of Photography at Parsons The New School for Design. HBO was my home for over a decade, and they nurtured my career every step of the way, but now I am looking for an opportunity to grow and make a meaningful contribution with my work. (Recent themes have been based on surfing, body metrics, and play.)