
Brainesthetics

Our goal is to create something that is aesthetically valuable using brainwaves. How can you influence the form, color and motion of a piece of art or a new media work? How can you personalize a piece by mapping your attention levels? How can we embody this idea of an immersive experience as a series of experiences and iterations using brain data?

We also came across the idea of neuroesthetics in our research: an attempt to combine neurological research with aesthetics by investigating the experience of beauty and the appreciation of art at the level of brain functions and mental states.

And in reference to our first iteration, we also put forward the idea of creating a piece of art from scratch by programming a drawing tool that functions with neuro data.
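To make that idea a little more concrete, here is a very rough openFrameworks sketch of what such a drawing tool could do. This is only a sketch, not working code from the project: it assumes we already get a normalized attention value (0.0 to 1.0) from the headset into a member variable called attention, which is purely a placeholder name.

    // Hypothetical sketch: a normalized attention value (0.0-1.0) from the BCI
    // drives brush size and hue. "attention" is a placeholder member variable.
    void testApp::draw(){
        float brushSize = ofMap(attention, 0.0, 1.0, 2.0, 40.0, true);   // calmer = thinner strokes
        ofColor brush   = ofColor::fromHsb(ofMap(attention, 0.0, 1.0, 160, 0, true), 200, 255);
        ofSetColor(brush);
        ofCircle(mouseX, mouseY, brushSize);   // the stroke simply follows the cursor for now
    }

The open design question is what the stroke should follow once the cursor is replaced by something the brain data can steer as well.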


So we are currently exploring the making of a series of three pieces that would reflect these notions. Our precedents and inspiration stem from visually beautiful or intriguing media.


 

 

POV on Kurzweil’s Pattern Recognition Theory of Mind

This is a review of Chapter 3 of Ray Kurzweil’s “How To Create A Mind”. The task of reverse engineering the brain might be one of the biggest undertakings in human civilization, and it is one that Kurzweil approaches almost casually in his new book “How to Create a Mind: The Secrets of Human Thought Revealed”. Kurzweil’s work has undoubtedly been instrumental to the field of AI, and as a pioneer in speech recognition technologies he is the right man to consult on the topic.

That the brain understands patterns through a hierarchical system is nothing new; researchers have worked under this assumption since the late 1950s. We have been able to create pattern recognizers that perform intelligent tasks to a certain extent, IBM’s Watson clearly being the best performer (although its parameters were very narrow and focused), and only last year Google tried to build an algorithm using the same logic but reached an accuracy rate of about 18%. Yann LeCun, the N.Y.U. researcher who was just appointed to run Facebook’s new A.I. lab, mentioned that: “AI [has] ‘died’ about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver. Backlash ensued. It happened twice with neural nets already: once in the late 60’s and again in the mid-90’s.” The point is that the brain learning in chunks, from low-level to high-level patterns, is not the interesting part – the interesting and still unknown part is how.

That we are able to replicate human thought with AI using this logic is yet to be proven, which leads to my main criticism – why did he not test his theory, as any good scientist would? He makes a lot of claims about how this theory would solve everything, with very little evidence. Moreover, he claims to reveal the secret of the human brain, thought, and emotion without fully making the connections. He seems mainly interested in replicating certain “thought processes” mathematically.

Much of his logic I would agree with, for example that a computer is far better equipped to process the 300 million pattern recognizers a human brain presumably has (how he arrived at this number is unclear). The idea that there are directed and undirected thought processes, which explains our states of dreaming or meditation versus goal-oriented thinking, also makes sense and pertains to the variables we are currently working with in this class – attention and meditation are to date the most reliable readings across the various BCI technologies. The way information flows down the conceptual hierarchy as well as up, enabling us to predict the future, is also plausible; however, how we are able to do this, or how we remember new patterns, remains unclear.

Overall, the PRTM does give the unfamiliar reader some sense of how we might begin computing a human brain. However, nothing new is revealed or tested, and he even falls short in proving the differences between his theory and Hawkins’s. Many of his unsupported ideas may in fact merely be … dreamt up.

Midterm Progress / Fabiola & Ashley

The goal has been to get the Raspberry Pi working with openFrameworks in order to measure the participants’ pulse with the pulse sensor during the “intimacy experiment”.

Conceptually, since this an experiment, here is the protocol: Touch Experiment Protocol and here is the questionnaire for before and after: Intimacy Experiment Questionnaire.

It’s been a long journey, but here is a step-by-step on how to get the Raspberry Pi working with openFrameworks, install wiringPi to talk to the GPIO pins, and install the ofxPulseSensor addon. Big thanks to Ayo for the help here.


Working with the Pi, setting up wiringPi and openFrameworks.


Testing out the pulse sensor to prepare it for the Pi.


Wiring the breadboard to include the converter.


Accessing the GPIO pins.

Step by step: 

1. Set up the SD card by writing the OS image to it, following the quick-start guide: http://www.raspberrypi.org/wp-content/uploads/2012/04/quick-start-guide-v2.pdf
2. Install openFrameworks on the card: http://www.creativeapplications.net/tutorials/how-to-use-openframeworks-on-the-raspberrypi-tutorial/

This is the code we are using in testApp.cpp to snap a picture on a key press and save the images to a folder. It took me a while to figure out why OF was complaining at first: ofVideoGrabber.getPixels() returns data of type unsigned char *, whereas ofImage.saveImage() wants to work from ofPixels. The confusing thing is that ofPixels is essentially a wrapper around the same unsigned char * data. The workaround was to create a temporary ofImage, whose setFromPixels() accepts either ofPixels or unsigned char *, so we can call it with the data from getPixels() on the camera and then save that out to a jpg.

#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup(){

    camWidth  = 320;   // try to grab at this size
    camHeight = 240;

    // we can now get back a list of devices
    vector<ofVideoDevice> devices = vidGrabber.listDevices();

    for(int i = 0; i < devices.size(); i++){
        cout << devices[i].id << ": " << devices[i].deviceName;
        if( devices[i].bAvailable ){
            cout << endl;
        }else{
            cout << " - unavailable " << endl;
        }
    }

    vidGrabber.setDeviceID(0);
    vidGrabber.setDesiredFrameRate(60);
    vidGrabber.initGrabber(camWidth, camHeight);

    videoInverted = new unsigned char[camWidth*camHeight*3];
    videoTexture.allocate(camWidth, camHeight, GL_RGB);
    ofSetVerticalSync(true);
}

//--------------------------------------------------------------
void testApp::update(){

    ofBackground(100,100,100);
    vidGrabber.update();

    if (vidGrabber.isFrameNew()){
        // invert every pixel of the new frame and upload it to a texture
        int totalPixels = camWidth*camHeight*3;
        unsigned char * pixels = vidGrabber.getPixels();
        for (int i = 0; i < totalPixels; i++){
            videoInverted[i] = 255 - pixels[i];
        }
        videoTexture.loadData(videoInverted, camWidth, camHeight, GL_RGB);
    }
}

//--------------------------------------------------------------
void testApp::draw(){

    ofSetHexColor(0xffffff);
    vidGrabber.draw(20,20);
}

//--------------------------------------------------------------
void testApp::keyPressed(int key){

    // Note from the openFrameworks example: in fullscreen mode the video
    // settings dialog can come up *under* the fullscreen window (use alt-tab
    // to reach it), and on OS X 10.7+ it requires building against the 10.6
    // SDK - see http://forum.openframeworks.cc/index.php?topic=10343
    if (key == 's' || key == 'S'){
        vidGrabber.videoSettings();
    }

    // 'p' grabs the current camera frame and saves it as a timestamped jpg
    if (key == 'p'){
        ofImage image;
        image.setFromPixels(vidGrabber.getPixels(), vidGrabber.width, vidGrabber.height, OF_IMAGE_COLOR);
        image.saveImage("camera " + ofGetTimestampString() + ".jpg");
    }
}
3. Install wiringPi through this tutorial: http://wiringpi.com/download-and-install/ Follow all the instructions on the page and enter them into the terminal. Note that the tutorial has an error: it prompts you to enter the wrong folder; you actually need to cd wiringPi/wiringPi instead.
4. In your project’s config.make file, add the following lines to the PROJECT LINKER FLAGS section:
PROJECT_LDFLAGS=-Wl,-rpath=./libs
PROJECT_LDFLAGS += -lwiringPi
5. Copy the ofxWiringPi folder into your project’s src folder as well as into your addons folder in openFrameworks.
6. Add the wiringPi #include and setup code from the basic example to your project’s testApp.cpp file (see the sketch after step 7 below).
7. sudo make run
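For reference, here is roughly what step 6 boils down to. This is a minimal sketch, not our project code: it uses the plain wiringPi C API rather than whatever the ofxWiringPi addon exposes, and it simply blinks wiringPi pin 0 as a sanity check that GPIO access works.

    #include "testApp.h"
    #include <wiringPi.h>   // installed in step 3

    void testApp::setup(){
        // wiringPiSetup() must run once before any other wiringPi call;
        // it needs root access to the GPIO, hence the sudo in step 7
        if (wiringPiSetup() == -1){
            ofLogError() << "wiringPi setup failed";
            return;
        }
        pinMode(0, OUTPUT);   // wiringPi pin numbering, not BCM numbering
    }

    void testApp::update(){
        // toggle the pin every frame as a simple GPIO test
        digitalWrite(0, ofGetFrameNum() % 2 == 0 ? HIGH : LOW);
    }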

This is how far we’ve gotten without any problems.

8. Since the Raspberry Pi doesn’t take analogue input, you need an analogue-to-digital converter to read the data from the pulse sensor (see details in the tutorial below).
9. We added the pulse sensor by following this tutorial: https://github.com/patriciogonzalezvivo/ofxPulseSensor However, the wiring there seems to be wrong, as the chip is put on backwards, so we followed the schematic below instead. You can cross-reference with the datasheet to make sure the pins are right: https://www.adafruit.com/datasheets/MCP3008.pdf

Pi wiring schematic for the pulse sensor.
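And here is a rough sketch of what reading the converter could look like once the wiring matches the datasheet. It assumes wiringPi’s mcp3004 extension (which also drives the 8-channel MCP3008), SPI enabled on the Pi, and the pulse sensor’s signal wire on ADC channel 0; the ofxPulseSensor addon does its own version of this, so treat the snippet only as a way to verify the wiring.

    #include "testApp.h"
    #include <wiringPi.h>
    #include <mcp3004.h>       // wiringPi extension for the MCP3004/3008 ADC

    #define PIN_BASE    100    // arbitrary virtual pin base for the ADC channels
    #define SPI_CHANNEL 0      // chip enable CE0 on the Pi header

    void testApp::setup(){
        wiringPiSetup();
        mcp3004Setup(PIN_BASE, SPI_CHANNEL);   // maps ADC channels 0-7 to pins 100-107
    }

    void testApp::update(){
        // raw 10-bit reading (0-1023) from the pulse sensor on ADC channel 0
        int raw = analogRead(PIN_BASE + 0);
        ofLogNotice() << "pulse sensor raw: " << raw;
    }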

This is how far we got in testing.

Precedents / Fabiola & Ashley

Here are some additional precedents we wanted to share for our project. Just to recap, it’s called the “Intimacy Experiment”: we hook up two head-mounted cameras to a pulse sensor, have two strangers approach each other, and the more their heart rates rise, the more pictures are taken.
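A minimal sketch of that mapping could look like the snippet below. The BPM source is whichever value the pulse sensor pipeline ends up giving us, so currentBPM and lastCaptureTime are hypothetical member variables, and the 60 to 120 BPM range is just an assumption for illustration.

    // Hypothetical sketch: a higher heart rate shortens the interval between saved frames.
    void testApp::update(){
        vidGrabber.update();

        // 60 BPM -> one photo every 4 seconds, 120 BPM -> one every half second
        float interval = ofMap(currentBPM, 60, 120, 4.0, 0.5, true);

        if (vidGrabber.isFrameNew() && ofGetElapsedTimef() - lastCaptureTime > interval){
            ofImage shot;
            shot.setFromPixels(vidGrabber.getPixels(), vidGrabber.width, vidGrabber.height, OF_IMAGE_COLOR);
            shot.saveImage("capture_" + ofGetTimestampString() + ".jpg");
            lastCaptureTime = ofGetElapsedTimef();
        }
    }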

“First Kiss” went viral these past few weeks, exemplifying the interest in experiments about human emotional connection and proximity.

The “Gender Swap” experiment was also shared quite a bit, showing how augmented reality can simulate out-of-body experiences. We are aiming for the same type of setting for our project.

“Touching Strangers” is a project in which photographer Richard Renaldi asks strangers to pose holding hands, as if they were family or loved ones. The project exemplifies how human intimacy can turn from awkward to beautiful quite quickly.

Duracell recently demonstrated how their batteries work in a very clever way, urging strangers to hold hands to close a circuit and let the current warm them up. It shows that people are more willing to touch for warmth than one might expect.

Imponderabilia is a 1977 piece by Marina Abramović and Ulay, pushing the boundaries of touch in a simple and artistic way.

Here is an article describing a simple experiment that could be done to measure the precision of touch. The reason we bring this up is that it gives us another aspect to consider for the project: namely, which body part to choose for the experiment, and the difference in sensitivity between body parts.

 

Work in progress / Fabiola & Ashley

We faced some technical problems (surprise, surprise) getting the Pi running on our card and the image set up, but we are going to keep working on it over the break. With the great resources available, and even though we’ve never worked with Python or the Pi, I think we should be fine, and it will be an exciting experience.


Here are the resources we’ve been using:

https://github.com/sightmachine/SimpleCV/blob/develop/doc/HOWTO-Install%20on%20RaspberryPi.rst

Here is the code we’ve been using to get it running:

$ sudo apt-get install ipython python-opencv python-scipy python-numpy python-pygame python-setuptools python-pip

$ sudo pip install https://github.com/ingenuitas/SimpleCV/zipball/master

$ sudo add-apt-repository ppa:gijzelaar/opencv2.3

$ sudo apt-get update

We need to get a web camera for the Mac, a WiFi dongle, a micro USB cable and a USB hub.

Update on form factor:

Luckily, the change of hardware will not affect the form factor of the piece much. Here is our plan for the user tests:

3 Phase Structure

1) Prepare each subject by having them fill out a short questionnaire that puts them in a headspace conducive to the project. The questionnaires will be filled out together. Strangers will occupy the space together but focus on the prompts given by Fabiola and me rather than on each other.

2) When finished, they will begin the touch experiment. The 5 minutes start once the first person begins touching. They have 5 minutes – perhaps longer – to touch each other and explore the sensations caused by the experiment. This will be documented on video and also through the camera attached to the subject. We are deciding whether we want to put them in a quiet space alone or in a space with a small crowd.

3) We will be introducing strangers of all races, genders, sexualities, etc. Since this is still a testing phase, we will be looking at how reactions vary as we change these components.

Here is a great article that discusses many of the topics we are concerned with.

 

 

 

Update on progress / Ashley & Fabiola

So, I’m posting last week’s progress, which was … very slow. Since neither Ashley nor I is exactly a queen of pComp (yet), we were facing problems figuring out how to tap into the GoPro to make it take pictures. We got the WiFi shield to communicate with its internal WiFi, but got stuck when we were going to write the code. After discussions in class, we decided to switch hardware from the Arduino to the Pi, and to switch the camera to a normal webcam. Ayo has offered to help us get started, which we appreciate a lot, and we should be able to get it all running during the break to hit the deadline.

Introduction

Aisen and Connor, since I was sick for the class where we did introductions, I sent you guys an email, but now I’m posting it here as well, per your request.

My background is in advertising, and I worked as a creative strategist for a few years before coming here. Basically, I did a lot of consumer research and social media strategy. After a while it bored me, and the advertising industry is just such a pain, so I decided to get into making and designing, which I had already started doing on the side. I had never touched code before coming here, and my technical skills are still weak, but I’m working on it.

I’m interested in psychology, semiotics, body interfaces and biotechnology, and I think this class allows me to explore these interests.

Here is a link to my portfolio.