
Response to How to Create a Mind – Betty

Ray Kurzweil’s Pattern Recognition Theory of the Mind proposes that rather than using deductive reasoning, humans learn through pattern recognizers that form connections between neurons. Chapter 3 of How to Create a Mind concentrates on the neocortex, which Kurzweil argues is made up of cortical columns containing a total of roughly 300 million pattern recognizers. According to Kurzweil, our neocortex is a blank slate when we are born, and only our experiences wire connections between pattern recognizers. His hypothesis holds that the structure of the neocortex is malleable, changing the hierarchical connections between its modules as we learn over time.
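The general idea of hierarchical recognizers can be sketched in a few lines of plain Java. This is only my own toy illustration of the concept, not the book’s actual model: the Recognizer class, its firing threshold, and the letters-to-word example are all invented here for clarity.

```java
// Toy sketch of hierarchical pattern recognition (an illustration only,
// not Kurzweil's actual model): leaf recognizers fire on letters, and a
// higher-level recognizer fires when enough of its children fire.
import java.util.List;

class Recognizer {
    String pattern;            // what this recognizer responds to
    List<Recognizer> children; // lower-level recognizers feeding it
    double threshold;          // fraction of children that must fire

    Recognizer(String pattern, List<Recognizer> children, double threshold) {
        this.pattern = pattern;
        this.children = children;
        this.threshold = threshold;
    }

    // A leaf recognizer fires if its pattern appears in the input;
    // a higher-level one fires if enough of its children fire.
    boolean fires(String input) {
        if (children == null || children.isEmpty()) {
            return input.contains(pattern);
        }
        long firing = children.stream().filter(c -> c.fires(input)).count();
        return (double) firing / children.size() >= threshold;
    }
}

public class PatternDemo {
    public static void main(String[] args) {
        Recognizer a = new Recognizer("a", null, 0);
        Recognizer p = new Recognizer("p", null, 0);
        Recognizer l = new Recognizer("l", null, 0);
        Recognizer e = new Recognizer("e", null, 0);
        // The "apple" recognizer still fires with one letter missing
        // (0.75 threshold), echoing the point that recognizers in the
        // neocortex tolerate incomplete input.
        Recognizer apple = new Recognizer("apple", List.of(a, p, l, e), 0.75);
        System.out.println(apple.fires("apl")); // prints true
    }
}
```

The threshold is what makes the toy feel like Kurzweil’s description: recognition degrades gracefully rather than failing outright when part of a pattern is absent.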

I found this argument particularly convincing because of the examples he gave:

“[Translating memories into language] is also accomplished by the neocortex, using pattern recognizers trained with patterns that we have learned for the purpose of using language. Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality.”

If reality truly is hierarchical, as Kurzweil argues, then it would be natural that the way our brain processes information would be hierarchical as well.

This chapter could easily have been extremely technical. Overall, however, he explained the concepts of hierarchical pattern recognition clearly, with easily understood examples, such as higher- and lower-level recognizers passing messages back and forth as we recognize a loved one.

Kurzweil’s pattern recognition theory of the mind draws many parallels between the way we process and learn things and the way he believes computers (artificial intelligence) learn. Ultimately, Kurzweil makes the case that one day we will be able to imitate, and perhaps surpass, human intelligence. While I personally do not believe the brain is just a simple algorithmic process, as he implies, the chapter did start me thinking about the capabilities of artificial intelligence: if we did create such minds, how far should we take them?

For example, here is one quote that made me begin to think about how “human” our potential AIs should be:

“However, if I don’t think about her for a given period of time, then these pattern recognizers will become reassigned to other patterns. That is why memories grow dimmer with time: The amount of redundancy becomes reduced until certain memories become extinct.”

If “memories become extinct”, how can an AI imitate that? Should it imitate that? With all the distortions in our logical thinking, how much of human intelligence would we want to imitate?
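As a thought experiment, the redundancy mechanism in that quote can be caricatured in a few lines of plain Java. Everything here is my own toy assumption, not anything from the book: each memory holds a count of redundant recognizers, recall recruits another one, and unrecalled memories lose recognizers to reassignment until they go “extinct.”

```java
// Toy model (my own illustration, not Kurzweil's) of memory fading
// through loss of redundant pattern recognizers.
import java.util.HashMap;
import java.util.Map;

public class MemoryDecay {
    Map<String, Integer> redundancy = new HashMap<>();

    void store(String memory, int copies) {
        redundancy.put(memory, copies);
    }

    // Recalling a memory re-strengthens it by recruiting another recognizer.
    void recall(String memory) {
        redundancy.computeIfPresent(memory, (m, r) -> r + 1);
    }

    // Each time step, every memory loses one recognizer to reassignment;
    // a memory with no recognizers left is extinct and removed.
    void tick() {
        redundancy.replaceAll((m, r) -> r - 1);
        redundancy.values().removeIf(r -> r <= 0);
    }

    boolean isExtinct(String memory) {
        return !redundancy.containsKey(memory);
    }

    public static void main(String[] args) {
        MemoryDecay mind = new MemoryDecay();
        mind.store("a loved one's face", 3);
        mind.tick();                       // down to 2 copies
        mind.recall("a loved one's face"); // back up to 3
        mind.tick(); mind.tick(); mind.tick();
        System.out.println(mind.isExtinct("a loved one's face")); // prints true
    }
}
```

An AI built this way would forget the way Kurzweil says we do; the open question is whether we would ever want it to.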

Imagine if we created an artificial intelligence that could be exposed to as many experiences and sensory inputs as we are bombarded by every day, but with pattern recognizers less prone to the distortions of memory that are natural in humans. I was hoping that Kurzweil would touch on the moral responsibilities we have in creating artificial intelligence, but unfortunately, he did not dive into its potential dangers.

Can we truly replicate human intelligence? We can replicate the logical process, perhaps, but can we replicate empathy? Emotion? While I don’t doubt that one day AIs will have the ability to surpass our own intelligence, I doubt that we can replicate how the human mind truly works.

Meditation Moon demo

We used the data coming in from the meditation channel to manipulate the visual of the moon. More of the moon is revealed as the person’s meditation level increases. The main part of the Processing code is below; the channels, connection light, and graphs are left untouched:

import processing.serial.*;
import controlP5.*;

ControlP5 controlP5;

Serial serial;

float[] intData = new float[11]; // this is where we are going to store the values each frame

Channel[] channels = new Channel[11];
Monitor[] monitors = new Monitor[10];
Graph graph;
ConnectionLight connectionLight;
String serialPort = "/dev/tty.usbmodem1411";

int packetCount = 0;
int globalMax = 0;

int myValue = 0;

String scaleMode;

PImage moon;

void drawStar(float dataValue) {
  // dataValue is the meditation reading (0-100). If a full 0-360 sweep
  // is ever needed, it could be mapped like this:
  // float newDataCalc = map(dataValue, 0, 100, 0, 360);

  for (int i = 15; i < 360; i += 30) {
    // A higher meditation value allows rays at wider angles,
    // revealing more of the moon behind the star.
    float angle = radians(random(0, dataValue));

    float x = (200/2 + cos(angle) * (min(200, 200)/2) * 0.72);
    float y = (200/2 + sin(angle) * (min(200, 200)/2) * 0.72);

    for (int z = 0; z < 360; z += 30) { // 30 or 120
      float angle2 = radians(z);
      float x2 = (200/2 + cos(angle2) * (min(200, 200)/2) * 0.15);
      float y2 = (200/2 + sin(angle2) * (min(200, 200)/2) * 0.15);

      // this line creates the big star and middle star
      line(x2, y2, x, y);

      float x3 = (200/2 + cos(angle) * (min(200, 200)/2) * 0.85);
      float y3 = (200/2 + sin(angle) * (min(200, 200)/2) * 0.85);
      // this line extends the big star
      line(x, y, x3, y3);

      float x4 = (200/2 + cos(angle2) * (min(200, 200)/2) * 0.88);
      float y4 = (200/2 + sin(angle2) * (min(200, 200)/2) * 0.88);
      // this line closes the extensions
      line(x3, y3, x4, y4);
    }
  }
  // delay(3000 - int(dataValue) * 30);
}


void setup() {
  // Set up window
  size(1024, 768);
  frameRate(60);
  smooth();
  frame.setTitle("Processing Brain Grapher");

  moon = loadImage("moon.jpg");

  // Set up serial connection
  println("Find your Arduino in the list below, note its [index]:\n");

  for (int i = 0; i < Serial.list().length; i++) {
    println("[" + i + "] " + Serial.list()[i]);
  }

  // Put the index found above here:
  serial = new Serial(this, serialPort, 9600);
  serial.bufferUntil(10);

  // Set up the ControlP5 knobs and dials
  controlP5 = new ControlP5(this);
  controlP5.setColorLabel(color(0));
  controlP5.setColorBackground(color(0));
  controlP5.disableShortcuts();
  controlP5.disableMouseWheel();
  controlP5.setMoveable(false);

  // Create the channel objects
  channels[0] = new Channel("Signal Quality", color(0), "");
  channels[1] = new Channel("Attention", color(100), "");
  channels[2] = new Channel("Meditation", color(50), "");
  channels[3] = new Channel("Delta", color(219, 211, 42), "Dreamless Sleep");
  channels[4] = new Channel("Theta", color(245, 80, 71), "Drowsy");
  channels[5] = new Channel("Low Alpha", color(237, 0, 119), "Relaxed");
  channels[6] = new Channel("High Alpha", color(212, 0, 149), "Relaxed");
  channels[7] = new Channel("Low Beta", color(158, 18, 188), "Alert");
  channels[8] = new Channel("High Beta", color(116, 23, 190), "Alert");
  channels[9] = new Channel("Low Gamma", color(39, 25, 159), "Multi-sensory processing");
  channels[10] = new Channel("High Gamma", color(23, 26, 153), "???");

  // Manual override for a couple of limits.
  channels[0].minValue = 0;
  channels[0].maxValue = 200;
  channels[1].minValue = 0;
  channels[1].maxValue = 100;
  channels[2].minValue = 0;
  channels[2].maxValue = 100;
  channels[0].allowGlobal = false;
  channels[1].allowGlobal = false;
  channels[2].allowGlobal = false;

  // Set up the monitors, skip the signal quality
  for (int i = 0; i < monitors.length; i++) {
    monitors[i] = new Monitor(channels[i + 1], i * (width / 10), height / 2, width / 10, height / 2);
  }

  monitors[monitors.length - 1].w += width % monitors.length;

  // Set up the graph
  graph = new Graph(0, 0, width, height / 2);

  // Set up the connection light
  connectionLight = new ConnectionLight(width - 140, 10, 20);
}
void draw() {

  // HERE's the good stuff ========================================//
  // NOTE: In order to see these values being printed, you must have a
  // signal quality better than 200. This can be changed in serialEvent()
  // below, at:
  //   if ((Integer.parseInt(incomingValues[0]) == 200) && (i > 2)) {
  //     newValue = 0;
  //   }
  for (int i = 0; i < channels.length; i++) { // loop through all 11 channels
    // getLatestPoint() is a method of the Channel class; copy the value of
    // the latest point into the corresponding intData cell (declared at the
    // top for this purpose)
    Point tempPoint = channels[i].getLatestPoint();
    intData[i] = tempPoint.value;
  }
  // HERE's the good stuff ========================================//

  // Keep track of global maxima
  if ("Global".equals(scaleMode) && (channels.length > 3)) {
    for (int i = 3; i < channels.length; i++) {
      if (channels[i].maxValue > globalMax) globalMax = channels[i].maxValue;
    }
  }

  // Clear the background
  background(255);

  // Update and draw the main graph
  graph.update();
  graph.draw();

  // Update and draw the connection light
  connectionLight.update();
  connectionLight.draw();

  // Update and draw the monitors
  for (int i = 0; i < monitors.length; i++) {
    monitors[i].update();
    monitors[i].draw();
  }

  // Draw the moon, then overlay the star driven by the meditation value
  pushMatrix();
  scale(0.5, 0.5);
  image(moon, 0, 0);
  popMatrix();

  noFill();
  stroke(255);

  pushMatrix();
  scale(2.5, 2.6);
  translate(10, 0);
  drawStar(intData[2]); // intData[2] holds the meditation channel
  popMatrix();

  println(intData[2]);
  // delay(3000 - int(intData[2]) * 30);
}

void serialEvent(Serial p) {
  // Split incoming packet on commas. See
  // https://github.com/kitschpatrol/Arduino-Brain-Library/blob/master/README
  // for information on the CSV packet format.
  String incomingString = p.readString().trim();
  String[] incomingValues = split(incomingString, ',');

  // Verify that the packet looks legit
  if (incomingValues.length > 1) {
    packetCount++;

    // Wait till the third packet or so to start recording to avoid
    // initialization garbage.
    if (packetCount > 3) {
      for (int i = 0; i < incomingValues.length; i++) {
        String stringValue = incomingValues[i].trim();
        int newValue = Integer.parseInt(stringValue);

        // Zero the EEG power values if we don't have a signal.
        // Can be useful to leave them in for development.
        if ((Integer.parseInt(incomingValues[0]) == 200) && (i > 2)) {
          newValue = 0;
        }

        channels[i].addDataPoint(newValue);
      }
    }
  }
}
// Utilities

// Extend Processing's built-in map() function to support the long datatype
long mapLong(long x, long in_min, long in_max, long out_min, long out_max) {
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

// Extend Processing's built-in constrain() function to support the long datatype
long constrainLong(long value, long min_value, long max_value) {
  if (value > max_value) return max_value;
  if (value < min_value) return min_value;
  return value;
}
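As a quick sanity check, here is how those two helpers could work together to turn a meditation reading (0 to 100) into a reveal angle (0 to 360 degrees), first clamping any out-of-range value coming off the serial port. The standalone UtilDemo wrapper and the sample values are mine, for illustration only; the helper bodies match the ones above.

```java
// Sketch of using constrainLong + mapLong to scale a meditation reading
// (0-100) into a reveal angle (0-360 degrees). The wrapper class and
// sample values are illustrative assumptions.
public class UtilDemo {
    static long mapLong(long x, long in_min, long in_max, long out_min, long out_max) {
        return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
    }

    static long constrainLong(long value, long min_value, long max_value) {
        if (value > max_value) return max_value;
        if (value < min_value) return min_value;
        return value;
    }

    public static void main(String[] args) {
        long raw = 130;                               // noisy reading off the headset
        long meditation = constrainLong(raw, 0, 100); // clamp to the valid range
        long angle = mapLong(meditation, 0, 100, 0, 360);
        System.out.println(angle);                    // prints 360: moon fully revealed
    }
}
```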


– Project by Betty, Jagrat, Stephanie, Fei and Maxine

About Betty

My name is Betty and I’m a 1st year in MFADT. Prior to coming here, I studied art history, film and new media. I became interested in projections as a way to transform environments and manipulate the senses. While in DT, I’m interested in creating immersive, surreal physical and digital installations. I’m excited to explore the intersection of wearable tech and psychology in this class; I would love to do a project concerning mental disorders or synesthesia.