Wednesday, May 8, 2013

ISTA 401: Final Project - Faces



Faces is a work that achieves no clear goal, implies no motive, and follows no conventional rules of art. It may be best to consider it close to a Dadaist or anti-art work, because if anything it mocks art. The project started as several ideas and slowly merged into one clearly identifiable work.

Concept number one was the idea that with a phone you could write on a virtual wall at any time. Drawn from a project that allowed users to virtually draw on buildings with laser pointers, it was a natural urge to condense the huge space the original project worked in down to a small showroom. For this we would use a camera watching a crowd in front of a projected mirror, essentially an image of the crowd projected back in front of the crowd. The crowd would watch the mirror and see themselves, but with a slight twist. If someone pulled out their phone and opened an app that turned the screen green-screen green, they could then write on the mirror by holding the phone up to the camera. The technical details are simple enough: a camera watching the crowd looks for a certain color, namely green-screen green. Wherever it sees that color, it changes those pixels to a solid output color. The projector showing the mirror then displays this altered image with a drawing on it. The total effect makes users feel like they are drawing on the mirror with a phone. The idea was great for several reasons. First, it was interactive, which in my opinion is always very important. Second, the user controls the outcome of the work; once the program is set, it is left to the masses to decide its fate. The implementation was more complex than we could attempt in a four-week assignment, so we moved on to our next idea.
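The pixel-replacement step can be sketched in plain Java (the chroma thresholds and the tiny test frame are illustrative assumptions, not numbers from our project):

```java
// Sketch of the chroma-key idea: scan a frame's ARGB pixels and repaint
// any "green screen green" pixel with a solid ink color. The thresholds
// here are illustrative assumptions.
public class ChromaKey {
    // Replace pixels whose green channel dominates with the given ink color.
    static int[] keyToInk(int[] pixels, int ink) {
        int[] out = pixels.clone();
        for (int i = 0; i < out.length; i++) {
            int r = (out[i] >> 16) & 0xFF;
            int g = (out[i] >> 8) & 0xFF;
            int b = out[i] & 0xFF;
            // crude chroma test: strong green, noticeably above red and blue
            if (g > 150 && g > r + 50 && g > b + 50) {
                out[i] = ink;
            }
        }
        return out;
    }
    public static void main(String[] args) {
        int green = 0xFF00FF00, white = 0xFFFFFFFF, ink = 0xFF000000;
        int[] frame = {white, green, white};
        int[] drawn = keyToInk(frame, ink);
        System.out.println(drawn[0] == white && drawn[1] == ink); // prints true
    }
}
```

In the installation, the same test would run over each camera frame's pixel array before the projector redraws the mirror.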

This idea was drawn from two different sources merged together. One of my partner's mothers had the idea of a picture whose eyes would follow you wherever you went. Not a bad idea in our opinion, although slightly creepy. So we ran with it and turned our former idea of using a device as a drawing tool into a pointing tool; not that far of a jump in our opinion. Thus we came up with a system where the camera looks for a laser pointer's bright dot on a surface. Wherever that dot landed would be where the creepy eyes of our picture looked. It worked . . . for the most part. Initially we used a library called Mayron for the image analysis, but we ran into limitations. The library could only see certain colors and would report every single pixel currently that color, a problem since a laser pointer's dot generally looks white. The program would find all white pixels and essentially crash. This wasn't going to work.
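One common workaround (not what we ended up shipping, just a sketch) is to collapse all the bright pixels into a single tracking point by averaging their positions; the brightness cutoff below is an assumption:

```java
// Sketch: reduce all "laser-bright" pixels to one tracking point by
// averaging their coordinates. The brightness cutoff is an assumption.
public class LaserDot {
    // Returns {x, y} centroid of bright pixels, or null if none found.
    static int[] centroid(int[] pixels, int w, int h, int cutoff) {
        long sx = 0, sy = 0, n = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = pixels[y * w + x];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                if ((r + g + b) / 3 >= cutoff) { sx += x; sy += y; n++; }
            }
        }
        return n == 0 ? null : new int[]{ (int) (sx / n), (int) (sy / n) };
    }
    public static void main(String[] args) {
        int[] frame = new int[9];              // 3x3 all-black frame
        frame[4] = 0xFFFFFFFF;                 // one white "dot" at (1, 1)
        int[] c = centroid(frame, 3, 3, 200);
        System.out.println(c[0] + "," + c[1]); // prints 1,1
    }
}
```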
Our last and final iteration was a natural step from iteration #2. We wanted the eyes of the portraits to follow people, so we needed software that could recognize people. It turns out there is a library called OpenCV with a facial recognition feature built in. This was perfect! We implemented it so that the eyes would follow a face walking past the image, and we had our final product.

Well, it wasn't quite that easy. OpenCV is a great piece of software that has been adapted and used in many different projects, programs, and operating systems. It is very popular in image processing and helped us tremendously, but not without some headaches. First we found that importing the OpenCV library into Processing on a PC is close to impossible; further, it's not possible on a 64-bit machine, which is what mine is. Fortunately it is very easy to use on a Mac, so we used all Macs to complete the project. We also wanted a nice variety of portraits to be shown, so we made a slideshow feature with a fade in and out. This works perfectly, creating the virtual art gallery feel we were looking for, with a special, creepy modification. Now we have the full effect.

Together as a team we completed this very complex and ever-changing project. I have to give full credit to my teammates for all their hard work. I personally did all of the eye placement and the work to get the eyes to match up with the pictures that my partner edited for eye holes. It was a fun project with a creepy outcome. Just picture this in an art gallery: you would initially think it's just a projection of a famous painting, but then notice that it is watching you everywhere you go!
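The heart of the eye-following effect is Processing's map() call, which linearly rescales the detected face center from camera space into the small box a pupil is allowed to roam. The same math in plain Java, using the Mona Lisa values from the sketch below (left eye centered at x = 605, 40 px wide):

```java
// Sketch of the pupil-placement math: linearly rescale a face position
// from camera space into the small box around one painted eye.
public class PupilMap {
    // Same formula as Processing's map(value, start1, stop1, start2, stop2).
    static float map(float v, float s1, float e1, float s2, float e2) {
        return s2 + (e2 - s2) * (v - s1) / (e1 - s1);
    }
    public static void main(String[] args) {
        // A face dead-center in a 1280px-wide frame lands dead-center
        // in the 40px-wide eye box centered at x = 605.
        float x = map(640, 0, 1280, 605 - 20, 605 + 20);
        System.out.println(x); // prints 605.0
    }
}
```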

Code below:

import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;
// contrast/brightness values
int contrast_value    = 0;
int brightness_value  = 0;
int X1;
int Y1;
int X2;
int Y2;
int opac=255;
int order = 0;
int m=0;
int last=0;
boolean active = false;
boolean X = false;
boolean next = false;


int xcenterlefteye;
int ycenterlefteye;
int xcenterrighteye;
int ycenterrighteye;

int eyewidth;
int eyeheight;

int leftedgelefteye;
int rightedgelefteye;

int leftedgerighteye;
int rightedgerighteye;

int topedgelefteye;
int bottomedgelefteye;

int topedgerighteye;
int bottomedgerighteye;

int X12;
int Y12;
int X22;
int Y22;

int xcenterlefteye2;
int ycenterlefteye2;
int xcenterrighteye2;
int ycenterrighteye2;

int eyewidth2;
int eyeheight2;

int leftedgelefteye2;
int rightedgelefteye2;

int leftedgerighteye2;
int rightedgerighteye2;

int topedgelefteye2;
int bottomedgelefteye2;

int topedgerighteye2;
int bottomedgerighteye2;

PImage imgMona;
PImage imgjohn;
PImage imgobama;
PImage imgfarm;
PImage imggw;

void setup() {

    size( 1280, 800 );

    opencv = new OpenCV( this );
    opencv.capture( width, height);                   // open video stream
    opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-> front face detection : "haarcascade_frontalface_alt.xml"
    
    imgMona = loadImage("Mona.png");
    imgjohn = loadImage("john.png");
    imgobama = loadImage("obama.png");
    imgfarm = loadImage("farm.png");
    imggw = loadImage("gw.png");

}


public void stop() {
    opencv.stop();
    super.stop();
}


void draw() {
    background(0);
    int h = height;
    int w= width;
    opencv.read();
    //opencv.absDiff();
    opencv.flip( OpenCV.FLIP_HORIZONTAL );
   
    print("time " +m/1000+"\n");
    print("order " +order+"\n");
    print("next " +next+"\n");
    print("active " +active+"\n\n\n\n");
    
    m = millis()-last;
    
    if(millis()>last+30000){
      last=millis();
      next=true;
    }
     if(next){
      opac = opac-25;
      print("-opac = " +opac+'\n');
     }
     if((opac<-25) && (active==false)){
       next=false;
       active = true;
       order = order + 1;
     }
     if(active){
       
       opac = opac + 25;
       print("+opac = " +opac+'\n');
     }
     if ((opac > 255) && (active)){
        active=false;
       
     }
     if(order==5){
       order=0;
     }
     
    // proceed detection
    Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

   
    
    // draw face area(s)
    noFill();
    //stroke(255,0,0);
    noStroke();
    for( int i=0; i<faces.length; i++ ) {
        rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); 
        X1=(faces[i].x+(faces[i].width/2));
        Y1=(faces[i].y+(faces[i].height/2));
        X12=(faces[i].x+(faces[i].width/2));
        Y12=(faces[i].y+(faces[i].height/2));
        //print("X "+X1+ "    Y "+ Y1+"\n");
    }
    ellipseMode(CENTER);

 stroke(0);   
    
  if (order==0){
        //Mona
        xcenterlefteye = 605;
        ycenterlefteye = 275;
        xcenterrighteye = 685;
        ycenterrighteye = 275;
        
        eyewidth = 40;
        eyeheight = 20;
        
        leftedgelefteye = xcenterlefteye - eyewidth/2;
        rightedgelefteye = xcenterlefteye + eyewidth/2;
      
        leftedgerighteye = xcenterrighteye - eyewidth/2;
        rightedgerighteye = xcenterrighteye + eyewidth/2;
      
        topedgelefteye = ycenterlefteye - eyeheight/2;
        bottomedgelefteye = ycenterlefteye + eyeheight/2;
      
        topedgerighteye = ycenterrighteye - eyeheight/2;
        bottomedgerighteye = ycenterrighteye + eyeheight/2;
        
        
        //left eye
        float x1 = map(X1, 0, width, leftedgelefteye, rightedgelefteye);
        float y1 = map(Y1, 0, height, topedgelefteye, bottomedgelefteye);
        //right eye
        float x2 = map(X1, 0, width, leftedgerighteye, rightedgerighteye);
        float y2 = map(Y1, 0, height, topedgerighteye, bottomedgerighteye);
        
        ellipseMode(CENTER);
        
        //eyes
        fill(231,192,105,opac);
        ellipse(xcenterlefteye, ycenterlefteye, eyewidth, eyeheight);
        ellipse(xcenterrighteye, ycenterrighteye, eyewidth, eyeheight);
       
         
        //pupils
        fill(0,0,0,opac);
        ellipse(x1, y1, 10, 10);
        ellipse(x2, y2, 10, 10);
        
        tint(255,opac);
        image(imgMona,width/2-400,0);
  }
  
  if (order==1) { 
        //obama
        xcenterlefteye = 640;
        ycenterlefteye = 320;
        xcenterrighteye = 710;
        ycenterrighteye = 317;
        
        eyewidth = 35;
        eyeheight = 15;
        
        leftedgelefteye = xcenterlefteye - eyewidth/2;
        rightedgelefteye = xcenterlefteye + eyewidth/2;
      
        leftedgerighteye = xcenterrighteye - eyewidth/2;
        rightedgerighteye = xcenterrighteye + eyewidth/2;
      
        topedgelefteye = ycenterlefteye - eyeheight/2;
        bottomedgelefteye = ycenterlefteye + eyeheight/2;
      
        topedgerighteye = ycenterrighteye - eyeheight/2;
        bottomedgerighteye = ycenterrighteye + eyeheight/2;
        
        
        //left eye
        float x1 = map(X1, 0, width, leftedgelefteye, rightedgelefteye);
        float y1 = map(Y1, 0, height, topedgelefteye, bottomedgelefteye);
        //right eye
        float x2 = map(X1, 0, width, leftedgerighteye, rightedgerighteye);
        float y2 = map(Y1, 0, height, topedgerighteye, bottomedgerighteye);
        
        ellipseMode(CENTER);
        
        //eyes
        fill(255,255,255,opac);
        ellipse(xcenterlefteye, ycenterlefteye, eyewidth, eyeheight);
        ellipse(xcenterrighteye, ycenterrighteye, eyewidth, eyeheight);
       
         
        //pupils
        fill(0,0,0,opac);
        ellipse(x1, y1, 7, 7);
        ellipse(x2, y2, 7, 7);
        
        tint(255,opac);
        image(imgobama, (width/2)-180, (height/2)-250);  
      
  }
  if(order==2){
        //john
        xcenterlefteye = 520;
        ycenterlefteye = 373;
        xcenterrighteye = 670;
        ycenterrighteye = 373;
        
        eyewidth = 50;
        eyeheight = 30;
        
        leftedgelefteye = xcenterlefteye - eyewidth/2;
        rightedgelefteye = xcenterlefteye + eyewidth/2;
      
        leftedgerighteye = xcenterrighteye - eyewidth/2;
        rightedgerighteye = xcenterrighteye + eyewidth/2;
      
        topedgelefteye = ycenterlefteye - eyeheight/2;
        bottomedgelefteye = ycenterlefteye + eyeheight/2;
      
        topedgerighteye = ycenterrighteye - eyeheight/2;
        bottomedgerighteye = ycenterrighteye + eyeheight/2;
        
        
        //left eye
        float x1 = map(X1, 0, width, leftedgelefteye, rightedgelefteye);
        float y1 = map(Y1, 0, height, topedgelefteye, bottomedgelefteye);
        //right eye
        float x2 = map(X1, 0, width, leftedgerighteye, rightedgerighteye);
        float y2 = map(Y1, 0, height, topedgerighteye, bottomedgerighteye);
        
        ellipseMode(CENTER);
        
        //eyes
        fill(255,209,189,opac);
        ellipse(xcenterlefteye, ycenterlefteye, eyewidth, eyeheight);
        ellipse(xcenterrighteye, ycenterrighteye, eyewidth, eyeheight);
       
         
        //pupils
        fill(0,0,0,opac);
        ellipse(x1, y1, 13, 13);
        ellipse(x2, y2, 13, 13);
        
        tint(255,opac);
        image(imgjohn, (width/2)-221, (height/2)-300);
  }
  if(order==3){
        //farm
  
  //eyes # 1
  xcenterlefteye = 485;
  ycenterlefteye = 276;
  xcenterrighteye = 525;
  ycenterrighteye = 276;
  
  eyewidth = 20;
  eyeheight = 9;
  
  leftedgelefteye = xcenterlefteye - eyewidth/2;
  rightedgelefteye = xcenterlefteye + eyewidth/2;

  leftedgerighteye = xcenterrighteye - eyewidth/2;
  rightedgerighteye = xcenterrighteye + eyewidth/2;

  topedgelefteye = ycenterlefteye - eyeheight/2;
  bottomedgelefteye = ycenterlefteye + eyeheight/2;

  topedgerighteye = ycenterrighteye - eyeheight/2;
  bottomedgerighteye = ycenterrighteye + eyeheight/2;
  
  
  //left eye
  float x1 = map(X1, 0, width, leftedgelefteye, rightedgelefteye);
  float y1 = map(Y1, 0, height, topedgelefteye, bottomedgelefteye);
  //right eye
  float x2 = map(X1, 0, width, leftedgerighteye, rightedgerighteye);
  float y2 = map(Y1, 0, height, topedgerighteye, bottomedgerighteye);
  
  ellipseMode(CENTER);
  
  //eyes
  fill(205,205,205,opac);
  ellipse(xcenterlefteye, ycenterlefteye, eyewidth, eyeheight);
  ellipse(xcenterrighteye, ycenterrighteye, eyewidth, eyeheight);

   
  //pupils
  fill(0,0,0,opac);
  ellipse(x1, y1, 4, 4);
  ellipse(x2, y2, 4, 4);

  //////////////////////////////////
  
  
  //eyes # 2
  xcenterlefteye2 = 730;
  ycenterlefteye2 = 210;
  xcenterrighteye2 = 785;
  ycenterrighteye2 = 210;
  
  eyewidth2 = 20;
  eyeheight2 = 10;
  
  leftedgelefteye2 = xcenterlefteye2 - eyewidth2/2;
  rightedgelefteye2 = xcenterlefteye2 + eyewidth2/2;

  leftedgerighteye2 = xcenterrighteye2 - eyewidth2/2;
  rightedgerighteye2 = xcenterrighteye2 + eyewidth2/2;

  topedgelefteye2 = ycenterlefteye2 - eyeheight2/2;
  bottomedgelefteye2 = ycenterlefteye2 + eyeheight2/2;

  topedgerighteye2 = ycenterrighteye2 - eyeheight2/2;
  bottomedgerighteye2 = ycenterrighteye2 + eyeheight2/2;
  
  
  //left eye
  float x12 = map(X12, 0, width, leftedgelefteye2, rightedgelefteye2);
  float y12 = map(Y12, 0, height, topedgelefteye2, bottomedgelefteye2);
  //right eye
  float x22 = map(X12, 0, width, leftedgerighteye2, rightedgerighteye2);
  float y22 = map(Y12, 0, height, topedgerighteye2, bottomedgerighteye2);
  
  ellipseMode(CENTER);
  
  //eyes
  fill(205,205,205,opac);
  ellipse(xcenterlefteye2, ycenterlefteye2, eyewidth2, eyeheight2);
  ellipse(xcenterrighteye2, ycenterrighteye2, eyewidth2, eyeheight2);

    //pupils
  fill(0,0,0,opac);
  ellipse(x12, y12, 4, 4);
  ellipse(x22, y22, 4, 4);

  
  tint(255,opac);
  image(imgfarm, (width/2)-301, (height/2)-360);
  
  
  }
   if(order==4){
        //gw
  xcenterlefteye = 556;
  ycenterlefteye = 332;
  xcenterrighteye = 630;
  ycenterrighteye = 333;
  
  eyewidth = 30;
  eyeheight = 15;
  
  leftedgelefteye = xcenterlefteye - eyewidth/2;
  rightedgelefteye = xcenterlefteye + eyewidth/2;

  leftedgerighteye = xcenterrighteye - eyewidth/2;
  rightedgerighteye = xcenterrighteye + eyewidth/2;

  topedgelefteye = ycenterlefteye - eyeheight/2;
  bottomedgelefteye = ycenterlefteye + eyeheight/2;

  topedgerighteye = ycenterrighteye - eyeheight/2;
  bottomedgerighteye = ycenterrighteye + eyeheight/2;
  
  
  //left eye
  float x1 = map(X1, 0, width, leftedgelefteye, rightedgelefteye);
  float y1 = map(Y1, 0, height, topedgelefteye, bottomedgelefteye);
  //right eye
  float x2 = map(X1, 0, width, leftedgerighteye, rightedgerighteye);
  float y2 = map(Y1, 0, height, topedgerighteye, bottomedgerighteye);
  
  ellipseMode(CENTER);
  
  //eyes

  fill(205,205,205,opac);
  ellipse(xcenterlefteye, ycenterlefteye, eyewidth, eyeheight);
  ellipse(xcenterrighteye, ycenterrighteye, eyewidth, eyeheight);

   
  //pupils
  fill(0,0,0,opac);
  ellipse(x1, y1, 6, 6);
  ellipse(x2, y2, 6, 6);
  
   
   
   tint(255,opac);
  image(imggw, (width/2)-302, (height/2)-250);
   }
    
    
    
    if(key=='o'){
      tint(255,125);
        image( opencv.image(), 0,0, displayWidth, displayHeight);
    }

    if ((key=='z')){
      X = false;
    }
      if ((key=='x')){
      X = true;
    }
    if (X){
      fill(255);
      noStroke();
      quad(X1-10,Y1+10,X1-7,Y1+10,X1+10,Y1-10,X1+7,Y1-10);
      quad(X1+10,Y1+10,X1+7,Y1+10,X1-10,Y1-10,X1-7,Y1-10);
      
    }
}





or link to project:
https://www.dropbox.com/s/flxhx7wlji4ofxm/face_detectionE2.pde

Tuesday, April 16, 2013

ISTA 401: iProcessing

      Today I wanted to explore the world of mobile Processing. In today's world we are always on the go, and having our laptop with us at all times just isn't possible, so what are our options when it comes to mobile devices? Everyone today has at least a phone, probably a smartphone, if not a tablet of some kind as well. I personally have an old-school iPhone 4 and a new iPad. I can and often do carry these everywhere I go, and I have always wondered: if I had a moment of inspiration about a Processing project I was working on, could I sit down and crank out a sketch anywhere in the world, at any time? Turns out that yes, you can, and not only that, but you have options. Below are a few applications for the iPhone and iPad. For all you non-Apple people out there, I'm sorry for leaving you out, but I have no way of testing apps on other devices.
      Before we continue, though, I must throw out a disclaimer about these cool applications: 1) they are JavaScript-based so that they can run on Apple devices, and 2) they are in some ways limited. These are not the peak of programming applications, but in a pinch you could definitely find a bench anywhere, anytime, and get your ideas out of your brain and into a sketch. So enjoy the following, but don't get disappointed when you can't run your favorite image or video libraries on these primitive applications.

First up:



Processing for iOS (Javascript): FREE
By Boyd Rotgans

Simple Processing (JS) language editor.

Version 1.0
- Make new, edit and remove sketches
- Based on the ProcessingJS
- Syntax Highlighting
- Reference link
 - Portrait only


Notes: This Processing app is as simple as an app can be. Code your sketch and press play, simple as that! Very easy-to-use interface, with only one exception: when trying to exit your sketch it is hard to get the menu to pop up, sure to be an easy fix in the future. This app looks great on the iPad's Retina display and is a joy to play with.

Next:



Processing for dummies: $1.99
By Niccolo Consolazio
Processing for dummies is the quickest and easiest guide and reference to the Processing language, and now includes a COMPILER!!

FEATURES
- Tons of tutorials
- Complete reference to Processing!
- Support the Retina Display!
- Processing Environment
- Processing Tools
- ASCII Table
- Compatible with iPad!

Notes: This is by far the better of the two apps in my opinion. The resources given are far greater and the interface is well thought out. You can create tons of sketches easily and manage each with ease. One of the greatest features of this app is the cool iPad keyboard that lets you swipe to get special characters. This small feature makes writing longer code on the iPad very easy.


Some other resources:

iProcessing
by Luckybite

http://luckybite.com/iprocessing/

Notes: This website and its development are very out of date. Hopefully someone will undertake the project of updating it for the latest iOS.


Video showing examples above.


Final thoughts:
       I think the above speaks for itself: these are a couple of handy little apps that will let you amaze your friends on the subway to work but won't let you create the ultimate sketch. In my mind these are training tools, time wasters (in a good way), brainstorming aids, introductions, or glimpses into Processing in full. I hope you enjoy them as much as I do and continue to develop one-of-a-kind sketches. Or if you're one to take it to the next step, you could always do the Processing world a favor, take inspiration from these apps, and make a fully functioning Processing app that provides the full, rich feature set Processing offers. We would all enjoy such an accomplishment, but until then - Laters!

Thursday, February 21, 2013

ISTA 401 Blog: Art Show Ideas

Idea 1: Using a Kinect, play Angry Birds.
Idea 2: Display the floor on the ceiling. Down-looking cameras record the view and display it on the roof. Vertigo!
Idea 3: Facial recognition! Use Facebook to identify all people who are walking around the gallery.
Idea 4: Twitter visualization?
Idea 5: Keyboard heat-map?
Idea 6: Turn a wall into a mirror
Idea 7: A wall you type into that outputs poems
Idea 8: Play sounds opposite of where you hear them

Sunday, February 17, 2013

ISTA 401 Blog: Artist Statement

Who are you?
I'm a regular college student at the University of Arizona majoring in ISTA.
What do you do?
I like to fly, play sports, compete and have fun!
Why are you interested in it?
I love to fly because of the freedom, skill, and knowledge it requires. It's a great challenge and the views are great! Sports are my life. I love every type of competition; I love to compete, and watching others do the same is enjoyable. Having fun is critical to me. Life isn't worth living if you're not having fun.
Why is it important to you?
Flying is important to me because it has been in my life since birth. My father has flown for a living my whole life, and that love has transferred to me; I hope to make it my own career in the future. As for sports, they're my pressure-relief valve in life. When playing or watching sports I'm very happy and relaxed.
How do you justify it to the world?
These are the things that make me happy and make my life enjoyable. If you're not pursuing your dreams and what makes you happy, then you're not living your life.
What are the main concepts, issues, themes you explore?
As far as art I would like to focus on human computer interaction, social networking issues, and interactive visual displays.
What are some of the major works you've done so far?
The major work I have done previously is called probablycats.com. You can check it out at probablycats.com. Essentially it creates a Pinterest-like view of Reddit articles. Check it out! I've also done some projects earlier in this class that you can see in some of my previous posts!

Thursday, February 14, 2013

ISTA 401 Blog: Project Ideas

      I've been challenged to brainstorm and mentally develop three ideas for multimedia installations that I would display in an imaginary art gallery. I have an empty room to work with and unlimited options. The possibilities are endless. I believe that first I should develop an idea of how I want people to interact with the gallery. I'm a technology buff and think I will shoot for a complete virtual gallery. This is an interesting idea to me for several reasons.
      First, it's an art gallery without any art in it, breaking the rules right off the bat. In a traditional art gallery (at least the one in my mind) you can't ever touch things. There is always a barrier between the observer and the art, whether it be a rope or a rail, DON'T TOUCH signs, or a 300 lb security guard. With the virtual art gallery there is no art to protect, and those barriers disappear!
      Second, a virtual gallery is flexible. You can change everything with the touch of a button! Imagine one night you feel like going street: you want to be gangster and put up all your graffiti, photography of the hood, and such. It's an expression of illegal expression. Then the next night, or even the next hour, you get mad at social norms and want to display your views on stupid formalities. It's all possible with a virtual gallery.
      Third, people can interact with your ideas. Traditionally the gallery master, or whoever is in charge, gets full command over the overall image and meaning of a gallery. With virtual galleries, people can disagree with you, point out their views, even argue with other observers. There are a million ways this could be done: let them all draw on a virtual screen, give them all a virtual marker so they can draw on your drawings, give them virtual paint to correct your mistakes, let them interact with other virtual galleries, and so on.
      So, we have flexibility, personal observer interaction, and no real art (AKA no barrier). What does all this give us? Nothing, actually. Instead I think it's the opposite idea: really it's the idea of giving the observers all the power. Why?
People like to be in control of their future, including what they see, hear, taste, and interact with. So why not give them the control? Why not let them choose? Good idea? Probably not, we will see, but I have a couple of specific ideas.

Crazy Idea 1: The Circle Canvas
Mainly spawning from the following two videos.
Video #1: divIT - Multitouch Display


Video #2: Samsung Horizon Display

      There are a million videos out there like this, but I think you get the idea. I would turn the room into a huge multi-touch screen for the users to do with as they like. It would be a complete circle, floor to ceiling, to completely immerse the observers. Users would enter as a group and interact anywhere around the interior of the circle. Options would include drawing, writing, uploading/posting pictures of themselves, internet access, Google Maps, and video streaming/chats. It would be an all-inclusive experience.

Crazy Idea 2: The Worm Hole
      Take the same setup as Crazy Idea #1 and turn the whole thing into a 360-degree view of a public place in another location. It could be right outside, in a park on the other side of town, or in another country. Both locations would have a circle canvas and a 360-degree camera, essentially making it a 360-degree video chat session. Place each worm hole in popular public places like Times Square and Grand Central Station and you have a virtual worm hole to the other location. You have to imagine what it would look like: standing in the middle of a circular display, interacting with random people on the "other side" in real time.

Sunday, February 10, 2013

ISTA 401 Blog: Visual Poetry

      My latest project in Multimedia Installations, in collaboration with Nelson Post, is a visual poetry piece. We were asked to make a random poetry generator. After creating a word bank, we converted a known poem into a part-of-speech blank, meaning we replaced all the words with their parts of speech. Once this is done we can fill in the poem with words of the correct part of speech, but not the original words. The result is a random poem based on an original. Once that was done, we made it visually pleasing by adding a raining-cloud effect, and each part of speech displays in its own unique font and color. The results are as follows:
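The template-fill step can be sketched in plain Java; the tags and the tiny word banks below are stand-ins I made up, not our actual data:

```java
import java.util.*;

// Sketch of the template-fill idea: the original poem becomes a list of
// part-of-speech tags, and each tag is replaced by a random word from the
// matching bank. Tags and banks here are illustrative stand-ins.
public class RandomPoem {
    static String fill(String[] template, Map<String, String[]> bank, Random rng) {
        StringBuilder line = new StringBuilder();
        for (String tag : template) {
            String[] choices = bank.get(tag);
            if (line.length() > 0) line.append(' ');
            line.append(choices[rng.nextInt(choices.length)]);
        }
        return line.toString();
    }
    public static void main(String[] args) {
        Map<String, String[]> bank = new HashMap<>();
        bank.put("ADJ",  new String[]{"golden", "quiet", "bright"});
        bank.put("NOUN", new String[]{"rain", "sun", "cloud"});
        bank.put("VERB", new String[]{"falls", "shines", "drifts"});
        String[] template = {"ADJ", "NOUN", "VERB"};
        String line = fill(template, bank, new Random());
        System.out.println(line);                    // e.g. "quiet sun drifts"
        System.out.println(line.split(" ").length);  // always 3 words
    }
}
```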







    Pretty cool, right? The poem was When the Sun Comes After the Rain by Robert Louis Stevenson.

Link to my code HERE

Sunday, January 20, 2013

ISTA 401 Blog: Vorticist Composition

   
      Well that didn't take long . . . week two and already some programming! Using a sketching tool called Processing, we were challenged to recreate a work of art chosen from MoMA's Inventing Abstraction website. There were a thousand great pieces to choose from, but I chose the work above, a piece painted by Lawrence Atkinson in 1914-15. It is oil on a canvas 41x33 inches large. Lawrence Atkinson was an English artist born in Manchester in 1873. He initially painted almost exclusively landscapes until he was introduced to Vorticism, after which he became a leader in the movement. The painting caught my eye because of its color palette, its possibility, and its simplicity. First, the colors are an unusual combination in my opinion: pale pinks in the back and bright lime greens in the foreground. This creates an interesting depth in the painting. Also, the black diagonal lines in the middle draw your gaze up, vertically stretching the painting. The abstract image also brings huge amounts of possibility to the viewer; there are many angles from which to view and think about this image. It's not so abstract as to be unrecognizable, but it's not quite clear either: it teeters on the edge of clarity. Lastly, it's not overwhelming to the viewer, being mainly large rectangular shapes; it would be boring without the color choices. It is also interesting that there is little shading, but rather solid blocks of color on each shape.
      My sketch, which is below, simply finds a random point in the image, averages the pixels around that point, and draws a super pixel in its place. Over time it draws a picture similar to the original image, but incomplete and, in a sense, blurred.
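The averaging pass can be sketched like this in plain Java; grayscale ints stand in for full ARGB pixels, and the tiny 2x2 test image is an illustrative assumption:

```java
// Sketch of the "super pixel" pass: pick a block of the image, average
// the pixels inside it, and use that average as the block's new color.
// Grayscale ints stand in for full ARGB pixels to keep it short.
public class SuperPixel {
    // Average of the r x r block whose top-left corner is (x, y).
    static int blockAverage(int[][] img, int x, int y, int r) {
        long sum = 0;
        for (int j = y; j < y + r; j++)
            for (int i = x; i < x + r; i++)
                sum += img[j][i];
        return (int) (sum / ((long) r * r));
    }
    public static void main(String[] args) {
        int[][] img = { {10, 30}, {50, 110} };
        // One pass: flatten the whole 2x2 image into a single super pixel.
        int avg = blockAverage(img, 0, 0, 2);
        System.out.println(avg); // prints 50  (200 / 4)
    }
}
```

Repeating this at random points, with small blocks, slowly rebuilds the blurred version of the painting described above.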
Source: http://richardawarren.wordpress.com/about/