
Friday, August 29, 2014

Fractals and Hand Tracking



My Process
For the past two to three weeks I have been educating myself on creating generative art in Processing. I began by researching what already exists, and I came across a great book called Generative Art: A Practical Guide Using Processing. Many of the exercises in this book went into great detail on some important techniques that we will definitely be using in our project. The translate() function is a great way to move the origin point (0, 0, 0) of your sketch. This is helpful because you can space out copies of a fractal easily, without having to calculate the parent fractal's location plus the location of where you want each copy. I also learned that classes are the best way to stay organized in these sometimes lengthy sketches.
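As a quick illustration of the idea, here is a minimal Processing sketch showing how translate() (with pushMatrix()/popMatrix()) lets each copy of a shape be drawn with the same local coordinates; the branch shape is a simplified stand-in, not our project code:

void setup() {
  size(600, 200);
  background(255);

  for (int i = 0; i < 3; i++) {
    pushMatrix();                   // save the current origin
    translate(100 + i * 200, 150);  // move (0,0) to this copy's position
    drawBranch(60);                 // draw relative to the new origin
    popMatrix();                    // restore the origin for the next copy
  }
}

// A simplified stand-in for a fractal branch: recursively draws shorter lines.
void drawBranch(float len) {
  if (len < 4) return;
  line(0, 0, 0, -len);
  translate(0, -len);
  pushMatrix();
  rotate(radians(25));
  drawBranch(len * 0.66);
  popMatrix();
  pushMatrix();
  rotate(radians(-25));
  drawBranch(len * 0.66);
  popMatrix();
}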

The Next Step
Once I had a good understanding of how to create fractals, I looked into how to make them interactive with a user via the Kinect. I referenced a tutorial sketch I had programmed that utilized the Kinect's skeletal-tracking feature, and I tweaked it to draw a red ellipse on the user's right hand. I then pulled in a branch fractal and tied its update location to the position of the user's hand. When I ran the sketch, I realized that drawing both what the Kinect was viewing and the constantly changing branch fractal was taking up a lot of processing power, so I turned off the depth image it was drawing and instead told Processing to find the pixels that coincided with the user and paint those green. That made the sketch run much more smoothly.
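A minimal version of that sketch might look something like the following. It assumes the SimpleOpenNI wrapper for OpenNI (discussed in the post below), and the exact method names vary a bit between SimpleOpenNI versions:

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();  // turn on skeletal tracking
}

void draw() {
  background(0);
  kinect.update();

  // Instead of drawing the full depth image, paint only the pixels
  // that belong to a tracked user green (much cheaper to render).
  int[] userMap = kinect.userMap();
  loadPixels();
  for (int i = 0; i < userMap.length; i++) {
    if (userMap[i] != 0) {
      pixels[i] = color(0, 255, 0);
    }
  }
  updatePixels();

  // Draw a red ellipse on each tracked user's right hand.
  for (int userId : kinect.getUsers()) {
    if (kinect.isTrackingSkeleton(userId)) {
      PVector hand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, hand);
      PVector screenHand = new PVector();
      kinect.convertRealWorldToProjective(hand, screenHand);
      fill(255, 0, 0);
      noStroke();
      ellipse(screenHand.x, screenHand.y, 20, 20);
    }
  }
}

// SimpleOpenNI calls this when a new user enters the scene.
void onNewUser(SimpleOpenNI context, int userId) {
  context.startTrackingSkeleton(userId);
}

Below is a snapshot of what the sketch looked like.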



Wednesday, July 30, 2014

Processing, the Kinect and OpenNI

The Kinect was an easy decision for our project because its depth sensing is not affected by the light conditions in the room at the time of capture. Hence, using it in a dark room will not be an issue. The Kinect works by creating a depth image: it uses infrared light to capture where objects are in space. The Kinect's camera resolution is 640x480. You can bump the camera up to 1280x1024, but the data will arrive at 10 frames per second rather than 30 frames per second.
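As a point of reference, a minimal Processing sketch that simply displays the Kinect's depth image (using the SimpleOpenNI wrapper discussed below) looks something like this:

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();  // 640x480 depth stream
}

void draw() {
  kinect.update();
  // Brighter pixels are closer; black pixels returned no depth reading.
  image(kinect.depthImage(), 0, 0);
}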

The next decision we had to make was which Processing library would work best for what we are trying to accomplish with the Kinect. It boiled down to OpenKinect and OpenNI.
Dan Shiffman built on the work of the OpenKinect project to create a library for working with the Kinect in Processing. The OpenKinect drivers provide access to the Kinect's servo motors and have a very simple software license: the contributors to the OpenKinect project released their drivers under a fully open source license. In short, this means that you can use OpenKinect code in your own commercial and open source projects without having to pay a license fee to anyone.

In response to OpenKinect, PrimeSense released their own software for working with the Kinect. PrimeSense included more sophisticated software that processes the raw depth image to detect users and locate the positions of their joints in three dimensions. They called their software OpenNI, the NI standing for "Natural Interaction". OpenNI provides two key pieces of software that are useful to our goals. The first is the OpenNI framework, which includes the drivers for accessing the basic depth data from the Kinect; this piece of software has a similar licensing situation to OpenKinect. However, the other feature OpenNI provides, user tracking, does not have such a simple license. User tracking comes from an external module called NITE. NITE is not available under an open source license; it is a commercial product that belongs to PrimeSense. PrimeSense does, however, provide a royalty-free license that you can use to make projects that use NITE with OpenNI.

We chose OpenNI because it gives us the option to use user tracking, and there is a good amount of reading material that explains and uses the OpenNI library in Processing. OpenNI also has the advantage of being designed to work with not just the Kinect but other depth cameras as well. This means that code we write with OpenNI for the Kinect will continue to work with newer depth cameras as they are released, saving us from needing to rewrite our applications depending on which camera we want to use.

OpenNI recognizes heads, shoulders, elbows, wrists, chests, hips, knees, ankles, and feet. This is going to be an important part of tracking the crowd in our installation. One of the techniques we are considering is using OpenNI's 3D features and point cloud systems. Below are screenshots of a point cloud test sketch we created that gives the illusion of rotating completely around what the Kinect is seeing; a minimal version of the sketch follows the screenshots.

Rotating Point Cloud Sketch Test Image (1)


Rotating Point Cloud Sketch Test Image (2)
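A minimal version of that rotating point cloud sketch, following the point cloud approach in Making Things See (method names may vary by SimpleOpenNI version):

import SimpleOpenNI.*;

SimpleOpenNI kinect;
float rotation = 0;

void setup() {
  size(1024, 768, P3D);  // use the 3D renderer
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  background(0);
  kinect.update();

  // Center the cloud, push it back so it is in view, and flip the
  // y-axis (real-world y points up, Processing's y points down).
  translate(width/2, height/2, -1000);
  rotateX(radians(180));
  rotateY(rotation);  // spin around the vertical axis
  rotation += 0.01;

  stroke(255);
  // depthMapRealWorld() gives one 3D point per depth pixel;
  // stepping by 10 skips points to keep the frame rate up.
  PVector[] depthPoints = kinect.depthMapRealWorld();
  for (int i = 0; i < depthPoints.length; i += 10) {
    PVector p = depthPoints[i];
    point(p.x, p.y, p.z);
  }
}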


Research and tutorials referenced from Making Things See by Greg Borenstein

Monday, July 28, 2014

Thermistor and Installation Size



The first picture above shows the thermistor hooked into the Arduino Uno. A thermistor gives readings about temperature change because temperature affects the thermistor's resistance (temperature sensor + resistor = thermistor).
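As a sketch of the reading side, here is a minimal Arduino example. The wiring is hypothetical, with the thermistor and a 10k fixed resistor forming a voltage divider into analog pin A0; the pin and resistor value are assumptions, not our exact circuit:

const int THERMISTOR_PIN = A0;  // assumed analog input pin

void setup() {
  Serial.begin(9600);
}

void loop() {
  // As temperature changes, the thermistor's resistance changes,
  // which shifts the divider voltage and therefore this 0-1023 reading.
  int reading = analogRead(THERMISTOR_PIN);
  Serial.println(reading);
  delay(500);
}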

The second picture is Nate and Miguel's first attempt at marking out the installation size. Each box marks a corner of the installation area. This is a very important aspect of our project, as it will affect the user's experience.