Sunday, November 6, 2011

Installation Complete!

The installation is up and running along with the selection-of-years feature, making the bulk of the programming work FINALLY COMPLETE. The only thing left to do is to put down the floor decal. As many visitors were expected each day over the long weekend, there hasn't been a chance to place the floor decals, but hopefully they will find the opportunity to do it sometime soon (some visitors and children don't seem to read the instructions on the corner of the projection explaining how to interact with it, and instead just run amok in front of it, not knowing what is going on).



I feel lucky that we got a powerful computer for the installation. I could not even test the years feature on my Mac, as it was painfully slow to run anything at all. A fast PC really makes all the difference: everything runs magically smoothly on site at MBS.

I think the reason it was finally possible to do this within two weeks is that the equipment is so easily available (Kinects can even be bought at the 24-hour Challenger store!), the device does not need to be physically hacked apart, and the online documentation for the Kinect and as3kinect is excellent (although you will need to sift through the white noise of random discussions and blogs covering similar problems and errors). Although... I did work overnight in the kitchen for a few days in order to program and test it...


Friday, October 28, 2011

Kinect Testing On-site



A version of the program is up and working now. I made it in two weeks, mostly just using depth and blob detection. Granted, I could have used more advanced features of the Kinect like the skeleton, but mind you, I only had two weeks to code something up, and no one is going to help me debug when I write myself into a corner.

2am: One of the docents points out that I need to mirror the video image. Why yes, of course I do! Thankfully this is easily done in as3kinect; the generic AS3 equivalent is sketched below.
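
For reference, a minimal sketch of the mirroring trick in plain AS3 (the function name is mine, and as3kinect has its own way of doing this; the generic idea is a negative scaleX plus an offset):

import flash.display.DisplayObject;

// Flip a display object horizontally, then shift it back into frame.
// 'view' stands for whatever object is showing the Kinect feed.
function mirror(view:DisplayObject):void {
    if (view.scaleX > 0) { // only flip once
        view.scaleX = -1;
        view.x += view.width;
    }
}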

3am: The terrifying discovery of CABLES BEING TOO SHORT / NOT WORKING resulted in a crazy 4am trip to Funan Challenger to get VGA cables and connectors. The cables had to be run across the giant ceiling, but we didn't have the connectors either. Sadly, Funan Challenger, despite having a MASSIVE wall of cables, did not have our particular M/F cable (everything was M/M!), and we found it later at our next 24-hour option, Mustafa's, after searching a massive pasar malam-style cage full of dusty, poorly wrapped VGA cables in ragged plastic bags. But there they were! THEY WERE THERE!


7am: STILL SITTING HERE hoping the technician will have a breakthrough with the cabling! This is now all out of my control!

Also, because I don't want to break the code 2 hours before the opening, I am going to integrate the time/years function after today's opening. So for today's version there is only the map of 1819.


9am: The technician cannot figure out what is wrong despite diligently spending the last few hours cabling, recabling, and tracing the cables. So we have sadly decided to move the setup to tomorrow, so the tech guys or contractors can figure out how to connect a giant projector to a computer some 10 metres away.

In the meantime, everyone shall be regaled with this promo video I made the day before!

Monday, October 24, 2011


Today I installed the Kinect drivers on the actual Windows PC to be used in the installation. I am grateful that it is brand new and has lots and lots of RAM. Running all the Kinect tests on it is A JOY, unlike on my Mac, where I can visibly see the lag, which induces MORE AND MORE RAGING PANIC.

Problem: When installing on Windows 7 (32-bit), running glview, glpclview, or as3-server generates the following error:

"freenect_win_as3server_0.9b\libusb0.dll is either not designed to run on Windows or it contains an error."

My Mac runs Windows 7 (64-bit), so I hadn't encountered this before. Luckily the solution was easy to find; it was simply a matter of finding the right USB driver for the job.

Solution: Copy libusb0_x86.dll from the freenect_drivers > xbox nui camera > x86 folder, paste it into the folder where as3-server is, and rename it to libusb0.dll.

Problem: Need to import a Flex project into a Flash project.

When the import dialog appears, just press Update.

Next Problem: "ArgumentError: Error #1063: Argument count mismatch on Spring(). Expected 0, got 5."

I was moving a Flex project over to Flash CS5 so I could have more control over the look of the interface, with the Kinect, buttons and all. However, the working code I had simply broke at this one point, returning this miscreant error.

So I have a Spring that is expecting to receive 5 values, but for SOME REASON it says it expects 0? I checked the Spring class and it was properly constructed. I checked the other classes and they were all straightforward. So what was wrong? My initial thought: maybe the deserializer has to construct an instance of the object before setting its properties, so maybe I have to define default values for the arguments, even if I just set them to null first (sketched below). BUT THIS DIDN'T WORK.
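
What I mean by that, sketched on a hypothetical Spring (the real class's five parameters are not reproduced here; the names are mine). Default parameter values let code like a deserializer call new Spring() with zero arguments without tripping Error #1063:

package {
    // Hypothetical Spring with default-valued constructor arguments.
    public class Spring {
        public var x:Number;
        public var y:Number;
        public var stiffness:Number;
        public var damping:Number;
        public var mass:Number;

        public function Spring(x:Number = 0, y:Number = 0, stiffness:Number = 0.1,
                               damping:Number = 0.9, mass:Number = 1) {
            this.x = x;
            this.y = y;
            this.stiffness = stiffness;
            this.damping = damping;
            this.mass = mass;
        }
    }
}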

I checked it over and over again. A FEW HOURS LATER (after frantic trial and error) I JUST COMPILED IT AGAINST AN OLDER VERSION OF FLASH/playerglobal.swc, for Flash Player 10.3, AND THEN THE THING WORKED? I still have no idea why such an error would surface in Flash CS5. I can't see why; it was not very complicated to begin with. Sometimes AS3 really annoys me. Grrrrrrrrrrrr.


Also, when you move a Flex project over, you will be prompted if you use Flex-specific features, so you can import the SDK as well. Woo and yay. In the end I used Flex SDK 3.5. Someone explain to me the difference between 3.5 and 4. But explain it to me later, when I have more time to comprehend this all slowly.


Friday, October 21, 2011

Setting up a Kinect-based installation in one weekend? Oh yes.




I followed the instructions on this O'Reilly page. I should like to add that there are many descriptions and tutorials out there, but not all of them will work for you. It also depends on what you want to do and what you can work with (for me, I can only code in AS3). I was flapping about aimlessly for half a day, downloading loads of random drivers and installing and uninstalling them constantly, but it did not get me anywhere. In the end, this informative and detailed introduction to OpenKinect and as3kinect really sorted me out.

I am running Windows 7 via Parallels Desktop 7 on a Unibody MacBook Pro (2.53 GHz Intel Core 2 Duo, 4 GB RAM).

  1. Uninstall all the other goddamned Kinect-related drivers you've previously installed. UNINSTALL EVERYTHING THAT IS CONFUSING.
  2. Download and unzip the freenect drivers. Install them ONE BY ONE in Device Manager.
  3. Download and install the Microsoft Visual C++ 2010 Redistributable Package - "The Microsoft Visual C++ 2010 Redistributable Package installs runtime components of Visual C++ Libraries required to run applications developed with Visual C++ on a computer that does not have Visual C++ 2010 installed." If you installed Visual C++ 2010 Express you can skip this part. (OR PRETEND TO "REPAIR" THE INSTALLATION IF YOU ARE PARANOID! It makes no difference, it seems, except to calm one's RAGING PANIC.)
  4. Download the as3-server bridge.
    Contents of the folder, as described in the original guide:
    • as3-server.exe: the bridge to ActionScript.
    • freenect_sync.dll: part of the OpenKinect API; used to get the data when it can be processed instead of when it is available.
    • glut32.dll: an OpenGL library dependency.
    • pthreadVC2.dll: Unix-style multithreading library.
    • freenect.dll: the main OpenKinect library (driver).
    • glpclview.exe: demo from OpenKinect (3D projection of depth and color).
    • glview.exe: demo from OpenKinect (2D projection of depth and color).
    • libusb0.dll: USB library.
    • tiltdemo.exe: demo from OpenKinect for controlling the Kinect's tilt position motor.
  5. Running glview.exe shows the depth and RGB camera streams.


Because I needed to understand how this worked, I redrew the diagram explaining how the information gets from the Kinect to my AS3 client. (Although the original diagram was a lifesaver and I really appreciate it, I am also a spelling nazi, so itchyfingers here had to correct it.)
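
For the record, the last hop of that pipeline is just a local TCP socket: as3-server pushes the Kinect data, and Flash reads it. A minimal sketch of the client side (the port number here is an assumption from memory, so check the as3kinect config; the real wrapper also parses the bytes into depth/RGB frames and events for you):

import flash.net.Socket;
import flash.events.Event;
import flash.events.ProgressEvent;

var socket:Socket = new Socket();
socket.addEventListener(Event.CONNECT, function(e:Event):void {
    trace("connected to as3-server");
});
socket.addEventListener(ProgressEvent.SOCKET_DATA, function(e:ProgressEvent):void {
    trace("received " + socket.bytesAvailable + " bytes of Kinect data");
});
socket.connect("localhost", 6001); // assumed default port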


Test 1: Opening test.exe (the Flash executable provided in the download).
Switch on as3-server first, and then open test.exe.


YES, Flash can receive video data, and the Kinect can receive messages from Flash (changing the LED colour, motor tilt, etc.).

Test 2: Opening test.fla in Flash/Flash Builder to figure out how it works

In the given test.fla, there is a piano. When you present a "finger" shape (depending on depth calibration), a tone sounds when it overlaps the "piano".


The green spot represents the detected blob. The blue rectangle represents a successful hit. On a successful hit, a tone plays, corresponding to the location of the "piano key" within the scale. In the example, the tone is produced in AS3 using AS3 Sound Synthesis IV – Tone Class.
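
The linked Tone class isn't reproduced here, but for the curious, a minimal sine tone in plain AS3 uses the dynamic-audio API (the frequency and amplitude values are just examples):

import flash.media.Sound;
import flash.events.SampleDataEvent;

const RATE:int = 44100;  // samples per second
var freq:Number = 440;   // A4; the demo maps key position to pitch
var phase:Number = 0;

var tone:Sound = new Sound();
tone.addEventListener(SampleDataEvent.SAMPLE_DATA, function(e:SampleDataEvent):void {
    for (var i:int = 0; i < 4096; i++) {
        var sample:Number = 0.25 * Math.sin(2 * Math.PI * freq * phase / RATE);
        e.data.writeFloat(sample); // left channel
        e.data.writeFloat(sample); // right channel
        phase++;
    }
});
tone.play();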

I DO NOT KNOW WHY IT THINKS THERE IS A BLOB AT THE TABLE. Is it because detection is done by depth? Should my "zone of interaction" just be a line instead? And why did I not think of sound as a response for interactions earlier? (Is it too late to get a speaker set up on site as well?)


The portion of the code that implements the piano function looks straightforward and humanly understandable.
Crucially, it says: "Events are fired by as3kinectUtils.fireTouchEvent".

More information about the as3kinectUtils class: Openkinect AS3 API.
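
Since I'm not pasting the whole file, here is roughly the shape of it (the names here are mine, not the actual code): fireTouchEvent dispatches a TouchEvent at the blob's position, and each piano key listens for it like any other touch target.

import flash.display.Sprite;
import flash.events.TouchEvent;

// Build one piano key; the playing logic is reduced to a trace for brevity.
function makeKey(keyIndex:int):Sprite {
    var key:Sprite = new Sprite();
    key.graphics.beginFill(0xFFFFFF);
    key.graphics.drawRect(keyIndex * 40, 0, 38, 120);
    key.graphics.endFill();
    key.addEventListener(TouchEvent.TOUCH_BEGIN, function(e:TouchEvent):void {
        // one semitone per key; feed this frequency to the Tone class
        var freq:Number = 440 * Math.pow(2, keyIndex / 12);
        trace("key " + keyIndex + " hit -> " + freq.toFixed(1) + " Hz");
    });
    return key;
}

for (var i:int = 0; i < 8; i++) {
    addChild(makeKey(i));
}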

Side observation: latency becomes highly noticeable once I turn on blob detection. I am not sure if Parallels or insufficient RAM is the reason for the latency during blob detection, although there appears to be no lag in the normal RGB video. I will only know when I test it on a native Windows PC tomorrow.

At this point I am finally beginning to see the limitations of OpenKinect, as only OpenNI/NITE can do skeleton detection (even though it cannot do all the other things like changing the LED colour and so on). Should I take the risk and try the OpenNI wrapper now that I can finally see it sort of working with OpenKinect? Time is very short and I have to make it work within the next 24 hours... Yes, my timeline is 24 hours. Oh, decisions, decisions...



With this simple blob detection, the key is in the DEPTH DETECTION. My fingers in this example have to be at the right depth. This means it is basically like the blob detection in reacTIVision, except that I don't need a special screen or anything, because the depth detection serves as the invisible screen/line. I can change the blob size where the default settings are defined by these vars:

public static const BLOB_MIN_WIDTH:uint = 15;
public static const BLOB_MAX_WIDTH:uint = 100;
public static const BLOB_MIN_HEIGHT:uint = 15;
public static const BLOB_MAX_HEIGHT:uint = 100;
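
And the "invisible screen" itself is just a depth band: only samples inside a narrow range count as touches. A sketch of the idea (the thresholds are mine, not as3kinect's defaults):

// Only depth samples inside this band count as a "touch" on the
// invisible screen. Values are illustrative, in millimetres.
const NEAR_MM:int = 800;
const FAR_MM:int = 900;

function isTouching(depthMm:int):Boolean {
    return depthMm >= NEAR_MM && depthMm <= FAR_MM;
}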

Users would have to stand at a line and wave their hands in front of them. I think I am fine with that as an interaction.
This also means that I have to estimate the blob size. If I am going with blob sizes (estimated hand size as the interaction), then I had better make some controls to change the size of the blobs on the fly, something like the sketch below.
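
A rough sketch of what I have in mind for on-the-fly recalibration (it assumes the BLOB_* consts above become plain adjustable vars; here they are stand-in local vars):

import flash.events.KeyboardEvent;
import flash.ui.Keyboard;

var blobMin:int = 15;  // stand-in for BLOB_MIN_WIDTH/HEIGHT
var blobMax:int = 100; // stand-in for BLOB_MAX_WIDTH/HEIGHT

stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.UP) { blobMin += 5; blobMax += 5; }
    if (e.keyCode == Keyboard.DOWN && blobMin > 5) { blobMin -= 5; blobMax -= 5; }
    trace("blob size range: " + blobMin + " to " + blobMax);
    // ...then push these values back into the blob detector's settings
});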



My original application was made with Flash Builder 4. My aim by Wednesday afternoon is to do the following:

- integrate multiple TouchEvents from the Kinect into it
- add a trace/function for keyboard recalibration of the hand-blob size
- reduce lag/latency

More info on flash.ui.Multitouch and other TouchEvents
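
(As an aside, for TouchEvents to be delivered as touches rather than folded into mouse events, Flash needs to be in touch-point mode; whether as3kinect's simulated events need this too is an assumption on my part.)

import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

// Ask the runtime to dispatch TouchEvents instead of mouse events.
Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;
trace("touch events supported: " + Multitouch.supportsTouchEvents);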

My work entitled "\\" or "The Singapore River as a Psychogeographical Faultline II" will be at the Singapore Local Stories Gallery (at the end of the Titanic exhibition) at the Marina Bay Sands ArtScience Museum from this Saturday (29th October 2011) onwards. Some artifacts from Pulau Saigon will also be shown there, so please come to see them when you can.

To find out more about Pulau Saigon come down THIS THURSDAY EVENING, AT THE SUBSTATION (27th October 2011), when the Singapore Psychogeographical Society will be giving a lecture on psychogeoforensics.

Thursday, October 20, 2011

The Singapore Sandbar in the Singapore River

I am producing a moving image of the Singapore River. Previously I made a version of the map based on a number of historical maps, but as I was also building many other things simultaneously, I was unable to verify the accuracy of all those maps. I am now cleaning up the images and cross-checking them against other maps.


I drew this on 27 July last year (2010). Some artistic liberties have been taken in this version to flatten it into a line.

This year, to make it more accurate, I will also try to add in the sandbar that is sometimes spoken about in connection with the Singapore River. The sandbar existed as far back as the 14th century, in the days when Singapore was called Temasek, and there are a number of references to it in old maps of Singapore. Some very interesting information can be found here, by another map sleuth. Interestingly, it is also mentioned in this game: World of Temasek...



Props to Jianwei and all the other people at Magma Studios who made this lovely game. It runs really smoothly in my browser. Picking mangoes here... I electronically wipe away a nostalgic tear, thinking of the first time I started playing Second Life and went around looking for the money trees, walking around and around the money tree to find the precise angle at which the flat money icons could be seen. (OMG, IT'S BEEN ALMOST 5 YEARS SINCE THEN)

\\ The Singapore River as a Psychogeographical Faultline

The Singapore River is a site of historical and commercial significance for Singapore, as well as a site to socialise at, and dream of things to come. But what does the Singapore River look like? When prompted to reflect on the river, many find it hard to recall the geography of Singapore's most significant river – which has changed drastically in purpose, form, and colour over the last hundred years.

Originally developed with the support of The Substation's Open Call in 2010, \\ is an interactive map installation exploring the Singapore River as a "psychogeographical faultline" where reality, memories and imagined spaces interact, merge, or drift apart, like a series of tectonic plates.

I will be adapting the installation from the Reactable to a wall installation using the Kinect as the sensor. Visitors will be able to play with the shape of the Singapore River by waving their arms and walking in front of a projection of the shape of the river.