Man With a Movie Camera

I sought to bring out a technique about audience perspective that I took away from Man With a Movie Camera. While brainstorming what to do, I came up with the concept of people watching people watching people….

I felt restricted by flat 2D screens and wanted a different perspective on what perspective meant. I remembered my professor Yue Han talking about live video and thought it would be interesting to explore that form.

I then remembered I had taken an Instagram story of this exact scene, where the audiences of the 1920s were watching a movie and we were watching a movie about them watching a movie. And now my Instagram followers would be viewing my story while I was viewing them viewing a movie. I thought that was interesting and wanted to find out how I could do it.

It was probably a coincidence that I was already working on a webcam project involving the manipulation of a screen's pixels using facial recognition. So, using what I had learnt, I went on to explore luma keying, and even though it took a while, I finally figured it out after class. I had been using a very complicated colour-splitting method to simulate colour keying before I found a much simpler one.

This is the working and improved patcher. By routing through a new object I learnt called suckah, I was able to pinpoint the exact RGB I wanted to pick out, and by unpacking the list of RGB colour codes, I was able to find the exact colours to key. I adjusted the tolerance to a suitable rate and applied a fullscreen patcher that enables full screen mode.
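The keying itself lives in the Max patcher, but the underlying logic is simple to sketch outside it. Here is a minimal Processing (Java) version of the same idea, assuming a webcam frame and a mouse-picked key colour standing in for suckah; the keyR/keyG/keyB and tol names are mine, not the patch's:

```java
import processing.video.*;

Capture cam;
// Key colour picked from the frame (suckah's job in the Max patch); placeholder values.
float keyR = 30, keyG = 200, keyB = 40;
float tol = 60;  // tolerance: how far a pixel may stray from the key colour

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color c = cam.pixels[i];
    // Distance from the key colour in RGB space.
    float d = dist(red(c), green(c), blue(c), keyR, keyG, keyB);
    // Pixels close to the key colour are knocked out (black here;
    // in the patch they would be replaced by the second video layer).
    pixels[i] = (d < tol) ? color(0) : c;
  }
  updatePixels();
}

void mousePressed() {
  // Mimic suckah: sample the key colour under the mouse.
  color c = get(mouseX, mouseY);
  keyR = red(c);
  keyG = green(c);
  keyB = blue(c);
}
```

Lowering tol keys out only pixels very close to the picked colour; raising it eats into similar shades, which is exactly the trade-off the tolerance setting controls.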

The result is as shown above and as presented in class. No alterations were made to the screen captures.

Mirror Mirror Week 2

30/01/18
Tuesday, 1604

Hmm.. during the lesson I decided not to look at the model code and wanted to homebrew my own nasty concoction!

Firstly, I restarted everything (OK, that's mainly because I had not started my trial)! I started afresh with the mindset of completing the project with only efficient, clean blocks of code, to maximise my thought process.

Started with the simple idea that I wanted one subpatcher to open up the webcam, one to do the facial recognition, and one to scale the result for the slider.

This was the easy part. I referenced most of the code from the help directory and from what I had done last week. The harder part would be the calculations behind the scaler.

This may look simple because it was the final output, but it went through many iterations. Before arriving at my final code, I was looking into the proper usage of scale to find the sweet spot, and also into how I could create the effect of dimming the mirror rather than the other way round (as below).

I immediately knew there was a problem with the scaler, as it wasn't giving me the output I needed (the percentage of my face per screen-pixel ratio). However, I faced a dilemma: I could not place my square area on the right, as it would not automatically update if the constant of 76,800 px (a 320 px × 240 px resolution) sat on my left.

I went on to check the references on the Max/MSP Jitter website, found out about !/ (reverse division), and decided to give it a go! It was a success and gave me the exact percentage I needed to achieve my result!
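In Max, !/ simply performs the division with its operands swapped, so the live face value can sit on the hot inlet while 76,800 stays a constant. The underlying calculation is tiny; here is a sketch of it in Processing (Java), with faceW/faceH standing in for the bounding box that the recognition stage reports (my names, not the patch's):

```java
// Total pixels in a 320 x 240 frame: the constant side of the division.
final float TOTAL_PX = 320 * 240;  // 76800

// Percentage of the frame occupied by the detected face's bounding box.
// This is the value the scaler subpatcher feeds to the slider.
float faceRatio(float faceW, float faceH) {
  return (faceW * faceH) / TOTAL_PX;  // 0.0 (no face) .. 1.0 (face fills frame)
}
```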

It was delivering exactly what I wanted! But because I was wearing glasses, the facial recognition wasn't detecting me well enough. I even added a jit.window as a prelude that would lead into my full screen!

Full screen was pretty simple! All I did was a simple [key] > [select 32] (spacebar), which was taught in class! However, I identified a problem: the screen was flashing far too often because the facial recognition wasn't picking up faces properly. I could have called it done at this point, but I wasn't about to stop knowing it wasn't perfect.

To tackle the problem, I had to stop the unnecessary flickering. I noticed that whenever my face isn't present, the screen turns dark and the value becomes 0. So I limited my scale patcher to never hit 0: I split out the value of 0 to become 1 and let everything else flow as per normal. (Refer to my finished patcher at the start.)
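In code terms, the zero-split is just a guard clause. A minimal sketch of that logic in Processing (Java), with brightnessFor and the 0.01 floor being my own stand-ins for what the patch does:

```java
// Brightness driven by the face-area ratio (0..1).
float brightnessFor(float faceRatio) {
  if (faceRatio == 0) {
    // No face detected: substitute 1 (full brightness) instead of
    // letting the mirror slam to black. This is the 0 -> 1 split.
    return 1;
  }
  // Everything else flows through as per normal, floored so the
  // output can never quite hit 0 and hard-flicker.
  return constrain(faceRatio, 0.01, 1);
}
```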

I searched far and wide through the Max/MSP Jitter references, trying line, line~ and rampsmooth~ inside the scaler, outside the scaler and in the main patcher, and found that all I needed was a parameter sitting in between the values: a simple [slide] that receives both my "face not present, brightness 1" value and my "varying brightness from square area" value. And it works like a charm!
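Max's slide does simple one-pole smoothing: every frame, the value moves a fraction of the remaining distance toward its target. A rough Processing (Java) equivalent, with slideAmt standing in for slide's up/down arguments:

```java
float smoothed = 1;   // current, smoothed brightness
float slideAmt = 10;  // bigger = slower glide, like [slide 10 10]

// Call once per frame with the raw target value (either the
// "face not present" 1 or the varying square-area brightness).
float slide(float target) {
  // Move 1/slideAmt of the remaining distance each frame, so sudden
  // jumps when the face is lost or found become gentle ramps.
  smoothed += (target - smoothed) / slideAmt;
  return smoothed;
}
```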


 

TamaGO Week 12

We had problems with Vuforia not being what we wanted, and now we're running out of time. I decided to take the bull by the horns and start learning Unity on my own.

Running into problem after problem.

[Image: Screen Shot 2017-11-09 at 7.54.04 PM.png]

The Animator was used to play my animations, but we had to use a separate script to control the material change for the face to change! What a challenge!

[Image: Screen Shot 2017-11-10 at 5.22.44 PM.png]

A total redesign of what the feel should be like. We decided to move many things around to make sure the user could interact better and to optimize gameplay; basically quality-of-life changes.

[Image: Screen Shot 2017-11-10 at 5.41.00 PM.png]

Total revamp, including the landing and loading pages.

[Image: shadowww.gif]

And finally, the new SDK, Kudan AR, was implemented along with our new UI!

Here comes Scruffy!

TamaGO Week 11

It's been two weeks, so here's an update from me!

3D models are up and running! Completely rigged, IK-ed and UV-ed!

[Image: Screen Shot 2017-11-09 at 4.04.14 PM.png]

I also used the UVs to portray emotion by changing up their faces! And using the IK, I was able to make Scruffy emote in three states: Idle, Feeding (generally happy) and Angry.

Finally, we can throw Scruffy into the game world that Ryan has been building!

[Image: Screen Shot 2017-11-08 at 4.35.56 PM.png]

Exciting times!

NOMADIC

After 18 daunting days of learning an entire library and system from scratch: Nomadic, an interactive experience using the Leap Motion sensor and Java on Processing.

The project aims to show what it would be like to be in full control, psychologically and physically, of the elements. With three different elements at your disposal, from the combustion and ignition of flames, to the manipulation of leaves breezing through the wind, to the control of a water stream, you are the master and you decide what you want to do.

Nomadic is far from complete, and I even have plans to implement a fourth element: Earth. But so far, it has been a hell of a ride. Enjoy.

[WIP] D-17 NOMADIC

[Image: wind3]

Started today off by redeveloping an entirely new particle system that could stretch beyond the width of the screen. I followed that by assigning variables to the radius and alpha of each ellipse, which in turn make a particle look a little smaller if it's further away, with the alpha controlling how white it looks.
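A sketch of that idea in Processing (Java): give each particle a depth value and derive both radius and alpha from it. The ranges here are my guesses, not the originals:

```java
class Particle {
  PVector pos;
  float depth;  // 0 = far away, 1 = right up close

  Particle() {
    // Spawn beyond the screen edges so the field stretches past the canvas.
    pos = new PVector(random(-100, width + 100), random(-200, 0));
    depth = random(0, 1);
  }

  void display() {
    float r = map(depth, 0, 1, 2, 10);    // further away = smaller
    float a = map(depth, 0, 1, 60, 255);  // further away = fainter
    noStroke();
    fill(255, a);  // white, faded by distance
    ellipse(pos.x, pos.y, r, r);
  }
}
```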

[Image: wind5.gif]

This new particle system looked pretty much like snow, but it served its purpose, and once the leaf image was ported in, it didn't look half bad. Throwing in the applied-forces tutorial from Daniel Shiffman, this was the end result. (I even added a light breeze force so that all the leaves sway left and right under random forces of -0.3 to 0.3, applied at random intervals.)
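The breeze is just Shiffman's applyForce pattern with a small random horizontal force. The gust timing below is my own guess, but the -0.3 to 0.3 range is from the post:

```java
PVector breeze = new PVector(0, 0);
int nextGust = 0;

void updateBreeze() {
  // Every so often, pick a new horizontal gust between -0.3 and 0.3.
  if (frameCount >= nextGust) {
    breeze.x = random(-0.3, 0.3);
    nextGust = frameCount + int(random(30, 120));  // frames until the next gust
  }
}

// Shiffman-style force accumulation: add the force into the particle's
// acceleration (F = ma with m = 1), calling this for every leaf.
void applyForce(PVector acceleration, PVector force) {
  acceleration.add(force);
}
```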


[Image: water3]

After slowly removing code line by line, I finally found that the problem behind the lag was an infinite for loop around the attractors. I removed it, changed it to an if statement, and it worked well. I then implemented the Leap Motion library, gave the attractor a black fill, threw it to the start of the draw loop, and changed its position to follow the hands that Leap detects.
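The post doesn't say which Leap wrapper was used; with the common de.voidplus.leapmotion Processing library, the hand-following attractor would look roughly like this (getPosition() is that library's screen-mapped palm position):

```java
import de.voidplus.leapmotion.*;

LeapMotion leap;
PVector attractor = new PVector();

void setup() {
  size(640, 480);
  leap = new LeapMotion(this);
  attractor.set(width / 2, height / 2);
}

void draw() {
  background(30);
  // Move the attractor to wherever Leap sees a hand. Note that the
  // last hand in the list "wins": the two-hand problem noted below.
  for (Hand hand : leap.getHands()) {
    PVector p = hand.getPosition();
    attractor.set(p.x, p.y);
  }
  fill(0);  // black fill, as in the post
  stroke(80);
  ellipse(attractor.x, attractor.y, 30, 30);
}
```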

[Image: water4]

A few things to note: the hands don't really work well together, as the sketch only takes into account the latest hand. Perhaps a future implementation could find the middle ground between the two hands. Another thing that bugs me is how the water particles do not swirl together around my attractor. Maybe I used the wrong reference for my attractor?
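The middle-ground idea would be a small change: average every detected hand instead of letting the last one win. A sketch, assuming the same library as above:

```java
// Average the positions of every detected hand so two hands pull
// the attractor to their midpoint instead of fighting each other.
PVector handMidpoint(ArrayList<Hand> hands) {
  PVector mid = new PVector();
  for (Hand hand : hands) {
    mid.add(hand.getPosition());
  }
  if (hands.size() > 0) mid.div(hands.size());
  return mid;
}
```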

It’s a few more hours to D-Day and I need to start thinking of a way to rig my product up. I’ll probably need 3 projectors and hopefully they will be able to show what I’ve been working on. 

Cheers

[WIP] D-15 NOMADIC

I have not been updating here as much as I wanted, but basically what I've done is finish polishing my Nomadic fire, and now it works great. It still lacks the entire "punch and give me a fireball" animation, but I guess I'll solve that somehow.

I've taken a stab at water bending but am having a lot of trouble using the LiquidFunProcessing library. The process was.. not smooth.

Decided to reuse the particle system I learnt from Daniel Shiffman, and it didn't turn out great. I mean, it looks sort of like water, but..

[Image: water2]

Using some lighting loops on the RGB, I was able to create a sort of funky look to show light reflections. I also added an ellipse to be an attractor, which I'll port over into my final piece to start attracting the water towards the position of my hands. Something weird to point out is that the water does not attract all around my ellipse. Weird.
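For reference, the Shiffman-style attractor this seems to be adapting pulls each particle along the normalized direction toward the attractor, with strength falling off with distance squared. If the direction isn't recomputed per particle, or the distance isn't clamped, the pull can end up lopsided instead of swirling all the way around, which may be what is happening here:

```java
// Force on one particle from an attractor (Shiffman's pattern).
PVector attract(PVector particlePos, PVector attractorPos) {
  PVector force = PVector.sub(attractorPos, particlePos);  // points at the attractor
  float d = constrain(force.mag(), 5, 25);  // clamp so near particles don't explode
  force.normalize();                        // direction only...
  force.mult(50 / (d * d));                 // ...scaled by inverse-square strength
  return force;
}
```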

I was also faced with some sort of lag or slowness coming from somewhere that I’ll have to figure out sooner rather than later.


[Image: wind1.gif]

Moving on, I didn't like the look of, or the complexity behind, water, so I decided to try out air. I had a vague concept in mind and wanted to reproduce it using my current particle system, by moving it out of the screen and increasing the range of the fall.

[Image: wind2.gif]

Without much thought, I tried to implement some sort of rotation axis on the image so the leaf would look natural as it fell. Well, let's just say it didn't turn out great. I have a feeling I'll have to create an entirely new particle system to fit what I'm doing.
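The rotation itself is the standard pushMatrix/translate/rotate pattern in Processing; the hard part is giving each leaf an angular velocity that reads as natural tumbling. A minimal sketch, with the spin value and the leaf.png asset name made up for illustration:

```java
PImage leaf;
float angle = 0;
float spin = 0.05;  // per-leaf angular velocity (made up here)

void setup() {
  size(640, 480);
  imageMode(CENTER);
  leaf = loadImage("leaf.png");  // hypothetical asset name
}

void draw() {
  background(0);
  angle += spin;
  pushMatrix();
  translate(width / 2, height / 2);  // rotate about the leaf's own centre
  rotate(angle);
  image(leaf, 0, 0);
  popMatrix();
}
```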

Cheers.