CityBLOCK

 


Project Title – CityBLOCK

Team Member – Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Project Description
CityBLOCK is a modular city-builder experience in which the user builds their dream city by rearranging designed cubes that act as building blocks.

What makes Toronto so unique? As Canada's largest and most diverse city, it's home to a dynamic mix of tourist attractions, from museums and galleries to the world-famous CN Tower and, just offshore, the Toronto Islands. And Niagara Falls is just a short drive away.
This project lets you build your own version of Toronto.
Everyone has their own imagination of their city. When I came to Toronto for the first time, I was really surprised by the city's fusion of old and new architecture. I have always wanted to show what I think about this city; almost every day I go to the Harbourfront and admire the CN Tower, thinking about how it became the icon of the city. Other historical architecture, like the Flatiron Building and St. Lawrence Market, makes the city even more unique and identifiable.
This is my take on representing Toronto from my perspective, and of course, since this city builder is modular, anything can be added later.

GITHUB LINK – here

Inspiration

The Reactable is an electronic musical instrument with a tabletop tangible user interface that was developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain by Sergi Jordà, Marcos Alonso, Martin Kaltenbrunner, and Günter Geiger.

What I like about the Reactable is that any physical object can become smart and can interact with, alter, and modify digital content.

The only input device used here is a camera, which makes the setup look very clean for an open show.

Workflow

This setup is for tabletop interaction and requires a lot of effort to build the table and configure the camera with the projector.

Technology

  • reacTIVision 1.5.1
  • TUIO Client – Processing
  • OV2710 – Camera
  • Projector – Which supports the short-throw lens with a throw ratio of 0.6 and below.
  • Infrared LED – 850 nm illuminator

Material

  • Wooden/paper blocks
  • Table, 4.5 ft × 3.5 ft
  • Projector mount

These were the minimum requirements that I collected from several posts on the reacTIVision forum.

First of all, I wanted to try the reacTIVision software and break it down to understand how it works and what its constraints are. There isn't much about it on the internet, so I just started with the documentation.
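Conceptually, reacTIVision streams TUIO messages over UDP (port 3333 by default): every visible fiducial is reported by its symbol ID along with a normalized position and a rotation angle, and the client keeps a table of live markers and scales the coordinates to pixels. Here is a minimal model of that bookkeeping in plain Java; this is a sketch of the data flow, not the project's actual code (the real Processing sketch receives these events through the TUIO library's addTuioObject/updateTuioObject/removeTuioObject callbacks):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of what a TUIO client tracks: reacTIVision reports each
// visible fiducial's symbol ID, normalized (0..1) position, and angle;
// the client keeps a table of live markers and scales to screen pixels.
public class FiducialTable {
    public static class Marker {
        public final int symbolId;
        public float x, y, angle;              // x, y normalized to 0..1
        Marker(int id, float x, float y, float angle) {
            this.symbolId = id; this.x = x; this.y = y; this.angle = angle;
        }
        public float screenX(int width)  { return x * width; }
        public float screenY(int height) { return y * height; }
    }

    private final Map<Integer, Marker> live = new HashMap<>();

    // Called when a marker appears or moves (TUIO add/update events).
    public void update(int id, float x, float y, float angle) {
        Marker m = live.get(id);
        if (m == null) live.put(id, new Marker(id, x, y, angle));
        else { m.x = x; m.y = y; m.angle = angle; }
    }

    // Called when a marker leaves the camera's view (TUIO remove event).
    public void remove(int id) { live.remove(id); }

    public Marker get(int id) { return live.get(id); }
    public int count() { return live.size(); }

    public static void main(String[] args) {
        FiducialTable table = new FiducialTable();
        table.update(7, 0.5f, 0.25f, 0f);              // marker 7 appears mid-frame
        System.out.println(table.get(7).screenX(800)); // pixel x on an 800px-wide canvas
        table.remove(7);                               // marker lifted off the table
        System.out.println(table.count());
    }
}
```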

Work Plan

22nd – 23rd – Ideation, finalizing the concept
24th – 27th – Coding, configuring reacTIVision with Processing
27th – 30th – Testing and preparing for the exhibition
1st – 2nd – Installation/setup
3rd – 4th – Exhibition

After the first meeting, my whole project got flipped. Initially I was thinking of making a tabletop interactive city builder; it became something better: tracking the fiducial markers from the front and projecting on a vertical plane. It was a smart move, hiding the markers and making the aesthetics of the project much cooler.

I dropped the IR LEDs and IR camera and started with a webcam instead, to keep the setup clean.

Initial graphics and their coupled fiducial markers

 

Process

In test 1, I tested reacTIVision's ability to recognize the fiducial markers. I captured an image of a test marker and showed it to reacTIVision at different brightness settings on my smartphone, and it was recognized in almost every test run.

This time I was testing a Logitech USB camera, and at this point I didn't know how to configure the camera with reacTIVision (it's done through an .xml file). Without autofocus it detects all the markers but provides very little depth. With autofocus it detects at most four markers over a larger depth range, but a marker placed near the camera goes out of focus when I place another marker farther away, limiting me to a very shallow interaction area.

After many tests with the web camera, I found that the minimum distance between camera and marker should be 40–50 cm and the maximum 75–80 cm, giving me a nice region for interaction.

In test 3, I tested the lag between the user's input and the feedback on the screen. Tracking was pretty good, and the text stayed stuck to the marker.

In test 4, I tested with the graphics. Initially nothing rendered on the screen, because another line of code was drawing a white background on top of the images. After fixing that, a new challenge arose: aligning the images on one horizontal plane. The anchor point of each image was set to its center by default, and I fixed this by offsetting each image against one common reference image. The distance between two images is also affected by the camera distance, which created the need for a fixed platform.
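The alignment fix boils down to bottom-aligning every image on one shared baseline instead of drawing it from its center anchor: with a center anchor, an image of height h has its base at y + h/2, so shifting each image up by half its own height puts every base on the same line. A minimal sketch of that arithmetic (hypothetical names, not the project's actual code):

```java
// Bottom-align images of different heights on a shared baseline.
// With a center anchor, an image drawn at centerY has its base at
// centerY + h/2; to land every base on baselineY, each image's
// center must sit h/2 above the baseline.
public class BaselineAlign {
    // Returns the center-anchor y so the image's bottom edge
    // lands exactly on baselineY.
    public static float centerYForBaseline(float baselineY, float imageHeight) {
        return baselineY - imageHeight / 2f;
    }

    public static void main(String[] args) {
        float baseline = 600f;  // shared ground line in pixels
        System.out.println(centerYForBaseline(baseline, 400f)); // a tall tower
        System.out.println(centerYForBaseline(baseline, 120f)); // a low market hall
    }
}
```

Both images then share the same visual ground line, regardless of how tall each building graphic is.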


I selected some historical buildings from Toronto and made digital vectors of them for laser cutting. Nick suggested having multiple copies of each building, for example four CN Towers. Since I had assigned one marker to the CN Tower, I had to make four identical markers, one for each copy. In this way I had multiple replicas of the same building, and to keep track of the buildings and their assigned markers I created a spreadsheet. Then I attached the markers to their respective buildings.

Internal Critique

Some valuable inputs from peers and mentors
1. A staircase-like platform will help the user interact with the BLOCK pieces in the intended way.
2. Add some animations or triggers to move the images, so the project becomes more interactive.
3. A bigger TV will be more impactful for the project than the projector.

For 1, a quick fix would have been marking regions on the table, but some users held the buildings by the base and in doing so unknowingly hid the marker from the camera. So the staircase (laser-cut the same day as the critique, thanks to Jignesh for helping me out) turned out to be the perfect solution, as the open show proved.

For 2, I did make a background animation. In practice it was too laggy: even though the video was only 5 MB, it was too slow to process, so I disabled it.

For 3, I used a TV screen. The graphics were much brighter and sharper, and most importantly the setup became cleaner and more minimal.

 

Exhibition Setup


Take-Aways from the Exhibition

Some people were so surprised that they looked for the sensor tracking the buildings. Some touched the table to check whether there was a magnetic field. Some thought I was using an algorithm that detects the shape of the buildings (that kept me thinking for a while) and replicates them on the screen.
After I explained how CityBLOCK works, they complimented the project for being smart and clean in terms of setup.
There was an elderly couple; the man was so happy playing with the blocks that his wife had to remind him they should see the other work as well. I think that was a success for my project as a simple interactive piece.

Challenges & Learnings:

The biggest hurdle for me was that I had no idea where to start. There are no proper resources, but I knew it could be done; all I needed was one project to understand the communication between the camera and Processing.

Getting the image graphics on the screen when showing a marker to the camera was the most time-consuming phase of the project. It turned out to be just a code issue: another white background was being drawn on top of the images.
Once I got the images working, the rest was just completing the tasks necessary for the project.

References

http://reactivision.sourceforge.net/#files
https://www.tuio.org/?software

 

Skull Touch

S K U L L  T O U C H – An interactive skull


Skull Touch
An interactive skull that reacts to people's touch and outputs spooky sounds at different amplitudes and frequencies.

Mentors
Kate Hartman & Nick Puckett

Description

This project explores different states of touch: no touch, one-finger touch, two-finger touch, and grab. Capacitive sensing makes it possible to distinguish these states.
When I think of the term "tangible interface", the first thing that comes to mind is a tactile interface that anyone can feel, touch, and interact with. Why the scary theme? It was Halloween time, hence the spooky skull.

Github link: https://github.com/Rajat1380/SkullTouch

Inspiration

I started with the idea of using a pressure sensor and a touch sensor, then realized that only two states could be obtained, and I was not satisfied with that. So I started looking for ways to get more outputs.
I got to know about capacitive touch sensing, through which any surface can become a sensor. After watching this video by DisneyResearchHub, I realized this was what I had been trying to do initially. Disney developed its own hardware to detect different touches, and that information is not open to the public. So I looked for alternatives and found StudioNAND/tact-hardware, which provides all the information for building the capacitive sensor. I had always wanted to work with audio and control it through different input methods, and this project gave me the push to go for it. As the input device I chose a skull, since the project was happening around Halloween.

The Process

I started with the circuit setup and code to get the capacitance values for the different touches. The parts list provided by StudioNAND for a low-budget capacitive sensor is given below.

Requirements
1× 1N4148 diode
1× 10 mH coil

Capacitors

1× 100 pF
1× 10 nF

Resistors

1× 3.3k
1× 10k
1× 1M

I got all the components except the 10 mH inductor; I used a 3.3 mH inductor instead of the recommended one and proceeded with the circuit setup.
I had to install the TACT library for Arduino and Processing to run the code.
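The inductor substitution matters more than it might seem. If we assume the coil resonates against the 100 pF capacitor from the parts list (a simplification of the actual tact-hardware circuit), the standard LC formula f₀ = 1/(2π√(LC)) shows how far the 3.3 mH coil shifts the resonance away from the design point:

```java
public class LcResonance {
    // Resonant frequency of a simple LC tank: f0 = 1 / (2*pi*sqrt(L*C)).
    public static double resonantHz(double henry, double farad) {
        return 1.0 / (2 * Math.PI * Math.sqrt(henry * farad));
    }

    public static void main(String[] args) {
        // Recommended 10 mH coil vs the 3.3 mH I actually used,
        // paired with the 100 pF capacitor from the parts list.
        System.out.printf("10 mH:  %.0f Hz%n", resonantHz(10e-3, 100e-12));  // ~159 kHz
        System.out.printf("3.3 mH: %.0f Hz%n", resonantHz(3.3e-3, 100e-12)); // ~277 kHz
    }
}
```

A shifted resonance means the frequency sweep reads out a different, narrower portion of the response curve, which is consistent with the narrow touch-detection range I describe below.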

Arduino Circuits


Prototype 1


The output was very distinctive between no touch and one-finger touch, but not between one-finger and two-finger touch. I figured this out very late in the project. It was happening because of the 3.3 mH inductor: the range I was getting was too narrow for distinctive touches. I tried to wind a 10 mH inductor from a hollow cylinder and copper wire, but I couldn't get near 10 mH, so I proceeded with the current setup.
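When the signal is distinctive enough, telling the four states apart can be as simple as thresholding a single feature of the sensor reading (for example, the peak of the TACT spectrum). The thresholds below are purely illustrative, not the project's calibration values:

```java
public class TouchClassifier {
    public enum Touch { NO_TOUCH, ONE_FINGER, TWO_FINGER, GRAB }

    // Illustrative classification by a single scalar feature of the
    // capacitance curve. Real thresholds must come from calibrating
    // against your own sensor; these numbers are made up.
    public static Touch classify(int peak) {
        if (peak < 100) return Touch.NO_TOUCH;
        if (peak < 180) return Touch.ONE_FINGER;
        if (peak < 260) return Touch.TWO_FINGER;
        return Touch.GRAB;
    }

    public static void main(String[] args) {
        System.out.println(classify(80));   // below every threshold: no touch
        System.out.println(classify(210));  // mid-range: two-finger touch
    }
}
```

With my narrow 3.3 mH range, the one-finger and two-finger bands overlapped, which is exactly why this kind of thresholding broke down.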

As I was planning to control audio with the skull as the input device through different modes of touch, I jumped straight into Processing to couple the capacitance values with the amplitude and frequency of a scary sound, giving the user a spooky experience.
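Coupling the sensor to the sound comes down to a linear re-map of the capacitance feature onto the audio parameters, the same arithmetic as Processing's map() function. A minimal sketch with hypothetical ranges (the real values came from calibrating against my own sensor and aren't shown here):

```java
public class SoundMap {
    // Linear re-map from one range to another, equivalent to
    // Processing's map(value, inMin, inMax, outMin, outMax).
    public static float map(float v, float inMin, float inMax,
                            float outMin, float outMax) {
        return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        // Hypothetical ranges: a capacitance feature of 100..300
        // drives amplitude 0..1 and playback frequency 200..800 Hz.
        float reading = 200f;
        System.out.println(map(reading, 100, 300, 0f, 1f));    // amplitude
        System.out.println(map(reading, 100, 300, 200, 800));  // frequency in Hz
    }
}
```

A harder touch pushes the reading up the input range, so the sound simultaneously gets louder and higher-pitched.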

Final Setup

Challenges & Learnings

  1. Libraries, and how and where to install them, were a hard lesson in this project. The TACT library was created in 2014 and no longer supports newer Arduino microcontrollers, so I had to use an Arduino UNO.
  2. I was getting noise in the capacitance output even with no touch. I tweeted at the developer of the TACT library and got a reply: he gave me access to controlling sound via a microphone input. I still find this hard to understand; someday I will.
  3. Getting components was hard, as I was not able to source all of them. I tried to craft the inductor myself and it was not successful. It's part of learning, and it is difficult to accept dead ends.

References

StudioNAND, 2014 tact-hardware
28th Oct, 2019 [https://github.com/StudioNAND/tact-hardware]

Tore Knudsen, 2018 Sound Classifier tool
29th Oct, 2019 [http://www.toreknudsen.dk/journal/]

Tore Knudsen, 2018 Capacitive sensing + Machine Learning
29th Oct, 2019 [http://www.toreknudsen.dk/journal/]

Tore Knudsen, 2017 SoundClassifier
29th Oct, 2019 [https://github.com/torekndsn/SoundClassifier]

Tore Knudsen, 2017 Sound classification with Processing and Wekinator
29th Oct, 2019 [https://vimeo.com/276021078]

DisneyResearchHub, 2012 Botanicus Interacticus
30th Oct, 2019 [https://www.youtube.com/watch?v=mhasvJW9Nyc&t=49s]