Project02_UnderwaterDREAM

1.CONCEPT

Underwater Dream is a game that you play with your mind. The purpose of this project is to create a dream-like underwater experience.

This idea came from my own dream experiences. When I dream, I can experience amazing stories that never happen in real life. If I realize I am dreaming, I can control the dream with my mind. For example, if I want a cake, I can imagine a cake in my hand, and after I blink my eyes I will see a cake in front of me, like magic.

So there are a few points that make the dream experience different from real life:

1. No physical activity is required

2. Surroundings can be changed by imagination (and a blink)

3. The story is fantastic, but it looks real

Because many unexpected physical problems appeared in Project 1 (and were difficult to fix), I didn't want to build many physical installations. So this time I chose a game, a good medium for interaction, as the platform.

To decide which elements could be used in the underwater world, I did some brainstorming:

[Brainstorming sketch]

2.DESIGN

To achieve these points, this game combines three different data sources: an EEG sensor, an Arduino, and a webcam.

The EEG sensor covers points 1 and 2: the player moves forward with their attention level, plays music with their meditation level, and destroys barriers by blinking.

The Arduino connects real-world water to the virtual water by detecting the water level in a physical container.

The webcam is used with OpenCV to track the player's face, which makes the on-screen world look "real": the viewing angle changes as the player moves their head.

Here is a reference video about the “real” 3D effect:

http://www.youtube.com/watch?v=h9kPI7_vhAU

To summarize, this game can be played without a keyboard or mouse: only mind and motion.

SCENE

There are two plans for the scene:

First, the whole scene is a circle, so the game can loop; the player turns automatically. There are three themes across the map: Coral Valley, Sunken Ship, and Underwater Graveyard.

Second, the scenes are straight paths on which the player cannot turn. There are two themed scenes, Life and Death. The player starts in the Life scene, where all the scenery is light and lively. After experiencing the whole scene, he/she is transported to the Death area, which is full of darkness, horrible skeletons, and the remains of a sunken ship. After this, the player is brought back to the Life area to complete the cycle.

3.TECHNOLOGY

SOFTWARE

Unity 3D

Processing

3ds Max

Photoshop

LIBRARY

OpenCV for Processing

ThinkGear

oscP5


DATA FLOW

The main challenge is how to transfer data between the different sources. Data comes from the Arduino, the webcam, and the EEG sensor into Processing, and is then passed on to Unity.

[Diagram: data flow]

From Processing to Unity

First of all, I had to make sure Processing could be connected to Unity. I found a video in which Processing controls the scale of a cube in Unity. Following the video, I downloaded the oscP5 library, and with it numbers could be sent to Unity successfully. This is the foundation of the whole project, and it had to be confirmed before anything else started.
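oscP5 handles all of the encoding, but the underlying OSC message format is simple enough to sketch by hand, which helps when checking what actually arrives on the Unity side. Here is a minimal sketch in plain Java of encoding one single-float message; the address /attention is a hypothetical example, not an address the project necessarily uses:

```java
import java.nio.ByteBuffer;

public class OscSketch {
    // Pad a string with NULs to a multiple of 4 bytes, as the OSC spec requires.
    static byte[] padded(String s) {
        int len = s.length() + 1;            // at least one terminating NUL
        int total = (len + 3) / 4 * 4;
        byte[] out = new byte[total];
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Encode an OSC message with one float argument, e.g. "/attention" 72.0.
    static byte[] encode(String address, float value) {
        byte[] addr = padded(address);
        byte[] tags = padded(",f");          // type-tag string: one float
        ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 4);
        buf.put(addr).put(tags).putFloat(value);  // ByteBuffer is big-endian by default
        return buf.array();
    }
}
```

A message built this way for "/attention" is 20 bytes: a 12-byte padded address, a 4-byte padded type tag, and a 4-byte big-endian float.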

From Arduino to Processing

I use an ultrasonic sensor in this project to detect the water level in a box. The game will start only when the box is filled with water.

Here is the circuit:

[Photo: ultrasonic sensor circuit]

Here is an experiment testing whether the ping sensor can detect the water level:

//Experiment 1

(Some experiment descriptions are under 200 words because I didn't write them in order: some descriptions are in other sections, and some cover more than one experiment.)
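The water-level arithmetic behind this experiment can be sketched independently of the Arduino. A minimal sketch in plain Java, assuming the sensor points straight down at the water surface; the mounting height and the "full" threshold are hypothetical values:

```java
public class WaterLevel {
    // Sound travels roughly 29.1 µs per cm; the echo is a round trip, so halve it.
    static double pulseToCm(long echoMicros) {
        return echoMicros / 29.1 / 2.0;
    }

    // The sensor looks down at the surface: level = mounting height − distance.
    static double levelCm(long echoMicros, double sensorHeightCm) {
        return sensorHeightCm - pulseToCm(echoMicros);
    }

    // The game is allowed to start only once the box is filled past a threshold.
    static boolean gameMayStart(double levelCm, double fullCm) {
        return levelCm >= fullCm;
    }
}
```

For example, a 582 µs echo means the surface is about 10 cm away; with a sensor mounted 25 cm above the bottom of the box, that is a 15 cm water level.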

Here is another experiment, testing the data transfer from Arduino to Processing to Unity.

//Experiment 4

From EEG to Processing

[Photo: NeuroSky MindWave headset]

I first learned about EEG from a TED talk about Emotiv; I was amazed by the technology (and it is an important reason why I chose Digital Future). I have a NeuroSky MindWave EEG sensor, but because I had no coding background, I didn't use it until I learned Processing. There is a Processing library for the sensor called ThinkGear, and fortunately NeuroSky provides many development tools on their website, some of them free.

To transfer data from the EEG sensor to Processing, two pieces of software are necessary: MindWave Manager and ThinkGear Connector. The first connects the sensor to the laptop; the second is the socket server that passes data from the sensor to Processing. Both can be downloaded from the NeuroSky website.

The library provides several variables: attentionLevel, meditationLevel, blinkStrength, delta, theta, low_alpha, high_alpha, low_beta, high_beta, low_gamma, mid_gamma. Their levels reflect the user's mental state. In this project, only attentionLevel, meditationLevel, and blinkStrength are used.

This is Experiment 2: data from the EEG sensor to Processing.

In this experiment, I used attentionLevel to control the height of a rectangle and blinkStrength to change its color. The harder I blink, the more vivid the color becomes.

//Experiment 2

Here is Experiment 5: using attention to control movement and movement speed. The player moves faster by concentrating harder; if the player loses concentration, he/she stops.

//Experiment 5
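This movement rule amounts to a threshold plus a linear remap of the attention value (which ranges 0–100). A sketch in plain Java, using a Processing-style map() helper; the threshold and speed values passed in are hypothetical:

```java
public class AttentionMove {
    // Processing-style map(): linearly rescale v from [a, b] to [c, d].
    static float map(float v, float a, float b, float c, float d) {
        return c + (d - c) * (v - a) / (b - a);
    }

    // Below the threshold the player stops; above it, speed grows with attention.
    static float forwardSpeed(int attention, int threshold, float maxSpeed) {
        if (attention < threshold) return 0f;
        return map(attention, threshold, 100, 0f, maxSpeed);
    }
}
```

With a threshold of 40 and a top speed of 5, an attention level of 70 gives a speed of 2.5, and full attention gives the maximum.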

In Experiment 6, the blink function was added. When my blink strength reaches a certain level (which prevents slight blinks from disturbing the result), the barrier disappears. This function simulates the dream situation I mentioned in the first part. Its advantage is that the player never sees the barrier disappearing, because it happens while their eyes are closed.

//Experiment 6
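The trigger logic can be isolated from Unity: a cutoff filters out slight blinks, and a latch makes one hard blink destroy only one barrier. A sketch in plain Java; the cutoff value is hypothetical, and whether a latch is needed at all depends on how the blink readings arrive (one event per blink versus a continuous stream):

```java
public class BlinkGate {
    private final int cutoff;      // blinks at or below this strength are ignored
    private boolean armed = true;  // re-arms once the reading falls back down

    BlinkGate(int cutoff) {
        this.cutoff = cutoff;
    }

    // Returns true exactly once per hard blink, even if the high
    // reading persists across several updates.
    boolean fire(int blinkStrength) {
        if (armed && blinkStrength > cutoff) {
            armed = false;
            return true;           // the game would destroy the barrier here
        }
        if (blinkStrength <= cutoff) armed = true;
        return false;
    }
}
```

A repeated high reading fires only once; the gate re-arms after the strength drops back below the cutoff.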

From Webcam to Processing

[Photo: webcam face-tracking test]

To track the face with the webcam, the OpenCV library is used. Starting from the face-tracking example, I located the midpoint of the face and used its position to control the camera in Unity. The final effect is that when you move your head, the camera in Unity moves and rotates with your movement.

//Experiment 3
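The camera mapping reduces to rescaling the face centre's pixel offset from the middle of the frame into a camera angle (the same idea applies per axis). A sketch in plain Java; the frame width and maximum angle are hypothetical values, not the ones the project uses:

```java
public class HeadCamera {
    // Map the face centre's horizontal offset from the frame centre
    // to a camera yaw in degrees. faceX in pixels, frameW the frame width.
    static float yawDegrees(float faceX, float frameW, float maxAngle) {
        float offset = (faceX - frameW / 2f) / (frameW / 2f); // normalised to -1 … 1
        return offset * maxAngle;
    }
}
```

A face centred in a 640-pixel frame gives a yaw of 0°; at the left or right edge the camera reaches its full swing.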

Work in Unity

After transferring the data from Processing to Unity, the rest of the work was done in Unity.

The first challenge was that Processing is not supported in Unity, so I had to use JavaScript, which I had never used before. I wrote the JavaScript code by learning from examples, online tutorials, and the official Unity manual. I found that even though JavaScript and Processing are different languages, they are similar enough that what I learned from Processing helped me learn JavaScript.

In Unity, I move the whole scene instead of moving the character. Experiment 3 already tested the camera movement, Experiment 5 tested the forward movement, and Experiment 6 completed the basic gameplay, but those all involve Processing. Here is a function that works in Unity alone.

This experiment is an improved version of Experiment 6. I need the game to loop automatically, without stopping to restart, so the barriers have to reappear; Experiment 7 adds this. The effect is that after the second barrier is destroyed, the previous one reappears at its original position.

//Experiment 7
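The respawn rule (destroying a barrier revives the previous one, so the loop never runs out) can be sketched as a circular array of flags. The class and method names below are my own, not taken from the project's Unity script:

```java
public class BarrierLoop {
    private final boolean[] alive;

    BarrierLoop(int n) {
        alive = new boolean[n];
        java.util.Arrays.fill(alive, true);   // every barrier starts in place
    }

    // Destroying barrier i revives the previous one (wrapping around),
    // so there is always a barrier waiting on the next lap.
    void destroy(int i) {
        alive[i] = false;
        int prev = (i - 1 + alive.length) % alive.length;
        alive[prev] = true;
    }

    boolean isAlive(int i) {
        return alive[i];
    }
}
```

After the player blinks through barriers 0 and 1 in turn, barrier 0 is already standing again at its original position.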

Work in Processing

Experiment 8 adds background music controlled by the meditation level, using a library called Minim. When the meditation level is over 50, a piece of music is played; it will not play a second time until the first playback has finished.

//Experiment 8
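This trigger rule, play once when meditation passes 50 and don't retrigger until the track ends, is a small state machine. A sketch in plain Java, with playback simulated by a flag so the logic can be tested on its own; in the real Processing sketch the flag would be replaced by calls on Minim's AudioPlayer:

```java
public class MusicGate {
    private boolean playing = false;

    // Called on every new meditation reading. Returns true when
    // playback should start; refuses to retrigger mid-song.
    boolean update(int meditation) {
        if (!playing && meditation > 50) {
            playing = true;        // real sketch: rewind and play the track here
            return true;
        }
        return false;
    }

    // Called when the track finishes, allowing the next trigger.
    void songFinished() {
        playing = false;
    }
}
```

High meditation readings during playback are ignored; only after the song finishes can a new reading above 50 start it again.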

Experiment 9 tests the OBJLoader library. I tried an example, but when I imported my own OBJ file it couldn't show the texture, and I don't know how to solve this problem. OBJLoader was a fallback in case Processing couldn't transfer data to Unity, but Processing clearly doesn't support 3D as conveniently as Unity does.

//Experiment 9

Experiment 10 was my attempt to let the player turn automatically. Unfortunately I only completed half of it: because I didn't set the original position properly at first, it was difficult to change later. I tried two methods to trigger the turn. The first used distance, but it wasn't accurate, so I switched to collision detection, which worked much better. However, I didn't have enough time to fix the positions, so I left this function out of the final version. I also forgot to record this part….

CODE

Arduino:

https://gist.github.com/Jennaltj/49f766fe0ff61bb485d0

Processing and Unity:

https://gist.github.com/Jennaltj/d5addd6a73d6e85a81c5

4.ART

The 3D models were made in 3ds Max, and the textures were drawn in Photoshop.

Here are some reference pictures:

[Reference images 1–4]

Here are some pictures for modelling:

[Screenshot: modelling in 3ds Max]

Import 3d models to Unity:

[Screenshot: models imported into Unity]

I also added many effects to the scene, such as lighting, fog, and particles.

5.CONCLUSION

Here is the final version of the project:

[Photo: final setup]

Actually, this is not a project made for public exhibition; it concentrates on personal experience. But I think a multiplayer version would be more attractive. Also, because of the time limit, I didn't complete everything I wanted to show.

I really learned a lot from this project. It is the first time I have made something with code. I am excited to be learning a programming language (even though I met a lot of difficulties, in both Processing and JavaScript, and some of them are still unsolved). I think coding is very important in a project, because I found that many ideas cannot be realized without programming knowledge.