Process Journal #6
Computer vision seems very interesting to me. I watched the four videos posted on Canvas and started this assignment by going through all the examples available on GitHub and the ML5 website. My goal is to use a TensorFlow.js model to implement some functions on the web. This means I don't need to learn TensorFlow.js systematically; I found I just need to use a ready-made model packaged as an NPM package, such as MobileNet (image classification), coco-ssd (object detection), PoseNet (human pose recognition), or speech-commands (voice recognition). The NPM pages for these models have detailed code examples that you can copy (I include a minimal MobileNet sketch below). There are also a number of third-party off-the-shelf model packages, such as ML5, which includes pix2pix, SketchRNN, and other fun models.

We were asked to build upon one of the existing ML5 examples by changing the graphics or the hardware input or output. I found the StyleTransfer example quite interesting, so I decided to work on it. I had already had the chance to explore PoseNet in the Body-Centric course, so for this project I decided to explore something different. Still using the webcam and pictures, I decided to experiment with the style transfer example in ML5.
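To show what I mean by using a ready-made NPM model, here is a minimal sketch based on the examples on the MobileNet NPM page; the 'photo' element ID is my own placeholder:

```js
// Minimal sketch based on the @tensorflow-models/mobilenet NPM examples.
// 'photo' is a placeholder ID for an <img> element already on the page.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyImage() {
  const img = document.getElementById('photo');
  const model = await mobilenet.load();          // downloads the pre-trained weights
  const predictions = await model.classify(img); // top classes with probabilities
  console.log(predictions);                      // e.g. [{ className, probability }, ...]
}

classifyImage();
```

The other packages follow the same pattern: install, load the model, and call one or two prediction methods.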
My project is an expansion of the style transfer example in ml5: users select a painting by one of their favorite artists from a limited range of choices, and the selected painting changes the style of the real-time image, producing a unique abstract painting video.
Firstly, the position detector detects the movement of the object. When the object moves to the visual center of the camera system, the detector immediately sends a signal to the image acquisition part to trigger a pulse.

Then, according to a predetermined program and delay, the image acquisition section sends pulses to the camera and the lighting system, and both the camera and the light source are turned on.

The camera then starts a new scan. Before starting a new frame scan, the camera opens its exposure mechanism; the exposure time can be preset. The light source is turned on at the same time, and the lighting time should match the camera's exposure time.

At this point, frame scanning and output officially begin. The image acquisition part obtains a digital image or video through A/D conversion, and the resulting digital image/video is stored in the memory of the processor or computer, where it is processed, analyzed, and recognized.
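To keep this trigger sequence straight for myself, here is a purely hypothetical sketch of the loop; none of the object or method names below come from a real library, they just label the steps above:

```js
// Purely hypothetical sketch of the triggered-acquisition sequence.
// detector, camera, and light are invented stand-ins for the hardware;
// no real API is implied.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function acquireFrame(detector, camera, light, delayMs, exposureMs) {
  await detector.objectAtCenter();   // 1. position detector fires a trigger
  await sleep(delayMs);              // 2. predetermined delay
  camera.setExposure(exposureMs);    // 3. preset exposure time
  light.flash(exposureMs);           // 4. lighting time matched to the exposure
  const frame = await camera.scan(); // 5. frame scan and A/D conversion
  return frame;                      // 6. handed to the processor for analysis
}
```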
Step 1: Allow StyleTransfer to use the camera.
Step 2: Select the artwork (start and stop the transfer process).
Step 3: Check the newly synthesized video.
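In code, these three steps map onto the ml5 StyleTransfer video example roughly like this; it's a minimal sketch based on that example, with 'models/wave' standing in for whichever painting's pre-trained model is loaded:

```js
// Minimal sketch based on the ml5 StyleTransfer video example.
// 'models/wave' is a placeholder path for the selected painting's model.
let style;
let video;
let resultImg;
let isTransferring = false;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO); // Step 1: the browser asks for camera permission here
  video.size(320, 240);
  video.hide();
  style = ml5.styleTransfer('models/wave', video, modelLoaded);
  resultImg = createImg('', 'style transfer output'); // holds each restyled frame
  resultImg.hide();
  createButton('start / stop').mousePressed(startStop); // Step 2
}

function modelLoaded() {
  console.log('style model ready');
}

function startStop() {
  isTransferring = !isTransferring;
  if (isTransferring) transfer();
}

function transfer() {
  style.transfer(gotResult);
}

function gotResult(err, img) {
  resultImg.attribute('src', img.src);
  if (isTransferring) transfer(); // keep restyling frame after frame
}

function draw() {
  // Step 3: show the synthesized video (or the raw camera feed when stopped)
  image(isTransferring ? resultImg : video, 0, 0, 320, 240);
}
```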
In the example, there is only one painting, and the composition of the resulting video is too abstract. I wanted to give users more choices, so I tried several other paintings to see whether they produce different effects.

Now I have a big problem: there is hardly any difference in the color and composition of the video between the chrysanthemum painting and the abstract painting, and I don't know what is causing it. Next, I tried an abstract painting in blue.

The difference is still so small that the naked eye can only see very slight variations, and I don't know what to do about the picture. Even so, I'm going to build the framework so users can choose between different images.
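A rough sketch of what that framework could look like, under my assumption that each painting needs its own pre-trained style model in ml5 (so the selector swaps models, not raw images); the names and paths are placeholders, and video and isTransferring come from the sketch above:

```js
// Hypothetical selection framework: each painting maps to its own
// pre-trained style model directory. All names and paths are placeholders.
const styleModels = {
  chrysanthemum: 'models/chrysanthemum',
  abstract: 'models/abstract',
  abstractBlue: 'models/abstract_blue',
};

function selectPainting(name) {
  isTransferring = false; // pause the transfer while the new model loads
  style = ml5.styleTransfer(styleModels[name], video, () => {
    console.log(name + ' style ready');
  });
}
```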
To sum up: users select a painting by an artist they like as the material, and that painting's style is transferred onto the real-time image, turning the camera feed into a unique abstract painting video.