Zap Project (Murgatroyd the Talking Moose)

The “Zap” Project                          

A “secret keeper” for kids age 5 and up


A “simpler” interactive talking toy that doesn’t just spit back whatever it hears, try to teach a 5 year old how to count, strongly hint that “robots are always war machines”, or record whatever it hears for storage at some corporation’s head office.  The stuffed toy will be a friend and confidant to his companion child (of whatever age).  It will listen to and “understand” human speech, but it is incapable of making enough of the sounds human speech consists of, so it has developed its own “language”, called “Zap”.  The child (or older person) will have to put the effort into learning to understand “Zap”.  What this gives a child is a toy/companion/friend he or she can tell anything, one that will “keep his or her secrets” because nobody else in the house understands what the toy is “saying”.

This is intended to be a child’s “best friend” who listens to what he’s told and doesn’t tell “Mom and Dad” about everything.  I believe this will stimulate “true” imagination (that which is triggered by nothing but itself) in the user by requiring the willing effort to imagine that “Zap” is actually speech and that there is an actual response from the toy.  Preliminary sketch of the concept is below.  The sketch is of a teddy-bear but the toy could be any stuffed animal of the necessary size.


Project Log

Day One:

General contemplation of what I want the project to do and what the difference is between this and other interactive toys.

Most of the “interactive” toys I’ve seen on the market have, in my opinion, several drawbacks:

      1. Pre-recorded and predictable human phrases triggered by a squeeze or similar
      2. Pre-recorded “babble” triggered as above
      3. Random and pointless movement (waddle or roll or other)
      4. Overly “educational” (teach my toddler to count, etc.) for my target age group
      5. Some of them actually record ambient sound (i.e. your conversations, etc.) and store it at the manufacturer’s head office (privacy issues abound here; see the article excerpted from the New York Times below):

        “SAN FRANCISCO — My Friend Cayla, a doll with nearly waist-length golden hair that talks and responds to children’s questions, was designed to bring delight to households. But there’s something else that Cayla might bring into homes as well: hackers and identity thieves.

        Earlier this year, Germany’s Federal Network Agency, the country’s regulatory office, labeled Cayla “an illegal espionage apparatus” and recommended that parents destroy it. Retailers there were told they could sell the doll only if they disconnected its ability to connect to the internet, the feature that also allows in hackers. And the Norwegian Consumer Council called Cayla a “failed toy.”

        The doll is not alone. As the holiday shopping season enters its frantic last days, many manufacturers are promoting “connected” toys to keep children engaged. There’s also a smart watch for kids, a droid from the recent “Star Wars” movies and a furry little Furby. These gadgets can all connect with the internet to interact — a Cayla doll can whisper to children in several languages that she’s great at keeping secrets, while a plush Furby Connect doll can smile back and laugh when tickled.”

      6. They’re some variation of the “remote control robot” (usually some type of “armed combat unit”)
      7. Tremendously expensive relative to what’s actually offered

Day Two:

Online research for the parts that may be required outside of the class kit (preliminary list below): 

      • Feather Wing with MP3 (or other format) storage (internally or on a micro-SD card) and playback capability
            • Found Adafruit Music Maker Feather Wing. 
                  • There is, however, no recording capability (detectable from the product description, anyway).  I may have to pre-record some speech, pre-“scramble” it into “Zap”, and work with a variable timer (1 to 3 seconds at random). 
            • Found the Electret Mic Amp – MAX9814. 
                  • I’m going to use the microphone as a trigger for a response rather than a pickup for ambient voices.  There will be no privacy concerns as it will only trigger the playback.
            • Found the LiPo 2000mAh #2011 battery
                  • Should be enough to power the toy as there will be no continuous drain.
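The trigger-plus-random-delay behaviour described above could be sketched roughly like this (the threshold value and the 1-to-3-second window are placeholders, not final values; on the Feather the reading would come from `analogRead()` on the MAX9814 pin):

```cpp
#include <cassert>
#include <cstdlib>

// True when the MAX9814 amplitude reading (0-1023 from analogRead)
// crosses the trigger threshold -- the mic is only a trigger, never
// a recorder, so no audio is ever stored.
bool micTriggered(int reading, int threshold) {
    return reading >= threshold;
}

// Random pause of 1 to 3 seconds before Murgatroyd "responds".
unsigned long responseDelayMs() {
    return 1000 + (std::rand() % 2001);  // 1000..3000 ms inclusive
}
```

In the real sketch, `micTriggered()` would gate a call to the Music Maker’s playback, and `responseDelayMs()` would be the pause before Murgatroyd “answers”.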

Feather Wing, microphone and lithium ion battery on order from Adafruit (DHL delivery … hopefully fast).

Preliminary Fritzing Sketch:


Preliminary Schematic Diagram:


Day Three:

Found a toy at Value Village (see photo below). 


This will become “Murgatroyd the Talking Moose”.  It’s suitable due to the zippered opening in the back (making parts insertion/repair simpler for the prototype).  Created a user survey form for testing period (link below).

Day Four:

Parts have arrived (yay).  Unfortunately, I can’t get downtown to solder them today … will go tomorrow.  Spent over an hour making and recording silly noises in separate tracks to load onto the SD card.  This was simply taking a consonant and adding a vowel (Ba pronounced as “Bah” for example) or just a vowel alone.  I intend to simply use the “shuffle” feature to make Murgatroyd “speak” in “Zap”.

Day Five:

All soldered up and ready to go (hopefully).  Note from the photo of the circuit assembly below that I’ve deliberately installed the header pins such that the “working surfaces” of the Feather and Feather Wing face each other rather than being exposed.  This will (hopefully) help prevent any jerked or pulled contacts and impact injuries inside the toy.  Music Maker library downloaded and installed in the Arduino IDE.   Files transferred to SD card as TRACK001 – TRACK037. 
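As a rough sketch of how the shuffle might pick from TRACK001 – TRACK037 (the .mp3 extension and the no-immediate-repeat rule are my assumptions, not settled decisions):

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>

const int NUM_TRACKS = 37;  // TRACK001 .. TRACK037 on the SD card

// Build the 8.3-style filename handed to the VS1053 player for track n
// (the .mp3 extension is an assumption; any supported format works).
void trackName(int n, char *buf, std::size_t len) {
    std::snprintf(buf, len, "TRACK%03d.mp3", n);
}

// Pick a random track, avoiding an immediate repeat so the "Zap"
// babble doesn't stutter on the same syllable twice in a row.
int pickTrack(int last) {
    int next;
    do {
        next = 1 + std::rand() % NUM_TRACKS;
    } while (next == last);
    return next;
}
```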



Day Six:

Adafruit’s basic test sketches for the Music Maker and microphone check (link below):

The code for the Music Maker verifies and uploads, but I’m getting the message “SD failed, or not present”.  Tried re-seating the SD card with the same result.  I hope I don’t have what the Adafruit site defines as a “non-brand knock-off”.  Visually checked all solder points and connection points on the SD card … everything seems OK, but it will not recognize the card.  Have tried two brands (Kingston and Nextech) of 32 GB micro-SD cards.  Adafruit’s test sketch for the microphone seems to recognize the input (getting a result in the serial monitor, anyway). 

Day Seven:

I hate this …  I absolutely hate it.  Arduino’s test code for the microphone gives me a result I can use as input for the trigger, and the Music Maker Feather Wing test code verifies and uploads but will not find the SD card (error copied below from the serial monitor output):

Adafruit VS1053 Feather Test

VS1053 found

SD failed, or not present

I’ve tried everything I can think of.  Without access to the sound files I can’t code to test the full project.  The failure could be in the coding, the SD card, the wiring, or elsewhere.  I’ve simulated the final assembly (all the pieces fit in the toy) but it doesn’t work (photo below).
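One thing still worth trying is retrying the card initialization a few times before giving up, since a marginal contact sometimes passes on a later attempt. A generic sketch of that idea (hypothetical, not from the project code; on the Feather the init function would wrap `SD.begin(CARDCS)`):

```cpp
#include <cassert>

// Retry an init function (e.g. a wrapper around SD.begin(CARDCS))
// a few times before declaring failure; a marginally seated card or
// slow power-up sometimes passes on the second or third attempt.
template <typename InitFn>
bool initWithRetry(InitFn init, int attempts) {
    for (int i = 0; i < attempts; ++i) {
        if (init()) return true;
        // on the Feather: delay(250); between attempts
    }
    return false;
}
```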


Day Eight and Later (December 20 – 26):

Ran the user survey as best I could.

Eight survey responses (neighbours and relatives) about the toy (solely as concept since I can’t make the thing work).

Final Fritzing Sketch


Speculative Repair Kit List (If it actually worked):

  1. Micro-screwdriver set
  2. Black electrician’s tape
  3. Needle & thread (for possible repairs to Murgatroyd)
  4. Soldering kit (for possible repairs to the circuitry)

B.O.M. for the project:


Odd Man In & Consider Growth

Project Log – Odd Man In & Consider Growth (Experiment 4)

Dave Foster, Chris Luginbuhl, Karo Castro-Wunsch

Creation & Computation DIGF-6037-001 (Kate Hartman & Nicholas Puckett)

Project Description (from course Assignments tab):

Digital messaging is a pervasive part of our daily lives that has taken on a variety of forms and formats.  Working in small groups you will develop hardware and software systems that investigate methods of messaging and notifications.  Some of the topics we will cover include: synchronous/asynchronous communication, ambient data, alerts, web API’s, and messaging services.

For this project you will design and create a custom device that allows the members of your group to communicate in new ways.  Using the Wifi connection of the Feather M0, you will create standalone objects that communicate over the internet.  Each group must consider how these devices create a specific language that allows the things, people, or places on the ends of the network to communicate.

Project Initial Ideas:

From Dave (Balsamiq sketch below):

The Problem Being Addressed:

We are all, at some point in our lives, deeply immersed in one project or another (essays, art projects, etc.) and have no wish to be interrupted by Facebook Messenger, Twitter, etc. (even if these services are part of another assigned project).  What if there were a step back to the old-fashioned pager?  A self-standing, project-specific communication request alarm triggered by any content added to the project’s “group application” (Facebook, Twitter, etc.)?

The Concept:

A project specific communication request notification device (similar in concept to early pagers) which bypasses or steps back from direct Facebook/Chat-room/Web pop-ups/etc.  The idea is that the group’s 3 (or more) members would not have to be logged in to anything to receive notification that another group member was requesting communication.  It would be implied that this would be specific to messages about the project assigned to the group.

A project-specific Facebook group, Twitter account and E-mail account would be set up with membership restricted to the group members for a specific project.  Each communication modality would be assigned a colour (red, green or blue), and each member would carry/wear a small device containing his Feather board wired to 3 differently coloured LEDs and (possibly) a vibration motor or similar noisemaker.  The LED in a modality’s assigned colour would blink (and the possible noisemaker would buzz) whenever any member had posted to that service for the group.

Use would be as a “filter” so that, if you’re frantically working on an essay or other assignment, you would not have to have any other service or application active (no distractions from your work due to pop-ups etc.).  All you would see is a blinking light if a group specific message was posted by another group member.


From Chris & Karo:

Concept 1:

The problem & context:

People who are trying to establish a consistent rhythm in their lives often struggle to do so. Typical examples include:

  • Self-employed people who wish to be at their desk and working by a certain time each day
  • People learning to play a musical instrument by practicing daily
  • People wishing to establish a regular mindfulness meditation practice
  • Writers struggling to finish a book

Establishing a routine is an important part of maintaining helpful habits (Gardner, 2012).

Accountability partners work in different ways, but one format is a quick, daily check-in, along the lines of “It’s 9am and I’m at my desk”.

The accountability partner is an idea that has gained momentum since the 90s. There is ample testimony that accountability improves the chances of sticking with a program (Inc., Quiet Rev, Huffington Post).

From Wikipedia:  Not having an accountability partner to help a person accomplish their goal is one reason 92% of people did not accomplish their New Year’s resolution according to a University of Scranton study[5] by Dan Diamond in Forbes and an article by Dale Tyson.[6]

See also: Why an Accountability Buddy Is Your Secret Weapon for Faster Growth (Entrepreneur Magazine)

At the same time, more and more people are working from home (3% of US workers work from home at least half time, according to CNN and Global Workplace Analytics and FlexJobs (link)). This means that remote accountability partners are more common than on-site ones, and the partners typically use smartphones and/or social media to connect. One problem with this arrangement is that smartphones and social media are perfect tools for procrastination (Meier, 2016).


One approach to accountability partnering that avoids these pitfalls follows the approach of calm technology (Weiser & Brown, 1995): the idea that technology can help us best by existing at the periphery of awareness, rather than by demanding our attention.

Borrowing from Kate Hartman’s book “Make: Wearable Electronics: Design, prototype and wear your own interactive garments” pp 56-59, a sandwich switch could be used in a couch cushion, chair cushion or meditation cushion to send a wireless signal to an accountability partner that indicates the user is sitting at their work (practice, meditation, etc). This signal could result in a public post for accountability, or a private signal sent just to the accountability partner.

Numerous variations are possible… the partners could each have the same device and commit to both sitting down at 9am. Each partner receives a visible, tactile or auditory cue that the other has arrived. Progress and consistency could be tracked on an online dashboard.

This system could also be used (e.g. with exercise clothing having bend sensors integrated) for exercise accountability tracking, thereby overcoming some of the shortcomings of the much-hyped but disappointing accountability ecosystem Gym Pact.


Gardner, B., Lally, P., & Wardle, J. (2012). Making health habitual: the psychology of “habit-formation” and general practice. The British Journal of General Practice, 62(605), 664–666.

Hartman, K. (2014). Make: Wearable Electronics: Design, prototype and wear your own interactive garments. O’Reilly Media. pp. 56–59.

Meier, A., Reinecke, L., & Meltzer, C. E. (2016). “Facebocrastination”? Predictors of using Facebook for procrastination and its effects on students’ well-being. Computers in Human Behavior, 64, 65–76.

Weiser, M., & Brown, J. S. (1995). “Designing Calm Technology.” Xerox PARC, 21 Dec. 1995.

Concept 2

Problem – we want to connect with loved ones, but screen time takes us away from each other.


A metallic pendant worn against the skin incorporating the feather, heating element and Li-poly battery. A squeeze sends a message to the partner’s matching pendant, causing it to glow, vibrate, and/or heat up briefly. Partners wear matching pendants and messaging is 2-way.

Concept 3

Artwork investigating networks – neural networks, ecosystem, societal networks.

Networks are familiar from their hundreds of examples in nature and underpin the structure of our own brains; neural networks are at least partially responsible for how brains work.

If we created a wifi- (or XBee-) connected network of physical nodes (a node being, e.g., an LED with a sensor or button in a housing), would it be possible to establish and demonstrate information being passed through the network?

Could human input or intervention alter, enhance or suppress the patterns of information?

If more people come to the party to interact, at what point does it enhance the complexity, connectivity or synchronicity of the network, and at what point does too much human interference make it collapse?

Final Form(s):


  1. Free-standing container with Feather controller (coded for wireless access), lithium-ion battery & 3 LEDs (red, green, blue).  Container to be configured to “hook” over the screen of a laptop such that the LEDs are visible on a flat surface facing the user.
    1. Each LED coded to flash/blink (or at least turn on) indicating a “communication request” through one of 3 pre-established group pages (Facebook, Twitter & E-mail).
    2. Allows for (semi-)uninterrupted work on other projects while remaining potentially aware of communication regarding the assigned group project.


Plain English Logic Flow (not code)


All LEDs to OFF

Link Feather to PubNub account

Link PubNub account to Facebook, Twitter and E-mail accounts:


  1. Facebook group “Odd Man In”
  2. Twitter feed “Odd Man In”
  3. OCADU student E-mail account

IF – posting to Facebook group = YES

    IF – not logged into Facebook – RED LED at maximum
    IF – logged into Facebook – no alert needed

IF – posting to Twitter feed = YES

    IF – not logged into Twitter – GREEN LED at maximum
    IF – logged into Twitter – no alert needed

IF – posting to OCADU E-mail = YES

    IF – not logged into E-mail account – BLUE LED at maximum
    IF – logged into E-mail account – no alert needed

Project Daily Log:

Tuesday, Nov. 14 – 12:00 to 13:30:

Chris & Dave met in the D/F lab at 12:00 for a design conference.  Dave put his idea forward (see above Balsamiq sketch) noting the physical design simplicity and adherence to the assignment parameters.  Chris mentioned some related notification input methods involving pressure sensors or switches installed in cushions.

While searching for applicable IFTTT links to Facebook Messenger, Dave received an E-mail from PubNub regarding a new service called ChatEngine that merits further examination.  Chris and Dave to meet again (probably) late Thursday afternoon.

Thurs Nov 16 – 6-8pm

Experimenting with example code. Trying to understand some of the workings:

-how JSON objects are created & parsed

-how pointers (* and &) work in C++

-object-oriented programming principles (e.g. wifiClient object).

Fri Nov 17 – 13:00 and After Class

Chris & Dave met in DF lab and after C & C class.  Further discussion as to which idea to implement and method of implementation.  After brief discussion with Kate, Dave seems to be leaning strongly towards the pager application with Chris concentrating on “point of presence” or “accountability partner” application.  At base, we’re trying to find a relatively simple “hey you” function specific to group members.  Looked through IFTTT for applets that might work.  We may be able to go through Adafruit I/O directly rather than over-complicating the exercise with Pubnub’s ChatEngine or similar.  We might be able to push a notification with a colour code for each communication method required (blue for Facebook Messenger, red for Twitter, etc.).  Chris has some coding examples which will be examined Tuesday.

Mon. Nov 20 (in class)

Further discussion amongst Dave, Chris & Karo re:  final form for individual devices.  Dave is concentrating on the simple pager.  Chris is attracted to the “accountability partner” cushion idea.  There will be some differences in the three products.  Meeting in the D/F lab Tuesday.

We worked together to ensure everyone’s Feather worked and that we could publish and read from the same PubNub channel. We also prototyped a version of the software which published a message (1, 2 or 3) depending on which one of three switches was pressed, then read that message back from PubNub and lit an LED corresponding to the message (see video:


Caption: pager circuit working on a breadboard

Tues. Nov 21

Dave working on the container for the over-screen paging system as well as wiring for the Feather controller in his version of the project.  Final testing of code and Feather assembly (all LEDs working to spec. — code link and Fritzing diagram below).


Thursday Nov 23

Final form for Dave’s prototype cut from foamcore and glued together (photo below).  To set and cure overnight.  Hopefully all connections will remain intact after assembly and will work tomorrow.


Friday, November 24:

Both Chris’ and Karo’s applications functioned as planned.  Dave’s appears to have suffered a disconnect during construction of the housing as it does not function (though it did Thursday night).  Tried multiple reloads/resets of the controller with no luck.  Believe I did at least explain the function adequately.

Consider Growth:

As mentioned above, “Consider Growth” branched off of our group project on Monday Nov 20, though we continued to meet as a group. We discussed the right kind of technology to bring to this problem – how to encourage users without distracting or irritating them.

We decided to build a sandwich switch which could be slid inside existing cushions (e.g. couch cushions or meditation cushions) or placed on top of a chair.

At the same time, we discussed different ways of reflecting users’ data back to them on a website or app. A simple line or bar graph showing daily and monthly sitting totals would be a conventional option, but we wanted to do something more imaginative to reflect the open-ended experience of taking up a new skill, discipline or hobby. More details on how the graphical representation evolved can be found below.

A typical use scenario works like this:

-At 9am, the server sends a message to start the session.

-If the user’s accountability partner is sitting, the user’s cushion EL wire lights up to indicate the accountability partner is sitting. And vice-versa.

-When the user sits, the user’s EL wire turns off.

-When both users are sitting, the server sends a message that rings the users’ bells, signalling the start of the session.

-When either user sits, a generative animation “grows” on the webpage. If both users sit, both animations grow. If either one gets up, their animation stops growing.

-When the timer reaches a set amount of time (e.g. 20 minutes), the bell rings signalling the end of the session.

-Note that once the users sit, they can choose to view the animation or not. They will not need to interact with or receive notifications from the system until the end bell rings. This is a deliberate measure to reduce distractions.
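The scenario above boils down to a small state machine. A rough model (the names and the once-only start bell are my own framing, not the project code):

```cpp
#include <cassert>

// Minimal model of the session protocol described above. The EL wire
// shows that the *partner* is sitting while you are not; the bell
// rings once when both partners sit, and again when the timer expires.
struct Session {
    bool started = false;

    // EL wire on = partner is sitting and you are not (a nudge to sit).
    bool elWire(bool meSitting, bool partnerSitting) const {
        return partnerSitting && !meSitting;
    }

    // Start bell rings the moment both are seated (once per session).
    bool startBell(bool meSitting, bool partnerSitting) {
        if (!started && meSitting && partnerSitting) {
            started = true;
            return true;
        }
        return false;
    }

    // End bell after the set duration (e.g. 20 minutes).
    bool endBell(unsigned long elapsedMin, unsigned long durationMin) const {
        return started && elapsedMin >= durationMin;
    }
};
```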

We needed to make a larger “sandwich switch” than the one illustrated in Kate’s book (Hartman, 2014, pp 56–59). We decided to include internal “springs” of felt. As luck would have it, our first guess about the design of the insulator between the layers of conductive fabric worked well in testing on a variety of cushions. The conductive fabric was cut into two identical shapes and ironed onto the felt, and along with the insulation layer, the five-layer sandwich was stitched together with bar tacks in the corners.


Caption: Switch – black felt with silver conductive fabric


Caption: you can’t solder to this conductive fabric. We had to stitch with conductive thread.


Caption: The lower layer of conductive fabric is visible through the layer of insulating felt with holes cut into it. It was tempting to make the holes in a seasonal snowflake pattern.


Caption: Assembled switch. The garter clip provides strain relief for the barrel jack used as a connector. We wanted to use a two-prong plug that could not be accidentally connected to our battery pack, which had a JST connector.

We chose EL wire for this application because of its soft, even light and inherent flexibility and adaptability to different cushions. We also wanted to use a solenoid to ring a meditation gong to signal the beginning and end of the sitting session, rather than a screen or mobile device-based notification.

Both the solenoid and EL wire required 5V, so we used NPN transistors to switch the 4.5V from our battery pack using the Feather’s 3.3V logic. We tested this assembly by having the sandwich switch operate the EL wire and solenoid using a simple Arduino sketch (see video:

Putting the whole assembly into the project box with strain relief was time consuming, but it was helpful to have connectors on everything so that it could be transported easily without yanking wires out of their connections accidentally.


Caption: The solenoid (top), feather and featherwing protoboard (upper middle) with EL wire inverter (bottom) in a project box.


Caption: The wiring diagram for the device. It was all connected to the featherwing protoboard shown at the top.

We didn’t have time to make a second copy of the switch and circuit, and decided to demonstrate the operation by using a second Feather (running the same code) with an SPST tactile switch attached.

Some design sketches are below:


Caption: System architecture v1. “Everything is going to fit easily and there will be no need for a box”


Caption: The two transistor-based switch circuits. The one for the solenoid has a diode to prevent a high reverse voltage from damaging the transistor when the solenoid is disconnected, since it is an inductive load.


Caption: Design notebook page showing final system architecture including connectors. The notes are a prioritized list of the issues to work on. We got to most of these…


Caption: Final assembly with sandwich switch removed from inside of cushion.

The code for the Arduino is here:

The code uses a single channel to publish and read from PubNub. Messages are JSON formatted and have a user name as the key, with a simple binary code indicating whether that user is sitting (e.g. {karoMessage:1} means Karo is sitting). The website JavaScript also receives these messages, and sends a message to ring the bell.
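For illustration, the published message could be built by hand like this (strict JSON quotes the key, e.g. {"karoMessage":1}; the actual project presumably used a JSON or PubNub helper rather than snprintf):

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Build the single-key JSON message the cushions publish,
// e.g. {"karoMessage":1} when Karo sits down, :0 when he gets up.
void sittingMessage(const char *userKey, bool sitting,
                    char *buf, std::size_t len) {
    std::snprintf(buf, len, "{\"%s\":%d}", userKey, sitting ? 1 : 0);
}
```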

Consider Growth Visualization



The original Consider Growth visualization concept was to employ a visualization that grows organically, i.e. morphogenic structures that imitate the growth and system interactions of biological forms. In pursuit of this, I implemented a JS port of the differential line algorithm, hoping to use the underlying growth pattern to inform the display. The implementation worked, but due to runtime considerations its current version isn’t usable in real time. Moving on from this, we employed trigonometric waves, Perlin deformations and simple modular rhythms in combination to produce a series of parametric randomized forms that have a large amount of variation but are consistently visually engaging. The waves’ specifics can be investigated in the repo.

The waveforms were hooked into PubNub via PubNub’s JS API so that a waveform is turned on whenever someone sits on their pillow and announces their presence in the virtual space. The intention of using a line as an avatar is to produce an environment which is non-competitive and really stripped down, the limitations allowing users to be present without having to make any choices about their virtual actions and representation.


Experiment 3: On 2nd Thought … – Dave Foster


DIGF-6037-001 (Kate Hartman & Nicholas Puckett)

Project Description (from Canvas site):

Look around the room – All of your laptops are basically the same, but you all use them very differently.  What is it missing?  For this project you will work individually to create a new peripheral for your computer that customizes an input or output function specifically for you.  This could be a new input device that changes the way you interact with your digital world or a means of notifying you about physical or virtual events around you.  To achieve this, you will use P5.js in conjunction with a variety of web APIs and a USB connection to your controller to create your new device.  Beyond the intended functionality of your new peripheral, you should also consider its materiality and spatial relationship to you and your computer.

Project Idea:

  1. Use an ultrasonic proximity sensor through the Arduino Feather board controller as an “on-off” switch or trigger to activate the display screen (through IFTTT) and the webcam on a Mac to photograph the viewer’s reaction whenever anyone comes within 46 cm (about 18 in.).
    1. Ideas for display:
      1. Text selections from joke sites on the web
      2. Image selections from joke sites on the web
      3. Text selections from “quote” sites on the web
      4. Image selections from photography sites on the web

Project Log:

Graphic of proposed logic flow:


Simple English logic flow (THIS IS NOT CODE, to translate into P5.js later):

Wire and program Feather controller to use proximity sensor – Set up IFTTT link (link to site tbd)

Start –

Set Mac screen to base display (“your 2nd thought for the day” background colour & font t.b.d.)

If – proximity from Feather board reads as <= 46cm

go to joke routine

joke routine

display 1 joke from site (site t.b.d.)  through IFTTT – AND   set timer @10 seconds

If timer = 10 seconds – trigger webcam routine

webcam routine

Capture image and store in file ‘2ndthought’ (location t.b.d.)

reset Mac screen to base display.
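The proximity check in the flow above might look something like this (an HC-SR04-style sensor is assumed, since the exact part isn’t named; the echo-pulse-to-centimetres conversion is the usual round-trip-at-the-speed-of-sound arithmetic):

```cpp
#include <cassert>

// HC-SR04-style conversion: echo pulse time (microseconds) to
// distance (cm). Sound travels roughly 29 us per cm, and the pulse
// covers the round trip, so divide by 2.
long pulseToCm(long durationUs) {
    return durationUs / 29 / 2;
}

// Trigger when a viewer is within the threshold (46 cm per the idea
// above; the threshold is a parameter since it changed during tuning).
bool viewerPresent(long distanceCm, long thresholdCm) {
    return distanceCm <= thresholdCm;
}
```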

Feather board wiring:


Failure Analysis:

So OK … how does one describe an utter and absolute debacle?  Well, perhaps not absolute – I did get the Feather controller wired for the proximity sensor and programmed correctly, and it works as required.  That portion was just a matter of following directions and modifying the code from Experiment 1 to my specifications (40 cm distance before trigger).  It was the rest of the project that fried my self-image.  You’ll notice I did not present on Friday because I really have nothing to present.

Problem List:

  1. Most (almost all in fact) of the humour sites I wanted to pull the jokes from through IFTTT were not text-based, but image or video based and I could not code my way around an image to text translation requirement.  This leaves aside the fact that a good 30 – 40% of them were highly offensive in nature.
  2. The one site I found that was (mostly anyway) text based ( does not have an IFTTT link that I could locate on a search.
  3. Could not … could not … code an error free feed or trigger through Adafruit IO.
  4. (obvious) – no feed or trigger = no link to P5js (though I did try)
  5. On advice from Nicholas, and on Thursday night no less, I attempted to set up a simple array of text files usable by a random number generator triggered by my working Feather controller.  Nothing but crickets and error codes on my desktop at home and the whirling beachball of death on my Mac PowerBook.

Following all of that, at 3:00 AM on Friday morning I finally surrendered.  I erased all of my effort in complete frustration, caffeine overload and exhaustion, and now throw myself on the mercy of my peers and professors.  Next time (perhaps … no, definitely next time) I’ll request assistance with coding well prior to the generated migraine.

“Lord of the Dance” (Feng Yuan and Dave Foster – Creation & Computation – Exp. 2)

Process Journal (Lord of the Dance)

Feng Yuan and Dave Foster

DIGF-6037-001 – Creation and Computation (Kate Hartman and Nicholas Puckett)

Brief Project Description:

Link 20 (or more) screens (input device type optional – output device type optional – PC, Mac, phone, etc.) such that the result produces an interactive experience or display for 1 or more users.

Tuesday, Oct. 17th:

We met in the 6th floor DF lab space at 12:00.

Project Design Discussion:

As a beginning, we discussed several ideas for the project (short descriptions and discussion results noted below the tentative titles):

  • Proximity Alarm” (your friends are close)
  • Your screen (phone, tablet or computer) gives you an alert when any of 20 specified people or their device comes close.
    • Several problems here (not insurmountable – probably –  but complex).  For one, how do we establish the “trigger” (bluetooth signal?, TCPIP address?, etc.).  Also, how do we reliably (and preferably simply) code for this?
  • Proximity Alert” (any Bluetooth device)
    • Your screen (phone, tablet or computer) gives you an alert with available “radar screen” display of direction and proximity of any bluetooth device within a set radius.  The idea being to warn us if a camera-equipped device might be nearby.
      • As above with “trigger” etc.  Also does not really address the “20 screen” portion of the proposed project.
  • Join the Choir
    • The trigger is simpler (just a counter on the site) but issues of timing would (we speculated) be problematic.  We could not see a way to reliably code around this question.
    • Lord of the Dance” (Mah Na Mah Na)
    • Each screen in the group of 20  (phone, tablet or computer) is given a “voice” in the choir (bass, tenor, alto, soprano, etc.) at random and upon “logging in” to an established website.  Once the trigger number is reached for the site (20 per project specifications), the “choir” begins to sing (we were unable to pick one tune for this).
  • Lord of the Dance
    • Based on the Muppet sketch “Mah Na Mah Na .  Each screen becomes a separate but linked numbered site (from one to 20) all linked to a central node or site.  All 20 screens would receive and display the 2 “singers” and the base “tune” from a central node or site.  Site 1 would (initially) also receive the “little furry guy” or “dancer” who contributes the “mahna mahna” for a verse or two.  At that point the “dancer” begins to “riff” on the tune for a bar or two and the “singers” on that screen react “disapprovingly”, to which he responds by “jumping” to a random screen in the array and with the allowed “Mah Na Mah Na” and the song and dance continue.
      • We believe this is do-able and decided to go with this idea.
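The “jump to a random screen” behaviour described above boils down to picking any screen index except the dancer's current one.  A minimal sketch of that helper (names hypothetical – this is not the project's actual code, which lived in p5.js sketches):

```javascript
// Pick the dancer's next screen: any index in [1, total] except `current`.
// Hypothetical helper illustrating the random-jump idea.
function nextScreen(current, total) {
  // Draw uniformly from the (total - 1) other screens,
  // then shift the draw past `current` so it is never re-selected.
  const pick = 1 + Math.floor(Math.random() * (total - 1));
  return pick >= current ? pick + 1 : pick;
}
```

Skipping over `current` rather than re-rolling keeps the selection uniform and avoids an unbounded retry loop.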

Research/Build Work:


  • Produced a basic mock-up sketch of the project using Balsamiq (see below)



  • Began research into the code required for the site.
  • Began basic coding.
    • Some discussion (no hard conclusions reached) about what the “look” of the characters should be given the attempted “simplicity” of the coding desired by both participants.
    • Achieved the beginnings of the “dance” on at least one screen.

Friday Oct. 20:


Showed Kate the Balsamiq mockup and described the idea to check acceptability within the project parameters (seems to be OK and codeable – if we can make it work).

Monday Oct. 23:


Feng had some concerns regarding separation of sound track for singers and furry-guy.  It would be simpler to code if the furry-guy is separate from the singers.  Several files will have to be created for variation and to separate the “riff” and “reaction” animations.  It was decided that Dave would work on the sound file(s) while Feng coded the movement(s).  Next meeting scheduled for Tuesday, Oct. 24 @ noon.

Research/Build Work:


  • Downloaded WavePad (audio editor) for work with the MP3 file of “Mah Na Mah Na”
  • Began separation of “singer” and “furry-guy” tracks


  • Began coding for drawing characters for use in routine

Tuesday, Oct. 24:


As a result of Feng’s consultation with Nick, we decided that:

  • There was too much of the “server” paradigm in our original idea
  • There was (possibly) too little user-user/screen-screen interaction

As a consequence, we discussed a couple of ways in which to more closely conform to the project guidelines:

  • We discussed making the “dancer’s” movements contingent upon a mouse-click or enter key from each screen.
    • Could not come up with a way to time this to the “Mah Na Mah Na” tune
    • Could not decide exactly how to trigger the user response.
  • The above triggered a thought from Feng — what about a variant of “Whack-a-Mole”? (illustration from Balsamiq mockup below)


    • Advantages:
      • No need to start from scratch for basic idea
      • We could keep the “Mah Na Mah Na” tune and background “singers” intact (no requirement to separate the characters’ sound tracks)
      • Simplifies “stimulus/response” or “user interaction” portion of assignment.
      • Simplifies the programming and selection of the “dancer” character.

Research/Build Work:


  • Used Photoshop to remove extraneous background from the picture (see below) and passed it to Feng



  • Began coding of the characters and the “Whack-a-Mole” game
  • Learning p5.js from The Coding Train tutorials and the p5.gif.js library

Inspiration for the background design:

Wednesday, Oct. 25:

Meeting with Nick and Kate re: concerns with project:

  • Nick reiterated his objection to the server based portion of the concept
    • Recommended some simplification of the concept for “Whack-a-Mole” format
      • 4 “states” required
        • “Mole absent”
        • “Mole up”
        • “Mole hit”
        • “Mole missed”
  • Both Kate and Nick recommended not being “married to” the whole Muppets theme.
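The four “states” Nick recommended can be modelled as a small state machine, independent of the drawing code.  The sketch below is a hypothetical illustration of that structure (the state and event names are ours, not identifiers from the project's code):

```javascript
// Mole state machine for one hole: absent -> up -> hit/missed -> absent.
function step(state, event) {
  // `event` is 'pop' (timer raises the mole), 'whack' (player taps it),
  // or 'timeout' (mole ducks back down unhit).
  if (state === 'absent' && event === 'pop') return 'up';
  if (state === 'up' && event === 'whack') return 'hit';
  if (state === 'up' && event === 'timeout') return 'missed';
  // Hit and missed moles return underground on the next tick.
  if (state === 'hit' || state === 'missed') return 'absent';
  return state; // ignore irrelevant events
}
```

Keeping the transitions in one pure function means the p5.js draw loop only has to render whatever state the hole is currently in.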


Research/Build Work:

  • Worked on the visual design of the game and drew the characters.

Thursday, Oct. 26:


Feng simplified and coded the characters (illustrations below), allowing for the “states” recommended in Wednesday’s meeting, and asked to have the tune “chopped” into manageable portions.  We discussed which type(s) of device to use, as well as the “array” required to play the game: iPads (9) and iPhones (11) were chosen as the best option given the game format.  We also discussed whether one screen for 20 players or 20 screens for one player better fits the project parameters and the game format; a single player with 20 screens was selected as the best format.
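With 20 screens serving a single player, each device needs a fixed number in the array as it joins.  One minimal way to sketch that is a counter on the shared node handing out numbers in join order (a hypothetical illustration; the project's actual connection code may have worked differently):

```javascript
// Hand out screen numbers 1..max in join order; refuse extras.
// Hypothetical sketch of per-screen numbering for the 20-screen array.
function makeRegistry(max) {
  let next = 1;
  return function register() {
    if (next > max) return null; // array is full
    return next++;
  };
}

// Usage: each screen calls register() once when it connects.
const register = makeRegistry(20);
```

A closure keeps the counter private, so no screen can claim a number out of turn.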



Research/Build Work:


  • Continued work on code
  • Work on the characters’ animation.


  • Chopped the tune into acceptable portions (“Mah Na Mah Na”, “Riffs” and 4 different “boop de de de” segments) and e-mailed the WAV files to Feng.
  • Worked on Balsamiq mockup of array with input from Feng (illustration below):


Second idea (3 tables in a “U”) chosen as best for single-player format.
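The chopped tune segments lend themselves to a simple playback list the sketch can step through.  The sequencing function below is a hypothetical illustration using the segment names listed above (the real sketch loaded the WAVs with p5.sound):

```javascript
// Build a playback order for one round: the lead phrase, a random
// "boop de de de" answer, repeated per verse, then the closing riff.
// Segment names mirror the chopped WAV files; the function is a sketch.
const BOOPS = ['boop1', 'boop2', 'boop3', 'boop4'];

function buildSequence(verses) {
  const seq = [];
  for (let i = 0; i < verses; i++) {
    seq.push('mahna');                                         // lead phrase
    seq.push(BOOPS[Math.floor(Math.random() * BOOPS.length)]); // random answer
  }
  seq.push('riff'); // the furry guy goes off-script to end the round
  return seq;
}
```

Randomizing which “boop” segment answers each verse keeps the round from sounding identical every time.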

Friday, Oct. 27 – presentation day:

Met in the DF lab at 10:30 to finalize.  Decided to use Mac screens, as the code appears to work better there than on other devices.

Code available at:

Images from Presentation are below:

(15 presentation images: exp2_images-1 through exp2_images-15)


References:

Antiboredom. p5.gif.js. GitHub repository.

“Best 25+ Animal Muppet Ideas on Pinterest | Drum Kits, Drummers and Rudimental Meaning.” Pinterest. Web. 26 Oct. 2017.

“Hyperspace Image.” Google Images. Web. 26 Oct. 2017.

“Language Settings.” p5.js Reference. Web. 26 Oct. 2017.

“Mahna Mahna.” Free MP3 Songs Download. Web. 26 Oct. 2017.

The Coding Train. “p5.js Sound Tutorial.” YouTube.

Umiliani, Piero. “Mah-na-mah-na.” Parlophone. MP3.

VHTrayanov. “Muppet Show – Mahna Mahna…m HD 720p Bacco… Original!” YouTube, 2 Oct. 2010. Web. 26 Oct. 2017.