Morsecode Messenger 2.0

By Feng Yuan and Roxanne Henry

Project Description

The project is a continuation of Feng’s Experiment 3 Morse code project. In this iteration, we configure two Feathers to send and receive Morse code signals to and from one another. The input remains the same, but the output now goes to an on-board OLED display, which shows each signal’s letter equivalent.

Design Sketches

The original plan was a portable wireless Morse code sender. Feng and Roxanne imagined the device as a bracelet fitted with an LED screen and several buttons. The batteries and Feather board would be hidden inside the bracelet; the buttons would be used to enter the Morse code signals, and the screen would display the received messages (as in the image below).


Because of limited time and equipment (a sewing machine would have sped up the crafting process), they switched ideas and decided to make a tabletop Morse code device out of paper. The device would still include two parts: a button board (to input the message) and a screen (to output the signals). They decided to use white paper board, so the final result would be white, clean, and neat.


Circuit File


Here is the link to the video

Here is the link to the code

Process Journal

Step One: The first step of the project, ideation, had essentially already been done in the previous experiment, when Feng and Roxanne decided to reuse the idea and add networking to it. The biggest hurdle in the ideation phase was the physical design of the finished product. Since that would not affect the code needed to get the product functioning, they set it aside for the time being.


The first step to getting the code working was to reexamine how Feng’s original code functioned. It wasn’t too challenging to move from publishing results via serial to publishing to PubNub. There was some discussion about whether to send whole words at a time, but in the end they kept the status quo: a single letter per message, both to maintain the spirit of Morse code messaging and for simplicity’s sake.
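Since each package carries a single letter, the receiving side only needs a lookup from dot/dash sequences to letters. A minimal sketch in JavaScript (the project itself ran on the Feather; the names here are illustrative, not the actual project code):

```javascript
// Minimal Morse table (a subset; enough for a demo).
const MORSE = {
  ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
  "..-.": "F", "--.": "G", "....": "H", "..": "I", "...": "S",
  "---": "O", ".-.": "R", "-": "T",
};

// Decode a single dot/dash sequence into its letter equivalent.
function decodeLetter(signal) {
  return MORSE[signal] || "?"; // unknown sequences display as "?"
}

console.log(decodeLetter("..."));  // "S"
console.log(decodeLetter("---"));  // "O"
console.log(decodeLetter("x"));    // "?"
```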


Step Two:

Feng and Roxanne experimented with various output methods: an LCD screen, a piezo buzzer, LED lights, and an OLED screen.

  • The standard 16×2 LCD screen is too big for this project.
  • The piezo buzzer’s tone and volume are very limited, and a Morse code buzzing noise doesn’t convert the signals into anything easily “readable”.
  • LED lights look delightful, but also can’t make the signals readable.

After testing all these methods, they found the OLED FeatherWing the best match for the project. The OLED screen attaches directly to the Feather board, and both int and string values can be displayed on it. Based on these test results, they chose the OLED FeatherWing for the output section.


Following the implementation of the publishing function and the decisions about button layout, Feng and Roxanne looked into subscriptions. At first, everything seemed fine; the tests were scripted and didn’t leave much room for error.

  1. Load program on each device
  2. Send from device A
  3. Receive on device B
  4. Send from device B
  5. Receive on device A

The trouble arose when a timer was used to activate the subscription function: device B would try to receive a message before device A had managed to send one. Roxanne initially suspected the Feather was running out of memory and experimented with adjusting the buffer size, as well as the timing of memory allocation for the messages.


Of course, none of these things were the problem. The issue was with the actual subscribe function. PubNub’s official documentation describes the functionality as such:  

“Listen for a message on a given channel. The function will block and return when a message arrives.”

This means that whenever the server had no new message to provide, the Feather’s program would essentially hang and wait, and wait, and wait, until PubNub answered with something. This happened despite a timeout being specified. Roxanne suspects there is a bug in the API and is very upset about this.


So, Roxanne and Feng decided to activate the subscription function on a button press, using the built-in button B on the OLED wing.
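The resulting control flow — only call the blocking receive when the user explicitly asks for it — can be sketched like this (in JavaScript for illustration; the real code was Arduino C with the PubNub client, and every name here is hypothetical):

```javascript
// Only attempt to receive when button B is pressed, so the blocking
// subscribe call can't hang the program at an arbitrary moment.
function onButtonB(pressed, client) {
  if (!pressed) return null;  // no press → don't block
  return client.subscribe();  // blocks (on the Feather) until a message arrives
}

// Stub client standing in for the PubNub connection.
const fakeClient = { subscribe: () => "E" };
console.log(onButtonB(true, fakeClient));  // "E"
console.log(onButtonB(false, fakeClient)); // null
```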

Step Three:

The final step of the project was to build the physical product. While deciding on the layout of the wires, Feng and Roxanne thought it would be a good idea to conserve space by attaching both the ground and input wires directly onto the resistor. Upon testing this model, they found that it did not work.

One solution they tentatively tried was to use copper tape instead of wire for the input pin connection. They discovered that the copper tape they had acquired was not conductive on both sides; since the layout required a lot of turns, it was impossible to create a path without overlapping the tape. In the future, they are certain copper tape with conductive adhesive would be the better choice.


So, Roxanne desoldered all the fancy Ys she had put together and mournfully put the input wire on the opposite side of the button where it belonged.


Feng and Roxanne chose white hard paper board for the bottom of the device and thin white paper to make the case hiding the Feather board and wires.

1. Measure the sizes of the board and button.




2. Lay out the button and board positions on the hard paper board.



3. Make the box and glue it onto the paper board.

4. Cut out holes to make space for the buttons.




5. Organize the wires and make them orderly

6. Connect the buttons with the board




7. Hide the wires and close the box.



The “Pay attention” bot

The “Pay attention” bot helps the user stay in tune with the “real world” around them, even when they’re absorbed in their work or listening to loud music. The bot “pays attention” for you, listening for someone calling your name. Once it detects someone trying to get your attention, it waves at you incessantly until you acknowledge it and turn it off. By then, you are well aware that someone in the real world was looking for you.

Code available on Github.


The process for this project was rather quick and uneventful, unfortunately.

My first idea was to have the Arduino itself record and process the speech recognition, with the desktop printing out alerts, but two things stopped me:

  1. I wasn’t very interested in purchasing a new board on such short notice, in case it didn’t work out; it’s a pretty big commitment!
  2. I don’t like desktop or push notifications. They vex me.

So I decided to reverse the roles: have the computer, which already has microphone access, record and process speech, and have the Arduino nag me when something gets recognized. The next task was to find a suitable library or API to help me with voice recognition. P5 was the first to offer one up. At first, I was skeptical of it, since it seemed really lightweight, so I started looking at alternatives. IBM Watson’s API seemed really interesting, but they weren’t offering it for free. There were alternatives I could have used, such as interfacing with Watson through PubNub, but the interface of PubNub seemed convoluted at best and pay-to-play at worst. As someone who’s very used to getting their hands elbow-deep into code, using an interface to do the work for me was both a disorienting and frustrating experience. I decided to drop this route altogether.

I went back to investigating P5’s speech and speechRec add-ons. For my purposes, I needed it to record continuously. There is a continuous option available, but the example online wasn’t working and I couldn’t get it to work myself, either. I distinctly remember reading somewhere, probably in a release statement, that the continuous function was buggy and that something else should be used instead, but I can’t for the life of me find it anymore. I should have taken a screenshot. I’m still not used to having to document my process while coding and debugging. I’ll remember next time.

Anyway, I ended up finding a workaround. Simply assigning an onEnd() function to the recording object and asking it to restart itself was sufficient for my needs. There was a small issue in testing: it would stop recording (evidently) during the time it took for the state to change from “ended” back to “started”, so it wouldn’t detect sounds for that small window. Given more time, I would have tried harder to get the continuous option to work, but I have learned not to linger on the small things when you need a deliverable in a short amount of time.
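The workaround boils down to one assignment: point the recognizer’s onEnd at its own start(). A sketch with a hypothetical helper and a stub object, since the real recognizer only exists in the browser:

```javascript
// makeAutoRestart is a hypothetical helper: it wires a recognizer's
// onEnd callback to restart recognition as soon as it stops.
function makeAutoRestart(rec) {
  rec.onEnd = () => rec.start();
  return rec;
}

// Stub standing in for a p5.SpeechRec object.
const stubRec = {
  started: 0,
  start() { this.started += 1; },
  onEnd: null,
};

makeAutoRestart(stubRec);
stubRec.onEnd(); // simulate the browser firing the "end" event
console.log(stubRec.started); // 1 — recognition was restarted
```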

Debugging the “restart if you stop” function.

After I got that part of my P5 code working, it was very simple to activate the servo through serial input. There was a tiny hiccup when I was doing

if (myRec.resultString == "Roxanne") {...}

which wouldn’t pick up my name if it was embedded inside a sentence. For example, something like “Roxanne, do you have a minute?” would be ignored. I converted the code to

if (myRec.resultString.includes("Roxanne")) {...}

in order to search the string for my name instead, and it worked beautifully. The if statement prompted the serial port to send through a code which my Arduino was listening for.
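The difference between the two checks is plain string behavior, independent of p5:

```javascript
const result = "Roxanne, do you have a minute?";

// Strict equality only matches the name on its own…
console.log(result === "Roxanne");             // false

// …while a substring search finds it anywhere in the sentence.
console.log(result.includes("Roxanne"));       // true
console.log(result.indexOf("Roxanne") !== -1); // true (older-browser fallback)
```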

In practice, the speech recognition was not as powerful as I would have liked (saying “Roxanne” often produced the words “Rock band”, instead), but it was sufficient for a prototype.

The Arduino code was fairly trivial, since its job was also fairly trivial. It was a slightly modified version of Kate and Nick’s basic servo code. I simply added a clause for a button press, which would toggle off the variable “shouldBeMoving”, as well as a check for incoming serial data. If there was serial data, and it was the code I was feeding from P5, I would toggle “shouldBeMoving” on to activate the basic servo code. The servo arm was programmed to simply wave through a 90-degree arc, enough to be annoying and catch my attention, but not enough to be obnoxious to others.
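The logic amounts to a two-input toggle; sketched below in JavaScript for illustration (the actual code was Arduino C, and WAKE_CODE is a made-up stand-in for whatever byte the P5 sketch sends):

```javascript
const WAKE_CODE = 1; // hypothetical code sent over serial from the P5 sketch

// Model of the bot's state: serial data turns waving on, the button turns it off.
function makeBot() {
  return {
    shouldBeMoving: false,
    onSerial(code) { if (code === WAKE_CODE) this.shouldBeMoving = true; },
    onButtonPress() { this.shouldBeMoving = false; },
  };
}

const bot = makeBot();
bot.onSerial(WAKE_CODE);         // name detected → start waving
console.log(bot.shouldBeMoving); // true
bot.onButtonPress();             // acknowledged → stop
console.log(bot.shouldBeMoving); // false
```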

so much code...
code code code…

I believe I broke the servo when attempting to graft the arm onto it, since it was working without fail before the arm. After the arm, the servo appeared to get tired, or simply get stuck on itself, after a few swings. This made the button somewhat extraneous, since the servo was stopping itself, but I kept it anyway: there is something satisfying about smashing a button to stop a servo, and also, just in case the servo was feeling exceptionally peppy and decided to keep waving for eternity.

Thanks Sean for the help in making the arm!

Another small thing I found frustrating but, you know, kind of necessary, was debugging. The P5 speechRec add-on requires a server to function, so I needed to re-upload my code to my GitHub page whenever I wanted to test a change. Debugging became especially frustrating since that’s usually a process where I add logging statements at different places to glean information, then promptly remove them once the expected outcome happens. This made for quite a lot of commits, which the GitHub page was slow to catch; I’d often have to wait a minute or two between commits before my page would be up to date. But alas, such is the way.

so much commit
If you judge my commit messages… I really can’t blame you…

Video of it working:

It lives! from Roxanne Henry on Vimeo.

References and thanks:

Servo test code

P5 Speech

IBM’s Watson

Kate and Nick’s servo code

Sean Harkins for help with making the arm

Dave Foster for help with making the box

The Apples Game

Members: Roxanne Henry, Margot Hunter


Oh what a missed pun opportunity! The name of the game is to find your pair; why oh why didn’t I name this Find Your Pear?

The game loosely revolves around the idea of the memory card game. My original concept was to use the phones as cards: the phones would be laid face down in a grid, and players would have to find matches the same way the card game is played. However, I didn’t think it got people as involved with one another as I would have liked; it could still very well be considered a single-player game. So I started thinking of ways of involving each person and their own personal device. The idea came to me that if each person was a card, they would have the agency to go find their partners themselves. This way, they would have to interact with one another, and with each other’s devices, in order to determine whether they were a pair.

Development Journal

Day 1

The plan moving forward, then, was to have 20 different apple slices that matched up in 10 different pairs. I was adamant about randomizing the distribution process, but also about making the game fair and making sure everyone would have a partner. I knew it would be impossible to do without a centralized list that kept up to date with the client-side allocation of apples.

I started looking into using server-side controls. Originally I had a file with a list of the apple slice names, and the client-side code would then look into the file to find out which apples were available for picking from. However, I needed a way to have the client-side code confirm with the file once they used a specific apple. It is unfortunately impossible for client side code to write directly into server-side files, for obvious security reasons.



So, I investigated the possibility of using server-side scripting to do the writing. It took me a while to figure out which server-side scripting languages had been installed on the webspaces, but I soon discovered they supported both Node.js and PHP. I had slightly more experience with PHP, so I started with that. Unfortunately, there seemed to be a security problem, and I didn’t want to spend too long debugging it; I know from experience that when it comes to permissions, that kind of error can come from any level of the security protocols. I took one go at asking IT for write permissions to the servers, and when that fell through, I immediately started looking for another option. I didn’t want to waste too much time on things I wasn’t certain I could make work.


Day 2

Moving on, I looked into external API-enabled database solutions; brief consultation with Nick had reminded me of their existence. It didn’t take too long to find one that was free to use. I signed up, created my database, and started getting to know the API.


To my surprise, it was fairly simple to set up my code, using only p5, to communicate with the database through the API. I hadn’t expected a library dedicated to drawing and animation to have such a capable selection of HTTP methods, but I was pleasantly proven wrong. The biggest challenge with the API was making sure I had set up my CORS-enabled API key properly. It took a few rounds of reading the examples, reading the API documentation, and brute-force testing to figure out the happy combination I needed to access the database. It turns out a trailing slash in the default URL means something pretty specific to the API, and it was throwing off all my results. It’s always the small things.

Soon enough, I had an infrastructure built that would randomly select an apple from a list of available apples. There was still a small risk of duplicates being assigned: the time it took for the client-side code to pick a random apple and then update the database was slow enough that a second person could fetch the SAME list of apples as the first, meaning their randomly assigned apple could, in theory, be the same. Still, this severely limited the chances of a duplicate, and that was good enough for the requirements of the project. Guaranteeing total fairness would have required the random apple to be selected and immediately updated by the server itself, but I didn’t have the time to find out whether the database service I was using even offered that possibility.
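The client-side selection step can be sketched as follows (hypothetical helper names; the read of the list and the write of the updated list back to the database are exactly where the duplicate risk lives):

```javascript
// Pick a random apple from the available list; return the pick plus the
// updated list that would be written back to the database.
function pickRandomApple(available) {
  if (available.length === 0) return { apple: null, remaining: [] };
  const i = Math.floor(Math.random() * available.length);
  return {
    apple: available[i],
    remaining: available.slice(0, i).concat(available.slice(i + 1)),
  };
}

const pick = pickRandomApple(["left-1", "right-1", "left-2"]);
console.log(pick.remaining.length); // 2 — one apple is now claimed
```

Between the read that produced `available` and the write of `remaining`, another client can fetch the same list, which is why a server-side pick-and-update would be needed for a true guarantee.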

Day 3

I finally had access to apple drawings that I could test image loading with. Loading the images based on the incoming image file name worked out quickly and easily; I’m not sure why I thought it would be difficult. Something that did vex me momentarily, and without explanation, was that using displayWidth and displayHeight gave me tiny apples on mobile, though it displayed correctly on PC. I found that using windowWidth and windowHeight worked better, but with the problem reversed. This was fine, since the game is easier to play on mobile overall.

The hardest part came later, unexpectedly, when I found that the API’s GET method was a little slow. I decided to create a custom loading animation to entertain the player while they waited: a simple thing, only 15 frames long, of the game’s eponymous apple being eaten and then exploding back into a full apple. I thought the loadImage() function would load a gif as easily as a png. I was wrong in the most obnoxious way: it loaded the gif, alright, but only the first frame. RIP.

I started looking for solutions. After about 30–45 minutes of research, the first one I found was a suggestion to use loadImg() instead. This worked, except not in the way I expected, and certainly not in the way I wanted: it created an HTML img element outside the canvas, without transparency, and without p5 control over it. Not a good solution.

Day 4

I moved on to find the p5.gif library, which worked wonderfully during testing. It was simple to use and worked the same way as loadImage(), except one would use loadAnimation() instead. The honeymoon phase wore off fast, though, when I realized it doesn’t work on mobile. Ugh!

Next, I found a library that, thankfully, worked really well on all the devices I tested. It required a few extra lines of code, but it was worth it to finally get the loading gif working.

Finally, I added an image that would appear when the database was out of usable apples to inform the user of the system’s status. I think this is a really important piece of information to share with the user.

Day 5 – Morning of the presentation

The game only works on my phone, for some reason; everyone else is getting security problems. Nick and Kate show up and I share my concerns. They suggest I host it elsewhere. Of course. I feel like if I had hosted it elsewhere from the start, I would have avoided a lot of the server problems from the first few days. Alas. I have about 10 minutes to change the hosted location. I quickly set up a new git repository and throw my code there. Everything loads, but I have no idea why; I made several rapid-fire changes in my rush to get it working. I think the solution may have been adding “.js” to the end of the include in my index file. Originally, it had been working no problem without it, but I suspect GitHub’s hosting has stricter rules about that. I guess I’ll never know! It ended up working, and people seemed to enjoy it, so all’s well that ends well.




The code is available through, and also hosted on, GitHub.

Video (thanks to Tommy for filming!)

Available at Vimeo.

Project context and bibliography

So the project was originally going to be a card game, and then it became a human card game. It’s a bit difficult for me to frame it as “a game which aimed to connect people physically instead of through technology”, because I don’t personally believe the two have to be mutually exclusive. Sure, in the game’s case it could have been real, physical cards instead, but why not use the phone, which everyone already has? It makes for impromptu games, no planning ahead required; the technology lets us access it any time we want, without worrying about bringing the card game with us. Of course, several modifications would have to be made to allow for this: custom game lobbies for groups of players and truly random assignment of the apple halves would need to be implemented, for starters. Another good modification would be to change from apples to pears. Gotta be punny. But ultimately, I do not think this project’s aim was to bring people to interact outside of technology, but rather to embrace its possibilities and use them in a context where people just want an impromptu icebreaker game.

Antiboredom. “Antiboredom/p5.gif.js.” GitHub. December 20, 2016. Accessed October 24, 2017.


“Apples-for-the-teacher-gift-bushelbasket.jpg.” Digital image. Two Sisters Crafting. Accessed October 24, 2017.


Pedercini, Paolo. “ – a game library for p5.js.” Accessed October 24, 2017.


Brig. “Processing 2.x and 3.x Forum.” Processing 2.0 Forum. Accessed October 22, 2017.


“Reference.” p5.js | reference. Accessed October 2017.

The Team at. “Plug and Play database service.” May 31, 2016. Accessed October 17, 2017.