Connections Lab

Asmbly Dash – part 2

Cover Image for Asmbly Dash – part 2

Code | Demo

Background

We’ve been covering real-time user experiences in Connections Lab. This iteration of the Asmbly Dash adds a WebSocket server so that users can “raise their hand” and ask for help at individual stations. I discovered the model for this “assist” interactivity on Glitch and thought it would be interesting to adapt it to an actual space.

Technology

Having some familiarity with socket.io, I wanted to learn a bit more about ws with this iteration. The library is super lightweight, and unlike socket.io, clients in the browser can just use native WebSockets to interact with the server. I had some difficulty updating my Linode box to permit WebSocket connections, but after rolling out a standard Nginx server with Let’s Encrypt, those problems were mostly resolved.
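To give a feel for how lightweight the setup is, here’s a minimal sketch of a ws broadcast server paired with a native browser client. The port, URL, and message shape are placeholders rather than the actual implementation.

```js
// server.js – minimal ws relay: any "assist" message gets broadcast to every client
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 }); // placeholder port

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```

```js
// browser – no client library needed, just the native WebSocket API
const socket = new WebSocket('wss://example.com/assist'); // placeholder URL

socket.addEventListener('open', () => {
  // hypothetical payload shape for an assist request
  socket.send(JSON.stringify({ type: 'assist', station: 'laser-cutter' }));
});

socket.addEventListener('message', (event) => {
  console.log('update from server:', event.data);
});
```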

On the frontend, I used a library called react-hot-toast to manage the new toast notifications. These notifications broadcast when a user requests assistance and also serve as a status indicator for the requestor.

A toast notification indicating help is on the way

After user testing with some friends, I found that communicating a “pending” state was particularly important so that users knew their request had been received without issue. It also serves to alert them when help is on the way.

A toast notification indicating a request for assistance is pending
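The pending-to-confirmed flow maps nicely onto react-hot-toast’s API: `toast.loading` returns an id, and a later `toast.success` with that same id updates the toast in place. The event names and copy below are placeholders, not the exact code.

```js
import toast from 'react-hot-toast';

// Sketch of the assist flow: show a pending toast immediately,
// then flip it to a success state once the server acknowledges the request.
function requestAssist(socket, station) {
  const toastId = toast.loading('Assistance requested, hang tight!');

  socket.send(JSON.stringify({ type: 'assist', station })); // hypothetical payload

  socket.addEventListener('message', (event) => {
    const msg = JSON.parse(event.data);
    if (msg.type === 'assist-acknowledged' && msg.station === station) {
      toast.success('Help is on the way!', { id: toastId });
    }
  });
}
```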

Next steps

In project three, I’d like to make the assist interactions actually one-on-one, so that multiple individuals aren’t responding to the same request. In addition, I’d like to add chat functionality so individuals tuned in from their home office can offer assistance. For a reach goal, I’d like that communication to afford optional video chat.

Connections Lab

Asmbly Dash

Cover Image for Asmbly Dash

Background

Our Connections Lab midterm project was an opportunity to combine and integrate ideas we’ve been covering over the past few weeks. I recently joined Austin’s local makerspace, Asmbly. During my onboarding and training, the instructor mentioned the need for a way to quickly report issues with the stations. For my MVP, I wanted to create a site that could contain information about each station (tips, safety, required training) and also allow users to submit those reports to the relevant staff.

Technology

I used Next.js for this project, a choice motivated by two key features. Dynamic routes would allow me to programmatically create pages based on a simple JSON file storing the tool data. In addition, when deployed on Vercel, Next.js API routes make deploying serverless functions a snap. I used this feature to write a function that composes emails and routes them to the appropriate staff using the nodemailer library.
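As a rough sketch of how that API route could look (the transport settings, request shape, and staff mapping below are illustrative placeholders, not the production code):

```js
// pages/api/report.js – sketch of a serverless route that emails an issue report
import nodemailer from 'nodemailer';

// Placeholder mapping from tool to staff contact; the real data lives alongside the tool JSON
const staffByTool = {
  'laser-cutter': 'lasers@example.com',
  'cnc-router': 'woodshop@example.com',
};

export default async function handler(req, res) {
  const { toolId, message } = req.body; // hypothetical request shape

  const transporter = nodemailer.createTransport({
    host: process.env.SMTP_HOST,
    port: 587,
    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
  });

  await transporter.sendMail({
    from: 'dash@example.com',
    to: staffByTool[toolId],
    subject: `Issue reported: ${toolId}`,
    text: message,
  });

  res.status(200).json({ ok: true });
}
```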

For the frontend code, I used TypeScript, React, and CSS modules to create reusable, scoped components. While this project certainly could have been tackled without these tools, I found that React greatly reduced the amount of code I had to author. In addition, TypeScript and CSS modules make it difficult for future developers to break things. This is particularly important for this project considering Asmbly’s staff are all volunteers, so anyone could chip in.
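A trimmed-down example of what that looks like in practice; the component name, props, and class names here are hypothetical:

```tsx
// StationCard.tsx – typed props plus a scoped stylesheet
import styles from './StationCard.module.css';

type StationCardProps = {
  name: string;
  safetyNotes: string[];
};

export default function StationCard({ name, safetyNotes }: StationCardProps) {
  return (
    // styles.card only applies here, so tweaking it can't break other pages
    <section className={styles.card}>
      <h2>{name}</h2>
      <ul>
        {safetyNotes.map((note) => (
          <li key={note}>{note}</li>
        ))}
      </ul>
    </section>
  );
}
```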

One piece that turned out really well was the QR code generation. I used a JavaScript library to map over the tools data and generate a QR Code for each one. I even got a chance to use the print media query to format the list when printed.
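As a sketch of the idea, using the qrcode package on npm as a stand-in for whichever library was actually used (the tool data and URLs here are placeholders):

```js
// Generate a printable data-URL QR code for each tool's page
import QRCode from 'qrcode';

const tools = [
  { slug: 'laser-cutter', name: 'Laser Cutter' },
  { slug: 'cnc-router', name: 'CNC Router' },
]; // placeholder data – the real list comes from the tools JSON

const codes = await Promise.all(
  tools.map(async (tool) => ({
    name: tool.name,
    dataUrl: await QRCode.toDataURL(`https://example.com/tools/${tool.slug}`),
  }))
);

// Each dataUrl can be dropped into an <img> inside the printable list,
// which a @media print stylesheet then formats for paper.
```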

Next steps

I did a playback with Charlie, the instructor who first pitched this idea, and he had some great suggestions. The makerspace has a wiki with some basic information about each tool, which could be used either as the actual link target or to source data into my app. Rather than throwing all the information on one page, another version might allow the user to select from three options: Info, Maintenance, Problem. The app could then simply route the user to the correct service depending on what they’re trying to do.

Creative Coding

Smell-O-Vision

Cover Image for Smell-O-Vision

Background

This project has been a collaborative effort between myself and M Dougherty. I was incredibly excited to participate in their vision: to create an interactive, sensory experience with visuals and scents.

Planning

After some planning and feasibility experiments, we decided to aim for an interactive environment that used camera-based position detection to find the number of individuals in a space, the center point between them, and some other metrics discussed here. These data points would power a visualization using the P5 library and trigger scent diffusers around the room.

My primary role was to develop the machine learning integration, linking it with a live webcam and tuning it for performance in low light. M tackled hacking the diffusers so that they could be triggered from a single Arduino. We both collaborated on the artistic direction for the visuals.

Execution

Visuals

My previous post discussed the basic setup of the PoseNet machine learning model. As it turns out, the most useful data points for our v1 were the number of individuals (poses) and their average X and Y coordinates in the room. The white circles in this photo represent individual poses; the purple circle is the center point for the group.

Top down view of individuals in a lobby. A purple blob is in the center of the photo and smaller white circles are on each individual.
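In code, those two metrics boil down to a small reduction over the detected poses. The sketch below assumes each pose exposes a keypoints array of { position: { x, y } } entries; the exact shape depends on which PoseNet wrapper is in use.

```js
// Derive the group metrics we care about: how many poses, where each one is,
// and the centroid (the purple circle) for the whole group.
function groupMetrics(poses) {
  const centers = poses.map((p) => {
    const xs = p.keypoints.map((k) => k.position.x);
    const ys = p.keypoints.map((k) => k.position.y);
    return {
      x: xs.reduce((a, b) => a + b, 0) / xs.length,
      y: ys.reduce((a, b) => a + b, 0) / ys.length,
    };
  });

  const centroid = {
    x: centers.reduce((sum, c) => sum + c.x, 0) / (centers.length || 1),
    y: centers.reduce((sum, c) => sum + c.y, 0) / (centers.length || 1),
  };

  return { count: poses.length, centers, centroid };
}
```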

Feeling pretty good about the model’s ability to handle stock footage, I set out to experiment with possible visualizations that could use this data. Our initial attempts at visuals used the HSB color mode and had a pretty brutal color palette no matter the saturation. The results of my experimentation were maybe interesting, but I don’t think they were much better.

In the end, we knew that we wanted four different color scenes, one in each corner, and that they should be at least vaguely red, blue, green, and yellow. After a few days of experimenting, our solution was pretty simple: whenever the group’s central point crossed into a new zone, we drew the blobs in standard RGB color mode using only the channel that corresponds with that zone.
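A boiled-down p5 fragment of that idea; the quadrant boundaries and the exact zone-to-color mapping here are illustrative rather than our production values:

```js
// Color the blobs by which quadrant the group's center currently sits in
const ZONE_COLORS = [
  [255, 0, 0],   // top-left: red
  [0, 255, 0],   // top-right: green
  [0, 0, 255],   // bottom-left: blue
  [255, 255, 0], // bottom-right: yellow
];

function zoneColor(centroid) {
  const col = centroid.x < width / 2 ? 0 : 1;
  const row = centroid.y < height / 2 ? 0 : 1;
  return ZONE_COLORS[row * 2 + col];
}

function drawBlobs(centers, centroid) {
  colorMode(RGB);
  noStroke();
  fill(...zoneColor(centroid));
  for (const c of centers) {
    ellipse(c.x, c.y, 40, 40); // blob size is arbitrary here
  }
}
```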

User testing with my lovely wife Raven

Performance

After settling on a visual aesthetic, our visualization (as measured by frame rate) was starting to chug and lag. I found that the most impactful improvement was significantly shrinking the P5 canvas size, then scaling it up in the browser with CSS. This initially created some weird misalignments between the generated PoseNet positions and the newly distorted canvas, but after correlating the video, canvas, and PoseNet input resolutions, the positions lined up again and the animation lag was effectively solved.
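Roughly, the trick looks like this (the resolutions below are placeholders, not our exact numbers): render at a small internal size, stretch the canvas with CSS, and map PoseNet’s video-space coordinates into canvas space so the blobs still land on people.

```js
const VIDEO_W = 640;  // resolution PoseNet sees
const VIDEO_H = 480;

let cnv;

function setup() {
  cnv = createCanvas(320, 240);  // small internal resolution keeps the frame rate up
  cnv.style('width', '100vw');   // ...while CSS stretches it to fill the projection
  cnv.style('height', '100vh');
}

// Convert a keypoint from video space to canvas space so visuals stay aligned
function toCanvas(point) {
  return {
    x: map(point.x, 0, VIDEO_W, 0, width),
    y: map(point.y, 0, VIDEO_H, 0, height),
  };
}
```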

Installation Issues

At this point, we felt comfortable beginning to move our work into our installation space (a conference room at the NYU ITP campus). When using a proper webcam instead of my MacBook’s, we ran into quite a bit of trouble finding a place for the camera due to the change in aspect ratio. We wanted the lights to be as dim as possible to enhance the visibility of the projection, but the lower light caused PoseNet to struggle. In addition, we had a hard time finding a position for the camera that could see the room in its entirety. As a result, we couldn’t get any blobs to appear.

Installation Solutions

After another series of video and canvas size tweaks, we got some poses, but they would disappear erratically due to the lighting. After some experimentation, our current solution is two-fold: adjusting our code to permit poorer-quality poses and using post-processing software to increase the exposure. It’s a super fine line between under- and over-detection (more blobs than people), but I’m pretty happy with the balance we’ve tuned it to at the moment.
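The code-side half of that fix is essentially just loosening the score cutoff when filtering detections. This assumes each pose reports an overall score; the 0.15 threshold below is illustrative, not the value we landed on.

```js
// Lower cutoff = more forgiving in dim light, at the cost of occasional phantom blobs
const MIN_POSE_SCORE = 0.15;

function usablePoses(poses) {
  return poses.filter((p) => p.score >= MIN_POSE_SCORE);
}
```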

Finally, for the camera positioning, we simply couldn’t manage to find a spot that covered all four quadrants of the room; I wished out loud for one of those iPhone fisheye lenses so we could tape it to the webcam. A few minutes later – in a moment of pure magic – M came into the room brandishing a tragically unused gift from their mom: an iPhone fisheye lens. To the great delight of M’s mother, it worked like a charm.

Me setting up the fisheye lens in the corner of the room

Takeaway

This project was a tremendous opportunity for me to experiment with a variety of topics, from machine learning to animation, performance, and visualization. Historically, I have a difficult time delegating and trusting partners. This project, with its size, scope, and deadline, would have been an impossible task alone. However, with our shared creative vision and work ethic, M and I were able to create something really special. Getting to experience the gestalt of a truly collaborative, creative endeavor has been an incredibly rewarding lesson I’ll carry with me.