Cherry Rocking Chair for Dad or: A Botch-Job in Three Acts

My dad turned 70 this year, so Matt and I decided to make him an old man rocking chair. This project was an adventure. We wasted a lot of perfectly good cherry by repeatedly messing up and/or changing our design. Still, I’m proud of how it came out:

Photo front

Our inspiration was a beautiful rocking chair we found online:

Inspiration

Our starting design closely followed the Masaya chair. We planned to copy most of the details, including the bridle-jointed rockers, cantilevered armrests, and stretcher-supported seat. As we built, we made some changes. Usually, we plan things out in advance and more or less stick to the plan. This time, we often deferred choices until we were forced to make them. For example, we never had a coherent plan to connect the seat to the stretchers. Ultimately, we avoided the stretchers altogether and fastened the seat to the frame with four 3/4” pins. This strategy, driven mostly by laziness and indecisiveness, was both flexible and stressful. I wasn’t truly confident in the project until we clamped up for the last time.

Eventually, we landed on this:

Photo front

A Botch-Job in Three Acts

We worked on this project over three periods: one around Halloween, one over Thanksgiving, and one just after Christmas.

Act I: Halloween

This was definitely our spookiest (and least successful) session. Matt and I got together and made doomed pieces for the frame. We cut two rockers, four vertical supports, and three stretchers. Out of the nine pieces we built that weekend, only three made it into the final chair.

The rockers were the most bone-chilling part to replace because they were cut from a ridiculously expensive piece of 9” wide 8/4 cherry. I bought a band saw on Craigslist specifically for this project. It’s a cheap 8” Delta benchtop band saw. I drove about an hour to pick it up. The seller turned out to be a pawn shop.

Photo front

We tried some test cuts and were immediately disappointed. This saw clearly wasn’t powerful enough for 8/4 hardwood and the surface finish was absolutely terrible. We opted instead to cut the rockers using a jigsaw. This was a big mistake. Rocker one looked okay, but while cutting rocker two the blade deflected almost half an inch. We switched back to the band saw to correct the undercut, but the damage was done. We convinced ourselves at the time that thinner rockers would be okay, but I think we both knew the truth.

Act II: Thanksgiving

Over Thanksgiving, we made the seat. We bravely defied typical three-act structure, so this session went much better. One contributing factor was a new band saw. Will joined me to drive up toward the New Hampshire border to buy a much bigger and much better 12” Craftsman band saw for only $75. The seller was a nice man who had just finished reffing a soccer game and was not a pawn shop. Here’s Matt using it to cut a part of the seat back:

Photo front

One aspect of the design that we deferred until the last minute was the woven seat. We bought a thousand feet of black polyester rope and ignored the question until it was time to start winding. The rope came on a massive and intimidating spool. We tested out a few options and dove in. Clamping the seat to a vertical column allowed us to pass the spool back and forth easily.

Photo front

There was so much rope.

Photo front

Eventually, it came together.

Photo front

Act III: Christmas

The climactic final act. We got together for almost a week after Christmas and finished things up. First order of business: bite the bullet and buy more wood. The new rockers were a huge improvement. It was definitely the right decision to replace them. We also changed the design, replacing the cantilevered armrests with a more symmetrical arrangement and adding overhangs in a few places to visually match the seat.

Photo front

As we clamped the seat to the frame to decide how it would fit, we realized that the stretchers supporting the seat weren’t necessary. Instead, we connected the parts with pins.

Photo front Photo front

We finally glued things up at 3 AM on New Year’s Day.

Photo front

Very happy with how this project turned out. Happy birthday, Dad!

»


Marble Mirror (work in progress)

Photo front

I’ve always sort of wanted to make a marble-based machine. I’ve been following the progress of Wintergatan/Martin Molin’s Marble Machine X for some time. It’s a follow-up to the original and better-known Marble Machine of YouTube fame.

You should probably just leave that video running and let it be the background music while you read this post.

This time around, Martin’s documented his progress with a compelling series of videos showing the ups and downs of his adventure to build a giant music-playing machine based around marbles. A marble machine almost seems simple: it’s just tracks and belts and xylophones and stuff. His videos put the lie to this idea. The Marble Machine X project has stalled, and as of late 2021 he’s pursuing a ground-up redesign of his incredible machine after coming to terms with some fundamental design flaws.

Fortunately, we picked a much easier project. The name is ridiculous at this point, but it’s stuck. The original idea was to produce some kind of display where the “pixels” are made of a grid of macro-scale physical objects, and to connect this screen to a webcam to create a “mirror”. The direct inspiration was Daniel Rozin’s mechanical mirrors.

We considered a few designs but settled on one that’s nowhere near real time. We finally have it working:

It forms an image in the style of a Connect 4 board. A stepper motor lifts a column of marbles to the top of the machine by rotating a notched disk (blue) and drops one into the carriage (green). An RGB sensor in the carriage (red) measures the color of the cargo area and decides whether it contains a reflective steel marble, a black marble, or no marble. If the marble can be used in one of the columns of the image, it is delivered to that column and dropped in. If the marble can’t be used because it doesn’t match the next pixel in any column, it is rejected and dropped back into the reservoir at the bottom.
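For a rough idea of that sorting logic, here's a minimal Python sketch. The helper names (read_rgb, deliver, reject) and the brightness thresholds are hypothetical stand-ins, not the machine's actual firmware:

STEEL, BLACK, EMPTY = "steel", "black", "empty"

def classify(rgb):
    """Guess the cargo from an (r, g, b) reading; thresholds are made up."""
    brightness = sum(rgb) / 3
    if brightness > 200:
        return STEEL   # a reflective steel marble bounces back lots of light
    if brightness > 50:
        return EMPTY   # the bare cargo area
    return BLACK       # a black marble absorbs almost everything

def step(columns):
    cargo = classify(read_rgb())        # measure the cargo area
    if cargo == EMPTY:
        return
    for col in columns:
        if col.next_pixel() == cargo:   # some column needs this color next
            deliver(col)                # carry the marble over and drop it in
            return
    reject()                            # nobody needs it; back to the reservoir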

Model front Model front

The carriage rides on linear bearings on an 8mm steel rod. It’s driven by a stepper motor and lead screw. A servo with a plastic horn acts as a gate.

Model front Model front

»


Fifty Year Beep (work in progress)

Phil, Will, and I are working on a project called The Fifty Year Beep. Here’s the idea: it’s a self-contained, battery-powered circuit that beeps once and only once at a specified date and time fifty years in the future.

It would feature a sleek, futuristic design. Our prototype looks like this:

Photo front

It could use some work.

You keep it in your home as a piece of art. On the day of your beep, you gather your friends and family to hear it. You’ve been waiting for fifty years! The moment finally comes and the device emits a single electronic beep.

I love this idea. I’m only half joking when I say this could be an actual product. A ten-year beep would be a perfect wedding gift. Any big life event would be an occasion to give a beep.

We’ve been building one. Here’s what it looks like on the inside:

Model front

It’s basically a circuit on a panel mounted inside a laser-cut dodecahedron. Here’s that panel:

Model front

It’s based around a SAMD21 microcontroller on a QT Py breakout board and a DS3231 RTC module. It’s powered by a TL2300 D-sized lithium thionyl chloride battery rated at 19,000 mAh at 3.6 V. The beep is produced by a small piezo buzzer.

The schematic:

Photo front

The layout:

Photo front

An Onshape model of the dodecahedron:

Photo front

We removed all LEDs from the microcontroller and RTC module. We developed a scheme for setting an alarm on the RTC module and configuring the microcontroller to wake on an interrupt. This gets us down to a quiescent current of about 100 μA. Some naive math (19,000 mAh ÷ 0.1 mA = 190,000 hours) says this version can last about 21 years. The discharge curve for this battery chemistry is very flat, so we think this estimate might not be too far off.

Photo front

Here’s the code we used:

#include "ArduinoLowPower.h"
#include <RTClib.h>

RTC_DS3231 rtc;

// Pin used to trigger a wakeup (wired to the RTC's alarm output, which goes low when the alarm fires)
const int INTERRUPT_PIN = 1;
// Pin driving the piezo buzzer
const int BEEPER_PIN = 0;

bool ready_to_beep;
DateTime alarm;
TimeSpan WAKEUP_PERIOD = TimeSpan(1, 0, 0, 0);   // wake once a day to re-arm the alarm
TimeSpan BUFFER_PERIOD = TimeSpan(0, 0, 0, 10);  // 10 s of slack before the final alarm


void setup() {
  // enable internal pullups on all unused pins to minimize leakage current
  pinMode(INTERRUPT_PIN, INPUT_PULLUP);
  pinMode(2, INPUT_PULLUP);
  pinMode(3, INPUT_PULLUP);
  pinMode(4, INPUT_PULLUP);
  pinMode(5, INPUT_PULLUP);
  pinMode(7, INPUT_PULLUP);
  pinMode(9, INPUT_PULLUP);
  pinMode(10, INPUT_PULLUP);
  pinMode(11, INPUT_PULLUP);
  pinMode(12, INPUT_PULLUP);
  pinMode(13, INPUT_PULLUP);
  pinMode(14, INPUT_PULLUP);

  pinMode(BEEPER_PIN, OUTPUT);
  
  double_beep();
  
  rtc.begin();

  // set the RTC to the compile time, then schedule the beep relative to now
  rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
  alarm = rtc.now() + TimeSpan(6, 20, 0, 0);  // beep 6 days, 20 hours from now
  ready_to_beep = false;


  // we don't need the 32K Pin, so disable it
  rtc.disable32K();

  // stop oscillating signals at SQW Pin
  // otherwise setAlarm1 will fail
  rtc.writeSqwPinMode(DS3231_OFF);

  // turn off alarm 2 (in case it isn't off already)
  // alarm 2 isn't cleared at reboot, so a previously set alarm could otherwise go overlooked
  rtc.clearAlarm(2);
  rtc.disableAlarm(2);

  set_alarm();
  
  // Attach a wakeup interrupt on INTERRUPT_PIN, calling interrupt_handler when the RTC alarm pulls the line low
  LowPower.attachInterruptWakeup(INTERRUPT_PIN, interrupt_handler, FALLING);
}

void interrupt_handler() {
  if (ready_to_beep) {
    beep();           // the big moment: the one and only beep
    rtc.clearAlarm(1);
  } else {
    double_beep();    // not yet: chirp and schedule the next wakeup
    set_alarm();
  }
}

void set_alarm() {
  // clear the alarm 1 flag; the alarm registers aren't reset on
  // reboot/recompile, and a stale flag would block the next alarm
  rtc.clearAlarm(1);

  if (rtc.now() + WAKEUP_PERIOD + BUFFER_PERIOD > alarm) {
    ready_to_beep = true;
    rtc.setAlarm1(alarm, DS3231_A1_Date);
  } else {
    rtc.setAlarm1(rtc.now() + WAKEUP_PERIOD, DS3231_A1_Date);
  }
}

void loop() {
  LowPower.sleep();  // sleep until the next interrupt wakes us
}

void beep()
{
    digitalWrite(BEEPER_PIN, HIGH);
    delay(1000);  // one-second beep (delayMicroseconds isn't reliable at values this large)
    digitalWrite(BEEPER_PIN, LOW);
}

void short_beep()
{
    digitalWrite(BEEPER_PIN, HIGH);
    delay(100);   // 100 ms beep
    digitalWrite(BEEPER_PIN, LOW);
}

void double_beep() {
    short_beep();
    delay(100);   // 100 ms gap between beeps
    short_beep();
}
»


NewFriend -- A Slackbot to Replace Your Friends

There’s something I love about bad robots. To this day, I can’t watch this KetchupBot video without smiling.

Part of what makes this robot so great is that it hits the perfect level of crappiness. I think if it were any better or worse at applying ketchup, it would be less funny. There’s also something delightful about putting a lot of careful work into a machine that’s both crappy and useless.

Along these lines, Declan, Max, Phil and I built a Slackbot to imitate our friends. Every hour or so, it simulates a conversation among the users of our Slack workspace and posts it to a special channel in real time. It does a pretty good job of matching each person’s style, vocabulary, subject matter, punctuation, and spelling. It doesn’t do a good job of generating sentences that make sense. I think it kind of landed in KetchupBot territory.

The basis of the bot is a statistical model called a Markov chain trained on our Slack history.

What is a Markov Chain?

A Markov chain models a system as a set of discrete states and transition probabilities between those states. I think a diagram is the best way to explain.

Let’s model the weather in Somerville, MA with a Markov chain. The state will be the weather during one hour of the day.

Rain and sun model

In this model, the set of possible states is sun and rain. This diagram says that if it is sunny right now, there is a 90% chance it will also be sunny in the next hour, and a 10% chance it will be raining. If it is raining right now, there’s an 80% chance it will be raining in the next hour and a 20% chance it will be sunny. The probabilities on the arrows leading out of each node sum to one, because the weather will always be doing something in the next hour.

This whole structure conforms to what’s known as the Markov property, where the only factor that determines the probability of future states is the present state. To put it another way, the model is memory-less. To predict the state in an hour with a Markov chain, all we need to know is the current state.
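As a quick illustration, here's how you might simulate this two-state weather chain in Python, using the probabilities from the diagram:

import random

# Transition probabilities from the diagram above.
transitions = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.2, "rain": 0.8},
}

def simulate(state, hours):
    """Walk the chain forward, one hour per step."""
    history = [state]
    for _ in range(hours):
        options = transitions[state]
        state = random.choices(list(options), weights=options.values())[0]
        history.append(state)
    return history

print(simulate("sun", 24))  # e.g. ['sun', 'sun', 'rain', 'rain', ...]

Note that simulate only ever looks at the current state to pick the next one; that's the Markov property in code.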

How can I use it to make fun of my friends?

To connect this to the problem of generating a sentence in a particular style, we need to change what a state means and what a transition represents. What would it look like if we viewed a sentence as a sequence of states? The current state is the last word typed, and the next state is the next word. The transition probabilities represent the probabilities of saying each word next. We’d be treating a sentence as a kind of semi-random walk through word space.

A Markov chain for sentences

Imagine you heard me say I have a doctor's appointment on _____. You’ve got a pretty good idea of what I’m about to say. Almost certainly the name of a day, and probably a weekday. Maybe I’ll say the twenty-fifth of July. There’s an outside chance I’ll surprise you by saying a boat or Aquidneck Island. Still, you can narrow it down a lot.

Problem is, the Markov model only gets to look at the current state, which we said was just the last word. In this example, that would be on _____. With this restriction, you’re probably much less sure of what I’m about to say, but there’s still some information here. The next word is probably not another preposition like at, for example.

Well, we can play kind of a funny trick and decide to call the last two words the state. Doesn’t this break the Markov property? No: the transition probabilities still only depend on the current state; we’ve just changed the definition of a state. Now the first state is I have and the second state is have a. The final state is appointment on. This state actually has a lot of information about what the next word will be, almost as much as the full sentence.

There’s a tradeoff to opening up the state like this, and it has to do with the transition probabilities. There are way more of them now! As the number of words in the state increases, the number of possible states grows exponentially and the number of possible transitions does too.

Estimating the probabilities

Having a large number of states is a problem, because we’re going to have a finite amount of data to estimate these probabilities from. Let’s see what that looks like. A simple way to find these probabilities is to look at a corpus of text, count the number of times each state occurred, and count the number of times each possible transition out of it actually happened. It’s a little silly to build a Markov chain from a single sentence, but let’s use that as the corpus, and take a look at the graph with a two-word state and the above strategy for calculating transition probabilities.

Doctor model

This graph is pretty boring. Each state occurs exactly once, so the graph is linear. We only have one example of a transition from I have, and it’s to have a, so the estimated probability of that transition is one.

What if we use this corpus of four sentences instead:

My favorite color is red

My favorite color is blue

My favorite color is green

Your favorite color is blue

Now the graph would look like:

Color model

This Markov chain is a bit more interesting and illustrates the probability calculation. Since we have two instances of color is being followed by is blue, it is twice as probable in the Markov chain. There are also two states that lead to the favorite color state, each with probability one.
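Here's what that counting-and-normalizing strategy looks like as a Python sketch, applied to the four-sentence corpus:

from collections import Counter, defaultdict

corpus = [
    "My favorite color is red",
    "My favorite color is blue",
    "My favorite color is green",
    "Your favorite color is blue",
]

# Count how often each two-word state is followed by each next state.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        state = (words[i], words[i + 1])
        next_state = (words[i + 1], words[i + 2])
        counts[state][next_state] += 1

# Normalize the counts into transition probabilities.
probs = {
    state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for state, nexts in counts.items()
}

print(probs[("color", "is")])
# {('is', 'red'): 0.25, ('is', 'blue'): 0.5, ('is', 'green'): 0.25}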

This model doesn’t reflect the fact that three of the four sentences start with My. We can solve that by adding special start and end tokens to the beginning and end of each sentence. Effectively, just pretend that each sentence looks like this:

START START My favorite color is red END END

We need two of each because our states are two words.

Full color model

Generating sentences

Generating a new sentence from a Markov chain is simple. Start at the START START state, pick the next state according to the listed probabilities, and repeat until you reach the END tokens. This Markov chain can generate sentences that don’t appear in the input list. For example, it has a 6.25% chance (a 1/4 chance of starting with Your, times a 1/4 chance of is red following color is) of generating this sentence:

Your favorite color is red
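With the START/END padding in place, a generator is only a few lines. Here's a self-contained sketch using the same four-sentence corpus; for generation it's handy to map each state directly to its possible next words:

import random
from collections import Counter, defaultdict

START, END = "START", "END"
corpus = [
    "My favorite color is red",
    "My favorite color is blue",
    "My favorite color is green",
    "Your favorite color is blue",
]

# Rebuild the counts on padded sentences.
counts = defaultdict(Counter)
for sentence in corpus:
    words = [START, START] + sentence.split() + [END, END]
    for i in range(len(words) - 2):
        counts[(words[i], words[i + 1])][words[i + 2]] += 1

def generate():
    state, out = (START, START), []
    while True:
        options = counts[state]
        word = random.choices(list(options), weights=options.values())[0]
        if word == END:
            return " ".join(out)
        out.append(word)
        state = (state[1], word)

print(generate())  # e.g. 'Your favorite color is red'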

Markov Chain for Conversations

So far we’ve been generating sentences, but our Slackbot generates conversations between multiple people instead. We considered different ways of doing this, but settled on a method that I think is kind of clever. We replace each word with a tuple representing the word and the speaker. We also added an extra token indicating that a single message has ended. A conversation like this:

Alice: Hey
Bob: Hey there

is now represented with these tokens:

  • START
  • (alice, Hey)
  • SEND
  • (bob, Hey)
  • (bob, there)
  • END

Alice saying Hey is now a completely different token from Bob saying Hey. Instead of generating sentences as sequences of words, we can generate conversations as sequences of these tuple tokens. Here’s what a model for this one conversation would look like:

Conversation model

This graph looks a little uglier, but it can encode a bunch of information we care about. Most importantly, it has the power to bridge the gap between individual messages so that they can be related to each other. It also captures the pattern of how people choose to break up their messages. For example,

I don’t know

is very different from

I
don’t
know

The SEND tokens allow us to encode that.
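Here's a sketch of that tokenization, assuming conversations arrive as (speaker, message) pairs:

START, SEND, END = "START", "SEND", "END"

def tokenize(conversation):
    """Turn [(speaker, message), ...] into one sequence of tuple tokens."""
    tokens = [START]
    for speaker, message in conversation:
        tokens += [(speaker, word) for word in message.split()]
        tokens.append(SEND)  # mark the end of each individual message
    tokens[-1] = END         # the final SEND becomes the conversation's END
    return tokens

print(tokenize([("alice", "Hey"), ("bob", "Hey there")]))
# ['START', ('alice', 'Hey'), 'SEND', ('bob', 'Hey'), ('bob', 'there'), 'END']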

The downside of the tuple token approach is, of course, that it increases the number of states and possible transitions. We tried to limit this effect by ignoring case and punctuation on the output side of nodes. For example, these conversations:

Alice: HEY
Bob: Hey there

Alice: Hey
Bob: Hey there

would lead to this Markov chain:

Conversation case model

In this case, Alice retains the ability to say both Hey and HEY, but Bob doesn’t care about that distinction.

Want to try it out? Check out our GitHub repo.

»


RaaS: Reality as a Service

Declan, Max, Phil and I have been working on this project for about a year. I’m pretty proud of it.

We built Reality as a Service, a free hosted robotics platform. From building robots like ZIPY, we know that reality can be very different from simulation, even for a simple system. RaaS lets everyone test control theory or reinforcement learning code on real hardware, with the quirks you’d find in the real world like latency, friction, and imperfect measurement. Until now, the only way to do this was to build your own robot. And who wants to do that?

RaaS gif


What is RaaS?

Our system is based on OpenAI Gym. Gym is a really cool project. It provides simulations of some simple interactive physical systems and exposes a common interface for controlling inputs and measuring the state. Our first robots are designed to match Gym’s Pendulum-v0 environment, which allows users to control the torque on the pivot of a simple pendulum with the goal of balancing it in the inverted position.

pendulum

RaaS is really easy to use, especially if you’re already familiar with Gym.

import gym
import gym_raas  # import our extra RaaS Pendulum environment

env = gym.make('RaasPendulum-v0')  # initialize the RaaS Pendulum environment
observation = env.reset()  # reset returns the initial observation

for _ in range(1000):  # run for 1000 steps
    env.render()
    action = my_controller.get_action(observation)  # <-- 🚨 your controller goes here 🚨
    observation, reward, done, info = env.step(action)  # execute the chosen action

env.close()

Importing gym_raas adds a special new environment called RaasPendulum-v0 to the stock gym module. This environment’s interface is the same as Pendulum-v0. When code like this is run on our hardware, it controls a DC motor mounted to the base of the pendulum and reads back position and angular velocity information from a rotary encoder. When run on other computers, the code controls a simulation tuned to approximately match our hardware. Users can develop their code locally using the simulation and then test it on RaaS.
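Under the hood, importing gym_raas presumably uses Gym's standard registration hook, something like this sketch (the entry_point module path is an assumption, not the actual package layout):

# How gym_raas might register its environment on import.
from gym.envs.registration import register

register(
    id='RaasPendulum-v0',
    entry_point='gym_raas.envs:RaasPendulumEnv',  # hypothetical module path
    max_episode_steps=200,  # Pendulum-v0 uses the same episode length
)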


How do I try it?

You can try it by going to raas.perciplex.com and submitting a job. All you have to do is provide the URL to a Github project with a controller defined in a file called run.py in the main branch. We wrote an example controller that’s easy to try right away: https://github.com/perciplex/raas-example. If you want to write your own code, you can start by forking our starter repository.

After you submit a job, it will be added to the queue. Once your job is complete, you can view a graph of the results and an animation of what happened, read the standard out, and download the data.

interface


How does it work?

The center of the RaaS universe is a Flask server running on AWS Elastic Beanstalk. It maintains a list of queued, running, and completed jobs. Jobs can be added to the queue and viewed using a web app built with React.

The robots are Raspberry Pi 4s connected to motors with built-in rotary encoders. Each robot continually polls the server for jobs. If one is waiting in the queue, that job is popped and assigned to the requesting robot. The robot spins up a Docker container, downloads the user’s code from Github, and attempts to run it. When the code completes or crashes, the data is saved to AWS S3, and metadata about the run is returned to the server, which marks the job as complete. The block diagram below outlines the system.

block diagram
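For a rough idea of the robot side, here's a sketch of that polling loop. The endpoint path and the helpers run_in_docker and upload_results are hypothetical, not our actual API:

import time
import requests

SERVER = "https://raas.perciplex.com/api"  # hypothetical endpoint

def poll_forever():
    while True:
        resp = requests.get(f"{SERVER}/jobs/pop")    # ask the server for a queued job
        job = resp.json() if resp.ok else None
        if job:
            result = run_in_docker(job["git_url"])   # run the user's code in a container
            upload_results(job["id"], result)        # save data to S3, report back
        else:
            time.sleep(5)                            # queue is empty; try again soon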

Go try it out!

Seriously, we’re begging you. Go try it out. 🙏

»