Marble Mirror Update

We’ve made a lot of progress since my first marble machine post!

Major Changes

Here’s an overview of the main changes we made since my last post.

Smaller, Better Marbles

Most obviously, we’ve switched from 3/8” stainless steel balls to 6mm ABS beads. I was skeptical at first, but I now see that this is better for a number of reasons.

Why was I skeptical? I thought that small plastic beads would be too light and would get stuck in the machine, either by static charge or by just not having enough inertia to roll when necessary. Steel ball bearings are sold as machine parts and can be bought with tight tolerances. We were able to get two colors by oxidizing the steel with chemicals. The only plastic balls we could find that came in multiple colors were sold as art supplies or decoration. I thought this would mean they were inconsistent or not perfectly round.

None of this has been a problem at all. The beads are actually easier to move around by virtue of being lighter, and the black and white beads have much better contrast than the oxidized and unoxidized ball bearings.

We’re using a product sold as “undrilled beads”.

Fiber Optic Color Sensing

The machine needs to know what color each marble is in order to place it in the correct column. We do this with an RGB pixel sensor and an LED for illumination. Fitting the LED/sensor module onto the carriage was awkward and imposed an annoying design constraint. We solved this by moving the sensor to the side of the carriage and routing light around with PMMA fiber optic cables. One cable illuminates the marble and the other collects light for the sensor. I’m sort of surprised by how well this works.

It looks cool, too.
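The classification itself is simple thresholding on brightness. Here’s a rough Python sketch of the idea (not the machine’s actual code: the cutoff numbers are made up, and the channel values stand in for whatever the sensor reports):

def classify_bead(r, g, b):
    # Average the channels into a rough brightness value.
    brightness = (r + g + b) / 3

    # A white bead bounces lots of LED light back up the fiber,
    # a black bead bounces a little, and an empty slot almost none.
    # These cutoffs are placeholders, not calibrated values.
    if brightness > 180:
        return "white"
    if brightness > 40:
        return "black"
    return None

print(classify_bead(230, 228, 225))  # -> "white"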

Bucket-Style Reservoir

One issue we’d been grappling with is the size of the ball reservoir. If we want to be able to display all possible images, we need at least enough black marbles to fill the whole screen and an equal number of white marbles. Imagine actually trying to draw a fully black image with that configuration, though. The stream of marbles is drawn semi-randomly from the reservoir, so hunting for those last few black marbles could take a while. If the reservoir can hold three times the number of marbles in the screen, it will still contain one third black marbles even when the screen is all black: with an N-pixel screen and 2N marbles of each color, drawing an all-black image leaves N black marbles among the 3N still in the reservoir. That’s a big reservoir, though.

In previous versions, marbles were completely confined to a single plane. To increase the reservoir size without having it dominate the visible surface, we added a big out-of-plane bucket behind the machine.

Photo front

Smoothly Curved Elevator Channel

We ran into issues with balls binding in the lower-left corner, where a stepper motor lifts marbles to the top of the machine. Previous versions featured a corner with a polygonal wall, a simple design achievable with the laser cutter.

Model front

The new version has a smooth corner. It was also produced with the laser cutter, just in a less obvious way.

Model front Model front

Improved Stepper Driver

In earlier versions of the marble machine, we used simple H-bridge style stepper drivers. They essentially just amplify the output of a digital pin. If, like us, you’re using the GPIO pins of a Raspberry Pi to command the driver, these are terrible: the pins can only be toggled so fast, and the timing is inconsistent, which keeps the drive speed low and makes the motor very noisy.

We changed to a proper CNC control board running GRBL, which accepts G-code. Now, instead of directly controlling the state of every pin, the Pi just sends serial commands saying “go to position 10 mm”.

Getting this working was an ordeal, but it was totally worth it: the motion of the carriage and elevator is now smooth and fast.
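For anyone trying something similar, the Pi side boils down to writing G-code lines over a serial port and waiting for GRBL’s ok. A minimal Python sketch with pyserial (the port name and positions here are placeholders, not our actual configuration):

import serial
import time

grbl = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
time.sleep(2)               # GRBL resets when the port opens; give it a moment
grbl.reset_input_buffer()   # discard the startup banner

def send(line):
    # Send one G-code line and block until GRBL acknowledges it.
    grbl.write(line.encode() + b"\n")
    return grbl.readline().decode().strip()

send("G21")      # work in millimeters
send("G90")      # absolute positioning
send("G0 X10")   # "go to position 10 mm" on the carriage axis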

»


Cherry Rocking Chair for Dad or: A Botch-Job in Three Acts

My dad turned 70 this year, so Matt and I decided to make him an old man rocking chair. This project was an adventure. We wasted a lot of perfectly good cherry by repeatedly messing up and/or changing our design. Still, I’m proud of how it came out:

Photo front

Our inspiration was a beautiful rocking chair we found online:

Inspiration

Our starting design closely followed the Masaya chair. We planned to copy most of the details, including the bridle-jointed rockers, cantilevered armrests, and stretcher-supported seat. As we built, we made some changes. Usually, we plan things out in advance and more or less stick to the plan. This time, we often deferred choices until we were forced to make them. For example, we never had a coherent plan to connect the seat to the stretchers. Ultimately, we avoided the stretchers altogether and fastened the seat to the frame with four 3/4” pins. This strategy, driven mostly by laziness and indecisiveness, was both flexible and stressful. I wasn’t truly confident in the project until we clamped up for the last time.

Eventually, we landed on this:

Photo front

A Botch-Job in Three Acts

We worked on this project over three periods: one around Halloween, one over Thanksgiving, and one just after Christmas.

Act I: Halloween

This was definitely our spookiest (and least successful) session. Matt and I got together and made doomed pieces for the frame. We cut two rockers, four vertical supports, and three stretchers. Out of the nine pieces we built that weekend, only three made it into the final piece.

The rockers were the most bone-chilling part to replace because they were cut from a ridiculously expensive piece of 9” wide 8/4 cherry. I bought a band saw on Craigslist specifically for this project. It’s a cheap 8” Delta benchtop band saw. I drove about an hour to pick it up. The seller turned out to be a pawn shop.

Photo front

We tried some test cuts and were immediately disappointed. This saw clearly wasn’t powerful enough for 8/4 hardwood and the surface finish was absolutely terrible. We opted instead to cut the rockers using a jigsaw. This was a big mistake. Rocker one looked okay, but while cutting rocker two the blade deflected almost half an inch. We switched back to the band saw to correct the undercut, but the damage was done. We convinced ourselves at the time that thinner rockers would be okay, but I think we both knew the truth.

Act II: Thanksgiving

Over Thanksgiving, we made the seat. We bravely defied typical three-act structure, so this session went much better. One contributing factor was a new band saw: Will joined me to drive up toward the New Hampshire border to buy a much bigger and much better 12” Craftsman band saw for only $75. The seller was a nice man who had just finished reffing a soccer game and was not a pawn shop. Here’s Matt using it to cut a part of the seat back:

Photo front

One aspect of the design that we deferred until the last minute was the woven seat. We bought a thousand feet of black polyester rope and ignored the question until it was time to start winding. The rope came on a massive and intimidating spool. We tested out a few options and dove in. Clamping the seat to a vertical column allowed us to pass the spool back and forth easily.

Photo front

There was so much rope.

Photo front

Eventually, it came together.

Photo front

Act III: Christmas

The climactic final act. We got together for almost a week after Christmas and finished things up. First order of business: bite the bullet and buy more wood. The new rockers were a huge improvement. It was definitely the right decision to replace them. We also changed the design, replacing the cantilevered armrests with a more symmetrical arrangement and adding overhangs in a few places to visually match the seat.

Photo front

As we clamped the seat to the frame to decide how it would fit, we realized that the stretchers supporting the seat weren’t necessary. Instead, we connected the parts with pins.

Photo front Photo front

We finally glued things up at 3 AM on New Year’s Day.

Photo front

Very happy with how this project turned out. Happy birthday, Dad!

»


Marble Mirror (work in progress)

Photo front

I’ve always sort of wanted to make a marble-based machine. I’ve been following the progress of Wintergatan/Martin Molin’s Marble Machine X for some time. It’s a follow-up to the original, better-known Marble Machine of YouTube fame.

You should probably just leave that video running and let it be the background music while you read this post.

This time around, Martin’s documented his progress with a compelling series of videos showing the ups and downs of his adventure to build a giant music-playing machine based around marbles. A marble machine almost seems simple: it’s just tracks and belts and xylophones and stuff. His videos put the lie to this idea. The Marble Machine X project has stalled, and as of late 2021 he’s pursuing a ground-up redesign of his incredible machine after coming to terms with some fundamental design flaws.

Fortunately, we picked a much easier project. The name is ridiculous at this point, but it’s stuck. The original idea was to produce some kind of display where the “pixels” are made of a grid of macro-scale physical objects and connect this screen to a webcam to create a “mirror”. The direct inspiration was Daniel Rozin’s mechanical mirrors.

We considered a few designs but settled on one that’s nowhere near real time. We finally have it working:

It forms an image in the style of a Connect 4 board. A stepper motor lifts a column of marbles to the top of the machine by rotating a notched disk (blue) and drops one into the carriage (green). An RGB sensor in the carriage (red) measures the color of the cargo area and decides whether it contains a reflective steel marble, a black marble, or no marble. If the marble can be used in one of the columns of the image, it is delivered to that column and dropped in. If the marble can’t be used because it doesn’t match the next pixel in any column, it is rejected and dropped back into the reservoir at the bottom.
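The routing decision is just a scan over the columns. Here’s a hypothetical Python sketch (the names and structure are mine, not the real control code), where columns holds each column’s remaining pixels with the next-needed pixel first:

def route_marble(color, columns):
    # color is "steel", "black", or None if the cargo area is empty.
    if color is None:
        return None  # nothing to place

    # Deliver to the first column whose next pixel matches this marble.
    for i, column in enumerate(columns):
        if column and column[0] == color:
            column.pop(0)  # that pixel is now spoken for
            return i

    return None  # no column can use it: reject back into the reservoir

columns = [["black", "steel"], ["steel", "steel"]]
print(route_marble("steel", columns))  # -> 1, since column 0 needs black first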

Model front Model front

The carriage rides on linear bearings on an 8mm steel rod. It’s driven by a stepper motor and lead screw. A servo with a plastic horn acts as a gate.

Model front Model front

»


Fifty Year Beep (work in progress)

Phil, Will, and I are working on a project called The Fifty Year Beep. Here’s the idea: It’s a self-contained battery-powered circuit that beeps once and only once at a specified date and time fifty years in the future.

It would feature a sleek, futuristic design. Our prototype looks like this:

Photo front

It could use some work.

You keep it in your home as a piece of art. On the day of your beep, you gather your friends and family to hear it. You’ve been waiting for fifty years! The moment finally comes and the device emits a single electronic beep.

I love this idea. I’m only half joking when I say this could be an actual product. A ten year beep would be a perfect wedding gift. Any big life event would be an occasion to give a beep.

We’ve been building one. Here’s what it looks like on the inside:

Model front

It’s basically a circuit on a panel mounted inside a laser-cut dodecahedron. Here’s that panel:

Model front

It’s based around a SAMD21 microcontroller on a QT Py breakout board and a DS3231 RTC module. It’s powered by a TL2300 D-sized lithium thionyl chloride battery rated at 19000 mAh at 3.6 V. The beep is produced by a small piezo buzzer.

The schematic:

Photo front

The layout:

Photo front

An Onshape model of the dodecahedron:

Photo front

We removed all LEDs from the microcontroller and RTC module. We developed a scheme for setting an alarm on the RTC module and configuring the microcontroller to wake on an interrupt. This gets us down to a quiescent current of about 100 μA. Some naive math says that this version can last 190000 hours or about 21 years. The discharge curve for this battery chemistry is very flat, so we think this estimate might not be too far off.
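Spelling that naive math out: runtime is just capacity over current draw, so 19000 mAh ÷ 0.1 mA = 190000 hours, and 190000 ÷ 8766 hours per year ≈ 21.7 years.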

Photo front

Here’s the code we used:

#include "ArduinoLowPower.h"
#include <RTClib.h>

RTC_DS3231 rtc;

// Pin used to trigger a wakeup
const int INTERRUPT_PIN = 1;
const int BEEPER_PIN = 0;

bool ready_to_beep;
DateTime alarm;
const TimeSpan WAKEUP_PERIOD = TimeSpan(1, 0, 0, 0);   // check in once a day
const TimeSpan BUFFER_PERIOD = TimeSpan(0, 0, 0, 10);  // safety margin around the target


void setup() {
  // enable internal pullups: the RTC interrupt line is open-drain,
  // and unused pins shouldn't be left floating
  pinMode(INTERRUPT_PIN, INPUT_PULLUP);
  pinMode(2, INPUT_PULLUP);
  pinMode(3, INPUT_PULLUP);
  pinMode(4, INPUT_PULLUP);
  pinMode(5, INPUT_PULLUP);
  pinMode(7, INPUT_PULLUP);
  pinMode(9, INPUT_PULLUP);
  pinMode(10, INPUT_PULLUP);
  pinMode(11, INPUT_PULLUP);
  pinMode(12, INPUT_PULLUP);
  pinMode(13, INPUT_PULLUP);
  pinMode(14, INPUT_PULLUP);

  pinMode(BEEPER_PIN, OUTPUT);
  
  double_beep();
  
  rtc.begin();

  // set the RTC from the compile time, then aim the alarm at the target
  rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
  alarm = rtc.now() + TimeSpan(6, 20, 0, 0);  // TimeSpan(days, hours, minutes, seconds)
  ready_to_beep = false;


  // we don't need the 32K Pin, so disable it
  rtc.disable32K();

  // stop oscillating signals at SQW Pin
  // otherwise setAlarm1 will fail
  rtc.writeSqwPinMode(DS3231_OFF);

  // turn off alarm 2 (in case it isn't off already)
  // alarm flags survive a reboot, so a previously set alarm could otherwise go overlooked
  rtc.clearAlarm(2);
  rtc.disableAlarm(2);

  set_alarm();
  
  // Wake from sleep when the RTC pulls the interrupt line low
  LowPower.attachInterruptWakeup(INTERRUPT_PIN, interrupt_handler, FALLING);
}

void interrupt_handler() {
  if (ready_to_beep) {
    beep();
    rtc.clearAlarm(1);
    rtc.disableAlarm(1);  // otherwise the alarm re-fires the next time the date matches
  } else {
    double_beep();  // not time yet: chirp and re-arm the next wakeup
    set_alarm();
  }
}

void set_alarm() {
  // clear the alarm 1 flag first; it isn't reset on reboot, which easily leads to problems
  rtc.clearAlarm(1);

  if (rtc.now() + WAKEUP_PERIOD + BUFFER_PERIOD > alarm) {
    // the target is within one wakeup period: arm the final alarm
    ready_to_beep = true;
    rtc.setAlarm1(alarm, DS3231_A1_Date);
  } else {
    // otherwise just schedule the next periodic check-in
    rtc.setAlarm1(rtc.now() + WAKEUP_PERIOD, DS3231_A1_Date);
  }
}

void loop() {
  LowPower.sleep();  // sleep until the RTC interrupt wakes us
}

void beep()
{
    digitalWrite(BEEPER_PIN, HIGH);
    // delayMicroseconds() busy-waits, so it still works inside the wakeup
    // interrupt handler, where delay() would hang
    delayMicroseconds(1000000);  // one-second beep
    digitalWrite(BEEPER_PIN, LOW);
}

void short_beep()
{
    digitalWrite(BEEPER_PIN, HIGH);
    delayMicroseconds(100000);  // 100 ms chirp
    digitalWrite(BEEPER_PIN, LOW);
}

void double_beep() {
    short_beep();
    delayMicroseconds(100000);
    short_beep();
}
»


NewFriend -- A Slackbot to Replace your Friends

There’s something I love about bad robots. To this day, I can’t watch this KetchupBot video without smiling.

Part of what makes this robot so great is that it hits the perfect level of crappiness. I think if it were any better or worse at applying ketchup, it would be less funny. There’s also something about putting a lot of careful work into a machine that’s both crappy and useless.

Along these lines, Declan, Max, Phil and I built a Slackbot to imitate our friends. Every hour or so, it simulates a conversation among the users of our Slack workspace and posts it to a special channel in real time. It does a pretty good job of matching each person’s style, vocabulary, subject matter, punctuation, and spelling. It doesn’t do a good job of generating sentences that make sense. I think it kind of landed in KetchupBot territory.

The basis of the bot is a statistical model called a Markov chain trained on our Slack history.

What is a Markov Chain?

A Markov chain models a system as a set of discrete states and transition probabilities between those states. I think a diagram is the best way to explain.

Let’s model the weather in Somerville, MA with a Markov chain model. The state will be the weather during one hour of the day.

Rain and sun model

In this model, the set of possible states is sun and rain. This diagram says that if it is sunny right now, there is a 90% chance that it will also be sunny in the next hour, and a 10% chance it will be raining. If it is raining right now, there’s an 80% chance it will be raining in the next hour and a 20% chance it will be sunny. The probabilities on the arrows leading out of each node sum to one, because the weather will always be doing something in the next hour.

This whole structure conforms to what’s known as the Markov property, where the only factor that determines the probability of future states is the present state. To put it another way, the model is memory-less. To predict the state in an hour with a Markov chain, all we need to know is the current state.
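To make the memory-less part concrete, here’s a tiny Python simulation of the diagram above (the probabilities are just the ones on the arrows):

import random

# Transition probabilities straight from the diagram.
TRANSITIONS = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"rain": 0.8, "sun": 0.2},
}

def next_state(state):
    # The next hour depends only on the current hour: the Markov property.
    options = TRANSITIONS[state]
    return random.choices(list(options), weights=list(options.values()))[0]

weather = "sun"
for hour in range(12):
    weather = next_state(weather)
    print(hour, weather)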

How can I use it to make fun of my friends?

To connect this to the problem of generating sentences in a particular style, we need to change what a state means and what a transition represents. What would it look like if we viewed a sentence as a sequence of states? The current state is the last word typed, and the next state is the next word. The transition probabilities represent the probabilities of saying each word next. We’d be treating a sentence as a kind of semi-random walk through word space.

A Markov chain for sentences

Imagine you heard me say I have a doctor's appointment on _____. You’ve got a pretty good idea of what I’m about to say. Almost certainly the name of a day, and probably a weekday. Maybe I’ll say the twenty-fifth of July. There’s an outside chance I’ll surprise you by saying a boat or Aquidneck Island. Still, you can narrow it down a lot.

Problem is, the Markov model only gets to look at the current state, which we said was just the last word. In this example that would be on _____. With this restriction, you’re probably much less sure of what I’m about to say, but there’s still some information here. The next word is probably not another preposition like at, for example.

Well, we can play kind of a funny trick and decide to call the last two words the state. Doesn’t this break the Markov property? No; the transition probabilities still only depend on the current state, we’ve just changed the definition of a state. Now the first state is I have and the second state is have a. The final state is appointment on. This state actually carries a lot of information about what the next word will be, almost as much as the full sentence.

There’s a tradeoff to opening up the state like this, and it has to do with the transition probabilities. There are way more of them now! As the number of words in the state increases, the number of possible states grows exponentially, and the number of possible transitions grows with it: with a vocabulary of V words, there are V possible one-word states but V² possible two-word states.

Estimating the probabilities

Having a large number of states is a problem, because we’re going to have a finite amount of data to estimate these probabilities from. Let’s see what that looks like. A simple way to find these probabilities is to look at a corpus of text, count the number of times each state occurred, and count the number of times each possible transition actually happened. It’s a little silly to build a Markov chain from a single sentence, but let’s use that as the corpus, and take a look at the graph with a two-word state and the above strategy for calculating transition probabilities.

Doctor model

This graph is pretty boring. Each state occurs exactly once, so the graph is linear. We only have one example of a transition from I have, and it’s to have a, so the estimated probability of that transition is one.

What if we use this corpus of four sentences instead:

My favorite color is red

My favorite color is blue

My favorite color is green

Your favorite color is blue

Now the graph would look like:

Color model

This Markov chain is a bit more interesting and illustrates the probability calculation. Since we have two instances of color is being followed by is blue, that transition is twice as probable in the Markov chain. There are also two states that lead to the favorite color state, each with probability one.

This model doesn’t reflect the fact that three of the four sentences start with My. We can solve that by adding special start and end tokens to the beginning and end of each sentence. Effectively, we just pretend that each sentence looks like this:

START START My favorite color is red END END

We need two of each because our states are two words.

Full color model

Generating sentences

Generating a new sentence from a Markov chain is simple. Start at the START START state and pick each next state according to the listed probabilities. This Markov chain can generate sentences that don’t appear in the input list. It has a 6.25% (1/4 × 1/4) chance of generating this one, for example:

Your favorite color is red
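Here’s the whole thing in one place: a rough Python sketch (not the bot’s real code) that builds the two-word-state chain from the four-sentence corpus, START/END padding and all, then samples from it:

import random
from collections import defaultdict

corpus = [
    "My favorite color is red",
    "My favorite color is blue",
    "My favorite color is green",
    "Your favorite color is blue",
]

ORDER = 2  # number of words in a state

# counts[state][next_word] = how many times that transition appeared
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    tokens = ["START"] * ORDER + sentence.split() + ["END"] * ORDER
    for i in range(len(tokens) - ORDER):
        state = tuple(tokens[i:i + ORDER])
        counts[state][tokens[i + ORDER]] += 1

def generate():
    # Walk the chain from START START, sampling transitions by their counts.
    state = ("START",) * ORDER
    words = []
    while True:
        options = counts[state]
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == "END":
            return " ".join(words)
        words.append(word)
        state = state[1:] + (word,)

print(generate())  # yields "Your favorite color is red" 1/16 of the time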

Markov Chain for Conversations

So far we’ve been generating sentences, but our Slackbot generates conversations between multiple people instead. We considered different ways of doing this, but settled on a method that I think is kind of clever. We replaced each word with a tuple representing the word and the speaker. We also added an extra token indicating that a single message has ended. A conversation like this:

Alice: Hey
Bob: Hey there

is now represented with these tokens:

  • START
  • (alice, Hey)
  • SEND
  • (bob, Hey)
  • (bob, there)
  • END

Alice saying Hey is now a completely different token from Bob saying Hey. Instead of generating sentences as sequences of words, we can generate conversations as sequences of these tuple tokens. Here’s what a model for this one conversation would look like:

Conversation model

This graph looks a little uglier, but it can encode a bunch of information we care about. Most importantly, it has the power to bridge the gap between individual messages so that they can be related to each other. It also captures the patterns in how people choose to break up their messages. For example,

I don’t know

is very different from

I
don’t
know

The SEND tokens allow us to encode that.
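The tokenization is easy to sketch. Here’s a hypothetical Python version (again, the names are mine, not the bot’s):

START, SEND, END = "START", "SEND", "END"

def tokenize(conversation):
    # conversation is a list of (speaker, message) pairs.
    # Each word becomes a (speaker, word) tuple, so Alice's Hey and
    # Bob's Hey are different tokens; SEND marks message boundaries.
    tokens = [START]
    for speaker, message in conversation:
        tokens.extend((speaker, word) for word in message.split())
        tokens.append(SEND)
    tokens[-1] = END  # the last message boundary ends the conversation
    return tokens

print(tokenize([("alice", "Hey"), ("bob", "Hey there")]))
# ['START', ('alice', 'Hey'), 'SEND', ('bob', 'Hey'), ('bob', 'there'), 'END']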

The downside of the tuple token approach is, of course, that it increases the number of states and possible transitions. We tried to limit this effect by ignoring case and punctuation on the output side of nodes. For example, these conversations:

Alice: HEY
Bob: Hey there

Alice: Hey
Bob: Hey there

would lead to this Markov chain:

Conversation case model

In this case, Alice retains the ability to say both Hey and HEY, but Bob doesn’t care about that distinction.

Want to try it out? Check out our GitHub repo.

»