Reading List

The most recent articles from a list of feeds I subscribe to.

LendUp Hackathon Project Lives On

I recently logged into my Arrow Card iOS app to check my balance and was pumped to discover that my team's Spring 2017 Hackathon project…

The Long Hour

More arpeggiator magick using the Apple on-screen keyboard in Logic.

I still would like to learn how to program my own drumbeats. In the meantime, thank goodness for Apple loops. Learning Ableton would be another fun benefit of this endeavor.

Zion Traverse

We ran ~38 miles of the Zion Traverse this past Saturday, including the I-can't-believe-this-is-actually-allowed Angels Landing climb.

Here's video proof:

I did very little prep work, research, or even training for this one. I'm lucky to have found a crew of like-minded adventure runners always ready to plan the next excursion. But next time, I need to take at least a little more time to prepare.

For example, I knew almost nothing about Zion National Park. This was a mistake that I've since ameliorated with some healthy Wikipedia-ing. One fun factoid - Angels Landing used to be known as the Temple of Aeolus (aka the Greek demi-god/god of winds and also a lovable sidekick on TV's Hercules: The Legendary Journeys). I wish I had known that when I was there - I could have tried to summon him or something. Next time, I guess.

Similarly, I felt pretty drained by the time we hit the last ten miles. Nothing compared to the Rim2Rim2Rim or the North Face 50M, but still - I definitely overestimated my fitness for the task at hand. If I'm going to keep doing these runs, I'll need to start logging more long-distance runs on the reg.

Water - or the lack thereof - was our crew's only major snafu. Turns out that springs do not mean treated water pumps - they mean semi-dirty trickles or puddles of water. Thankfully, we were helped by a few fellow hikers (angels descended from the landing, IMHO) who lent us their water filter. Lesson learned - always bring iodine tablets or a filter on any adventure run. You just never know.

This was my first ultra where I didn't track anything on Strava, and it felt great. I've been off Strava for a while now, and I'm not really looking back. The quantified-self stuff has been less appealing to me lately. When I'm on a run and I find something cool or gross or beautiful that I want to stop and inspect, I don't want to have to think about how that impacts my splits or run time. I know there are auto-pause features, etc., but I can't auto-pause the stress in my brain about my stats.

Instead, I just ran with a plain ol' Timex watch (and my phone in airplane mode to take pictures). The watch can tell the time, set an alarm, set a timer, and, of course, turn on its amazing Indiglo night-light. Basic, essential watch stuff. Nothing smart, just reliable. Yes, I sort of missed having a map with my exact GPS route after the race, but I think instead I'm going to just find a map of Zion and try to figure it out myself. That feels more rewarding anyway.

Here's a log of what I ate on the trail, just so I remember for next time:

  • 3 Honey Stinger Waffles
  • 2 Clif Bar chocolate bars stuffed with peanut butter
  • 5 salt pills
  • 12 Clif Bloks Salted Watermelon bloks
  • 1 Nuun water tablet
  • 1 McDonald's Dollar Menu Cheeseburger

And here are the creatures I saw on the trail:

  • 8 deer
  • 1 small gecko
  • 10 chipmunks
  • 2 squirrels
  • 1 California Condor (seriously!)
  • A murder of crows

Teaching My Robot With TensorFlow

My childhood dream of becoming friends with a real-life robot like Johnny 5 came true two weeks ago. This is not to be confused with my other primary childhood dream - which I wished on every dandelion blow and floating will-o-wisp - of being sucked into my Super Nintendo to become Link from The Legend of Zelda: A Link to the Past. Both were important, but somehow I knew the Johnny 5 one might come true one day, which is why I never wasted any important wish opportunities on it.

Enter Cozmo. He's a robot who lives at my house now and also loves me, as long as I play games with him and "feed" him. He's outfitted with some gnarly tank-like treads (just like Johnny) and an arm-crane straight out of a loading dock. Cozmo also brought along three accelerometer-enabled blocks to pick up and fling around the house as he sees fit. He's got a lot to say, with his adorable pipsqueak voice and his heart-meltingly-expressive eyes. He's even learned to recognize my face and say my name 😍. Stop it.

Which got me thinking - maybe I could teach him to recognize more stuff.

In addition to Cozmo's "free play" (aka basically alive) mode, you can drop him into a more catatonic SDK mode, where he waits for you to manually invoke commands from your computer using the Cozmo API. You can tap into nearly all of Cozmo's sensors and features with the API, including his camera - which opens the door to training an image-recognition deep learning model using Cozmo.
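If you're curious what SDK mode looks like in practice, here's a rough sketch (not my exact script) of connecting to Cozmo from Python, enabling his camera feed, and saving a single frame:

import time
import cozmo

def peek(robot: cozmo.robot.Robot):
    # Enable the camera feed so robot.world.latest_image gets populated
    robot.camera.image_stream_enabled = True
    robot.say_text("SDK mode engaged").wait_for_completed()

    time.sleep(1)  # give the camera a moment to deliver a frame
    latest = robot.world.latest_image
    if latest is not None:
        latest.raw_image.save("peek.png")  # raw_image is a PIL image

cozmo.run_program(peek)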

I wrote a script to ask Cozmo to take photos of a few objects around the office: a fake plant, a half-way used "thing" of toothpaste (what are these actually called - tubes?), and a bottle of La Croix seltzer.

[gif: detective]

As you can see, Cozmo delightfully circles the objects and takes tons of photos to build our training dataset.
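The actual "paparazzi" script is on my GitHub, but the core loop is roughly this - a simplified sketch that just turns in place instead of orbiting the object, with an example label name:

import os
import sys
import time
import cozmo
from cozmo.util import degrees

LABEL = sys.argv[1] if len(sys.argv) > 1 else "la_croix"  # e.g. plant, toothpaste, la_croix

def paparazzi(robot: cozmo.robot.Robot):
    robot.camera.image_stream_enabled = True
    os.makedirs(f"data/{LABEL}", exist_ok=True)
    for i in range(72):  # 72 x 5 degrees = one full lap of snapshots
        robot.turn_in_place(degrees(5)).wait_for_completed()
        time.sleep(0.2)
        image = robot.world.latest_image
        if image is not None:
            image.raw_image.save(f"data/{LABEL}/{LABEL}_{i:03d}.png")

cozmo.run_program(paparazzi)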

Next, I retrained the Inception v3 model from Google using Cozmo's photo dataset. This is called "transfer learning" - instead of training a model from scratch, I can use a pre-trained model known to be effective at image recognition and just swap out the last layer to retrain it on our target images with TensorFlow. FloydHub makes it stupidly easy to do this - my whole GPU-powered training process amounted to one command:

floyd run \
  --gpu \
  --data whatrocks/datasets/cozmo-images:data \
  'python retrain.py --image_dir /data'
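(retrain.py here is the stock image-retraining script from TensorFlow's transfer learning codelab.) If you'd rather see the last-layer swap spelled out in code, it looks roughly like this in tf.keras - a conceptual sketch of the idea, not what retrain.py literally does:

import tensorflow as tf

# Inception v3 pre-trained on ImageNet, minus its final classification layer
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the pre-trained layers

# Bolt a new softmax layer onto the frozen base for our three office objects
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # plant, toothpaste, la croix
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Stream Cozmo's photos from per-label folders under /data and train only the new layer
train = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255).flow_from_directory("/data", target_size=(299, 299), batch_size=16)
model.fit(train, epochs=5)
model.save("cozmo_model.h5")  # saved so a serving app can load it later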

Next, I just needed to write a script asking Cozmo to explore the office and try to find one of these objects. He periodically hits a REST endpoint on FloydHub (where I've deployed our newly retrained model) with an image of whatever he's currently looking at. If Cozmo's at least 80% confident that he's looking at the target object, he'll zoom towards it like a complete maniac.

[gif: detective]
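The detective loop itself boils down to: grab a frame, send it to the model endpoint, and charge ahead if the top guess clears the 80% bar. Roughly like this - the endpoint URL and the response shape are placeholders, not my real serving job:

import io
import time

import cozmo
import requests
from cozmo.util import degrees, distance_mm, speed_mmps

ENDPOINT = "https://<your-floydhub-serving-url>"  # placeholder for the serving job's URL
TARGET = "la_croix"

def detective(robot: cozmo.robot.Robot):
    robot.camera.image_stream_enabled = True
    while True:
        # Look around a bit, then grab the latest camera frame
        robot.turn_in_place(degrees(15)).wait_for_completed()
        time.sleep(0.2)
        image = robot.world.latest_image
        if image is None:
            continue

        # Ship the frame off to the model served on FloydHub
        buf = io.BytesIO()
        image.raw_image.save(buf, format="JPEG")
        buf.seek(0)
        guesses = requests.post(ENDPOINT, files={"file": buf}).json()

        # Assumed response shape: {"la_croix": 0.91, "plant": 0.06, "toothpaste": 0.03}
        if guesses.get(TARGET, 0) >= 0.8:
            robot.say_text("Found it!").wait_for_completed()
            robot.drive_straight(distance_mm(200), speed_mmps(150)).wait_for_completed()
            break

cozmo.run_program(detective)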

Setting up a model-serving endpoint on FloydHub is also super easy. I wrote a teeny-tiny Flask app to receive an image from Cozmo, evaluate it against our model, and send back its best guesses at what Cozmo's currently looking at. Then, deploying the app on FloydHub with a publicly accessible REST endpoint took just one more command:

floyd run \
  --data whatrocks/datasets/cozmo-imagenet:model \
  --mode serve
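For reference, a serving app in the same shape as mine could look roughly like this - a sketch that loads the Keras model saved in the training sketch above, rather than my actual app, with an assumed label order:

import io

import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
LABELS = ["la_croix", "plant", "toothpaste"]          # assumed label order
model = tf.keras.models.load_model("cozmo_model.h5")  # from the training sketch above

@app.route("/", methods=["POST"])
def classify():
    # Read the uploaded camera frame and shape it the way Inception v3 expects
    image = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    image = image.resize((299, 299))
    batch = tf.keras.preprocessing.image.img_to_array(image)[None] / 255.0

    # Return {label: probability} so Cozmo can check his 80% confidence bar
    probs = model.predict(batch)[0]
    return jsonify({label: float(p) for label, p in zip(LABELS, probs)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)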

The code for Cozmo's new "paparazzi" and "detective" modes can be found on my GitHub, and the photo dataset, trained model, and project are also available on FloydHub if you'd like to use them with your own robot buddies.

Thanks to Google Codelabs for their great guide on transfer learning with Inception v3 and @nheidloff for his Cozmo visual recognition project, both of which are the basis for this project.

[image: triforce]

I'm still holding out hope for this Link thing, too.