edunham http://edunham.net/ Sysadmin/DevOps & Developer Advocate en-us Sun, 14 Jan 2024 00:00:00 -0800 http://edunham.net/2024/01/14/diy_shoe_chains.html http://edunham.net/2024/01/14/diy_shoe_chains.html <![CDATA[DIY Shoe Chains]]> DIY Shoe Chains

The Pacific Northwest is dangerously frozen at the moment. Other regions handle conditions like this just fine, but a freeze like this is especially dangerous here because most people are unprepared for it.

Right now we’ve got a bunch of frozen sleet on the ground that looks like snow, but offers the traction of an ice skating rink. This morning I thought I could cross the “snow” in regular shoes, and fell (embarrassingly, but non-injuriously) immediately. I was able to go inside and put on my shoe chains because I keep a pair on hand for occasions like this, and with chains the ice was as easy to walk on as snow would be. But it got me thinking about people who might not have thought ahead to own shoe chains. You can’t just order a pair for use today; nobody’s delivering anything in this weather.

So I did a quick experiment to see whether adequate shoe chains could be assembled out of stuff that a normal person might have on hand. All the commercial ones really are, after all, is a piece of something stretchy and some chain. Image below the fold.

Shoe chains are just a stretchy part and a traction part. The traction part goes under the shoe and digs into the ice when you walk. The stretchy part keeps the traction part in place, and makes the whole assembly easier to put onto the shoe.

I made the stretchy part from some old elastic salvaged from a fitted sheet that was on its way to the sewing stash, but you could use elastic from anywhere. If you have a lot of hair ties, you could connect them end to end and make a loop the right size. If you have a lot of rubber bands, you could use those (just be aware that the stretchy part breaking would mean the chains fall off your shoe!). If you didn’t have anything stretchy, you could use an old shoelace and then tie it securely in place. If you used something without stretch, you’d have to untie the knot any time you wanted to remove the chains from your shoes. The stretchy loop should be big enough to fit over the widest part of the sole of your shoe, but small enough that it doesn’t fall off.

Chain works great for the traction part of the shoe chains. I used a piece of flimsy anodized aluminum chain that used to be part of some accessory; I forget if it was a belt or a purse. A steel chain necklace would work great, as would a spare wallet chain. The prototype with the chain stayed in place really nicely on my shoes when I tested it. I attached the chain to the elastic by threading the elastic through the chain links, so it made a zig-zag of chain in the loop of elastic. If the elastic hadn’t fit through the chain, I could have used safety pins or paperclips to connect them, or sewn it. The chain should be long enough to reach from one side of your shoe sole to the other, but short enough that it stays in place and doesn’t leave a lot of slack.

I also tested plastic mardi gras beads instead of chain. They gave me more traction than an un-chained shoe, but the round beads tended to roll about and move out of position on the shoe. I had to adjust the mardi gras bead prototype several times while walking around, and I didn’t have to adjust the chain one at all. If you’re building shoe chains out of beads, you’ll probably need to put more effort into getting them to stay where you want them. The nice thing about plastic beads is that, if you cross the string parts and twist them, you can easily attach any part of the strand to any other part.

../../../_images/shoechains.png

All the usual disclaimers apply – be safe, use common sense, don’t sue me if this gives you an idea that gets you hurt. But if you do have to get from point A to point B on slippery ice, think about what resources you’ve got at your disposal for making that safer.

]]>
Sun, 14 Jan 2024 00:00:00 -0800
http://edunham.net/2024/01/02/dicemaking_starterkit.html http://edunham.net/2024/01/02/dicemaking_starterkit.html <![CDATA[Making Dice]]> Making Dice

If you’re here for pretty pictures of dice, prepare to be disappointed. Making adequate dice is easy, but taking good photos of them exceeds my current skills.

In today’s standard “how I spent my winter vacation” small talk, I showed some colleagues a dice set that I made last week, and they seemed surprised when I explained that it was relatively easy.

Making excellent or perfect dice is not easy, but I’m not trying for excellent or perfect. I’m trying for “nice to look at” and “capable of showing random-feeling numbers when I roll them”. Those goals are easy to achieve with cheap products from the internet.


If you want to learn to make good dice, go watch Rybonator on YouTube, and the algorithm will start suggesting other good channels as well.

Next I’m going to share some Amazon links, and a note of the approximate price at time of writing. They aren’t affiliate links because signing up for the affiliate program requires more paperwork than I feel like doing right now. But they are the specific items I’ve been using to get pretty-okay dice out of.

The basic materials you’ll need to make dice are:

  • A dice set mold (~$8). Good molds are a solid slab of material. This is not a good mold, but it does make dice.
  • Epoxy resin. I got the 16oz size of this set (~$9) because it was the cheapest option that seemed adequate, and it was cheap, and it was adequate. When warmed in boiling water before mixing, it has very few bubbles, and it cures overnight if left in a warm place.
  • Clean disposable cups and popsicle sticks or equivalent stirrers. Grab these from your recycling or dollar store. ($0-$2)
  • Waterproof gloves that you don’t mind getting resin and paint on, if you don’t want high-tech chemicals on your skin.

A dice mold and resin are enough to get you some transparent dice, but that’s boring. You can include household objects like game pieces or beads in the dice, and you can buy additives specifically designed for resin casting:

  • Mica powder like this colorful set (~$7) or this metallic set (~$10) gives a shiny metallic or pearlescent look
  • Dye like this set (~$10) gives the resin an even, transparent color.
  • Glitter like this assortment (~$11) can sparkle like tiny stars, contrast pleasantly with dyed resin, or just display interesting behaviors where the larger pieces sink if the resin is too warm and the smaller pieces remain in suspension.

Plan what you want to include in a given dice set before starting to mix the resin. I find that 40ml (20ml each of resin and hardener) is just right for a single pour of the mold linked above. Then it’s just a matter of following the directions for the resin. After mixing, you can split the resin into several different disposable cups if you’d like to pour different colors together.

Once the dice are hard, unmold them and be amazed! If you want the numbers to be visible, consider flooding them with whatever paint you have on hand, then wiping off the excess with a paper towel. The molds emboss the numbers into the dice, so after you wipe, paint remains only in the recessed numbers.

If the dice come out too rough, zona papers (~$12) are popular for sanding to a glass-like finish.

If the dice have huge bubbles, you can fix them with UV resin (~$10) and an ultraviolet bulb. The trick to the UV stuff is making sure that the wavelength required by the resin (405-410nm) is included in the spectrum emitted by the lamp (385-410nm). You may already have a UV lamp around if you do gel nails or resin printing.

I find that bubbles often show up in the corners of dice, and sticking some clear tape to the sides I’m mending with UV resin helps keep it where it belongs while letting the light in to harden it. I’ve also gotten some fun effects by painting the inside of the bubble a contrasting color before filling it with UV resin. After repairing bubbles with UV resin, the affected sides often need to be flattened out with a file or coarse sandpaper before polishing with fine sandpaper or zona papers.

In making several sets of increasingly less-bad dice, I’ve noticed that some techniques seem to yield better outcomes:

  • Start with the resin really hot. I set both parts of the epoxy in a container of almost-boiling water before use, then dry them off and measure them out immediately. Hot resin flows better.
  • When using large glitter, make sure to get some glitter-free resin into the very bottom of each die first, or stir it. Big inclusions have a nasty habit of blocking the resin from getting into the tip of the D4.
  • Place the mold on a plate or tray before pouring, and do not remove it from the tray until the dice are hard. Bending the mold at all changes the volume of the cavities, which presses resin out then sucks air in.
  • Smear some resin on the lid before capping the mold, and slowly roll the lid onto the mold. Setting the lid straight down allows air to be trapped under it in the middle.
  • Over-fill the mold cavities slightly.
  • Expect heavier inclusions, like large glitter, to sink to the bottom when the resin is poured hot. The pros often wait for the resin to get tacky before pouring part of a die, but that’s advanced technique and I have yet to try much of it.
  • Paint can be easily removed from the dice numbers with an ultrasonic cleaner. Don’t try to clean the dice this way if you want the paint to stay in!

This barely scratches the surface of dice-making, and there are better resources on every topic for becoming an expert, making custom molds, and other advanced topics. Although it’s very hard to make excellent dice, it’s shockingly cheap and easy to make mediocre dice, and mediocre dice are often more than adequate to have fun with.

]]>
Tue, 02 Jan 2024 00:00:00 -0800
http://edunham.net/2023/03/14/retroreflectors_1.html http://edunham.net/2023/03/14/retroreflectors_1.html <![CDATA[Retroreflectors & Storing Ground Glass]]> Retroreflectors & Storing Ground Glass

Retroreflectors are fun. I finally got around to picking up some cheap glass blast media today (mine’s the 40/70 grit recycled bottle glass from Harbor Freight / Central Pneumatic) and did some testing with various paints and glues that I had lying around. I’m using it as retroreflective beads. When I hear “beads” I think of things with holes in them for putting on a string, but in this case it means more like beads of condensation – tiny round blobs. They feel gritty like beach sand, and being made from clear glass, they look like unusually sparkly white sand as well.

Roughly a decade ago I stumbled across a PDF that did a good job of explaining the physics of glass bead retroreflectors, which I can of course no longer find. It was published by some highway department and contained extreme detail about the temperature tolerances for applying retroreflective beads to the fog lines on roads, because the retroreflective properties only work if the glass beads are embedded just over halfway into the paint. If the beads sink too far into the paint they can’t work their light-bending physics magic, and if they’re embedded too shallowly into the paint they’ll easily come un-stuck from it.

Mental Model

Here’s how I think about bead-type retroreflectors. Physics means that the glass beads basically bounce the light straight back toward where it came from, so the stick figure is holding the flashlight close to its eyes for maximum testing effectiveness over short distances. The beads are obviously not to scale.

../../../_images/diagram.png

The beads have a different index of refraction from the surrounding air. That just means that light bends at a certain angle when it goes from air to glass, and it bends at that angle again when it goes from glass to air on its way out.

The surface of the bead that’s covered in paint reflects light, perhaps only some of the light depending on the color of the paint. White paint reflects the most light, of course.

The light goes into the bead, bends a little, bounces off the paint, goes almost straight back through the bead the way it came, bends a little when it hits the air, and ends up going more or less straight back toward the light source.

I’ve drawn the light in orange before it bounces off the paint and in yellow after it bounces off, to kind of illustrate how it behaves differently hitting the painted floor when it’s gone through a retroreflective bead versus when it’s just gone through the air.
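To put a rough number on “bends a little”, here’s a toy Snell’s law calculation in Python. The refractive index is my assumption (about 1.5 for ordinary bottle glass; purpose-made retroreflective beads use higher-index glass closer to 1.9), and the function name is just for illustration:

import math

def refract(theta_deg, n1, n2):
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_deg)) / n2))

# Assumed indices: air ~1.0, recycled bottle glass ~1.5
for angle in (10, 30, 50):
    print(angle, "degrees in air ->", round(refract(angle, 1.0, 1.5), 1), "degrees in glass")

A ray entering at 30 degrees travels at about 19.5 degrees inside the bead, bounces off the paint, and un-bends by the same amount on its way out, which is why the light heads back roughly toward its source.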

Storing the blast media

../../../_images/storage.png

The one problem with even 5lbs of ground glass is that it does its utmost to get everywhere. If the bottle is ever inverted with the lid on and the lid gets bumped, the sand-like glass ends up everywhere the next time you take the lid off.

Plastic bottles like this come with a “safety” seal when I buy foodstuffs like spices in them, and I’ve never had a problem with those getting all up in the cap and spilling everywhere.

I tried ironing a scrap of mylar food packaging (a fruit snacks bag was most conveniently accessible) onto the plastic bottle, and it stuck nicely! I used my clothes iron on the Cotton setting. Now the beads will stay in the jar until I want to use them again.

Paint/Glue Testing

I put various sorts of paint and glue onto popsicle sticks and sprinkled them with the blast media. I then put them on the door of a room with good blackout curtains, took a photo, turned out the lights, and took a flash photo.

I used a commercial plastic retroreflector in the first two photos as a control for how well the DIY ones were working.

Here’s how it looked with the lights on:

../../../_images/lightson.png

From top to bottom, the samples are:

  • Dark green puff paint / fabric paint
  • Black puff paint / fabric paint
  • Superglue
  • Mod Podge
  • Clear nail polish
  • Dark red nail polish
  • 1 coat white nail polish, then half white half clear nail polish with beads on the second coat
  • White nail polish
  • Black nail polish
  • Black acrylic paint
  • White acrylic paint
  • Metallic gold-ish paint stuff

Everything except the two-coat enamel test had the glass beads poured directly onto a single layer of the paint or glue. Yes, I labeled Mod Podge upside down. I wasn’t planning to take photos at first, but then I realized that if I was going to post about it I probably should.

Here’s how that setup looks in a flash photo from maybe 10’ away, in a dark room:

../../../_images/lightsoff1.png

Here’s how it looks from the same distance, still a dark room, but I zoomed in more:

../../../_images/lightsoff2.png

And here’s how it looks, still zoomed in, but with the commercial reflector removed:

../../../_images/lightsoff3.png

Conclusions

The cheapest glass sandblast media that I could get ahold of works surprisingly well as a retroreflector when applied to white paint. The performance gap between beads on white and beads on darker colors was bigger than I expected, and the clear glues (superglue and Mod Podge) outperformed the darker colored paints.

My biggest surprise was how the white sample almost blended in with the white door in ambient lighting, yet stood out so much in the dark.

Further stuff to test with this includes doing the glue thing on a darker background color, and trying various types of clear topcoat over the bead layer. I don’t think those will work, but then again I did think the black paint might work, so it’s worth testing.

]]>
Tue, 14 Mar 2023 00:00:00 -0700
http://edunham.net/2022/04/05/diy_sewing_thimble.html http://edunham.net/2022/04/05/diy_sewing_thimble.html <![CDATA[DIY Thimble]]> DIY Thimble

Over the past couple weeks, my schedule has had a higher than usual concentration of the kind of meetings where one sits off-camera and listens to a presenter talk. Like many engineers who knit in meetings, I find that keeping my hands busy helps me focus. Knitting puts me on the losing side of a battle between “don’t drop any stitches” and the laws of physics, however, so instead I’ve been hand sewing quite a bit.

I’ve been aware for some time that sewing with a thimble is Objectively Better than sewing without, but I somehow made it to adulthood without learning to use a thimble properly, let alone how to avoid losing one between setting a project down one month and picking it up again the next. I have DIY’d a variety of leather thimbles over the years, but the designs have been a hassle to assemble, non-functional, or both (sewing a leather thimble is a chicken-and-egg problem: you need one to make one without great inconvenience or pain). However, I have finally stumbled across a design that doesn’t fail or annoy me in the ways that all the previous ones did.

It’s embarrassingly easy to make. By embarrassingly, I mean “why didn’t I figure this out decades ago?!” All you need is some surgical tape, and a piece of leather about as wide as your finger and long enough to wrap over your fingertip.

../../../_images/step0.png

1. Wrap your finger in a couple layers of surgical tape, sticky side out. You want it tight enough to not fall off too easily, but loose enough to slip on and off later.

../../../_images/step1.png

2. Put the piece of leather onto your finger over the tape, so it covers the spot you keep accidentally jabbing with the needle when you fail to use a thimble.

../../../_images/step2.png
3. Wrap the whole thing with a couple layers of surgical tape, sticky side in.
../../../_images/step3.png

That’s literally all there is to it. If you don’t like using leather, cardboard might work, or any plastic that’s flexible but sturdy enough to be hard to jab a needle through.

The one of these that I made earlier in the week and have been using ever since has molded to the shape of my finger and only gotten more comfortable over time.

../../../_images/oldnew.png

P.S. That’s a heavy silk jacquard, pretty on both sides, 28” wide, and the vendor currently has it on Please Go Away sale for $3.13/yard. The fibers burn and smell like silk, and it feels like silk, so I don’t think they’re lying about the composition. The selvedges aren’t too nice (one side is fuzzy and the other seems to have just been cut), but for a price like that I can’t complain. The orange is more coppery in natural light than the photos make it look on my monitor.

]]>
Tue, 05 Apr 2022 00:00:00 -0700
http://edunham.net/2022/03/18/lumenator.html http://edunham.net/2022/03/18/lumenator.html <![CDATA[Lumenator]]> Lumenator

About a year ago, I found out about lumenators. The theory is that if you put sunshine-ish amounts of light onto a creature, the creature reacts as it would in sunlight.

So, I built one. It’s technically brighter than the sun.

Mine is composed of 48 e26 sockets mounted in a 2’x4’ piece of plywood, with some switches that allow the rows to be turned on and off independently. It currently plugs into the wall, although I might eventually get around to hardwiring it if I ever get serious about cable management.

Each of the 48 sockets is intended to hold an LED bulb of around 2500 lumens. I use a mix of color temperatures, with most bulbs 5000K but some 3000K. This means that with all the bulbs in and all the rows lit, it emits around 120,000 lumens from approximately 0.74 square meters, or about 162,162 lux. According to Wikipedia, sunlight is around 100,000 lux once you account for the atmosphere.
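If you want to check that arithmetic, here’s the back-of-the-envelope version in Python (the bulb count and per-bulb lumens are from this build; the rest is unit conversion):

bulbs, lumens_each = 48, 2500
area_m2 = (2 * 0.3048) * (4 * 0.3048)  # the 2'x4' board is ~0.743 square meters
total_lumens = bulbs * lumens_each     # 120,000 lumens
print(round(total_lumens / area_m2))   # ~161,459 lux; rounding the area to 0.74 gives 162,162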

../../../_images/testing.png

Here’s how it looked with all the bulbs in it while I was testing it on my kitchen table. This photo was taken around 3pm in early April, 2021. That’s midafternoon springtime daylight for comparison, visible out through the window. Yes, I was testing it by sticking the exposed wires into a C13. No, it didn’t catch on fire. No, you should not try this at home.

I’ve mounted the board of sockets onto the wall above my desk, so turning it on is like opening a variably sunny window. I typically use about 12-24 of the bulbs on a day to day basis, as more can make it bright enough to be slightly uncomfortable.

I put off writing this post for a year because I never actually got around to filling in all 48 of the lamp sockets. A few of the bulbs I bought were dead on arrival, and another half dozen were duds that made a horrible buzzing noise when powered, so I removed them. Over time I’ve also swapped out some of the daylight bulbs from the apparatus to light other rooms of my house, and installed ordinary 750-lumen bulbs into those sockets, because it turns out that a matrix of light sockets is an extremely convenient place to store extra bulbs.

Sourcing materials

The plywood, paint, and romex were left over from other projects.

I had a tough time sourcing affordable e26 sockets. Apparently people don’t build ridiculous light fixtures from scratch on the cheap very much? I ended up using adapters meant to help you use e26 bulbs in GU24 sockets. The pins meant to interface with the GU24 socket were easy to solder wires onto later, and the cylindrical shape of the adapters made them easy to friction fit into holes in the plywood.

I made a big spreadsheet of price per lumen of e26 LED bulbs available various places, ruled out those whose specs were incomplete or self-contradictory, and got the cheapest bulbs that seemed likely to add up to Too Much Light.

The switches, 6-gang switchplate, and electrical plug came from Habitat For Humanity.

Build Process

I chose the size based on what plywood I had around. On the plywood, I drew a grid of 96 rectangles. I drilled holes in alternating rectangles of the grid, to maximize space between bulbs, because I was frightened of making the kind of mistake that would cause the whole thing to overheat and catch on fire.

../../../_images/drill-holes.png

The holes were drilled with a bit just a hair smaller than the GU24 adapter sockets, so that the sockets would be held in place by friction. A few holes came out too small for the sockets, but were easily enlarged with a rasp.

I also rasped off any noticeable splinters. If I’d been doing fancy carpentry, I would have sanded the whole thing at this point. I wasn’t, so I didn’t.

I painted the board white, for maximum light reflection. I thought about trying to do some kind of mirror or chrome finish behind the bulbs, but that seemed way too hard. If I was building it again, I would look for a plastic mirror and secure it to the plywood before drilling the holes. Or I might try gluing a sheet of mylar onto the wood after drilling the holes but before installing the sockets.

../../../_images/socket-install.png

The sockets installed flush with the board, by pushing them through the holes from the front side. I made sure to install all of them in the same orientation for each row, though this was actually unnecessary because alternating current doesn’t care about polarity. To get the sockets flush with the board, I found that it was easiest to fit a row in and then place the whole board face-down on a flat surface and walk on it between the sockets. That way everything ended up even with the floor. If you don’t like walking on your projects, you could probably achieve a similar effect by carefully hitting it with a hammer.

I then wired the rows of lights to the switches. 3 of the switches control 12 bulbs each, and 2 control 6 bulbs each, because this seemed like the best compromise between using all 5 available switches and getting it to work with the limited quantity of romex that I had available.

../../../_images/wire-tool.png

I test fit all the wires to make sure I had enough, then soldered them onto the pins of the GU24 adapters. Since I needed to strip 1” of insulation from the middle of a wire a bunch of times, I built a tool for the job by gluing razor blades to each side of the jaws of a pair of clothespins. That way I could just clip the tool onto the cable where I needed to strip it, spin it around so the blades cut through the insulation, and cut off the inch of insulation with another knife.

../../../_images/switches-connected.png ../../../_images/switch-plate.png

You’ll notice in the photos that I secured the switches by screwing them directly to the switch plate instead of putting them in a box. You’re normally supposed to put switches in boxes, and I might regret this shortcut someday, but I didn’t have a 6-gang box and it’s been fine so far.

Mind-numbing amounts of soldering later, I tested the assembly to make sure nothing was shorted, then powered it on to make sure it could light. Before installing it on the wall, I covered all the exposed wiring in electrical tape, mostly because I dislike exposed wires. Also, safety. But you’d get just as zapped from connecting the poles of any of those exposed sockets on the front as you would by connecting a pair of wires on the back, so worrying about hiding exposed conductors on the back but not the front is mostly aesthetic.

../../../_images/joints-taped.png

I then affixed it to the wall above my desk, using a piece of 2x4 at each short end of the plywood to hold everything well away from the wall. I left the top and bottom open to improve air circulation in case it ran into thermal problems, but in a year of use it hasn’t caught on fire.

It looks underwhelming in photos, because very bright lights are hard to photograph. This shows how I use it on an ordinary day like today, with a set of 12 5000K bulbs and a set of 6 3000K bulbs lit and the rest remaining off.

../../../_images/lit-up.png

Costs

If you built one of these today, it’d cost you:

  • about $20 for a 2’x4’ chunk of plywood
  • about $40 for a pack of 50 GU24 to E26 adapters
  • about $160 for 48 2500-lumen e26 bulbs
  • about $30 for 15 feet of 12 gauge 2-wire romex
  • maybe $10 for switches and a 6-gang plate at your local Habitat For Humanity?

So you’d be looking at around $260 for the whole build, unless you had materials on hand or found better deals on materials.

Was it worth it?

I like having my false sun in my office, because I feel like being in bright light helps my brain and body agree that it’s actually daytime. It also helps me fake a summer sleep schedule when the weather outside suggests wintry hibernation. I notice that being in natural daylight at dusk (even when dusk is at 6pm!) causes me to feel sleepy, whereas being in midday-sunlight light levels circumvents that process.

The false sun was somewhat inconvenient to build and install, and would not be accessible for everyone. I was able to make it because I have woodworking and soldering equipment already available, and because I was able to screw it directly to the wall of my office without penalty.

If I was renting my home, I would have mounted the lightbulb array in a bookcase or other tall piece of furniture instead of attaching it onto the wall. If I was building one of these for use in a rental, I’d probably keep the holes-in-plywood scheme for supporting the lamp sockets, but I would size it to mount over the top couple shelves of an Ikea Billy or similar cheap bookcase, including some room for airflow.

For other locations in my house, I’ve become partial to the 5,000-lumen 4’ LED “shop” lights that my local Harbor Freight sometimes has on sale for $20 apiece. I find them trivially easy to install and plenty bright to send most of that “it’s daytime!” signal. However, they’re over twice as expensive per lumen compared to the DIY version.

You probably shouldn’t build exactly what I did, as it’s bulky and inconvenient and not all that aesthetically pleasing. But I hope this project demonstrates that it’s perfectly achievable to create a light fixture brighter than the sun! When playing with electricity, please learn what you’re doing beforehand, be mindful that alternating current can kill you, and generally use common sense. If you try to build or modify something like this without having a basic idea of electrical safety, you’re just sticking a fork in an outlet but with extra steps.

]]>
Fri, 18 Mar 2022 00:00:00 -0700
http://edunham.net/2022/03/11/starlink_dressup.html http://edunham.net/2022/03/11/starlink_dressup.html <![CDATA[Playing Dress-Up With Starlink]]> Playing Dress-Up With Starlink

Some friends showed me a post investigating whether Starlink dishes still work when decorated in various ways. They asked whether I was able to reproduce the results. So I pulled the dish down off my roof and tested it with a few things that I had lying around the house and yard.

Methodology Notes:

I gathered data in my camera roll: a snapshot of the setup, followed by a screencap of the “router <-> internet” pane of the speedtest within the Starlink Android app. I left the dish set up in the same spot, which was at ground level but reported no obstructions when I did the sky scan thing in the app. I power cycled the dish once during the experiment, roughly halfway through the control tests for the “is it slower?” question, because I needed to plug something else into the extension cord that it was using for a minute.

I’m located south of the 45th parallel.

Will It Send?

../../../_images/starlink-stats.png

For my first batch of tests, I was curious whether I could get any data in and out via Starlink with various configurations of stuff on and around the dish. I only ran one speedtest each for these.

I started by suspending things over Starlink, because I knew some of the materials would get heavy (such as wet cotton cloth) and I was scared of hurting the little motors inside that it uses to move itself around. I draped a canvas tarp over the north side of a fence to make an impromptu photo studio for the dish. The dish had an unobstructed view of the sky upward and to the north. I placed metal folding chairs to the east and west of the dish, with their backs closest to the dish, to hold up the various covers. I used some old brake pads to weigh down the things that I draped over the chairs so they wouldn’t blow away.

  1. No cover, control to make sure Starlink works in this position. It sends. 146 Mbps down, 15 Mbps up.
  2. Covered with clear-ish greenhouse plastic. It sends. 148 Mbps down, 15 Mbps up.
  3. Same as (2) but with a cotton bedspread over the plastic. It sends. 170 Mbps down, 8 Mbps up.
  4. Same as (3) but with the bedspread soaking wet. It does not send. The app reports “offline, obstructed”.
  5. Plastic from (2) plus a single layer of corrugated cardboard box. It sends. 62 Mbps down, 9 Mbps up.
  6. Same as (5) but I dumped some water on the box. It mostly ran off. It sends. 65 Mbps down, 9 Mbps up.
  7. Double layer of row cover, like you put on plants in the garden to keep the frost from hurting them. It sends. 94 Mbps down, 11 Mbps up.
  8. Single layer of ratty old woven plastic tarp. It sends. 109 Mbps down, 6 Mbps up.

For the final 2 tests, I got a bit braver about putting things directly on the dish, instead of just suspending them above it.

  9. Dish in a black plastic trash bag like you’re throwing it out. It sends. 147 Mbps down, 10 Mbps up.
  10. Same tarp as from (8), but directly on the dish instead of hanging above it. It sends. 207 Mbps down, 20 Mbps up.

Is It Slower?

As you’ll notice in the numbers that I was getting in the above trials, sometimes covering the dish yields a faster single speed test than having it exposed. That seems wrong. I suspect that this is normal variance based on what satellites are available, but I don’t actually know what behavior is normal. So I did a few more tests in configuration 10, and a bunch of tests in configuration 1, to make sure we’re not in a timeline where obstructing a radio link somehow makes it perform better.

Plastic Tarp Covering Starlink (config 10):

Mbps Down    Mbps Up
207          20
173          5
138          11
124          13
180          19
101          9

Unobstructed Starlink (config 1):

Mbps Down    Mbps Up
104          24
114          4
119          10
100          16
111          14
147          5
147          5
97           15
218          15
87           15
176          16
192          6
125          6
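To quantify that, here’s a quick summary of the download numbers from the tables above (hand-transcribed, so treat as approximate):

from statistics import mean, stdev

tarp = [207, 173, 138, 124, 180, 101]  # config 10, Mbps down
clear = [104, 114, 119, 100, 111, 147, 147,
         97, 218, 87, 176, 192, 125]   # config 1, Mbps down

for name, data in (("tarp", tarp), ("unobstructed", clear)):
    print(name, "mean:", round(mean(data)), "stdev:", round(stdev(data)))

The means come out around 154 and 134 Mbps, and the difference between them is well within one standard deviation (about 40 Mbps) of either sample, so the tarp doesn’t appear to make the link meaningfully faster or slower.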

I left it all out overnight, and the canvas tarp that I’d been using as a visual backdrop blew off of the fence so it was covering the dish. I made a video call on the connection with the canvas over the dish and didn’t notice any subjective degradation of service compared to what I’m accustomed to getting from it.

Conclusions

Starlink definitely still works when I put dry textiles between the dish and the satellite. It doesn’t seem to matter if the textiles are directly on the dish or suspended a couple feet above it.

I’m surprised that Eric Kuhnke’s Starlink worked with 2 layers of wet bedsheet over it. The only case in which I managed to cause my Starlink to report itself as being obstructed was when I had a wet bedspread suspended in the air over it. I don’t really want to start piling wet cloth directly on the dish, though, because wet cloth is heavy and I’m scared of overloading the actuators.

Starlink gets kind of warm during normal operation, as demonstrated by the cats. I wouldn’t want to leave mine in a dark colored trash bag or under a dark colored tarp on a sunny day in case it overheated. And there’s no need to – if you don’t want people to know you’re using one, you can just suspend an opaque and waterproof tarp above it and there’ll be better airflow around the dish itself and thus less risk of overheating.

Have fun!

]]>
Fri, 11 Mar 2022 00:00:00 -0800
http://edunham.net/2022/02/11/tree_style_tab_setup.html http://edunham.net/2022/02/11/tree_style_tab_setup.html <![CDATA[tree-style tab setup]]> tree-style tab setup

How to get rid of the top bar in firefox after installing tree style tab:

In about:config (accept the risk), search for toolkit.legacyUserProfileCustomizations.stylesheets and hit the funny-looking button to toggle it to true.

In about:support, above the fold in the “Application Basics” section, find Profile Directory.

In that directory, mkdir chrome, then create userChrome.css, containing:

/* Hide the tab strip when tabs are drawn in the title bar */
#main-window[tabsintitlebar="true"]:not([extradragspace="true"]) #TabsToolbar
{
  opacity: 0;
  pointer-events: none;
}

/* Otherwise, collapse the tab strip entirely */
#main-window:not([tabsintitlebar="true"]) #TabsToolbar {
  visibility: collapse !important;
}
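In shell terms, the steps above look something like this (treating $PROFILE as a stand-in for the Profile Directory shown in about:support):

$ mkdir -p "$PROFILE/chrome"
$ $EDITOR "$PROFILE/chrome/userChrome.css"

Restart Firefox afterwards so it picks up the new stylesheet.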
]]>
Fri, 11 Feb 2022 00:00:00 -0800
http://edunham.net/2022/01/30/top_htop_in_windows.html http://edunham.net/2022/01/30/top_htop_in_windows.html <![CDATA[top/htop in windows is ctrl+shift+esc]]> top/htop in windows is ctrl+shift+esc

Helping a neighbor with a windows update issue today, I explained that asking a Linux admin to use Windows is like asking a Latin speaker to translate a document from Spanish. Most of the concepts are similar enough that they’ll be more helpful than a monolingual English speaker, but good guessing is not the same as fluency.

Since it was a problem with good directions on how to fix it, the process mostly went smoothly. As I complained to a Windows admin friend afterwards, it was fine except all the slashes in paths were backwards, and I couldn’t find the command prompt equivalent to Linux’s top or htop.

My Windows friend pointed out the obvious solution to me: The windows equivalent to top is not a command at all, but rather the task manager GUI. Next time I need top or htop in Windows, I’ll try to remember to hit ctrl+shift+esc to summon that interface instead.

And next time I’m searching the web for “windows command prompt top”, “windows equivalent of top command”, and other queries that assume Windows will let me live in the terminal like my preferred operating systems do, I might just end up back on this very post. Hi, future me!

]]>
Sun, 30 Jan 2022 00:00:00 -0800
http://edunham.net/2021/06/12/transcription.html http://edunham.net/2021/06/12/transcription.html <![CDATA[transcription with mplayer and i3]]> transcription with mplayer and i3

I recently wanted to manually transcribe an audio recording. I prefer to type into LibreOffice Writer for this purpose. Writer has an audio player plugin for transcription, but unfortunately its keyboard shortcuts didn’t work when I tried it.

I just want to play some audio in one workspace and have play/pause and 5-second rewind shortcuts work even when another window is focused.

Since I am using i3wm on Ubuntu, I can glue up a serviceable transcription setup from stuff that’s already lying around.

The first challenge is to persuade an audio player to accept commands while it’s not the window in focus. By complaining about this problem to someone more knowledgeable than myself, I learned about mplayer’s slave mode. From its docs, I learn that I can instruct mplayer to take arbitrary commands on a fifo as follows:

$ mkfifo /tmp/mplayerfifo
$ mplayer -slave -input file=/tmp/mplayerfifo audio-to-transcribe.mp3

Now I can test whether mplayer is listening on the fifo. And indeed, the audio pauses when I tell it:

$ echo pause > /tmp/mplayerfifo

At this time I also test the incantation to rewind the audio by 5 seconds:

$ echo seek -5 > /tmp/mplayerfifo

Since both commands work as expected, I can now create keyboard shortcuts for them in .i3/config:

bindsym $mod+space exec "echo pause > /tmp/mplayerfifo"
bindsym $mod+z exec "echo seek -5 > /tmp/mplayerfifo"

After writing the config, $mod+shift+c reloads it so i3 knows about the new shortcuts.

Finally, I’ll make sure this keeps working after I reboot. I’ll make an alias in my ~/.bashrc to save having to remember the mplayer incantation:

$ echo "alias transcribe='mplayer -slave -input file=/tmp/mplayerfifo" >> ~/.bashrc

And to automatically create the fifo once on boot:

$ echo "mkfifo /tmp/mplayerfifo" >> ~/.profile

Now after I source ~/.bashrc, I can play media with this transcribe alias, and the keyboard shortcuts control it from anywhere in my window manager.

]]>
Sat, 12 Jun 2021 00:00:00 -0700
http://edunham.net/2021/05/25/irssi_and_libera_chat.html http://edunham.net/2021/05/25/irssi_and_libera_chat.html <![CDATA[irssi and libera.chat]]> irssi and libera.chat

I’m in some channels that are moving from Freenode to Libera.

My irssi runs on a DigitalOcean droplet, and whenever I try to connect to Libera from that instance, I get the error:

[libera] !tungsten.libera.chat *** Notice -- You need to identify via SASL to use this server

Libera’s irssi guide (https://libera.chat/guides/irssi) says how to connect with SASL, and down in their sasl docs (https://libera.chat/guides/sasl) they mention that SASL is required for IP ranges that are easy to run bots on... including my VPS.

The fix is to pop open an IRC client locally (or use web IRC), connect to Libera without SASL, and register one’s nick and password. After verifying one’s email address over the regular connection, the network can be reached via SASL from anywhere using the registered nick as the username and the nickserv password as the password.
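For future reference, the incantations look roughly like this (nick, password, and email are placeholders; see Libera’s linked guides for the authoritative versions). First, from the non-VPS connection:

/msg NickServ REGISTER YourPassword you@example.com

Then, in irssi on the VPS, since irssi 1.2 and newer have SASL support built in:

/network add -sasl_username YourNick -sasl_password YourPassword -sasl_mechanism PLAIN liberachat
/server add -auto -tls -network liberachat irc.libera.chat 6697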

Obvious in retrospect, but poorly SEO’d for how the problem looks at the outset, so that’s how I worked around problems reaching Libera from Irssi on a VPS.

]]>
Tue, 25 May 2021 00:00:00 -0700
http://edunham.net/2021/04/03/assemblyline.html http://edunham.net/2021/04/03/assemblyline.html <![CDATA[Assembly Lines]]> Assembly Lines

This time last year, my living room was occupied by a cotton mask production facility of my own devising. I had reverse engineered a leftover surgical mask to get the approximate dimensions, consulted pictures of actual surgeons’ masks, and contrived a mask design which was easy-enough to sew in bulk, durable-enough to wash with one’s linens, and wearable-enough to fit most faces.

Tinkering with and improving the production line was delightful enough to make me wonder if I’d missed a deeper calling when I chose not to pursue industrial engineering as a career, but the actual work – the parts where I used myself as just another machine to make more masks happen – was profoundly miserable. At the time, it made more sense to attribute that misery to current events: The world as we knew it is ending, of course I’m grumpy. I took it for granted that the sewing project of making masks was equivalent to the design/prototype/build cycle of my more creative sewing endeavors, and assumed that it was supposed to be equally enjoyable.

A year later, however, I’m running a similar personal assembly line on an electrical project, and noticing some patterns. I have to do 4 steps each on 96 little widgets to complete this phase of the project. My engineering intuition says that the optimal process would be to do all of step 1, then all of step 2, then all of step 3, then all of step 4. That seems like it should be the fastest, and make me happy because it’s the best – no wasted effort taking out then putting away the set of tools for each step several times.

The large-batch process would also yield consistency across all of its outputs, so that no one widget comes out much worse than any other. Consistency is aesthetic and satisfying in the end result, so the process which yields consistency should feel preferable... but instead, it feels deeply distasteful to stick with any one production phase for too long. What’s going on there? What assumption is one side of the internal argument using that the other side lacks?

It took me 2 steps over about 24 of the widgets to figure out what felt so wrong about that assembly-line reasoning: The claims of “best” and “fastest” only hold if the process being done remains exactly the same on widget 96 as it was on widget 1. That’s true if a machine is doing it, but false if the worker is able and allowed to think about the process they’re working on. Larger batch sizes are optimal if the assembly process is unchanging, but detrimental if the process needs to be modified for efficiency or ease of use along the way. For instance, I’d initially planned a design that needed about 36’ of wire, but by examining and contemplating the project when it was ready to be wired up, I found a way to accomplish the same goals with only about 23’ of wire. If I’d been “perfectly efficient” in treating the initial design as perfect, I would likely have cut the wire into the lengths that were needed for the 36’ plan before the 23’ design occurred to me, and that premature optimization would have destroyed the materials I’d need to assemble the more efficient design once I figured it out.

In other words, a self-modifying assembly line necessarily shrinks the batch size that it’s worth producing. I’ve seen the same thing in software – when automating a process, it’s best to do it by hand a couple times, and then test a script on a small batch of input and fix any errors, and then apply it to larger and larger batches as it gets closer and closer to the best I can get it. It’s just easier to notice the phenomenon in a process that uses the hands while leaving the brain mostly free than in processes of more intellectual labor.

And there was the answer as to why attempting to do all 96 of step 1, then all 96 of step 2, felt terrible: Because using the maximum batch size implied that the process was as good as I’d be able to get it, and that any improvements I might think of while working would be wasted if they weren’t backwards-compatible with the steps of the old process that were already completed. Smaller batch sizes, then, have an element of hope to them: There will be a “next time” of the whole process, so thinking about “how I’d do it next time” has a chance to pay off.

]]>
Sat, 03 Apr 2021 00:00:00 -0700
http://edunham.net/2020/05/30/pulseaudio_on_volumio.html http://edunham.net/2020/05/30/pulseaudio_on_volumio.html <![CDATA[pulseaudio & volumio]]> pulseaudio & volumio

The speakers in my living room are hooked up to a raspberry pi that runs Volumio. It’s a nice way to play music from various sources without having to physically reconfigure the speakers between inputs.

Volumio as a pulseaudio output

For a while, my laptop was able to treat Volumio as just another output device, based on the following setup:

  • the package pulseaudio-module-zeroconf was installed on the pi and on every laptop that wants to output audio through the living room speakers
  • the lines load-module module-zeroconf-publish and load-module module-native-protocol-tcp were added to /etc/pulseaudio/default.pa on the pi
  • the line load-module module-zeroconf-discover was added to /etc/pulse/default.pa on my Ubuntu laptop
  • pulseaudio was restarted on both devices after these changes (pulseaudio -k to kill, pulseaudio to start it)
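When it’s all working, the pi shows up as an extra sink on the laptop. A quick way to check from the laptop (the tunnel. prefix is how pulseaudio names zeroconf-discovered sinks, though the exact name will vary):

$ pactl list short sinks | grep tunnel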

starting pulseaudio on boot on Volumio

And then as long as the laptop was connected to the same wifi network as the pi, it Just Worked. Until, in the course of troubleshooting an issue that turned out to involve the laptop having chosen the wrong wifi, I power cycled the pi and it stopped working, because pulseaudio was not yet configured to start on boot.

The solution was to add the following to /etc/systemd/system/pulseaudio.service on the pi:

[Unit]
Description=PulseAudio system server

[Service]
Type=notify
User=volumio
ExecStart=/usr/bin/pulseaudio --daemonize=no --system --realtime --log-target=journal

[Install]
WantedBy=multi-user.target
And then enabling, starting, and troubleshooting any failures to start:

systemctl --system enable pulseaudio.service
systemctl --system start pulseaudio.service
systemctl status pulseaudio.service -l # explain what went wrong
systemctl daemon-reload # run after editing the .service file

Thanks to Rudd-O’s blog post, which got me 90% of the way to the “start pulseaudio on boot” solution. Apparently systemctl started caring more about having an ExecStart directive since that post was written, so I had to inspect the resulting errors, and now I’m writing down the tidbit of knowledge I gained so that I can find it again later.

future work

Nobody in my household has yet found a good way to persuade the Windows computer who lives under the TV to speak pulseaudio. If I ever figure that out, I’ll update here.

]]>
Sat, 30 May 2020 00:00:00 -0700
http://edunham.net/2020/05/22/moving_on_from_mozilla.html http://edunham.net/2020/05/22/moving_on_from_mozilla.html <![CDATA[Moving on from Mozilla]]> Moving on from Mozilla

Today – Friday, May 22nd, 2020 – is within days of my 5-year anniversary with Mozilla, and it’s also my last day there for a while. Working at Mozilla has been an amazing experience, and I’d recommend it to anyone.

There are some things that Mozilla does extremely well, and I’m excited to spread those patterns to other parts of the industry. And there are areas where Mozilla has room for improvement, where I’d like to see how others address those challenges and maybe even bring back what I learn to Moz someday.

Why go?

When I try to predict what my 2025 or 2030 self will wish I’d done with my career around now, I anticipate that I’ll want access to opportunities which build on a background of technical leadership and mentoring junior engineers.

It wouldn’t be impossible to create these opportunities within Mozilla, but from talking with trusted mentors both inside and outside the company, I’ve concluded that I would get a lot more impact for the same effort if I was working within a growing organization.

As a mature organization, Mozilla’s internal leadership needs are very different from those of a younger and more actively growing company. There’s a far higher bar at Moz for what it takes to be the best person for a task, because the saturation of “best people” is quite high and the prevalence of entirely new tasks is relatively low in comparison. Technical leadership here seems to often require creating a need as well as filling it. At a growing organization, on the other hand, the types and availabilities of such opportunities are very different.

I’m especially looking forward to leveling up on a different stack in my next role, to improve my understanding of the nuances of the underlying problems our technologies address. I think it’s a bit like learning a second language: only through comparing and contrasting multiple solutions to the same sort of problem can one understand which traits correlate with all those solutions’ strengths versus which details are simply incidental.

Why now?

I’ll be the first to admit that May 2020 is a really strange time to be changing jobs. But I have an annual tradition of interviewing at several places, learning what their stacks and cultures and unique fractals of tech debt look like, and then turning down an offer or two because changing roles would be a step backwards for both my career development and overall quality of life.

Shortly before the global conference circuit ground to a halt along with everything else, I started interviewing for a DevOps Advocate position, just to explore what it might look like to turn my teaching hobby into a day job. By the time those interviews were complete, the tech evangelism space had been turned inside out and was rapidly reinventing itself, and the skills that qualified me for the old world of devrel were looking less and less like the kind of expertise that might be needed to succeed in the new one. However, an SRE from the technical interviews suggested that I interview for her team, and upon taking that advice I discovered an organization that keeps most of the stuff I loved about Mozilla while also offering the other opportunities that I was looking for.

As with anywhere, there are a few aspects of my new role that I suspect may not be as great yet as where I’m leaving, but these areas of improvement look like things that I’ll be able to have some influence over. Or at least there’ll be room to push the Overton Window in a good direction!

Want more details on the new role? I’ll be writing more about it after I start on June 1st!

]]>
Fri, 22 May 2020 00:00:00 -0700
http://edunham.net/2020/05/22/offboarding.html http://edunham.net/2020/05/22/offboarding.html <![CDATA[Offboarding]]> Offboarding

Turns out that 5 years at a place gets you a bit of a pile of digital detritus. Future me might want notes on what-all steps I took to remove myself from everything, so here goes:

  • GitHub: Clicking the “pull requests” thing in that bar at the top gives a list of all open PRs created by me. I closed out everything work-related, by either finishing or wontfix-ing it. Additionally, I looked through the list of organizations in the sidebar of my account and kicked myself out of owner permissions that I no longer need. Since my GitHub workflow at Mozilla included a separate account for holding admin perms on some organizations, I revoked all of that account’s permissions and then deleted it.
  • Google Drive: (because moving documents around through the Google Docs interface is either prohibitively difficult or just impossible) I moved all notes docs that anyone might ever want again into a shared team folder.
  • Bugzilla: The “my dashboard” link at the top, when logged in, lists all needinfos and open assigned bugs. I went through all of these and removed the needinfos from closed bugs, changed the needinfos to appropriate people on open bugs, and reassigned assigned bugs to the people who are taking over my old projects. When reassigning, I linked the appropriate notes documents in the bugs and filled in any contextual information that they didn’t capture. I also checked that my Bugzilla admin had removed all settings to auto-assign me bugs in certain components.
  • Email deletion prep: I searched for my old work email address in my password manager to find all accounts that were using it. I deleted these accounts or switched them to a personal address, as necessary. It turned out that the only thing I needed to switch over was my Firefox account, which I initially set up to test a feature on a service I supported, but then found very useful.
  • Git repos: When purging pull requests and bugs, I pushed my latest work from actively developed branches, so that no work will be lost when I wipe my laptop.
  • Assorted other perms: Some developers had granted me access to a repo of secrets, so I contacted them to get that access revoked.
  • Sharing contact info: I didn’t send an email to the all-company list, but I did email my contact info to my teammates and other colleagues with whom I’d like to keep in touch.
  • Take notes on points of contact. While I still have access to internal wikis, I note the email addresses of anyone I may need to contact if there are problems with my offboarding after my LDAP is decommissioned.
  • Wipe the laptop: That’s next. All the repos of Secret Secrets are encrypted on its disk and I’ll lose the ability to access an essential share of the decryption key when my LDAP account goes away, but it’s still best practice to wipe hardware before returning it. So I’ll power it off, boot it from a liveUSB, and then run a few different tools to wipe and overwrite the disk, along the lines of the sketch below.
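The wipe itself will be something like this from the live USB (the device name is deliberately a placeholder; check lsblk carefully before pointing anything destructive at a disk):

$ lsblk                        # identify the right disk first
$ sudo shred -v -n 1 /dev/sdX  # one verbose overwrite pass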
]]>
Fri, 22 May 2020 00:00:00 -0700
http://edunham.net/2020/02/18/git_move_module_to_monorepo.html http://edunham.net/2020/02/18/git_move_module_to_monorepo.html <![CDATA[Git: moving a module into a monorepo]]> Git: moving a module into a monorepo

My team has a repo where we keep all our terraform modules, but we had a separate module off in its own repo for reasons that are no longer relevant.

Let’s call the modules repo git@github.com:our-org/our-modules.git. The module moving into it, let’s call it git@github.com:our-org/postgres-module.git, because it’s a postgres module.

First, clone both repos:

git clone git@github.com:our-org/our-modules.git
git clone git@github.com:our-org/postgres-module.git

I can’t just add postgres-module as a remote to our-modules and pull from it, because I need the files to end up in a subdirectory of our-modules. Instead, I have to make a commit to postgres-module that puts its files in exactly the place that I want them to land in our-modules. If I didn’t, the README.md files from both repos would hit a merge conflict.

So, here’s how to make that one last commit:

cd postgres-module
mkdir postgres
git mv *.tf postgres/
git mv *.md postgres/
git commit -m "postgres: prepare for move to modules repo"
cd ..

Notice that I don’t push that commit anywhere. It just sits on my filesystem, because I’ll pull from that part of my filesystem instead of across the network to get the repo’s changes into the modules repo:

cd our-modules
git remote add pg ../postgres-module/
git pull pg master --allow-unrelated-histories
git remote rm pg
cd ..

At this point, I have all the files and their history from the postgres module in the postgres directory of the our-modules repo. I can then follow the usual process to PR these changes to the our-modules remote:

cd our-modules
git checkout -b import-pg-module
git push origin import-pg-module
firefox https://github.com/our-org/our-modules/pull/new/import-pg-module
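Before opening the PR, a quick way to confirm the history came along for the ride (the path is the new subdirectory; the output should be the module’s old commits):

git log --oneline -- postgres/ | tail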

We eventually ended up skipping the history import for this module, but figuring out how to do it properly was still an educational exercise.

]]>
Tue, 18 Feb 2020 00:00:00 -0800
http://edunham.net/2020/01/04/finding_lost_minecraft_base.html http://edunham.net/2020/01/04/finding_lost_minecraft_base.html <![CDATA[Finding a lost Minecraft base]]> Finding a lost Minecraft base

I happen to administer a tiny, mostly-vanilla Minecraft server. The other day, I was playing there with some friends at a location out in the middle of nowhere. I slept in a bed at the base, thinking that would suffice to get me back again later.

After returning to spawn, installing a warp plugin (and learning that /warp comes from Essentials), rebooting the server, and teleporting to some other coordinates to install their warps, I tried killing my avatar to return it to its bed. Instead of waking up in bed, it reappeared at spawn. Since my friends had long ago signed off for the night, I couldn’t just teleport to them. And I hadn’t written down the base’s coordinates. How could I get back?

Some digging in the docs revealed that there does not appear to be any console command to get a server to disclose the last seen location, or even the bed location, of an arbitrary player to an administrator. However, the server must know something about the players, because it will usually remember where their beds were when they rejoin the game.

On the server, there is a world/playerdata/ directory, containing one file per player that the server has ever seen. The file names are the player UUIDs, which can be pasted into this tool to turn them into usernames. But I skipped the tool, because the last modified timestamps on the files told me which two belonged to the friends who had both been at our base. So, I copied a .dat file that appeared to correspond to a player whose location or bed location would be useful to me. Running file on the file pointed out that it was gzipped, but unzipping it and checking the result for anything useful with strings yielded nothing comprehensible.

The wiki reminded me that the .dat was NBT-encoded. The recommended NBT Explorer tool appeared to require a bunch of Mono runtime stuff to be compatible with Linux, so instead I grabbed some code that claimed to be a Python NBT wrapper to see if it would do anything useful. With some help from its examples, I retrieved the player’s bed location:

from nbt import nbt  # the NBT package, usually installed with: pip install NBT

# Player .dat files are gzipped NBT; NBTFile handles the decompression.
n = nbt.NBTFile("myfile.dat", 'rb')
print("x=%s, y=%s, z=%s" % (n["SpawnX"], n["SpawnY"], n["SpawnZ"]))

Teleporting to those coordinates revealed that this was indeed the player’s bed, at the base I’d been looking all over for!

The morals of this story are twofold: First, I should not quit writing down coordinates I care about on paper, and second, Minecraft-adjacent programming is still not my idea of a good time.

]]>
Sat, 04 Jan 2020 00:00:00 -0800
http://edunham.net/2019/12/30/toy-hypercube.html http://edunham.net/2019/12/30/toy-hypercube.html <![CDATA[Toy hypercube construction]]> Toy hypercube construction

I think hypercubes are neat, so I tried to make one out of string to play with. In the process, I discovered that there are surprisingly many ways to fail to trace every edge of a drawing of a hypercube exactly once with a single continuous line.

This puzzle felt like the sort of problem that some nerd had probably solved before, so I searched the web and discovered that the shape I was trying to configure the string into is called an Eulerian cycle.

I learned that any connected graph in which every vertex attaches to an even number of edges has such a cycle, which is useful for my craft project because the Euler cycle is literally the path that the string needs to take to make a model of the object represented by the graph.

Mathematical materials

To construct a toy hypercube or any other graph, you need the graph. To make it from a single piece of string, every vertex must have an even number of edges.

Knowing the number of edges in the graph will be useful later, when marking the string.
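For anyone who’d rather make a computer do the puzzle, here’s a minimal Python sketch that builds the 4-cube’s edge list and finds an Euler cycle with Hierholzer’s algorithm (one way to do it, not the only one):

from collections import defaultdict

def hypercube_edges(d):
    # Vertices are d-bit integers; edges join vertices that differ in one bit.
    return [(v, v ^ (1 << b)) for v in range(2 ** d)
            for b in range(d) if v < v ^ (1 << b)]

def euler_cycle(edges):
    # Hierholzer's algorithm: walk until stuck, then back up, splicing in detours.
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    assert all(len(nbrs) % 2 == 0 for nbrs in adj.values()), "every vertex needs even degree"
    stack, cycle = [next(iter(adj))], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)  # consume the edge in both directions
            stack.append(w)
        else:
            cycle.append(stack.pop())
    return cycle

edges = hypercube_edges(4)
path = euler_cycle(edges)
print(len(edges), "edges,", len(path), "marks on the string")  # 32 edges, 33 marks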

Physical materials

For the edges of the toy, I wanted something that’s a bit flexible but can sort of stand up on its own. I found that cotton clothesline rope worked well: it’s easy to mark, easy to pin vertex numbers onto, and sturdy but still flexible. I realized after completing the construction that it would have been clever to string items like beads onto the edges to make the toy prettier and identify which edge is which.

For the vertices, I pierced jump rings through the rope, then soldered them shut, to create flexible attachment points. This worked better than a previous prototype in which I used flimsier string and made the vertices from beads.

Vertices could be knotted, glued, sewn, or safety pinned. A bookbinding awl came in handy for making holes in the rope for the rings to go through.

Mathematical construction

First, I drew the graph of the shape I was trying to make – in this case, a hypercube. I counted the edges at each vertex: 4. I made sure to draw each vertex with spots to write numbers in, half as many numbers as there are edges, because each time the string passes through a vertex it accounts for 2 edges. So in this case, every vertex needs room to write 2 numbers on it.

Here’s the graph I started with. I drew the edges in a lighter color so I could see which had already been visited when drawing in the Euler cycle.

../../../_images/one1.jpg

Then I started from an arbitrary vertex and drew in the line. Any algorithm for finding Euler paths will suffice to draw the line. The important part of tracing the line on the graph is to mark each vertex it encounters, sequentially. So the vertex I start at is 1, the first vertex I visit is 2, and so forth.

Since the Euler path visits every vertex of my particular hypercube twice, every vertex will have 2 numbers (the one I started at will have 3) when I finish the math puzzle. These pairs of numbers are what tell me which part of the string to attach to which other part.
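That bookkeeping can be scripted too. Continuing the sketch above, this walks the computed path and records the 1-based positions at which it visits each vertex – exactly the string marks that get attached together (the start vertex collects 3, every other vertex 2):

from collections import defaultdict

labels = defaultdict(list)
for mark, vertex in enumerate(path, start=1):  # 'path' from the earlier sketch
    labels[vertex].append(mark)
for vertex, marks in sorted(labels.items()):
    print("vertex %2d: join string marks %s" % (vertex, marks))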

Here’s what my graph looked like once I found an Euler cycle in it and numbered the vertices that the cycle visited:

../../../_images/two1.jpg

Physical construction

Since my graph has 32 edges, I made 33 evenly spaced marks on the string. I used an index card to measure them because that seemed like an ok size, but in retrospect it would have been fine if I’d made it smaller.

../../../_images/three1.jpg

I then numbered each mark in sequence, from 1 to 33. I numbered them by writing the numbers on slips of paper and pinning the papers to the rope, but if I was using a ribbon or larger rope, the numbers could have been written directly on it. If you’re doing this at home, you could mark the numbers on masking tape on the rope just as well.

../../../_images/four1.jpg

The really tedious step is applying the vertices. I just went through the graph, one vertex at a time, and attached the right points on the string together for it.

The first vertex had numbers 1, 25, and 33 on it for the euler cycle I drew and numbered on the graph, so I attached the string’s points 1, 25, and 33 together with a jump ring. The next vertex on the drawing had the numbers 2 and 18 on it, so I pierced together the points on the string that were labeled 2 and 18.

I don’t think it matters what order the vertices are assembled in, as long as the process ultimately results in all the vertices on the graph being represented by rings affixing the corresponding points on the string together.

I also soldered the rings shut, because after all that work I don’t want them falling out.

../../../_images/five1.jpg

That’s all there is to it!

../../../_images/seven1.jpg

I’m going to have to find a faster way to apply the vertices before attempting a 6D hypercube. An ideal vertex would allow all edges to rotate and reposition themselves freely, but failing that, a lighter weight string and crimp fasteners large enough to hold 6 pieces of that string might do the trick.

The finished toy is not much to look at, but quite amusing to try to flatten into 3-space.

../../../_images/six1.jpg
]]>
Mon, 30 Dec 2019 00:00:00 -0800
http://edunham.net/2019/09/17/kubectl_unable_to_recognize_stdin.html http://edunham.net/2019/09/17/kubectl_unable_to_recognize_stdin.html <![CDATA[kubectl unable to recognize STDIN]]> kubectl unable to recognize STDIN

Or, Stupid Error Of The Day. I’m talking to GCP’s Kubernetes Engine through several layers of intermediate tooling, and kubectl is failing:

subprocess.CalledProcessError: Command '['kubectl', 'apply', '--record', '-f', '-']' returned non-zero exit status 1.

Above that, in the wall of other debug info, is an error of the form:

error: unable to recognize "STDIN": Get https://11.22.33.44/api?timeout=32s: dial tcp 11.22.33.44:443: i/o timeout

This error turned out to have such a retrospectively obvious fix that nobody else seems to have published it.

When setting up the cluster on which kubectl was failing, I added the IP from which my tooling would access it, and hit the “done” button to save my changes. (That’s under the Authorized Networks section in “kubernetes engine -> clusters -> edit cluster” if you’re looking for it in the GCP console.) However, pressing the “done” button is only the first of two steps required to save changes: one must also scroll all the way to the bottom of the page and press the “save” button there.

So if you’re here because you Googled that error, go recheck that you really do have access to the cluster on which you’re trying to kubectl apply. Good luck!

]]>
Tue, 17 Sep 2019 00:00:00 -0700
http://edunham.net/2019/08/26/ccc_camp.html http://edunham.net/2019/08/26/ccc_camp.html <![CDATA[What to bring to CCC Camp next time]]> What to bring to CCC Camp next time

I took last week off work and attended CCC camp, which was wonderful on a variety of axes. I packed light, but through the week I noted some things it’d be worth packing less-lightly for.

So, here are my notes on what it’d be worth bringing if or when I attend it again:

Clothing

The site is dusty, extremely hot through the day, and quite cold at night. Fashion ranges from “generic nerd” to hippie, rave, and un-labelably eccentric. There is probably no wrong thing to wear, though I didn’t see a single suit or tie. A full base layer and a silk sleeping bag liner improve comfort at night. A big hat, or even an umbrella, offers protection from the day star.

I was glad to have 3 pairs of shoes: Lightweight waterproof sandals for showering, sturdier sandals for walking around in all day, and boots for early mornings and late nights. I saw quite a few long coats and even cloaks at night, and their inhabitants all looked very comfortably warm.

Doing sink laundry was more inconvenient at camp than for ordinary travel, and I was glad to have packed to minimize it.

A small comfortable bag, or large pockets in every outfit, are essential for keeping track of one’s wallet, phone, map, and water bottle.

I occasionally found myself wishing that I’d brought a washable dust mask, usually around midafternoon when camp became one big dust cloud.

Campsite Amenities

Covering a tent in space blankets makes it look like a baked potato, but keeps it warm and dark at night and cool through early afternoon. Space blankets are super cheap online, but difficult to find locally.

For a particularly opulent tent experience, consider putting a doormat outside the entrance as a place to remove shoes or clean dusty feet before going inside. I improvised a doormat with a trash bag, which was alright but the real thing would have been nicer to sit on.

Biertisch tables and benches are prevalent around camp, so you can usually find somewhere to sit, but it doesn’t hurt to bring a camp chair of the folding or inflatable variety. Inflatable stuff, from furniture to swimming pools, tended to survive fine on the ground.

I was glad to have brought a full sized towel rather than a tiny travel one. A shower caddy or bag to carry soap, washcloth, hair stuff, and clean clothes would have been handy, though I improvised one from another bag that I had available.

String and duct tape came in predictably handy in customizing my campsite.

Electronics

DECT phones are very fun at camp, but easy to pocket dial with. This is solved by finding the lock feature on the keypad, or picking a flip phone. I was shy about publishing my number and location in the phonebook, but after seeing how helpful the directory was for people to get ahold of new acquaintances for important reasons, I would be more public about my temporary number in the future.

Electricity is a limited resource but sunlight isn’t. Many tents sport portable solar panels. For those whose electronics have non-European plugs, a power strip from home is a good idea.

I packed a small headlamp and used it pretty much every day. Even with it, I found myself occasionally wishing that I’d brought a small LED lantern as well.

A battery to recharge cell phones is good to have as well, especially if you don’t run power to your tent. A battery can be left charging unattended in all kinds of public places where one would never leave one’s phone or laptop.

Food

Potable water is free, both still and sparkling. Perhaps I’ve been spoiled by the quality of the tap water at my home, but I wished that I’d brought water flavorings to mask the local combination of minerals.

I brought a small medical kit, from which I ended up using or sharing some aspirin, ibuprofen, antihistamines, and lots of oral rehydration salt packets.

Meals were available for free (with donations gratefully accepted) at several camps for everyone, and at the Heaven kitchen for volunteers. There were also a variety of food carts with varyingly priced dishes. The food carts outside the gates in front of the venue were good for an ice cream or fresh veggie snack, both of which were harder to find within camp.

Savory meals and all kinds of drinks were everywhere, but there didn’t seem to be any place nearby to just pick up straight chocolate. Small, nonperishable snacks like that are worth getting at a grocery before arrival, since they’re not readily available on the grounds.

Other

If any of the special skills that your nerd friends ask you for help with require tools, bring them. I happen to always carry a needle and thread when traveling, and ended up using them to repair a giant inflatable computer-controlled sculpture.

A hammock, and something to shade it with, came in very handy and would be worth bringing again. There were lots of trees, and it might have been entertaining to set up a slackline for passers-by to fall off of, but I don’t think it’d be worth the weight of carrying one internationally.

Night time is basically a futuristic art show as well as a party. There’s no such thing as too much electroluminescent wire or too many LEDs, whether for decorating your camp or yourself. As a music party, it’s also extremely loud, so I was glad to have brought earplugs. Comfortable earplugs also improve sleep; music goes till 3 or 4 AM in many places and early risers start making noise around 8 or 9.

Camp has a lake, in which it’s popular to float large inflatable animals, especially unicorns. I saw more big inflatable unicorn floaties being used around camps as extra seating than being used in the lake, though.

There’s a railway that goes around camp, and sometimes runs a steam train. I won’t say you should rig a little electric cart to fit its rails and drive around on it, but somebody did and looked like they were having a really wonderful time.

Bikes, and lots of folding bikes, were everywhere. Scooters, skateboards, and all sorts of other wheeled contrivances, often electric, were also prevalent. The only rolling transportation that I didn’t see at all around camp was roller blades and skates, because the ground is probably too rough for them.

I ran out of stickers, and wished I’d brought more. I didn’t see as many pins as some conferences have.

A small notebook also came in handy. Each day, I checked both the stage schedule and the calendar to find the official and unofficial events which looked interesting, and noted their times on paper. It was consistently convenient to have a means of jotting down notes which didn’t risk running out of battery. Flipping through the book afterwards, about 1/4 of its contents is actually pictures I drew to explain various concepts to people I was chatting with, a few pages are daily schedule notes, and the rest is about half notes on things that presenters said and half ideas I jotted down to do something with later.

I was glad to have brought cash rather than just cards, not only for food but also because many workshops had a small fee to cover the cost of the materials that they provided.

Camp Advice

Nobody even tries to maintain a normal sleep schedule. People sleep when they’re tired, and do stuff when they aren’t. Talks and events tend to be scheduled from around noon to around midnight. I don’t think it would be possible to attend camp with a rigorous plan for what to do every day, stick to that plan, and still get the most out of the experience.

In shared spaces, people pick the lowest common denominator of language – at several workshops, even those initially scheduled to be held in German, presenters proactively asked whether any attendees needed English, and switched to English if asked. Behind the scenes, such as in the volunteers’ kitchen, I found that this was reversed: everyone speaks German, and only switches to give you instructions if you specifically ask for English. Plenty of attendees have no German at all and get along fine.

Volunteer! If something isn’t happening how it should, fix it, or ask “how can I help?”. Volunteering an hour or two for filing badges or washing dishes is a great way to make new friends and see another side of how camp works.

]]>
Mon, 26 Aug 2019 00:00:00 -0700
http://edunham.net/2019/06/24/describing_mentorship.html http://edunham.net/2019/06/24/describing_mentorship.html <![CDATA[More on Mentorship]]> More on Mentorship

Last year, I wrote about some of the aspirations which motivated my move from Mozilla Research to the CloudOps team. At the recent Mozilla All Hands in Whistler, I had the “how’s the new team going?” conversation with many old and new friends, and that repetition helped me reify some ideas about what I really meant by “I’d like better mentorship”.

To generalize about how mentors’ careers affect what they can mentor me on, I’ve sketched up a quick figure in order to name some possible situations that people can be in relative to one another:

../../../_images/places-to-find-mentors.png

The first couple cases of mentorship are easy to describe, because I’ve experienced and thought about them for many years already:

Mentorship across industries

Mentors from outside my own industry are valuable for high level perspectives, and for advice on general life and human topics that aren’t specialized to a single field. Additionally, specialists in other industries often represent the consumers of my own industry’s products. Wise and thoughtful people who share little to none of my domain knowledge can provide constructive feedback on why my industry’s work gets particular reactions from the people it affects – just as someone who’s never read a particular book before is likely to catch more spelling errors than its own author, who’s been poring over the same manuscript for many hours a day for several years.

However, for more concrete problems within my particular career (“this program is running slower than expected”, or even “how should I describe that role on my resume?”), observers from outside of it can rarely offer a well tested recommendation of a path forward.

Mentorship across companies within an industry

Similarly, mentors from other companies within my own industry are my go-to source of insight on general trends and technologies. A colleague in a distant corner of my field can tell me about the frustrations they encountered when using a piece of technology that I’m considering, and I can use that advice to make better-informed choices in my daily work.

But advice on a particular company’s peculiarities rarely translates well across organizations. A certain frequency of reorganization might be perfectly ordinary at my company, but a re-org might indicate major problems at another. This type of education, while difficult to get from someone at a different company, is perfectly feasible to pick up from anyone on another team within one’s own organization.

Mentorship across teams within a company

When I switched roles, I had trial-and-errored my way into the observation that there’s a large class of problems with which mentors from different teams within the same company cannot effectively help. I’d tentatively call these “junior engineer problems”, as having overcome their general cases seems to correlate strongly with seniority. In my own experience, code-adjacent skills are honed by experience and by observing expert peers, rather than by simply asking questions in words: the intuition for which problems should be solvable from the docs versus when and whom to ask for help, how deeply to explore a prospective course of action before committing to it, and when to write off an experiment as “effectively impossible”.

Mentorship across projects or specialties within a team

I had assumed that simply being on the same team as people capable of imparting that highly specialized variant of common sense would suffice to expose me to it. However, my first few projects on my new team have clearly shown, in both the positive and the negative cases, that working on the same project as an expert is far more useful to my own growth than simply chancing to be bureaucracied into the same group.

The negative case was my first pair of projects: The migration of 2 small, simple services from my team’s AWS infrastructure to GCP. Although I was on the same team as experts in this process, the particular projects were essentially mine alone, and it was up to me to determine how far to proceed on each problem by myself before escalating it to interrupt a busy senior engineer. My heuristics for that process weren’t great, and I knew that at the outset, but my bias toward asking for help later than was optimal slowed the process of improving my ability to draw that line – how can one enhance one’s discrimination between “too soon”, “just right”, and “too late” when all the data points one gathers are in the same one of those categories?

Mentorship within a project

Finally, however, I’m in the midst of a project that demonstrates a positive case for the type of mentorship I switched teams to seek. I’m in the case labeled A on the diagram up above – I’m working with a more-experienced teammate on a project which also includes close collaboration with members of another team within our organization. In examining why this is working so much better for me than my prior tasks, I’ve noticed some differences: First, I’m getting constant feedback on my own expectations for my work. This is not a formal or bureaucratic process, but simply a series of tiny interactions – expressions of surprise when I complete a task effectively, or recommendations to move on to a different approach when something seems to take too long. Similarly, code review from someone immersed in the same problem that I’m working on is indescribably more constructive than review from someone who’s less familiar with the nuances of whatever objective my code is trying to achieve.

Another reason that I suspect I’m improving more quickly than before in this particular task is the opportunity to observe my teammate modeling the skills that I’m learning in his interactions with our colleagues from another team (those in position C on that chart). There’s always a particular trick to asking a question in a way that elicits the category of answer one actually wanted, and watching this trick done frequently in circumstances where I’m up to date on all the nuances and details is a great way to learn.

The FOSS loophole

I suspect I may have been slower to notice these differences than I otherwise might have been, because the start of my career included a lot of fantastic, same-project mentorship from individuals on other teams, at other companies, and even in other industries. This is because my earliest work was on free and open source software and infrastructure. In FOSS, anyone who can pay with their time and computer usage buys access to a cross-company, often cross-industry web of professionals, and can derive all the benefits of working directly with mentors on a single project. I was particularly fortunate to draw a wage from the OSU Open Source Lab while doing that work, because the opportunity cost of hours spent on FOSS by a student who also needs to spend those hours on paid work is far from zero.

]]>
Mon, 24 Jun 2019 00:00:00 -0700
http://edunham.net/2019/04/06/rustacean_hat_pattern.html http://edunham.net/2019/04/06/rustacean_hat_pattern.html <![CDATA[Rustacean Hat Pattern]]> Rustacean Hat Pattern

Based on feedback from the crab plushie pattern, I took more pictures this time.

There are 40 pictures of the process below the fold.

Materials required

../../../_images/one.jpg
  • About 1/4 yard or 1/4 meter of orange fabric. Maybe more if it’s particularly narrow. Polar fleece is good because it stretches a little and does not fray near seams.
  • A measuring device. You can just use a piece of string and mark it.
  • Scissors, a sewing machine, pins, orange thread
  • Scraps of black and white cloth to make the face
  • The measurements of the hat wearer’s head. I’m using a hat to guess the measurements from.
  • A pen or something to mark the fabric with is handy.

Constructing the pattern pieces

If you’re using polar fleece, you don’t have to pre-wash it. Fold it in half. In these pictures, I have the fold on the left and the selvedges on the right.

The first step is to chop off a piece from the bottom of the fleece. We’ll use it to make the legs and spines later. Basically like this:

../../../_images/two.jpg

Next, measure the circumference you want the hat to be. I’ve measured on a hat to show you.

../../../_images/three.jpg

Find 1/4 of that circumference. If you measured with a string, you can just fold it, like I folded the tape measure. Or you could use maths.

../../../_images/four.jpg

That quarter-of-the-circumference is the distance that you fold over the left side of the big piece of fabric. Like so:

../../../_images/five.jpg

Leave it folded over, we’ll get right back to it. Guesstimate the height that a hat piece might need to be, so that we can sketch a piece of the hat on it. I do this by measuring front to back on a hat and folding the string, I mean tape measure, in half:

../../../_images/six.jpg

Back on the piece we folded over, put down the measurement so we make sure not to cut the hat too short. That measurement tells you roughly where to draw a curvy triangle on the folded fabric, just like this:

../../../_images/seven.jpg

Now cut it out. Make sure not to cut off that folded edge. Like this:

../../../_images/eight.jpg

Congratulations, you just cut out the lining of the hat! It should be all one piece. If we unfold the bit we just cut and the bit we cut it from, it’ll look like this:

../../../_images/nine.jpg

Now we’re going to use that lining piece as a template to cut the outside pieces. Set it down on the fabric like so:

../../../_images/ten.jpg

And cut around it. Afterwards you have 1 lining piece and 2 outer pieces:

../../../_images/eleven.jpg

Now grab that black and white scrap fabric and cut a couple eye sized black circles, and a couple bits of white for the light glints on the eyes. Also cut a black D shape to be the mouth if you want your hat to have a happy little mouth as well as just eyes.

../../../_images/twelve.jpg

Construction

Put the black and white together, and sew an eye glint kind of shape in the same spot on both, like so:

../../../_images/thirteen.jpg

Pull the top threads to the back so the stitching looks all tidy. Then cut off the excess white fabric so it looks all pretty:

../../../_images/fourteen.jpg

Now the fun part: Grab one of those outside pieces we cut before. Doesn’t matter which. Sew the eyes and mouth onto it like so:

../../../_images/fifteen.jpg

Now it’s time to give the hat some shape. On both outside pieces – the one with the face and also the one without – sew that little V shaped gap shut. Like so:

../../../_images/sixteen.jpg

Now they look kind of 3D, like so:

../../../_images/seventeen.jpg

Let’s sew up the lining piece next. It’s the bit we cut off of the fold earlier. Fold then sew the Vs shut, thusly:

../../../_images/eighteen.jpg

Next, sew most of the remaining seam of the lining, but leave a gap at the top so we can turn the whole thing inside out later:

../../../_images/nineteen.jpg

Now that the lining is sewn, let’s sew 10 little legs. Grab that big rectangular strip we cut out at the very beginning, and sew its layers together into a bunch of little triangles with open bottoms. Then cut them apart and turn them inside out to get legs. Here’s how I did those steps:

../../../_images/twenty.jpg

Those little legs should have taken up maybe 1/3 of the big rectangular strip. With the rest of it, let’s make some spines to go across Ferris’s back. They’re little triangles, wider than the legs, sewn up the same way.

../../../_images/twentyone.jpg

Now put those spines onto one of the outside hat pieces. Leave some room at the bottom, because that’s where we’ll attach the claws that we’ll make later. The spines will stick toward the face when you pin them out, so when the whole thing turns right-side-out after sewing they’ll stick out.

../../../_images/twentytwo.jpg

Put the back of the outside onto this spine sandwich you’re building. Make sure the seam that sticks out is on the outside, because the outsides of this sandwich will end up inside the hat.

../../../_images/twentythree.jpg

Pin and sew around the edge:

../../../_images/twentyfour.jpg

Note how the bottoms of the spines make the seam very bulky. Trim them closer to the seam, if you’re using a fabric which doesn’t fray, such as polar fleece.

../../../_images/twentyfive.jpg

The outer layer of the hat is complete!

../../../_images/twentysix.jpg

At this point, we remember that Ferris has some claws that we haven’t accounted for yet. That’s ok because there was some extra fabric left over when we cut out the lining and outer for the hat. On that extra fabric, draw two claws. A claw is just an oval with a pie slice missing, plus a little stem for the arm. Make sure the arms are wide enough to turn the claw inside out through later. It’s ok to draw them straight onto the fabric with a pen, since the pen marks will end up inside the claw later.

../../../_images/twentyseven.jpg

Then sew around the claws. It doesn’t have to match the pen lines exactly; nobody will ever know (except the whole internet in this case). Here are the front and back of the cloth sandwich that I sewed claws with:

../../../_images/twentyeight.jpg

Cut them out, being careful not to snip through the stitching when cutting the bit that sticks inward, and turn them right-side out:

../../../_images/twentynine.jpg

Now it’s time to attach the liner and the hat outer together. First we need to pin the arms and legs in, making another sandwich kind of like we did with the spines along the back. I like to pin the arms sticking straight up and covering the outer’s side seams, like so:

../../../_images/thirty.jpg

Remember those 10 little legs we sewed earlier? Well, we need those now. And I used an extra spine from when we sewed the spines along Ferris’s back, in the center back, as a tail. Pin them on, 5 on each side, like little legs.

../../../_images/thirtyone.jpg

And finally, remember that liner we sewed, with a hole in the middle? Go find that one real quick:

../../../_images/thirtytwo.jpg

Now we’re going to put the whole hat outer inside of the lining, creating Ferris The Bowl. All the pretty sides of things are INSIDE the sandwich, so all the seam allowances are visible.

../../../_images/thirtythree.jpg

Rearrange your pins to allow sewing, then sew around the entire rim of Ferris The Bowl.

../../../_images/thirtyfour.jpg

Snip off the extra bits of the legs and stuff, just like we snipped off the extra bits of the spines before, like this:

../../../_images/thirtyfive.jpg

Now Ferris The Bowl is more like Ferris The Football:

../../../_images/thirtysix.jpg

Reach in through the hole in the end of Ferris The Football, grab the other end, and pull. First it’ll look like this...

../../../_images/thirtyseven.jpg

And then he’ll look like this:

../../../_images/thirtyeight.jpg

Sew shut that hole in the bottom of the lining...

../../../_images/thirtynine.jpg

Stuff that lining into the hat, to make the whole thing hat-shaped, and you’re done!

../../../_images/forty.jpg
]]>
Sat, 06 Apr 2019 00:00:00 -0700
http://edunham.net/2019/02/28/when_searching_an_error_fails.html http://edunham.net/2019/02/28/when_searching_an_error_fails.html <![CDATA[When searching an error fails]]> When searching an error fails

This blog has seen a dearth of posts lately, in part because my standard post formula is “a public thing had a poorly documented problem whose solution seems worth exposing to search engines”. In my present role, the tools I troubleshoot are more often private or so local that the best place to put such docs has been an internal wiki or their own READMEs.

This change of ecosystem has caused me to spend more time addressing a different kind of error: Those which one really can’t just Google.

Sometimes, especially if it’s something that worked fine on another system and is mysteriously not working any more, the problem can be forehead-slappingly obvious in retrospect. Here are some general steps to catch an “oops, that was obvious” fix as soon as possible.

Find the command that yielded the error

First, I identify what tool I’m trying to use. Ops tools are often an amalgam of several disparate tools glued together by a script or automation. This alias invokes that call to SSH, this internal tool wraps that API with an appropriate set of arguments by ascertaining them from its context. If I think that SSH, or the API, is having a problem, the first troubleshooting step is to figure out exactly what my toolchain fed into it. Then I can run that from my own terminal, and either observe a more actionable error or have something that can be compared against some reliable documentation.

Wrappers often elide some or all of the actual error messages that they receive. I ran into this quite recently when a multi-part shell command run by a script was silently failing, but running the ssh portion of that command in isolation yielded a helpful and familiar error that prompted me to add the appropriate key to my ssh-agent, which in turn allowed the entire script to run properly.

Make sure the version “should” work

Identifying the tool also lets me figure out where that tool’s source lives. Finding the source is essential for the next troubleshooting steps that I take:

$ which toolname
$ toolname --version  # or -v, -version, etc., depending on the tool

I look for hints about whether the version of the tool that I’m using is supposed to be able to do the thing I’m asking it to do. Sometimes my version of the tool might be too new. This can be the case when the dates on all the docs that suggest it’s supposed to work the way it’s failing are more than a year or so old. If I suspect I might be on too new a version, I can find a list of releases near the tool’s source and try one from around the date of the docs.

More often, my version of a custom tool has fallen behind. If the date of the docs claiming the tool should work is recent, and the date of my local version is old, updating is an obvious next step.

If the tool was installed in a way other than my system package manager, I also check its README for hints about the versions of any dependencies it might expect, and make sure that it has those available on the system I’m running it from.

Look for interference from settings

Once I have something that seems like the right version of the tool, I check the way its README or other docs looked as of the installed version, and note any config files that might be informing its behavior. Some tooling cares about settings in an individual file; some cares about certain environment variables; some cares about a dotfile nearby on the file system; some cares about configs stored somewhere in the homedir of the user invoking it. Many heed several of the above, usually prioritizing the nearest (env vars and local settings) over the more distant (system-wide settings).

Check permissions

Issues where the user running a script has inappropriate permissions are usually obvious on the local filesystem, but verifying that you’re trying to do a task as a user allowed to do it is more complicated in the cloud. Especially when trying to do something that’s never worked before, it can be helpful to attempt to do the same task as your script manually through the cloud service’s web interface. If it lets you, you narrow down the possible sources of the problem; if it fails, it often does so with a far more human-friendly message than when you get the same failure through an API.

Trace the error through the source

I know where the error came from, I have the right versions of the tool and its dependencies, no settings are interfering with the tool’s operation, and permissions are set such that the tool should be able to succeed. When all this normal, generic troubleshooting has failed, it’s time to trace the error through the tool’s source.

This is straightforward when I’m fortunate enough to have a copy of that source: I pick some string from the error message that looks like it’ll always be the same for that particular error, and search it in the source. If there are dozens of hits, either the tool is aflame with technical debt or I picked a bad search string.
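When the tree is too big to skim, any recursive search does the job – grep -rn works, or the same idea as a few lines of Python. The error fragment and source directory in this sketch are hypothetical placeholders:

import pathlib

needle = "unable to recognize"  # hypothetical error fragment
for path in pathlib.Path("tool-source").rglob("*.py"):  # hypothetical source checkout
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        if needle in line:
            print("%s:%d: %s" % (path, lineno, line.strip()))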

Locating what ran right before things broke leads to the part of the source that encodes the particular assumptions that the program makes about its environment, which can sometimes point out that I failed to meet one. Sometimes, I find that the error looked unfamiliar because it was actually escalated from some other program wrapped by the tool that showed it to me, in which case I restart this troubleshooting process from the beginning on that tool.

Sometimes, when none of the aforementioned problems is to blame, I discover that the problem arose from a mismatch between documentation and the program’s functionality. In these cases, it’s often the docs that were “right”, and the proper solution is to point out the issue to the tool’s developers and possibly offer a patch. When the code’s behavior differs from the docs’ claims, a patch to one or the other is always necessary.

]]>
Thu, 28 Feb 2019 00:00:00 -0800
http://edunham.net/2018/09/19/running_a_python3_script_from_the_correct_directory.html http://edunham.net/2018/09/19/running_a_python3_script_from_the_correct_directory.html <![CDATA[Running a Python3 script in the right place every time]]> Running a Python3 script in the right place every time

I just wrote a thing in a private repo that I suspect I’ll want to use again later, so I’ll drop it here.

The situation is that there’s a repo, and I’m writing a script which shall live in the repo and assist users with copying a project skeleton into its own directory.

The script, newproject, lives in the bin directory within the repo.

The script needs to do things from the root of the repository for the paths of its file copying and renaming operations to be correct.

If it was invoked from somewhere other than the root of the repo, it must thus change directory to the root of the repo before doing any other operations.

The snippet that I’ve tested to meet these constraints is:

import os

# chdir to the root of the repo if needed
if __file__.endswith("/bin/newproject"):
    os.chdir(__file__[:-len("/bin/newproject")] or "/")
if __file__ == "newproject":
    os.chdir("..")

In code review, it was pointed out that this simplifies to a one-liner:

os.chdir(os.path.join(os.path.dirname(__file__), '..'))

This will keep working right up until some malicious or misled individual moves the script to an entirely different location within the repository or filesystem and tries to run it from there.
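If that ever became a real risk, one more robust option – sketched here, not what we actually shipped – is to ask git itself where the repository root is, which survives the script being moved anywhere inside the repo:

import os
import subprocess

# Ask git for the repository's top-level directory, then work from there.
repo_root = subprocess.check_output(
    ["git", "rev-parse", "--show-toplevel"], text=True
).strip()
os.chdir(repo_root)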

]]>
Wed, 19 Sep 2018 00:00:00 -0700
http://edunham.net/2018/09/17/cfp_tricks_1.html http://edunham.net/2018/09/17/cfp_tricks_1.html <![CDATA[CFP tricks 1]]> CFP tricks 1

Or, “how to make a selection committee do one of the hard parts of your job as a speaker for you”. For values of “hard parts” that include fine-tuning your talk for your audience.

I’m giving talk advice to a friend today, which means I’m thinking about talk advice and realizing it’s applicable to lots of speakers, which means I’m writing a blog post.

Why choosing an audience is hard

Deciding who you’re speaking to is one of the trickiest bits of writing an abstract, because a good abstract is tailored to bring in people who will be interested in and benefit from your talk. One of the reasons that it’s extra hard for a speaker to choose the right audience, especially at a conference that they haven’t attended before, is because they’re not sure who’ll be at the conference or track.

Knowing your audience lets you write an abstract full of relevant and interesting questions that your talk will answer. Not only do these questions show that you can teach your subject matter, but they’re an invaluable resource for assessing your own slides to make sure your talk delivers everything that you promised it would!

Tricks for choosing an audience

Some strategies I’ve recommended in the past for dealing with this include looking at the conference’s marketing materials to imagine who they would interest, and examining the abstracts of past years’ talks.

Make the committee choose by submitting multiple proposals

Once you narrow down the possible audiences, a good way to get the right talk in is to offload the final choice onto the selection committee! A classic example is to offer both a “Beginner” and an “Advanced” talk, on the same topic, so that the committee can pick whichever they think will be a better fit for the audience they’re targeting and the track they choose to schedule you for.

If the CFP allows notes to the committee, it can be helpful to add a note about how your talks are different, especially if their titles are similar: “This is an introduction to Foo, whereas my other proposal is a deep dive into Foo’s Bar and Baz”.

Use the organizers’ own words

I always encourage resume writers to use the same buzzwords as the job posting to which they’re applying when possible. This shows that you’re paying attention, and makes it easy for readers to tell that you meet their criteria.

In the same way, if your talk concept ties into a buzzword that the organizers have used to describe their conference, or directly answers a question that their marketing materials claim the conference will answer, don’t be afraid to repeat those words!

When choosing between several possible talk titles, keep in mind that any jargon you use can show off your ability, or lack thereof, to relate to the conference’s target audience. For instance, a talk with “Hacking” in the title may be at an advantage in an infosec conference but at a disadvantage in a highly professional corporate conf. Another example is that spinning my Rust Community Automation talk to “Life is Better with Rust’s Community Automation” worked great for a conference whose tagline and theme was “Life is Better with Linux”, but would not have been as successful elsewhere.

Good luck with your talks!

]]>
Mon, 17 Sep 2018 00:00:00 -0700
http://edunham.net/2018/08/24/job_move.html http://edunham.net/2018/08/24/job_move.html <![CDATA[Skill Tree Balancing with a Job Move]]> Skill Tree Balancing with a Job Move

I’ve recently identified some ways in which my former role wasn’t setting me up for career success, and taken steps to remedy them. Since not everybody lucks into this kind of process like I did, I’d like to write a bit about what I’ve learned in case it offers some reader a useful new framework for thinking about their skills and career growth.

Tl;dr: I’m moving from Research to Cloud Ops within Mozilla. The following wall of text and silly picture are a brain dump of new ideas about skills and career growth that I’ve built through the process.

Managers as Mentors

Oddly enough, I can trace the series of causes and effects that ultimately culminated in my decision to switch roles straight back to a company that I’ve never even worked at. By shaping my manager’s leadership skills, Microsoft’s policies have trickled down into benefits to my open source focused career at Mozilla.

Apparently, managers at Microsoft have targets to meet for their reports’ career growth and advancement, as well as the usual product and performance metrics. I’ve heard that a manager there who meets all their other deliverables but fails to promote the people under them can be demoted for that negligence! I’ve learned about this culture because it showed up in my own ex-Microsoft manager as a refusal to take “I’m happy where I am and don’t think I particularly need to go seek a promotion” as a complete answer when he asked about my goals for career growth and progression.

I have many peers whose managers seem fine with “I’m happy where I am” as an answer to career goal questions. Replacing an employee for their lower-level duties as they graduate to higher-level ones can be between inconvenient and almost impossible for a boss, so it can seem to be in everybody’s best interests to leave someone for a long time when they find a role where they’re content. But my own manager’s ingrained distaste for allowing that kind of stagnation led to a series of conversations that forced me to look at the bigger picture of my career, and realize the ways that sticking with a role that’s “good enough for now” could cause me serious problems years down the road.

Complacency and the Same Year of Work

I’ve been warned over and over by mentors through the industry that it’s a terrible idea to do the “same year of work” over and over. I’ve even repeated that advice to others – “don’t tolerate a job that isn’t helping you grow! Keep moving roles till you find something you love!”. This advice is especially easy to follow when the “same year of work” you’re repeating is unpleasant in some way.

However, I found myself in a role at Mozilla Research where the “same year of work” is totally amazing! 3 times in a row, I came into a team with major ops needs, and delivered value in a way that I’m great at to a team of awesome people with expertise in areas where my knowledge is only superficial. And it feels great to help them out and solve problems with knowledge that’s old hat for me but novel to my colleagues. For each role, my progress eventually reached a point where I’d solved all the problems that I could fix in a timely manner – I’d hit the edge of my skills at the time. Although it’s educational, working outside my skill set eventually slowed down my performance and work quality to the point where it was better to have me shift focus to a group whose needs were within what I could execute rapidly and with high quality.

The parts of this cycle in which I’m attempting a load of tasks primarily outside my skill set (and often outside the set of skills that I have the prerequisite skills to bootstrap myself into in a timely manner) can be stressful, but on the whole it’s been a rewarding experience. I’ve felt like I’ve gotten a “new role” every year or so, on about the timeline that a peer starting with similar skills to my own might have had to switch companies to find new opportunities. Without my manager’s guidance, I might have continued in “same year of work” cycle indefinitely, but our conversations about career progression helped me improve my understanding of how I personally grow as an engineer.

Self-Teachable Skills

What’s that “set of skills that I have the prerequisite skills to bootstrap myself into in a timely manner” that I mentioned earlier? They’re those challenges which I can identify and frame in a way which lets me ask the kind of questions that get actionable answers. They’re knowledge gaps directly adjacent to my established areas of expertise, about which I can concisely frame questions whose answers I can use to accomplish a task. Despite being a “team of one”, I’ve successfully self-taught skills such as learning a new configuration management system and improving my FOSS community management knowledge to solve particular challenges. Since I was already more skilled than the average entry-level engineer in those areas when I started my solo ops role, I had good heuristics for framing questions in a way that was likely to yield the answers I wanted, and evaluating the credibility and applicability of sources to learn from.

In other words, I’ve found that I can be successful at self-teaching a skill when I can write out a template for what I know the correct answer to a given question will look like. I might not know precisely which config management system will meet my needs, but I already know a variety of pain points to look out for and how to assess whether an example given in a tool’s docs is similar to the task I’m looking to apply the tool on. I might not know exactly what community management strategy is best for a given FOSS challenge, but I know enough about community management in general to frame a question about what strategy to use with all the relevant information and propose several viable options to ask an expert for their opinions in selecting between.

Non-Self-Teachable skills

I’ve found that, in working as a team of one, I’ve done pretty well at growing the skills that I had the appropriate foundation to “bootstrap” or “self-teach”. However, there are other skills for which this hasn’t been the case. Although these types of skills can be self-taught very slowly and inefficiently, I’m going to nickname them “non-self-teachable” for purposes of this discussion. “Infeasible-to-self-teach”, while technically more accurate, is just too much of a tangle to type out.

For skills with whose problem space I’m not fully acquainted, I find it extremely challenging to improve in a timely manner in isolation. In areas like prioritizing work across projects, and project planning, I find that my ability to self-teach is limited when I don’t have sufficient expertise to ask for help efficiently. When I don’t know what sort of answer would be actionable, I struggle to frame general questions and figure out where to ask them. And when I lack the expertise to identify what specific tasks I should be asking for help to improve at, I miss many opportunities to seek help.

How did I discover these “unknown unknowns” of skills that I lack the foundation to teach myself efficiently? I started noticing them when seeking resources to understand my career growth options in order to refine my career goals. In an internal wiki, I found a document which gives broad outlines for and descriptions of the competence and impact of employees at each pay grade. Mozilla’s is called a Job Family Architecture; other companies have similar documents with different names. Comparing my performance to the descriptions of higher levels helped me identify and articulate some particular skill areas that I haven’t improved at the same rate as those where I’ve successfully self-taught. Quantifying these areas that I’d like to work on has helped me figure out what changes I could make to get into circumstances more like those where I’ve improved rapidly at the “problem” skills in the past.

Bootstrapping Non-Self-Teachable Skills

When an engineer enters their very first job or internship, or touches a programming language for the first time, it’s reasonable to assume that all the skills they need to build are in the category that I’m describing as non-self-teachable. Skills I’m currently expert at, and even teach to others, started out as non-self-teachable for me as well. So what changed? How did I go from having no idea what questions to ask about Git, to being able to solve even its gnarliest problems with only a handful of StackOverflow queries or man page checks?

When I look back at my career, the common denominators among times I’ve built any skill from non-self-teachable to self-teachable have been peers and mentors. Watching the way that a teammate addresses a challenge and comparing it to the way I would have approached it gives me new insights at a rate that no amount of struggling with the skill by myself ever could. And managers or mentors who are subject matter experts in my field can compare the way I approach a task to the way they would have, and point out differences to yield skill-improving feedback. When I work closely enough with an expert for a while, I build a mental model of the way they think, and for the rest of my career I can ask myself “How would they have architected this? How would they have tackled this problem?”.

These mental models can simulate a team in circumstances that are similar to those where I worked with the experts, but as I advance into novel work in isolation, the models become less and less useful because I can’t predict how the expert would have approached a task unlike anything I’ve seen them encounter.

Bridging My Skill Gaps

Once I identified my skill gaps, I first attempted to seek better mentorship in my existing role. After several different attempts, I determined that a mentor’s intimacy with the particular task they’re giving feedback on is integral to their ability to help me refine a technical skill. In both individual and group mentorship, I found it easy to refine skills that I could already self-teach, but between difficult and impossible to get good help on skills about which I wasn’t yet expert enough to frame good questions. Doing my best to improve within my existing role helped me figure out which skills it was reasonable to try to improve in place, and which others were infeasible to build in a timely manner under the circumstances.

I talked with my manager about what I’d learned about myself from the various attempts to build the skills I regard as needing work. We evaluated our options for putting me into a situation where I had the peers and mentorship that I’m looking for: What would it look like to change my current role so it had more peers? What would it look like to move me into a role on a team of ops folks elsewhere? When we took into account some other skill gaps that I’m interested in addressing, such as working with experts on infrastructure at larger scale, we concluded that the best step to address my current concerns was to explore my options for shifting to a different team.

At this point, I literally wrote down a list of relevant skills in my notebook. I brainstormed a section of skills where I feel I’m below where I’d like to be and would learn best from peers. I also outlined the skills I’ve succeeded with in my present role, and the skills at which I consider myself above average and thus am not worried about aggressively growing while I focus on improvement in other areas.

I summarized my lists into the 2 key reasons I’m interested in a role change. Those reasons are: I want to work with peers and mentors who can offer detailed technical feedback based on their expertise at the problems I’m solving, and I want to refine my prioritization and planning skills by being more closely exposed to good examples in work like mine.

As I sought other teams that might meet these needs for me and interviewed with them, I kept my lists and summary on my desk. I felt that having them in sight made a real difference in my ability to clearly articulate my interest in a role change, and helped me ask the right questions to determine whether any team looked like a good match for my priorities. As I interviewed, I paid attention to not only the technical topics that we discussed, but the prospective colleagues’ attitude toward the answers I got wrong. On the team I ended up selecting, I was especially impressed by the way that my future peers ended each question by filling in any gaps between my answer and the complete or best possible answer that they were “looking for”. This made the interviews feel like a constructive conversation, and even if I hadn’t taken the role, I would have left with a better understanding of the technologies we discussed.

New Team

With all that, I’m excited to announce that I’m transitioning from Research to the Cloud Ops team within Mozilla! This team supports a multitude of projects, and there are always at least 2 ops engineers in a “buddy system” for every project they support. The new role is similar to my old one in that it juggles supporting many projects at once, but very different in that I’ll be working directly with expert colleagues to learn their best ways to do it.

And yeah, I’m staying at Mozilla. It might net me more cash to jump ship to another company, but monetary compensation is not what this move is optimizing for. The drawbacks I would experience if I chose a team at another company include greater uncertainty about what the team is actually like, and having to re-learn all the specialized bureaucracy that comes with onboarding anywhere. I also encounter very few other companies whose cultures are as closely aligned with my own values, especially pertaining to open source, and none of them currently have openings that I can confirm are as good a match to my current skill building goals as the Cloud Ops team.

Visualization

Drawing a picture is one of my favorite ways to gain control of and ask the right questions about new knowledge. Here’s an ugly little chart into which I’ve thrown a handful of skill areas and my approximate levels at them before and after my old role… and of course, the skill emphases that have been important to me in picking the right place to go next.

Net chart illustrating skill gaps filled in by emphases of new role

If you want to make a similar chart, I made that one with the net chart type in LibreOffice.

]]>
Fri, 24 Aug 2018 00:00:00 -0700
http://edunham.net/2018/07/20/why_ops.html http://edunham.net/2018/07/20/why_ops.html <![CDATA[Why an ops career]]> Why an ops career

Disclaimers: Not all tasks that come to a person in an ops role meet my definition of ops tasks. Advanced ops teams move on from simple problems and choose more complex problems to solve, for a variety of reasons. This post contains generalizations, and all generalizations have counter-examples. This post also refers to feelings, and humans often experience different feelings in response to similar stimuli, so yours might not be like mine.

It’s been a great “family reunion” of FOSS colleagues and peers in the OSCON hallway track this week. I had a conversation recently in which I was asked “Why did you choose ops as a career path?”, and this caused me to notice that I’ve never blogged about this rationale before.

I work in roles revolving around software and engineering because they fall into a cultural sweet spot offering smart and interesting colleagues, opportunities for great work-life balance, and exemplary compensation. I also happen to have taken the opportunity to spend over a decade building my skills and reputation in this industry, which helps me keep the desirable roles and avoid the undesirable ones. Yet, many people in my field prefer software development over operations work.

I’ve chosen, and stuck with, ops because it gives me the sensation of having better-defined success conditions than I get when developing code for others. When I tackle an ops problem, it is usually a task which I could tediously, miserably, but correctly perform by hand. This base case of “if all else fails, the desired thing can be done by hand” frames a problem more concretely and measurably than any written description of someone’s hopes and dreams about a piece of software.

Of course, performing ops tasks by hand does not scale. Often, the speed with which a given task is performed is part of its success criteria. And if you ask a human to perform the same task 20 times, you’ll likely get 21 subtly different outputs. This is why we automate: Automation brings computers’ strengths of speed and lack of boredom to the equation.

Automation tasks are necessarily framed in terms of the specific behaviors, described in technical terms, that computers are supposed to be performing. The constraints of the infrastructure provide a rigorously defined abstraction layer between psychology and code. This vocabulary of infrastructure expresses the constraints for ops work such that even if I’m not the end user of a piece of automation code, I can experience a high level of confidence that I understand what the person requesting it believed that they wanted when they made the request.

Automation is unlike software engineering tasks with success conditions that hinge on human emotions and behavior. Any success condition with psychology integral to it becomes time-consuming, if not impossible, to test against. Throw in psychological effects that incline a human to have slightly different reactions to the same thing depending on when and how you show it to them, and you lose even basic repeatability from the simple task of testing whether your code is “good enough”. For software engineering tasks with human behavior and emotions in their success criteria, I cannot consistently prove to myself that success is even possible. Although I enjoy recreationally tackling potentially-impossible challenges from time to time, I do not enjoy the pressure and uncertainty that come from betting my career and compensation on such puzzles.

Even systems built solely from understandable components develop complexity and challenges. Emergent behaviors arise, perhaps necessarily, from complex systems. In ops work, I feel a certainty that each component is independently predictable when broken down small enough, and that it would be possible with enough work to rebuild the entire system incrementally from such “atoms” of predictability. Of course it is almost never worth the time and effort to actually rebuild the system from scratch, but simply knowing that it would be possible gives me confidence that the problems I encounter with my systems can be solved. (To reiterate, “can be solved” bears little relation to “is worth solving”, but it does affect the way I feel about tasks.) Contrast this “certainty of solvability” to the problems encountered when developing software for other people: “the customer doesn’t like this!”, “users aren’t clicking where we want them to!”. Those problems hinge on human components that would usually be highly unpleasant, unethical, and illegal to disassemble and debug. Software problems tightly coupled to psychology do not make me feel like I can be certain that any amount of effort would guarantee a solution.

No workflow can, nor should, eliminate the bigger-picture questions about what we want to be building, or how we want to go about building it. However, I find that the structure of roles that companies typically categorize as “ops work” supports decoupling the questions without answers from the questions with answers, and offloading the half you don’t want to deal with onto someone who enjoys them more.

]]>
Fri, 20 Jul 2018 00:00:00 -0700
http://edunham.net/2018/05/15/team.html http://edunham.net/2018/05/15/team.html <![CDATA[Thoughts on retiring from a team]]> Thoughts on retiring from a team

The Rust Community Team has recently been having a conversation about what a team member’s “retirement” can or should look like. I used to be quite active on the team but now find myself without the time to contribute much, so I’m helping pioneer the “retirement” process. I’ve been talking with our subteam lead extensively about how to best do this, in a way that sets the right expectations and keeps the team membership experience great for everyone.

Nota bene: This post talks about feelings and opinions. They are mine and not meant to represent anybody else’s.

Why join a team?

When I joined the Rust community subteam, its purpose was defined vaguely. It was a small group of “people who do community stuff”, and needed all the extra hands it could get. A lot of my time was devoted explicitly to Rust and Rust-related tasks. The tasks that I was doing anyways seemed closely aligned with the community team’s work, so stepping up as a team contributor made a lot of sense. Additionally, the team was so new that the only real story for “how to work with this team and contribute to its work” was “join the team”. We hadn’t yet pioneered the subteams and collaboration with community organizers outside the official community team which are now multiplying the team’s impact.

Why leave?

I’m grateful to the people who have the bandwidth and interest to put consistent work into participating on Rust’s community team today. As the team has grown and matured, its role has transitioned from “do community tasks” to “support and coordinate the many people doing those tasks”. I neither enjoy nor excel at such coordination tasks. Not only do I have less time to devote to Rust stuff, but the community team’s work has naturally grown into higher-impact categories that I personally find less fulfilling and more exhausting to work on.

Teams and people change

In a way, the team’s growth and refinement over the years looks like a microcosm of what I saw while working at a former startup as it built up into an enterprise company. Some people’s working style had been excellently suited to the 5-person company they originally joined, but clashed with the 50-person company into which that startup grew. Others who would never have thrived in a company of only 10 people hired on and had a fantastic impact scaling the company up to 1,000. And some were fine when the company was small and didn’t mind being part of a larger organization either. That experience reminds me that the fit between a person and an organization at some point in the past does not guarantee that they’ll remain a good fit for each other over time, and neither is necessarily to blame for the eventual mismatch as both grow and change.

Does leaving harm anyone?

When you’re appreciated and valued for the work you do on a team, it’s easy to get the idea that the team would be harmed if you left. The tyres on my bike are a Very Important Part of the bike, and if I took them off, the bike wouldn’t be rideable. But a team isn’t just a machine – a team’s impact is an emergent phenomenon that comes out of many factors, not a static item. If a sports team has a really excellent coach, they’ll retain the lessons they learned from that coach’s mentorship even after the coach moves away. Older players will pass along the coach’s lessons to younger ones, and their ideas will stick around and improve the group even long after the original players’ retirement. When a team is coordinated well, one member leaving doesn’t hurt it. And if I leave on good terms rather than sticking around till I burn out or burn bridges, I can always be available for remaining members to consult if they need advice that only I can provide.

Would staying harm anyone?

I think that in the case of the Rust community team, it would reflect poorly on the community as a whole if the exact same people comprised the community team for the entire life of the language.

If nobody new ever joins the team, we wouldn’t get new ideas and tactics, nor the priceless infusion of fresh patience and optimism that new team members bring to our perennial challenges and frustrations. So, new team members are essential. If new people joined on a regular basis but nobody ever left, the team would grow unboundedly large as time went on, and have you ever tried to get anything done with a hundred- or thousand-person committee? In my opinion, having established team members retire every now and then is an essential factor in preventing either of those undesirable hypotheticals.

The team selects for members who’ll step up and accomplish tasks when they need to. I think establishing turnover in a healthy and sustainable way is one of the most essential tasks for the team to build its skills at. The best way to get a healthy amount of turnover – not too much, but not too little either – is for every team member to step up to the personal challenge of identifying the best time to retire from active involvement. And for me, that happens to look like right now.

Aspirational Clutter

Do you have stuff in your house that you don’t use, and it’s taking up space, and you’re kind of annoyed at it for taking up space, but you don’t feel like you can get rid of it because you think you really should use it, or you’re sure you’re just going to make some personal change that will cause you to use it someday? I call that stuff aspirational clutter: It doesn’t belong to you; it belongs to some imaginary person who doesn’t exist, whom you aspire to become someday.

A team meeting every week on your agenda can be aspirational clutter in the same way as a jumbled shelf of planners or a pile of sports gear covering a treadmill: It not only isn’t a good fit for who you are right now, but by wasting time or space it actually gets in the way of the habits and changes that would make you more like that person you aspire to be.

I find few experiences more existentially miserable than feeling obliged to promise work that I know I’ll lack the resources of time or energy to deliver. Sticking around on a team that I’m no longer a good fit for puts me in a situation where I get to choose between feeling guilty if I don’t promise to get any work done, or feeling like a disappointment for letting others down if I commit to more than I’m able to deliver. Those aren’t feelings I want to experience, and I can avoid them easily by being honest with myself about the amount of time and energy I have available to commit to the team.

The benefits of contributing from a non-team-member role

One scary idea that comes up when leaving a team is the question: “if I’m not on the team, how can I help with the team’s work?”.

In my opinion, it builds a healthier community if people who are good at a given team’s work spend some time interfacing with the team from the perspective of non-team-members. If I know how the community team gets stuff done and I go “undercover” as a non-team-member coming to them for help, I can give them essential feedback to improve the experience and processes that non-team-members encounter.

When I wear my non-team-member hat and try to get stuff done, I learn what it’s like for everyone else who tries to interface with the team. I can then use the skills that I built by participating on the team to remedy any challenges that a non-team-member encounters. Those changes create a better experience for every community member who interacts with the team afterwards.

What next?

As a community team alum, I’ll keep doing the Rust outreach – the meetup organizing, the conference talks, the cute swag, the stickers – that I’ve been doing all along. Stepping down from the official team member list just formalizes the state that my involvement has been in for the past year or so: Although I get the community team’s support for my endeavors when I need it, I’m not invested in the challenges of supporting others’ work which the team is now tackling.

I’m proud of the impact that the team has had while I’ve been a part of it, and I look forward to seeing what it will continue to accomplish. I’m grateful for all the leadership and hard work that have gone into making the Rust community subteam an organization from which I can step back while remaining confident that it will keep excelling and evolving.

Why blog all that?

I’m publishing my thoughts on leaving in the hopes that they can help you, dear reader, gain some perspective on your own commitments and curate them in whatever way is best for you.

If you read this and feel pressured to leave something you love and find fulfilling, please try to forget you ever saw this post.

If you read this hoping it would give you some excuse to quit a burdensome commitment and feel disappointed that I didn’t provide one, here it is now: You don’t need a fancy eloquent excuse to stop doing something if you don’t want to any more. Replace unfulfilling pursuits with better ones.

]]>
Tue, 15 May 2018 00:00:00 -0700
http://edunham.net/2018/02/23/slacking_from_irc.html http://edunham.net/2018/02/23/slacking_from_irc.html <![CDATA[Slacking from Irssi]]> Slacking from Irssi

UPDATE: SLACK DECIDED THIS SHOULD NO LONGER BE POSSIBLE AND IT WILL NOT WORK ANY MORE

My IRC client helps me work efficiently and minimize distraction. Over the years, I’ve configured it to behave exactly how I want: Notifying me when topics I care about are under discussion, and equally as important, refraining from notifications that I don’t want. Since my IRC client is developed under the GPL, I have confidence that the effort I put into customizing it to improve my workflow will never be thrown out by a proprietary tool’s business decisions.

But the point of chat is to talk to other humans, and a lot of humans these days are choosing to collaborate on Slack. Slack has its pros and cons, but some of the drawbacks can be worked around using open technologies.

Do you need to Slack from irssi?

If you feel ambivalent toward the web UI, going to the trouble of setting up an IRC client for Slack will likely be more hassle than reward.

If you loathe the Slack UI but don’t care much for IRC, you might be better off considering the Matrix bridge or seeking out other clients. I have not tried them and can’t vouch for any.

If you use WeeChat, you can use Slack’s IRC gateway directly, or try the WeeChat Slack gateway. A friend at a larger company informs me that the latter doesn’t hold up well on Slack workspaces in the tens of thousands of users, so if you’re on a particularly large Slack you might be better off treating it as an IRC server.

If you’re looking for a terminal-based client to get started with IRC, consider choosing WeeChat over Irssi. WeeChat is extensible in more languages and allegedly has fewer edge cases like the saga detailed here.

The IRC bridge won’t work on all Slacks

First, a word of warning: You can only Slack from IRC if a workspace owner enables the IRC/XMPP gateway for a given instance, which is disabled by default because Slack distrusts users’ ability to make good security decisions. They’re not necessarily wrong, but generally users who care deeply enough to slog through their misleading instructions also know a thing or two about SSL.

This will work on all IRC clients, but...

The good news is that once a Slack instance is exposing the IRC gateway, it looks to an IRC client just like a single standalone IRC server.

The bad news is that the instructions Slack provides will only work if you do them on an IRC client where you weren’t connected to any other IRC yet. This is because (at time of writing) they tell you only to add Slack as a server in your client.

Irssi is “special”.

Let’s step back to the basics of how IRC works for a minute: On IRC, a network is a group of servers, and on a given network you can join various channels. Being on a network is intentionally agnostic of what server you’re connected to, so that if one server goes away, you can connect to another and keep right on chatting.

All modern IRC clients that I’m aware of allow you to be connected to several networks at once. For instance I’m on the Freenode IRC network to talk to people about FOSS projects I use, and also the Mozilla IRC network because most of my work channels are there. Every command you issue to irssi is done in the context of some network – do you want to auth to services? Join a new channel? Add a server? Irssi assumes that the server of the buffer in which you issue a command is the server you want the command to apply to, though some might require you to explicitly specify the network.

So, if you’re already on a network and you tell your client to add a server, what will happen? That’s right, your client will do the smart thing and add the server to the network. It will also likely connect you to that server, and do all the things on that server which you’ve asked it to do when you first connect to that particular network.

What happens if that new server is not part of the network in question at all, but is instead Slack? irssi will do the automatic things, like joining channels and trying to auth to services, that it’s supposed to do when it joins that network anyways, because you as a human told the client that the server was on the network. Even if that’s because some mean ol’ documentation tricked you into it, irssi doesn’t know any better.

What happens when irssi autojoins a bunch of channels, but issues those join commands to the Slack server? Well, on Slack just as on IRC, the first person to join a channel creates it. So, you have just revealed to your whole Slack workspace exactly the names of the channels you were in on the IRC server from which you issued the /server command.

Spuriously creating a bunch of channels isn’t the end of the world, you can just delete them, right? Well, if you have owner or admin permissions on the Slack workspace, absolutely! If you are not an owner or admin, you will have to go find someone who is and ask them to clean up the mess.

Well, at least that’s what happens when an active IRC user blindly assumes that whoever wrote the Slack IRC connection instructions had tried them in an irssi instance they were actually using for IRC. My bad.

Don’t follow the Slack docs verbatim from Irssi.

When you’re looking at https://my.slack.com/account/gateways, it has instructions like the following:

  1. Ensure that your IRC client is configured with your normal Slack username as your nick.
  2. If you are connecting through a raw /server command, your command will be: /server myserver.irc.slack.com 6667 myserver.Nosh5Neevot5Efua
  3. If you have a more UI-oriented setup, your IRC server is myserver.irc.slack.com, and the server password is myserver.Nosh5Neevot5Efua. Accepted ports are 6667, 6697, and 8000.

That Nosh5Neevot5Efua bit is a password you shouldn’t share with anyone – for this post I’m using a string from pwgen so it looks more like the actual config.

To avoid the tale of woe that I outlined above, if you’re slacking from Irssi, you need to add a network before adding the server. This changes the steps to:

  1. Ensure that your IRC client is configured with your normal Slack username as your nick.
  2. Add a network with the command /network add myslack
  3. If you are connecting through a raw /server command, your command will be: /server add -auto -network myslack myserver.irc.slack.com 6667 myserver.Nosh5Neevot5Efua

See the irssi docs for more options. Join the desired channels on the Slack network just as you would in IRC. When you’re done, remember to /save, and your .irssi/config should contain something like:

servers = (
  ...
  {
    address = "myserver.irc.slack.com";
    chatnet = "myslack";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "no";
    autoconnect = "yes";
    password = "mozilla.Nosh5Neevot5Efua";
  }
);

chatnets = {
  ...
  myslack = { type = "IRC"; };
};

channels = (
  ...
  { name = "#slackchannel"; chatnet = "myslack"; autojoin = "yes"; },
  ...
);

Now Slack is almost IRC

With the bridge set up, Slack behaves mostly like IRC. There remain some outstanding differences:

  • You cannot leave the workspace’s default channel. You can mute the channel or turn off notifications but Slack won’t let you leave.
  • When someone uses @here in a channel, Slack appends your username to the end of the message when forwarding it along to IRC to make sure you get pinged. When your nick occurs in this context, the person did not actually type it.
  • If you want to hilight a Slack user in a message, you must include the @ in their username. If you just say the string of their name, they won’t get notified. This is the opposite of IRC, where it’s a newbie mistake to include someone’s hat when addressing them.
  • Slack has message threading and allows editing and deleting messages, none of which are really a thing on IRC. The bridge forwards only the first version of each message: messages in a thread look like they were sent directly to the channel, deleted messages persist in your IRC logs, and edits never show up. If you need to view an edited message, or to edit or delete one of your own, you have to use the Slack UI.

Have fun!

]]>
Fri, 23 Feb 2018 00:00:00 -0800
http://edunham.net/2017/12/12/an_incomplete_list_of_pacific_northwest_tech_conferences.html http://edunham.net/2017/12/12/an_incomplete_list_of_pacific_northwest_tech_conferences.html <![CDATA[Some northwest area tech conferences and their approximate dates]]> Some northwest area tech conferences and their approximate dates

Somebody asked me recently about what conferences a developer in the pacific northwest looking to attend more FOSS events should consider. Here’s an incomplete list of conferences I’ve attended or hear good things about, plus the approximate times of year to expect their CFPs.

The Southern California Linux Expo (SCaLE) is a large, established Linux and FOSS conference in Pasadena, California. Look for its CFP at socallinuxexpo.org in September, and expect the conference to be scheduled in late February or early March each year.

If you don’t mind a short flight inland, OpenWest is a similar conference held in Utah each year. Look for its CFP in March at openwest.org, and expect the conference to happen around July. I especially enjoy the way that OpenWest brings the conference scene to a bunch of fantastic technologists who don’t always make it onto the national or international conference circuit.

Moving northward, there are a couple DevOps Days conferences in the area: Look for a PDX DevOps Days CFP around March and conference around September, and keep an eye out in case Boise DevOps Days returns.

If you’re into a balance of intersectional community and technical content, consider OSBridge (opensourcebridge.org) held in Portland around June, and OSFeels (osfeels.com) held around July in Seattle.

In Washington state, LinuxFest Northwest (CFP around December, conference around April, linuxfestnorthwest.org) in Bellingham, and SeaGL (seagl.org, CFP around June, conference around October) in Seattle are solid grass-roots FOSS conferences. For infosec in the area, consider toorcamp (toorcamp.toorcon.net, registration around March, conference around June) in the San Juan Islands.

And finally, if a full conference seems like overkill, consider attending a BarCamp event in your area. Portland has CAT BarCamp (catbarcamp.org) at Portland State University around October, and Corvallis has Beaver BarCamp (beaverbarcamp.org) each April.

This is by no means a complete list of conferences in the area, and I haven’t even tried to list the myriad specialized events that spring up around any technology. Meetup, and calagator.org for the Portland area, are also great places to find out about meetups and events.

]]>
Tue, 12 Dec 2017 00:00:00 -0800
http://edunham.net/2017/11/29/user_is_not_authorized_to_perform_iam_changepassword.html http://edunham.net/2017/11/29/user_is_not_authorized_to_perform_iam_changepassword.html <![CDATA[User is not authorized to perform iam:ChangePassword.]]> User is not authorized to perform iam:ChangePassword.

Summary: A user who is otherwise authorized to change their password may get this error when attempting to change their password to a string which violates the Password Policy in your IAM Account Settings.

So, I was setting up the 3rd or 4th user in a small team’s AWS account, and I did the usual: Go to the console, make a user, auto-generate a password for them, tick “force them to change their password on next login”, chat them the password and an admonishment to change it ASAP.

It’s a compromise between convenience and security that works for us at the moment, since there’s all of about 10 minutes during which the throwaway credential could get intercepted by an attacker, and I’d have the instant feedback of “that didn’t work” if anyone but the intended recipient performed the password change.

So, the 8th or 10th user I’m setting up, same way as all the others, gets that error on the change password screen: “User is not authorized to perform iam:ChangePassword”. Oh no, did I do their permissions wrong? I try explicitly attaching Amazon’s IAMUserChangePassword policy to them, because that should fix their not being authorized, right? Wrong; they try again and they’re still “not authorized”.

OK, I have their temp password because I just gave it to them, so I’ll pop open private browsing and try logging in as them.

When I try putting in the same autogenerated password at the reset screen, I get “Password does not conform to the account password policy.”. This makes sense; there’s a “prevent password reuse” policy enabled under Account Settings within IAM.

OK, we won’t reuse the password. I’ll just set it to that most seekrit string, “hunter2”. Nope, the “User is not authorized to perform iam:ChangePassword” is back. That’s funny, but consistent with the rules just being checked in a slightly funny order.

Then, on a hunch, I try the autogenerated password with a 1 at the end as the new password. It changes just fine and allows me to log in! So, the user did have authorization to change their password all along... they were just getting an actively misleading error message about what was going wrong.

So, if you get this “User is not authorized to perform iam:ChangePassword” error but you should be authorized, take a closer look at the temporary password that was generated for you. Make sure that your new password matches or exceeds the old one for having lowercase letters, uppercase letters, numbers, special characters, and total length.
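
If you have AWS CLI credentials handy, and your user may call iam:GetAccountPasswordPolicy, you can read the exact policy you’re trying to satisfy rather than guessing at it. The command is standard AWS CLI; the policy values below are made up for illustration:

user@workstation:~$ aws iam get-account-password-policy
{
    "PasswordPolicy": {
        "MinimumPasswordLength": 12,
        "RequireSymbols": true,
        "RequireNumbers": true,
        "RequireUppercaseCharacters": true,
        "RequireLowercaseCharacters": true,
        "PasswordReusePrevention": 5
    }
}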

When poking at it some more, I discovered that one also gets the “User is not authorized to perform iam:ChangePassword” message when one puts an invalid value into the “current password” box on the change password screen. So, check for typos there as well.

It took about an hour of yak shaving to pin down the fact that it was the contents of the password string generating the permissions error, and I haven’t been able to find the error string in any of Amazon’s actual documentation, so hopefully I’ve said “User is not authorized to perform iam:ChangePassword” enough times in this post that it pops up in search results for anyone else frustrated by the same challenge.

]]>
Wed, 29 Nov 2017 00:00:00 -0800
http://edunham.net/2017/11/01/better_remote_teaming_with_distributed_standups.html http://edunham.net/2017/11/01/better_remote_teaming_with_distributed_standups.html <![CDATA[Better remote teaming with distributed standups]]> Better remote teaming with distributed standups

The daily stand-up meeting, an artifact of agile development, is a great idea. In theory, the whole team should stand together (sitting or eating makes meetings take too long) for about 5 minutes every morning. Each person should comment on:

  • What they did since yesterday
  • What they plan on doing today
  • Any blockers, things they’re waiting on to be able to get work done
  • Anything else

And then, 5 minutes later, everybody gets back to work. But do they really?

Problems with in-person standups

When I’ve participated in stand-up meetings in person, I’ve noticed a few major flaws:

  • Context switching into and out of the meeting impacts a maker’s schedule by substantially more than the meeting’s planned 5 minutes.
  • People naturally tend to problem-solve during the meeting, and overcoming this urge to be helpful can be difficult and frustrating. However, allowing this problem-solving is a waste of most attendees’ time and can drag the meeting out to over an hour if left unchecked.
  • The content of the meeting isn’t recorded. If I run into an issue that I think I recall Jane saying she was blocked on last week, I can either interrupt her work to ask her about it, send her an email and wait for her to reply, or just fight it myself. I can’t look it up anywhere.
  • If a team decides to keep notes, this “office housework” may be distributed inequitably among team members. Or if everyone takes turns taking notes, well... not everyone is necessarily skilled at note-taking, so there’s little guarantee that the notes will be consistently useful.

When an international team decides to pursue standup meetings through a synchronous medium like a phone or video call, it keeps all of these drawbacks while adding the problem of time zones. Let’s say your San Francisco-based company holds your daily standup at the perfectly sensible hour of 10am. Colleagues in New York City may love this, as it’s 1pm their time so they have plenty of time to prepare for the meeting. But your “perfectly sensible” 10am standup isn’t so reasonable internationally: it’s 5pm for a colleague in London, 6pm in Paris, 6am the next day in Auckland, and sleep-worthy hours like 4am the next day in Sydney and 2am, also the next day, in Tokyo.

Is demanding that some team members stay late at the office every day at the expense of family and personal commitments, or wake up before sunrise, the way that you want to treat your team? Is making a request like this, which disproportionately impacts your international colleagues, consistent with your values?

The better way: Robots!

If you’re familiar with my talks about community automation, you won’t be surprised at my excitement to share another robot which makes life better.

A team I’m on has recently started using a Slack app called Geekbot to perform asynchronous, inherently logged, on-task standup meetings. The only thing special about Geekbot is that somebody else has already done the coding, testing, and debugging – if your team uses IRC or another chat client, the basic “ask questions of each team member and post their answers, once per day” functionality is trivial to implement on any extensible platform.
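
Here’s a minimal sketch of that “ask questions, post answers” loop in Python, to show how little logic is involved. The chat transport is stubbed out with print() and input(), and the roster and questions are invented for illustration; a real bot would wire these two functions up to your chat platform’s API and run once per day at each person’s chosen time:

import datetime

QUESTIONS = [
    "What did you do since yesterday?",
    "What do you plan on doing today?",
    "Any blockers?",
]

TEAM = ["alice", "bob"]  # hypothetical teammate nicks

def ask_member(nick):
    # A real bot would PM each question; this sketch reads answers from stdin.
    print("[PM to %s] Standup time!" % nick)
    return [(q, input("[%s] %s " % (nick, q))) for q in QUESTIONS]

def post_to_channel(nick, answers):
    # Publish the answers somewhere the whole team can read and search them.
    print("#standup | %s report from %s:" % (datetime.date.today(), nick))
    for question, answer in answers:
        print("#standup |   %s %s" % (question, answer))

for member in TEAM:
    post_to_channel(member, ask_member(member))

The code really is the easy part; as the rest of this post discusses, the culture around the bot is what makes or breaks the meeting.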

These distributed, asynchronous standups are the best standup meetings I’ve ever participated in. Why?

  • They stay on task. You answer specific questions to the robot in PM, the robot posts your answers to a channel, and anyone who wants to chat about what you said has to do so outside the main thread of that conversation.
  • They’re asynchronous. If we added colleagues in any time zone, they could configure the bot to ping them at a time they find convenient, and the rest of the team could still keep up with their progress as long as everyone keeps reporting roughly every 24 hours.
  • Others’ updates minimize interruption. Rather than dropping what I’m doing to take a call for a meeting, I can use my ordinary chat-multitasking skills to read my colleagues’ updates while waiting on another task to complete.
  • They’re self-recording. I can look back after lunch and see what I claimed I’d get done today; I can search my chat logs for “who was working on that component?”. I strongly prefer to answer easy questions like this myself instead of interrupting others – I save that for the difficult, interesting questions – so I deeply appreciate this ability to solve my own problems with the meeting’s inherently perfect notes.

Basically, these robot-powered, distributed standup check-ins are showing all of the benefits, and none of the major drawbacks, of in-person standup meetings.

The catch: Culture

Why is it working? We’ve had similar standup type platforms before, and they work poorly if at all. I believe these standups are working better than prior attempts to automate the process for 2 main reasons:

  • The standups are performed in a convenient location. Rather than having to remember to log into some service which exists only for the standup, it comes to you in the chat medium where you were doing the rest of your team communication.
  • The team’s culture values filling them out. If someone skips a standup, others will ask them where they were or what was going on. If you added a bot without adding a culture of appreciating standups, everyone would simply ignore or block the bot and nothing would change.

So, assess how your standups are working. Do they take your target amount of time and stay focused? Can you refer to their contents later as you need to? If they could use improvement, it’s worth investigating how a robot could help you out.

]]>
Wed, 01 Nov 2017 00:00:00 -0700
http://edunham.net/2017/10/05/saying_ping.html http://edunham.net/2017/10/05/saying_ping.html <![CDATA[Saying Ping]]> Saying Ping

There’s an idiom on IRC, and to a lesser extent other more modern communication media, where people indicate interest in performing a real-time conversation with someone by saying “ping” to them. This effectively translates to “I would like to converse with you as soon as you are available”.

The traditional response to “ping” is to reply with “pong”. This means “I am presently available to converse with you”.

If the person who pinged is not available at the time that the ping’s recipient replies, what happens? Well, as soon as they see the pong, they re-ping (either by saying “ping” or sometimes “re-ping” if they are impersonating a sufficiently complex system to hold some state).

This attempt at communication, like “phone tag”, can continue indefinitely in its default state.

It is an inefficient use of both time and mental overhead, since each missed “ping” leaves the recipient with a vague curiosity or concern: “I wonder what the person who pinged wanted to talk to me about...”. Additionally, even if both parties manage to arrange synchronous communication at some point in the future, there’s the very real risk that the initiator may forget why they originally pinged at all.

There is an extremely simple solution to the inefficiency of waiting until both parties are online, which is to stick a little metadata about your question onto the ping. “Ping, could you look at issue #xyz?” “Ping, can we chat about your opinions on power efficiency sometime?”. And yet there appears to be a decent correlation between people I regard as knowing more than I do about IRC etiquette, and people who issue pings without attaching any context to them.

If you do this, and happen to read this, could you please explain why to me sometime?

]]>
Thu, 05 Oct 2017 00:00:00 -0700
http://edunham.net/2017/07/28/resumes_1_page_or_more.html http://edunham.net/2017/07/28/resumes_1_page_or_more.html <![CDATA[Resumes: 1 page or more?]]> Resumes: 1 page or more?

Some of my IRC friends are job hunting at the moment, so I’ve been proofreading resumes. These friends are several years into their professional careers at this point, and I’ve found it really interesting to see what they include and exclude to make the best use of their resumes’ space.

No wasted space

I’ve also stumbled into a rule of thumb that I like a lot better than the “1 page rule”: The rule of No Wasted Space.

If you spread 1 page worth of stuff across 2 pages, you’re wasting a page worth of space. This forces any reader to spend 2 pages worth of time on your 1 page worth of content, which is an act of disrespect to them.

Big blank areas are a waste of space if they push your resume to more pages than it needs to be. On the other side of that same coin, however, a bunch of cramped small text wastes its space if it sacrifices easy legibility and scannability for the sake of cramming everything relevant onto a single sheet. And excess words where fewer would have communicated just as well are a waste of ink, as well as interviewer time!

Number those pages

Resumes that run to multiple pages are vastly improved by a little note on the corner of each: “Surname, Page X of Y”.

“I existed!”

I’m still surprised by how often people share awards and titles as “got X award”, “was President of Organization”. I hold the opinion that identical achievements sound much cooler with active rather than passive verbs: “Earned X award”, “Led Organization”, etc.

Stuff in more numbers

I’m a little annoyed by whatever cognitive bias is in play with this one: People sound better at what they do when they stuff more numbers into their descriptions, even when the numbers aren’t objectively very useful or necessary. “Tutored students” vs “Tutored 30 students”, “Improved performance” vs “Doubled performance”, etc. This is another compelling reason to instrument your systems and measure them before and after making major changes, which is something I personally need to improve at work as well :)

The Mirror Trick

When you’ve been staring at a document for hours, it’s really hard to take a step back and tell what kind of first impression its formatting is going to make. To counter this, make your resume in the format you expect an interviewer to see it. Printed out or fullscreened on a laptop are common choices. Then either make your computer flip it horizontally, or just hold it up to a mirror and look at the reflection. This makes it look just new enough that you can spot glaring formatting errors instead of just reading the individual words – “What’s that enormous white space doing there?”, “Wait why is that indented to a different level from everything else?”.

]]>
Fri, 28 Jul 2017 00:00:00 -0700
http://edunham.net/2017/06/27/internet_safety.html http://edunham.net/2017/06/27/internet_safety.html <![CDATA[Opinion: Levels of Safety Online]]> Opinion: Levels of Safety Online

The Mozilla All-Hands this week gave me the opportunity to explore an exhibit about the “Mozilla Worldview” that Mitchell Baker has been working on. The exhibit sparked some interesting and sometimes heated discussion (as direct result of my decision to express unpopular-sounding opinions), and helped me refine my opinions on what it means for someone to be “safe” on the internet.

Spoiler: I think that there are many different levels of safety that someone can have online, and the most desirable ones are also the most difficult to attain.

Obligatory disclaimer: These are my opinions. You’re welcome to think I’m wrong. I’d be happy to accept pull requests to this post adding tools for attaining each level of safety, but if you’re convinced I’m wrong, the best place to say that would be your own blog. Feel free to drop me a link if you do write up something like that, as I’d enjoy reading it!

Safety to Consume Desired Information

I believe that the fundamental layer of safety that someone can have online is to be able to safely consume information. Even at this basic level, a lot of things can go wrong. To safely consume information, people need internet access. This might mean free public WiFi, or a cell phone data plan. Safety here means that the user won’t come to harm solely as a result of what they choose to learn. “Desired information” means that the person gets a chance to find the “best” answer to their question that’s available.

How could someone come to harm as a result of choosing to learn something? If you’ve ever joked about a particular search getting you “put on a watch list”, I’m sure you can guess. I happen to hold the opinion that knowledge is an amoral tool, and it’s the actions that people take for which they should be held accountable – if you believe that there exist facts that are inherently unethical to know, we’ll necessarily differ on the importance of this safety.

How might someone fail to get the information they desired? Imagine someone searching for the best open source social networking tools on a “free” internet connection that’s provided and monitored by a social networking giant. Do you think the articles that turn up in their search results would be comparable to what they’d get on a connection provided by a less biased organization?

Why “desired information”, and not “truth”? My reason here is selfish. I enjoy learning about different viewpoints held by groups who each insist that the other is completely wrong. If somebody tried to moderate what information is “true” and what’s “false”, I would probably only be allowed to access the propaganda of at most one of those groups.

Sadly, if your ISP is monitoring your internet connection or tampering with the content you’re trying to view, there’s not a whole lot that you can do about it. The usual solution is to relocate – either physically, or feign relocation by using an onion router or proxy. By building better tools, legislation, and localization, it’s plausible that we could extend this safety to almost everyone in the world within our lifetimes.

Safety to Produce Information Anonymously

I think the next layer of internet safety that people need is the ability to produce information anonymously. The caveat here is that, of course, nobody else is obligated to choose to host your content for you. The safety of hosting providers, especially coupled with their ability to take financial payment while maintaining user anonymity, is a whole other can of worms.

Why does producing information anonymously come before producing information with attribution? Consider the types of danger that accompany producing content online. Attackers often choose their victims based on characteristics that the victims have in the physical world. Attempted attacks often cause harm because the attacker could identify the victim’s physical location or social identity. While the best solution would of course be to prevent the attackers from behaving harmfully at all, a less ambitious but more attainable project is to simply prevent them from being able to find targets for their aggression. Imagine an attacker determined to harm all people in a certain group, on an internet where nobody discloses whether or not they’re a member of that group: The attacker is forced to go for nobody or everybody, neither of which is as effective as an individually targeted attack. And that’s just for verbal or digital assaults – it is extremely difficult to threaten or enact physical harm upon someone whose location you do not know.

Systems that support anonymity and arbitrary account creation open themselves to attempted abuse, but they also provide people with extremely powerful tools to avoid being abused. There are of course tradeoffs – it takes a certain amount of mental overhead, and might feel duplicitous, to use separate accounts for discussing your unfashionable political views and planning the local block party – but there’s no denying how much less harm it is possible to come to when behaving anonymously than when advertising your physical identity and location.

How do you produce information anonymously? First, you access the internet in a way that won’t make it easy to trace your activity to your house. This could mean booting from a LiveCD and accessing a public internet connection, or booting from a LiveCD and using a proxy or onion router to connect to the sites you wish to access in order to mask your IP address. A LiveCD is safer than using your day-to-day computer profile because browsers store information from sites you visit, and some information about your operating system is sometimes visible to sites you visit. Using a brand-new copy of your operating system, which forgets everything when you shut down, is an easy way to avoid revealing those identifying pieces of information.

Proofread anything that you want to post anonymously to make sure it doesn’t contain any details about where you live, or facts that only someone with your experiences would know.

How do you put information online anonymously? Once you have a connection that’s hard to trace to your real-world self, it’s pretty simple to set up free accounts on mail and web hosting sites under some placeholder name.

Be aware that the vocabulary you use and the way you structure your sentences can sometimes be identifying, as well. A good way to strip all of the uniqueness from your writing voice is to run a piece of writing through http://hemingwayapp.com/ and fix everything that it calls an error. After that, use a thesaurus to add some words you don’t usually use anywhere else. Alternately, you could run it through a couple different translation tools to make it sound less like you wrote it.

How do you share something you wrote anonymously with your friends? Here’s the hard part: You don’t. If you’re not careful, the way that you distribute a piece of information that you wrote anonymously can make it clear that it came from you. Anonymously posted information generally has to be shared publicly or to an entire forum, because to pick and choose exactly which individuals get to see a piece of content reveals a lot about the identity of the person posting it.

Doing these things can enable you to produce a piece of information on the internet that would be a real nuisance to trace back to you in real life. It’s not impossible, of course – there are sneaky tricks like comparing the times when you use a proxy to the times when material shows up online – but someone would only attempt such tricks if they already had a high level of technical knowledge and a grudge against you in particular.

Long story short, in most places with internet access, it is possible but inconvenient to exercise your safety to produce information anonymously. By building better online tools and hosting options, we can extend this safety to more people who have internet access.

Safety to Produce Information Pseudonymously

An important thing to note about producing information anonymously is that if you step up and take credit for another piece of information you posted, you’re less anonymous. Add another attribution, and you’re easier still to track. It’s most anonymous to produce every piece of information under a different throwaway identity, and least anonymous to produce everything under a single identity even if it’s made up.

Producing information pseudonymously is when you use a fake name and biography, but otherwise go about the internet as the same person from day to day. The technical mechanics of producing a single pseudonymous post are identical to what I described for acting “anonymously”, but I differentiate pseudonymity from anonymity in that the former is continuous – you can form friendships with other humans under a pseudonym.

The major hazard to a pseudonymous online presence is that if you aggregate enough details about your physical life under a single account, someone reading all those details might use them to figure out who you are offline. This is addressed by private forums and boards, which limit the number of possible attackers who can see your posts, as well as by being careful of what information you disclose. Beware, however, that any software vulnerability in a private forum may mean its contents suddenly become public.

In my opinion, pseudonymous identity is an excellent compromise between the social benefits of always being the same person, and physical safety from hypothetical attackers. I find that behaving pseudonymously rather than anonymously helps me build friendships with people whom I’m not sure at first whether to trust, while maintaining a sense of accountability for my reputation that’s absent in strictly anonymous communication. But hey, I’m biased – you probably don’t know my full name or home address from my web presence, so I’m on the pseudonymity spectrum too.

Safety to Produce Information with Accurate Attribution

The “safety” to produce information with attribution is extremely complex, and the one on which I believe most social justice advocates tend to focus. It is as it sounds: Will someone come to harm if they choose to post their opinions and location under their real name?

For some people, this is the easiest safety to acquire: If you’re in a group that’s not subject to hate crimes in your area, and your content is only consumed by people who agree with you or feel neutrally toward your views, you have this freedom by default.

For others, this safety is almost impossible to obtain. If the combination of your appearance and the views you’re discussing would get you hurt if you said it in public, extreme social change would be required before you had even a chance at being comparably safe online.

I hold the opinion that solving the general case of linking created content to real-world identities is not a computer problem. It’s a social problem, requiring a world in which no person offended by something on the internet and aware of where its creator lives is physically able to take action against the content’s creator. So it’d be great, but we are not there yet, and the only fictional worlds I’ve encountered in which this safety can be said to exist are either impossibly unrealistic or totalitarian dystopias, or both.

In Summary

In other words, I view misuse of the internet as a pattern of the form “Creator posts content -> attacker views content -> attacker identifies creator -> attacker harms creator”. This chain can break, with varying degrees of difficulty, at several points:

First, this chain of outcomes won’t begin if the creator doesn’t post the content at all. This is the easiest solution, and I point out the “safety to consume desired content” because even someone who never posts online can derive major benefits from the information available on the internet. It’s easy, but it’s not good enough: Producing as well as consuming content is part of what sets the internet apart from TV or books.

The next essential link in the chain is the attacker identifying the content’s creator. If someone has no way to contact you physically or digitally, all they can do is shout nasty things to the world online, and you’re free to either ignore them or shout right back. Having nasty things shouted about your work isn’t optimal, but it is difficult to feel that your physical or social wellbeing is jeopardized by someone when they have no idea who you are. This is why I believe that the safety to produce information anonymously is so important: It uses software to change the outcome even in circumstances where the attacker’s behavior cannot be modified. Perfect pseudonymity also breaks this link, but any software mishap or accidental over-sharing can invalidate it instantly. The link is broken with fewer potential points of failure by creating content anonymously.

The third solution is what I alluded to when discussing the safety of pseudonymity: Prevent the attacker from viewing the content. This is what private, interest-specific forums accomplish reasonably well. There are hazards here, especially if a forum’s contents become public unintentionally, or if a dedicated attacker masquerades as a member of the very group they wish to harm. So it helps, and can be improved technologically through proper security practices by forum administrators, and socially via appropriate moderation. It’s better, from the perspective that assuming the same online identity each day allows creators to build social bonds with one another, but it’s still not optimal.

The fourth and ideal solution is to break the cycle right at the very end, by preventing the attacker from harming the content creator. This seems to be where most advocates argue we should jump straight into, because it’s really perfect – it requires no change or compromise from content creators, and total change from those who might be out to harm them. It’s the only solution in which people of all appearances and beliefs and locations are equally safe online. However, it’s also the most difficult place to break the cycle, and a place at which any error of implementation would create the potential for incalculable abuse.

I’ve listed these safeties in an order that I regard as how feasible they are to implement with today’s social systems and technologies. I think it’s possible to recognize the 4th safety as the top of the heap, without using that as an excuse to neglect the benefits which can come from bringing more of the world its 3 lesser but far more attainable cousins.

]]>
Tue, 27 Jun 2017 00:00:00 -0700
http://edunham.net/2017/05/23/salt_successful_ping_but_highstate_says_minion_did_not_return.html http://edunham.net/2017/05/23/salt_successful_ping_but_highstate_says_minion_did_not_return.html <![CDATA[Salt: Successful ping but highstate says "minion did not return"]]> Salt: Successful ping but highstate says “minion did not return”

Today I was setting up some new OSX hosts on Macstadium for Servo’s build cluster. The hosts are managed with SaltStack.

After installing all the things, I ran a test ping and it looked fine:

user@saltmaster:~$ salt newbuilder test.ping
newbuilder:
    True

However, running a highstate caused Salt to claim the minion was non-responsive:

user@saltmaster:~$ salt newbuilder state.highstate
newbuilder:
    Minion did not return. [No response]

Googling this problem yielded a bunch of other “minion did not return” kinds of issues, but nothing about what to do when the minion sometimes returns fine and other times does not.

The fix turned out to be simple: When a test ping succeeds but a longer-running state fails, it’s an issue with the master’s timeout setting. The timeout defaults to 5 seconds, so a sufficiently slow job will look to the master like the minion was unreachable.

As explained in the Salt docs, you can bump the timeout by adding the line timeout: 30 (or whatever number of seconds you choose) to the file /etc/salt/master on the salt master host.
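
For example, after the change the master config would contain something like the following (30 seconds is an arbitrary value that comfortably exceeds a slow highstate; pick one that suits your jobs). Remember to restart the salt-master service afterward so the new setting takes effect:

# /etc/salt/master
timeout: 30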

]]>
Tue, 23 May 2017 00:00:00 -0700
http://edunham.net/2017/03/01/advice_on_storing_encryption_keys.html http://edunham.net/2017/03/01/advice_on_storing_encryption_keys.html <![CDATA[Advice on storing encryption keys]]> Advice on storing encryption keys

I saw an excellent question get some excellent infosec advice on IRC recently. I’m quoting the discussion here because I expect that I’ll want to reference it when answering others’ questions in the future.

A user going by Dagnabit asked:

May I borrow some advice specifically on how best to store an ecryption key? I have a python script that encrypts files using libsodium, My question is how can I securely store the encryption key within the file system? Is it best kept in an sqlite db that can only be accessed by the user owning the python script?

This user has run right into one of the fundamental challenges of security: How can my secrets (in this case, keys) be safe from attackers, while still being usable?

HedgeMage replied with a wall of useful advice. Quotations are her words, links and annotations between them are me adding some context and opinions.

So, it depends on your security model: in most cases I’m prone to keeping my encryption key on a hardware token, so that even if the server is compromised, the secret key is not.

You’re probably familiar with time-based one-time-password hardware tokens, but in the case of key management, the “hardware token” could be as simple as a USB stick locked in a safe. On the spectrum of compromise between security and convenience, a hardware token is toward the DNSSEC keyholder end.

However, for some projects you are on virtualized infrastructure and can’t plug in a hardware token. It’s unfortunate, because that’s really the safest thing, but a reality for many of us.

This also applies to physical infrastructure in which an application might need to use a key without human supervision.

Without getting into anything crazy where a proxy server does signing, etc, you usually are better off trusting filesystem permissions than stuffing it in the database, for the following reasons:

While delegating the task of signing to a proxy server can make life more annoying to an attacker, you’re still going to have to choose between having a human hold the key and get interrupted whenever it’s needed, or trusting a server with it, at some point. You can compromise between those two extremes by using a setup like subkeys, but it’s still inconvenient if a subkey gets compromised.

  • It’s easier to monitor the filesystem activity comprehensively, and detect intrusions/compromises.

  • Filesystem permissions are pretty dependable at this point, and if the application doing the signing has permission for the key, whether in a DB or the filesystem, it can compromise that key... so the database is giving you new attack surfaces (compromise of the DB access model) without any new protections.

To put it even more bluntly, any unauthorized access to a machine has the potential to leak all of the secrets on it. The actions that you’ll need to take if you suspect the filesystem of a host was compromised are pretty much identical to those you’d take if the DB was.

  • Stuffing the key in the DB is nonstandard enough that you may be writing more of the implementation yourself, instead of depending as much as possible on widely-used, frequently-examined code.
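
To make the filesystem-permissions approach concrete, here’s a minimal sketch; the account name and key path are hypothetical, and your distro’s useradd flags may differ:

# create an unprivileged service account to own the key
sudo useradd --system --shell /usr/sbin/nologin signer
# make the key unreadable to everyone but that account
sudo chown signer:signer /etc/myapp/signing.key
sudo chmod 600 /etc/myapp/signing.key

The signing application then runs as signer, and any other compromised account on the box still can’t read the key – though as noted above, a root compromise leaks everything regardless.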

Dagnabit’s reply saved me the work of summarizing the key takeaways:

I will work on securing the distribution and removing any unnecessary packages.

I’ll look at the possibility of using a hardware token to keep it secure/private.

Reducing the attack surface is logical and something I had not considered.

]]>
Wed, 01 Mar 2017 00:00:00 -0800
http://edunham.net/2017/01/23/tech_internship_hunting_ideas.html http://edunham.net/2017/01/23/tech_internship_hunting_ideas.html <![CDATA[Tech Internship Hunting Ideas]]> Tech Internship Hunting Ideas

A question from a computer science student crossed one of my IRC channels recently:

Them: what is the best way to fish for internships over the summer?
    Glassdoor?

Me: It depends on what kind of internship you're looking for. What kind of
    internship are you looking for?

Them: Computer Science, anything really.

This caused me to type out a lot of advice. I’ll restate and elaborate on it here, so that I can provide a more timely and direct summary if the question comes up again.

Philosophy of Job Hunting

My opinion on job hunting, especially for early-career technologists, is that it’s important to get multiple offers whenever possible. Only once one has a viable alternative can one be said to truly choose a role, rather than being forced into it by financial necessity.

In my personal experience, cultivating multiple offers was an important step in disentangling impostor syndrome from my career choices. Multiple data points about one’s skills being valued by others can help balance out an internal monologue about how much one has yet to learn.

If you disagree that cultivating simultaneous opportunities then politely declining all but the best is a viable internship hunting strategy, the rest of this post may not be relevant or interesting to you.

Identifying Your Options

To get an internship offer, you need to make a compelling application to a company which might hire you. I find that a useful first step is to come up with a list of such companies, so you can study their needs and determine what will make your application interest them.

Use your social network. Ask your peers about what internships they’ve had or applied for. Ask your mentors whether they or their friends and colleagues hire interns.

When you ask someone about their experience with a company, remember to ask for their opinion of it. To put that opinion into perspective, it’s useful to also ask about their personal preferences for what they enjoy or hate about a workplace. Knowing that someone who prefers to work with a lot of background noise enjoyed a company’s busy open-plan office can be extremely useful if you need silence to concentrate! Listening with interest to a person’s opinions also strengthens your social bond with them, which never hurts if it turns out they can help you get into a company that you feel might be a good fit.

Use LinkedIn, Hacker News, Glassdoor, and your city’s job boards. The broader a net you cast to start with, the better your chances of eventually finding somewhere that you enjoy. If your job hunt includes certain fields (web dev, DevOps, big data, whatever), investigate whether there’s a meetup for professionals in that field in your region. If you have the opportunity to give a short talk on a personal project at such a meetup, do it and make sure to mention that you’re looking for an internship.

Identify your own priorities

Now that you have a list of places which might conceivably want to hire you, it’s time to do some introspection. For each field that you’ve found a prospective company in, try to answer the question “What makes you excited about working here?”.

You do not have to know what you want to do with your life to know that, right now, you think DevOps or big data or frontend development is cool.

You do not have to personally commit to a single passion at the expense of all others – it’s perfectly fine to be interested in several different languages or frameworks, even if the tech media tries to pit them against each other.

However, for each application, it’s prudent to only emphasize your interests in that particular field. It’s a bit of a faux pas to show up to a helpdesk interview and focus the whole time on your passion for building robots, or vice versa. And acting equally interested in every other field will cause an employer to doubt that you’re necessarily the best fit for a specialized role... So in an interview, try not to stray too far from the value that you’re able to deliver to that company.

This is also a good time to identify any deal-breakers that would cause you to decline a prospective internship. Are you ok with relocating? Is there some tool or technology that would cause you to dread going to work every day?

I personally think that it’s worth applying even to a role that you know you wouldn’t accept an offer from when you’re early in your career. If they decide to interview you, you’ll get practice experiencing a real interview without the pressure of “I’ll lose my chance at my dream job if I mess this up!”. Plus if they extend an offer to you, it can help you calibrate the financial value of your skills and negotiate with employers that you’d actually enjoy.

Craft an excellent resume

I talk about this elsewhere.

There are a couple extra notes if you’re applying for an internship:

1) Emphasize the parts of your experience that relate most closely to what each employer values. If you can, it’s great to use the same words for skills that were used in the job description.

2) The bar for what skills go on your resume is lower when you have less experience. Did you play with Docker for a weekend recently and use it to deploy a toy app? Make sure to include that experience.

Practice, Practice, Practice

If you’re uncomfortable with interviewing, do it until it becomes second nature. If your current boss supports your internship search, do some mock interviews with them. If you’re nervous about things going wrong, have a friend roleplay as a really bad interview with you to help you practice coping strategies. If you’ll be in front of a panel of interviewers, try to get a panel of friends to gang up on you and ask tough questions!

To some readers this may be obvious, but to others it’s worth pointing out that you should also practice wearing the clothes that you’ll wear to an interview. If you wear a tie, learn to tie it well. If you wear shirts or pants that need to be ironed, learn to iron them competently. If you wear shoes that need to be shined, learn to shine them. And if your interview will include lunch, learn to eat with good table manners and avoid spilling food on yourself.

Yes, the day-to-day dress codes of many tech offices are solidly in the “sneakers, jeans, and t-shirt” category for employees of all levels and genders. But many interviewers, especially mid- to late-career folks, grew up in an age when dressing casually at an interview was a sign of incompetence or disrespect. Although some may make an effort to overcome those biases, the subconscious conditioning is often still there, and you can take advantage of it by wearing at least business casual.

Apply Everywhere

If you know someone at a company where you’re applying, try to get their feedback on how you can tailor your resume to be the best fit for the job you’re looking at! They might even be able to introduce you personally to your potential future boss.

I think it’s worth submitting a good resume to every company which you identify as being possibly interested in your skills, even the ones you don’t currently think you want to work for. Interview practice is worth more in potential future salary than the hours of your time it’ll take at this point in your career.

Follow Up

If you don’t hear back from a company for a couple of weeks, a polite note is in order. Restate your enthusiasm for their company or field, express your understanding that there are a lot of candidates and everyone is busy, and politely solicit any feedback that they may be able to offer about your application. A delayed reply does not always mean rejection.

If you’re rejected, follow up to thank HR for their time.

If you’re invited to interview, reply promptly and set a time and date. For a virtual or remote interview, only offer times when you’ll have access to a quiet room with a good network connection.

Interview Excellently

I don’t have any advice that you won’t find a hundred times over on the rest of the web. The key points are:

  • Show up on time, looking respectable
  • Let’s hope you didn’t lie on your resume
  • Restate each question in your answer
  • It’s ok not to know an answer – state what you would do if you encountered the problem at work. Would you Google a certain phrase? Ask a colleague? Read the manual?
  • Always ask questions at the end. When in doubt, ask your interviewer what they enjoy about working for the company.

Keep Following Up

After your interview, write to whoever arranged it and thank the interviewers for their time. For bonus points, mention something that you talked about in the interview, or include the answer to a question that you didn’t know off the top of your head at the time.

Getting an Offer

Recruiters don’t usually like to disclose the details of offers in writing right away. They’ll often phone you to talk about it. You do not have to accept or decline during that first call – if you’re trying to stall for a bit more time for another company to get back to you, an excuse like “I’ll have to run that by my family to make sure those details will work” is often safe.

Remember, though, that no offer is really a job until both you and the employer have signed a contract.

Declining Offers

If you’ve applied to enough places with a sufficiently compelling resume, you’ll probably have multiple offers. If you’re lucky, they’ll all arrive around the same time.

If you wish to decline an offer from a company whom you’re certain you don’t want to work for, you can practice your negotiation skills. Read up on salary negotiation, try to talk the company into making you a better offer, and observe what works and what doesn’t. It’s not super polite to invest a bunch of their time in negotiations and then turn them down anyway, which is why I suggest only doing this to a place that you’re not very fond of.

To decline an offer without burning any bridges, be sure to thank them again for their time and regretfully inform them that you’ll be pursuing other opportunities at this time. It never hurts to also do them a favor like recommending a friend who’s job hunting and might be a good fit.

Again, though, don’t decline an offer until you have your actual job’s contract in writing.

]]>
Mon, 23 Jan 2017 00:00:00 -0800
http://edunham.net/2016/09/27/rust_s_community_automation.html http://edunham.net/2016/09/27/rust_s_community_automation.html <![CDATA[Rust's Community Automation]]>

Rust’s Community Automation

Here’s the text version, with clickable links, of my Automacon lightning talk today.

Intro

I’m a DevOps engineer at Mozilla Research and a member of the Rust Community subteam, but the conclusions and suggestions in this talk are my own observations and opinions.

The slides are a result of trying to write my own CSS for sliderust... Sorry about the ugliness.

I have 10 minutes, so this is not the time to teach you any Rust. Check out rust-lang.org, the Rust Community Resources, or your city’s Rust meetup to get started with the language.

What we are going to cover is how Rust automates some community tasks, and what you can learn from our automation.

Community

I define “community”, in this context, as “the human interaction work necessary for a project’s success”. This work is done by a wide variety of people in many situations. Every interaction, from helping a new contributor to discussing a proposed code change to criticizing someone’s behavior, affects the overall climate of a project’s community.

Automation

To me, “automation” means “offloading people’s work onto a system”. This can be a computer system, but I think it can also mean changes to the social systems that guide people’s behavior.

So, community automation is a combination of:

  • Building tools to do things that humans used to have to do
  • Tweaking the social systems to minimize the overhead they create

Scoping the Problem

While not all things can be automated and not all factors of the community are under the project leadership’s control, it’s not totally hopeless.

Choices made and automation deployed by project leaders can help control:

  • Which contributors feel welcome or unwelcome in a project
  • What code makes it into the project’s tree
  • Robots!

Moderation

Our robots and social systems to improve workflow and contributor experience all rely on community members’ cooperation. To create a community of people who want to work constructively together and not be jerks to each other, Rust defines behavior expectations in its code of conduct. The important thing to note about the CoC is that half the document is a clear explanation of how the policies in it will be enforced. This would be impossible without the dedication of the amazing mod team.

The process of moderation cannot and should not be left to a computer, but we can use technology to make our mods’ work as easy as possible. We leave the human tasks to humans, but let our technologies do the rest.

In this case, while the mods need to step in when a human has a complaint about something, we can automate the process of telling people that the rules exist. You can’t join the IRC channel, post on the Discourse forums, or even read the Rust subreddit without being made aware that you’re expected to follow the CoC’s guidelines in official Rust spaces.

Depending on the forums where your project communicates, try to automate the process of excluding obvious spammers and trolls. Not everybody has the skills or interest to be an excellent moderator, so when you find them, avoid wasting their time on things that a computer could do for them!

It didn’t fit in the talk, but this Slashdot post is one of my favorite examples of somebody being filtered out of participating in the Rust community due to their personal convictions about how project leadership should work. While we do miss out on that person’s potential technical contributions, we also save all of the time that might be spent hashing out our disagreements with them if we had a less clear set of community guidelines.

Robots

This lightning talk highlighted 4 categories of robots:

  • Maintaining code quality
  • Engaging in social pleasantries
  • Guiding new contributors
  • Widening the contributor pipeline

Longer versions of this talk also touch on automatically testing compiler releases, but that’s more than 10 minutes of content on its own.

The Not Rocket Science Rule of Software Engineering

To my knowledge, this blog post by Rust’s inventor Graydon Hoare is the first time that this basic principle has been put so succinctly:

Automatically maintain a repository of code that always passes all the tests.

This policy guides the Rust compiler’s development workflow, and has trickled down into libraries and projects using the language.

Bors

The name Bors has been handed down from Graydon’s original autolander bot to an instance of Homu, and is often verbed to refer to the simple actions he does:

  1. Notice when a human says “r+” on a PR
  2. Create a branch that looks like master will after the change is applied
  3. Test that branch
  4. Fast-forward the master branch to the tested state, if it passed.
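
Projects without a merge bot can approximate the same loop by hand; here’s a rough sketch in plain Git, where the branch names and the test command are placeholders:

# build what master will look like with the change applied
git checkout -B staging origin/master
git merge --no-ff contributor/feature-branch
# run the tests, and only advance master if they all pass
./run-all-tests.sh && git push origin staging:master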

Keep your tree green

Saying “we can’t break the tests any more” is a pretty significant cultural change, so be prepared for some resistance. With that disclaimer, the path to following the Not Rocket Science Rule is pretty simple:

  1. Write tests that fail when your code is bad and pass when it’s good
  2. Run the tests on every change
  3. Only merge code if it passes all the tests
  4. Fix the tests whenever they’re wrong.

This strategy encourages people to maintain the tests, because a broken test becomes everyone’s problem and interrupts their workflow until it’s fixed.

I believe that tests are necessary for all code that people work on. If the code was fully and perfectly correct, it wouldn’t need changes – we only write code when something is wrong, whether that’s “It crashes” or “It lacks such-and-such a feature”. And regardless of the changes you’re making, tests are essential for catching any regressions you might accidentally introduce.

Automating social pleasantries

Have you ever submitted an issue or change request to a project, then not heard back for several months? It feels bad to be ignored, and the project loses out on potential contributors.

Rust automates basic social pleasantries with a robot called Highfive. Her tasks are easy to explain, though the implementation details can be tricky:

  1. Notice when a change is submitted by a new contributor, then welcome them
  2. Assign reviewers, based on what code changed, to all PRs
  3. Nag the reviewer if they seem to have forgotten about their assignment

If you don’t want a dedicated greeter-bot, you can get many of these features from your code management system:

  • Use issue and pull request templates to guide potential contributors to the docs that can help them improve their report or request.
  • Configure notifications so you find out when someone is trying to interact with your project. This could mean muting all the noise notifications so the signal ones are available, or intermittently polling the repositories that you maintain (a daily cron job or weekly calendar reminder works just fine).
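
For example, on GitHub a pull request template is just a markdown file checked into the repository; a minimal hypothetical one might look like:

<!-- .github/PULL_REQUEST_TEMPLATE.md -->
Thanks for contributing! Before you submit, please:

- [ ] Run the test suite locally (see CONTRIBUTING.md)
- [ ] Summarize what the change does and why
- [ ] Link any issue this closes, e.g. “Closes #123”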

Guide new contributors

In open source projects, “I’m new; what can I work on?” is a common inquiry. In internal projects, you’ll often meet colleagues from elsewhere in your organization who ask you to teach them something about the project or the skills you use when working on it.

The Rust-implemented browser engine Servo is actually a slightly better example of this than the compiler itself, since the smaller and younger codebase has more introductory-level issues remaining. The site starters.servo.org automatically scrapes the organization’s issue trackers for easy and unclaimed issues.

Issue triage is often unrewarding, but using the tags for a project like this creates a greater incentive to keep them up to date.
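
If you don’t want to build a scraper, a saved search against those tags gets you most of the benefit. For instance, a GitHub issue search like the following surfaces unclaimed introductory work (E-easy is Rust’s label; substitute your own project’s convention):

is:issue is:open label:E-easy no:assignee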

When filing introductory issues, try to include links to the relevant documentation, instructions for reproducing the bug, and a suggestion of what file you would look in first if you tackled the problem yourself.

Automating mentorship

Mentorship is a highly personalized process in which one human transfers their skills to another. However, large projects often have more contributors seeking the same basic skills than mentors with time to teach them.

The parts of mentorship which don’t explicitly require a human mentor can be offloaded onto technology.

The first way to automate mentorship tasks is to maintain correct and up-to-date documentation. Correct docs train humans to consult them before interrupting an expert, whereas docs that are frequently outdated or wrong condition their users to skip them entirely.

Use tools like octohatrack and your project status updates to identify and recognize contributors who help with docs and issue triage. Docs contributions may actually save more developer and community time than new code features, so respect them accordingly.

Finally, maintain a list of introductory or mentored issues – even if that’s just a Google Doc or Etherpad.

Bear in mind that an introductory issue doesn’t necessarily mean “suitable for someone who has never coded before”. Someone with great skills in a scripting language might be looking for a place to help with an embedded codebase, or a UX designer might want to get involved with a web framework that they’ve used. Introductory issues should be clear about what knowledge a contributor should acquire in order to try them, but they don’t have to all be “easy”.

Automating the pipeline

Drive-by fixes are to being a core contributor as interviews are to full time jobs. Just as a company attempts to interview as many qualified candidates as it can, you can recruit more contributors by making your introductory issues widely available.

Before publicizing your project, make sure you have a CONTRIBUTING.txt or good README outlining where a new contributor should start, or you’ll be barraged with the same few questions over and over.

There are a variety of sites, which I call issue aggregators, where people who already know a bit about open source development can go to find a new project to work on. I keep a list on this page <http://edunham.net/pages/issue_aggregators.html>, pull requests welcome <https://github.com/edunham/site/blob/master/pages/issue_aggregators.rst> if I’m missing anything. Submitting your introductory issues to these sites broadens your pipeline, and may free up humans’ recruiting time to focus on people who need more help getting up to speed.

If you’re working on internal rather than public projects, issue aggregators are less relevant. However, if you have the resources, it’s worthwhile to consider the recruiting device of open sourcing an internal tool that would be useful to others. If an engineer uses and improves that tool, you get a tool improvement and they get some mentorship. In the long term, you also get a unique opportunity to improve that engineer’s opinion of your organization while networking with your engineers, which can make them more likely to want to work for you later.

Follow Up

For questions, you’re welcome to chat with me on Twitter (@QEDunham), email (automacon <at> edunham <dot> net), or IRC (edunham on irc.freenode.net and irc.mozilla.org).

Slides from the talk are here.

]]>
Tue, 27 Sep 2016 00:00:00 -0700
http://edunham.net/2016/09/23/setting_a_freenode_channel_s_taxonomy_info.html http://edunham.net/2016/09/23/setting_a_freenode_channel_s_taxonomy_info.html <![CDATA[Setting a Freenode channel's taxonomy info]]> Setting a Freenode channel’s taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

/msg chanserv help
- ***** ChanServ Help *****
- ...
- Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
-                 DROP, GETKEY, HELP, INFO, QUIET, STATUS,
-                 SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
-                 TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
-                 WHY
- ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

/msg chanserv help taxonomy
- ***** ChanServ Help *****
- Help for TAXONOMY:
-
- The taxonomy command lists metadata information associated
- with registered channels.
-
- Examples:
-     /msg ChanServ TAXONOMY #atheme
- ***** End of Help *****

Follow its example:

/msg chanserv taxonomy #atheme
- Taxonomy for #atheme:
- url                       : http://atheme.github.io/
- ОХЯЕБУ                    : лололол
- End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

/msg chanserv help set

- ***** ChanServ Help *****
- Help for SET:
-
- SET allows you to set various control flags
- for channels that change the way certain
- operations are performed on them.
-
- The following subcommands are available:
- EMAIL           Sets the channel e-mail address.
- ...
- PROPERTY        Manipulates channel metadata.
- ...
- URL             Sets the channel URL.
- ...
- For more specific help use /msg ChanServ HELP SET command.
- ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: Someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place mIRC color codes in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.
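
As a worked example of rule 3, here’s what I’d expect in a hypothetical channel ##test where you hold the needed flags (output format guessed from the #atheme example above):

/msg chanserv set ##test property email test@example.com
/msg chanserv set ##test property EMAIL foo
/msg chanserv taxonomy ##test
- Taxonomy for ##test:
- EMAIL                     : foo
- End of ##test taxonomy.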

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

]]>
Fri, 23 Sep 2016 00:00:00 -0700
http://edunham.net/2016/08/02/adventures_in_mercurial.html http://edunham.net/2016/08/02/adventures_in_mercurial.html <![CDATA[Adventures in Mercurial]]> Adventures in Mercurial

I adore Git, but have needed to ramp up my Mercurial (Hg) skills recently to dig prior work related to my current tasks out of a repo’s history. Here are some things I’m learning:

Command Equivalences

As this tutorial so helpfully explains, the two VCSes aren’t all that dissimilar under their hoods. I condensed the command comparison table into a single page and printed it out for quick reference; a PDF is here.

Clone

The thing I want to clone lives at http://hg.mozilla.org/hgcustom/version-control-tools/file/tip/autoland.

Trying to clone the full URL yields a 404, but snipping the URL back to the top-level directory gets me the repo:

$ hg clone http://hg.mozilla.org/hgcustom/version-control-tools/
destination directory: version-control-tools
requesting all changes
adding changesets
adding manifests
adding file changes
added 4574 changesets with 10874 changes to 1971 files
updating to bookmark @
1428 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ ls
version-control-tools

Examine Log

hg log | less shows me that each commit’s summary in this repo includes the part of the codebase it touches, and a bug number.

hg log | grep autoland: | less gives me the summaries of every commit that touched autoland, but I cannot show a commit from summary alone.

The Hg book helped me construct a filter that will show a unique revision ID on the same line as each description.

hg log --template '{rev} {desc}\n' | grep autoland: is much more useful. It gives me the local ID of each changeset whose description included “autoland:”.

From here, I can use a bit more grep to narrow down the list of matching messages, then I’m ready to examine commits.

Examining Commits

That {rev} in my filter was the “repository-local changeset revision number”. For these examples I’ll examine revision 2589.

hg status --change 2589 lists the files that were touched by that revision, and hg export 2589 yields a full diff of the changes introduced.
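
When the diff alone lacks context, the verbose flag adds the full commit message and file list:

$ hg log -v -r 2589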

This gets me enough information to make an appropriate set of changes, run the tests, and create my own commits!

]]>
Tue, 02 Aug 2016 00:00:00 -0700
http://edunham.net/2016/07/01/thinkpad_13_trackpoint_i3.html http://edunham.net/2016/07/01/thinkpad_13_trackpoint_i3.html <![CDATA[Thinkpad 13 Trackpoint slowdown in i3 window manager]]> Thinkpad 13 Trackpoint slowdown in i3 window manager

As has been mentioned on Reddit, the Thinkpad 13 trackpoint settings aren’t in the same place as those of older thinkpads. Despite some troubleshooting, I haven’t yet found what files to edit to adjust the trackpoint’s speed and sensitivity in Ubuntu 16.04.

The trackpoint has been slightly sluggish/unresponsive when I use the i3 window manager, and has additional intermittent slowdowns when using Chromium and Firefox in i3.

Although I don’t yet know the right way to fix trackpoint sensitivity on this machine, I accidentally discovered a highly effective workaround today:

  • Log into Unity (the default desktop that Ubuntu ships with) and configure the mouse and input settings as desired
  • Log out, and get back into i3wm
  • Launch unity-settings-daemon
  • And suddenly, the mouse works correctly the way it did in Unity!

I fully realize that this is a nasty hack around identifying and solving the actual problem, but it succeeds at making the mouse responsive while minimizing time spent troubleshooting.
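
If the workaround keeps working for you, you can skip the manual launch after each login by starting the daemon from i3’s config file (~/.config/i3/config or ~/.i3/config, depending on your setup):

exec --no-startup-id unity-settings-daemon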

]]>
Fri, 01 Jul 2016 00:00:00 -0700
http://edunham.net/2016/06/27/hieroglyph_and_tinkerer_dependencies.html http://edunham.net/2016/06/27/hieroglyph_and_tinkerer_dependencies.html <![CDATA[Hieroglyph and Tinkerer Dependencies]]> Hieroglyph and Tinkerer Dependencies

In setting up virtualenvs for my slides and blog repos on my new laptop, I’ve been reminded that a variety of Sphinx-based tools require system dependencies as well as the ones in their virtualenvs.

Hieroglyph dependency issues

The error resulting from pip install -r requirements.txt ended with:

Command ".../virtualenv/bin/python2 -u -c
"import setuptools,
tokenize;__file__='/tmp/pip-build-lzbk_r/Pillow/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-BNDc_6-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/slides/rustcommunity/v/include/site/python2.7/Pillow"
failed with error code 1 in /tmp/pip-build-lzbk_r/Pillow/

Its fix, from stackoverflow, was:

$ sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
$ pip install -r requirements.txt

Tinkerer dependencies, too!

pip install -r requirements.txt over in my blog repo yielded:

Command ".../virtualenv/bin/python2 -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-NVLSBY/lxml/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-qD5QIe-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/site/v/include/site/python2.7/lxml" failed with error code
1 in /tmp/pip-build-NVLSBY/lxml/

The fix is again to install the missing system deps, on Ubuntu:

$ sudo apt-get install libxml2-dev libxslt-dev
$ pip install -r requirements.txt

That’s it! I’m writing this down for SEO on the specific errors at hand, since the first several useful hits are currently stackoverflow.

If you’re a pip developer reading this, please briefly contemplate whether it’d be worthwhile to have some built-in mechanism to translate common dependency errors to the appropriate system package names needed based on the OS on which the command is run.

]]>
Mon, 27 Jun 2016 00:00:00 -0700
http://edunham.net/2016/06/23/cfps_made_easy.html http://edunham.net/2016/06/23/cfps_made_easy.html <![CDATA[CFPs Made Easier]]>

CFPs Made Easier

Check out this post by Lucy Bain about how to come up with an idea for what to talk about at a conference. I blogged last year about how I turn abstracts into talks, as well. Now that the SeaGL CFP is open, it’s time to look in a bit more detail about the process of going from a talk idea to a compelling abstract.

In this post, I’ll walk you through some exercises to clarify your understanding of your talk idea and find its audience, then help you use that information to outline the 7 essential parts of a complete abstract.

Getting ready to write your abstract

Your abstract is a promise about what your talk will deliver. Have you ever gotten your hopes up for a talk based on its abstract, then attended only to hear something totally unrelated? You can save your audience from that disappointment by making sure that you present what your abstract says you will.

I find the abstract to be the hardest part of the talk to write, because it sets the stage for every other part of it. If your abstract is thorough and clear about what your talk will deliver, you can refer back to it throughout the writing process to make sure you’re including the information that your audience is there for!

For both you and your audience to get the most out of your talk, the following questions can help you refine your talk idea before you even start to write its abstract.

Why do you love this idea?

Start working on your abstract by taking some quick notes on why you’re excited about speaking on this topic. There are no wrong answers! Your reasons might include:

  • Document a topic you care about in a format that works well for those who learn by listening and watching
  • Impress a potential employer with your knowledge and skills
  • Meet others in the community who’ve solved similar problems before, to advise you
  • Recruit contributors to a project
  • Force yourself to finish a project or learn more detail about a tool
  • Save novices from a pitfall that you encountered
  • Travel to a conference location that you’ve always wanted to visit
  • Build your resume
  • Or something else entirely!

Starting out by identifying what you personally hope to gain from giving the talk will help ensure that you make the right promises in your abstract, and get the right people into the room.

What’s your idea’s scope?

Make 2 quick little lists:

  • Topics I really want this presentation to cover
  • Topics I do not want this presentation to cover

Once you think that you have your abstract all sorted out, come back to these lists and make sure that you included enough topics from the first list, and excluded those from the second.

Who’s the conference’s target audience?

Keynotes and single-track conferences are special, but generally your talk does not have to appeal to every single person at the conference.

Write down all the major facts you know about the people who attend the conference to which you’re applying. How young or old might they be? How technically expert or inexperienced? What are their interests? Why are they there?

For example, here are some statements that I can make about the audience at SeaGL:

  • Expertise varies from university students and random community members to long-time contributors who’ve run multiple FOSS projects.
  • Age varies from a few school-aged kids (usually brought by speakers and attendees) to retirees.
  • The audience will contain some long-term FOSS contributors who don’t program, and some relatively expert programmers who might have minimal involvement in their FOSS communities
  • Most attendees will be from the vicinity of Seattle. It will be some attendees’ first tech conference. A handful of speakers are from other parts of the US and Canada; international attendees are a tiny minority.
  • The audience comes from a mix of socioeconomic backgrounds, and many attendees have day jobs in fields other than tech.
  • Attendees typically come to SeaGL because they’re interested in FOSS community and software.

Where’s your niche?

Now that you’ve taken some guesses about who will be reading your abstract, think about which subset of the conference’s attendees would get the most benefit out of the topic that you’re planning to talk about.

Write down which parts of the audience will get the most from your talk – novices to open source? Community leaders who’ve found themselves in charge of an IRC channel but aren’t sure how to administer it? Intermediate Bash users looking to learn some new tricks?

If your talk will appeal to multiple segments of the community (developers interested in moving into DevOps, and managers wondering what their operations people do all day?), write one question that your talk will answer for each segment.

You’ll use this list to customize your abstract and help get the right people into the room for your talk.

Still need an idea?

Conferences with a diverse audience often offer an introductory track to help enthusiastic newcomers get up to speed. If you have intermediate skills in a technology like Bash, Git, LaTeX, or IRC, offer an introductory talk to help newbies get started with it! Can you teach a topic that you learned recently in a way that’s useful to newbies?

If you’re an expert in a field that’s foreign to most attendees (psychology? beekeeping? Cray Supercomputer assembly language?), consider an intersection talk: “What you can learn from X about Y”. Can you combine your hobby, background, or day job with a theme from the conference to come up with something unique?

The Anatomy of an Abstract

There are many ways to structure a good abstract. Here are the 7 elements that I try to always include:

  1. Set the scene with a strong introductory sentence, which reminds your target audience of your topic’s relevance to them. Some of mine have included:

    • “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”
    • “Git is the most popular source code management and version control system in the open source community.”
    • “When you’re new to programming, or self-taught with an emphasis on those topics that are directly relevant to your current project, it’s easy to skip learning about analyzing the complexity of algorithms.”
  2. Ask some questions, which the talk promises to answer. These questions should be asked from the perspective of your target audience, which you identified earlier. This is the least essential piece of an abstract, and can be skipped if you make sure your exposition clearly shows that you understand your target audience in some other way. Here are a couple of questions I’ve used in abstracts that were accepted to conferences:

    • “Do you know how to control what information people can discover about you on an IRC network?”
    • “Is the project of your dreams ignoring your pull requests?”
  3. Drop some hints about the format that the talk will take. This shows the selection committee that you’ve planned ahead, and helps audience members select sessions that are a good fit for their learning styles. Useful words here include:

    • “Overview of”
    • “Case study”
    • “Demonstration”
    • “Deep dive into”
    • “Outline X principles for”
    • “Live coding”
  4. Identify what background knowledge the audience will need to get the talk’s benefit, if applicable. Being specific about this helps welcome audience members who’re undecided about whether the talk is applicable to them. Useful phrases include:

    • “This talk will assume no background knowledge of...”
    • “If you’ve used ____ to ____, ...”
    • “If you’ve completed the ____ tutorial...”
  5. State a specific benefit that audience members will get from having attended the talk. Benefits can include:

    • “Halve your Django website’s page load times”
    • “Get help on IRC”
    • “Learn from ____‘s mistakes”
    • “Ask the right questions about ____”
  6. Reinforce and quantify your credibility. If you’re presenting a case study into how your company deployed a specific tool, be sure to mention your role on the team! For instance, you might say:

    • “Presented by [the original author | a developer | a maintainer | a long-term user] of [the project], this talk will...”
  7. End with a recap of the talk’s basic promise, and welcome audience members to attend.

These 7 pieces of information don’t have to each be in their own sentence – for instance, establishing your credibility and indicating the talk’s format often fit together nicely in a single sentence.

Once you’ve got all of the essential pieces of an abstract, munge them around until it sounds like concise, fluent English. Get some feedback on helpmeabstract.com if you’d like assistance!

Give it a title

Naming things is hard. Here are some assorted tips:

  • Keep it under about 50 characters, or it might not fit on the program
  • Be polite. Rude puns or metaphors might be eye-catching, but probably violate your conference or community’s code of conduct, and will definitely alienate part of your prospective audience.
  • For general talks, it’s hard to go wrong with “Intro to ___” or “___ for ___ users”.
  • The form “[topic]: A [history|overview|melodrama|case study|love story]” is generally reliable. Well, I’m kidding about “melodrama” and “love story”... Mostly.
  • Clickbait is underhanded, but it works. “___ things I wish I’d known about ___”, anyone?

Good luck, and happy conferencing!

]]>
Thu, 23 Jun 2016 00:00:00 -0700
http://edunham.net/2016/05/27/thinkpad_13.html http://edunham.net/2016/05/27/thinkpad_13.html <![CDATA[2ish weeks with the Thinkpad 13]]> 2ish weeks with the Thinkpad 13

I recently got a Thinkpad 13 to try replacing my X230 as a personal laptop. Here’s the relevant specs from my order confirmation:

Battery 3cell 42Wh
System Unit 13&S2 IG i5-6200U NvPro
Camera 720p HD Camera
AC Adapter and Power Cord 45W 2pin US
Processor Intel Core i5-6200U MB
Hard drive 128GB SSD SATA3
Keyboard Language KYB SR ENG
Publication Language PUB ENG
Total memory 4GB DDR4 2133 SoDIMM
OS DPK W10 Home
Pointing device 3+2BCP NoFPR SR
Preload Language W10 H64-ENG
Preload OS Windows 10 Home 64
Preload Type Standard Image
TPM Setting Software TPM Enabled
Display Panel 13&S2 FHD IPS AG AL SR
WiFi wireless LAN adapters Intel 8260AC+BT 2x2 vPro

I picked mediocre CPU and RAM because the RAM’s easy to upgrade, and I’m curious about whether I actually need top-of-the-line CPUs to have an acceptable experience on a personal laptop.

Why the 13?

I had a few hard requirements for my next personal laptop:

  • Trackpoint with buttons
  • Decent key travel (the X1 carbon has 1.86mm and typing on it for too long made my hands hurt)
  • USBC port
  • Under $1,000

Plus a few nice-to-haves:

  • Small and light are nice, including charger
  • Screen not much worse than 1920x1080
  • Good battery life
  • Metal case and design for durability make me happy
  • My house is already full of Thinkpad chargers, so a laptop that uses them helps reduce additional clutter

I’ll be the first to admit that this is an atypical set of priorities. My laptop is home to Git, Vim, and a variety of tools for interacting with the internet, so the superficial I/O differences matter more to me than the machine’s internal specs.

Things I like about the 13

  • 2.1mm key travel is everything I hoped for. At least, I’ve used it all day and my hands don’t hurt.
  • Battery life is pretty decent, and battery will be easy to replace when it starts to fail.
  • Light-enough weight. Lighter charger than other Thinkpads I’ve had.
  • Smallest Thinkpad charger that I’ve ever seen.
  • Case screws are all captive.
  • Mystery hole in the bottom case turns out to be a highly convenient hard shutdown button.
  • Hinges feel pretty solid and hold the screen up nicely.
  • No keyboard backlight. I dislike them.
  • 4GB of RAM is a single stick, easy to add more (and I’ll need to for a smoother web browsing experience; neither Firefox nor Chromium is particularly happy with only 4GB)
  • A vanilla Ubuntu 16.04 iso Just Worked for installing Linux. It must have shipped with whatever magic signatures were required to play nice with the new security measures, because the install process was delightfully non-thought-provoking.
  • ~7mm plastic bezel between buttons and trackpad reduces likelihood of accidentally moving cursor when clicking.
../../../_images/13-button-bezel.jpg
  • Screen’s the same as my X240, xrandr calls it 1920x1080 294mm x 165mm. This fits 211 columns of a default font, or 127 columns of a font that’s comfortably legible when the laptop is on the other side of my desk.

Nitpicks about the 13

  • Color.

When I purchased mine at the end of April, only the silver chassis had a metal lid and shipped with a nice screen by default (the higher-res screen is available in the all-plastic black model for an additional charge). So now I own a non-black laptop for the first time since my Dell Latitude D410 in high school. The screen bezel and keys are black, though, and if I really cared I could probably paint the rest of it.

  • Power button.
../../../_images/13-power-button.jpg

It feels horribly... squishy? There’s no satisfying click to tell me when I’ve pushed it far enough. Holding it for 10 seconds only sometimes shuts the laptop off (though there’s a reset switch on the mobo accessible by a paperclip-hole in the bottom panel which forces shutdown instantly when pushed). There’s a circle on the power button that looks like it might be an LED, but it never lights up.

  • Cutesy font.
../../../_images/13-lenovo-font.jpg

This is a tiny nitpick, but they’ve changed the Lenovo logo on the lid, pre-BIOS boot screen, and screen bezel from the already-mediocre font to a super condescending, childish, roundy one. Fortunately the lid one is easily hidden under some stickers.

  • Bottom panel held on by clips as well as screws.

More on this one in the disassembly section below, but I’m afraid they’ll break with how often I take my laptops apart.

  • Mouse buttons feel cheap and plastic-ey.

They feel like thin plastic shells instead of solid buttons like on older Thinkpads. I’m not sure precisely why they feel that way, but it’s a reminder that you’re using a lower-end machine.

  • Longest side is about 1cm greater than the short side of a TSA xray tub.

My X240 fits perfectly along the short end of the tub, leaving room for my shoes beside it. I have to use two tubs or separate my pair of shoes when putting the 13 through the scanner. (see, I wasn’t kidding when I said “nitpicks”)

  • The Trackpoint top is not interchangeable with those of older Thinkpads.
../../../_images/13-trackpoints.jpg

The round part is the same size, but the square hole in the bottom is about 2mm to a side rather than the 4mm of the one on an x220 keyboard. Plus the cylinder bit is about 2mm long rather than the x220’s 3.5mm, so even with an adapter for the square hole, older trackpoints would risk leaving marks on the screen.

  • The fan is a little loud.

I anticipate that this will get a lot less annoying when I upgrade to 16 or 32GB of ram and maybe tune it in software using thinkfan.

Thinkpad 13 partial disassembly photos

To get the bottom case off, pull all the visible screws and also remove the 3 tiny rubber feet from under the palm rest. I stuck my tiny rubber feet in a plastic bag and filed it away, because repeated removal would eventually destroy the glue and get them lost.

../../../_images/13-slide-and-pry.jpg

The bottom case comes off with a combination of sliding and prying. Getting it back on again requires sliding the palmrest edge just right, then snapping the sides and back on before the palm rest slips out of place. It’s tricky.

../../../_images/13-bendy-battery.jpg

The battery is easily removed by pulling out a single (non-captive) screw. It seems to be a thin plastic wrapper around 3 cell phone batteries. The battery has no glue holding it in, just screws.

../../../_images/13-mobo.jpg

Here’s its guts, with battery removed.

../../../_images/13-mobo-annotated.jpg

Note the convenient hard power cycle button (accessible via a tiny hole in the bottom case when assembled), pair of RAM slots and SSD form factor, and airspace compartment that almost looks intended for hiding half a dozen very small items. The coin cell battery (in sky blue shrink wrap) flaps around awkwardly when the machine is disassembled, but at least it’s not glued down.

]]>
Fri, 27 May 2016 00:00:00 -0700
http://edunham.net/2016/05/10/reflections_on_my_first_live_webcast.html http://edunham.net/2016/05/10/reflections_on_my_first_live_webcast.html <![CDATA[Reflections on my first live webcast]]> Reflections on my first live webcast

This morning, I participated in the O’Reilly Emerging Languages Webcast with my “Rust from a Scripting Background” talk. Here’s how it went.

Preparation

I was contacted about a month before my webcast and asked to present my OSCON talk as part of the event. I explained why my “How to learn Rust” talk didn’t sound like a good fit for the emerging languages webcast, and suggested the “Starting Rust from a Scripting Background” talk that I gave at my local Rust meetup recently as an alternative.

After we agreed on what talk would be suitable, O’Reilly’s Online Events Producers emailed me a contract to e-sign. The contract gives O’Reilly the opportunity to reuse and redistribute my talk from the webcast, and promises me a percentage of the proceeds if my recording is sold, licensed, or otherwise used to make them money.

During the week before the webcast, I did a test call in which an O’Reilly representative walked me through how to use the webcast software and verified that my audio was good on the phone I planned to use for the webcast.

A final copy of the slides for the webcast, in my choice of PDF or PowerPoint, was due at 5pm the day before the event.

The Software (worked for me on Ubuntu)

O’Reilly Media provided an application called “Presentation Manager XD” from on24.com that presenters log into during the event.

According to my email from O’Reilly, the requirements for the event are:

  • Slides - PowerPoint or PDF only please with no embedded video or audio. Screen ratio of 4:3
  • Robust Internet connection
  • Clear, reliable phone line.
  • Windows 7 or 8 running IE 8+, Firefox 22+ or Chrome 27+
  • Mac OS 10.6+ running Firefox 22+ or Chrome 27+
  • Latest version of Flash Player
  • If you plan to share your screen, you will need to install a small application - you will be prompted to install it the first time you log into the platform.

Some of these requirements are lies. I used Firefox 46.0 on Ubuntu 14.04. I did rewrite my slides in LibreOffice because it emits better PDFs than the HTML tools I normally use, but I was also looking for an excuse to rewrite them to clean up their organization.

I clicked around in the “Presentation Manager XD” UI and downloaded a file called “ON24-ScreenShare-plugin”, then chmod +x‘d it and executed it with ./ON24-ScreenShare-plugin. This caused Wine to run, install some Gecko stuff, and start the screenshare plugin sufficiently well to share my screen to the webcast tool.

I had to re-run the plugin in Wine after logging out of and back into my window manager, of course. Additionally, the screenshare window’s resizing is finicky. It’s fine to grab and drag the highlighted parts of the window’s border with the mouse, but the meta+click command with which one usually moves windows in i3 causes the sides of the screenshare window to move independently of each other.

Here’s what the webcast UI looked like during streaming, just at the end of the Kotlin talk while I was getting ready to start mine:

../../../_images/oreillywebcast.png

The Talk

As previously mentioned, I rewrote my talk in LibreOffice Impress – ostensibly to get a prettier PDF, but also because it’s been a month or two since I last prepared for it and re-writing helps me refresh my memory and verify that all my facts are up to date.

GUI-based slide editing is downright painful after using rst-based tools for so long, especially because LibreOffice has no good way to embed code samples out of the box. I ended up going with screenshots from the Rust playground, which were probably better than embedded code samples, but relearning how to edit slides like a regular person wasn’t a pleasant experience.

I took more notes than I normally do, since nobody on the webcast could see whether I was reciting or reading. I’m glad I did, as having the notes on a physical page in front of me was reassuring and helped me avoid missing any important points.

I rehearsed the timing of each section of my slides individually, since it naturally broke down into 7 or so discrete parts and I had previously calculated how much of my hour to allocate to each section. Most sections ran consistently over time when preparing, yet under time during the actual talk. The lesson here is to rehearse until I’m happy with a section and can make it the same duration twice in a row.

The experience of presenting a talk in a subjectively empty room made me realize just what high-bandwidth communication regular conferences are.

Pros:

  • No need to worry about eye contact
  • All the notes you want
  • Can’t see anyone sleeping
  • Chat channel allowed instant distribution of links
  • Chat channel allowed expert attendees to help answer questions
  • Presentation software allowed gracefully skipping slides, rather than the usual paging back and forth with the arrow keys

Cons:

  • Can’t take quick surveys by show of hands
  • Negligible feedback on how many people are there and their body language of engagement/disengagement
  • Silences are super awkward
  • Can’t see the shy attendees, in order to encourage participation

The audience asked fewer questions during the talk than I expected. Fortunately, they came up with plenty of questions at the end – extra fortunate because I overcompensated on time and finished my slides about 15 minutes before the end of my speaking slot!

Q&A was surprisingly relaxing, as it was totally up to me which questions to answer. I’ll admit that I did select in favor of those that I could answer concisely and eloquently, deferring the questions that didn’t make as much sense to think about them while I answered easier ones.

tl;dr

In my experience, presenting a webcast was lower-stress and comparably impactful to a conference talk.

For would-be presenters concerned about their or the audience’s appearance, the visual anonymity of a webcast could be a great place to start a speaking career.

Speakers accustomed to presenting in rooms full of humans should expect subtle feedback, like nods, smiles, and laughter, to be totally invisible in a webcast environment.

And if O’Reilly asks you to do a webcast with them, I’d say go for it – they made the whole experience as seamless and easy as possible.

]]>
Tue, 10 May 2016 00:00:00 -0700
http://edunham.net/2016/05/05/paths_into_devops.html http://edunham.net/2016/05/05/paths_into_devops.html <![CDATA[Paths Into DevOps]]> Paths Into DevOps
../../../_images/twitter.png

Today, Carol asked me how a current sysadmin can pivot into a junior “devops” role. Ten tweets into the reply, it became obvious that my thoughts on that type of transition won’t fit well into 140-character blocks.

My goal with this post is to catalog what I’ve done to reach my current level of success in the field, and outline the steps that a reader could take to mimic me.

Facets of DevOps

In my opinion, 3 distinct areas of focus have made me the kind of person from whom others solicit DevOps advice:

  • Cultural background
  • Technical skills
  • Self-promotion

I place “cultural background” first because many people with all the skills to succeed at “DevOps” roles choose or get stuck with other job titles, and everyone touches a slightly different point on the metaphorical elephant of what “DevOps” even means.

Cultural Background

What does “DevOps” mean to you?

  • Sysadmins who aren’t afraid of automation?
  • 2 sets of job requirements for the price of 1 engineer?
  • Developers who understand what the servers are actually doing?
  • Reducing the traditional divide between “development” and “operations” silos?
  • A buzzword that increases your number of weekly recruiter emails?
  • People who use configuration management, aka “infrastructure as code”?

From my experiences starting Oregon State University’s DevOps Bootcamp training program, speaking on DevOps-related topics at a variety of industry conferences, and generally being a professional in the field, I’ve seen the term defined all of those ways and more.

Someone switching from “sysadmin” to “devops” should clearly define how they want their day-to-day job duties to change, and how their skills will need to change as a result.

Technical Skills

The best way to figure out the technical skills required for your dream job will always be to talk to people in the field, and read a lot of job postings to see what you’ll need on your resume and LinkedIn to catch a recruiter’s eye.

In my opinion, the bare minimum of technical skills that an established sysadmin would need in order to apply for DevOps roles are:

  • Use a configuration management tool – Chef, Puppet, Salt, or Ansible – to set up a web server in a VM (see the sketch after this list).
  • Write a script in Python to do something more than Hello World – an IRC bot or tool to gather data from an API is fine.
  • Know enough about Git and GitHub to submit a pull request to fix something about an open source tool that other sysadmins use, even if it’s just a typo in the docs.
  • Understand enough about continuous integration testing to use it on a personal project, such as TravisCI on a GitHub repo, and appreciate the value of unit and integration tests for software.
  • Be able to tell a story about successfully troubleshooting a problem on a Linux or BSD server, and what you did to prevent it from happening again.
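
As a concrete example of the first bullet, here’s a minimal sketch of a Salt state that sets up a web server – the package and service names assume a Debian-family VM where nginx is packaged:

# webserver.sls -- apply locally with: salt-call --local state.apply webserver
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx

Chef, Puppet, and Ansible can all express the same two ideas – install the package, keep the service running – in a similarly small recipe, manifest, or playbook.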

Keep in mind that your job in an interview is to represent what you know and how well you can learn new things. If you’re missing one of the above skills, go ask for help on how to build it.

Once you have all the experiences that I listed, you are no longer allowed to skip applying for an interesting role because you don’t feel you know enough. It’s the employer’s job to decide whether they want to grow you into the candidate of their dreams, and your job to give them a chance. Remember that a job posting describes the person leaving a role; if you started with every skill listed, you’d probably be bored and not challenged to your full potential.

Self Promotion

“DevOps” is a label that engineers apply to themselves, then justify with various experiences and qualifications.

The path to becoming a community leader begins at engaging with the community. Look up DevOps-related conferences – find video recordings of talks from recent DevOps Days events, and see what names are on the schedules of upcoming DevOps conferences.

Look at which technologies the recent conferences have discussed, then look up talks about them from other events. Get into the IRC or Slack channels of the tools you want to become more expert at, listen until you know the answers to common questions, then start helping beginners newer than yourself.

Reach out to speakers whose talks you’ve enjoyed, and don’t be afraid to ask them for advice. Remember that they’re often extremely busy, so a short message with a compliment on their talk and a specific request for a suggestion is more likely to get a reply than overly vague requests. This type of networking will make your name familiar when their companies ask them to help recruit DevOps engineers, and can build valuable professional friendships that provide job leads and other assistance.

Contribute to the DevOps-related projects that you identify as having healthy communities. For configuration management, I’ve found that SaltStack is a particularly welcoming group. Find the source code on GitHub, examine the issue tracker, pick something easy, and submit a pull request fixing the bug. As you graduate to working on more challenging or larger issues, remember to advertise your involvement with the project on your LinkedIn profile!

Additionally, help others out by blogging what you learn during these adventures. If you ever find that Google doesn’t have useful results for an error message that you searched, write a blog post with the message and how you fixed it. If you’re tempted to bikeshed over which blogging platform to use, default to GitHub Pages, as a static site hosted there is easy to move to your own hosting later if you so desire.

Examine job postings for roles like the ones you want, and make sure the key buzzwords appear on your LinkedIn profile wherever appropriate. A complete LinkedIn profile for even a relatively new DevOps engineer draws a surprising number of recruiters for DevOps-related roles. If you’re just starting out in the field, I’d recommend expressing interest in every opportunity that you’re contacted about, progressing to at least a phone interview if possible, and getting all the feedback you can about your performance afterwards. It’s especially important to interview at companies that you can’t see yourself enjoying a job at, because you can practice asking probing questions that tell you whether an employer will be a good fit for you (check out this post for ideas).

Another trick for getting to an interview is to start something with DevOps in the name. It could be anything from a curated blog to a meetup to an online “book club” for DevOps-related papers, but leading something with a cool name seems to be highly attractive to recruiters. You can also increase your visibility in the field by giving a talk at any local conference, especially LinuxFest and DevOpsDays events. Putting together an introductory talk on a useful technology only requires intermediate proficiency, and is a great way to build your personal brand.

To summarize, there are really 4 tricks to getting DevOps interviews, and you should interview as much as you can to get a feeling for what DevOps means to different parts of the industry:

  • Contribute back to the open source tools that you use
  • Network with established professionals
  • Optimize your LinkedIn and other professional profiles to draw recruiters
  • Be the founder of something

Questions?

I collect interesting job search and interview advice links at the bottom of my resume repo readme.

I bolded each paragraph’s key points in the hopes of making them easier to read.

You’re welcome to reach out to me at blog @ edunham.net or @qedunham on Twitter if you have other questions. If I made a dumb typo or omitted some information in this post, either tell me about it or just throw a pull request at the repo to fix it and give yourself credit.

]]>
Thu, 05 May 2016 00:00:00 -0700
http://edunham.net/2016/04/18/persona_and_3rd_party_cookies_in_firefox.html http://edunham.net/2016/04/18/persona_and_3rd_party_cookies_in_firefox.html <![CDATA[Persona and third-party cookies in Firefox]]> Persona and third-party cookies in Firefox

Although its front page claims that we’ve deprecated Persona, it’s still the only way to log into the statusboard and Air Mozilla. For a long time, I was unable to log into any site using Persona from Firefox 43 and 44 because of an error about my browser not being configured to accept third-party cookies.

The support article on the topic says that checking the “always accept cookies” box should fix the problem. I tried setting “accept third-party cookies” to “Always”, and yet the error persisted. (Setting the top-level history configuration to “always remember history” didn’t affect the error either.)

Fortunately, there’s also an “Exceptions” button by the “Accept cookies from sites” checkbox. Editing the exceptions list to universally allow “http://persona.org” lets me use Persona in Firefox normally.

_static/persona-exception.png

That’s the fix, but I don’t know whose bug it is. Did Firefox mis-balance privacy against convenience? Is the “always accept third-party cookies” setting’s failure to accept a cookie without an exception some strange edge case of a broken regex? Is Persona in the wrong for using a design that requires third-party cookies at all? Who knows!

]]>
Mon, 18 Apr 2016 00:00:00 -0700
http://edunham.net/2016/04/11/plushie_rustacean_pattern.html http://edunham.net/2016/04/11/plushie_rustacean_pattern.html <![CDATA[Plushie Rustacean Pattern]]> Plushie Rustacean Pattern

I made a Rustacean. He’s cute. You can make one too.

../../../_images/ferris-on-pattern.jpg

You’ll Need

  • A couple square feet of orange polar fleece, or any other orange fabric that won’t stretch or fray too much
  • A handful of stuffing. I cannibalized a throw pillow.
  • A needle and some orange thread
  • Black and white fabric scraps and thread, or black and white embroidery floss, for making the face.
  • Intermediate sewing skills
  • This pattern

The Pattern

../../../_images/ferris-pattern-color.png

Get yourself a front, back, underside, and claw drawn on paper, either by printing them out or tracing from a screen. The front, back, and underside should have horizontal symmetry, except for the face placement. Make sure the points marked in red and blue on this pattern are noted on your paper.

Mine measure about 6” wide between the points marked in red.

Sewing vocabulary

  • The right side of a fabric is what ends up on the outside of the finished item. The wrong side ends up where you can’t see it. Some fabrics have both sides the same; in that case, the wrong side is whichever one you feel like tracing the pattern onto.
  • Seam allowance is some extra fabric that ends up on the inside of the item when you’re done. The pattern above does not include seam allowance. This means that if you cut the fabric along the lines in the pattern, your finished rustacean will be tiny and sad and shaped wrong. You cut the paper along the lines, then trace it onto the fabric, then sew along the lines.
  • Applique is where you sew one piece of fabric onto the surface of another to make a design.
  • There are a bunch of great YouTube videos on basic sewing skills. Watch whichever ones you need.

Assembly

  1. Trace a front, a back, an underside, and the 2 claws onto the wrong side of your fabric with whatever will write on it without bleeding through. Make sure to transfer the blue centerline marks and the red three-point join marks.
  2. Cut out the shapes you just traced, leaving about 1” of margin around them. We’ll trim the seams properly later, so don’t worry about getting it exact.
  3. Find a couple claw-sized chunks of leftover fabric and pin one to the back of each claw (right sides together, of course).
  4. Sew around both claws, leaving the arm ends open so you can turn them. I find it’s easiest to backstitch, and you can get away with stitches up to about 1.5mm apart with normal weight polar fleece.
  5. Trim around the outside of the seams on the claws to leave about 1/4” seam allowance, and clip right up to the stitches in the concave spot. If you backstitched, make sure to flip them over before trimming the seams so you don’t accidentally cut through the longer stitches.
  6. Turn the claws so the right side of the fabric is out and the seams are on the inside, and stuff them with stuffing or fabric scraps. A pair of wooden chopsticks from a fast food place is a great tool for turning and stuffing.
  7. Put the front and back pieces right sides together so the points marked in red and blue on the pattern line up. Pin them together.
  8. Sew from one red mark to the other along Ferris’s spiky back.
  9. Trim around the spikes leaving about 1/4” seam allowance, clipping right up to the seam in the concave spots.
  10. Figure out which side is front (hint, it has only 2 legs rather than 4). Imagine where Ferris’s little face will go when he’s finished. Now, pin both claws onto the right side of the front piece, so they’ll be oriented correctly when he’s done. If in doubt, pin the bottom front in place and turn the whole thing inside out to make sure the claws are right.
  11. Match the center front of the underside with the center of Ferris’s front (both have a blue + on the pattern). Be sure the pieces have their right sides together and the claws are sandwiched between them.
  12. Match the points marked with red triangles on each side of the front and underside together and pin them. If the claws are sticking out at this point, go back to step 10 and try again.
  13. Sew from one red mark to the other to join Ferris’s front to the front of his underside. Put a few extra stitches in the part of the seam where his “arms”/claws are attached, to make sure they can’t be pulled out.
  14. Trim around the 2 tiny legs that you’ve sewn so far, with about 1/8” seam allowance.
  15. Now you can applique his face onto the right side of his front. Or embroider it if you know how. Cut the black and white felt scraps into face-shaped pieces and sew them down, giving Ferris whatever expression you want.
  16. Line up the 4 back legs on the underside and back pieces, and pin them right sides together. Sew everything except the part marked in green – that’s the hole through which you’ll turn him inside out.
  17. Trim around those last 4 legs, leaving at least 1/8” seam allowance. Don’t cut away any more fabric from the bit marked in green. If you leave a bit of extra fabric around the leg seams, they’ll be harder to turn but require less stuffing.
  18. Turn Ferris right side out. Again, chopsticks or the non-pointy end of a barbeque skewer are useful for getting the pointy bits to do the right thing.
  19. Stuff Ferris with the filling. I filled mine quite loosely, because it makes him softer and more huggable. If you overfill his body, his spikes will look silly. If you overfill his legs, they’ll stick out in funny directions and not bend right.
  20. Tuck the seam allowance back into the hole through which you stuffed Ferris and sew it shut. Congratulations, you have your own toy crab!

The Finished Product

../../../_images/ferris-plushie-montage.jpg

He’s cute, cuddly, and palm-sized. Lego dude for scale.

]]>
Mon, 11 Apr 2016 00:00:00 -0700
http://edunham.net/2016/03/24/could_rust_have_a_left_pad_incident.html http://edunham.net/2016/03/24/could_rust_have_a_left_pad_incident.html <![CDATA[Could Rust have a left-pad incident?]]> Could Rust have a left-pad incident?

The short answer: No.

What happened with left-pad?

The Node community had a lot of drama this week when a developer unpublished a package on which a lot of the world depended.

This was fundamentally possible because NPM offers an unpublish feature. Although the docs for unpublish admonish users that “It is generally considered bad behavior to remove versions of a library that others are depending on!” in large bold print, the feature is available.

What’s the Rust equivalent?

The Rust package manager, Cargo, is similar to NPM in that it helps users get the libraries on which their projects depend. Rust’s analog to the NPM index is crates.io.

The best explanation of Cargo’s robustness against unpublish exploits is the docs themselves:

cargo yank

Occasions may arise where you publish a version of a crate that actually ends up being broken for one reason or another (syntax error, forgot to include a file, etc.). For situations such as this, Cargo supports a “yank” of a version of a crate:

$ cargo yank --vers 1.0.1
$ cargo yank --vers 1.0.1 --undo

A yank does not delete any code. This feature is not intended for deleting accidentally uploaded secrets, for example. If that happens, you must reset those secrets immediately.

The semantics of a yanked version are that no new dependencies can be created against that version, but all existing dependencies continue to work. One of the major goals of crates.io is to act as a permanent archive of crates that does not change over time, and allowing deletion of a version would go against this goal. Essentially a yank means that all projects with a Cargo.lock will not break, while any future Cargo.lock files generated will not list the yanked version.

As Cargo author Alex Crichton clarified in a GitHub comment yesterday, the only way that it’s possible to remove code from crates.io is to compel the Rust tools team to edit the database and S3 bucket.

Even if a crate maintainer leaves the community in anger or legal action is taken against a crate, this workflow ensures that code deletion is only possible by a small group of people with the motivation and authority to do it in the way that’s least problematic for users of the Rust language.

For more information on the crates.io package and copyright policies, see this internals thread.

But I just want to left pad a string in Rust??

Although a left-pad crate was created as a joke, you should probably just use the format! macro built into the standard library.
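
For instance, format!’s fill-and-alignment syntax handles left padding in one line. A quick sketch (the widths and fill characters here are arbitrary choices):

fn main() {
    // pad "pad" with spaces on the left to a total width of 10
    println!("[{:>10}]", "pad");    // prints [       pad]
    // pad a number with zeros on the left to a total width of 6
    println!("{:0>6}", 42);         // prints 000042
    // the width can also be picked at runtime via a named argument
    let width = 8;
    println!("[{:>width$}]", "hi", width = width);  // prints [      hi]
}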

]]>
Thu, 24 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/23/reducing_saltstack_log_verbosity_for_travisci.html http://edunham.net/2016/03/23/reducing_saltstack_log_verbosity_for_travisci.html <![CDATA[Reducing SaltStack log verbosity for TravisCI]]> Reducing SaltStack log verbosity for TravisCI

Servo has some Salt configs, hosted on GitHub, for which changes are smoke-tested on TravisCI before they’re deployed. Travis only shows the first 10k lines of log output, so I want to minimize the amount of extraneous information that the states print.

My salt state looks like:

android-sdk:
  archive.extracted:
    - name: {{ common.homedir }}/android/sdk/{{ android.sdk.version }}
    - source: https://dl.google.com/android/android-sdk_{{
      android.sdk.version }}-linux.tgz
    - source_hash: sha512={{ android.sdk.sha512 }}
    - archive_format: tar
    - archive_user: user
    - if_missing: {{ common.homedir }}/android/sdk/{{ android.sdk.version
      }}/android-sdk-linux
    - require:
      - user: user

The output in TravisCI is:

      ID: android-sdk
Function: archive.extracted
    Name: /home/user/android/sdk/r24.4.1
  Result: True
 Comment: https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz extracted in /home/user/android/sdk/r24.4.1/
 Started: 17:46:25.900436
Duration: 19540.846 ms
 Changes:
          ----------
          directories_created:
              - /home/user/android/sdk/r24.4.1/
              - /home/user/android/sdk/r24.4.1/android-sdk-linux

          extracted_files:
              ... 2755 lines listing one file per line that I don't want to see in the log

The archive state documentation (https://docs.saltstack.com/en/latest/ref/states/all/salt.states.archive.html) has useful guidance on how to increase the tar state’s verbosity, but not on how to decrease it. This is because the extra 2755 lines aren’t coming from tar itself, but from Salt assuming that we want to know about every file it extracted.

terse outputter settings

The outputter supports several state_output settings. The terse option summarizes the result of each state into a single line.

There are a couple places you can set this:

  • Invoke Salt with salt --state-output=terse hostname state.highstate
  • Add the line state_output: terse to /etc/salt/minion, if you’re using salt-call
  • Setting state_output_terse is apparently an option, though I can’t find any example of a real-world salt config that uses it

Setting the terse option in /etc/salt/minion dropped the output of a highstate from over 10,000 lines to about 2500.

]]>
Wed, 23 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/14/fixing_sudo_on_osx.html http://edunham.net/2016/03/14/fixing_sudo_on_osx.html <![CDATA[Fixing sudo errors from the command line on OSX]]> Fixing sudo errors from the command line on OSX

The first symptom that I had made a terrible mistake showed up in an Ansible playbook:

GATHERING FACTS
***************************************************************
fatal: [...] => ssh connection closed waiting for a privilege escalation password prompt
fatal: [...] => ssh connection closed waiting for a privilege escalation password prompt
fatal: [...] => ssh connection closed waiting for sudo password prompt
fatal: [...] => ssh connection closed waiting for sudo password prompt

That looks like the sudo binary might be broken. To rule out Ansible problems, remote into the machine and try to use sudo:

administrators-Mac-mini:~ administrator$ sudo ls
sudo: effective uid is not 0, is sudo installed setuid root?

This meant that there was a file permissions problem:

working-host administrator$ ls -al /usr/bin/sudo
-r-s--x--x  1 root  wheel  164560 Sep  9  2014 /usr/bin/sudo

broken-host administrator$ ls -al /usr/bin/sudo
-rwxrwxr-x  1 root  wheel  164560 Sep  9  2014 /usr/bin/sudo

Now the problem is reduced to fixing the permissions. One does not simply sudo to root, because there’s no working sudo. However, Apple provides a utility which allows you to enable root login using only the administrator account’s permissions:

broken-host administrator$ dsenableroot
username = administrator
user password:
root password:
verify root password:

dsenableroot:: ***Successfully enabled root user.

The first password is the current one for the administrator account, and the other two should be the same string and will become the root account’s password.

After enabling root login, disconnect then SSH into the host as root:

broken-host root# chmod 4411 /usr/bin/sudo

And test that the fix fixed it:

broken-host root# su administrator
broken-host administrator$ sudo ls

Finally, clean up after yourself to inconvenience any future attackers:

broken-host administrator$ dsenableroot -d

Moral of the story: Errant chowns of /usr/bin are just as bad when they come from automation as when they come from humans.

]]>
Mon, 14 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/08/ansible_vagrant_and_changed_host_keys.html http://edunham.net/2016/03/08/ansible_vagrant_and_changed_host_keys.html <![CDATA[Ansible, Vagrant, and changed host keys]]> Ansible, Vagrant, and changed host keys

Related to this bug, the Vagrant Ansible provisioner seems to ignore some system settings.

The symptom is that when you update a previously used Vagrant box, or otherwise change its host key, Ansible provisioning fails with the error:

fatal: [hostname] => SSH Error: Host key verification failed.
    while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH
debug output to help diagnose the issue.

The standard solution would be to forget about the old host key with ssh-keygen -R "[127.0.0.1]:2200" (note the brackets, since known_hosts stores non-default ports that way) or ignore the change with export ANSIBLE_HOST_KEY_CHECKING=false.

If you trust the box not to be evil and expect its host key to change frequently due to your testing, a fix which the Ansible provisioner does respect is to add ansible.host_key_checking = false to the Vagrantfile, like:

Vagrant.configure(2) do |config|
...
    config.vm.define "hostname" do |hostname|
        hostname.vm.provision "ansible" do |ansible|
            ansible.playbook = "provision/hostname.yaml"
            ansible.sudo = true
            ansible.host_key_checking = false
            ansible.verbose = 'vvvv'
            ansible.extra_vars = { ansible_ssh_user: 'vagrant'}
        end
    end
...
end
]]>
Tue, 08 Mar 2016 00:00:00 -0800
http://edunham.net/2016/03/07/vidyo_with_ubuntu_and_i3wm.html http://edunham.net/2016/03/07/vidyo_with_ubuntu_and_i3wm.html <![CDATA[Vidyo with Ubuntu and i3wm]]> Vidyo with Ubuntu and i3wm

Mozilla uses Vidyo for virtual meetings across distributed teams. If it doesn’t work on your laptop, you can use the mobile client or book a meeting room in an office, but neither of those solutions is optimal when working from home.

Vidyo users within Mozilla can download a .deb or .rpm installer from v.mozilla.org. On Ubuntu, it’s easy to install the downloaded package with sudo dpkg -i path/to/the/file.deb.

The issue is that when you invoke VidyoDesktop from your launcher of choice (dmenu for me), i3 does what’s usually the right thing and makes the client fullscreen in a tile. This doesn’t allow the interface to pop up a floating window with the confirm dialog when you try to join a room, so you can’t.

mod + shift + space

Mod was alt by default last time I installed i3, but I’ve since remapped it to the window key (as IRC clients use alt for switching windows). Some people use caps lock as their mod key.

mod + shift + space makes the window floating, which allows it to pop up the confirmation dialog when you try to join a call.

Float windows by default

Alternately, stick the line:

for_window [class="VidyoDesktop"] floating enable

in your ~/.i3/config.

Installing Vidyo despite the libqt4-gui error

Edited as of May 2017: Recent Vidyos depend on a package that’s not available in Ubuntu’s repos. The easiest workaround is:

sudo dpkg -i --ignore-depends=libqt4-gui path/to/VidyoInstaller.deb
]]>
Mon, 07 Mar 2016 00:00:00 -0800
http://edunham.net/2016/02/26/are_we_building_are_we_sites_yet.html http://edunham.net/2016/02/26/are_we_building_are_we_sites_yet.html <![CDATA[Are we 'are we' yet?]]> Are we ‘are we’ yet?

The Rust community, being founded and enjoyed by a variety of Mozillians, seems to have inherited the tradition of tracking top-level progress metrics using are we sites.

  • Are we concurrent yet? tracks the progress of Rust’s concurrency ecosystem
  • Are we web yet? tracks the status of Rust’s HTTP stack, web frameworks, and related libraries
  • Are we IDE yet? provides a list of what features are supported for Rust per IDE, and links to the relevant tracking issues and RFCs

If this blog post was an ‘are we’ page itself, the big text at the top would probably say “Getting There”.

]]>
Fri, 26 Feb 2016 00:00:00 -0800
http://edunham.net/2016/02/19/buildbot_withproperties.html http://edunham.net/2016/02/19/buildbot_withproperties.html <![CDATA[Buildbot WithProperties]]> Buildbot WithProperties

Today, I copied an existing command from a Buildbot configuration and then modified it to print a date into a file:

...
if "cargo" in component:
    cargo_date_cmd = "echo `date +'%Y-%m-%d'` > " + final_dist_dir + "/cargo-build-date.txt"
    f.addStep(MasterShellCommand(name="Write date to cargo-build-date.txt",
                             command=["sh", "-c", WithProperties(cargo_date_cmd)] ))
...

It broke:

Failure: twisted.internet.defer.FirstError: FirstError[#8, [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.FirstError'>: FirstError[#2, [Failure instance: Traceback: <type 'exceptions.ValueError'>: unsupported format character 'Y' (0x59) at index 14

Why? WithProperties.

It turns out that WithProperties should only be used when you need to interpolate strings into an argument, using either %s, %d, or %(propertyname)s syntax in the string.

The lesson here is that Buildbot will happily accept WithProperties("echo 'this command uses no interpolation'") in a command argument, and then blow up at you if you ever change the command to have a % in it.
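
If the command does need both WithProperties and a literal percent sign, escaping each % as %% should survive the interpolation pass – a sketch based on the step above, which I haven’t tested against a real master:

# no interpolation needed: just pass the plain string
f.addStep(MasterShellCommand(name="Write date to cargo-build-date.txt",
                             command=["sh", "-c", cargo_date_cmd]))

# interpolation needed: escape each literal % as %% so date's format survives
cargo_date_cmd = ("echo `date +'%%Y-%%m-%%d'` > "
                  + final_dist_dir + "/cargo-build-date.txt")
f.addStep(MasterShellCommand(name="Write date to cargo-build-date.txt",
                             command=["sh", "-c", WithProperties(cargo_date_cmd)]))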

However, it appears that build steps run as MasterShellCommands without WithProperties do not display their name in the waterfall, but rather say “running” or “ran”.

]]>
Fri, 19 Feb 2016 00:00:00 -0800
http://edunham.net/2016/02/15/using_notty.html http://edunham.net/2016/02/15/using_notty.html <![CDATA[Using Notty]]> Using Notty

I recently got the “Hey, you’re a Rust Person!” question of how to install notty and interact with it.

A TTY was originally a teletypewriter. Linux users will have most likely encountered the concept of TTYs in the context of the TTY1 interface where you end up if your distro fails to start its window manager. Since you use ctrl + alt + f[1,2,...] to switch between these interfaces, it’s easy to assume that “TTY” refers to an interactive workspace.

Notty itself is only a virtual terminal. Think of it as a library meant as a building block for creating graphical terminal emulators. This means that a user who saw it on Hacker News and wants to play around should not ask “how do I install notty”, but rather “how do I run a terminal emulator built on notty?”.

Easy Mode

Get some Rust:

curl -sf https://raw.githubusercontent.com/brson/multirust/master/blastoff.sh | sh
multirust update nightly

Get the system dependencies:

sudo apt-get install libcairo2-dev libgdk-pixbuf2.0 libatk1.0 libsdl-pango-dev libgtk-3-dev

Run Notty:

git clone https://github.com/withoutboats/notty.git
cd notty/scaffolding
multirust run nightly cargo run

And there you have it! As mentioned in the notty README, “This terminal is buggy and feature poor and not intended for general use”. Notty is meant as a library for building graphical terminals, and scaffolding is only a minimal proof of concept.

Explanation: Getting Rust

Since the Rust language is still under active development, many features are available in the Nightly version of the compiler which are not yet available in Stable. If you got Rust from your package manager, you probably are using Stable. To check, run rustc --version and see whether the result says “nightly” in it.

Notty uses some features that’re available in Nightly but not Stable. If you try to compile it with Stable, you’ll get an error that makes this obvious:

Compiling notty v0.1.0 (file:///home/edunham/code/notty)
src/lib.rs:16:1: 16:16 error: #[feature] may not be used on the stable release channel
src/lib.rs:16 #![feature(io)]
              ^~~~~~~~~~~~~~~
error: aborting due to previous error
Could not compile `notty`.

When you need to switch between Rust versions frequently, multirust is the tool for the job.

Explanation: Getting system dependencies

I’ve reproduced the following error messages in full to help out any confused new Rustaceans Googling for them:

Cairo is a graphics library that you can get from your system package manager. If you try to compile notty’s dependencies without it, you’ll get an error:

Build failed, waiting for other jobs to finish...
failed to run custom build command for `cairo-sys-rs v0.2.1`
Process didn't exit successfully:
`/home/edunham/code/notty/notty-cairo/target/release/build/cairo-sys-rs-1d0cf50d5d2dab2f/build-script-build`
(exit code: 101)
--- stderr
thread '<main>' panicked at '`"pkg-config" "--libs" "--cflags" "cairo"` did
not exit successfully: exit code: 1
--- stderr
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
', /home/edunham/.multirust/toolchains/nightly/cargo/registry/src/github.com-0a35038f75765ae4/cairo-sys-rs-0.2.1/build.rs:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The only other gotcha about the dependencies is that errors about gdk actually mean you need to install the libgtk-3-dev package:

failed to run custom build command for `gdk-sys v0.2.1`
Process didn't exit successfully:
`/home/edunham/code/notty/scaffolding/target/release/build/gdk-sys-e1b0a13b32593729/build-script-build`
(exit code: 101)
--- stderr
thread '<main>' panicked at '`"pkg-config" "--libs" "--cflags" "gdk-3.0"` did
not exit successfully: exit code: 1
--- stderr
Package gdk-3.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gdk-3.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gdk-3.0' found
', /home/edunham/.multirust/toolchains/nightly/cargo/registry/src/github.com-0a35038f75765ae4/gdk-sys-0.2.1/build.rs:17
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Running notty

Compiling and running scaffolding necessarily builds a bunch of dependencies, some of which throw various warnings. You also might be able to crash scaffolding with an error such as:

thread '<main>' panicked at 'not yet implemented', .../notty/src/datatypes/mod.rs:160

This, along with everywhere else that unimplemented!() occurs in the notty source code, is an opportunity for you to contribute and help improve the project!

]]>
Mon, 15 Feb 2016 00:00:00 -0800
http://edunham.net/2016/01/19/how_much_knowledge_do_you_need_to_give_a_conference_talk.html http://edunham.net/2016/01/19/how_much_knowledge_do_you_need_to_give_a_conference_talk.html <![CDATA[How much knowledge do you need to give a conference talk?]]> How much knowledge do you need to give a conference talk?

I was recently asked an excellent question when I promoted the LFNW CFP on IRC:

As someone who has never done a talk, but wants to, what kind of knowledge do you need about a subject to give a talk on it?

If you answer “yes” to any of the following questions, you know enough to propose a talk:

  • Do you have a hobby that most tech people aren’t experts on? Talk about applying a lesson or skill from that hobby to tech! For instance, I turned a habit of reading about psychology into my Human Hacking talk.
  • Have you ever spent a bunch of hours forcing two tools to work with each other, because the documentation wasn’t very helpful and Googling didn’t get you very far, and built something useful? “How to build ___ with ___” makes a catchy talk title, if the thing you built solves a common problem.
  • Have you ever had a mentor sit down with you and explain a tool or technique, and the new understanding improved the quality of your work or code? Passing along useful lessons from your mentors is a valuable talk, because it allows others to benefit from the knowledge without taking as much of your mentor’s time.
  • Have you seen a dozen newbies ask the same question over the course of a few months? When your answer to a common question starts to feel like a broken record, it’s time to compose it into a talk then link the newbies to your slides or recording!
  • Have you taken a really interesting class lately? Can you distill part of it into a 1-hour lesson that would appeal to nerds who don’t have the time or resources to take the class themselves? (thanks lucyw for adding this to the list!)
  • Have you built a cool thing that over a dozen other people use? A tutorial talk can not only expand your community, but its recording can augment your documentation and make the project more accessible for those who prefer to learn directly from humans!
  • Did you benefit from a really great introductory talk when you were learning a tool? Consider doing your own tutorial! Any conference with beginners in their target audience needs at least one Git lesson, an IRC talk, and some discussions of how to use basic Unix utilities. These introductory talks are actually better when given by someone who learned the technology relatively recently, because newer users remember what it’s like not to know how to use it. Just remember to have a more expert user look over your slides before you present, in case you made an incorrect assumption about the tool’s more advanced functionality.

I personally try to propose talks I want to hear, because the deadline of a CFP or conference is great motivation to prioritize a cool project over ordinary chores.

]]>
Tue, 19 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/16/buildbot_and_eoferror.html http://edunham.net/2016/01/16/buildbot_and_eoferror.html <![CDATA[Buildbot and EOFError]]> Buildbot and EOFError

More SEO-bait, after tracking down a poorly documented problem:

# buildbot start master
Following twistd.log until startup finished..
2016-01-17 04:35:49+0000 [-] Log opened.
2016-01-17 04:35:49+0000 [-] twistd 14.0.2 (/usr/bin/python 2.7.6) starting up.
2016-01-17 04:35:49+0000 [-] reactor class: twisted.internet.epollreactor.EPollReactor.
2016-01-17 04:35:49+0000 [-] Starting BuildMaster -- buildbot.version: 0.8.12
2016-01-17 04:35:49+0000 [-] Loading configuration from '/home/user/buildbot/master/master.cfg'
2016-01-17 04:35:53+0000 [-] error while parsing config file:
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
        _inlineCallbacks(r, g, deferred)
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1099, in _inlineCallbacks
        result = g.send(result)
      File "/usr/local/lib/python2.7/dist-packages/buildbot/master.py", line 189, in startService
        self.configFileName)
    --- <exception caught here> ---
      File "/usr/local/lib/python2.7/dist-packages/buildbot/config.py", line 156, in loadConfig
        exec f in localDict
      File "/home/user/buildbot/master/master.cfg", line 415, in <module>
        extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/status_push.py", line 404, in __init__
        secondaryQueue=DiskQueue(path, maxItems=maxDiskItems))
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 286, in __init__
        self.secondaryQueue.popChunk(self.primaryQueue.maxItems()))
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 208, in popChunk
        ret.append(self.unpickleFn(ReadFile(path)))
    exceptions.EOFError:

2016-01-17 04:35:53+0000 [-] Configuration Errors:
2016-01-17 04:35:53+0000 [-]   error while parsing config file:  (traceback in logfile)
2016-01-17 04:35:53+0000 [-] Halting master.
2016-01-17 04:35:53+0000 [-] Main loop terminated.
2016-01-17 04:35:53+0000 [-] Server Shut Down.

This happened after the buildmaster’s disk filled up and a bunch of stuff was manually deleted. There had been no changes to master.cfg since it last worked perfectly.

The fix was to examine master.cfg to see where the HttpStatusPush was created, of the form:

c['status'].append(HttpStatusPush(
    serverUrl='http://build.servo.org:54856/buildbot',
    extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))

Digging in the Buildbot source reveals that persistent_queue.py wants to unpickle a cache file from /events_build.servo.org/-1 if there was nothing in /events_build.servo.org/. To fix this the right way, create that file and make sure Buildbot has +rwx on it.

Alternately, you can give up on writing your status push cache to disk entirely by adding the line maxDiskItems=0 to the creation of the HttpStatusPush, giving you:

c['status'].append(HttpStatusPush(
   serverUrl='http://build.servo.org:54856/buildbot',
   maxDiskItems=0,
   extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))

The real moral of the story is “remember to use logrotate.”

]]>
Sat, 16 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/13/who_would_you_hire.html http://edunham.net/2016/01/13/who_would_you_hire.html <![CDATA[Who would you hire?]]> Who would you hire?

If you’re using open source as a portfolio to make yourself a more competitive job candidate, it can feel like you have to start your own project to show off your skills.

In the words of one job seeker I chatted with recently, “I feel like most of my contributions [to other peoples’ projects] aren’t that significant or noteworthy”. Here’s a thought experiment to justify including projects to which you contribute, even without a leadership role, on your resume:

Imagine you want to hire a coder.

Candidate A always works alone and refuses to contribute to a project if it doesn’t make her look like a rockstar.

Candidate B triages the unglamorous issues that affect multiple users, and steadily produces small, self-contained fixes that avoid introducing new bugs.

When the situation is framed in these terms, I hope that it’s obvious which coder you’d want on your team.

When writing your resume, there’s only space to include a few of the many activities in which you invest your time. It’s tempting to only include your biggest, highest-profile solo projects, while disregarding those projects to which you’ve made a small but steady stream of useful contributions.

Reread your resume from the perspective of someone who hasn’t met you yet and has only the information in that document available to form a first impression of your character. Which of the 2 hypothetical coders does it make you sound like? Is that how you really are?

]]>
Wed, 13 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/09/troubleshooting_stunnel.html http://edunham.net/2016/01/09/troubleshooting_stunnel.html <![CDATA[Troubleshooting stunnel]]> Troubleshooting stunnel

Today I’ve learned a few things about how stunnel works. The main takeaway is that Googling for specific errors in the stunnel log is incredibly unhelpful, resulting in a variety of mailing list posts with no replies. Tracking an error message through the program’s source doesn’t lead to any useful comments, either. So here’s some SEO bait with concrete troubleshooting suggestions.

I started out, as usual, with a pile of errors:

/usr/local/bin/stunnel
[ ] Clients allowed=2000
[ ] Cron thread initialized
[.] stunnel 5.27 on x86_64-apple-darwin14.0.0 platform
[.] Compiled/running with OpenSSL 1.0.2e 3 Dec 2015
[.] Threading:PTHREAD Sockets:POLL,IPv6 TLS:ENGINE,FIPS,OCSP,PSK,SNI
[ ] errno: (*__error())
[.] Reading configuration from file stunnel.conf
[.] UTF-8 byte order mark not detected
[ ] Initializing service [9987]
[!] Error resolving "127.0.0.1": Neither nodename nor servname known (EAI_NONAME)
[ ] Cannot resolve connect target - delaying DNS lookup
[ ] No certificate or private key specified
[ ] SSL options: 0x03000004 (+0x03000000, -0x00000000)
[.] Configuration successful
[ ] Listening file descriptor created (FD=6)
[!] bind: Address already in use (48)
[!] Error binding service [9987] to 127.0.0.1:9987
[ ] Closing service [9987]
[ ] Service [9987] closed
stunnel startup failed, already running?

[!] Error resolving “127.0.0.1”: Neither nodename nor servname known

This was the biggest wat, and the hardest to track down because the solution is so obvious.

“Error resolving” sounds like the machine hasn’t been informed of localhost’s existence, so let’s check:

$ cat /etc/hosts
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost

And I can even ping 127.0.0.1 successfully. So the message and I have different ideas about what it means to “resolve” an IP.

I found the fix here by diffing the stunnel.conf against that on a working machine, and learned that I’d neglected to specify the correct port number on the destination host.

The solution to the “Error resolving localhost” turned out to be specifying the correct port for the other end of the stunnel:

$ cat stunnel.conf
pid =

[9987]
client = yes
accept = 127.0.0.1:9987
cafile = ./cert.pem
verify = 3
connect = 01.23.456.789:9988

Wow. Painfully obvious after you realize what’s wrong, and just plain painful before.

[!] Error binding service [9987] to 127.0.0.1:9987

The “already running?” hint is correct here. This error means stunnel didn’t let go of the port despite failing to start on a previous attempt.

Easy fix; check whether it’s really stunnel hogging the port:

$ lsof -i tcp:9987
COMMAND PID      USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
stunnel 363        me    6u  IPv4 0x20f17e5e0dd35277      0t0  TCP localhost:dsm-scm-target (LISTEN)

and if so, whack it with a metaphorical hammer:

$ sudo killall stunnel

Tada!

After getting the destination IP+port combination specified correctly and the old broken stunnel killed, the stunnel starts successfully.

]]>
Sat, 09 Jan 2016 00:00:00 -0800
http://edunham.net/2015/12/21/questions_about_open_source_and_design.html http://edunham.net/2015/12/21/questions_about_open_source_and_design.html <![CDATA[Questions about Open Source and Design]]> Questions about Open Source and Design

Today, I posed a question to some professional UI and UX designers:

How can an open source project without dedicated design experts collaborate with amateur, volunteer designers to produce a well-designed product?

They revealed that they’ve faced similar collaboration challenges, but knew of neither a specific process to solve the problem nor an organization that had overcome it in the past.

Have you solved this problem? Have you tried some process or technique and learned that it’s not able to solve the problem? Email me (design@edunham.net) if you know of an open source project that’s succeeded at opening up their design as well, and I’ll post updates back here with what I learn!

In no particular order, here are some of the problems that we were talking about:

  • Non-designers struggle to give constructive feedback on design. I can say “that’s ugly” or “that’s hard to use” more easily than I can say “here’s how you can make it better”.
  • Projects without designers in the main decision-making team can have a hard time evaluating the quality of a proposed design.
  • Non-designers struggle to articulate the objective design needs of their projects, so design remains a single monolithic problem rather than being decomposed into bite-sized, introductory issues the way code problems are.
  • Volunteer designers have a difficult time finding open source projects to get involved with.
  • Non-designers don’t know the difference between different types of design, and tend to bikeshed on superficial, obvious traits like colors when they should be focusing on more subtle parts of the user experience. We as non-designers are like clients who ask for a web site without knowing that there’s a difference between frontend development, back end development, database administration, and systems administration.
  • The tests which designers apply to their work are often almost impossible to automate. For instance, I gather that a lot of user interaction testing involves watching new users attempt to complete a task using a given design, and observing the challenges they encounter.

Again, if you know of an open source project that’s overcome any of these challenges, please email me at design@edunham.net and tell me about it!

]]>
Mon, 21 Dec 2015 00:00:00 -0800
http://edunham.net/2015/12/03/linode_plan_names_and_pricing.html http://edunham.net/2015/12/03/linode_plan_names_and_pricing.html <![CDATA[Linode vs AWS]]> Linode vs AWS

I’m examining a Linode account in order to figure out how to switch the application its instances are running to AWS. The first challenge is that instance types in the main dashboard are described by arbitrary numbers (“UI Name” in the chart below), rather than a statistic about their resources or pricing. Here’s how those magic numbers line up to hourly rates and their corresponding monthly price caps:

RAM     Hourly $    Monthly $   UI Name   Cores   GB SSD
1GB     $0.015/hr   $10/mo      1024      1       24
2GB     $0.03/hr    $20/mo      2048      2       48
4GB     $0.06/hr    $40/mo      4096      4       96
8GB     $0.12/hr    $80/mo      8192      6       192
16GB    $0.24/hr    $160/mo     16384     8       384
32GB    $0.48/hr    $320/mo     32768     12      768
48GB    $0.72/hr    $480/mo     49152     16      1152
64GB    $0.96/hr    $640/mo     65536     20      1536
96GB    $1.44/hr    $960/mo     98304     20      1920

AWS “Equivalents”

AWS T2 instances have burstable performance. M* instances are general-purpose; C* are compute-optimized; R* are memory-optimized. *3 instances run on slightly older Ivy Bridge or Sandy Bridge processors, while *4 instances run on the newer Haswells. I’m disregarding the G2 (GPU-optimized), D2 (dense-storage), and I2 (IO-optimized) instance types in this analysis.

Note that the AWS specs page has memory in GiB rather than GB. I’ve converted everything into GB in the following table, since the Linode specs are in GB and the AWS RAM amounts don’t seem to follow any particular pattern that would lose information in the conversion.

Hourly price is the Linux/UNIX rate for US West (Northern California) on 2015-12-03. Monthly price estimate is the hourly price multiplied by 730.

Instance      vCPU   GB RAM   $/hr    $/month
t2.micro      1      1.07     .017    12.41
t2.small      1      2.14     .034    24.82
t2.medium     2      4.29     .068    49.64
t2.large      2      8.58     .136    99.28
m4.large      2      8.58     .147    107.31
m4.xlarge     4      17.18    .294    214.62
m4.2xlarge    8      34.36    .588    429.24
m4.4xlarge    16     68.72    1.176   858.48
m4.10xlarge   40     171.8    2.94    2146.2
m3.medium     1      4.02     .077    56.21
m3.large      2      8.05     .154    112.42
m3.xlarge     4      16.11    .308    224.84
m3.2xlarge    8      32.21    .616    449.68
c4.large      2      4.02     .138    100.74
c4.xlarge     4      8.05     .276    201.48
c4.2xlarge    8      16.11    .552    402.96
c4.4xlarge    16     32.21    1.104   805.92
c4.8xlarge    36     64.42    2.208   1611.84
c3.large      2      4.02     .12     87.6
c3.xlarge     4      8.05     .239    174.47
c3.2xlarge    8      16.11    .478    348.94
c3.4xlarge    16     32.21    .956    697.88
c3.8xlarge    32     64.42    1.912   1395.76
r3.large      2      16.37    .195    142.35
r3.xlarge     4      32.75    .39     284.7
r3.2xlarge    8      65.50    .78     569.4
r3.4xlarge    16     131      1.56    1138.8
r3.8xlarge    32     262      3.12    2277.6

Comparison

Linode and AWS do not compare cleanly at all. The smallest AWS instance to match a given Linode type’s RAM typically has fewer vCPUs and costs more in the region where I compared them. Conversely, the smallest AWS instance to match a Linode type’s number of cores often has almost double the RAM of the Linode, and costs substantially more.

Switching from Linode to AWS

When I examine the Servo build machines’ utilization graphs via the Linode dashboard, it becomes clear that even their load spikes aren’t fully utilizing the available CPUs. To view memory usage stats on Linode, it’s necessary to configure hosts to run the longview client. After installation, the client begins reporting data to Linode immediately.

After a few days, these metrics can be used to find the smallest AWS instance whose specs exceed what your application is actually using on Linode.

]]>
Thu, 03 Dec 2015 00:00:00 -0800
http://edunham.net/2015/11/25/giving_thanks_to_rust_contributors.html http://edunham.net/2015/11/25/giving_thanks_to_rust_contributors.html <![CDATA[Giving Thanks to Rust Contributors]]> Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:

../../../_images/orglog_deploy_teaser.png

The stats page is interactive and available at http://edunham.github.io/rust-org-stats/. The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) and nightli.es so that the Rust’s organization-wide contributor stats will be automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

]]>
Wed, 25 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/23/docker_on_ubuntu.html http://edunham.net/2015/11/23/docker_on_ubuntu.html <![CDATA[PSA: Docker on Ubuntu]]> PSA: Docker on Ubuntu
$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io system package.

$ sudo apt-get install docker.io
$ which docker
/usr/bin/docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)

]]>
Mon, 23 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/17/installing_rust_without_root.html http://edunham.net/2015/11/17/installing_rust_without_root.html <![CDATA[Installing Rust without root]]> Installing Rust without root

I just got a good question from a friend on IRC: “Should I ask my university’s administration to install Rust on our shared servers?” The answer is “you don’t have to”.

Pick one of the two following sets of directions. I’d recommend using Multirust, because it automatically checks the packages it downloads and lets you switch between Rust versions trivially.

Without multirust

If you just want one version of Rust, this blog post by Valérian Galliat has a fix in 7 lines:

cd ~/.rust
wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
tar xf rust-nightly-x86_64-unknown-linux-gnu.tar.gz
mv rust-nightly-x86_64-unknown-linux-gnu rust
export LD_LIBRARY_PATH=~/.rust/rust/rustc/lib:$LD_LIBRARY_PATH
export PATH=~/.rust/rust/rustc/bin:$PATH
export PATH=~/.rust/rust/cargo/bin:$PATH

If you want rust stable instead of rust nightly, use the URL https://static.rust-lang.org/dist/rust-stable-x86_64-unknown-linux-gnu.tar.gz in the wget step to download the latest stable release.

If you’re security-conscious, you might want to verify the integrity of the tarball before inflating it and running its contents. We provide a GPG signature of every tarball, and sha256 sums of the tarballs and signatures.

You can construct the URL for shasum or GPG signature by adding the desired extension to the tarball’s URL, so for nightly:

https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc.sha256

To verify the GPG signature, you’ll also need a copy of the Rust project’s public key. This key is available through several channels:

  • on the Rust website, available only over HTTPS.
  • on keybase.io, correlated to Rust’s Twitter account and URL. Don’t worry, we authenticated the key by signing a string from Keybase with it locally. We don’t trust them to ever see our private key.
  • on GitHub, in the website’s repository.

Remember, verifying the signature only guarantees that the tarball you downloaded matches the one that was produced by the Rust project’s build infrastructure. As with any piece of software, there exist a variety of threat models from which verifying the signatures cannot completely protect you.
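
Concretely, checking a nightly tarball might look like the following sketch. The signing-key filename here is hypothetical – fetch the key from one of the channels above – and this assumes the .sha256 file is in the “hash filename” format that sha256sum -c expects:

# fetch the tarball, its sha256 sum, and its GPG signature
wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256
wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc

# check that the tarball's hash matches
sha256sum -c rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256

# import the Rust signing key (hypothetical filename), then verify
gpg --import rust-signing-key.asc
gpg --verify rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc \
    rust-nightly-x86_64-unknown-linux-gnu.tar.gz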

Multirust without root

Multirust is a tool that makes it easy to use multiple Rust versions on the same system. Although the absolute easiest way to use it is curl -sf https://raw.githubusercontent.com/brson/multirust/master/blastoff.sh | sh (which will interactively request a sudo password partway through), it can be installed without root as well:

git clone --recursive https://github.com/brson/multirust && cd multirust
./build.sh # create install.sh
mkdir ~/.rust
./install.sh --prefix=~/.rust/
echo "PATH=~/.rust/bin:$PATH" >> ~/.bashrc; source ~/.bashrc

If you run into an error like:

install: WARNING: failed to run ldconfig. this may happen when not installing
as root. run with --verbose to see the error

or, in verbose mode:

install: running ldconfig
/sbin/ldconfig.real: Can't create temporary cache file /etc/ld.so.cache~:
Permission denied
install: WARNING: failed to run ldconfig. this may happen when not installing
as root. run with --verbose to see the error

it means you don’t have permission to write to /etc/ld.so.cache. Until this issue gets fixed, the easiest workaround is to edit the script called by the installer so that it passes -C to ldconfig:

sed -i 's/   ldconfig/   ldconfig -C ~\/.rust\/ld.so.cache/' build/work/multirust-0.7.0/install.sh

Then you should be able to run ./install.sh --prefix=~/.rust/ without the prior warning. It’s a nasty hack, but it’s the easiest way to get things working today.

This technically breaks rustc (since it’s dynamically linked), but if you’re building a Rust project or library, you’ll be using the statically linked cargo tool and thus won’t be affected.

By the way, this is an example of why people who write system utilities like ldconfig should let them read their settings from environment variables as well as from command-line arguments.

Now you can multirust default nightly to install rust-nightly and configure it as the default, and you’re ready to roll!

Testing your Rust installation

You can now make a package that says “Hello World” in just 5 commands, using a workflow that will scale to packaging and distributing larger projects:

cargo new hello --bin
echo "fn main(){println!(\"Hello World\");}" > hello/src/main.rs
cd hello
cargo build
cargo run

Congratulations, you’re running Rust!

]]>
Tue, 17 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/12/multiple_languages_on_travisci.html http://edunham.net/2015/11/12/multiple_languages_on_travisci.html <![CDATA[Multiple languages on TravisCI]]> Multiple languages on TravisCI

Today I noticed an assumption which was making my life unnecessarily difficult: I assumed that if my .travis.yml said language: ruby on the first line, I was supposed to only run Ruby code from it.

Travis lets you run far more arbitrary code than that.

I did a bunch of tests on a toy repo to see what would happen if I ignored my preconceptions about how you can and can’t test stuff, and learned some interesting things:

  • You can install PyPI packages in a test suite that’s technically Ruby, or gems in a test suite that’s technically Python.
  • If your project is language:ruby, you need to sudo pip install dependencies. If it’s language:python, you can just gem install dependencies without sudo.
  • If I specify multiple instances of language: or multiple build matrices, Travis uses the language whose build matrix occurs last. If I specify a Python matrix and then a Ruby one, the Ruby matrix will be run.

This is especially useful when testing or deployment requires hitting an API whose libraries are most up to date in a language other than that of the project.
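
As a sketch, a hypothetical language: ruby project that needs a Python tool during its build might use a .travis.yml along these lines (the package names are illustrative):

language: ruby
install:
  - gem install rake
  - sudo pip install requests
script:
  - rake test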

]]>
Thu, 12 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/04/beyond_openhatch.html http://edunham.net/2015/11/04/beyond_openhatch.html <![CDATA[Beyond Openhatch]]> Beyond Openhatch

Update: I’m now maintaining the issue aggregator list at http://edunham.net/pages/issue_aggregators.html

OpenHatch is a wonderful place to help new contributors find their first open source issues to work on. Their training materials are unparalleled, and the “projects submit easy bugs with mentors” model makes their list of introductory issues reliably high-quality.

However, once you know the basics of how to engage with an open source project, you’re no longer in the target audience for OpenHatch’s list. Where should you look for introductory issues when you want to get involved with a new project, but you’re already familiar with open source in general?

An excellent slide deck by Josh Matthews contains several answers to this question:

  • issuehub.io scrapes GitHub by labels and language
  • up-for-grabs has an opt-in list of projects looking for new contributors, and scrapes their issue trackers for their “jump in”, “up for grabs” or other “new contributors welcome” tags.
  • If you’re looking for Mozilla-specific contributions outside of just code, What can I do for Mozilla? can help direct you into any of Mozilla’s myriad opportunities for involvement.

Additionally, the servo-starters page has a custom view of easy issues sorted by Servo’s project-specific tags.

GitHub Tricks

If you’re looking for open issues across all repos owned by a particular user or organization, you can use the search at https://github.com/pulls and specify the “user” (or org) in the search bar. For instance, this search will find all the unassigned, easy-tagged issues in the rust-lang org (the full query is assembled below). Breaking down the search:

  • user:rust-lang searches all repos owned by github.com/rust-lang. It could also be someone’s github username.
  • is:open searches only open issues.
  • no:assignee will filter out the issues which are obviously claimed. Note that some issues without an assignee set may still have a comment saying “I’ll do this!”, if it was claimed by a user who did not have permissions to set assignees and then not triaged.
  • label:E-Easy uses my prior knowledge that most repos within rust-lang annotate introductory bugs with the E-easy tag. When in doubt, check the contributing.md file at the top level in the org’s most popular repository for an explanation of what various issue labels mean. If that information isn’t in the contributing file or the README, file a bug!
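
Assembled, the whole query reads:

user:rust-lang is:open no:assignee label:E-easy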

Am I missing your favorite introductory issue aggregator? Shoot me an email to ___@edunham.net (fill in the blank with anything; the email will get to me) with a link, and I’ll add it here if it looks good!

]]>
Wed, 04 Nov 2015 00:00:00 -0800
http://edunham.net/2015/10/29/psa_pin_versions.html http://edunham.net/2015/10/29/psa_pin_versions.html <![CDATA[PSA: Pin Versions]]> PSA: Pin Versions

Today, the website’s build broke. We made no changes to the tests, yet a wild dependency error emerged:

Generating...

  Dependency Error: Yikes! It looks like you don't have redcarpet or one of
its dependencies installed. In order to use Jekyll as currently configured,
you'll need to install this gem. The full error message from Ruby is: 'cannot
load such file -- redcarpet' If you run into trouble, you can find helpful
resources at http://jekyllrb.com/help/!

  Conversion error: Jekyll::Converters::Markdown encountered an error while
converting 'conduct.md':

                    redcarpet

             ERROR: YOUR SITE COULD NOT BE BUILT:

                    ------------------------------------

                    redcarpet

The command "jekyll build" exited with 1.

Although Googling the error was unhelpful, a bit more digging revealed that our last working build had been on Jekyll 2.5.3 and the builds breaking on a Redcarpet error all used 3.0.0.

The moral of the story is that where the .travis.yml said - gem install jekyll, it should have said - gem install jekyll -v 2.5.3.
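
In .travis.yml terms, that’s a one-line change (pin to whatever version your last good build used):

install:
  - gem install jekyll -v 2.5.3    # was: gem install jekyll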

]]>
Thu, 29 Oct 2015 00:00:00 -0700
http://edunham.net/2015/10/25/seagl_2015_retrospective.html http://edunham.net/2015/10/25/seagl_2015_retrospective.html <![CDATA[SeaGL 2015 Retrospective]]> SeaGL 2015 Retrospective

As well as nominally helping organize the event, I attended and spoke at SeaGL 2015 this weekend. The slides from my talk are here.

My talk drew an audience of perhaps a dozen people on Friday afternoon. I didn’t record this instance of the talk, but will probably give it at least one more time and be sure to record then.

One of the more useful tools I learned about is called myrepos. It lets you update all of the Git repositories on a machine at the same time, as well as other neat tricks like replaying actions that failed due to network problems. Its author has written a variety of other useful Git wrappers, as well.

Additionally, VCSH seems to be the “I knew somebody else wrote that already!” tool for keeping parts of a home directory in Git.

]]>
Sun, 25 Oct 2015 00:00:00 -0700
http://edunham.net/2015/10/14/upgrading_buildbot_0_8_6_to_0_8_12.html http://edunham.net/2015/10/14/upgrading_buildbot_0_8_6_to_0_8_12.html <![CDATA[Upgrading Buildbot 0.8.6 to 0.8.12]]> Upgrading Buildbot 0.8.6 to 0.8.12

Here are some quick notes on upgrading Buildbot.

System Dependencies

There are more now. In order to successfully install all of Buildbot’s dependencies with Pip, I needed a few more apt packages:

python-dev
python-openssl
libffi-dev
libssl-dev

Then, for sanity’s sake, make a virtualenv and install the following packages (a sketch of the full incantation follows the list). Note that having too new a SQLAlchemy will break things:

buildbot==0.8.12
boto
pyopenssl
cryptography
SQLAlchemy<=0.7.10
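
Something like the following should do it (the virtualenv path is illustrative):

$ sudo apt-get install python-dev python-openssl libffi-dev libssl-dev
$ virtualenv ~/buildbot-venv
$ . ~/buildbot-venv/bin/activate
$ pip install "buildbot==0.8.12" boto pyopenssl cryptography "SQLAlchemy<=0.7.10"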

Virtualenvs

Troubleshooting compatibility issues with system packages on a host that runs several Python services with various dependency versions is predictably terrible.

The potential problem with switching to running Buildbot only from a virtualenv is that developers with access to the buildmaster might want to restart it and miss the extra step of activating the virtualenv. I addressed this by adding the command to activate the virtualenv (using the virtualenv’s absolute path) to the ~/.bashrc of the user that we run Buildbot as. This way, we’ve gained the benefits of having our dependencies consolidated without adding the cost of an extra workflow step to remember.
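
The ~/.bashrc line in question is nothing fancier than (path illustrative, matching the virtualenv above):

source /home/rustbuild/buildbot-venv/bin/activate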

Template changes

Most of Buildbot’s status pages worked fine after the upgrade, but the console view threw a template error because it couldn’t find any variable named “categories”. The fix was to simply copy the new template from venv/local/lib/python2.7/site-packages/buildbot/status/web/templates/console.html to my-buildbot/master/templates/console.html.
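
In command form, that fix is just:

cp venv/local/lib/python2.7/site-packages/buildbot/status/web/templates/console.html \
   my-buildbot/master/templates/console.html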

That’s it!

Rust currently has these updates on the development buildmaster, but not yet (as of 10/14/2015) in prod.

]]>
Wed, 14 Oct 2015 00:00:00 -0700
http://edunham.net/2015/09/29/carrying_credentials_between_environments.html http://edunham.net/2015/09/29/carrying_credentials_between_environments.html <![CDATA[Carrying credentials between environments]]> Carrying credentials between environments

This scenario is simplified for purposes of demonstration.

I have 3 machines: A, B, and C. A is my laptop, B is a bastion, and C is a server that I only access through the bastion.

I use an SSH keypair helpfully named AB to get from me@A to me@B. On B, I su to user. I then use an SSH keypair named BC to get from user@B to user@C.

I do not wish to store the BC private key on host B.

SSH Agent Forwarding

I have keys AB and BC on host A, where I start. Host A is running ssh-agent, which is installed by default on most Linux distributions.

me@A$ ssh-add ~/.ssh/AB     # Add keypair AB to ssh-agent's keychain
me@A$ ssh-add ~/.ssh/BC     # Add keypair BC to the keychain
me@A$ ssh -A me@B           # Forward my ssh-agent

Now I’m logged into host B and have access to the AB and BC keypairs. An attacker who gains access to B after I log out will have no way to steal the BC keypair, unlike what would happen if that keypair was stored on B.
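
You can confirm that both keys made the trip by asking the forwarded agent for its fingerprints:

me@B$ ssh-add -l            # lists the AB and BC keys from A's agent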

See here for pretty pictures explaining in more detail how agent forwarding works.

Anyways, I could now ssh me@C with no problem. But if I sudo su user, my agent is no longer forwarded, so I can’t then use the key that I added back on A!

Switch user while preserving environment variables

me@B$ sudo -E su user
user@B$ sudo -E ssh user@C

What?

The -E flag to sudo preserves the environment variables of the user you’re logged in as. ssh-agent uses a socket whose name is of the form /tmp/ssh-AbCdE/agent.12345 to call back to host A when it’s time to do the handshake involving key BC, and the socket’s name is stored in me‘s SSH_AUTH_SOCK environment variable. So by telling sudo to preserve environment variables when switching user, we allow user to pass ssh handshake stuff back to A, where the BC key is available.

Why is sudo -E required to ssh to C? Because the directory /tmp/ssh-AbCdE is owned by me:me, and only the directory’s owner may read, write, or execute it. Additionally, the socket itself (agent.12345) is owned by me:me, and is not writable by others.

If you must run ssh on B without sudo, chown -R /tmp/ssh-AbCdE to the user who needs to end up using the socket. Making them world read/writable would allow any user on the system to use any key currently added to the ssh-agent on A, which is a terrible idea.

For what it’s worth, the actual value of /tmp/ssh-AbCdE/agent.12345 is available at any time in this workflow as the result of printenv | grep SSH_AUTH_SOCK | cut -f2 -d =.

The Catch

Did you see what just happened there? An arbitrary user with sudo on B just gained access to all the keys added to ssh-agent on A. Simon pointed out that the right way to address this issue is to use ProxyCommand instead of agent forwarding.

No, I really don’t want my keys accessible on B

See man ssh_config for more of the details on ProxyCommand. In ~/.ssh/config on A, I can put:

Host B
    User me
    Hostname 111.222.333.444

Host C
    User user
    Hostname 222.333.444.555
    Port 2222
    ProxyCommand ssh -q -W %h:%p B

So then, on A, I can ssh C and be forwarded through B transparently.

]]>
Tue, 29 Sep 2015 00:00:00 -0700
http://edunham.net/2015/09/10/ansible_conditional_role_dependencies.html http://edunham.net/2015/09/10/ansible_conditional_role_dependencies.html <![CDATA[Ansible: Conditional role dependencies]]> Ansible: Conditional role dependencies

I’ve recently been working on an Ansible role that applies to both Ubuntu and OSX hosts. It has some dependencies which are only needed on OSX. There doesn’t seem to be a central document on all the options available for solving this problem, so here are my notes.

Scenario

The role which must apply to both Ubuntu and OSX hosts builds a Rust compiler capable of cross-compiling to Android, so I call it crosscompiler. To run the crosscompiler role on a Mac, you need the xcode role installed, but applying the xcode role to an Ubuntu host will fail.

A simplified version of this setup looks like:

ansible-configs/
├── galaxy_roles.yaml
├── hosts
├── roles
│   ├── crosscompiler
│   │   ├── defaults
│   │   │   └── main.yaml
│   │   ├── meta
│   │   │   └── main.yaml
│   │   └── tasks
│   │       └── main.yaml
│   └── xcode
│       ├── defaults
│       │   └── main.yaml
│       ├── meta
│       │   └── main.yaml
│       └── tasks
│           └── main.yaml
└── site.yaml

Here are a bunch of different ways to avoid applying a Mac-specific task to an Ubuntu host, or vice versa. Note that any of the following steps in isolation will solve the problem – it should not be necessary to use more than one of them.

Check OS on each task of the role

Add the line when: ansible_os_family == 'Darwin' at the end of each task in roles/xcode/tasks/main.yaml.

This needlessly bloats the code and makes it more difficult to read.

Refactor depended role to ignore non-target platforms

Move the entire contents of roles/xcode/tasks/main.yaml into roles/xcode/tasks/osx.yaml, then create a new tasks/main.yaml containing:

---
- include: osx.yaml
  when: ansible_os_family == 'Darwin'

This avoids the bloat induced by running the conditional on each task, while accomplishing the same goal. Now the xcode role looks like:

xcode
├── defaults
│   └── main.yaml
├── meta
│   └── main.yaml
└── tasks
    ├── main.yaml
    └── osx.yaml

This is the best solution for a role which might later expand to support additional platforms.

Make the dependency conditional in meta/main.yaml of depending role

Edit ansible-configs/roles/crosscompiler/meta/main.yaml so that the dependency on xcode reads:

---
dependencies:
  - { role: 'xcode', when: ansible_os_family == 'Darwin' }

This is the best solution when the inner role will only ever target one platform, as is the case with xcode.

Install role conditionally from site.yaml

Edit ansible-configs/site.yaml to read:

- name: Provision cross-compile hosts
  hosts: xcompilehosts
  roles:
    - { role: xcode, when: ansible_os_family == 'Darwin' }
    - crosscompiler

This is problematic because if I were to distribute the crosscompiler role on the Ansible Galaxy, its dependency logic would not be distributed to other users correctly.

TL;DR

You can conditionally include dependencies in your roles. It’s helpful to end users when galaxy roles only try to apply platform-specific tasks to their target platforms, since you can’t be sure how others will use your code.

]]>
Thu, 10 Sep 2015 00:00:00 -0700
http://edunham.net/2015/08/28/apache_licenses.html http://edunham.net/2015/08/28/apache_licenses.html <![CDATA[Apache Licenses]]> Apache Licenses

At the bottom of the Apache 2.0 License file, there’s an appendix:

APPENDIX: How to apply the Apache License to your work.

...

Copyright [yyyy] [name of copyright owner]

...

Does that look like an invitation to fill in the blanks to you? It sure does to me, and has for others in the Rust community as well.

Today I was doing some licensing housekeeping and made the same embarrassing mistake.

This is a PSA to double-check whether those inviting blanks are part of the appendix before filling them out: the appendix is a template for the notice you attach to your own files, and the Apache license text itself should be left verbatim.

]]>
Fri, 28 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/24/x240_trackpoint_speed.html http://edunham.net/2015/08/24/x240_trackpoint_speed.html <![CDATA[X240 trackpoint speed]]> X240 trackpoint speed

The screen on my X1 Carbon gave out after a couple months, and my loaner laptop in the meantime is an X240.

The worst thing about this laptop is how slowly the trackpoint moves with a default Ubuntu installation. However, it’s fixable:

cat /sys/devices/platform/i8042/serio1/serio2/speed
cat /sys/devices/platform/i8042/serio1/serio2/sensitivity

Note the starting values in case anything goes wrong, then fiddle around:

echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/sensitivity
echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/speed

Some binary-search-themed prodding, and a lot of tee: /sys/devices/platform/i8042/serio1/serio2/sensitivity: Numerical result out of range, confirmed that both files accept values from 0 to 255. Interestingly, setting them to 0 does not seem to disable the trackpoint completely.

If you’re wondering why the configuration settings look like ordinary files but choke on values bigger or smaller than a short, go read about sysfs.

]]>
Mon, 24 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/17/kangaroos.html http://edunham.net/2015/08/17/kangaroos.html <![CDATA[Folklore and fallacy]]> Folklore and fallacy

I was a student employee at the OSU Open Source Lab, on and off between internships and other jobs, for 4 years. Being part of the lab helped shape my life and career, in almost overwhelmingly positive ways. However, the farther I get from the lab the more clearly I notice how being part of it changed the way I form expectations about my own technical skills.

To show you the fallacy that I noticed myself falling into, I’d like to tell you a completely made-up story about some alphabetically named kangaroos. Below the fold, there’ll be pictures!

../../../_images/kangaroos1.jpg

Once upon a time, some kangaroos lived in a desert. One day, for some inscrutable marsupial reason, a bunch of young kangaroos got together to practice jumping. Since they’re not particularly creative creatures, they called the group jumping school.

../../../_images/kangaroos2.jpg

Every kangaroo who came to jumping school started out only being able to jump 1 foot, and got better at a rate of 1 foot per year, and stayed for 4 years.

A kangaroo called Aggie was one of the school’s first students. She came in only able to jump 1 foot, but she improved by 1 foot per year.

../../../_images/kangaroos3.jpg

At the start of the second year that the school was around, a kangaroo named Bill joined. Bill could only jump 1 foot when he started, and improved at a rate of 1 foot per year. But Aggie could always jump 2 feet farther than Bill while she was still in school, because they were both improving at the same rate.

../../../_images/kangaroos4.jpg

Nobody new joined for a while, and Aggie left after her fourth year to go cross roads in front of unwary motorists, but Bill stayed in school and kept improving. When she left school, Aggie could jump 5 feet. She knew she’d worked hard, and could always jump farther than Bill, so she felt pretty good about herself.

At the start of the school’s fourth year, after Aggie had left, a new student named Claire joined. Claire could only jump 1 foot at first, but improved at a rate of 1 foot per year.

../../../_images/kangaroos5.jpg

Claire and Bill would chat about school sometimes, and Claire observed that Bill could always jump 2 feet farther than her. When she commented on it, Bill said “If you think I can jump far, you should have seen Aggie! She was a student here before you came, and she could always jump 2 feet farther than me!”.

At the end of the school’s 6th year, Bill finished up and went away to fight in boxing matches. (When Bill left, he could jump 5 feet. He always suspected he could have done better, since he remembered Aggie always being able to jump just a bit farther than him.)

A new student, Dave, joined after Bill left. Dave started out being able to only jump 1 foot and was able to jump 5 feet by the time he’d been at the school for 4 years. Dave knew that Claire could always jump 2 feet farther than him while they were in school together. Dave heard stories from Claire about Bill and Aggie, who could both jump even farther than her!

../../../_images/kangaroos6.jpg

2 years after Dave joined, Claire left jumping school for a full-time job tearing up farmers’ crops and gardens. She knew she’d tried her best, and she could jump 5 feet after she’d been at school for 4 years, but she also knew that Bill had always been a better jumper than her and Aggie had been even better than Bill. Her 5 feet didn’t seem particularly impressive, since (she even double-checked her math on it!) Aggie would have been able to jump 4 feet farther, so that must have meant Aggie was a student who could jump 9 feet.

A couple of years later, Dave was finally finished with school. He’d come in only being able to jump 1 foot, and left being able to jump 5 feet! But he wasn’t sure if this was better or worse than normal, so he thought about the other kangaroos who’d also gone to the school.

../../../_images/kangaroos7.jpg

Dave knew that Claire was a student, and she was able to jump 2 feet farther than him whenever they studied together. Bill was a student who could jump 2 feet farther than Claire, and Aggie was a student who could jump 2 feet farther than Bill! Since Dave could jump 5 feet, he concluded that Claire could jump 7 feet, Bill could jump 9, and Aggie could jump 11! Not only was Dave the worst of the lot, he wasn’t even half as good as the school’s first student! He felt pretty bad about his accomplishments, and wondered why students were getting worse every year.

Epilogue

../../../_images/kangaroos8.jpg

Once upon a somewhat later time, Jumping School had a reunion and all of the former students attended. Claire and Dave were a little afraid of meeting Aggie, since they’d heard such impressive stories of how far she could jump. All 4 kangaroos tested how far they could jump, just for old times’ sake, and they found that they could all jump about 6 feet! They compared stories about their experiences since leaving school, and found that their rates of improvement had slowed down as they got closer to the limit of how far their species was able to jump.

In reality, red kangaroos only jump about 1.8 meters, which is the only factually accurate part of this entire story.

The Moral

I think that a similar effect distorted my perception of my own competence when I compared myself to past OSL students. One can spot the fallacy pretty easily when everything is spelled out with cute photos: relative skill levels don’t translate reliably into absolute ones over time. It’s trickier to spot the same fallacy in real life, but you might have an easier time now that you’ve seen the pattern once before.

Photo credits, in order:

]]>
Mon, 17 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/17/rustcamp_videos_are_available.html http://edunham.net/2015/08/17/rustcamp_videos_are_available.html <![CDATA[RustCamp videos are available]]> RustCamp videos are available

The videos from RustCamp are available here.

I asked Gankro what was up with the milkshake thing in his talk, and learned about this meme.

]]>
Mon, 17 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/09/don_t_starve.html http://edunham.net/2015/08/09/don_t_starve.html <![CDATA[Don't Starve]]>

Don’t Starve

It was a lazy Sunday afternoon and I wanted to play Don’t Starve. This actually ended up meaning about 3 hours of intermittent troubleshooting and 1 hour of games, because Linux.

Get the files

I bought Don’t Starve from the Humble Bundle store, although there are other methods of obtaining it which strike a different balance between cost and convenience.

The downloaded file is dontstarve_x64_july21.tar.gz.

This Just Works

$ yaourt -S libcurl-compat
$ tar -xvf dontstarve_x64_july21.tar.gz
$ cd dontstarve/bin
$ LD_PRELOAD=libcurl.so.3 ./dontstarve

Below the fold is the troubleshooting process I went through to make it look so easy. Hopefully it’ll be of assistance to those searching for the errors that I ran into!

This part has SEO for all the intermediate errors

Try to run the script

$ cd dontstarve
$ ./dontstarve.sh
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number

(updater:2135): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='latin'

(updater:2135): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='common'
./dontstarve.sh: line 2: unexpected EOF while looking for matching ``'
./dontstarve.sh: line 4: syntax error: unexpected end of file

Look what doesn’t like me! The fonts are sad. Spoiler: You don’t actually need those fonts at all.

Much Frustration

It turns out that the ./dontstarve.sh in the root of the unzipped dontstarve directory is solely an updater, which is totally irrelevant if you just want to play whatever version of the game they happened to ship you. It requests a game key, has hideously broken fonts, and generally sprouts new errors like a Hydra every time you think you’re making progress.

Try running the correct dontstarve executable

The correct script to run actually lives in bin/dontstarve.sh, but if you try to run it from outside that directory, it can’t find other files:

$ bin/dontstarve.sh
bin/dontstarve.sh: line 3: ./dontstarve: No such file or directory

Cute. So instead:

$ cd bin
$ ./dontstarve.sh
./dontstarve: /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by ./dontstarve)

OR:

$ bin/dontstarve # Spoiler, this will still be broken later
bin/dontstarve: /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by bin/dontstarve)

Fixing the libcurl error

Get libcurl-compat from the AUR:

$ yaourt -S libcurl-compat

At the end of its installation process, it’ll give you a helpful little warning about preloading the library:

Sometimes you have to preload library!
 e.g. if you see this message:
 /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found'
 Do this:
 LD_PRELOAD=libcurl.so.3 youprogname

Run dontstarve with the right libcurl

$ LD_PRELOAD=libcurl.so.3 bin/dontstarve

Hello Segfaults

And then we get a nice reproducible segfault, ending in:

ERROR: Missing Shader 'shaders/font.ksh'.
Assert failure '0' at ../source/renderlib/OpenGL/HWEffect.cpp(86)

Assert failure 'BREAKPT:' at ../source/renderlib/OpenGL/HWEffect.cpp(86)

Assert failure 'datasize + mReadHead <= mBufferLength' at ../source/util/reader.h(28)

Assert failure 'BREAKPT:' at ../source/util/reader.h(28)

Segmentation fault (core dumped)

This is NOT the cue to go shave a graphics card yak, despite what Googling the error would lead one to believe. This is the cue to:

$ cd bin
$ LD_PRELOAD=libcurl.so.3 ./dontstarve

If it’s stupid, but it works, it ain’t stupid. Running the executable from the right directory makes the game work on an X230 and on an X1 Carbon, so in my (un)professional opinion, it’s got nothing to do with special fancy graphics drivers.

It works!

Scroll way back up to the top for the short version. Have fun, and Don’t Starve!

P.S.

I tried this with the x32 version. It goes all:

~/Downloads/dontstarve/bin $ LD_PRELOAD=libcurl.so.3 ./dontstarve
bash: ./dontstarve: No such file or directory

The script and executable have the same permissions when unzipped from the 32- or 64-bit tarballs... Just, the 32-bit one doesn’t work. A “No such file or directory” error for a file that plainly exists is the usual symptom of a 64-bit system lacking the 32-bit loader and libraries the executable needs, but since the x64 variant of the game runs just fine I didn’t dig into the x32‘s malfunctions any deeper.

]]>
Sun, 09 Aug 2015 00:00:00 -0700
http://edunham.net/2015/07/31/how_many_rust_channels_are_there.html http://edunham.net/2015/07/31/how_many_rust_channels_are_there.html <![CDATA[How many Rust channels are there?]]> How many Rust channels are there?

I’m using search.mibbit.com to count these. All have at least one user in them as of 4pm PST 2015-07-31.

There are 54 Rust-related channels on irc.mozilla.org.

List below the fold.

42 General and Project channels:

##rustfmt
#cargo
#hematite (Minecraft-in-Rust)
#hyper (an HTTP library, https://github.com/hyperium/hyper)
#iron (a web framework, https://github.com/iron/iron)
#mio
#rust
#rust-api
#rust-apidesign
#rust-audio
#rust-bikeshed
#rust-bots
#rust-casino
#rust-community
#rust-config
#rust-crypto
#rust-data
#rust-design
#rust-dev
#rust-diverse
#rust-fsnotify
#rust-gamedev
#rust-internals
#rust-lang
#rust-learners
#rust-libs
#rust-music
#rust-newspeak
#rust-osdev
#rust-politics
#rust-tls
#rust-tools
#rust-triage
#rust-tty
#rust-war
#rust-webdev
#rust-workshop
#rust-ww
#rust_offtopic
#rustaudio
#servo
#winapi

3 social channels:

#rust-chat
#rust-offtopic
#rust-offtopic-offtopic

9 apparently language- or location-specific channels:

#rust-br
#rust-de
#rust-fr
#rust-hu
#rust-learners-de
#rust-nyc
#rust-ru
#rust-seattle
#rust.fi
]]>
Fri, 31 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/28/good_times.html http://edunham.net/2015/07/28/good_times.html <![CDATA[Good times]]> Good times

People sometimes say “morning” or “evening” on IRC from time zones unlike my own. Here’s a bash one-liner that emits the correct time-of-day generalization based on the datetime settings of the machine you run it on.

case $((10#$(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo good $m

How?

First, check if the feature is already implemented. man date and try not to giggle. Search for morning. It’s not there.

So we need a switch/case:

case EXPRESSION in CASE1) COMMAND-LIST;; CASE2) COMMAND-LIST;; ... CASEN) COMMAND-LIST;; esac

And the expression we’re switching on will be the current hour:

date +%H

My first attempt does not work because I expect too much of Bash:

case $(date +%H) in [0-12]) m="morning";;[13-18]) m="afternoon";;[19-21])m="evening";;*)m="night";;esac; echo $m

It fails because the “ranges” are actually just shell patterns.

I could either expand my script to handle all hours, or compress ranges of hours down into something that can be expressed by patterns. The latter sounds shorter and easier. I want to divide the current hour by 6, to tell which quarter of the day I’m in.

A bit of trial and error reveals that a syntax that allows me to do math on the result of date is:

$(( $(date +%H)/6 ))

because $(( )) is arithmetic expansion: the shell evaluates the math and substitutes the result in place. One more wrinkle: date +%H zero-pads the hour, and a leading zero makes Bash arithmetic treat 08 and 09 as (invalid) octal, so a 10# prefix is needed to force base ten. All of this only adds a few characters to the one-liner:

case $((10#$(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo $m
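
Spread out for readability, the same logic is:

hour=$(date +%H)
case $(( 10#$hour / 6 )) in
    0|1) m="morning" ;;
    2)   m="afternoon" ;;
    3)   m="night" ;;
esac
echo "good $m"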

That’s it!

]]>
Tue, 28 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/20/printing.html http://edunham.net/2015/07/20/printing.html <![CDATA[Printing]]> Printing

The office printers have instructions for setting them up under Windows, Mac, and Ubuntu. I had forgotten how to wrangle printers, since the last time I had to set up new ones was half a decade ago when I first joined the OSL.

Setting up printers on Arch is easy once you know the right incantations, but can waste some time if you try to do it by skimming the huge wiki page rather than either reading it thoroughly or just following these steps:

Install the CUPS client:

$ yaourt -S libcups

Add a magic line to /etc/cups/cups-files.conf:

SystemGroup username

Replace username with your own user on the system; this assumes you have root and will log in as yourself when the web interface prompts for credentials. That line can go anywhere in the file.

Make the daemon go:

$ sudo systemctl enable org.cups.cupsd.service
$ sudo systemctl start org.cups.cupsd.service

Visit the web interface at http://localhost:631.

Then you have a GUI sufficiently similar to the one in the instructions for Ubuntu!
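
Once the printer shows up, you can sanity-check it from the shell (the queue name here is illustrative):

$ lpstat -p                           # list configured printers and their status
$ lp -d office_printer /etc/hostname  # send a tiny test job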

There is no GUI client for CUPS to install. If you find yourself mucking about with gpr, xpp, kdeprint, or /etc/cups/client.conf, you have gone way too far down the wrong rabbit hole.

]]>
Mon, 20 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/17/replacing_buildbot_s_outdated_cert.html http://edunham.net/2015/07/17/replacing_buildbot_s_outdated_cert.html <![CDATA[Outage postmortem: Replacing Rust Buildbot's outdated cert]]>

Outage postmortem: Replacing Rust Buildbot’s outdated cert

At the end of the day on July 14th, 2015, the certificate that Rust’s buildbot slaves were using to communicate with the buildmaster expired. This broke things. The problem started at midnight on July 15th, and was only fully resolved at the end of July 16th. Much of the reason for this outage’s duration was that I was learning about Buildbot as I went along.

Here’s how the outage got resolved, just in case anyone (especially future-me) finds themself Googling a similar problem.

Troubleshooting

Dave Huseby pointed out the problem on IRC when the slaves that he runs were unable to connect to the buildmaster:

16:48:23 <&brson> edunham: dhuseby said this earlier <huseby> it seems like the verify=3 in the stunnel config is the problem
16:48:39 <&brson> if he changed 'verify' to some other value in the stunnel config it worked

A quick check of the stunnel docs shows that verify=3 is the strictest setting, and will fail if the locally installed cert isn’t right. This supports the hypothesis that our cert might be expired. On the buildmaster, I found the cert and examined its metadata:

$ find . -type f -name "*.pem"
$ openssl x509 -noout -issuer -subject -dates -in certname.pem

On the old cert, the results contained:

$ openssl x509 -noout -issuer -subject -dates -in rust-bot-cert.pem
issuer= /
    O=Rust Project/
    OU=Bot/
    CN=bot.rust-lang.org/
    emailAddress=admin@rust-lang.org
subject= /
    O=Rust Project/
    OU=Bot/
    CN=bot.rust-lang.org/
    emailAddress=admin@rust-lang.org
notBefore=Jul 14 02:28:50 2012 GMT
notAfter=Jul 14 02:28:50 2015 GMT

This tells me that the cert was created in 2012 and had its expiry set for the seemingly distant future of 2015.

Make a New Cert

To determine whether the old key had a passphrase on it, run openssl rsa -check -in keyname.pem. It writes the private key to your terminal if there’s no password, or prompts for a passphrase if the key has one.

The stunnel docs give most of the relevant incantation. Since no file in our buildbot directory is named precisely stunnel.conf, make cert doesn’t quite work right. But it works fine to manually run a variant of the command given in the docs:

$ openssl req -new -x509 -days 3650 -nodes -out cert.pem -keyout key.pem

That prompted me for a variety of information, which I entered where applicable and left blank where it wasn’t. The metadata is primarily for the benefit of others verifying that a cert belongs to the correct person, which isn’t a relevant concern in our use case.

I then backed up the old key and cert (although they’re no longer usable, they contain a bunch of metadata that I didn’t know whether I’d need later) and moved the new key and cert to match the old ones’ original file names.

Finally, I updated the repository with the new certificate.

Make Buildbot spin up AMIs with the new cert

This was the tricky bit. Since the slave image does not pull updates to its copy of the Rust Buildbot github repo when it boots, the file had to be statically edited and then the AMIs re-saved. But Buildbot makes its instance requests based on AMI ID, and the IDs are unique to a particular image. So the workflow goes:

  • Figure out which AMI Buildbot will spin up for a given job
  • Spin up an instance of the AMI
  • Remote into it and manually update the cert
  • Save the instance into a new AMI, noting its ID
  • Update Buildbot’s configuration with the new ID

Figure out which AMI Buildbot will use

The AMI IDs used for spot requests are stored in /home/rustbuild/rust-buildbot/master/slave-list.txt on the buildmaster. From that file I determined that we only had 4 unique AMIs in use:

ami-b74fa1f3 -- windows
ami-7fd23e3b -- generic linux
ami-381e197d -- android
ami-dbac5f9f -- centos5 (builds snapshots that work with ancient Linux)

The slave list also told me that all requests would be made for instance type c3.2xlarge.

Spin up an instance of the AMI

Since there were only 4, I did this manually. If it was a recurring task or there had been more AMIs, I would have automated this part of the process.

In the AWS Console, go to EC2, then click the AMIs link under “Images” at the left.

Search for the AMI ID in the search box. Only one AMI is found, because IDs are unique. Then click the big blue Launch button up at the left.

../../../_images/amis.png

The only “gotcha” in the ensuing 7-step process is making sure to put the instance into the correct security group. I spun my temporary instances up into the same group as a host I know I can get to from the Bastion server with my credentials, to reduce the number of steps I’d have to troubleshoot if they were difficult to access.

It also helps to tag the spot request with the name of the AMI it was created from, when processing several at once.

Remote into the instance and update the cert

For Linux-flavored instances, this just meant ssh rustbuild@00.00.00.00 (using the instance’s public IP, visible in the main instances list) from the Bastion. I then found the old cert on the host, and verified that it matched the old cert on the buildmaster. Checking that the certs matched was as simple as running md5sum cert.pem on both and visually comparing the results, and reassured me that I was overwriting the correct file.

Getting into the Windows hosts requires using RDP after setting up an SSH tunnel to the bastion. Since I was on airport wifi at the time, I had a teammate stick the cert onto the Windows instances instead.
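
The tunnel itself is a standard local port forward; with illustrative addresses, it looks like:

$ ssh -N -L 3389:10.0.0.5:3389 me@bastion    # then point the RDP client at localhost:3389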

With our configuration, the cert that stunnel actually uses on a Windows host lives at C:\Program Files (x86)\stunnel\cert.pem, not in the repo checkout like on all the sensible operating systems. Although there exists a C:\bot\cert.pem, replacing it does not cause stunnel to connect successfully.

Save the instance into a new AMI

Check the box by the instance’s name in the EC2 instances list, then follow the menus around for Actions -> Image -> Create Image. Note the new AMI’s ID, and replace all instances of the old AMI’s ID with the new one in the buildmaster’s slave-list.txt.

Kick Buildbot a bit

After the AMIs and the FreeBSD, Bitrig, and Mac builders all had the new cert, I restarted Buildbot on the buildmaster and reran its script for creating the stunnels. Although it didn’t gracefully pick up where it had left off on partially built pull requests, closing and re-opening the PRs caused it to notice them and resume building successfully.

Hopefully TaskCluster gets OSX support soon, so we can start switching off of Buildbot.

Prevent it from happening again

After I first published this post, Gerv pointed out that the correct final step would be “Add an alarm to the shared IT calendar for a month before the new cert expires”. In my case, the analog to that alarm is “Make sure we move away from Buildbot in less than a decade”. However, if you’re reading this post to solve a similar problem in an infrastructure that will still exist at the date of the cert’s expiry, you should automate a reminder so that you or your successor doesn’t get the same unpleasant surprise.
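
If a script suits you better than a calendar entry, openssl can check expiry windows directly. A minimal sketch with a 30-day window:

$ openssl x509 -checkend $((30*24*60*60)) -noout -in rust-bot-cert.pem || echo "cert expires within 30 days -- renew it!"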

]]>
Fri, 17 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/16/airport_wifi.html http://edunham.net/2015/07/16/airport_wifi.html <![CDATA[Airport Wifi]]> Airport Wifi

Many “free” wifi hotspots give you a limited time per computer. If you’re traveling light and forgot to bring extra devices, it’s easy to give a Linux laptop multiple personalities:

$ ip link
    1: lo
    2: wlp4s0
    3: enp0s25
$ ip link set dev wlp4s0 down
$ macchanger -r wlp4s0
$ ip link set dev wlp4s0 up

... And then connect to the wifi and jump through its silly captive portal hoops again!

Changing your MAC address occasionally can be part of a healthy security diet, making your device slightly more difficult to track, as well.

]]>
Thu, 16 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/13/interactive_rust_examples_in_static_pages.html http://edunham.net/2015/07/13/interactive_rust_examples_in_static_pages.html <![CDATA[Interactive Rust Examples in Static Pages]]> Interactive Rust Examples in Static Pages

Rust by Example has a little box where readers can interact with some example Rust code, run it using the playground, and see the results in the page. As a sysadmin I’m loath to recommend that anybody trust the playground for anything, but as a nerd and coder I recognize that it’s super cool and people want to use it.

There are 2 ways to stuff a Playground into your website: The easy way, and the “right” way. Here’s how to do it the easy way, and where to look for examples of the hard way.

The Easy Way

There are these cool things called iframes that basically let you put websites in your websites...

../../../_images/xzibit.jpg

All you have to do is stick this one line in your page’s source, and modern browsers will go load and inject the specified page – in this case, the playpen. The exact technique for injecting raw HTML will differ based on your blogging platform. With Tinkerer, the source will look like this:

.. raw:: html

    <iframe src="https://play.rust-lang.org/" style="width:100%; height:400px;"></iframe>

That’ll render with the default contents of the playpen. Note that it’ll sometimes try to be clever and load the code that the viewer most recently opened in it when loaded with no arguments: