edunham (http://edunham.net/) is a "DevOps" Engineer at Mozilla Research.

http://edunham.net/2017/06/27/internet_safety.html

Opinion: Levels of Safety Online

The Mozilla All-Hands this week gave me the opportunity to explore an exhibit about the “Mozilla Worldview” that Mitchell Baker has been working on. The exhibit sparked some interesting and sometimes heated discussion (as a direct result of my decision to express unpopular-sounding opinions), and helped me refine my opinions on what it means for someone to be “safe” on the internet.

Spoiler: I think that there are many different levels of safety that someone can have online, and the most desirable ones are also the most difficult to attain.

Obligatory disclaimer: These are my opinions. You’re welcome to think I’m wrong. I’d be happy to accept pull requests to this post adding tools for attaining each level of safety, but if you’re convinced I’m wrong, the best place to say that would be your own blog. Feel free to drop me a link if you do write up something like that, as I’d enjoy reading it!

Safety to Consume Desired Information

I believe that the fundamental layer of safety that someone can have online is to be able to safely consume information. Even at this basic level, a lot of things can go wrong. To safely consume information, people need internet access. This might mean free public WiFi, or a cell phone data plan. Safety here means that the user won’t come to harm solely as a result of what they choose to learn. “Desired information” means that the person gets a chance to find the “best” answer to their question that’s available.

How could someone come to harm as a result of choosing to learn something? If you’ve ever joked about a particular search getting you “put on a watch list”, I’m sure you can guess. I happen to hold the opinion that knowledge is an amoral tool, and it’s the actions that people take for which they should be held accountable – if you believe that there exist facts that are inherently unethical to know, we’ll necessarily differ on the importance of this safety.

How might someone fail to get the information they desired? Imagine someone searching for the best open source social networking tools on a “free” internet connection that’s provided and monitored by a social networking giant. Do you think the articles that turn up in their search results would be comparable to what they’d get on a connection provided by a less biased organization?

Why “desired information”, and not “truth”? My reason here is selfish. I enjoy learning about different viewpoints held by groups who each insist that the other is completely wrong. If somebody tried to moderate what information is “true” and what’s “false”, I would probably only be allowed to access the propaganda of at most one of those groups.

Sadly, if your ISP is monitoring your internet connection or tampering with the content you’re trying to view, there’s not a whole lot that you can do about it. The usual solution is to relocate – either physically, or by feigning relocation with an onion router or proxy. By building better tools, legislation, and localization, it’s plausible that we could extend this safety to almost everyone in the world within our lifetimes.

Safety to Produce Information Anonymously

I think the next layer of internet safety that people need is the ability to produce information anonymously. The caveat here is that, of course, nobody else is obligated to choose to host your content for you. The safety of hosting providers, especially coupled with their ability to take financial payment while maintaining user anonymity, is a whole other can of worms.

Why does producing information anonymously come before producing information with attribution? Consider the types of danger that accompany producing content online. Attackers often choose their victims based on characteristics that the victims have in the physical world. Attempted attacks often cause harm because the attacker could identify the victim’s physical location or social identity. While the best solution would of course be to prevent the attackers from behaving harmfully at all, a less ambitious but more attainable project is to simply prevent them from being able to find targets for their aggression. Imagine an attacker determined to harm all people in a certain group, on an internet where nobody discloses whether or not they’re a member of that group: The attacker is forced to go for nobody or everybody, neither of which is as effective as an individually targeted attack. And that’s just for verbal or digital assaults – it is extremely difficult to threaten or enact physical harm upon someone whose location you do not know.

Systems that support anonymity and arbitrary account creation open themselves to attempted abuse, but they also provide people with extremely powerful tools to avoid being abused. There are of course tradeoffs – it takes a certain amount of mental overhead, and might feel duplicitous, to use separate accounts for discussing your unfashionable political views and planning the local block party – but there’s no denying how much less harm it is possible to come to when behaving anonymously than when advertising your physical identity and location.

How do you produce information anonymously? First, you access the internet in a way that won’t make it easy to trace your activity to your house. This could mean booting from a LiveCD and accessing a public internet connection, or booting from a LiveCD and using a proxy or onion router to connect to the sites you wish to access in order to mask your IP address. A LiveCD is safer than using your day-to-day computer profile because browsers store information from sites you visit, and some information about your operating system is sometimes visible to sites you visit. Using a brand-new copy of your operating system, which forgets everything when you shut down, is an easy way to avoid revealing those identifying pieces of information.

Proofread anything that you want to post anonymously to make sure it doesn’t contain any details about where you live, or facts that only someone with your experiences would know.

How do you put information online anonymously? Once you have a connection that’s hard to trace to your real-world self, it’s pretty simple to set up free accounts on mail and web hosting sites under some placeholder name.

Be aware that the vocabulary you use and the way you structure your sentences can sometimes be identifying, as well. A good way to strip all of the uniqueness from your writing voice is to run a piece of writing through http://hemingwayapp.com/ and fix everything that it calls an error. After that, use a thesaurus to add some words you don’t usually use anywhere else. Alternately, you could run it through a couple different translation tools to make it sound less like you wrote it.

How do you share something you wrote anonymously with your friends? Here’s the hard part: You don’t. If you’re not careful, the way that you distribute a piece of information that you wrote anonymously can make it clear that it came from you. Anonymously posted information generally has to be shared publicly or to an entire forum, because to pick and choose exactly which individuals get to see a piece of content reveals a lot about the identity of the person posting it.

Doing these things can enable you to produce a piece of information on the internet that would be a real nuisance to trace back to you in real life. It’s not impossible, of course – there are sneaky tricks like comparing the times when you use a proxy to the times when material shows up online – but someone would only attempt such tricks if they already had a high level of technical knowledge and a grudge against you in particular.

Long story short, in most places with internet access, it is possible but inconvenient to exercise your safety to produce information anonymously. By building better online tools and hosting options, we can extend this safety to more people who have internet access.

Safety to Produce Information Pseudonymously

An important thing to note about producing information anonymously is that if you step up and take credit for another piece of information you posted, you’re less anonymous. Add another attribution, and you’re easier still to track. It’s most anonymous to produce every piece of information under a different throwaway identity, and least anonymous to produce everything under a single identity even if it’s made up.

Producing information pseudonymously is when you use a fake name and biography, but otherwise go about the internet as the same person from day to day. The technical mechanics of producing a single pseudonymous post are identical to what I described for acting “anonymously”, but I differentiate pseudonymity from anonymity in that the former is continuous – you can form friendships with other humans under a pseudonym.

The major hazard to a pseudonymous online presence is that if you aggregate enough details about your physical life under a single account, someone reading all those details might use them to figure out who you are offline. This is addressed by private forums and boards, which limit the number of possible attackers who can see your posts, as well as by being careful of what information you disclose. Beware, however, that any software vulnerability in a private forum may mean its contents suddenly becomes public.

In my opinion, pseudonymous identity is an excellent compromise between the social benefits of always being the same person, and physical safety from hypothetical attackers. I find that behaving pseudonymously rather than anonymously helps me build friendships with people whom I’m not sure at first whether to trust, while maintaining a sense of accountability for my reputation that’s absent in strictly anonymous communication. But hey, I’m biased – you probably don’t know my full name or home address from my web presence, so I’m on the pseudonymity spectrum too.

Safety to Produce Information with Accurate Attribution

The “safety” to produce information with attribution is extremely complex, and it’s the one on which I believe most social justice advocates tend to focus. It is as it sounds: Will someone come to harm if they choose to post their opinions and location under their real name?

For some people, this is the easiest safety to acquire: If you’re in a group that’s not subject to hate crimes in your area, and your content is only consumed by people who agree with you or feel neutrally toward your views, you have this freedom by default.

For others, this safety is almost impossible to obtain. If the combination of your appearance and the views you’re discussing would get you hurt if you said it in public, extreme social change would be required before you had even a chance at being comparably safe online.

I hold the opinion that solving the general case of linking created content to real-world identities is not a computer problem. It’s a social problem, requiring a world in which no person offended by something on the internet and aware of where its creator lives is physically able to take action against the content’s creator. So it’d be great, but we are not there yet, and the only fictional worlds I’ve encountered in which this safety can be said to exist are impossibly unrealistic, totalitarian dystopias, or both.

In Summary

In other words, I view misuse of the internet as a pattern of the form “Creator posts content -> attacker views content -> attacker identifies creator -> attacker harms creator”. This chain can break, with varying degrees of difficulty, at several points:

First, this chain of outcomes won’t begin if the creator doesn’t post the content at all. This is the easiest solution, and I point out the “safety to consume desired content” because even someone who never posts online can derive major benefits from the information available on the internet. It’s easy, but it’s not good enough: Producing as well as consuming content is part of what sets the internet apart from TV or books.

The next essential link in the chain is the attacker identifying the content’s creator. If someone has no way to contact you physically or digitally, all they can do is shout nasty things to the world online, and you’re free to either ignore them or shout right back. Having nasty things shouted about your work isn’t optimal, but it is difficult to feel that your physical or social wellbeing is jeopardized by someone when they have no idea who you are. This is why I believe that the safety to produce information anonymously is so important: It uses software to change the outcome even in circumstances where the attacker’s behavior cannot be modified. Perfect pseudonymity also breaks this link, but any software mishap or accidental over-sharing can invalidate it instantly. The link is broken with fewer potential points of failure by creating content anonymously.

The third solution is what I alluded to when discussing the safety of pseudonymity: Prevent the attacker from viewing the content. This is what private, interest-specific forums accomplish reasonably well. There are hazards here, especially if a forum’s contents become public unintentionally, or if a dedicated attacker masquerades as a member of the very group they wish to harm. So it helps, and can be improved technologically through proper security practices by forum administrators, and socially via appropriate moderation. It’s better, from the perspective that assuming the same online identity each day allows creators to build social bonds with one another, but it’s still not optimal.

The fourth and ideal solution is to break the cycle right at the very end, by preventing the attacker from harming the content creator. This seems to be the point at which most advocates argue we should jump straight in, because it’s really perfect – it requires no change or compromise from content creators, and total change from those who might be out to harm them. It’s the only solution in which people of all appearances and beliefs and locations are equally safe online. However, it’s also the most difficult place to break the cycle, and a place at which any error of implementation would create the potential for incalculable abuse.

I’ve listed these safeties in an order that I regard as how feasible they are to implement with today’s social systems and technologies. I think it’s possible to recognize the 4th safety as the top of the heap, without using that as an excuse to neglect the benefits which can come from bringing more of the world its 3 lesser but far more attainable cousins.

Tue, 27 Jun 2017 00:00:00 -0700

http://edunham.net/2017/05/23/salt_successful_ping_but_highstate_says_minion_did_not_return.html

Salt: Successful ping but highstate says “minion did not return”

Today I was setting up some new OSX hosts on Macstadium for Servo’s build cluster. The hosts are managed with SaltStack.

After installing all the things, I ran a test ping and it looked fine:

user@saltmaster:~$ salt newbuilder test.ping
newbuilder:
    True

However, running a highstate caused Salt to claim the minion was non-responsive:

user@saltmaster:~$ salt newbuilder state.highstate
newbuilder:
    Minion did not return. [No response]

Googling this problem yielded a bunch of other “minion did not return” kind of issues, but nothing about what to do when the minion sometimes returns fine and other times does not.

The fix turned out to be simple: When a test ping succeeds but a longer-running state fails, it’s an issue with the master’s timeout setting. The timeout defaults to 5 seconds, so a sufficiently slow job will look to the master like the minion was unreachable.

As explained in the Salt docs, you can bump the timeout by adding the line timeout: 30 (or whatever number of seconds you choose) to the file /etc/salt/master on the salt master host.
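In config form, that’s a one-line change (a minimal sketch):

# /etc/salt/master
timeout: 30

The master only reads its config at startup, so restart the salt-master service after editing. For a one-off slow job, the salt command also accepts a temporary timeout via -t, e.g. salt -t 60 newbuilder state.highstate.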

Tue, 23 May 2017 00:00:00 -0700

http://edunham.net/2017/03/01/advice_on_storing_encryption_keys.html

Advice on storing encryption keys

I saw an excellent question get some excellent infosec advice on IRC recently. I’m quoting the discussion here because I expect that I’ll want to reference it when answering others’ questions in the future.

A user going by Dagnabit asked:

May I borrow some advice specifically on how best to store an encryption key? I have a python script that encrypts files using libsodium. My question is how can I securely store the encryption key within the file system? Is it best kept in an sqlite db that can only be accessed by the user owning the python script?

This user has run right into one of the fundamental challenges of security: How can my secrets (in this case, keys) be safe from attackers, while still being usable?

HedgeMage replied with a wall of useful advice. Quotations are her words, links and annotations between them are me adding some context and opinions.

So, it depends on your security model: in most cases I’m prone to keeping my encryption key on a hardware token, so that even if the server is compromised, the secret key is not.

You’re probably familiar with time-based one-time-pad hardware tokens, but in the case of key management, the “hardware token” could be as simple as a USB stick locked in a safe. On the spectrum of compromise between security and convenience, a hardware token is toward the DNSSEC keyholder end.

However, for some projects you are on virtualized infrastructure and can’t plug in a hardware token. It’s unfortunate, because that’s really the safest thing, but a reality for many of us.

This also applies to physical infrastructure in which an application might need to use a key without human supervision.

Without getting into anything crazy where a proxy server does signing, etc, you usually are better off trusting filesystem permissions than stuffing it in the database, for the following reasons:

While delegating the task of signing to a proxy server can make life more annoying to an attacker, you’re still going to have to choose between having a human hold the key and get interrupted whenever it’s needed, or trusting a server with it, at some point. You can compromise between those two extremes by using a setup like subkeys, but it’s still inconvenient if a subkey gets compromised.

  • It’s easier to monitor the filesystem activity comprehensively, and detect intrusions/compromises.

  • Filesystem permissions are pretty dependable at this point, and if the application doing the signing has permission for the key, whether in a DB or the filesystem, it can compromise that key... so the database is giving you new attack surfaces (compromise of the DB access model) without any new protections.

To put it even more bluntly, any unauthorized access to a machine has the potential to leak all of the secrets on it. The actions that you’ll need to take if you suspect the filesystem of a host was compromised are pretty much identical to those you’d take if the DB was.

  • Stuffing the key in the DB is nonstandard enough that you may be writing more of the implementation yourself, instead of depending as much as possible on widely-used, frequently-examined code.

Dagnabit’s reply saved me the work of summarizing the key takeaways:

I will work on securing the distribution and removing any unnecessary packages.

I’ll look at the possibility of using a hardware token to keep it secure/private.

Reducing the attack surface is logical and something I had not considered.

Wed, 01 Mar 2017 00:00:00 -0800

http://edunham.net/2017/01/23/tech_internship_hunting_ideas.html

Tech Internship Hunting Ideas

A question from a computer science student crossed one of my IRC channels recently:

Them: what is the best way to fish for internships over the summer?
    Glassdoor?

Me: It depends on what kind of internship you're looking for. What kind of
    internship are you looking for?

Them: Computer Science, anything really.

This caused me to type out a lot of advice. I’ll restate and elaborate on it here, so that I can provide a more timely and direct summary if the question comes up again.

Philosophy of Job Hunting

My opinion on job hunting, especially for early-career technologists, is that it’s important to get multiple offers whenever possible. Only once one has a viable alternative can one be said to truly choose a role, rather than being forced into it by financial necessity.

In my personal experience, cultivating multiple offers was an important step in disentangling impostor syndrome from my career choices. Multiple data points about one’s skills being valued by others can help balance out an internal monologue about how much one has yet to learn.

If you disagree that cultivating simultaneous opportunities then politely declining all but the best is a viable internship hunting strategy, the rest of this post may not be relevant or interesting to you.

Identifying Your Options

To get an internship offer, you need to make a compelling application to a company which might hire you. I find that a useful first step is to come up with a list of such companies, so you can study their needs and determine what will make your application interest them.

Use your social network. Ask your peers about what internships they’ve had or applied for. Ask your mentors whether they or their friends and colleagues hire interns.

When you ask someone about their experience with a company, remember to ask for their opinion of it. To put that opinion into perspective, it’s useful to also ask about their personal preferences for what they enjoy or hate about a workplace. Knowing that someone who prefers to work with a lot of background noise enjoyed a company’s busy open-plan office can be extremely useful if you need silence to concentrate! Listening with interest to a person’s opinions also strengthens your social bond with them, which never hurts if it turns out they can help you get into a company that you feel might be a good fit.

Use LinkedIn, Hacker News, Glassdoor, and your city’s job boards. The broader a net you cast to start with, the better your chances of eventually finding somewhere that you enjoy. If your job hunt includes certain fields (web dev, DevOps, big data, whatever), investigate whether there’s a meetup for professionals in that field in your region. If you have the opportunity to give a short talk on a personal project at such a meetup, do it and make sure to mention that you’re looking for an internship.

Identify your own priorities

Now that you have a list of places which might conceivably want to hire you, it’s time to do some introspection. For each field that you’ve found a prospective company in, try to answer the question “What makes you excited about working here?”.

You do not have to know what you want to do with your life to know that, right now, you think DevOps or big data or frontend development is cool.

You do not have to personally commit to a single passion at the expense of all others – it’s perfectly fine to be interested in several different languages or frameworks, even if the tech media tries to pit them against each other.

However, for each application, it’s prudent to only emphasize your interests in that particular field. It’s a bit of a faux pas to show up to a helpdesk interview and focus the whole time on your passion for building robots, or vice versa. And acting equally interested in every other field will cause an employer to doubt that you’re necessarily the best fit for a specialized role... So in an interview, try not to stray too far from the value that you’re able to deliver to that company.

This is also a good time to identify any deal-breakers that would cause you to decline a prospective internship. Are you ok with relocating? Is there some tool or technology that would cause you to dread going to work every day?

I personally think that it’s worth applying even to a role that you know you wouldn’t accept an offer from when you’re early in your career. If they decide to interview you, you’ll get practice experiencing a real interview without the pressure of “I’ll lose my chance at my dream job if I mess this up!”. Plus if they extend an offer to you, it can help you calibrate the financial value of your skills and negotiate with employers that you’d actually enjoy.

Craft an excellent resume

I talk about this elsewhere.

There are a couple extra notes if you’re applying for an internship:

1) Emphasize the parts of your experience that relate most closely to what each employer values. If you can, it’s great to use the same words for skills that were used in the job description.

2) The bar for what skills go on your resume is lower when you have less experience. Did you play with Docker for a weekend recently and use it to deploy a toy app? Make sure to include that experience.

Practice, Practice, Practice

If you’re uncomfortable with interviewing, do it until it becomes second nature. If your current boss supports your internship search, do some mock interviews with them. If you’re nervous about things going wrong, have a friend roleplay as a really bad interview with you to help you practice coping strategies. If you’ll be in front of a panel of interviewers, try to get a panel of friends to gang up on you and ask tough questions!

To some readers this may be obvious, but to others it’s worth pointing out that you should also practice wearing the clothes that you’ll wear to an interview. If you wear a tie, learn to tie it well. If you wear shirts or pants that need to be ironed, learn to iron them competently. If you wear shoes that need to be shined, learn to shine them. And if your interview will include lunch, learn to eat with good table manners and avoid spilling food on yourself.

Yes, the day-to-day dress codes of many tech offices are solidly in the “sneakers, jeans, and t-shirt” category for employees of all levels and genders. But many interviewers, especially mid- to late-career folks, grew up in an age when dressing casually at an interview was a sign of incompetence or disrespect. Although some may make an effort to overcome those biases, the subconscious conditioning is often still there, and you can take advantage of it by wearing at least business casual.

Apply Everywhere

If you know someone at a company where you’re applying, try to get their feedback on how you can tailor your resume to be the best fit for the job you’re looking at! They might even be able to introduce you personally to your potential future boss.

I think it’s worth submitting a good resume to every company which you identify as being possibly interested in your skills, even the ones you don’t currently think you want to work for. Interview practice is worth more in potential future salary than the hours of your time it’ll take at this point in your career.

Follow Up

If you don’t hear back from a company for a couple weeks, a polite note is in order. Restate your enthusiasm for their company or field, express your understanding that there are a lot of candidates and everyone is busy, and politely solicit any feedback that they may be able to offer about your application. A delayed reply does not always mean rejection.

If you’re rejected, follow up to thank HR for their time.

If you’re invited to interview, reply promptly and set a time and date. For a virtual or remote interview, only offer times when you’ll have access to a quiet room with a good network connection.

Interview Excellently

I don’t have any advice that you won’t find a hundred times over on the rest of the web. The key points are:

  • Show up on time, looking respectable
  • Let’s hope you didn’t lie on your resume
  • Restate each question in your answer
  • It’s ok not to know an answer – state what you would do if you encountered the problem at work. Would you Google a certain phrase? Ask a colleague? Read the manual?
  • Always ask questions at the end. When in doubt, ask your interviewer what they enjoy about working for the company.

Keep Following Up

After your interview, write to whoever arranged it and thank the interviewers for their time. For bonus points, mention something that you talked about in the interview, or include the answer to a question that you didn’t know off the top of your head at the time.

Getting an Offer

Recruiters don’t usually like to disclose the details of offers in writing right away. They’ll often phone you to talk about it. You do not have to accept or decline during that first call – if you’re trying to stall for a bit more time for another company to get back to you, an excuse like “I’ll have to run that by my family to make sure those details will work” is often safe.

Remember, though, that no offer is really a job until both you and the employer have signed a contract.

Declining Offers

If you’ve applied to enough places with a sufficiently compelling resume, you’ll probably have multiple offers. If you’re lucky, they’ll all arrive around the same time.

If you wish to decline an offer from a company whom you’re certain you don’t want to work for, you can practice your negotiation skills. Read up on salary negotiation, try to talk the company into making you a better offer, and observe what works and what doesn’t. It’s not super polite to invest a bunch of their time in negotiations and then turn them down anyway, which is why I suggest only doing this to a place that you’re not very fond of.

To decline an offer without burning any bridges, be sure to thank them again for their time and regretfully inform them that you’ll be pursuing other opportunities at this time. It never hurts to also do them a favor like recommending a friend who’s job hunting and might be a good fit.

Again, though, don’t decline an offer until you have your actual job’s contract in writing.

Mon, 23 Jan 2017 00:00:00 -0800

http://edunham.net/2016/09/27/rust_s_community_automation.html

Rust’s Community Automation

Here’s the text version, with clickable links, of my Automacon lightning talk today.

Intro

I’m a DevOps engineer at Mozilla Research and a member of the Rust Community subteam, but the conclusions and suggestions in this talk are my own observations and opinions.

The slides are a result of trying to write my own CSS for sliderust... Sorry about the ugliness.

I have 10 minutes, so this is not the time to teach you any Rust. Check out rust-lang.org, the Rust Community Resources, or your city’s Rust meetup to get started with the language.

What we are going to cover is how Rust automates some community tasks, and what you can learn from our automation.

Community

I define “community”, in this context, as “the human interaction work necessary for a project’s success”. This work is done by a wide variety of people in many situations. Every interaction, from helping a new contributor to discussing a proposed code change to criticizing someone’s behavior, affects the overall climate of a project’s community.

Automation

To me, “automation” means “offloading peoples’ work onto a system”. This can be a computer system, but I think it can also mean changes to the social systems that guide peoples’ behavior.

So, community automation is a combination of:

  • Building tools to do things the humans used to have to
  • Tweaking the social systems to minimize the overhead they create

Scoping the Problem

While not all things can be automated and not all factors of the community are under the project leadership’s control, it’s not totally hopeless.

Choices made and automation deployed by project leaders can help control:

  • Which contributors feel welcome or unwelcome in a project
  • What code makes it into the project’s tree
  • Robots!

Moderation

Our robots and social systems to improve workflow and contributor experience all rely on community members’ cooperation. To create a community of people who want to work constructively together and not be jerks to each other, Rust defines behavior expectations in its code of conduct. The important thing to note about the CoC is that half the document is a clear explanation of how the policies in it will be enforced. This would be impossible without the dedication of the amazing mod team.

The process of moderation cannot and should not be left to a computer, but we can use technology to make our mods’ work as easy as possible. We leave the human tasks to humans, but let our technologies do the rest.

In this case, while the mods need to step in when a human has a complaint about something, we can automate the process of telling people that the rules exist. You can’t join the IRC channel, post on the Discourse forums, or even read the Rust subreddit without being made aware that you’re expected to follow the CoC’s guidelines in official Rust spaces.

Depending on the forums where your project communicates, try to automate the process of excluding obvious spammers and trolls. Not everybody has the skills or interest to be an excellent moderator, so when you find them, avoid wasting their time on things that a computer could do for them!

It didn’t fit in the talk, but this Slashdot post is one of my favorite examples of somebody being filtered out of participating in the Rust community due to their personal convictions about how project leadership should work. While we do miss out on that person’s potential technical contributions, we also save all of the time that might be spent hashing out our disagreements with them if we had a less clear set of community guidelines.

Robots

This lightning talk highlighted 4 categories of robots:

  • Maintaining code quality
  • Engaging in social pleasantries
  • Guiding new contributors
  • Widening the contributor pipeline

Longer versions of this talk also touch on automatically testing compiler releases, but that’s more than 10 minutes of content on its own.

The Not Rocket Science Rule of Software Engineering

To my knowledge, this blog post by Rust’s inventor Graydon Hoare is the first time that this basic principle has been put so succinctly:

Automatically maintain a repository of code that always passes all the tests.

This policy guides the Rust compiler’s development workflow, and has trickled down into libraries and projects using the language.

Bors

The name Bors has been handed down from Graydon’s original autolander bot to an instance of Homu, and is often verbed to refer to the simple actions he does:

  1. Notice when a human says “r+” on a PR
  2. Create a branch that looks like master will after the change is applied
  3. Test that branch
  4. Fast-forward the master branch to the tested state, if it passed.
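Expressed in plain git, the heart of that flow might look something like the following minimal sketch. Here pr-branch and ./run-all-tests are hypothetical names, and Homu’s real implementation adds queueing, retries, and permission checks on top:

# sketch of the not-rocket-science merge flow for one approved PR
git checkout -B staging origin/master    # start from what master looks like now
git merge --no-ff origin/pr-branch       # build the tree that master would become
./run-all-tests && git push origin staging:master   # advance master only if tests pass

Because the tests ran against the exact merged state that master is fast-forwarded to, master never contains an untested commit.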

Keep your tree green

Saying “we can’t break the tests any more” is a pretty significant cultural change, so be prepared for some resistance. With that disclaimer, the path to following the Not Rocket Science Rule is pretty simple:

  1. Write tests that fail when your code is bad and pass when it’s good
  2. Run the tests on every change
  3. Only merge code if it passes all the tests
  4. Fix the tests whenever they’re wrong.

This strategy encourages people to maintain the tests, because a broken test becomes everyone’s problem and interrupts their workflow until it’s fixed.

I believe that tests are necessary for all code that people work on. If the code was fully and perfectly correct, it wouldn’t need changes – we only write code when something is wrong, whether that’s “It crashes” or “It lacks such-and-such a feature”. And regardless of the changes you’re making, tests are essential for catching any regressions you might accidentally introduce.

Automating social pleasantries

Have you ever submitted an issue or change request to a project, then not heard back for several months? It feels bad to be ignored, and the project loses out on potential contributors.

Rust automates basic social pleasantries with a robot called Highfive. Her tasks are easy to explain, though the implementation details can be tricky:

  1. Notice when a change is submitted by a new contributor, then welcome them
  2. Assign reviewers, based on what code changed, to all PRs
  3. Nag the reviewer if they seem to have forgotten about their assignment

If you don’t want a dedicated greeter-bot, you can get many of these features from your code management system:

  • Use issue and pull request templates to guide potential contributors to the docs that can help them improve their report or request.
  • Configure notifications so you find out when someone is trying to interact with your project. This could mean muting all the noise notifications so the signal ones are available, or intermittently polling the repositories that you maintain (a daily cron job or weekly calendar reminder works just fine).

Guide new contributors

In open source projects, “I’m new; what can I work on?” is a common inquiry. In internal projects, you’ll often meet colleagues from elsewhere in your organization who ask you to teach them something about the project or the skills you use when working on it.

The Rust-implemented browser engine Servo is actually a slightly better example of this than the compiler itself, since the smaller and younger codebase has more introductory-level issues remaining. The site starters.servo.org automatically scrapes the organization’s issue trackers for easy and unclaimed issues.

Issue triage is often unrewarding, but using the tags for a project like this creates a greater incentive to keep them up to date.

When filing introductory issues, try to include links to the relevant documentation, instructions for reproducing the bug, and a suggestion of what file you would look in first if you tackled the problem yourself.

Automating mentorship

Mentorship is a highly personalized process in which one human transfers their skills to another. However, large projects often have more contributors seeking the same basic skills than mentors with time to teach them.

The parts of mentorship which don’t explicitly require a human mentor can be offloaded onto technology.

The first way to automate mentorship tasks is to maintain correct and up-to-date documentation. Correct docs train humans to consult them before interrupting an expert, whereas docs that are frequently outdated or wrong condition their users to skip them entirely.

Use tools like octohatrack and your project status updates to identify and recognize contributors who help with docs and issue triage. Docs contributions may actually save more developer and community time than new code features, so respect them accordingly.

Finally, maintain a list of introductory or mentored issues – even if that’s just a Google Doc or Etherpad.

Bear in mind that an introductory issue doesn’t necessarily mean “suitable for someone who has never coded before”. Someone with great skills in a scripting language might be looking for a place to help with an embedded codebase, or a UX designer might want to get involved with a web framework that they’ve used. Introductory issues should be clear about what knowledge a contributor should acquire in order to try them, but they don’t have to all be “easy”.

Automating the pipeline

Drive-by fixes are to being a core contributor as interviews are to full time jobs. Just as a company attempts to interview as many qualified candidates as it can, you can recruit more contributors by making your introductory issues widely available.

Before publicizing your project, make sure you have a CONTRIBUTING.txt or good README outlining where a new contributor should start, or you’ll be barraged with the same few questions over and over.

There are a variety of sites, which I call issue aggregators, where people who already know a bit about open source development can go to find a new project to work on. I keep a list on this page <http://edunham.net/pages/issue_aggregators.html>, pull requests welcome <https://github.com/edunham/site/blob/master/pages/issue_aggregators.rst> if I’m missing anything. Submitting your introductory issues to these sites broadens your pipeline, and may free up humans’ recruiting time to focus on people who need more help getting up to speed.

If you’re working on internal rather than public projects, issue aggregators are less relevant. However, if you have the resources, it’s worthwhile to consider the recruiting device of open sourcing an internal tool that would be useful to others. If an engineer uses and improves that tool, you get a tool improvement and they get some mentorship. In the long term, you also get a unique opportunity to improve that engineer’s opinion of your organization while networking with your engineers, which can make them more likely to want to work for you later.

Follow Up

For questions, you’re welcome to chat with me on Twitter (@QEDunham), email (automacon <at> edunham <dot> net), or IRC (edunham on irc.freenode.net and irc.mozilla.org).

Slides from the talk are here.

Tue, 27 Sep 2016 00:00:00 -0700

http://edunham.net/2016/09/23/setting_a_freenode_channel_s_taxonomy_info.html

Setting a Freenode channel’s taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

/msg chanserv help
- ***** ChanServ Help *****
- ...
- Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
-                 DROP, GETKEY, HELP, INFO, QUIET, STATUS,
-                 SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
-                 TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
-                 WHY
- ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

/msg chanserv help taxonomy
- ***** ChanServ Help *****
- Help for TAXONOMY:
-
- The taxonomy command lists metadata information associated
- with registered channels.
-
- Examples:
-     /msg ChanServ TAXONOMY #atheme
- ***** End of Help *****

Follow its example:

/msg chanserv taxonomy #atheme
- Taxonomy for #atheme:
- url                       : http://atheme.github.io/
- ОХЯЕБУ                    : лололол
- End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

/msg chanserv help set

- ***** ChanServ Help *****
- Help for SET:
-
- SET allows you to set various control flags
- for channels that change the way certain
- operations are performed on them.
-
- The following subcommands are available:
- EMAIL           Sets the channel e-mail address.
- ...
- PROPERTY        Manipulates channel metadata.
- ...
- URL             Sets the channel URL.
- ...
- For more specific help use /msg ChanServ HELP SET command.
- ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: Someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place mIRC color codes <http://www.mirc.com/colors.html> in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.
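Putting those rules together, a quick session in a scratch channel you control (##test is a hypothetical channel, reused from the example above) looks like:

/msg chanserv set ##test property email test@example.com
/msg chanserv set ##test property EMAIL foo    (overwrites the first, keeps the new capitalization)
/msg chanserv taxonomy ##test                  (now shows only EMAIL : foo)
/msg chanserv set ##test property EMAIL        (no value, so the property is deleted)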

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

Fri, 23 Sep 2016 00:00:00 -0700

http://edunham.net/2016/08/02/adventures_in_mercurial.html

Adventures in Mercurial

I adore Git, but have needed to ramp up my Mercurial (Hg) skills recently to dig prior work related to my current tasks out of a repo’s history. Here are some things I’m learning:

Command Equivalences

As this tutorial so helpfully explains, the two VCSes aren’t all that dissimilar under their hoods. I condensed the command comparison table into a single page and printed it out for quick reference; a PDF is here.
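As a taste of that table, here are the equivalences I reach for most often (reconstructed from memory, so double-check against the tutorial before trusting the trickier ones):

git clone <url>     =  hg clone <url>
git pull            =  hg pull -u
git checkout <rev>  =  hg update <rev>
git show <rev>      =  hg export <rev>
git log             =  hg log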

Clone

The thing I want to clone lives at http://hg.mozilla.org/hgcustom/version-control-tools/file/tip/autoland.

Trying to clone the full URL yields a 404, but snipping the URL back to the top-level directory gets me the repo:

$ hg clone http://hg.mozilla.org/hgcustom/version-control-tools/
destination directory: version-control-tools
requesting all changes
adding changesets
adding manifests
adding file changes
added 4574 changesets with 10874 changes to 1971 files
updating to bookmark @
1428 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ ls
version-control-tools

Examine Log

hg log | less shows me that each commit’s summary in this repo includes the part of the codebase it touches, and a bug number.

hg log | grep autoland: | less gives me the summaries of every commit that touched autoland, but I cannot show a commit from summary alone.

The Hg book helped me construct a filter that will show a unique revision ID on the same line as each description.

hg log --template '{rev} {desc}\n' | grep autoland: is much more useful. It gives me the local ID of each changeset whose description included “autoland:”.

From here, I can use a bit more grep to narrow down the list of matching messages, then I’m ready to examine commits.

Examining Commits

That {rev} in my filter was the “repository-local changeset revision number”. For these examples I’ll examine revision 2589.

hg status --change 2589 lists the files that were touched by that revision, and hg export 2589 yields a full diff of the changes introduced.

This gets me enough information to make an appropriate set of changes, run the tests, and create my own commits!

Tue, 02 Aug 2016 00:00:00 -0700

http://edunham.net/2016/07/01/thinkpad_13_trackpoint_i3.html

Thinkpad 13 Trackpoint slowdown in i3 window manager

As has been mentioned on Reddit, the Thinkpad 13 trackpoint settings aren’t in the same place as those of older thinkpads. Despite some troubleshooting, I haven’t yet found what files to edit to adjust the trackpoint’s speed and sensitivity in Ubuntu 16.04.

The trackpoint has been slightly sluggish/unresponsive when I use the i3 window manager, and has additional intermittent slowdowns when using Chromium and Firefox in i3.

Although I don’t yet know the right way to fix trackpoint sensitivity on this machine, I accidentally discovered a highly effective workaround today:

  • Log into Unity (the default desktop that Ubuntu ships with) and configure the mouse and input settings as desired
  • Log out, and get back into i3wm
  • Launch unity-settings-daemon
  • And suddenly, the mouse works correctly the way it did in Unity!

I fully realize that this is a nasty hack around identifying and solving the actual problem, but it succeeds at making the mouse responsive while minimizing time spent troubleshooting.
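If the workaround works as well for you as it did for me, you can at least make it automatic. A minimal sketch, assuming unity-settings-daemon is on your PATH (it is on stock Ubuntu 16.04) and your config lives at ~/.config/i3/config:

# in ~/.config/i3/config
exec --no-startup-id unity-settings-daemon

With that line in place, i3 launches the daemon once per session, so the mouse settings apply without any manual step.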

Fri, 01 Jul 2016 00:00:00 -0700

http://edunham.net/2016/06/27/hieroglyph_and_tinkerer_dependencies.html

Hieroglyph and Tinkerer Dependencies

In setting up virtualenvs for my slides and blog repos on my new laptop, I’ve been reminded that a variety of Sphinx-based tools require system dependencies as well as the ones in their virtualenvs.

Hieroglyph dependency issues

The error resulting from pip install -r requirements.txt ended with:

Command ".../virtualenv/bin/python2 -u -c
"import setuptools,
tokenize;__file__='/tmp/pip-build-lzbk_r/Pillow/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-BNDc_6-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/slides/rustcommunity/v/include/site/python2.7/Pillow"
failed with error code 1 in /tmp/pip-build-lzbk_r/Pillow/

Its fix, from stackoverflow, was:

$ sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
$ pip install -r requirements.txt

Tinkerer dependencies, too!

pip install -r requirements.txt over in my blog repo yielded:

Command ".../virtualenv/bin/python2 -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-NVLSBY/lxml/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-qD5QIe-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/site/v/include/site/python2.7/lxml" failed with error code
1 in /tmp/pip-build-NVLSBY/lxml/

The fix is again to install the missing system deps, on Ubuntu:

$ sudo apt-get install libxml2-dev libxslt-dev
$ pip install -r requirements.txt

That’s it! I’m writing this down for SEO on the specific errors at hand, since the first several useful hits are currently stackoverflow.

If you’re a pip developer reading this, please briefly contemplate whether it’d be worthwhile to have some built-in mechanism to translate common dependency errors to the appropriate system package names needed based on the OS on which the command is run.

Mon, 27 Jun 2016 00:00:00 -0700

http://edunham.net/2016/06/23/cfps_made_easy.html

CFPs Made Easier

Check out this post by Lucy Bain about how to come up with an idea for what to talk about at a conference. I blogged last year about how I turn abstracts into talks, as well. Now that the SeaGL CFP is open, it’s time to look in a bit more detail about the process of going from a talk idea to a compelling abstract.

In this post, I’ll walk you through some exercises to clarify your understanding of your talk idea and find its audience, then help you use that information to outline the 7 essential parts of a complete abstract.

Getting ready to write your abstract

Your abstract is a promise about what your talk will deliver. Have you ever gotten your hopes up for a talk based on its abstract, then attended only to hear something totally unrelated? You can save your audience from that disappointment by making sure that you present what your abstract says you will.

I find the abstract to be the hardest part of the talk to write, because it sets the stage for every other part of it. If your abstract is thorough and clear about what your talk will deliver, you can refer back to it throughout the writing process to make sure you’re including the information that your audience is there for!

For both you and your audience to get the most out of your talk, the following questions can help you refine your talk idea before you even start to write its abstract.

Why do you love this idea?

Start working on your abstract by taking some quick notes on why you’re excited about speaking on this topic. There are no wrong answers! Your reasons might include:

  • Document a topic you care about in a format that works well for those who learn by listening and watching
  • Impress a potential employer with your knowledge and skills
  • Meet others in the community who’ve solved similar problems before, to advise you
  • Recruit contributors to a project
  • Force yourself to finish a project or learn more detail about a tool
  • Save novices from a pitfall that you encountered
  • Travel to a conference location that you’ve always wanted to visit
  • Build your resume
  • Or something else entirely!

Starting out by identifying what you personally hope to gain from giving the talk will help ensure that you make the right promises in your abstract, and get the right people into the room.

What’s your idea’s scope?

Make 2 quick little lists:

  • Topics I really want this presentation to cover
  • Topics I do not want this presentation to cover

Once you think that you have your abstract all sorted out, come back to these lists and make sure that you included enough topics from the first list, and excluded those from the second.

Who’s the conference’s target audience?

Keynotes and single-track conferences are special, but generally your talk does not have to appeal to every single person at the conference.

Write down all the major facts you know about the people who attend the conference to which you’re applying. How young or old might they be? How technically expert or inexperienced? What are their interests? Why are they there?

For example, here are some statements that I can make about the audience at SeaGL:

  • Expertise varies from university students and random community members to long-time contributors who’ve run multiple FOSS projects.
  • Age varies from a few school-aged kids (usually brought by speakers and attendees) to retirees.
  • The audience will contain some long-term FOSS contributors who don’t program, and some relatively expert programmers who might have minimal involvement in their FOSS communities
  • Most attendees will be from the vicinity of Seattle. It will be some attendees’ first tech conference. A handful of speakers are from other parts of the US and Canada; international attendees are a tiny minority.
  • The audience comes from a mix of socioeconomic backgrounds, and many attendees have day jobs in fields other than tech.
  • Attendees typically come to SeaGL because they’re interested in FOSS community and software.

Where’s your niche?

Now that you’ve taken some guesses about who will be reading your abstract, think about which subset of the conference’s attendees would get the most benefit out of the topic that you’re planning to talk about.

Write down which parts of the audience will get the most from your talk – novices to open source? Community leaders who’ve found themselves in charge of an IRC channel but aren’t sure how to administer it? Intermediate Bash users looking to learn some new tricks?

If your talk will appeal to multiple segments of the community (developers interested in moving into DevOps, and managers wondering what their operations people do all day?), write one question that your talk will answer for each segment.

You’ll use this list to customize your abstract and help get the right people into the room for your talk.

Still need an idea?

Conferences with a diverse audience often offer an introductory track to help enthusiastic newcomers get up to speed. If you have intermediate skills in a technology like Bash, Git, LaTeX, or IRC, offer an introductory talk to help newbies get started with it! Can you teach a topic that you learned recently in a way that’s useful to newbies?

If you’re an expert in a field that’s foreign to most attendees (psychology? beekeeping? Cray Supercomputer assembly language?), consider an intersection talk: “What you can learn from X about Y”. Can you combine your hobby, background, or day job with a theme from the conference to come up with something unique?

The Anatomy of an Abstract

There are many ways to structure a good abstract. Here are the 7 elements that I try to always include:

  1. Set the scene with a strong introductory sentence, which reminds your target audience of your topic’s relevance to them. Some of mine have included:

    • “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”
    • “Git is the most popular source code management and version control system in the open source community.”
    • “When you’re new to programming, or self-taught with an emphasis on those topics that are directly relevant to your current project, it’s easy to skip learning about analyzing the complexity of algorithms.”
  2. Ask some questions, which the talk promises to answer. These questions should be asked from the perspective of your target audience, which you identified earlier. This is the least essential piece of an abstract, and can be skipped if you make sure your exposition clearly shows that you understand your target audience in some other way. Here are a couple of questions I’ve used in abstracts that were accepted to conferences:

    • “Do you know how to control what information people can discover about you on an IRC network?”
    • “Is the project of your dreams ignoring your pull requests?”
  3. Drop some hints about the format that the talk will take. This shows the selection committee that you’ve planned ahead, and helps audience members select sessions that’re a good fit for their learning styles. Useful words here include:

    • “Overview of”
    • “Case study”
    • “Demonstration”
    • “Deep dive into”
    • “Outline X principles for”
    • “Live coding”
  4. Identify what background knowledge the audience will need to get the talk’s benefit, if applicable. Being specific about this helps welcome audience members who’re undecided about whether the talk is applicable to them. Useful phrases include:

    • “This talk will assume no background knowledge of...”
    • “If you’ve used ____ to ____, ...”
    • “If you’ve completed the ____ tutorial...”
  5. State a specific benefit that audience members will get from having attended the talk. Benefits can include:

    • “Halve your Django website’s page load times”
    • “Get help on IRC”
    • “Learn from ____‘s mistakes”
    • “Ask the right questions about ____”
  6. Reinforce and quantify your credibility. If you’re presenting a case study into how your company deployed a specific tool, be sure to mention your role on the team! For instance, you might say:

    • “Presented by [the original author | a developer | a maintainer | a long-term user] of [the project], this talk will...”
  7. End with a recap of the talk’s basic promise, and welcome audience members to attend.

These 7 pieces of information don’t have to each be in their own sentence – for instance, establishing your credibility and indicating the talk’s format often fit together nicely in a single sentence.

Once you’ve got all of the essential pieces of an abstract, munge them around until they read as concise, fluent English. Get some feedback on helpmeabstract.com if you’d like assistance!

Give it a title

Naming things is hard. Here are some assorted tips:

  • Keep it under about 50 characters, or it might not fit on the program
  • Be polite. Rude puns or metaphors might be eye-catching, but probably violate your conference or community’s code of conduct, and will definitely alienate part of your prospective audience.
  • For general talks, it’s hard to go wrong with “Intro to ___” or “___ for ___ users”.
  • The form “[topic]: A [history|overview|melodrama|case study|love story]” is generally reliable. Well, I’m kidding about “melodrama” and “love story”... Mostly.
  • Clickbait is underhanded, but it works. “___ things I wish I’d known about ___”, anyone?

Good luck, and happy conferencing!

]]>
Thu, 23 Jun 2016 00:00:00 -0700
http://edunham.net/2016/05/27/thinkpad_13.html http://edunham.net/2016/05/27/thinkpad_13.html <![CDATA[2ish weeks with the Thinkpad 13]]> 2ish weeks with the Thinkpad 13

I recently got a Thinkpad 13 to try replacing my X230 as a personal laptop. Here’s the relevant specs from my order confirmation:

Battery 3cell 42Wh
System Unit 13&S2 IG i5-6200U NvPro
Camera 720p HD Camera
AC Adapter and Power Cord 45W 2pin US
Processor Intel Core i5-6200U MB
Hard drive 128GB SSD SATA3
Keyboard Language KYB SR ENG
Publication Language PUB ENG
Total memory 4GB DDR4 2133 SoDIMM
OS DPK W10 Home
Pointing device 3+2BCP NoFPR SR
Preload Language W10 H64-ENG
Preload OS Windows 10 Home 64
Preload Type Standard Image
TPM Setting Software TPM Enabled
Display Panel 13&S2 FHD IPS AG AL SR
WiFi wireless LAN adapters Intel 8260AC+BT 2x2 vPro

I picked mediocre CPU and RAM because the RAM’s easy to upgrade, and I’m curious about whether I actually need top-of-the-line CPUs to have an acceptable experience on a personal laptop.

Why the 13?

I had a few hard requirements for my next personal laptop:

  • Trackpoint with buttons
  • Decent key travel (the X1 carbon has 1.86mm and typing on it for too long made my hands hurt)
  • USBC port
  • Under $1,000

Plus a few nice-to-haves:

  • Small and light are nice, including charger
  • Screen not much worse than 1920x1080
  • Good battery life
  • Metal case and design for durability make me happy
  • My house is already full of Thinkpad chargers, so a laptop that uses them helps reduce additional clutter

I’ll be the first to admit that this is an atypical set of priorities. My laptop is home to Git, Vim, and a variety of tools for interacting with the internet, so the superficial I/O differences matter more to me than the machine’s internal specs.

Things I like about the 13

  • 2.1mm key travel is everything I hoped for. At least, I’ve used it all day and my hands don’t hurt.
  • Battery life is pretty decent, and battery will be easy to replace when it starts to fail.
  • Light-enough weight. Lighter charger than other Thinkpads I’ve had.
  • Smallest Thinkpad charger that I’ve ever seen.
  • Case screws are all captive.
  • Mystery hole in the bottom case turns out to be a highly convenient hard shutdown button.
  • Hinges feel pretty solid and hold the screen up nicely.
  • No keyboard backlight. I dislike them.
  • 4GB of RAM is a single stick, easy to add more (and I’ll need to for a smoother web browsing experience; neither Firefox nor Chromium is particularly happy with only 4GB)
  • A vanilla Ubuntu 16.04 iso Just Worked for installing Linux. It must have shipped with whatever magic signatures were required to play nice with the new security measures, because the install process was delightfully non-thought-provoking.
  • ~7mm plastic bezel between buttons and trackpad reduces likelihood of accidentally moving cursor when clicking.
../../../_images/13-button-bezel.jpg
  • Screen’s the same as my X240, xrandr calls it 1920x1080 294mm x 165mm. This fits 211 columns of a default font, or 127 columns of a font that’s comfortably legible when the laptop is on the other side of my desk.

Nitpicks about the 13

  • Color.

When I purchased mine at the end of April, only the silver chassis had a metal lid and shipped with a nice screen by default (the higher-res screen is available in the all-plastic black model for an additional charge). So now I own a non-black laptop for the first time since my Dell Latitude D410 in high school. The screen bezel and keys are black, though, and if I really cared I could probably paint the rest of it.

  • Power button.
../../../_images/13-power-button.jpg

It feels horribly... squishy? There’s no satisfying click to tell me when I’ve pushed it far enough. Holding it for 10 seconds only sometimes shuts the laptop off (though there’s a reset switch on the mobo accessible by a paperclip-hole in the bottom panel which forces shutdown instantly when pushed). There’s a circle on the power button that looks like it might be an LED, but it never lights up.

  • Cutesy font.
../../../_images/13-lenovo-font.jpg

This is a tiny nitpick, but they’ve changed the Lenovo logo on the lid, pre-BIOS boot screen, and screen bezel from the already-mediocre font to a super condescending, childish, roundy one. Fortunately the lid one is easily hidden under some stickers.

  • Bottom panel held on by clips as well as screws.

More on this one in the disassembly section below, but I’m afraid they’ll break with how often I take my laptops apart.

  • Mouse buttons feel cheap and plastic-ey.

They feel like thin plastic shells instead of solid buttons like on older Thinkpads. I’m not sure precisely why they feel that way, but it’s a reminder that you’re using a lower-end machine.

  • Longest side is about 1cm greater than the short side of a TSA xray tub.

My X240 fits perfectly along the short end of the tub, leaving room for my shoes beside it. I have to use two tubs or separate my pair of shoes when putting the 13 through the scanner. (see, I wasn’t kidding when I said “nitpicks”)

  • The Trackpoint top is not interchangeable with those of older Thinkpads.
../../../_images/13-trackpoints.jpg

The round part is the same size, but the square hole in the bottom is about 2mm to a side rather than the 4mm of the one on an x220 keyboard. Plus the cylinder bit is about 2mm long rather than the x220’s 3.5mm, so even with an adapter for the square hole, older trackpoints would risk leaving marks on the screen.

  • The fan is a little loud.

I anticipate that this will get a lot less annoying when I upgrade to 16 or 32GB of RAM and maybe tune it in software using thinkfan.

Thinkpad 13 partial disassembly photos

To get the bottom case off, pull all the visible screws and also remove the 3 tiny rubber feet from under the palm rest. I stuck the tiny rubber feet in a plastic bag and filed them away, because repeated removal would eventually destroy the glue and get them lost.

../../../_images/13-slide-and-pry.jpg

The bottom case comes off with a combination of sliding and prying. Getting it back on again requires sliding the palmrest edge just right, then snapping the sides and back on before the palm rest slips out of place. It’s tricky.

../../../_images/13-bendy-battery.jpg

The battery is easily removed by pulling out a single (non-captive) screw. It seems to be a thin plastic wrapper around 3 cell phone batteries. The battery has no glue holding it in, just screws.

../../../_images/13-mobo.jpg

Here’s its guts, with battery removed.

../../../_images/13-mobo-annotated.jpg

Note the convenient hard power cycle button (accessible via a tiny hole in the bottom case when assembled), pair of RAM slots and SSD form factor, and airspace compartment that almost looks intended for hiding half a dozen very small items. The coin cell battery (in sky blue shrink wrap) flaps around awkwardly when the machine is disassembled, but at least it’s not glued down.

]]>
Fri, 27 May 2016 00:00:00 -0700
http://edunham.net/2016/05/10/reflections_on_my_first_live_webcast.html http://edunham.net/2016/05/10/reflections_on_my_first_live_webcast.html <![CDATA[Reflections on my first live webcast]]> Reflections on my first live webcast

This morning, I participated in the O’Reilly Emerging Languages Webcast with my “Rust from a Scripting Background” talk. Here’s how it went.

Preparation

I was contacted about a month before my webcast and asked to present my OSCON talk as part of the event. I explained why my “How to learn Rust” talk didn’t sound like a good fit for the emerging languages webcast, and suggested the “Starting Rust from a Scripting Background” talk that I gave at my local Rust meetup recently as an alternative.

After we agreed on what talk would be suitable, O’Reilly’s Online Events Producers emailed me a contract to e-sign. The contract gives O’Reilly the opportunity to reuse and redistribute my talk from the webcast, and promises me a percentage of the proceeds if my recording is sold, licensed, or otherwise used to make them money.

During the week before the webcast, I did a test call in which an O’Reilly representative walked me through how to use the webcast software and verified that my audio was good on the phone I planned to use for the webcast.

A final copy of the slides for the webcast, in my choice of PDF or PowerPoint, was due at 5pm the day before the event.

The Software (worked for me on Ubuntu)

O’Reilly Media provided an application called “Presentation Manager XD” from on24.com that presenters log into during the event.

According to my email from O’Reilly, the requirements for the event are:

  • Slides - PowerPoint or PDF only please with no embedded video or audio. Screen ratio of 4:3
  • Robust Internet connection
  • Clear, reliable phone line.
  • Windows 7 or 8 running IE 8+, Firefox 22+ or Chrome 27+
  • Mac OS 10.6+ running Firefox 22+ or Chrome 27+
  • Latest version of Flash Player
  • If you plan to share your screen, you will need to install a small application - you will be prompted to install it the first time you log into the platform.

Some of these requirements are lies. I used Firefox 46.0 on Ubuntu 14.04. I did rewrite my slides in LibreOffice because it emits better PDFs than the HTML tools I normally use, but I was also looking for an excuse to rewrite them to clean up their organization.

I clicked around in the “Presentation Manager XD” UI and downloaded a file called “ON24-ScreenShare-plugin”, then chmod +x‘d it and executed it with ./ON24-ScreenShare-plugin. This caused Wine to run, install some Gecko stuff, and start the screenshare plugin sufficiently well to share my screen to the webcast tool.

I had to re-run the plugin in Wine after logging out of and back into my window manager, of course. Additionally, the screenshare window’s resizing is finicky. It’s fine to grab and drag the highlighted parts of the window’s border with the mouse, but the meta+click command with which one usually moves windows in i3 causes the sides of the screenshare window to move independently of each other.

Here’s what the webcast UI looked like during streaming, just at the end of the Kotlin talk while I was getting ready to start mine:

../../../_images/oreillywebcast.png

The Talk

As previously mentioned, I rewrote my talk in LibreOffice Impress – ostensibly to get a prettier PDF, but also because it’s been a month or two since I last prepared for it and re-writing helps me refresh my memory and verify that all my facts are up to date.

GUI-based slide editing is downright painful after using rst-based tools for so long, especially because LibreOffice has no good way to embed code samples out of the box. I ended up going with screenshots from the Rust playground, which were probably better than embedded code samples, but relearning how to edit slides like a regular person wasn’t a pleasant experience.

I took more notes than I normally do, since nobody on the webcast could see whether I was reciting or reading. I’m glad I did, as having the notes on a physical page in front of me was reassuring and helped me avoid missing any important points.

I rehearsed the timing of each section of my slides individually, since it naturally broke down into 7 or so discrete parts and I had previously calculated how much of my hour to allocate to each section. Most sections ran consistently over time when preparing, yet under time during the actual talk. The lesson here is to rehearse until I’m happy with a section and can make it the same duration twice in a row.

The experience of presenting a talk in a subjectively empty room made me realize just what high-bandwidth communication regular conferences are.

Pros:

  • No need to worry about eye contact
  • All the notes you want
  • Can’t see anyone sleeping
  • Chat channel allowed instant distribution of links
  • Chat channel allowed expert attendees to help answer questions
  • Presentation software allowed gracefully skipping slides, rather than the usual paging back and forth with the arrow keys

Cons:

  • Can’t take quick surveys by show of hands
  • Negligible feedback on how many people are there and their body language of engagement/disengagement
  • Silences are super awkward
  • Can’t see the shy attendees, in order to encourage participation

The audience asked fewer questions during the talk than I expected. Fortunately, they came up with plenty of questions at the end – extra fortunate because I overcompensated on time and finished my slides about 15 minutes before the end of my speaking slot!

Q&A was surprisingly relaxing, as it was totally up to me which questions to answer. I’ll admit that I did select in favor of those that I could answer concisely and eloquently, deferring the questions that made less sense so that I could think them over while answering easier ones.

tl;dr

In my experience, presenting a webcast was lower-stress and comparably impactful to a conference talk.

For would-be presenters concerned about their or the audience’s appearance, the visual anonymity of a webcast could be a great place to start a speaking career.

Speakers accustomed to presenting in rooms full of humans should expect subtle feedback, like nods, smiles, and laughter, to be totally invisible in a webcast environment.

And if O’Reilly asks you to do a webcast with them, I’d say go for it – they made the whole experience as seamless and easy as possible.

]]>
Tue, 10 May 2016 00:00:00 -0700
http://edunham.net/2016/05/05/paths_into_devops.html http://edunham.net/2016/05/05/paths_into_devops.html <![CDATA[Paths Into DevOps]]> Paths Into DevOps
../../../_images/twitter.png

Today, Carol asked me about how a current sysadmin can pivot into a junior “devops” role. 10 tweets into the reply, it became obvious that my thoughts on that type of transition won’t fit well into 140-character blocks.

My goal with this post is to catalog what I’ve done to reach my current level of success in the field, and outline the steps that a reader could take to mimic me.

Facets of DevOps

In my opinion, 3 distinct areas of focus have made me the kind of person from whom others solicit DevOps advice:

  • Cultural background
  • Technical skills
  • Self-promotion

I place “cultural background” first because many people with all the skills to succeed at “DevOps” roles choose or get stuck with other job titles, and everyone touches a slightly different point on the metaphorical elephant of what “DevOps” even means.

Cultural Background

What does “DevOps” mean to you?

  • Sysadmins who aren’t afraid of automation?
  • 2 sets of job requirements for the price of 1 engineer?
  • Developers who understand what the servers are actually doing?
  • Reducing the traditional divide between “development” and “operations” silos?
  • A buzzword that increases your number of weekly recruiter emails?
  • People who use configuration management, aka “infrastructure as code”?

From my experiences starting Oregon State University’s DevOps Bootcamp training program, speaking on DevOps related topics at a variety of industry conferences, and generally being a professional in the field, I’ve seen the term defined all of those ways and more.

Someone switching from “sysadmin” to “devops” should clearly define how they want their day-to-day job duties to change, and how their skills will need to change as a result.

Technical Skills

The best way to figure out the technical skills required for your dream job will always be to talk to people in the field, and read a lot of job postings to see what you’ll need on your resume and LinkedIn to catch a recruiter’s eye.

In my opinion, the bare minimum of technical skills that an established sysadmin would need in order to apply for DevOps roles are:

  • Use a configuration management tool – Chef, Puppet, Salt, or Ansible – to set up a web server in a VM.
  • Write a script in Python to do something more than Hello World – an IRC bot or tool to gather data from an API is fine (see the sketch just after this list).
  • Know enough about Git and GitHub to submit a pull request to fix something about an open source tool that other sysadmins use, even if it’s just a typo in the docs.
  • Understand enough about continuous integration testing to use it on a personal project, such as TravisCI on a GitHub repo, and appreciate the value of unit and integration tests for software.
  • Be able to tell a story about successfully troubleshooting a problem on a Linux or BSD server, and what you did to prevent it from happening again.
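
For a sense of scale, here’s a minimal sketch of the kind of API-data-gathering script I mean – the endpoint and its fields are made up for illustration:

#!/usr/bin/env python
"""Sketch: pull JSON from a (hypothetical) API and summarize it."""
import json
import urllib2  # Python 2 stdlib; on Python 3, use urllib.request instead

API_URL = "https://api.example.com/v1/status"  # hypothetical endpoint

def fetch_status(url=API_URL):
    """Fetch the API's JSON response and return it as a dict."""
    return json.load(urllib2.urlopen(url))

if __name__ == "__main__":
    # Print a sorted summary of whatever fields the API returned
    for key, value in sorted(fetch_status().items()):
        print("%s: %s" % (key, value))

Anything at roughly that level of complexity demonstrates that you can automate a small task end to end.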

Keep in mind that your job in an interview is to represent what you know and how well you can learn new things. If you’re missing one of the above skills, go ask for help on how to build it.

Once you have all the experiences that I listed, you are no longer allowed to skip applying for an interesting role because you don’t feel you know enough. It’s the employer’s job to decide whether they want to grow you into the candidate of their dreams, and your job to give them a chance. Remember that a job posting describes the person leaving a role, and if you started with every skill listed, you’d probably be bored and not challenged to your full potential.

Self Promotion

“DevOps” is a label that engineers apply to themselves, then justify with various experiences and qualifications.

The path to becoming a community leader begins at engaging with the community. Look up DevOps-related conferences – find video recordings of talks from recent DevOps Days events, and see what names are on the schedules of upcoming DevOps conferences.

Look at which technologies the recent conferences have discussed, then look up talks about them from other events. Get into the IRC or Slack channels of the tools you want to become more expert at, listen until you know the answers to common questions, then start helping beginners newer than yourself.

Reach out to speakers whose talks you’ve enjoyed, and don’t be afraid to ask them for advice. Remember that they’re often extremely busy, so a short message with a compliment on their talk and a specific request for a suggestion is more likely to get a reply than overly vague requests. This type of networking will make your name familiar when their companies ask them to help recruit DevOps engineers, and can build valuable professional friendships that provide job leads and other assistance.

Contribute to the DevOps-related projects that you identify as having healthy communities. For configuration management, I’ve found that SaltStack is a particularly welcoming group. Find the source code on GitHub, examine the issue tracker, pick something easy, and submit a pull request fixing the bug. As you graduate to working on more challenging or larger issues, remember to advertise your involvement with the project on your LinkedIn profile!

Additionally, help others out by blogging what you learn during these adventures. If you ever find that Google doesn’t have useful results for an error message that you searched, write a blog post with the message and how you fixed it. If you’re tempted to bikeshed over which blogging platform to use, default to GitHub Pages, as a static site hosted there is easy to move to your own hosting later if you so desire.

Examine job postings for roles like you want, and make sure the key buzzwords appear on your LinkedIn profile wherever appropriate. A complete LinkedIn profile for even a relatively new DevOps engineer draws a surprising number of recruiters for DevOps-related roles. If you’re just starting out in the field, I’d recommend expressing interest in every opportunity that you’re contacted about, progressing to at least a phone interview if possible, and getting all the feedback you can about your performance afterwards. It’s especially important to interview at companies that you can’t see yourself enjoying a job at, because you can practice asking probing questions that tell you whether an employer will be a good fit for you. (check out this post for ideas).

Another trick for getting to an interview is to start something with DevOps in the name. It could be anything from a curated blog to a meetup to an online “book club” for DevOps-related papers, but leading something with a cool name seems to be highly attractive to recruiters. Another way to increase your visibility in the field is to give a talk at any local conference, especially LinuxFest and DevOpsDays events. Putting together an introductory talk on a useful technology only requires intermediate proficiency, and is a great way to build your personal brand.

To summarize, there are really 4 tricks to getting DevOps interviews, and you should interview as much as you can to get a feeling for what DevOps means to different parts of the industry:

  • Contribute back to the open source tools that you use
  • Network with established professionals
  • Optimize your LinkedIn and other professional profiles to draw recruiters
  • Be the founder of something.

Questions?

I collect interesting job search and interview advice links at the bottom of my resume repo readme.

I bolded each paragraph’s key points in the hopes of making them easier to read.

You’re welcome to reach out to me at blog @ edunham.net or @qedunham on Twitter if you have other questions. If I made a dumb typo or omitted some information in this post, either tell me about it or just throw a pull request at the repo to fix it and give yourself credit.

]]>
Thu, 05 May 2016 00:00:00 -0700
http://edunham.net/2016/04/18/persona_and_3rd_party_cookies_in_firefox.html http://edunham.net/2016/04/18/persona_and_3rd_party_cookies_in_firefox.html <![CDATA[Persona and third-party cookies in Firefox]]> Persona and third-party cookies in Firefox

Although its front page claims we’ve deprecated Persona, it’s the only way to log into the statusboard and Air Mozilla. For a long time, I was unable to log into any site using Persona from Firefox 43 and 44 because of an error about my browser not being configured to accept third-party cookies.

The support article on the topic says that checking the “always accept cookies” box should fix the problem. I tried setting “accept third-party cookies” to “Always”, and yet the error persisted. (setting the top-level history configuration to “always remember history” didn’t affect the error either).

Fortunately, there’s also an “Exceptions” button by the “Accept cookies from sites” checkbox. Editing the exceptions list to universally allow “http://persona.org” lets me use Persona in Firefox normally.

_static/persona-exception.png

That’s the fix, but I don’t know whose bug it is. Did Firefox mis-balance privacy against convenience? Is the “always accept third-party cookies” setting’s failure to accept a cookie without an exception some strange edge case of a broken regex? Is Persona in the wrong for using a design that requires third-party cookies at all? Who knows!

]]>
Mon, 18 Apr 2016 00:00:00 -0700
http://edunham.net/2016/04/11/plushie_rustacean_pattern.html http://edunham.net/2016/04/11/plushie_rustacean_pattern.html <![CDATA[Plushie Rustacean Pattern]]> Plushie Rustacean Pattern

I made a Rustacean. He’s cute. You can make one too.

../../../_images/ferris-on-pattern.jpg

You’ll Need

  • A couple square feet of orange polar fleece, or any other orange fabric that won’t stretch or fray too much
  • A handful of stuffing. I cannibalized a throw pillow.
  • A needle and some orange thread
  • Black and white fabric scraps and thread, or black and white embroidery floss, for making the face.
  • Intermediate sewing skills
  • This pattern

The Pattern

../../../_images/ferris-pattern-color.png

Get yourself a front, back, underside, and claw drawn on paper, either by printing them out or tracing from a screen. The front, back, and underside should have horizontal symmetry, except for the face placement. Make sure the points marked in red and blue on this pattern are noted on your paper.

Mine measure about 6” wide between the points marked in red.

Sewing vocabulary

  • The right side of a fabric is what ends up on the outside of the finished item. The wrong side ends up where you can’t see it. Some fabrics have both sides the same; in that case, the wrong side is whichever one you feel like tracing the pattern onto.
  • seam allowance is some extra fabric that ends up on the inside of the item when you’re done. The pattern above does not include seam allowance. This means that if you cut the fabric along the lines in the pattern, your finished rustacean will be tiny and sad and shaped wrong. You cut the paper along the lines, then trace it onto the fabric, then sew along the lines.
  • applique is where you sew one piece of fabric onto the surface of another to make a design.
  • There are a bunch of great youtube videos on basic sewing skills. Watch whichever ones you need.

Assembly

  1. Trace a front, a back, an underside, and the 2 claws onto the wrong side of your fabric with whatever will write on it without bleeding through. Make sure to transfer the blue centerline marks and the red three-point join marks.
  2. Cut out the shapes you just traced, leaving about 1” of margin around them. We’ll trim the seams properly later, so don’t worry about getting it exact.
  3. Find a couple claw-sized chunks of leftover fabric and pin one to the back of each claw (right sides together, of course).
  4. Sew around both claws, leaving the arm ends open so you can turn them. I find it’s easiest to backstitch, and you can get away with stitches up to about 1.5mm apart with normal weight polar fleece.
  5. Trim around the outside of the seams on the claws to leave about 1/4” seam allowance, and clip right up to the stitches in the concave spot. If you backstitched, make sure to flip them over before trimming the seams so you don’t accidentally cut through the longer stitches.
  6. Turn the claws so the right side of the fabric is out and the seams are on the inside, and stuff them with stuffing or fabric scraps. A pair of wooden chopsticks from a fast food place makes a great tool for turning and stuffing.
  7. Put the front and back pieces right sides together so the points marked in red and blue on the pattern line up. Pin them together.
  8. Sew from one red mark to the other along Ferris’s spiky back.
  9. Trim around the spikes leaving about 1/4” seam allowance, clipping right up to the seam in the concave spots.
  10. Figure out which side is front (hint, it has only 2 legs rather than 4). Imagine where Ferris’s little face will go when he’s finished. Now, pin both claws onto the right side of the front piece, so they’ll be oriented correctly when he’s done. If in doubt, pin the bottom front in place and turn the whole thing inside out to make sure the claws are right.
  11. Match the center front of the underside with the center of Ferris’s front (both have a blue + on the pattern). Be sure the pieces have their right sides together and the claws are sandwiched between them.
  12. Match the points marked with red triangles on each side of the front and underside together and pin them. If the claws are sticking out at this point, go back to step 10 and try again.
  13. Sew from one red mark to the other to join Ferris’s front to the front of his underside. Put a few extra stitches in the part of the seam where his “arms”/claws are attached, to make sure they can’t be pulled out.
  14. Trim around the 2 tiny legs that you’ve sewn so far, with about 1/8” seam allowance.
  15. Now you can applique his face onto the right side of his front. Or embroider it if you know how. Cut the black and white felt scraps into face-shaped pieces and sew them down, giving Ferris whatever expression you want.
  16. Line up the 4 back legs on the underside and back pieces, and pin them right sides together. Sew everything except the part marked in green – that’s the hole through which you’ll turn him inside out.
  17. Trim around those last 4 legs, leaving at least 1/8” seam allowance. Don’t cut away any more fabric from the bit marked in green. If you leave a bit of extra fabric around the leg seams, they’ll be harder to turn but require less stuffing.
  18. Turn Ferris right side out. Again, chopsticks or the non-pointy end of a barbeque skewer are useful for getting the pointy bits to do the right thing.
  19. Stuff Ferris with the filling. I filled mine quite loosely, because it makes him softer and more huggable. If you overfill his body, his spikes will look silly. If you overfill his legs, they’ll stick out in funny directions and not bend right.
  20. Tuck the seam allowance back into the hole through which you stuffed Ferris and sew it shut. Congratulations, you have your own toy crab!

The Finished Product

../../../_images/ferris-plushie-montage.jpg

He’s cute, cuddly, and palm-sized. Lego dude for scale.

]]>
Mon, 11 Apr 2016 00:00:00 -0700
http://edunham.net/2016/03/24/could_rust_have_a_left_pad_incident.html http://edunham.net/2016/03/24/could_rust_have_a_left_pad_incident.html <![CDATA[Could Rust have a left-pad incident?]]> Could Rust have a left-pad incident?

The short answer: No.

What happened with left-pad?

The Node community had a lot of drama this week when a developer unpublished a package on which a lot of the world depended.

This was fundamentally possible because NPM offers an unpublish feature. Although the docs for unpublish admonish users that “It is generally considered bad behavior to remove versions of a library that others are depending on!” in large bold print, the feature is available.

What’s the Rust equivalent?

The Rust package manager, Cargo, is similar to NPM in that it helps users get the libraries on which their projects depend. Rust’s analog to the NPM index is crates.io.

The best explanation of Cargo’s robustness against unpublish exploits is the docs themselves:

cargo yank

Occasions may arise where you publish a version of a crate that actually ends up being broken for one reason or another (syntax error, forgot to include a file, etc.). For situations such as this, Cargo supports a “yank” of a version of a crate:

$ cargo yank --vers 1.0.1
$ cargo yank --vers 1.0.1 --undo

A yank does not delete any code. This feature is not intended for deleting accidentally uploaded secrets, for example. If that happens, you must reset those secrets immediately.

The semantics of a yanked version are that no new dependencies can be created against that version, but all existing dependencies continue to work. One of the major goals of crates.io is to act as a permanent archive of crates that does not change over time, and allowing deletion of a version would go against this goal. Essentially a yank means that all projects with a Cargo.lock will not break, while any future Cargo.lock files generated will not list the yanked version.

As Cargo author Alex Crichton clarified in a GitHub comment yesterday, the only way that it’s possible to remove code from crates.io is to compel the Rust tools team to edit the database and S3 bucket.

Even if a crate maintainer leaves the community in anger or legal action is taken against a crate, this workflow ensures that code deletion is only possible by a small group of people with the motivation and authority to do it in the way that’s least problematic for users of the Rust language.

For more information on the crates.io package and copyright policies, see this internals thread.

But I just want to left pad a string in Rust??

Although a left-pad crate was created as a joke, you should probably just use the format! built-in from the standard library – for example, format!("{:>10}", s) left-pads s with spaces to a width of 10.

]]>
Thu, 24 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/23/reducing_saltstack_log_verbosity_for_travisci.html http://edunham.net/2016/03/23/reducing_saltstack_log_verbosity_for_travisci.html <![CDATA[Reducing SaltStack log verbosity for TravisCI]]> Reducing SaltStack log verbosity for TravisCI

Servo has some Salt configs, hosted on GitHub, for which changes are smoke-tested on TravisCI before they’re deployed. Travis only shows the first 10k lines of log output, so I want to minimize the amount of extraneous information that the states print.

My salt state looks like:

android-sdk:
  archive.extracted:
    - name: {{ common.homedir }}/android/sdk/{{ android.sdk.version }}
    - source: https://dl.google.com/android/android-sdk_{{
      android.sdk.version }}-linux.tgz
    - source_hash: sha512={{ android.sdk.sha512 }}
    - archive_format: tar
    - archive_user: user
    - if_missing: {{ common.homedir }}/android/sdk/{{ android.sdk.version
      }}/android-sdk-linux
    - require:
      - user: user

The output in TravisCI is:

      ID: android-sdk
Function: archive.extracted
    Name: /home/user/android/sdk/r24.4.1
  Result: True
 Comment: https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz extracted in /home/user/android/sdk/r24.4.1/
 Started: 17:46:25.900436
Duration: 19540.846 ms
 Changes:
          ----------
          directories_created:
              - /home/user/android/sdk/r24.4.1/
              - /home/user/android/sdk/r24.4.1/android-sdk-linux

          extracted_files:
              ... 2755 lines listing one file per line that I don't want to see in the log

https://docs.saltstack.com/en/latest/ref/states/all/salt.states.archive.html has useful guidance on how to increase the tar state’s verbosity, but not to decrease it. This is because the extra 2755 lines aren’t coming from tar itself, but from Salt assuming that we want to know.

terse outputter settings

The outputter’s state_output setting takes several options. The terse option summarizes the result of each state into a single line.

There are a couple places you can set this:

  • Invoke Salt with salt --state-output=terse hostname state.highstate
  • Add the line state_output: terse to /etc/salt/minion, if you’re using salt-call
  • Setting state_output_terse is apparently an option, though I can’t find any example of a real-world salt config that uses it

Setting the terse option in /etc/salt/minion dropped the output of a highstate from over 10,000 lines to about 2500.

]]>
Wed, 23 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/14/fixing_sudo_on_osx.html http://edunham.net/2016/03/14/fixing_sudo_on_osx.html <![CDATA[Fixing sudo errors from the command line on OSX]]> Fixing sudo errors from the command line on OSX

The first symptom that I had made a terrible mistake showed up in an Ansible playbook:

GATHERING FACTS
***************************************************************
fatal: [...] => ssh connection closed waiting for a privilege escalation password prompt
fatal: [...] => ssh connection closed waiting for a privilege escalation password prompt
fatal: [...] => ssh connection closed waiting for sudo password prompt
fatal: [...] => ssh connection closed waiting for sudo password prompt

That looks like the sudo binary might be broken. To rule out Ansible problems, remote into the machine and try to use sudo:

administrators-Mac-mini:~ administrator$ sudo ls
sudo: effective uid is not 0, is sudo installed setuid root?

This meant that there was a file permissions problem:

working-host administrator$ ls -al /usr/bin/sudo
-r-s--x--x  1 root  wheel  164560 Sep  9  2014 /usr/bin/sudo

broken-host administrator$ ls -al /usr/bin/sudo
-rwxrwxr-x  1 root  wheel  164560 Sep  9  2014 /usr/bin/sudo

Now the problem is reduced to fixing the permissions. One does not simply sudo to root, because there’s no working sudo. However, Apple provides a utility which allows you to enable root login using only the administrator account’s permissions:

broken-host administrator$ dsenableroot
username = administrator
user password:
root password:
verify root password:

dsenableroot:: ***Successfully enabled root user.

The first password is the current one for the administrator account, and the other two should be the same string and will become the root account’s password.

After enabling root login, disconnect then SSH into the host as root:

broken-host root# chmod 4411 /usr/bin/sudo

And test that the fix fixed it:

broken-host root# su administrator
broken-host administrator$ sudo ls

Finally, clean up after yourself to inconvenience any future attackers:

broken-host administrator$ dsenableroot -d

Moral of the story: Errant chowns of /usr/bin are just as bad when they come from automation as when they come from humans.

]]>
Mon, 14 Mar 2016 00:00:00 -0700
http://edunham.net/2016/03/08/ansible_vagrant_and_changed_host_keys.html http://edunham.net/2016/03/08/ansible_vagrant_and_changed_host_keys.html <![CDATA[Ansible, Vagrant, and changed host keys]]> Ansible, Vagrant, and changed host keys

Related to this bug, the Vagrant Ansible provisioner seems to ignore some system settings.

The symptom is that when you update a previously used Vagrant box, or otherwise change its host key, Ansible provisioning fails with the error:

fatal: [hostname] => SSH Error: Host key verification failed.
    while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH
debug output to help diagnose the issue.

The standard solution would be to forget about the old host key with ssh-keygen -R "[127.0.0.1]:2200" (known_hosts stores hosts with non-default ports in the [host]:port form) or ignore the change with export ANSIBLE_HOST_KEY_CHECKING=false.

If you trust the box not to be evil and expect its host key to change frequently due to your testing, a fix which the Ansible provisioner does respect is to add ansible.host_key_checking = false to the Vagrantfile, like:

Vagrant.configure(2) do |config|
...
    config.vm.define "hostname" do |hostname|
        hostname.vm.provision "ansible" do |ansible|
            ansible.playbook = "provision/hostname.yaml"
            ansible.sudo = true
            ansible.host_key_checking = false
            ansible.verbose = 'vvvv'
            ansible.extra_vars = { ansible_ssh_user: 'vagrant'}
        end
    end
...
end
]]>
Tue, 08 Mar 2016 00:00:00 -0800
http://edunham.net/2016/03/07/vidyo_with_ubuntu_and_i3wm.html http://edunham.net/2016/03/07/vidyo_with_ubuntu_and_i3wm.html <![CDATA[Vidyo with Ubuntu and i3wm]]> Vidyo with Ubuntu and i3wm

Mozilla uses Vidyo for virtual meetings across distributed teams. If it doesn’t work on your laptop, you can use the mobile client or book a meeting room in an office, but neither of those solutions is optimal when working from home.

Vidyo users within Mozilla can download a .deb or .rpm installer from v.mozilla.org. On Ubuntu, it’s easy to install the downloaded package with sudo dpkg -i path/to/the/file.deb.

The issue is that when you invoke VidyoDesktop from your launcher of choice (dmenu for me), i3 does what’s usually the right thing and makes the client fullscreen in a tile. This doesn’t allow the interface to pop up a floating window with the confirm dialog when you try to join a room, so you can’t.

mod + shift + space

Mod was alt by default last time I installed i3, but I’ve since remapped it to the window key (as IRC clients use alt for switching windows). Some people use caps lock as their mod key.

mod + shift + space makes the window floating, which allows it to pop up the confirmation dialog when you try to join a call.

Float windows by default

Alternately, stick the line:

for_window [class="VidyoDesktop"] floating enable

in your ~/.i3/config.

Installing Vidyo despite the libqt4-gui error

Edited as of May 2017: Recent Vidyos depend on a package that’s not available in Ubuntu’s repos. The easiest workaround is:

sudo dpkg -i --ignore-depends=libqt4-gui path/to/VidyoInstaller.deb
]]>
Mon, 07 Mar 2016 00:00:00 -0800
http://edunham.net/2016/02/26/are_we_building_are_we_sites_yet.html http://edunham.net/2016/02/26/are_we_building_are_we_sites_yet.html <![CDATA[Are we 'are we' yet?]]> Are we ‘are we’ yet?

The Rust community, being founded and enjoyed by a variety of Mozillians, seems to have inherited the tradition of tracking top-level progress metrics using are we sites.

  • Are we concurrent yet? tracks the progress of Rust’s concurrency ecosystem
  • Are we web yet? tracks the status of Rust’s HTTP stack, web frameworks, and related libraries
  • Are we IDE yet? provides a list of what features are supported for Rust per IDE, and links to the relevant tracking issues and RFCs

If this blog post was an ‘are we’ page itself, the big text at the top would probably say “Getting There”.

]]>
Fri, 26 Feb 2016 00:00:00 -0800
http://edunham.net/2016/02/19/buildbot_withproperties.html http://edunham.net/2016/02/19/buildbot_withproperties.html <![CDATA[Buildbot WithProperties]]> Buildbot WithProperties

Today, I copied an existing command from a Buildbot configuration and then modified it to print a date into a file:

...
if "cargo" in component:
    cargo_date_cmd = "echo `date +'%Y-%m-%d'` > " + final_dist_dir + "/cargo-build-date.txt"
    f.addStep(MasterShellCommand(name="Write date to cargo-build-date.txt",
                             command=["sh", "-c", WithProperties(cargo_date_cmd)] ))
...

It broke:

Failure: twisted.internet.defer.FirstError: FirstError[#8, [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.FirstError'>: FirstError[#2, [Failure instance: Traceback: <type 'exceptions.ValueError'>: unsupported format character 'Y' (0x59) at index 14

Why? WithProperties.

It turns out that WithProperties should only be used when you need to interpolate strings into an argument, using either %s, %d, or %(propertyname)s syntax in the string.

The lesson here is that Buildbot will happily accept WithProperties("echo 'this command uses no interpolation'") in a command argument, and then blow up at you if you ever change the command to have a % in it.
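
For the record, here’s a sketch of a fix that keeps WithProperties by escaping the literal percent signs – untested, but it follows the %-interpolation rules described above, where %% renders as a literal %:

...
if "cargo" in component:
    # %% escapes a literal % so WithProperties' interpolation ignores it
    cargo_date_cmd = "echo `date +'%%Y-%%m-%%d'` > " + final_dist_dir + "/cargo-build-date.txt"
    f.addStep(MasterShellCommand(name="Write date to cargo-build-date.txt",
                             command=["sh", "-c", WithProperties(cargo_date_cmd)] ))
...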

However, it appears that build steps run as MasterShellCommands without WithProperties do not display their name in the waterfall, but rather say “running” or “ran”.

]]>
Fri, 19 Feb 2016 00:00:00 -0800
http://edunham.net/2016/02/15/using_notty.html http://edunham.net/2016/02/15/using_notty.html <![CDATA[Using Notty]]> Using Notty

I recently got the “Hey, you’re a Rust Person!” question of how to install notty and interact with it.

A TTY was originally a teletypewriter. Linux users will have most likely encountered the concept of TTYs in the context of the TTY1 interface where you end up if your distro fails to start its window manager. Since you use ctrl + alt + f[1,2,...] to switch between these interfaces, it’s easy to assume that “TTY” refers to an interactive workspace.

Notty itself is only a virtual terminal. Think of it as a library meant as a building block for creating graphical terminal emulators. This means that a user who saw it on Hacker News and wants to play around should not ask “how do I install notty”, but rather “how do I run a terminal emulator built on notty?”.

Easy Mode

Get some Rust:

curl -sf https://raw.githubusercontent.com/brson/multirust/master/blastoff.sh | sh
multirust update nightly

Get the system dependencies:

sudo apt-get install libcairo2-dev libgdk-pixbuf2.0 libatk1.0 libsdl-pango-dev libgtk-3-dev

Run Notty:

git clone https://github.com/withoutboats/notty.git
cd notty/scaffolding
multirust run nightly cargo run

And there you have it! As mentioned in the notty README, “This terminal is buggy and feature poor and not intended for general use”. Notty is meant as a library for building graphical terminals, and scaffolding is only a minimal proof of concept.

Explanation: Getting Rust

Since the Rust language is still under active development, many features are available in the Nightly version of the compiler which are not yet available in Stable. If you got Rust from your package manager, you probably are using Stable. To check, run rustc --version and see whether the result says “nightly” in it.

Notty uses some features that’re available in Nightly but not Stable. If you try to compile it with Stable, you’ll get an error that makes this obvious:

Compiling notty v0.1.0 (file:///home/edunham/code/notty)
src/lib.rs:16:1: 16:16 error: #[feature] may not be used on the stable release channel
src/lib.rs:16 #![feature(io)]
              ^~~~~~~~~~~~~~~
error: aborting due to previous error
Could not compile `notty`.

When you need to switch between Rust versions frequently, multirust is the tool for the job.

Explanation: Getting system dependencies

I’ve reproduced the following error messages in full to help out any confused new Rustaceans Googling for them:

Cairo is a graphics library that you can get from your system package manager. If you try to compile notty’s dependencies without it, you’ll get an error:

Build failed, waiting for other jobs to finish...
failed to run custom build command for `cairo-sys-rs v0.2.1`
Process didn't exit successfully:
`/home/edunham/code/notty/notty-cairo/target/release/build/cairo-sys-rs-1d0cf50d5d2dab2f/build-script-build`
(exit code: 101)
--- stderr
thread '<main>' panicked at '`"pkg-config" "--libs" "--cflags" "cairo"` did
not exit successfully: exit code: 1
--- stderr
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
', /home/edunham/.multirust/toolchains/nightly/cargo/registry/src/github.com-0a35038f75765ae4/cairo-sys-rs-0.2.1/build.rs:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The only other gotcha about the dependencies is that errors about gdk actually mean you need to install the libgtk-3-dev package:

failed to run custom build command for `gdk-sys v0.2.1`
Process didn't exit successfully:
`/home/edunham/code/notty/scaffolding/target/release/build/gdk-sys-e1b0a13b32593729/build-script-build`
(exit code: 101)
--- stderr
thread '<main>' panicked at '`"pkg-config" "--libs" "--cflags" "gdk-3.0"` did
not exit successfully: exit code: 1
--- stderr
Package gdk-3.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gdk-3.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gdk-3.0' found
', /home/edunham/.multirust/toolchains/nightly/cargo/registry/src/github.com-0a35038f75765ae4/gdk-sys-0.2.1/build.rs:17
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Running notty

Compiling and running scaffolding necessarily builds a bunch of dependencies, some of which throw various warnings. You also might be able to crash scaffolding with an error such as:

thread '<main>' panicked at 'not yet implemented', .../notty/src/datatypes/mod.rs:160

This, along with everywhere else that unimplemented!() occurs in the notty source code, is an opportunity for you to contribute and help improve the project!

]]>
Mon, 15 Feb 2016 00:00:00 -0800
http://edunham.net/2016/01/19/how_much_knowledge_do_you_need_to_give_a_conference_talk.html http://edunham.net/2016/01/19/how_much_knowledge_do_you_need_to_give_a_conference_talk.html <![CDATA[How much knowledge do you need to give a conference talk?]]> How much knowledge do you need to give a conference talk?

I was recently asked an excellent question when I promoted the LFNW CFP on IRC:

As someone who has never done a talk, but wants to, what kind of knowledge do you need about a subject to give a talk on it?

If you answer “yes” to any of the following questions, you know enough to propose a talk:

  • Do you have a hobby that most tech people aren’t experts on? Talk about applying a lesson or skill from that hobby to tech! For instance, I turned a habit of reading about psychology into my Human Hacking talk.
  • Have you ever spent a bunch of hours forcing two tools to work with each other, because the documentation wasn’t very helpful and Googling didn’t get you very far, and built something useful? “How to build ___ with ___” makes a catchy talk title, if the thing you built solves a common problem.
  • Have you ever had a mentor sit down with you and explain a tool or technique, and the new understanding improved the quality of your work or code? Passing along useful lessons from your mentors is a valuable talk, because it allows others to benefit from the knowledge without taking as much of your mentor’s time.
  • Have you seen a dozen newbies ask the same question over the course of a few months? When your answer to a common question starts to feel like a broken record, it’s time to compose it into a talk then link the newbies to your slides or recording!
  • Have you taken a really interesting class lately? Can you distill part of it into a 1-hour lesson that would appeal to nerds who don’t have the time or resources to take the class themselves? (thanks lucyw for adding this to the list!)
  • Have you built a cool thing that over a dozen other people use? A tutorial talk can not only expand your community, but its recording can augment your documentation and make the project more accessible for those who prefer to learn directly from humans!
  • Did you benefit from a really great introductory talk when you were learning a tool? Consider doing your own tutorial! Any conference with beginners in their target audience needs at least one Git lesson, an IRC talk, and some discussions of how to use basic Unix utilities. These introductory talks are actually better when given by someone who learned the technology relatively recently, because newer users remember what it’s like not to know how to use it. Just remember to have a more expert user look over your slides before you present, in case you made an incorrect assumption about the tool’s more advanced functionality.

I personally try to propose talks I want to hear, because the deadline of a CFP or conference is great motivation to prioritize a cool project over ordinary chores.

]]>
Tue, 19 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/16/buildbot_and_eoferror.html http://edunham.net/2016/01/16/buildbot_and_eoferror.html <![CDATA[Buildbot and EOFError]]> Buildbot and EOFError

More SEO-bait, after tracking down a poorly documented problem:

# buildbot start master
Following twistd.log until startup finished..
2016-01-17 04:35:49+0000 [-] Log opened.
2016-01-17 04:35:49+0000 [-] twistd 14.0.2 (/usr/bin/python 2.7.6) starting up.
2016-01-17 04:35:49+0000 [-] reactor class: twisted.internet.epollreactor.EPollReactor.
2016-01-17 04:35:49+0000 [-] Starting BuildMaster -- buildbot.version: 0.8.12
2016-01-17 04:35:49+0000 [-] Loading configuration from '/home/user/buildbot/master/master.cfg'
2016-01-17 04:35:53+0000 [-] error while parsing config file:
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
        _inlineCallbacks(r, g, deferred)
      File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1099, in _inlineCallbacks
        result = g.send(result)
      File "/usr/local/lib/python2.7/dist-packages/buildbot/master.py", line 189, in startService
        self.configFileName)
    --- <exception caught here> ---
      File "/usr/local/lib/python2.7/dist-packages/buildbot/config.py", line 156, in loadConfig
        exec f in localDict
      File "/home/user/buildbot/master/master.cfg", line 415, in <module>
        extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/status_push.py", line 404, in __init__
        secondaryQueue=DiskQueue(path, maxItems=maxDiskItems))
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 286, in __init__
        self.secondaryQueue.popChunk(self.primaryQueue.maxItems()))
      File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 208, in popChunk
        ret.append(self.unpickleFn(ReadFile(path)))
    exceptions.EOFError:

2016-01-17 04:35:53+0000 [-] Configuration Errors:
2016-01-17 04:35:53+0000 [-]   error while parsing config file:  (traceback in logfile)
2016-01-17 04:35:53+0000 [-] Halting master.
2016-01-17 04:35:53+0000 [-] Main loop terminated.
2016-01-17 04:35:53+0000 [-] Server Shut Down.

This happened after the buildmaster’s disk filled up and a bunch of stuff was manually deleted. There were no changes to master.cfg since it last worked perfectly.

The fix was to examine master.cfg to see where the HttpStatusPush was created, of the form:

c['status'].append(HttpStatusPush(
    serverUrl='http://build.servo.org:54856/buildbot',
    extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))

Digging in the Buildbot source reveals that persistent_queue.py wants to unpickle a cache file from /events_build.servo.org/-1 if there was nothing in /events_build.servo.org/. To fix this the right way, create that file and make sure Buildbot has +rwx on it.
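For example, assuming the buildmaster lives in /home/user/buildbot/master as in the traceback above (adjust the paths and the user to match your own setup), recreating the cache file looks like:

$ mkdir -p /home/user/buildbot/master/events_build.servo.org
$ touch /home/user/buildbot/master/events_build.servo.org/-1
$ chown -R user:user /home/user/buildbot/master/events_build.servo.org
$ chmod -R u+rwx /home/user/buildbot/master/events_build.servo.org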

Alternately, you can give up on writing your status push cache to disk entirely by adding the line maxDiskItems=0 to the creation of the HttpStatusPush, giving you:

c['status'].append(HttpStatusPush(
   serverUrl='http://build.servo.org:54856/buildbot',
   maxDiskItems=0,
   extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))

The real moral of the story is “remember to use logrotate.”

]]>
Sat, 16 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/13/who_would_you_hire.html http://edunham.net/2016/01/13/who_would_you_hire.html <![CDATA[Who would you hire?]]> Who would you hire?

If you’re using open source as a portfolio to make yourself a more competitive job candidate, it can feel like you have to start your own project to show off your skills.

In the words of one job seeker I chatted with recently, “I feel like most of my contributions [to other peoples’ projects] aren’t that significant or noteworthy”. Here’s a thought experiment to justify including projects to which you contribute, even without a leadership role, on your resume:

Imagine you want to hire a coder.

Candidate A always works alone and refuses to contribute to a project if it doesn’t make her look like a rockstar.

Candidate B triages the unglamorous issues that affect multiple users, and steadily produces small, self-contained fixes that avoid introducing new bugs.

When the situation is framed in these terms, I hope that it’s obvious which coder you’d want on your team.

When writing your resume, there’s only space to include a few of the many activities in which you invest your time. It’s tempting to only include your biggest, highest-profile solo projects, while disregarding those projects to which you’ve made a small but steady stream of useful contributions.

Reread your resume from the perspective of someone who hasn’t met you yet and has only the information in that document available to form a first impression of your character. Which of the 2 hypothetical coders does it make you sound like? Is that how you really are?

]]>
Wed, 13 Jan 2016 00:00:00 -0800
http://edunham.net/2016/01/09/troubleshooting_stunnel.html http://edunham.net/2016/01/09/troubleshooting_stunnel.html <![CDATA[Troubleshooting stunnel]]> Troubleshooting stunnel

Today I’ve learned a few things about how stunnel works. The main takeaway is that Googling for specific errors in the stunnel log is incredibly unhelpful, turning up only a variety of mailing list posts with no replies. Tracking an error message through the program’s source doesn’t lead to any useful comments, either. So here’s some SEO bait with concrete troubleshooting suggestions.

I started out, as usual, with a pile of errors:

/usr/local/bin/stunnel
[ ] Clients allowed=2000
[ ] Cron thread initialized
[.] stunnel 5.27 on x86_64-apple-darwin14.0.0 platform
[.] Compiled/running with OpenSSL 1.0.2e 3 Dec 2015
[.] Threading:PTHREAD Sockets:POLL,IPv6 TLS:ENGINE,FIPS,OCSP,PSK,SNI
[ ] errno: (*__error())
[.] Reading configuration from file stunnel.conf
[.] UTF-8 byte order mark not detected
[ ] Initializing service [9987]
[!] Error resolving "127.0.0.1": Neither nodename nor servname known (EAI_NONAME)
[ ] Cannot resolve connect target - delaying DNS lookup
[ ] No certificate or private key specified
[ ] SSL options: 0x03000004 (+0x03000000, -0x00000000)
[.] Configuration successful
[ ] Listening file descriptor created (FD=6)
[!] bind: Address already in use (48)
[!] Error binding service [9987] to 127.0.0.1:9987
[ ] Closing service [9987]
[ ] Service [9987] closed
stunnel startup failed, already running?

[!] Error resolving “127.0.0.1”: Neither nodename nor servname known

This was the biggest wat, and the hardest to track down because the solution is so obvious.

“Error resolving” sounds like the machine hasn’t been informed of localhost’s existence, so let’s check:

$ cat /etc/hosts
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost

And I can even ping 127.0.0.1 successfully. So the message and I have different ideas about what it means to “resolve” an IP.

I found the fix here by diffing the stunnel.conf against that on a working machine, and learned that I’d neglected to specify the correct port number on the destination host.

The solution to the “Error resolving localhost” turned out to be specifying the correct port for the other end of the stunnel:

$ cat stunnel.conf
pid =

[9987]
client = yes
accept = 127.0.0.1:9987
cafile = ./cert.pem
verify = 3
connect = 01.23.456.789:9988

Wow. Painfully obvious after you realize what’s wrong, and just plain painful before.

[!] Error binding service [9987] to 127.0.0.1:9987

The “already running?” hint is correct here. This error means stunnel didn’t let go of the port despite failing to start on a previous attempt.

Easy fix; check whether it’s really stunnel hogging the port:

$ lsof -i tcp:9987
COMMAND PID      USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
stunnel 363        me    6u  IPv4 0x20f17e5e0dd35277      0t0  TCP localhost:dsm-scm-target (LISTEN)

and if so, whack it with a metaphorical hammer:

$ sudo killall stunnel

Tada!

After getting the destination IP+port combination specified correctly and the old broken stunnel killed, the stunnel starts successfully.

]]>
Sat, 09 Jan 2016 00:00:00 -0800
http://edunham.net/2015/12/21/questions_about_open_source_and_design.html http://edunham.net/2015/12/21/questions_about_open_source_and_design.html <![CDATA[Questions about Open Source and Design]]> Questions about Open Source and Design

Today, I posed a question to some professional UI and UX designers:

How can an open source project without dedicated design experts collaborate with amateur, volunteer designers to produce a well-designed product?

They revealed that they’ve faced similar collaboration challenges, but knew of neither a specific process to solve the problem nor an organization that had overcome it in the past.

Have you solved this problem? Have you tried some process or technique and learned that it’s not able to solve the problem? Email me (design@edunham.net) if you know of an open source project that’s succeeded at opening their design as well, and I’ll update back here with what I learn!

In no particular order, here are some of the problems that we were talking about:

  • Non-designers struggle to give constructive feedback on design. I can say “that’s ugly” or “that’s hard to use” more easily than I can say “here’s how you can make it better”.
  • Projects without designers in the main decision-making team can have a hard time evaluating the quality of a proposed design.
  • Non-designers struggle to articulate the objective design needs of their projects, so design remains a single monolithic problem rather than being decomposed into bite-sized, introductory issues the way code problems are.
  • Volunteer designers have a difficult time finding open source projects to get involved with.
  • Non-designers don’t know the difference between different types of design, and tend to bikeshed on superficial, obvious traits like colors when they should be focusing on more subtle parts of the user experience. We as non-designers are like clients who ask for a web site without knowing that there’s a difference between frontend development, backend development, database administration, and systems administration.
  • The tests which designers apply to their work are often almost impossible to automate. For instance, I gather that a lot of user interaction testing involves watching new users attempt to complete a task using a given design, and observing the challenges they encounter.

Again, if you know of an open source project that’s overcome any of these challenges, please email me at design@edunham.net and tell me about it!

]]>
Mon, 21 Dec 2015 00:00:00 -0800
http://edunham.net/2015/12/03/linode_plan_names_and_pricing.html http://edunham.net/2015/12/03/linode_plan_names_and_pricing.html <![CDATA[Linode vs AWS]]> Linode vs AWS

I’m examining a Linode account in order to figure out how to switch the application its instances are running to AWS. The first challenge is that instance types in the main dashboard are described by arbitrary numbers (“UI Name” in the chart below), rather than a statistic about their resources or pricing. Here’s how those magic numbers line up to hourly rates and their corresponding monthly price caps:

RAM    Hourly $    Monthly $   UI Name   Cores   GB SSD
1GB    $0.015/hr   $10/mo      1024      1       24
2GB    $0.03/hr    $20/mo      2048      2       48
4GB    $0.06/hr    $40/mo      4096      4       96
8GB    $0.12/hr    $80/mo      8192      6       192
16GB   $0.24/hr    $160/mo     16384     8       384
32GB   $0.48/hr    $320/mo     32768     12      768
48GB   $0.72/hr    $480/mo     49152     16      1152
64GB   $0.96/hr    $640/mo     65536     20      1536
96GB   $1.44/hr    $960/mo     98304     20      1920

AWS “Equivalents”

AWS T2 instances have burstable performance. M* instances are general-purpose; C* are compute-optimized; R* are memory-optimized. *3 instances run on slightly older Ivy Bridge or Sandy Bridge processors, while *4 instances run on the newer Haswells. I’m disregarding the G2 (GPU-optimized), D2 (dense-storage), and I2 (IO-optimized) instance types from this analysis.

Note that the AWS specs page lists memory in GiB rather than GB. I’ve converted everything to GB in the following table, since the Linode specs are in GB and no meaningful pattern in the AWS RAM amounts is lost in the conversion.

Hourly price is the Linux/UNIX rate for US West (Northern California) on 2015-12-03. Monthly price estimate is the hourly price multiplied by 730.
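For instance, here’s the arithmetic behind the t2.large row, using 1 GiB = 1.073741824 GB. (The trailing division by 1 is a bc quirk: multiplication ignores the scale setting, but division honors it, truncating to two decimal places the way the table does.)

$ echo 'scale=2; 8 * 1.073741824 / 1' | bc    # 8 GiB expressed in GB
8.58
$ echo 'scale=2; .136 * 730 / 1' | bc         # hourly rate times 730 hours
99.28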

Instance      vCPU   GB RAM   $/hr    $/month
t2.micro      1      1.07     .017    12.41
t2.small      1      2.14     .034    24.82
t2.medium     2      4.29     .068    49.64
t2.large      2      8.58     .136    99.28
m4.large      2      8.58     .147    107.31
m4.xlarge     4      17.18    .294    214.62
m4.2xlarge    8      34.36    .588    429.24
m4.4xlarge    16     68.72    1.176   858.48
m4.10xlarge   40     171.8    2.94    2146.2
m3.medium     1      4.02     .077    56.21
m3.large      2      8.05     .154    112.42
m3.xlarge     4      16.11    .308    224.84
m3.2xlarge    8      32.21    .616    449.68
c4.large      2      4.02     .138    100.74
c4.xlarge     4      8.05     .276    201.48
c4.2xlarge    8      16.11    .552    402.96
c4.4xlarge    16     32.21    1.104   805.92
c4.8xlarge    36     64.42    2.208   1611.84
c3.large      2      4.02     .12     87.6
c3.xlarge     4      8.05     .239    174.47
c3.2xlarge    8      16.11    .478    348.94
c3.4xlarge    16     32.21    .956    697.88
c3.8xlarge    32     64.42    1.912   1395.76
r3.large      2      16.37    .195    142.35
r3.xlarge     4      32.75    .39     284.7
r3.2xlarge    8      65.50    .78     569.4
r3.4xlarge    16     131      1.56    1138.8
r3.8xlarge    32     262      3.12    2277.6

Comparison

Linode and AWS do not compare cleanly at all. The smallest AWS instance to match a given Linode type’s RAM typically has fewer vCPUs and costs more in the region where I compared them. Conversely, the smallest AWS instance to match a Linode type’s number of cores often has almost double the RAM of the Linode, and costs substantially more.

Switching from Linode to AWS

When I examine the Servo build machines’ utilization graphs via the Linode dashboard, it becomes clear that even their load spikes aren’t fully utilizing the available CPUs. To view memory usage stats on Linode, it’s necessary to configure hosts to run the longview client. After installation, the client begins reporting data to Linode immediately.

After a few days, these metrics can be used to find the smallest AWS instance whose specs exceed what your application is actually using on Linode.


]]>
Thu, 03 Dec 2015 00:00:00 -0800
http://edunham.net/2015/11/25/giving_thanks_to_rust_contributors.html http://edunham.net/2015/11/25/giving_thanks_to_rust_contributors.html <![CDATA[Giving Thanks to Rust Contributors]]> Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:

../../../_images/orglog_deploy_teaser.png

The stats page is interactive and available at http://edunham.github.io/rust-org-stats/. The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) and nightli.es so that Rust’s organization-wide contributor stats will be automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

]]>
Wed, 25 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/23/docker_on_ubuntu.html http://edunham.net/2015/11/23/docker_on_ubuntu.html <![CDATA[PSA: Docker on Ubuntu]]> PSA: Docker on Ubuntu
$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io system package.

$ sudo apt-get install docker.io
$ which docker
/usr/bin/docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)
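To double-check that the group change took effect, here’s a quick smoke test (hello-world is Docker’s standard test image):

$ groups | grep docker
$ docker run hello-world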

]]>
Mon, 23 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/17/installing_rust_without_root.html http://edunham.net/2015/11/17/installing_rust_without_root.html <![CDATA[Installing Rust without root]]> Installing Rust without root

I just got a good question from a friend on IRC: “Should I ask my university’s administration to install Rust on our shared servers?” The answer is “you don’t have to”.

Pick one of the two following sets of directions. I’d recommend using Multirust, because it automatically checks the packages it downloads and lets you switch between Rust versions trivially.

Without multirust

If you just want one version of Rust, this blog post by Valérian Galliat has a fix in 7 lines:

mkdir -p ~/.rust && cd ~/.rust
wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
tar xf rust-nightly-x86_64-unknown-linux-gnu.tar.gz
mv rust-nightly-x86_64-unknown-linux-gnu rust
export LD_LIBRARY_PATH=~/.rust/rust/rustc/lib:$LD_LIBRARY_PATH
export PATH=~/.rust/rust/rustc/bin:$PATH
export PATH=~/.rust/rust/cargo/bin:$PATH

If you want rust stable instead of rust nightly, use the URL https://static.rust-lang.org/dist/rust-stable-x86_64-unknown-linux-gnu.tar.gz in the wget step to download the latest stable release.

If you’re security-conscious, you might want to verify the integrity of the tarball before inflating it and running its contents. We provide a GPG signature of every tarball, and sha256 sums of the tarballs and signatures.

You can construct the URL for shasum or GPG signature by adding the desired extension to the tarball’s URL, so for nightly:

https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc
https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc.sha256

To verify the GPG signature, you’ll also need a copy of the Rust project’s public key. This key is available through several channels:

  • on the Rust website, available only over HTTPS.
  • on keybase.io, correlated to Rust’s Twitter account and URL. Don’t worry, we authenticated the key by signing a string from Keybase with it locally. We don’t trust them to ever see our private key.
  • on GitHub, in the website’s repository.

Remember, verifying the signature only guarantees that the tarball you downloaded matches the one that was produced by the Rust project’s build infrastructure. As with any piece of software, there exist a variety of threat models from which verifying the signatures cannot completely protect you.
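Putting that together, a verification session might look like this (assuming the Rust public key is already in your GPG keyring):

$ wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc
$ wget https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256
$ sha256sum rust-nightly-x86_64-unknown-linux-gnu.tar.gz    # compare against the .sha256 file's contents
$ cat rust-nightly-x86_64-unknown-linux-gnu.tar.gz.sha256
$ gpg --verify rust-nightly-x86_64-unknown-linux-gnu.tar.gz.asc rust-nightly-x86_64-unknown-linux-gnu.tar.gz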

Multirust without root

Multirust is a tool that makes it easy to use multiple Rust versions on the same system. Although the absolute easiest way to use it is curl -sf https://raw.githubusercontent.com/brson/multirust/master/blastoff.sh | sh (which will interactively request a sudo password partway through), it can be installed without root as well:

git clone --recursive https://github.com/brson/multirust && cd multirust
./build.sh # create install.sh
mkdir ~/.rust
./install.sh --prefix=~/.rust/
echo "PATH=~/.rust/bin:$PATH" >> ~/.bashrc; source ~/.bashrc

If you run into an error like:

install: WARNING: failed to run ldconfig. this may happen when not installing
as root. run with --verbose to see the error

or in verbose mode,:

install: running ldconfig
/sbin/ldconfig.real: Can't create temporary cache file /etc/ld.so.cache~:
Permission denied
install: WARNING: failed to run ldconfig. this may happen when not installing
as root. run with --verbose to see the error

It means you don’t have permissions to write to /etc/ld.so.cache. Until this issue gets fixed, the easiest workaround to lacking those permissions is to change the script called by the installer to pass -C to ldconfig:

sed -i 's/   ldconfig/   ldconfig -C ~\/.rust\/ld.so.cache/' build/work/multirust-0.7.0/install.sh

Then you should be able to ./install.sh --prefix=~/.rust without the prior warning. Nasty hack, but the easiest way to get it working today.

This technically breaks rustc (since it’s dynamically linked), but if you’re building a Rust project or library, you’ll be using the statically linked cargo tool and thus won’t be affected.

By the way, this is an example of why people who write system utilities like ldconfig should make them able to read their settings out of environment variables as well as just command-line arguments.

Now you can multirust default nightly to install rust-nightly and configure it as the default, and you’re ready to roll!

Testing your Rust installation

You can now make a package that says “Hello World” in just 5 commands, using a workflow that will scale to packaging and distributing larger projects:

cargo new hello --bin
echo "fn main(){println!(\"Hello World\");}" > hello/src/main.rs
cd hello
cargo build
cargo run

Congratulations, you’re running Rust!

]]>
Tue, 17 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/12/multiple_languages_on_travisci.html http://edunham.net/2015/11/12/multiple_languages_on_travisci.html <![CDATA[Multiple languages on TravisCI]]> Multiple languages on TravisCI

Today I noticed an assumption which was making my life unnecessarily difficult: I assumed that if my .travis.yml said language: ruby on the first line, I was supposed to only run Ruby code from it.

Travis lets you run far more arbitrary code than that.

I did a bunch of tests on a toy repo to see what would happen if I ignored my preconceptions about how you can and can’t test stuff, and learned some interesting things:

  • You can install PyPI packages in a test suite that’s technically Ruby, or gems in a test suite that’s technically Python.
  • If your project is language:ruby, you need to sudo pip install dependencies. If it’s language:python, you can just gem install dependencies without sudo.
  • If I specify multiple instances of language: or multiple build matrices, Travis uses the language whose build matrix occurs last. If I specify a Python matrix and then a Ruby one, the Ruby matrix will be run.

This is especially useful when testing or deployment requires hitting an API whose libraries are most up to date in a language other than that of the project.
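For instance, a stripped-down .travis.yml along these lines (the package names are just illustrative) exercises both ecosystems from one Ruby build:

language: ruby
rvm: 2.2
install:
  - sudo pip install requests      # PyPI packages need sudo under language: ruby
  - gem install rake
script:
  - python -c "import requests"    # Python runs fine in a "Ruby" build
  - rake --version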

]]>
Thu, 12 Nov 2015 00:00:00 -0800
http://edunham.net/2015/11/04/beyond_openhatch.html http://edunham.net/2015/11/04/beyond_openhatch.html <![CDATA[Beyond Openhatch]]> Beyond Openhatch

Update: I’m now maintaining the issue aggregator list at http://edunham.net/pages/issue_aggregators.html

OpenHatch is a wonderful place to help new contributors find their first open source issues to work on. Their training materials are unparalleled, and the “projects submit easy bugs with mentors” model makes their list of introductory issues reliably high-quality.

However, once you know the basics of how to engage with an open source project, you’re no longer in the target audience for OpenHatch’s list. Where should you look for introductory issues when you want to get involved with a new project, but you’re already familiar with open source in general?

An excellent slide deck by Josh Matthews contains several answers to this question:

  • issuehub.io scrapes GitHub by labels and language
  • up-for-grabs has an opt-in list of projects looking for new contributors, and scrapes their issue trackers for their “jump in”, “up for grabs” or other “new contributors welcome” tags.
  • If you’re looking for Mozilla-specific contributions outside of just code, What can I do for Mozilla? can help direct you into any of Mozilla’s myriad opportunities for involvement.

Additionally, the servo-starters page has a custom view of easy issues sorted by Servo’s project-specific tags.

GitHub Tricks

If you’re looking for open issues across all repos owned by a particular user or organization, you can use the search at https://github.com/pulls and specify the “user” (or org) in the search bar. For instance, this search will find all the unassigned, easy-tagged issues in the rust-lang org. Breaking down the search (the full query is assembled after this list):

  • user:rust-lang searches all repos owned by github.com/rust-lang. It could also be someone’s github username.
  • is:open searches only open issues.
  • no:assignee will filter out the issues which are obviously claimed. Note that some issues without an assignee set may still have a comment saying “I’ll do this!”, if it was claimed by a user who did not have permissions to set assignees and then not triaged.
  • label:E-Easy uses my prior knowledge that most repos within rust-lang annotate introductory bugs with the E-easy tag. When in doubt, check the contributing.md file at the top level in the org’s most popular repository for an explanation of what various issue labels mean. If that information isn’t in the contributing file or the README, file a bug!
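Putting those pieces together, the whole query to type into the search bar is just:

user:rust-lang is:open no:assignee label:E-easy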

Am I missing your favorite introductory issue aggregator? Shoot me an email to ___@edunham.net (fill in the blank with anything; the email will get to me) with a link, and I’ll add it here if it looks good!

]]>
Wed, 04 Nov 2015 00:00:00 -0800
http://edunham.net/2015/10/29/psa_pin_versions.html http://edunham.net/2015/10/29/psa_pin_versions.html <![CDATA[PSA: Pin Versions]]> PSA: Pin Versions

Today, the website’s build broke. We made no changes to the tests, yet a wild dependency error emerged:

Generating...

  Dependency Error: Yikes! It looks like you don't have redcarpet or one of
its dependencies installed. In order to use Jekyll as currently configured,
you'll need to install this gem. The full error message from Ruby is: 'cannot
load such file -- redcarpet' If you run into trouble, you can find helpful
resources at http://jekyllrb.com/help/!

  Conversion error: Jekyll::Converters::Markdown encountered an error while
converting 'conduct.md':

                    redcarpet

             ERROR: YOUR SITE COULD NOT BE BUILT:

                    ------------------------------------

                    redcarpet

The command "jekyll build" exited with 1.

Although Googling the error was unhelpful, a bit more digging revealed that our last working build had been on Jekyll 2.5.3 and the builds breaking on a Redcarpet error all used 3.0.0.

The moral of the story is that where the .travis.yml said - gem install jekyll, it should have said - gem install jekyll -v 2.5.3.
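In other words, the fixed install step pins the last known-good version:

install:
  - gem install jekyll -v 2.5.3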

]]>
Thu, 29 Oct 2015 00:00:00 -0700
http://edunham.net/2015/10/25/seagl_2015_retrospective.html http://edunham.net/2015/10/25/seagl_2015_retrospective.html <![CDATA[SeaGL 2015 Retrospective]]> SeaGL 2015 Retrospective

As well as nominally helping organize the event, I attended and spoke at SeaGL 2015 this weekend. The slides from my talk are here.

My talk drew an audience of perhaps a dozen people on Friday afternoon. I didn’t record this instance of the talk, but will probably give it at least one more time and be sure to record then.

One of the more useful tools I learned about is called myrepos. It lets you update all of the Git repositories on a machine at the same time, as well as other neat tricks like replaying actions that failed due to network problems. Its author has written a variety of other useful Git wrappers, as well.
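If you want to give it a try, the core workflow is only a couple of commands (the repo path here is just an example):

$ cd ~/src/some-repo
$ mr register    # adds this repo to ~/.mrconfig
$ mr update      # updates every registered repo on the machine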

Additionally, VCSH seems to be the “I knew somebody else wrote that already!” tool for keeping parts of a home directory in Git.

]]>
Sun, 25 Oct 2015 00:00:00 -0700
http://edunham.net/2015/10/14/upgrading_buildbot_0_8_6_to_0_8_12.html http://edunham.net/2015/10/14/upgrading_buildbot_0_8_6_to_0_8_12.html <![CDATA[Upgrading Buildbot 0.8.6 to 0.8.12]]> Upgrading Buildbot 0.8.6 to 0.8.12

Here are some quick notes on upgrading Buildbot.

System Dependencies

There are more now. In order to successfully install all of Buildbot’s dependencies with Pip, I needed a few more apt packages:

python-dev
python-openssl
libffi-dev
libssl-dev

Then, for sanity’s sake, make a virtualenv and install the following packages (the full sequence of commands is sketched after the list). Note that having too new a SQLAlchemy will break things:

buildbot==0.8.12
boto
pyopenssl
cryptography
SQLAlchemy<=0.7.10
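A sketch of that sequence, assuming ~/buildbot-venv as the virtualenv’s location:

$ virtualenv ~/buildbot-venv
$ . ~/buildbot-venv/bin/activate
$ pip install buildbot==0.8.12 boto pyopenssl cryptography 'SQLAlchemy<=0.7.10'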

Virtualenvs

Troubleshooting compatibility issues with system packages on a host that runs several Python services with various dependency versions is predictably terrible.

The potential problem with switching to running Buildbot only from a virtualenv is that developers with access to the buildmaster might want to restart it and miss the extra step of activating the virtualenv. I addressed this by adding the command to activate the virtualenv (using the virtualenv’s absolute path) to the ~/.bashrc of the user that we run Buildbot as. This way, we’ve gained the benefits of having our dependencies consolidated without adding the cost of an extra workflow step to remember.

Template changes

Most of Buildbot’s status pages worked fine after the upgrade, but the console view threw a template error because it couldn’t find any variable named “categories”. The fix was to simply copy the new template from venv/local/lib/python2.7/site-packages/buildbot/status/web/templates/console.html to my-buildbot/master/templates/console.html.

That’s it!

Rust currently has these updates on the development buildmaster, but not yet (as of 10/14/2015) in prod.

]]>
Wed, 14 Oct 2015 00:00:00 -0700
http://edunham.net/2015/09/29/carrying_credentials_between_environments.html http://edunham.net/2015/09/29/carrying_credentials_between_environments.html <![CDATA[Carrying credentials between environments]]> Carrying credentials between environments

This scenario is simplified for purposes of demonstration.

I have 3 machines: A, B, and C. A is my laptop, B is a bastion, and C is a server that I only access through the bastion.

I use an SSH keypair helpfully named AB to get from me@A to me@B. On B, I su to user. I then use an SSH keypair named BC to get from user@B to user@C.

I do not wish to store the BC private key on host B.

SSH Agent Forwarding

I have keys AB and BC on host A, where I start. Host A is running ssh-agent, which is installed by default on most Linux distributions.

me@A$ ssh-add ~/.ssh/AB     # Add keypair AB to ssh-agent's keychain
me@A$ ssh-add ~/.ssh/BC     # Add keypair BC to the keychain
me@A$ ssh -A me@B           # Forward my ssh-agent

Now I’m logged into host B and have access to the AB and BC keypairs. An attacker who gains access to B after I log out will have no way to steal the BC keypair, unlike what would happen if that keypair was stored on B.

See here for pretty pictures explaining in more detail how agent forwarding works.

Anyways, I could now ssh me@C with no problem. But if I sudo su user, my agent is no longer forwarded, so I can’t then use the key that I added back on A!

Switch user while preserving environment variables

me@B$ sudo -E su user
user@B$ sudo -E ssh user@C

What?

The -E flag to sudo preserves the environment variables of the user you’re logged in as. ssh-agent uses a socket whose name is of the form /tmp/ssh-AbCdE/agent.12345 to call back to host A when it’s time to do the handshake involving key BC, and the socket’s name is stored in me‘s SSH_AUTH_SOCK environment variable. So by telling sudo to preserve environment variables when switching user, we allow user to pass ssh handshake stuff back to A, where the BC key is available.

Why is sudo -E required to ssh to C? Because the directory /tmp/ssh-AbCdE is owned by me:me, and only its owner may read, write, or traverse it. Additionally, the socket itself (agent.12345) is owned by me:me, and is not writable by others.
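Concretely, the ownership and permissions look something like this (listing abbreviated):

me@B$ echo $SSH_AUTH_SOCK
/tmp/ssh-AbCdE/agent.12345
me@B$ ls -ld /tmp/ssh-AbCdE /tmp/ssh-AbCdE/agent.12345
drwx------ 2 me me 4096 /tmp/ssh-AbCdE
srw------- 1 me me    0 /tmp/ssh-AbCdE/agent.12345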

If you must run ssh on B without sudo, chown -R /tmp/ssh-AbCdE to the user who needs to end up using the socket. Making them world read/writable would allow any user on the system to use any key currently added to the ssh-agent on A, which is a terrible idea.

For what it’s worth, the actual value of /tmp/ssh-AbCdE/agent.12345 is available at any time in this workflow as the result of printenv | grep SSH_AUTH_SOCK | cut -f2 -d =.

The Catch

Did you see what just happened there? An arbitrary user with sudo on B just gained access to all the keys added to ssh-agent on A. Simon pointed out that the right way to address this issue is to use ProxyCommand instead of agent forwarding.

No, I really don’t want my keys accessible on B

See man ssh_config for more of the details on ProxyCommand. In ~/.ssh/config on A, I can put:

Host B
    User me
    Hostname 111.222.333.444

Host C
    User user
    Hostname 222.333.444.555
    Port 2222
    ProxyCommand ssh -q -W %h:%p B

So then, on A, I can ssh C and be forwarded through B transparently.

]]>
Tue, 29 Sep 2015 00:00:00 -0700
http://edunham.net/2015/09/10/ansible_conditional_role_dependencies.html http://edunham.net/2015/09/10/ansible_conditional_role_dependencies.html <![CDATA[Ansible: Conditional role dependencies]]> Ansible: Conditional role dependencies

I’ve recently been working on an Ansible role that applies to both Ubuntu and OSX hosts. It has some dependencies which are only needed on OSX. There doesn’t seem to be a central document on all the options available for solving this problem, so here are my notes.

Scenario

The role which must apply to both Ubuntu and OSX hosts builds a Rust compiler capable of cross-compiling to Android, so I call it crosscompiler. To run the crosscompiler role on a Mac, you need the xcode role installed, but applying the xcode role to an Ubuntu host will fail.

A simplified version of this setup looks like:

ansible-configs/
├── galaxy_roles.yaml
├── hosts
├── roles
│   ├── crosscompiler
│   │   ├── defaults
│   │   │   └── main.yaml
│   │   ├── meta
│   │   │   └── main.yaml
│   │   └── tasks
│   │       └── main.yaml
│   └── xcode
│       ├── defaults
│       │   └── main.yaml
│       ├── meta
│       │   └── main.yaml
│       └── tasks
│           └── main.yaml
└── site.yaml

Here are a bunch of different ways to avoid applying a Mac-specific task to an Ubuntu host, or vice versa. Note that any of the following steps in isolation will solve the problem – it should not be necessary to use more than one of them.

Check OS on each task of the role

Add the line when: ansible_os_family == 'Darwin' at the end of each task in roles/xcode/tasks/main.yaml.
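For one hypothetical task, that looks like:

- name: check for Xcode command line tools
  command: xcode-select --print-path
  when: ansible_os_family == 'Darwin'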

This needlessly bloats the code and makes it more difficult to read.

Refactor depended role to ignore non-target platforms

Move the entire contents of roles/xcode/main.yaml into roles/xcode/osx.yaml, then create a new main.yaml containing:

---
- include: osx.yaml
  when: ansible_os_family == 'Darwin'

This avoids the bloat induced by running the conditional on each task, while accomplishing the same goal. Now the xcode role looks like:

xcode
├── defaults
│   └── main.yaml
├── meta
│   └── main.yaml
└── tasks
    ├── main.yaml
    └── osx.yaml

This is the best solution for a role which might later expand to support additional platforms.

Make the dependency conditional in meta/main.yaml of depending role

Edit ansible-configs/roles/crosscompiler/meta/main.yaml so that the dependency on xcode reads:

---
dependencies:
  - { role: 'xcode', when: ansible_os_family == 'Darwin' }

This is the best solution when the inner role will only ever target one platform, as is the case with xcode.

Install role conditionally from site.yaml

Edit ansible-configs/site.yaml to read:

- name: Provision cross-compile hosts
  hosts: xcompilehosts
  roles:
    - { role: xcode, when: ansible_os_family == 'Darwin' }
    - crosscompiler

This is problematic because if I was to distribute the crosscompiler role on the Ansible Galaxy, its dependency logic would not be distributed to other users correctly.

TL;DR

You can conditionally include dependencies in your roles. It’s helpful to end users when galaxy roles only try to apply platform-specific tasks to their target platforms, since you can’t be sure how others will use your code.

]]>
Thu, 10 Sep 2015 00:00:00 -0700
http://edunham.net/2015/08/28/apache_licenses.html http://edunham.net/2015/08/28/apache_licenses.html <![CDATA[Apache Licenses]]> Apache Licenses

At the bottom of the Apache 2.0 License file, there’s an appendix:

APPENDIX: How to apply the Apache License to your work.

...

Copyright [yyyy] [name of copyright owner]

...

Does that look like an invitation to fill in the blanks to you? It sure does to me, and has for others in the Rust community as well.

Today I was doing some licensing housekeeping and made the same embarrassing mistake.

This is a PSA to double-check whether inviting-looking blanks are part of the appendix, which should be left as-is, before filling them out in Apache license texts.

]]>
Fri, 28 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/24/x240_trackpoint_speed.html http://edunham.net/2015/08/24/x240_trackpoint_speed.html <![CDATA[X240 trackpoint speed]]> X240 trackpoint speed

The screen on my X1 Carbon gave out after a couple months, and my loaner laptop in the meantime is an X240.

The worst thing about this laptop is how slowly the trackpoint moves with a default Ubuntu installation. However, it’s fixable:

cat /sys/devices/platform/i8042/serio1/serio2/speed
cat /sys/devices/platform/i8042/serio1/serio2/sensitivity

Note the starting values in case anything goes wrong, then fiddle around:

echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/sensitivity
echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/speed

Some binary-search-themed prodding, and a lot of tee: /sys/devices/platform/i8042/serio1/serio2/sensitivity: Numerical result out of range, confirmed that both files accept values between 0 and 255. Interestingly, setting them to 0 does not seem to disable the trackpoint completely.

If you’re wondering why the configuration settings look like ordinary files but choke on values bigger or smaller than a short, go read about sysfs.

]]>
Mon, 24 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/17/kangaroos.html http://edunham.net/2015/08/17/kangaroos.html <![CDATA[Folklore and fallacy]]> Folklore and fallacy

I was a student employee at the OSU Open Source Lab, on and off between internships and other jobs, for 4 years. Being part of the lab helped shape my life and career, in almost overwhelmingly positive ways. However, the farther I get from the lab the more clearly I notice how being part of it changed the way I form expectations about my own technical skills.

To show you the fallacy that I noticed myself falling into, I’d like to tell you a completely made-up story about some alphabetically named kangaroos. Below the fold, there’ll be pictures!

../../../_images/kangaroos1.jpg

Once upon a time, some kangaroos lived in a desert. One day, for some inscrutable marsupial reason, a bunch of young kangaroos got together to practice jumping. Since they’re not particularly creative creatures, they called the group jumping school.

../../../_images/kangaroos2.jpg

Every kangaroo who came to jumping school started out only being able to jump 1 foot, and got better at a rate of 1 foot per year, and stayed for 4 years.

A kangaroo called Aggie was one of the school’s first students. She came in only able to jump 1 foot, but she improved by 1 foot per year.

../../../_images/kangaroos3.jpg

At the start of the second year that the school was around, a kangaroo named Bill joined. Bill could only jump 1 foot when he started, and improved at a rate of 1 foot per year. But Aggie could always jump 2 feet farther than Bill while she was still in school, because they were both improving at the same rate.

../../../_images/kangaroos4.jpg

Nobody new joined for a while, and Aggie left after her fourth year to go cross roads in front of unwary motorists, but Bill stayed in school and kept improving. When she left school, Aggie could jump 5 feet. She knew she’d worked hard, and could always jump farther than Bill, so she felt pretty good about herself.

At the start of the school’s fourth year, after Aggie had left, a new student named Claire joined. Claire could only jump 1 foot at first, but improved at a rate of 1 foot per year.

../../../_images/kangaroos5.jpg

Claire and Bill would chat about school sometimes, and Claire observed that Bill could always jump 2 feet farther than her. When she commented on it, Bill said “If you think I can jump far, you should have seen Aggie! She was a student here before you came, and she could always jump 2 feet farther than me!”.

At the end of the school’s 6th year, Bill finished up and went away to fight in boxing matches. (When Bill left, he could jump 5 feet. He always suspected he could have done better, since he remembered Aggie always being able to jump just a bit farther than him.)

A new student, Dave, joined after Bill left. Dave started out being able to only jump 1 foot and was able to jump 5 feet by the time he’d been at the school for 4 years. Dave knew that Claire could always jump 2 feet farther than him while they were in school together. Dave heard stories from Claire about Bill and Aggie, who could both jump even farther than her!

../../../_images/kangaroos6.jpg

2 years after Dave joined, Claire left jumping school for a full-time job tearing up farmers’ crops and gardens. She knew she’d tried her best, and she could jump 5 feet after she’d been at school for 4 years, but she also knew that Bill had always been a better jumper than her and Aggie had been even better than Bill. Her 5 feet didn’t seem particularly impressive, since (she even double-checked her math on it!) Aggie would have been able to jump 4 feet farther, so that must have meant Aggie was a student who could jump 9 feet.

A couple of years later, Dave was finally finished with school. He’d come in only being able to jump 1 foot, and left being able to jump 5 feet! But he wasn’t sure if this was better or worse than normal, so he thought about the other kangaroos who’d also gone to the school.

../../../_images/kangaroos7.jpg

Dave knew that Claire was a student, and she was able to jump 2 feet farther than him whenever they studied together. Bill was a student who could jump 2 feet farther than Claire, and Aggie was a student who could jump 2 feet farther than Bill! Since Dave could jump 5 feet, he concluded that Claire could jump 7 feet, Bill could jump 9, and Aggie could jump 11! Not only was Dave the worst of the lot, he wasn’t even half as good as the school’s first student! He felt pretty bad about his accomplishments, and wondered why students were getting worse every year.

Epilogue

../../../_images/kangaroos8.jpg

Once upon a somewhat later time, Jumping School had a reunion and all of the former students attended. Claire and Dave were a little afraid of meeting Aggie, since they’d heard such impressive stories of how far she could jump. All 4 kangaroos tested how far they could jump, just for old times’ sake, and they found that they could all jump about 6 feet! They compared stories about their experiences since leaving school, and found that their rates of improvement had slowed down as they got closer to the limit of how far their species was able to jump.

In reality, red kangaroos only jump about 1.8 meters, which is the only factually accurate part of this entire story.

The Moral

I think that a similar effect distorted my perception of my own competence when I compared myself to past OSL students. One can spot the fallacy pretty easily when everything is spelled out with cute photos: Relative skill levels don’t translate reliably into absolute ones over time. It’s trickier to spot the same fallacy in real life, but you might have an easier time now that you’ve seen the pattern once before.


]]>
Mon, 17 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/17/rustcamp_videos_are_available.html http://edunham.net/2015/08/17/rustcamp_videos_are_available.html <![CDATA[RustCamp videos are available]]> RustCamp videos are available

The videos from RustCamp are available here.

I asked Gankro what was up with the milkshake thing in his talk, and learned about this meme.

]]>
Mon, 17 Aug 2015 00:00:00 -0700
http://edunham.net/2015/08/09/don_t_starve.html http://edunham.net/2015/08/09/don_t_starve.html <![CDATA[Don't Starve]]>

Don’t Starve

It was a lazy Sunday afternoon and I wanted to play Don’t Starve. This actually ended up meaning about 3 hours of intermittent troubleshooting and 1 hour of games, because Linux.

Get the files

I bought Don’t Starve from the Humble Bundle store, although there are other methods of obtaining it which strike a different balance between cost and convenience.

The downloaded file is dontstarve_x64_july21.tar.gz.

This Just Works

$ yaourt -S libcurl-compat
$ tar -xvf dontstarve_x64_july21.tar.gz
$ cd dontstarve/bin
$ LD_PRELOAD=libcurl.so.3 ./dontstarve

Below the fold is the troubleshooting process I went through to make it look so easy. Hopefully it’ll be of assistance to those searching for the errors that I ran into!

This part has SEO for all the intermediate errors

Try to run the script

$ cd dontstarve
$ ./dontstarve.sh
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number

(updater:2135): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='latin'

(updater:2135): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='common'
./dontstarve.sh: line 2: unexpected EOF while looking for matching ``'
./dontstarve.sh: line 4: syntax error: unexpected end of file

Look what doesn’t like me! The fonts are sad. Spoiler: You don’t actually need those fonts at all.

Much Frustration

It turns out that the ./dontstarve.sh in the root of the unzipped dontstarve directory is solely an updater, which is totally irrelevant if you just want to play whatever version of the game they happened to ship you. The updater requests a game key, has hideously broken fonts, and generally sprouts new errors like a Hydra every time you think you’re making progress.

Try running the correct dontstarve executable

The correct script to run actually lives in bin/dontstarve.sh, but if you try to run it from outside that directory, it can’t find other files:

$ bin/dontstarve.sh
bin/dontstarve.sh: line 3: ./dontstarve: No such file or directory

Cute. So instead:

$ cd bin
$ ./dontstarve.sh
./dontstarve: /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by ./dontstarve)

OR:

$ bin/dontstarve # Spoiler, this will still be broken later
bin/dontstarve: /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by bin/dontstarve)

Fixing the libcurl error

Get libcurl-compat from the AUR:

$ yaourt -S libcurl-compat

At the end of its installation process, it’ll give you a helpful little warning about preloading the library:

Sometimes you have to preload library!
 e.g. if you see this message:
 /usr/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found'
 Do this:
 LD_PRELOAD=libcurl.so.3 youprogname

Run dontstarve with the right libcurl

$ LD_PRELOAD=libcurl.so.3 bin/dontstarve

Hello Segfaults

And then we get a nice reproduceable segfault, ending in:

ERROR: Missing Shader 'shaders/font.ksh'.
Assert failure '0' at ../source/renderlib/OpenGL/HWEffect.cpp(86)

Assert failure 'BREAKPT:' at ../source/renderlib/OpenGL/HWEffect.cpp(86)

Assert failure 'datasize + mReadHead <= mBufferLength' at ../source/util/reader.h(28)

Assert failure 'BREAKPT:' at ../source/util/reader.h(28)

Segmentation fault (core dumped)

This is NOT the cue to go shave a graphics card yak, despite what Googling the error would lead one to believe. This is the cue to:

$ cd bin
$ LD_PRELOAD=libcurl.so.3 ./dontstarve

If it’s stupid, but it works, it ain’t stupid. Running the executable from the right directory makes the game work on an X230 and on an X1 Carbon, so in my (un)professional opinion, it’s got nothing to do with special fancy graphics drivers.

It works!

Scroll way back up to the top for the short version. Have fun, and Don’t Starve!

P.S.

I tried this with the x32 version. It goes all:

~/Downloads/dontstarve/bin $ LD_PRELOAD=libcurl.so.3 ./dontstarve
bash: ./dontstarve: No such file or directory

The script and executable have the same permissions when unzipped from the 32- or 64-bit tarballs... Just, the 32-bit one doesn’t work. There’s probably a good reason for this: “No such file or directory” for a binary that plainly exists usually means the kernel can’t find the 32-bit ELF interpreter (/lib/ld-linux.so.2), which is missing on a 64-bit system without 32-bit compatibility libraries. Since the x64 variant of the game runs just fine, I didn’t dig into the x32’s malfunctions any deeper.

]]>
Sun, 09 Aug 2015 00:00:00 -0700
http://edunham.net/2015/07/31/how_many_rust_channels_are_there.html http://edunham.net/2015/07/31/how_many_rust_channels_are_there.html <![CDATA[How many Rust channels are there?]]> How many Rust channels are there?

I’m using search.mibbit.com to count these. All have at least one user in them as of 4pm PST 2015-07-31.

There are 54 Rust-related channels on irc.mozilla.org.

List below the fold.

42 General and Project channels:

##rustfmt
#cargo
#hematite (Minecraft-in-Rust)
#hyper (an HTTP library, https://github.com/hyperium/hyper)
#iron (a web framework, https://github.com/iron/iron)
#mio
#rust
#rust-api
#rust-apidesign
#rust-audio
#rust-bikeshed
#rust-bots
#rust-casino
#rust-community
#rust-config
#rust-crypto
#rust-data
#rust-design
#rust-dev
#rust-diverse
#rust-fsnotify
#rust-gamedev
#rust-internals
#rust-lang
#rust-learners
#rust-libs
#rust-music
#rust-newspeak
#rust-osdev
#rust-politics
#rust-tls
#rust-tools
#rust-triage
#rust-tty
#rust-war
#rust-webdev
#rust-workshop
#rust-ww
#rust_offtopic
#rustaudio
#servo
#winapi

3 social channels:

#rust-chat
#rust-offtopic
#rust-offtopic-offtopic

9 apparently language- or location-specific channels:

#rust-br
#rust-de
#rust-fr
#rust-hu
#rust-learners-de
#rust-nyc
#rust-ru
#rust-seattle
#rust.fi
]]>
Fri, 31 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/28/good_times.html http://edunham.net/2015/07/28/good_times.html <![CDATA[Good times]]> Good times

People sometimes say “morning” or “evening” on IRC for a time zone unlike my own. Here’s a bash one-liner that emits the correct time-of-day generalization based on the datetime settings of the machine you run it on.

case $((10#$(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo good $m

How?

First, check if the feature is already implemented. man date and try not to giggle. Search for morning. It’s not there.

So we need a switch/case:

case EXPRESSION in CASE1) COMMAND-LIST;; CASE2) COMMAND-LIST;; ... CASEN) COMMAND-LIST;; esac

And the expression we’re switching on will be the current hour:

date +%H

My first attempt does not work because I expect too much of Bash:

case $(date +%H) in [0-12]) m="morning";;[13-18]) m="afternoon";;[19-21])m="evening";;*)m="night";;esac; echo $m

It fails because the “ranges” are actually just shell patterns.

I could either expand my script to handle all hours, or compress ranges of hours down into something that can be expressed by patterns. The latter sounds shorter and easier. I want to divide the current hour by 6, to tell which quarter of the day I’m in.

A bit of trial and error reveals that a syntax that allows me to do math on the result of date is:

$(( 10#$(date +%H)/6 ))

because $(( ... )) is arithmetic expansion: the shell evaluates the math and substitutes the result in place. The 10# prefix forces base-10 interpretation, which matters because date zero-pads the hour and Bash would otherwise try to parse 08 and 09 as (invalid) octal constants. This only adds a few characters to the one-liner:

case $((10#$(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo $m

That’s it!

]]>
Tue, 28 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/20/printing.html http://edunham.net/2015/07/20/printing.html <![CDATA[Printing]]> Printing

The office printers have instructions for setting them up under Windows, Mac, and Ubuntu. I had forgotten how to wrangle printers, since the last time I had to set up new ones was half a decade ago when I first joined the OSL.

Setting up printers on Arch is easy once you know the right incantations, but can waste some time if you try to do it by skimming the huge wiki page rather than either reading it thoroughly or just following these steps:

Install the CUPS client:

$ yaourt -S libcups

Add a magic line to /etc/cups/cups-files.conf:

SystemGroup username

Replace username with your own username on the system; this assumes you have root, and that you’ll log in as yourself in the dialog it prompts for. That line can go anywhere in the file.

Make the daemon go:

$ sudo systemctl enable org.cups.cupsd.service
$ sudo systemctl start org.cups.cupsd.service

Visit the web interface at http://localhost:631.

Then you have a GUI sufficiently similar to the one in the instructions for Ubuntu!

There is no GUI client for CUPS to install. If you find yourself mucking about with gpr, xpp, kdeprint, or /etc/cups/client.conf, you have gone way too far down the wrong rabbit hole.

]]>
Mon, 20 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/17/replacing_buildbot_s_outdated_cert.html http://edunham.net/2015/07/17/replacing_buildbot_s_outdated_cert.html <![CDATA[Outage postmortem: Replacing Rust Buildbot's outdated cert]]>

Outage postmortem: Replacing Rust Buildbot’s outdated cert

At the end of the day on July 14th, 2015, the certificate that Rust’s buildbot slaves were using to communicate with the buildmaster expired. This broke things. The problem started at midnight on July 15th, and was only fully resolved at the end of July 16th. Much of the reason for this outage’s duration was that I was learning about Buildbot as I went along.

Here’s how the outage got resolved, just in case anyone (especially future-me) finds themself Googling a similar problem.

Troubleshooting

Dave Huseby pointed out the problem on IRC when the slaves that he runs were unable to connect to the buildmaster:

16:48:23 <&brson> edunham: dhuseby said this earlier <huseby> it seems like the verify=3 in the stunnel config is the problem
16:48:39 <&brson> if he changed 'verify' to some other value in the stunnel config it worked

A quick check of the stunnel docs shows that verify=3 is the strictest setting, and will fail if the locally installed cert isn’t right. This supports the hypothesis that our cert might be expired. On the buildmaster, I found the cert and examined its metadata:

$ find . -type f -name "*.pem"
$ openssl x509 -noout -issuer -subject -dates -in certname.pem

On the old cert, the results contained:

$ openssl x509 -noout -issuer -subject -dates -in rust-bot-cert.pem
issuer= /
    O=Rust Project/
    OU=Bot/
    CN=bot.rust-lang.org/
    emailAddress=admin@rust-lang.org
subject= /
    O=Rust Project/
    OU=Bot/
    CN=bot.rust-lang.org/
    emailAddress=admin@rust-lang.org
notBefore=Jul 14 02:28:50 2012 GMT
notAfter=Jul 14 02:28:50 2015 GMT

This tells me that the cert was created in 2012 and had its expiry set for the seemingly distant future of 2015.

Make a New Cert

To determine whether the old key had a passphrase on it, run openssl rsa -check -in keyname.pem. It writes the private key to your terminal if there’s no passphrase, or prompts for one if the key has a password.

The stunnel docs give most of the relevant incantation. Since no file in our buildbot directory is named precisely stunnel.conf, make cert doesn’t quite work right. But it works fine to manually run a variant of the command given in the docs:

$ openssl req -new -x509 -days 3650 -nodes -out cert.pem -keyout key.pem

That prompted me for a variety of information, which I entered where applicable and left blank where it wasn’t. The metadata is primarily for the benefit of others verifying that a cert belongs to the correct person, which isn’t a relevant concern in our use case.

I then backed up the old key and cert (although they’re no longer usable, they contain a bunch of metadata that I didn’t know whether I’d need later) and moved the new key and cert to match the old ones’ original file names.

Finally, I updated the repository with the new certificate.

Make Buildbot spin up AMIs with the new cert

This was the tricky bit. Since the slave image does not pull updates to its copy of the Rust Buildbot github repo when it boots, the file had to be statically edited and then the AMIs re-saved. But Buildbot makes its instance requests based on AMI ID, and the IDs are unique to a particular image. So the workflow goes:

  • Figure out which AMI Buildbot will spin up for a given job
  • Spin up an instance of the AMI
  • Remote into it and manually update the cert
  • Save the instance into a new AMI, noting its ID
  • Update Buildbot’s configuration with the new ID

Figure out which AMI Buildbot will use

The AMI IDs used for spot requests are stored in /home/rustbuild/rust-buildbot/master/slave-list.txt on the buildmaster. From that file I determined that we only had 4 unique AMIs in use:

ami-b74fa1f3 -- windows
ami-7fd23e3b -- generic linux
ami-381e197d -- android
ami-dbac5f9f -- centos5 (builds snapshots that work with ancient Linux)

The slave list also told me that all requests would be made for instance type c3.2xlarge.

Spin up an instance of the AMI

Since there were only 4, I did this manually. If it was a recurring task or there had been more AMIs, I would have automated this part of the process.

In the AWS Console, go to EC2, then click the AMIs link under “Images” at the left.

Search for the AMI ID in the search box. Only one AMI is found, because IDs are unique. Then click the big blue Launch button up at the left.

../../../_images/amis.png

The only “gotcha” in the ensuing 7-step process is making sure to put the instance into the correct security group. I spun my temporary instances up into the same group as a host I know I can get to from the Bastion server with my credentials, to reduce the number of steps I’d have to troubleshoot if they were difficult to access.

It also helps to tag the spot request with the name of the AMI it was created from, when processing several at once.

Remote into the instance and update the cert

For Linux-flavored instances, this just meant ssh rustbuild@00.00.00.00 (using the instance’s public IP, visible in the main instances list) from the Bastion. I then found the old cert on the host, and verified that it matched the old cert on the buildmaster. Checking that the certs matched was as simple as running md5sum cert.pem on both and visually comparing the results, and reassured me that I was overwriting the correct file.
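
For the record, openssl can also compare certs by fingerprint, which is a little harder to misread than a raw checksum; run the same command on both hosts and compare the output:

$ openssl x509 -noout -fingerprint -in cert.pem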

Getting into the Windows hosts requires using RDP after setting up an SSH tunnel to the bastion. Since I was on airport wifi at the time, I had a teammate stick the cert onto the Windows instances instead.
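
The tunnel itself is ordinary SSH port forwarding. As a sketch, with a placeholder internal IP for the Windows instance and a placeholder bastion login, it looks something like the following; an RDP client pointed at localhost:3389 then reaches the Windows host:

$ ssh -L 3389:10.0.0.5:3389 user@bastion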

The cert that stunnel actually uses on a Windows host with our configurations lives at C:\Program Files (x86)\stunnel\cert.pem, not in the repo checkout like on all the sensible operating systems. Although there exists a C:\bot\cert.pem, replacing it does not cause stunnel to connect successfully.

Save the instance into a new AMI

Check the box by the instance’s name in the EC2 instances list, then follow the menus around for Actions -> Image -> Create Image. Note the new AMI’s ID, and replace all instances of the old AMI’s ID with the new one in the buildmaster’s slave-list.txt.
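
The find-and-replace is a quick sed job on the buildmaster; here ami-xxxxxxxx stands in for whichever new ID create-image just reported:

$ sed -i 's/ami-7fd23e3b/ami-xxxxxxxx/g' slave-list.txt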

Kick Buildbot a bit

After the AMIs and FreeBSD, Bitrig, and Mac builders all had the new cert, I restarted Buildbot on the buildmaster and reran its script for creating the stunnels. Although it didn’t gracefully pick up where it had left off on partially built pull requests, closing then re-opening the PRs caused it to notice them and resume building successfully.

Hopefully TaskCluster gets OSX support soon, so we can start switching off of Buildbot.

Prevent it from happening again

After I first published this post, Gerv pointed out that the correct final step would be “Add an alarm to the shared IT calendar for a month before the new cert expires”. In my case, the analog to that alarm is “Make sure we move away from Buildbot in less than a decade”. However, if you’re reading this post to solve a similar problem in an infrastructure that will still exist at the date of the cert’s expiry, you should automate a reminder so that you or your successor doesn’t get the same unpleasant surprise.
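
If you’d rather have the machine do the remembering, openssl x509 -checkend exits nonzero when a cert is within the given number of seconds of expiring, which makes for a trivial cron or monitoring check. A sketch, assuming a working local mail setup; swap in whatever alerting you actually use:

# warn if cert.pem expires within 30 days (2592000 seconds)
$ openssl x509 -checkend 2592000 -noout -in cert.pem \
    || echo "buildbot cert expires soon" | mail -s "renew cert" admin@rust-lang.org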

]]>
Fri, 17 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/16/airport_wifi.html http://edunham.net/2015/07/16/airport_wifi.html <![CDATA[Airport Wifi]]> Airport Wifi

Many “free” wifi hotspots give you a limited time per computer. If you’re traveling light and forgot to bring extra devices, it’s easy to give a Linux laptop multiple personalities:

$ ip link
    1: lo
    2: wlp4s0
    3: enp0s25
$ ip link set dev wlp4s0 down
$ macchanger -r wlp4s0
$ ip link set dev wlp4s0 up

... And then connect to the wifi and jump through its silly captive portal hoops again!

Changing your MAC address occasionally can be part of a healthy security diet, making your device slightly more difficult to track, as well.
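
If this becomes a habit, the whole dance fits in a small shell function. A sketch, assuming macchanger is installed and you’re running with root privileges; the interface name will vary by machine:

newmac() {
    iface="${1:-wlp4s0}"            # default to the wifi interface shown above
    ip link set dev "$iface" down   # interface must be down to change its MAC
    macchanger -r "$iface"          # assign a random MAC address
    ip link set dev "$iface" up     # bring it back up; reconnect to the portal
}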

]]>
Thu, 16 Jul 2015 00:00:00 -0700
http://edunham.net/2015/07/13/interactive_rust_examples_in_static_pages.html http://edunham.net/2015/07/13/interactive_rust_examples_in_static_pages.html <![CDATA[Interactive Rust Examples in Static Pages]]> Interactive Rust Examples in Static Pages

Rust by Example has a little box where readers can interact with some example Rust code, run it using the playground, and see the results in the page. As a sysadmin I’m loath to recommend that anybody trust the playground for anything, but as a nerd and coder I recognize that it’s super cool and people want to use it.

There are 2 ways to stuff a Playground into your website: the easy way, and the “right” way. Here’s how to do it the easy way, and where to look for examples of the “right” (and harder) way.

The Easy Way

There are these cool things called iframes that basically let you put websites in your websites...

[Image: xzibit.jpg]

All you have to do is stick this one line in your page’s source, and modern browsers will go load and inject the specified page – in this case, the playpen. The exact technique for injecting raw HTML will differ based on your blogging platform. With Tinkerer, the source will look like this:

.. raw:: html

    <iframe src="https://play.rust-lang.org/" style="width:100%; height:400px;"></iframe>

That’ll render with the default contents of the playpen. Note that it’ll sometimes try to be clever and load the code that the viewer most recently opened in it when loaded with no arguments: