Oh yeah. I’m over a week late at this point, but…I am now officially an MIT alumnus. (I’d get a better picture, but I sent my diploma home with my parents for safe keeping)

Can’t say I really feel any different, but life keeps moving on. I move to the Bay Area on July 13th (officially less than a month from now!), and I start at MokaFive on July 19th. In the meantime, I’m having fun working as a full-time intern on some of Ksplice’s internal infrastructure.

I suspect it won’t be until I actually move out to California and start being a real “Real Person” that I’ll really start to notice, so I’m going to try to spend the next month being as unstressed as possible and enjoying everything I’ll be leaving behind here.

I upgraded my laptop to Snow Leopard yesterday, and one thing I’m still reeling from is the changes to Kerberos. I’m not usually one to fault developers for wanting to move forward at the cost of compatibility, especially for rarely used features, so when I found that Apple made substantial changes to the user-facing side of Kerberos, I started updating my scripts and configuration to catch up.

I still have a few more bugs to track down, so if you know how to solve either of these, let me know:

Automatically renewing tickets.
/System/Library/LaunchAgents/com.apple.Kerberos.renew.plist appears to be a launchd job to do this, but I can’t figure out what’s supposed to trigger it.
Triggering code on ticket acquisition/renewal.
On older versions of OS X, you could set the libdefaults.login_logout_notification option in /Library/Preferences/edu.mit.Kerberos to make Kerberos call into a bundle in /Library/Kerberos Plug-Ins, but that doesn’t seem to work on Snow Leopard – the Login and Logout Notification API appears to be gone.

In the meantime, after two hours of source diving, I have solved one of my major bugs with Kerberos: de-stickifying options passed to kinit.

kinit seems to have gotten a lot of TLC in Snow Leopard—it appears to have been substantially re-written to take advantage of the Kerberos Identity Management API, which looks to be an attempt to genericize everything that made Kerberos on OS X special (multiple credential caches, system Keychain integration, etc.).

Unfortunately, one of the features it gained through this API was the ability to remember Kerberos ticket settings. In Leopard, if you changed ticket parameters in the “New Tickets” window of Kerberos.app (such as the duration of the tickets or the flags on them), those changes would be remembered the next time you used Kerberos.app to get tickets. But if you changed ticket parameters by passing flags to kinit, they would only work for one invocation.

But now, if I pass a flag like -l1m (get tickets that last one minute), that sticks across kinit invocations:

fanty:~ evan$ kinit -l1m -r5m broder
fanty:~ evan$ klist
[...]
Valid Starting     Expires            Service Principal
06/13/10 00:43:14  06/13/10 00:44:13  krbtgt/ATHENA.MIT.EDU@ATHENA.MIT.EDU
	renew until 06/13/10 00:48:13
fanty:~ evan$ kdestroy
fanty:~ evan$ kinit broder
fanty:~ evan$ klist
[...]
Valid Starting     Expires            Service Principal
06/13/10 00:43:26  06/13/10 00:44:26  krbtgt/ATHENA.MIT.EDU@ATHENA.MIT.EDU
	renew until 06/13/10 00:48:26

I found this undesirable, because when I pass flags to kinit, I want them to be for that invocation only, and any other time, I want my “defaults” – i.e. tickets that last as long as possible.

It turns out that the KIM API implementation on OS X is backed by the standard OS X preferences API, and writes its settings to ~/Library/Preferences/edu.mit.Kerberos.IdentityManagement.plist. In particular, it checks the RememberCredentialAttributes key to determine whether or not to store the preferences that are passed in. So just run

defaults write edu.mit.Kerberos.IdentityManagement RememberCredentialAttributes -bool false

to disable this feature. (Replace false with true if you want to undo the change.)

If you want to follow the maze of twisty passages yourself, you can grab KerberosLibraries-81.46.1.tar.gz from Apple Open Source. Here’s the relevant call chain (except for kinit, which is in KerberosClients/kinit/Sources, all paths are relative to KerberosFramework/Kerberos5/Sources):

  • main (kinit.c:591)
  • kim_ccache_create_new (kim/lib/kim_ccache.c:214)
  • kim_ccache_create_new_with_password (kim/lib/kim_ccache.c:234)
  • kim_credential_create_new_with_password (kim/lib/kim_credential.c:411)
  • kim_credential_remember_prefs (kim/lib/kim_credential.c:224)
  • kim_preferences_create (kim/lib/kim_preferences.c:620)
  • kim_preferences_read (kim/lib/kim_preferences.c:433)
  • kim_os_preferences_get_boolean_for_key (kim/lib/mac/kim_os_preferences.c:412)
  • kim_os_preferences_copy_value (kim/lib/mac/kim_os_preferences.c:205)
  • kim_os_preferences_copy_value_for_file (kim/lib/mac/kim_os_preferences.c:142)

Around here the linearity of the call chain starts to break down, but kim_os_preferences_get_boolean_for_key gets called with kim_preference_key_remember_options, which gets passed to kim_os_preferences_cfstring_for_key in kim_os_preferences_copy_value, returning CFSTR("RememberCredentialAttributes").

Eventually CFPreferencesCopyValue gets called asking for the “RememberCredentialAttributes” key in “edu.mit.Kerberos.IdentityManagement”, checking current-host/current-user, any-host/current-user, current-host/any-user, and any-host/any-user configuration settings, in that order.

This is one of my favorite recipes ever, which I was reminded of when we spotted Andouille sausage at the grocery store today, after a few months of not being able to find it.

I love this recipe most of all because it’s delicious, but also because it cooks fairly quickly, doesn’t require any babysitting for most of the cooking time, and you only have to chop the first few ingredients you use. Since I never remember to cut everything up before I start cooking, it’s nice when the recipe forces me to.

I got this recipe from my mom, but I don’t have any information on where she got it from.

Spicy Jambalaya

12 ounces Andouille sausage, sliced
1 onion, chopped
1 green pepper, chopped
1 rib celery, chopped [1]
1.5 tbsp cajun seasoning [2]
1 bay leaf
14.5 ounces stewed tomatoes
3 cups chicken broth
1.5 cups long grain white rice

  1. Cook andouille sausage, onion, green pepper, and celery in a large non-stick saucepan over medium-high heat for three minutes.
  2. Add cajun spiced seasoning and bay leaf; cook 5 minutes.
  3. Add stewed tomatoes and chicken broth, and stir in rice.
  4. Simmer, covered, 15 – 20 minutes, until rice is tender and excess liquid has boiled off.
  5. Remove bay leaf and serve.

Serves 6.

[1] I find celery to be an incredibly boring food, so I’ve never actually included the celery when I make this.
[2] The dish will suffer if you don’t have a good cajun seasoning. I’ve never been disappointed by the cajun seasoning from McCormick’s Gourmet collection.

So, secretly, in spite of not saying anything about it on here, I’ve spent the last two months or so job hunting. The whole process was definitely an exciting and fun experience, but also really exhausting and stressful, so I’m glad it’s over. All in all, I applied to 9 companies, interviewed on-site at 7, and received offers from 6. So, a pretty good take, I suppose, and definitely very self-validating.

I want to try and pull some comments together on the process at some point, but for now I’ll just talk about the job itself.

On Wednesday, I accepted a position with MokaFive, in Redwood City, CA. I’ll be working in the Office of the CTO, under John Whaley. We haven’t finalized my start date yet, but it’ll likely be late July or early August.

MokaFive, of course, isn’t a Facebook or a Google, and not many people have necessarily heard of them. So, in a buzzword-loaded sentence, MokaFive is selling distributed, centrally-managed desktop virtualization. Phew! Let’s try to do that with a few more words.

Let’s say you work in IT at a company. And, like most companies, you have some software you want your employees to be able to use. But for whatever reason, people want to use their own computers instead of yours – maybe you’re not providing them with a computer because they’re a short-term contractor, or maybe it’s just personal preference. But managing software in a heterogeneous environment like people’s personal computers is basically impossible.

MokaFive’s platform lets you wrap up that software in a VM, which your employees can then download and run on any platform – Windows, Mac, or Linux. You keep the ability to centrally manage those VMs, but your employees can still use them where they want, when they want, and without needing internet connectivity.

The point, though, is that I’m really excited about MokaFive, because the company is basically tailor-made for my background, and lets me keep doing the sorts of virtualization stuff that I’ve been doing.

For today’s post, I thought I’d survey what software projects related to virtualization name themselves.

It turns out that most projects have cute, but relatively unoriginal, names, formed by finding a word with “vert” and changing it to “virt”.

To collect today’s list, I started with sed -ne 's/vert/virt/p' /usr/share/dict/words, which gave me 433 words.

To pare down the list a bit, I used Google’s web search API to ignore words that had no Google results. That got the list down to 125 words.

Finally, I filtered the list down to things that I actually thought were reasonable project names (for instance, “advirt” might be a reasonable project; “incontrovirtible” and “ovirtalkative”, not so much). This admittedly subjective sampling got me down to 27 words that seemed worth exploring further.
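For the curious, that first sed step can be sketched in a few lines of Python. (This is just an illustration: virtify is my own name, and I’ve used a handful of stand-in words rather than reading the full /usr/share/dict/words.)

```python
import re

def virtify(words):
    """Mimic `sed -ne 's/vert/virt/p'`: for each word containing "vert",
    emit it with the first "vert" turned into "virt"; skip the rest."""
    return [re.sub("vert", "virt", w, count=1) for w in words if "vert" in w]

# A few sample dictionary words standing in for /usr/share/dict/words:
sample = ["advert", "convert", "cover", "vertex", "introvert"]
print(virtify(sample))  # ['advirt', 'convirt', 'virtex', 'introvirt']
```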

And so, without further ado, here is a sample of names for today’s vertvirtualization projects:

  • ConVirt: ConVirt has seen a lot of evolution. It started life as a Linux desktop graphical Xen management application, a la virt-manager, originally called XenMan (you can still see some evidence of this at their old website). Since then, it’s been renamed to ConVirt, enhanced to support KVM (using libvirt), and moved into a rich web application. It’s still available under the GPL, but Convirture, the company supporting ConVirt development, is selling some advanced features along with their paid support.
  • Divirt: Divirt is a project to create virtual networks, allowing geographically disparate virtual machines to act as if they’re all connected by a LAN. Doesn’t really look like it ever got off the ground, though.
  • ExtraVirt: ExtraVirt was a project from UMich to detect processor errors by running the same system synchronously in multiple VMs, regulating non-deterministic inputs, and comparing the output. All I can find is a single two-page brief on the project.
  • IntroVirt: Another project from the same group at UMich, IntroVirt was a system for both active intrusion detection and post-facto intrusion analysis. It used a bunch of tests from the host to monitor suspicious activity.
  • Invirt: The super awesome project from the MIT SIPB. Invirt is a full-stack, multi-host, Xen-based management platform targeted at semi-public deployments with a web-based control interface. It has per-machine access control and quotas, and supports creation, deletion, installation, and general administration through the web interface, including an autoinstaller for Debian and Ubuntu. The primary deployment of Invirt, the XVM service, provides free VMs for the MIT community. XVM is currently running 246 separate VMs on 4 physical servers.
  • oVirt: Similar to ConVirt (at least in its current form), oVirt is a RedHat-sponsored web-based virtualization management platform. In contrast to something like Invirt that’s designed for building a “public cloud”, oVirt is designed for building more of a “private cloud”, where all of the VMs are managed by the same person. oVirt was one of the first projects to heavily utilize libvirt, and both projects come from the same group in RedHat.
  • ReVirt: From the group that brought you ExtraVirt and IntroVirt, ReVirt is a trustworthy execution logger. It logs a VM’s execution for later replay. Like IntroVirt, it seems to be primarily designed for intrusion detection and analysis.
  • SubVirt: Yet another paper from UMich, this paper was somewhat groundbreaking research into malicious virtualization technology. The SubVirt project developed proof-of-concept “VMBRs” (virtual-machine based rootkits), which installed themselves as a hypervisor on machines, transparently turning the OS previously running on bare metal into a virtual machine.
  • TransVirtual Systems: These guys sell a compatibility layer that lets you run ancient software from Wang VS on a modern UNIX platform.
  • Xilinx Virtex: While not virtualizing the same layer as the other projects and products here, FPGAs are basically virtualization for silicon, letting you literally create new digital logic on the fly, and Xilinx’s Virtex series of FPGAs is the top of the line.
  • Virtigo: I’m sort of cheating here, because Google won’t help you find this one. When I left my internship at Google, I decided to pull the virtualization testing framework I was working on out from the larger body of work it was originally included in, and Virtigo is what I decided to call it. If I ever have the time to pull the project back together, it’ll live at virtigo.org.

In addition to those names, though, there are actually some names that haven’t yet been taken:

  • advirt
  • ambivirt
  • antevirt
  • chetvirt
  • controvirt
  • covirt
  • culvirt
  • discovirt
  • evirt
  • obvirt
  • pervirt
  • povirty
  • retrovirt
  • virtebra
  • virtical

So – what are you going to name your next virtualization-related project or product?

Magic SysRqs

One of the most powerful ways to debug or recover Linux is through the Magic SysRq keys.

Normally, if you’re using a serial console, you send BREAK, followed by the command you want. However, this doesn’t work with the Xen paravirtualized serial console driver. Instead, you have to use Ctrl+o, followed by the command you want.

For instance, to do an emergency sync, press Ctrl+o, then s.

Note that you use Ctrl+o instead of BREAK for both dom0 and domU serial consoles – they both are using the Xen paravirtualized driver.

I live in a really awesome apartment. I’m living with really awesome people. And we tend to err on the side of awesome when it comes to buying stuff for the apartment. Specifically, we’re big fans of communalism – we do communal groceries, communal furniture, whatever.

But when all four of us are paying for stuff for all four of us, it does make keeping track of money a little tricky. The traditional solution is for everybody to stuff their receipts into a drawer, and every month you all sit down and slowly work your way through them.

“Each of us owes you $50 for your grocery run, and you owe me $20 for mine.”

It’s a painful process, and the difficulty scales super-linearly as you add more people. Even with four, it would be pretty bad.

Now, it turns out that programmers hate tedious tasks like that, so there’s a long history amongst my friends of programmatically solving this in various ways. When we moved into this apartment, I figured I’d try my hand at it, and BlueChips is the result.

Since we set it up, BlueChips has been used by us and by other roommate setups to manage their expenses. We use it for tracking everything – rent, utilities, groceries, furniture, when all of us go out for dinner…

BlueChips has a very simple data model. There are users. A user can move money in two ways: expenditures and transfers. In an expenditure, one person spends money on behalf of a bunch of people. As a result of the expenditure, each of those people owes the spender some amount of money. BlueChips lets different people owe different amounts as the result of a single expenditure. For example, when we pay rent, each of us pays a different percentage, and BlueChips can follow that.

BlueChips’ biggest feature, though, is its ability to calculate the transfers necessary to settle the books. When it makes this calculation it also does something we call “pushing transfers”. Let’s say Larry owes Moe $1, and Moe owes Curly $1. BlueChips can “push” the $1 through, and will tell you that, to settle the books, Larry should give Curly $1.
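To make the idea concrete, here’s a toy Python sketch of one greedy way to do this kind of settling – this is not BlueChips’ actual code, and the settle function and tuple format are purely illustrative. It nets out everyone’s balance, then repeatedly matches a debtor with a creditor, which is exactly what “pushes” Larry’s dollar through Moe to Curly:

```python
from collections import defaultdict

def settle(debts):
    """Given (debtor, creditor, amount) triples, return a short list of
    transfers that settles everyone's *net* balance."""
    net = defaultdict(int)  # positive balance = this person is owed money
    for debtor, creditor, amount in debts:
        net[debtor] -= amount
        net[creditor] += amount

    owes = sorted((p, -b) for p, b in net.items() if b < 0)
    is_owed = sorted((p, b) for p, b in net.items() if b > 0)

    transfers = []
    while owes and is_owed:
        debtor, debt = owes.pop()
        creditor, credit = is_owed.pop()
        amount = min(debt, credit)
        transfers.append((debtor, creditor, amount))
        if debt > amount:       # debtor still owes someone else
            owes.append((debtor, debt - amount))
        if credit > amount:     # creditor is still owed by someone else
            is_owed.append((creditor, credit - amount))
    return transfers

# Larry owes Moe $1 and Moe owes Curly $1: the dollar gets pushed through.
print(settle([("Larry", "Moe", 1), ("Moe", "Curly", 1)]))
# [('Larry', 'Curly', 1)]
```

Since Moe’s net balance is zero, he drops out entirely and only one transfer remains.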

If you’re still confused, or just want to see what the app looks like, I have a demo instance running at http://demo.bluechi.ps.

The software’s been around for a year, and it’s been open-source for most of that time, but I’d never quite gotten around to putting a finishing coat of polish on it and getting it into a form that other people can use.

When I lived with Scott Torborg and some other friends over the summer, we used BlueChips again for handling finances. Scott decided to put some of that polishing effort into BlueChips, and I have him to thank for all of the styling, excellent test coverage, and the iPhone interface, along with innumerable other tweaks.

I finally coded up the last big feature that BlueChips was missing: the ability to add new users without directly interacting with the database.

And so today I’m pleased to announce that I’ve tagged and released version 1.0.0 of BlueChips.

BUT WAIT! THERE’S MORE!

For those of you MIT folks, I’ve worked with the scripts.mit.edu team to provide a Scripts autoinstaller. To install BlueChips, you can run the following commands from any Athena workstation:

dr-wily:~ broder$ add blue-sun
dr-wily:~ broder$ scripts-bluechips

Please remember that this is not a Scripts-managed autoinstaller. If you run into any problems, please let me know at bluechips@mit.edu (as the installer itself will remind you).

And if you find BlueChips to be missing a feature you want, please feel free to write it yourself! In general, I don’t expect to have a lot of time going forward for new feature development, but I’m more than willing to review contributions from others. It’s my hope that the community can pick up my slack and keep BlueChips moving forward.

Through my involvement with SIPB, I’ve found that one of the biggest problems we run into is getting people started. As much as we emphatically insist that you don’t need to know anything about computers coming in (just be interested in them), it’s hard to implement that in practice.

One area that I think we do a particularly bad job of spinning people up on is how to use Unix-like environments. We’re a very Linux-heavy organization, and without some amount of *nix (and, in particular, *nix command line) comfort, it’s hard to figure out where to start.

When I’ve tried to teach people this sort of thing in the past, one thing I’ve always found is that you can’t use a system you don’t understand. You might be able to apply formulas to it (e.g. you might know “ls” or “blanche”), but without understanding the system, you can’t do things like build awesome complicated pipelines of 12 different commands. So in the last 6 months or so, whenever I’m trying to teach somebody something, I take the time to teach it to them from the ground up. But I still didn’t have a good answer for teaching Unix.

I realized last night that I really learned how to think about Unix in 6.033, when we read the Unix Paper. In particular, sections 3, 5, and 6 are a pretty concise explanation of open, read, write, pipe, fork, exec, wait, exit, not to mention how input/output redirection, file descriptors, and shell fork+exec loops work.
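The shell’s fork+exec loop from those sections is short enough to sketch. Here’s a minimal Python rendition of how a shell runs `cmd1 | cmd2` using exactly the primitives the paper describes – pipe, fork, exec, and wait. (The run_pipeline name and the second pipe for capturing output are my own additions for illustration.)

```python
import os

def run_pipeline(cmd1, cmd2):
    """Run `cmd1 | cmd2` the way a shell does, returning cmd2's output."""
    r1, w1 = os.pipe()   # cmd1's stdout -> cmd2's stdin
    r2, w2 = os.pipe()   # cmd2's stdout -> back to us

    if os.fork() == 0:                    # child 1 becomes cmd1
        os.dup2(w1, 1)                    # redirect stdout into the pipe
        for fd in (r1, w1, r2, w2):
            os.close(fd)
        os.execvp(cmd1[0], cmd1)          # replaces this process; no return

    if os.fork() == 0:                    # child 2 becomes cmd2
        os.dup2(r1, 0)                    # stdin comes from the pipe
        os.dup2(w2, 1)                    # stdout goes back to the parent
        for fd in (r1, w1, r2, w2):
            os.close(fd)
        os.execvp(cmd2[0], cmd2)

    for fd in (r1, w1, w2):               # parent keeps only the read end
        os.close(fd)
    output = b""
    while chunk := os.read(r2, 4096):     # read until cmd2 closes its stdout
        output += chunk
    os.close(r2)
    os.wait()                             # reap both children
    os.wait()
    return output.decode()

print(run_pipeline(["echo", "hello, unix"], ["tr", "a-z", "A-Z"]))
```

The dup2 calls are all there is to input/output redirection: a file descriptor is just a slot number, and the child rearranges its slots before exec’ing.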

And, modulo some slight naming changes, all of the information still applies to modern Linux. Not bad for a paper that’s 36 years old!

In any case, I’ve decided that pointing people at those three sections of that paper is my new answer for how to go from a formulaic understanding of Unix to actually being able to work with it. But it’s still only a start.

When did Unix click for you, and what actually did it? How do you help it click for other people? What other good beginner material is there, not just for Unix but other technical topics as well?

This is a blatant abuse of any Google juice this blog has picked up.

Every time I look for the definition of Vcs-Browser, Vcs-bzr, Vcs-cvs, Vcs-git, Vcs-hg, and Vcs-svn, it takes me forever to track it down.

In particular, searching for debian vcs-svn doesn’t actually find what I’m looking for. It took me running through a series of blog posts linking to forum posts linking to online list archives linking to Debian bugs to finally get the hint.

Now, for everybody’s sake, the information is not in Debian Policy, but rather in the Debian Developer’s Reference. Specifically, section 6.2.5.

Hopefully Google will pick this up and make searching for those fields easier. But even if it doesn’t, at least I know that I have a blog post with the information.

After Kevin’s post on commenting, I realized that I tend to be really bad about following through with blog comment conversations.

Kevin pointed out that he’s more likely to take the discussion to zephyr, the mostly-MIT-internal chat server. In fact, Nelson started the Iron Blogger event as a way to combat the fact that we tend to have all our interesting discussions on zephyr, instead of with the rest of the world. So blogging openly but replacing “commenting” with zephyr really defeats a lot of the point.

I know that for me the biggest reason I like having discussions on zephyr is because it’s easy to have a discussion. I don’t have to go seek out replies to my commentary – they show up automatically.

On the other hand, I read blogs through an RSS reader. I don’t tend to visit sites directly. And certainly I don’t go back through a blog’s history looking for replies to my replies. This means that it’s far too easy to make a comment and never look at the comment site again.

To try and combat this, at least for my blog, I’ve installed the “Subscribe to Comments” plugin. It was really easy – the plugin automatically adds the subscription checkbox to the comments form, although I decided to move it above the comment textarea.

I’d encourage the rest of you to do the same – let’s bring the discussion, as well as the blogs, out of the MIT bubble.
