The Right Tool For The Job Isn’t What You Think It Is

This tweet recently took me down a rabbit hole of ideas about software and the epiphenomena we produce when we write it. As is often the case when I start thinking about something, other seemingly random events or articles bubble to the top of my consciousness or Twitter feed or whatever. Choose Boring Technology had recently popped up, linked from another article on architectural working groups and the idea of talking through technology choices. Outside of all that, I've recently been waking up at 1 in the morning thinking about some looming changes to our technology stack at work. It's weird how the universe knows when you are ready for an idea: suddenly you can tie multiple streams of thought into a coherent whole. Well, you can at least try. This post is that attempt.

An epiphenomenon is a secondary effect of an action, one that occurs in parallel with the primary effect. The medical world is rife with examples of epiphenomena. I assert the software world is too, but ours are poorly documented and catalogued because they are primarily negative. I believe epiphenomena are what Michael Feathers is talking about in the lede. If you only see the effects of your software choices, you don't really understand what you have built. Only when you see the effects of the effects, the epiphenomena, do you really understand. I contend this is rarely technological in nature but is instead cultural, and it has wide-ranging effects, many of them negative.

How is this related to choosing boring technology? Epiphenomena are much better known, and much less widespread, in boring, well-understood technologies. When you choose exciting technologies, the related effects of the effects of your choices run deeper and broader because you understand fewer of the implications of the choice. These are the unknown unknowns that Dan talks about. We see this over and over in the tech space where people think that choices are made in a total vacuum with no organizational effects outside the primary technological ones.

Amazon is famous for its service-oriented architecture. It sounds so dreamy. We'll have services that allow us to iterate independently and deploy pieces independently and we'll all be so independent. The problem is that independence requires incredible discipline, discipline that is paradoxically very dependent on everyone being on the same page about what a service looks like, what it has access to, and how it goes about getting the data it needs to function. Without that very hard discipline, which rarely seems to exist outside the Amazons of the world, what you have is not your dreamy Service Oriented Architecture but a distributed monolith, which is a hundred times worse than the actual monolith you replaced.

I saw several people disagreeing with that tweet and wondered why it was so controversial. It dawned on me that the people disagreeing with it were developers, people deep down in the corporate food chain who have this idea of using the right tool for the job in all instances, which is great if you are a carpenter but fucking insane if you are a software shop. When a carpenter uses a miter saw instead of a hammer, it's because you can't cut a 2×4 with a hammer unless you are very, very dedicated and also the shittiest carpenter in the world. However, when an engineer says "This is a job for Super Document Database (which, by the way, we've never once run in production)!" in his best Superman voice, he's saying it in a total vacuum, a vacuum that doesn't exist for the carpenter (and actually doesn't exist for the engineer, he just doesn't know it). Now you have your data in two places. Now you need different engineering rules for how it's accessed, what its SLAs are, how it's monitored, how it gets to your analytics team who just got blindsided for the fourth time this year by some technology they had no input into adopting, etc, etc, etc, until everyone in the company wants to go on a homicidal rampage.

Logical conclusion time: imagine a team of 5 developers with 100 microservices. Imagine the cognitive overload required to know where something happens in the system. Imagine the operational overload of trying to track down a distributed-systems bug in 100 microservices when you have 5 developers and 1 very sad operations person. Ciaran isn't saying it's technologically a bad idea to have more services than developers. He's saying it's a cultural and organizational bad idea. He didn't say that in the tweet or the thread because he didn't have #280Characters or just doesn't know how to express it. But that's what he's saying. It introduces a myriad of problems that a monolith, or a very small set of team- or developer-owned services, does not.

Our industry has spread this "right tool for the job" meme and, to our benefit, it's stuck. It's to our benefit because we developers get to play with shiny jangly things and then move on to some other job. People who don't have such fluid career options are then stuck supporting, or trying to get information out of, a piece of technology that isn't the right tool for THEIR particular job. "The job" is so much broader than the technological merits and characteristics of a particular decision. As Dan points out in his post, it's amazing what you can do with boring technology like PHP, Postgres and Python. If you want something more exciting, you'd better have a really damn good reason that you can defend to a committee of highly skeptical people. If you can't do that, you use the same old boring technology.

Our industry, and by extension our careers, live in this paradoxical contradiction. On the one hand, a developer can't write VB.NET his entire career because he'll watch his peers get promoted, his salary not keep up with inflation, and his wife leave him for the sexy Kotlin developer who just came to town. On the other hand, taking a multimillion-dollar company that happens to run on VB.NET and using that as an excuse to scorch the earth, technologically speaking, is in my mind very nearly a crime. There is a middle ground of course, but it's a difficult one, fraught with large falling rocks, slippery corners with no guard rails, and a methed-out semi driver careening down the mountain in the opposite direction you are going.

Changing technologies has impacts across different arms of the organization, and I've found it useful to frame these in terms of compile-time versus runtime impacts. Developers and development teams get to discover things at compile time. When you choose a new language, you learn it slowly over the course of a project or four. But if you operate in a classic company where you throw software over the wall to operations, they get to find out about the new tech stack at runtime, i.e. at 3 AM when something is segfaulting in production. The pain of choosing a new technology is felt differently by different groups in the organization. Development teams have a tendency to locally optimize for pain, i.e. push it off into the distant future, because they are under a deadline and trying to get something, anything, to work, and so decisions get made that defer a great deal of pain.

Technological change requires understanding the effects of the effects of your decisions. Put more succinctly, it requires empathy. It's a good thing most developers I've known are such empathetic creatures. Sigh. Perhaps it's time we start enforcing empathy more broadly. The only way I know to do that is, oddly, a technological solution. If you want to roll out some new piece of technology (language, platform, database, source control, build tool, deployment model, or, in the case of where I currently work, all of the above), you have to support it from the moment it's a cute little wonderful baby in your hands all the way up to when it's a creaky old geezer shitting its pants and mumbling about war bonds. Put more directly: any time someone has a question or a problem with your choice, you have to answer it. You don't get to put them off or say it's someone else's job or hire a consultancy to tell you what to do. If it's broken at 3 AM, you get the call. If analytics doesn't know how to get data out of the database, you get to teach them. If you fucked up a Kubernetes script and deployed 500 instances of your 200-line microservice, you get to explain to the CFO why the AWS bill is the same amount he's paying to send his daughter to Yale. Suddenly, that boring technology you totally understand sounds fantastic, because you'd like to go back to sleeping or drinking Dewar's straight from the bottle or whatever.

We cannot keep existing as an industry by pushing the pain we create off onto other people.  On the flip side, those people we have been pushing pain to need to make it easier for us to run small experiments and not say no to everything just because “it’s production”.  There has to be a discussion.  That’s where things seem to completely fall apart because frankly, almost no developer or operations person I’ve known has, when faced with a technological question, said “I know, I’ll go talk to this other team I don’t really ever interface with and see what they think of the idea.”

Software is just as much cultural as it is technological. Nothing exists in a vacuum. The earlier we understand that, and the more dedicated we are to acting on that understanding, the happier we'll be as teams of people trying to deliver value to the business. Because in the end, as Dan puts it, the actual job we're doing is keeping the business in business. All decisions about tooling have to be made within that framework. Any tool that doesn't serve that job and that end is most decidedly NOT the right tool for the job.

Brett’s Drunken Tech Ramblings

Steve Yegge once wrote an internal blog at Amazon that later became a public blog not at Amazon called Stevey's Drunken Blog Rants™. Anything I do here is a poor approximation of those masterpieces, but we must all have mentors to look up to and convince us we are still terrible. So this is a Friday-night, three-glasses-of-wine-and-possibly-one-more-on-the-way rambling about what I've been thinking about in tech today, or this week, or possibly it will devolve into kitten videos. Who knows.

[screenshot of my tweet asserting that all code is technical debt]

Lots of thinking lately about old code, rewrites, fancy new technologies, and a monstrous Windows Service that can't be opened in anything other than Visual Studio 2010 led me to that tweet. There was a brief conversation around whether it was true, and that led me to start thinking about analogies for code and applications and the crap we produce on a daily basis. I recently read this article on building technical wealth instead of accumulating technical debt. I'm not sure that's actually possible beyond the sense that an application can make enough money to justify its existence. Still, one part of the article stands out, and that's this item:

Stop thinking about your software as a project. Start thinking about it as a house you are going to live in for a (very very very very interminably) long time.

I added the part in parentheses, but still, that concept is at the heart of my original statement that all code is technical debt. Almost no one pays off their house in less than 30 years, and frankly, most people buy a new one and up their mortgage and don't pay off a house for 40 or 50 years. All that time they have a house payment, not to mention yard work (done by the Mexicans Trump wants to eliminate) and dishes and maintenance and whatever else you do with a house. And because so few of us have 15-year mortgages, we pay 40-50% in interest, but it's invisible because it's just a monthly payment.
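If you doubt that number, here's a back-of-the-envelope sketch in Ruby using the standard fixed-rate amortization formula. The $200,000 principal and 4% rate are illustrative assumptions, not anyone's actual mortgage:

    # How much of a 30-year mortgage goes to interest?
    # Assumed for illustration: $200,000 principal at 4% APR.
    principal  = 200_000.0
    months     = 30 * 12
    r          = 0.04 / 12  # monthly interest rate

    # Standard fixed-rate amortization payment formula
    payment    = principal * r / (1 - (1 + r)**-months)
    total_paid = payment * months
    interest   = total_paid - principal

    puts "Monthly payment: $%.2f" % payment                          # ~$954.83
    puts "Total paid:      $%.2f" % total_paid                       # ~$343,739
    puts "Interest share:  %.1f%%" % (100 * interest / total_paid)   # ~41.8%

Call it 42 cents of every dollar going to the bank, quietly, for three decades.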

It's no different with code. When an application first hits production, it's like signing that mortgage. Except most of us treat our code like an acid-driven hallucination powered by Home Depot. Instead of just making sure the plumbing is solid and cleaning the gutters and painting the shutters, we build a second wing, add a third floor, tear out the garage and make it a disco parlor. And slowly (or quickly, in some things I've seen) the second law of thermodynamics starts to kick in and soon we have chaos. Making a change means a week of testing. Deleting that third floor no one wants is impossible because maybe our fourth cousin by marriage is actually living up there and we don't want them to be homeless (or worse, living on our floor). Just like it gets harder and harder to keep our bodies in shape the longer they are around, especially if the only shape they've ever really been in is "round", it gets harder and harder to get the code into shape.

And then the day comes when we get a new management structure who believes us when we say "it's all a bunch of technical debt." They say "rewrite it," ignoring the fact that we wrote the first part of it, and off we go creating a whole new piece of technical debt that can't possibly even do what the original piece of crap did, because the original evolved over time with the business. So the new piece of crap is hated and eventually gets thrown away entirely.

The moment you write a single line of code, it's legacy code. I don't even care if it's covered by unit tests like Michael Feathers says it should be. It's legacy. And some day the tests will get thrown away because they take too long to run and they have to wire up 14 dependencies in IoC because that's how we roll, and then you've got crap. Keeping a system running smoothly with a minimum of debt requires a Herculean effort that frankly is almost non-existent in the software world I inhabit. Then you say microservices will fix everything, but you don't add any logging or monitoring, and everyone quits to work at remote jobs, and the CEO thinks it's NHibernate that's causing all the problems when really it's just the fact that none of us can manage to maintain our damn cars very well, much less a complex system written over years of changing business requirements. Maintenance is hard. It's harder than writing a bunch of new crap in the latest hot technology. But no one pays hundreds of thousands of dollars for maintenance programmers because that's what the people in India are for. Never mind that they can't read the output of a build program to understand why something broke.

It’s all technical debt. Maybe the people who work at Google would disagree but all the software I’ve seen in the business world is technical debt and we’re drowning in it the same way the world is drowning in regular debt and some day the bill will have to be paid. But maybe you and I will have moved on to cushy jobs that are greenfield or at least have “Consultant” in their titles.

On a happier note (unlikely, this is a drunken blog rant), I listened to the first half of this podcast with Jay Kreps on Kafka Streams and it got me thinking about what streams are and how underutilized they are in most software. Kreps' definition of a stream (paraphrased) is that it's the mashup of batch processing and future services: you write a service to process events in the future; you write a batch process to process events in the past. Streams are the lush, fertile middle ground where you can build near-real-time apps using a tool like Kafka.

The canonical stream I think of is a data-analytics stream like click tracking or user actions on a website. But I started thinking about streams as a replacement for jobs, jobs that are almost always written as batch processes but invariably should just be streams. We have a job at work that's happily run for years. Until recently, that is, when it became the main culprit in occasionally tipping over the database (which is similar to, but not the same as, tipping over a cow, a pastime I am only tangentially familiar with as an Amarillo native). This job only processes 500 or so records at a time, so how could it possibly tip over a Mae West-size database? Maybe the data is bad. Which it is. This job cancels certain customers for a variety of reasons, and frankly it's a batch job only because a batch job was the easy thing to write.

But in reality, this is just a stream: a stream of events that cause my company to want to cancel a customer. If you turn it into a stream, not only do you worry less about tipping over the database (because the processing is spread out over time and space), but you can also begin to act on the events, possibly monitoring the performance or the number of cancellations or a whole host of other things you can do with a stream alongside the main activity.
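To make that concrete, here's a minimal Ruby sketch of the two shapes. Every name in it (CancellationCandidate, cancel_customer, the stream object) is a hypothetical stand-in, not our actual job or any particular Kafka client:

    # The batch shape: wake up on a schedule, query the past, hammer the
    # database with a chunk of work all at once.
    class CancellationJob
      def run
        CancellationCandidate.where(processed: false).limit(500).each do |candidate|
          candidate.cancel!
          candidate.update(processed: true)
        end
      end
    end

    # The stream shape: subscribe once, then handle each cancellation event
    # as it arrives. The load is spread out over time, and every event is a
    # hook for metrics, alerting, or whatever else you want to bolt on.
    stream.each do |event|
      cancel_customer(event.customer_id, reason: event.reason)
      metrics.increment("cancellations.#{event.reason}")
    end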

There are streams everywhere, but we treat them as chunks of past events because processing chunks of past events as batches is easier from a technological perspective, or at least it has been up until now. But streams make more sense in almost all cases.

Speaking of streams, the kid needs a diaper and it’s past 10 PM which means I’m turning into a pumpkin.

Upgrading To Mountain Lion, PostgreSQL and Rails

I'm probably late to the party on this since Mountain Lion has been out for an internet eternity, but if I ever have to do it again, or if someone out there is a worse procrastinator than me, this will live in perpetuity in the Googleverse to aid us on our travels.

I upgraded to Mountain Lion this weekend and several things immediately went wrong. I’ve already forgotten a couple (which is why I’m writing this) but the one I ran into last night involved creating a new Rails site using PostgreSQL. When I did a “rails new”, I received the following error:

Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/bin/ruby extconf.rb
checking for pg_config... yes
Using config values from /usr/bin/pg_config
checking for libpq-fe.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
	--with-opt-dir
	--without-opt-dir
	--with-opt-include
	--without-opt-include=${opt-dir}/include
	--with-opt-lib
	--without-opt-lib=${opt-dir}/lib
	--with-make-prog
	--without-make-prog
	--srcdir=.
	--curdir
	--ruby=/Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/bin/ruby
	--with-pg
	--without-pg
	--with-pg-dir
	--without-pg-dir
	--with-pg-include
	--without-pg-include=${pg-dir}/include
	--with-pg-lib
	--without-pg-lib=${pg-dir}/lib
	--with-pg-config
	--without-pg-config
	--with-pg_config
	--without-pg_config
/Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:368:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:452:in `try_cpp'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:853:in `block in find_header'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:693:in `block in checking_for'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:280:in `block (2 levels) in postpone'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:254:in `open'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:280:in `block in postpone'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:254:in `open'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:276:in `postpone'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:692:in `checking_for'
	from /Users/osiris43/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/mkmf.rb:852:in `find_header'
	from extconf.rb:43:in `<main>'

Gem files will remain installed in /Users/osiris43/.rvm/gems/ruby-1.9.2-p136@rails3tutorial/gems/pg-0.15.1 for inspection.
Results logged to /Users/osiris43/.rvm/gems/ruby-1.9.2-p136@rails3tutorial/gems/pg-0.15.1/ext/gem_make.out

An error occurred while installing pg (0.15.1), and Bundler cannot continue.
Make sure that `gem install pg -v '0.15.1'` succeeds before bundling.

Several StackOverflow posts led me in different directions, but eventually I stumbled onto the combination that worked: install the latest Xcode, upgrade Postgres using Homebrew (brew upgrade postgresql; if you aren't using Homebrew, you're really making life hard for yourself), and then run bundle install again on the new Rails app.

Please Don’t Learn To Speak French

This essay is a parody of this essay. You should read it as such. It will be of little interest to the great majority of my readers but there was no point in putting it on my rapidly dying technology blog.

Today (regardless of which day you are reading this) on Twitter, a hundred or so people publicly declared their desire to learn French. A noble gesture of some sort or other, to be sure, but if any of these people needs to learn French to do his or her job, there is something terribly, deeply, horribly wrong with the state of whatever his or her non-French-requiring job happens to be. Even if those hundred or so random people did learn how to speak French, I expect we'd end up with something like this:

Le vin a de grosses cuisses. C'est la vie. Translation – The wine has fat thighs. Such is life.

Fortunately, the odds of this linguistical (sic) flight of fancy are zero and for good reason. Most of those people are just tweeting shit out their ass and have no desire to put in the requisite effort to actually learn French. Hopefully, they have other things to do in their day to day job like dig ditches or take out the trash or…do I need to go on?

To those of you arguing that speaking French is an essential skill we should be teaching our children right up there with reading, writing and arithmetic (is anyone really arguing that programming is an essential skill? Or are we arguing that the world needs more programmers, so let's see if there are people out there who are interested?): can you explain to me how this random person I picked off the Internet would be any better at her day-to-day job, whatever the hell it is, if she woke one morning as a crack French speaker? (It's at this point in the essay where I really want to go off on a tangent about a modern-day Kafka and a French-speaking roach, but I'll refrain. The essay I'm parodying is bad enough as it is.) It is obvious to me how being a skilled reader, a skilled writer, and at least high-school-level math are fundamental to performing whatever job it is that that poor random person linked above does. Or any job, for that matter. But understanding what "gâteau" or "chat" means and being able to use them in a sentence? I can't see it.

(A minor digression from the parody. It's interesting how we jump from "reading, writing and arithmetic" to "being a skilled reader, a skilled writer and at least high school level math". It's as if teaching our children the basics of all three will make them experts at two but average at the math part. But we know that Jeff thinks writing is hard. He said as much back in 2006. In that post, he said writing was just exercise and the more you do of it, the better you get at it. This is absolutely, unequivocally true up to a certain, God-given talent threshold. But isn't the same true for programming? Can't I take any random person of average intelligence off the street who is willing to put in an hour a day and have them improve drastically as a programmer? If they want to be a better programmer – or writer – why question their motives? This is one of the fundamental flaws of Jeff's entire argument, not to mention an interesting clue into what he really thinks about reading, writing and math and the ability to become good at some but not others.)

Look, I love French (and the French and France and French toast.) I also think knowing how to speak French is important…in the right context, for some people. But so are a lot of skills. I would no more urge everyone to learn to speak French than I would urge everyone to learn to weave baskets.

(Another digression – here lie some big fucking dragons. Did he really just say that he thinks programming is right for the right people in the right context? Seriously? That's like code (pun intended) for some serious discrimination, at the very least. Maybe not racial or gender discrimination, but he's clearly supporting a position that has been shown to be untenable for decades in most other arenas. To hop up on a big-ass high horse and say programming may be right for me but not for thee is to open a can of worms he can't possibly intend. I have to chalk this up to some truly bad writing. I hope.)

— END OF PARODY due to lack of ongoing interest in parodying a self-parody.

In the pantheon of Coding Horror entries, this one is going into the list of "Top Three Horrifically Bad Essays". No one (to my knowledge) is saying that "everyone should learn to code". But if coding (or French or any other skill) interests you, what a wonderful, amazing time to be living in. There are resources available (like CodeYear, the site Jeff is going out of his way to denigrate) that make learning to code (or speak French, hello Rosetta Stone) infinitely easier than it was 20 years ago. Why write a thousand words telling people not to do something they might have a genuine interest in? No one in their right mind would tell people not to learn to speak French. Do what you want. That's the beauty of the Internet and the spread of information it provides.

Please DO advocate learning to code (or speak French) just for the sake of learning how to code. This is exactly why we should learn anything: for its own sake. For too long, education in this country has tried too hard to take the enjoyment and satisfaction out of the act of learning. Places like CodeYear are fighting an uphill battle to change that. Do everything you can to support learning for learning's sake. And if it results in a $79K-a-year job, God bless you and the Internet.

Why would Jeff malign the Mayor of New York's lighthearted attempt to learn programming? Zed Shaw thinks it's because of resentment. This may be true, but invoking Hanlon's Razor (and with apologies to Atwood, I don't actually think he's stupid, just rather misguided in an overly public way this time), let's not attribute to malice what can easily be explained by stupidity. Jeff can write (not in a Cormac McCarthy kind of way, but in a "my audience is largely .NET programmers who like to play video games and build computers that glow" sort of way), as evidenced by the post before this "Don't Learn To Program" atrocity. His post about automatic cat feeders is interesting, amusing and not likely to offend anyone other than my cats, for whom I'm immediately considering getting automatic feeders. He can write as long as the subject is clear, concise and lacking in any kind of subtlety.

The real problem is that Jeff's writing doesn't lend itself well to subtlety. Attacking an idea like "more people should learn to program" requires a deft touch and a subtle ability to tease out nuances, assuming there is any reason to attack such an idea in the first place. Frankly, that's just not how Jeff writes. Telling people "Please Don't Learn To Program" in bold H1 at the top of your post while tucking a small "I suppose I can support learning a tiny bit about programming just so you can recognize what code is, and when code might be an appropriate way to approach a problem you have" sentence at the end isn't the best way to explain why you think CodeYear is a bad idea. This is one of those posts Jeff should have run through multiple censors before hitting the publish button. If he actually believes any of the horrifyingly illogical suppositions he puts forth in the essay, he needs to try much, much harder to explain them, because this time he's come across as a resentful, maladjusted programmer who wants to take his ball and go home.

Just Because You Can Kill Someone With It Does Not Make It Insecure

This weekend, Github had what an impartial, understated observer might call a small dustup related to how the Rails web framework functions "out of the box". For those amongst my non-technical audience (well, your eyes have probably glazed over at this point anyway), Github is a company that provides hosting of source code using a tool called Git. Github has become uniquely popular in tech circles and hosts (insert some large number here) projects, most of which are open source (which means not proprietary). Source control allows developers to track the history of code, among other functions I won't get into. If you are in the non-technical portion of my audience and are still reading, God bless you.

What happened at Github roughly goes like this. Github is built on Rails, a hugely popular web framework that makes it dirt simple to create a pretty decent website. So at this point, we have the facts that Rails is hugely popular and Github is hugely popular. So lots of people are involved. Rails is what's called an MVC framework, where MVC stands for Model-View-Controller. A Model is a representation of your data. A View is a way to display that data. A Controller facilitates the interaction between the two. This is a 10,000-foot view but will work for our purposes. In a real-world example, you might have an ordering system for widgets. You might then have two views, one that shows you all the orders in your database and another that lets you create new orders. You would have an Order model that represented a physical order in the real world with customer information, product information, etc. The controller would manage interactions between the view and the model, such as saving the order to the database, deleting orders, showing how many trillions of dollars you made last month selling widgets to Bill Gates, etc.
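For the technical stragglers, here's roughly what that widget example looks like in Rails 3-era code. It's a sketch with made-up names, not Github's actual code:

    # The Model: one row in the orders table, one widget order in the real world.
    class Order < ActiveRecord::Base
      belongs_to :customer
    end

    # The Controller: shuttles data between the views and the model.
    class OrdersController < ApplicationController
      # Backs the "show me all the orders" view
      def index
        @orders = Order.all
      end

      # Backs the "create a new order" view
      def create
        @order = Order.new(params[:order])  # remember this line; it matters shortly
        if @order.save
          redirect_to @order
        else
          render :new
        end
      end
    end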

The beauty of Rails is that it makes it extremely easy to build a website. You can go through this fantastic Rails tutorial in a weekend if you are motivated and start writing a website on Monday. This was undoubtedly one of the goals of the creator of Rails, because at every turn, when the decision is between hard and easy, he seems to have chosen easy. This is a GOOD THING. As with all GOOD THINGS (like scotch and kittens), decisions have to be made that may have some unpleasant side effects. Let's go back to your trillion-dollar widget website. The only way an order should be able to get into your database (or, more importantly, be deleted) is through your view, which of course you secured using authentication that is robust and thorough. However, in Rails, the models have what could be called a safety mechanism. The default behavior is that every attribute on a model (say Email on the Customer model) is open for mass assignment unless you, as the developer, take a certain precaution to prevent it. Mass assignment means that an attacker doesn't need your carefully secured view at all: since he controls the requests his browser sends, he can stuff extra attributes into them and set fields your forms never exposed. It's right there in the documentation. This is a BAD THING in many people's eyes.
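To see why, look back at that create action. Suppose the Customer model carries an admin flag that no form ever exposes (the model and the flag are hypothetical here; the actual Github exploit injected public keys, but the shape is identical):

    # The attacker hand-crafts a POST, adding a field your form never offered:
    #
    #   customer[name]=Mallory&customer[email]=m@evil.com&customer[admin]=true
    #
    # A controller relying on mass assignment happily takes all of it:
    @customer = Customer.new(params[:customer])  # name, email AND admin get set
    @customer.save                               # Mallory is now an admin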

Let's take a little detour that might help. I have a gun (because I live in Texas and everyone in Texas has a gun in the back of their pickup and a deer on the hood of the pickup). The gun I have (you can assume I have only one gun for the purposes of this discussion, but no one in Texas has one gun) is a Springfield XD 9mm semi-automatic pistol. I'm going to let you in on a little secret. My XD was created with the express intent and purpose of shooting bullets. The target of those bullets may vary, but the general purpose of the gun is unchanged regardless of the target. It's designed to shoot things.

However, luckily for me, the makers of the XD had the foresight to include mechanisms called safeties on my pistol. For the sake of this argument, you can also assume that all guns have safeties. This is probably not exactly true but the population of guns that don’t have safeties in this day and age is small enough to be ignored. Why does my gun have safeties? Because there are times when you would prefer that the gun did not go off when you pull the trigger. There are three safeties on my XD (other guns have one or two, it all depends on the make and model). The first is a trigger safety. It’s essentially a dual trigger that locks the main trigger in place so that it can’t go off if the gun is dropped or bumped. The second is a grip safety. It’s a small lever in the butt of the grip that must be fully and completely depressed before the gun can fire. This prevents accidental discharges if the trigger were to get caught on something. Finally, there is a loaded chamber indicator and a striker indicator that tell me when there is a round in the chamber and when the striker is cocked. This is more of a logical safety in that I can always be aware of the status of the pistol.

The safeties are nice, but again I return to the essential raison d'être for any gun, which is to shoot bullets. If I pick up the gun, depress both safeties, and there is a bullet in the chamber with the striker cocked, pulling the trigger will result in said bullet flying out the other end in the general direction the gun is pointing. This fact about guns does not constitute A SECURITY VULNERABILITY. If I take a gun into a bank and start shooting people, the gun is not displaying some inherent security flaw. The operator of the gun has a personal responsibility to use the gun in morally and legally just ways. It is up to me as the shooter to know and be cognizant of this fact. There are ways to make a gun perfectly safe, and they turn it into an expensive paperweight, because the only way a gun is perfectly safe is if it is physically impossible to get a bullet to come out the business end. But then we don't really have a gun, do we?

What does this all have to do with the Rails issues at Github (other than the fact that you may want to shoot yourself if you're still reading)? It is my contention (and this is mostly conjecture, but reasonably based conjecture, in that Rails has a philosophy and mass assignment fits that philosophy) that it is the express intent of Rails to make building websites easy. If Rails forced developers to prevent mass assignment on all models, it would be distinctly harder to build websites. However, Rails does have a mechanism for preventing mass assignment, and it's detailed right there in the docs and the tutorial. Rails essentially says this: "I don't want to get in your way of building websites, so there are features of Rails that make it easier to build websites at the expense of possibly doing something that allows people to hose up your website. Use at your own damn risk."
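That mechanism, in the Rails of this era, is a whitelist on the model, and one line of it is the difference between the exploit above working and failing (again with the hypothetical Customer):

    # attr_accessible whitelists what mass assignment may touch. Anything
    # not listed (like admin) is ignored when assigned from params, so
    # Mallory's extra field goes nowhere.
    class Customer < ActiveRecord::Base
      attr_accessible :name, :email
    end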

There are a lot of people weighing in on this issue at Github, and many of them seem to think that mass assignment is a security vulnerability inherent in Rails. I do not see how this is the case. Just as the gun has no way of knowing whether I'm shooting innocent people in a bank or paper targets at the range, Rails has no way of knowing which attributes of any model you want exposed for assignment. It's not the gun's or Rails' duty to police your behavior. As a user of Rails, you have the responsibility to understand the framework and work within the limits and guidance it provides. I would argue that Rails even goes further than the safeties on the gun, because it has an explicit way to prevent mass assignment, which keeps an attacker from destroying your website. If you choose to bypass the safeties built into Rails, it's your fault if you kill someone. That does not make Rails inherently insecure.

Anything in the world can be made perfectly safe, but always at the cost of perfect liberty. It's a tradeoff we make constantly, at all kinds of levels. Calling mass assignment a security vulnerability in Rails strikes me as coming down far too near the perfectly safe side. We could make Rails perfectly safe, but then it stops being a framework that enables you to build websites easily and becomes something else entirely. Take some personal responsibility for your actions and do the legwork required to use the framework properly.

Google+ Is What I Want Facebook To Be

While it’s still very early in the process for Google+, already I’m seeing things they are doing that I wish Facebook did. The primary difference for me is the Circles component of Google+. From the description:

Google+ Circles helps you organize everyone according to your real-life social connections–say, ‘family,’ ‘work friends,’ ‘music buddies,’ and ‘alumni’. Then, you can share relevant content with the right people, and follow content posted by people you find interesting.

Part of the problem I have with Facebook is that it treats all my "friends" the same. I'm either friends with you or I'm not, according to Facebook, and frankly, that's not a very subtle distinction when it comes to how I want to interact with people online. This causes me to be very cautious about accepting requests on Facebook. Many times, I either have to ignore someone I'm not that interested in or accept their request and then quietly click the X button when it turns out I'm just not that interested in what they have to say. That's not to say I wouldn't like to accept all requests. I'd just like to have some control over what I read and share beyond the concept of "everyone is my friend", which they clearly aren't.

Google+ fixes this with Circles, the concept being that your social circle is actually made up of lots of little circles, some of which overlap with, supersede, or completely ignore other circles. This is a more accurate portrayal of what goes on in the real world. It may of course be a solution dreamed up by engineering nerds and introverts looking for a problem to solve. I know that many of the people I've run into on Facebook have so many friends, they can't possibly use FB in any meaningful way to keep up with people unless they spend entirely too much time there. Wait, never mind.

Still, when someone has 1000 friends on Facebook, it's off-putting to me in a variety of ways. For one, chances are they use Facebook as more of a giant online Rolodex, a place where anyone and everyone they have ever encountered can be grouped in one place for easy tracking. That's fine, except that in order for me to be in your Rolodex, you have to be in mine as well, unless I specifically do something to pretend you aren't there. Not particularly optimal. Not to mention, if you have 1000 friends on Facebook, the likelihood that you actually pay attention in any meaningful way to what I put on Facebook is rapidly approaching zero, and since this is all about me, why would I want that?

With Google+ Circles, a lot of those issues vanish. If I accept a request from someone with 1000 friends, I can put them in my “Extroverts are insane” Circle, choose to share almost nothing with them and view almost nothing and be done with it. I’m in their Rolodex, they are in mine, but that’s the limit of it and no one has to get their feelings hurt. They don’t have to listen to me say how much Facebook sucks (not that they were paying any attention anyway) and I don’t have to listen to whatever it is they say on FB. Everyone’s a winner.

The ability to put people in loosely organized groups is a key component of evolutionary biology. It’s important that we are able to know who we can count on, who to share information about drunken orgies with, etc. The evolution of social media from the beginnings in AOL chat rooms to Google+ Circles is an evolution towards better representation online of the relationships we actually have in daily life. I don’t know what the long term chances are for this latest project of Google’s but I personally am rooting for their success.

Live Search Seems Pretty Broken

So after seeing a very odd spike in traffic to my blog today regarding Crossfit, I went to Live Search to see why quite a few people were finding my humble corner of the tubes. As it turns out, if you search for “crossfit training” using Live Search, I’m number four in the result list all because of this post. Does that make any sense to you?

In comparison, I’m not even in the top 100 on Google which is what I would expect given the fact that there are innumerable other sites out there that discuss Crossfit in much more detail than I do in that post. Seems to me that Live Search operates under an awfully odd algorithm if I show up in the top 10 results for crossfit training.

Google Is Starting To Scare Me

I wrote a post this morning at 11:10 AM detailing getting Clojure set up on my system, and I briefly mentioned that I would probably try both Vim and Komodo as editors. At 3:39 PM, someone came to my site from Google using the keywords "clojure komodo". I don't think I'm ever going to get used to the power of Google and its creepy little bots that document the entire freaking Internet. That's less than four and a half hours for Google to crawl my site and assimilate this morning's post into the Google brain. Yikes.