My convoluted CD importing system

And here’s how a CD gets from HMV on to my iPod…

Hmmm, it’s complicated alright.

But it needs to be, as I have several goals I am trying to meet.


  • Never have to rip the CD more than once. I rip to FLAC, which is lossless, so I can recreate the original WAV file at any time.

  • Be able to play my music back from a variety of sources. I need to be able to play music both on my iPod(s) and through MPD.

The ripping and encoding bit is done with a Java app I wrote that encodes in parallel (it can handle Ogg Vorbis too if I want).
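
The parallel bit is nothing fancy – just a fixed pool of worker threads, each running an external encoder over one WAV file at a time. This isn’t the actual app, but a minimal sketch of the idea (assuming the flac binary is on the PATH; the class name and directory handling are made up) looks something like this:

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelEncoder {

    public static void main(String[] args) throws Exception {
        File rippedDir = new File(args[0]); // directory of freshly ripped .wav files
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        for (final File wav : rippedDir.listFiles()) {
            if (!wav.getName().endsWith(".wav")) {
                continue;
            }
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        // Assumes flac is on the PATH; -o names the output file.
                        String out = wav.getPath().replaceAll("\\.wav$", ".flac");
                        new ProcessBuilder("flac", "--best", "-o", out, wav.getPath())
                                .inheritIO()
                                .start()
                                .waitFor();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}

Swapping flac for oggenc (or running both) is just a matter of changing that command line.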

It’s not as automated as I’d like: I need to run it by hand when I insert a CD and choose the matching CDDB entry if there are multiple matches.

Oh, and here’s what I bought.

Sidebar feeds

I did a minor blog re-design the other day.

Some cosmetic font tweaks and removal of a lot of the sidebar content.

I also added a left-hand sidebar (previously there was only one on the right).

This makes the content easier to read as the middle column is narrower and it also gives me an excuse to pull in content from Flickr and Twitter for the new sidebar.

Both Flickr and Twitter offer “badges” which give you a bunch of HTML and JavaScript to put on your site, and they then pull content in automatically.

They’re great but have one fundamental flaw – if Flickr or Twitter is down then they don’t work.

So, I’ve decided to adopt a more robust approach.

I’ve been pulling in links from my del.icio.us account to make a linkblog for a while now, so I’ll follow a similar approach for Flickr and Twitter.

This post explains the theory behind it – the only new things were the XSL stylesheets I used.

The Flickr stylesheet is fairly straightforward; it uses some CSS styles (below) to display the photos in two columns.


/* Float each photo thumbnail left so they sit side by side. */
.right-box-img
{
float: left;
}

/* Clear the floats to start a new row of photos. */
.right-box-clear
{
clear: both;
}

The Twitter stylesheet is a little more complex.

It automatically filters out Twitter replies and also strips the username from the text.
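
For anyone wondering how the stylesheets actually get applied: any XSLT processor will do the job. As a sketch, in Java the transform is only a few lines using the standard javax.xml.transform API – the file names below are just placeholders for the fetched feed, the stylesheet and the HTML fragment that ends up in the sidebar:

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class FeedToSidebar {

    public static void main(String[] args) throws Exception {
        // Placeholder file names - a previously fetched feed and its matching stylesheet.
        StreamSource feed = new StreamSource(new File("twitter.xml"));
        StreamSource stylesheet = new StreamSource(new File("twitter-sidebar.xsl"));

        Transformer transformer = TransformerFactory.newInstance().newTransformer(stylesheet);

        // Write out the HTML fragment that gets included in the sidebar.
        transformer.transform(feed, new StreamResult(new File("twitter-sidebar.html")));
    }
}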

Anyway, feel free to grab them for your own use – leave any feedback about them in the comments below.

Laptop stickers

So, my shiny white MacBook is almost a year old.

To celebrate, I’ve decided to “sticker it up”.

Unfortunately, I seem to have a lack of stickers.

So, I’ve been scouring the Web trying to find cheap and/or free stickers.

Cool stickers mind, not just any old crap.

Free stickers

All I’ve found on the free front is that the nice people at Daily WTF will send you two stickers if you send them a “small souvenir”. So, I’ll be popping some Moo cards in the post to them.

Cheap stickers

Jeff Atwood at Coding Horror will send you 3 or 4 stickers for $4 including shipping, which is a real bargain.

Moo do a great deal on sticker books which you can make up from your own photos. They also have a cool ready-made set for laptops.

The other way to get stickers is of course to hang around at cool conferences and hope that people take pity on me and hand a few over.

Failing that, I could always resort to begging on my blog.

Visible Progress

A few weeks back I’d been sat at my desk at work reading Steve McConnell’s blog entry on building a fort and how it compares with software development.

It’s quite interesting how many similarities there are between a software project and an engineering project (of sorts) even though they don’t really become apparent until after the fact.

Steve even managed to make a “classic mistake”.

4. Substituting a target for an estimate.

I had 7 days to do the project, and my estimate turned out to be 7 days. That’s a little suspicious, and I should have known better than to make that particular mistake!

It shows that when we’re outside our problem domain, it’s easy to forget what we’ve learned.

I had an object lesson in this later that same day when I was thinking about a small project that’s going on in our office.

We’re getting a shower installed at work, and as I’ve just started cycling in I’d been taking an interest in its progress.

The work had started the week before, but I wasn’t getting too excited about when it would be done as I know these things can drag on and hit all sorts of snags and problems.

It’s like software, right? You can plan and plan but some unforeseen problem is always waiting around the next corner.

So, I was quite excited later that same day when I walked past the shower cubicle to see the shower unit had been attached to the wall.

My immediate, almost instinctive reaction was that the shower was pretty much installed and I’d be able to use it by the end of the week.

I walked back to my desk feeling quite pleased when suddenly I realised that I too had made a “classic mistake”.

Joel Spolsky explains this problem quite succinctly.

If you show a nonprogrammer a screen which has a user interface which is 100% beautiful, they will think the program is almost done.

People who aren’t programmers are just looking at the screen and seeing some pixels. And if the pixels look like they make up a program which does something, they think “oh, gosh, how much harder could it be to make it actually work?”

The big risk here is that if you mock up the UI first, presumably so you can get some conversations going with the customer, then everybody’s going to think you’re almost done. And then when you spend the next year working “under the covers,” so to speak, nobody will really see what you’re doing and they’ll think it’s nothing.

I was acting like that client, seeing the “front-end” and assuming all the back-end work was done and/or didn’t exist.

I felt pretty silly once I’d had this realisation.

And it turned out that I had indeed been silly, as it took a few more weeks before the shower was actually complete – during that time the workmen were, of course, nowhere to be seen – but hey, it’s not like me to question the integrity of the great British workman.

“Source control ate my files!”

Everyone who has worked in software development for long enough must have heard somebody say that source control ate their files – it’s up there with “Works on my machine” and other such silliness.

Invariably source control didn’t eat their files at all – the problem boiled down to a (sadly) not too uncommon condition of “Fear of Source Control”.

Here are some of the symptoms of such a fear.

Unwillingness to do an update

I worked on a project years back with a tight deadline, not a huge amount going on in the way of process, and about six developers all coding like hell.

Invariably, every other update broke your local build as someone had changed an interface or forgotten to check in a file (we had no daily build either), so yes, updating was a pain.

For two of the less experienced team members, the solution to this problem was to hold off updating for as long as possible (they’d happily go two weeks without doing an update).

When I queried their approach, their answer was that doing an update “broke things” so it was best avoided.

This symptom of course goes hand in hand with…

Unwillingness to do a commit

As anyone who has worked with source control systems long enough knows, they won’t let you commit a file if there are outstanding changes to be merged in.

So, leaving long gaps between updates almost always leads to massive problems when you finally commit your changes.

Long update intervals lead to long commit intervals.

My usual solution to this is to do an update every morning before I start any development work for that day.

That way I get the small amount of pain out of the way without huge disruption to whatever I am working on.

I then commit my work once it’s complete and passes its tests.

I also try to pay attention to what my colleagues are doing on the project so I can avoid nasty surprises.

Of course, I now work on much saner projects where things don’t break so often (and when something is likely to break, someone has usually warned you in advance).

Deleting other people’s code

I have seen this happen, usually when someone gets a merge conflict.

Merge conflicts happen when two people work on the same code at the same time and one person’s changes have to be merged into the other’s.

Sometimes this happens smoothly and everybody is happy and sometimes not so smoothly and one person is very unhappy.

In the face of a merge conflict the correct approach is to fix the code by hand, which often involves talking to the other person who worked on the file to ensure that both their changes and your changes are preserved.

The incorrect approach (and yes, I’ve seen it done) is to remove the offending lines (someone else’s code), keep yours and commit away. This is of course a “bad thing”.

Merge conflicts are of course best avoided; even experienced developers strongly dislike them.

The way to do this is through communication with your fellow team members and knowing who is working on what area of the code (Scrum-like daily meetings are great for keeping up with who is doing what).

Often, said conflicts can be avoided by a bit of advance planning (you do your bit, test and commit, then they update and pick up your changes before they do their bit, etc.).

Commenting out unused code

If you’re lucky, the commented-out code will be accompanied by a vague comment along the lines of “I don’t think this is used any more”.

This stems from the fear of not knowing how to use source control to retrieve old versions of a file.

The correct thing to do of course is to remove it, then commit that change and mention said removal in the commit log.

If the code is being replaced then a comment along the lines of “Replaced foobar1 with foobar2 – foobar1 code lives in version 4.1.2 in CVS” would be most appreciated by future developers (which could of course be you).

Committing backup versions of files

I’ve just finished working on a project where a binary file that was kept in CVS had no fewer than 7 alternative versions (none of which were used) checked in alongside it.

I spent ages working out what each one was for, seeing that it wasn’t actually used, then removing it from CVS.

Again it stems from confusion and fear of using source control to get access to older versions of a file.

The solution

The solution to all of these problems is of course to “lose that fear” and learn to love your source control tool.

One good way to start doing this is to stop using a GUI to manage your source control tool and learn the command line instead (assuming your tool has that option).

That will remove a lot of the mystery of what is going on.

Learn the mechanics of your source control tool by reading the manual.

Eric Sink has an excellent series of posts on source control that highlight some of the many benefits of using it.

The final thing to realise is that correct use of source control will save your bacon.

The initial learning investment will be repaid time and time again.

And that is worth its weight in gold.

Pagination, thing of the past

I’ve been trying to choose a motherboard to replace a dodgy one in a computer.

As usual I went to dabs to get one as I’ve always found them to be reliable.

I needed a motherboard that supported a given chipset, had on-board video and LAN, and did SATA.

Finding a board that meets all those requirements on dabs used to be a real pain.

You had to click on the motherboards section, then do some searches on strings that may or may not have matched the data they had entered for each product.

You also had to hope that they had followed a consistent vocabulary (Socket-775 vs S775 vs S-775 etc).

It’s a wonder I found anything!

A few years back, though, dabs re-did their site and introduced a new tool for finding products.

Essentially, they made searching redundant with a clever filtering tool that allowed you to drill down by any of the major criteria for a product.

As can be seen from the screen-shot it’s really quick now to choose a motherboard that matches my criteria and is in stock.

When I first saw this I was overjoyed but 2-3 years later I still see very few sites doing anything similar.

Not only does it make pagination redundant (for all but very large data-sets) but it also does away with so-called “advanced search”.

Just chuck out all the results and let the user filter them.

The “page 2 of 20” links become almost redundant.

Now obviously this sort of filtering solution only really works with “filterable” data.

But even when there’s only a small amount of data like that it can still help.

I implemented it at home on my photo database – the only meta-data I used was the date the photo was taken (or imported in the case of film photos) and it’s still really useful.

I have thousands of photos in there with little in the way of tagging so searching was next to impossible – the filtering thing makes it much easier to find things though.
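
Under the hood it doesn’t need to be anything clever. A rough sketch of the date filtering in Java – the Photo class and its fields here are made up purely for illustration – might look like this:

import java.time.LocalDate;
import java.util.List;
import java.util.stream.Collectors;

public class PhotoFilter {

    // Hypothetical photo record - the real database holds more metadata than this.
    public static class Photo {
        final String fileName;
        final LocalDate taken; // date taken, or date imported for film scans

        Photo(String fileName, LocalDate taken) {
            this.fileName = fileName;
            this.taken = taken;
        }
    }

    // Return only the photos whose date falls within the chosen range;
    // the "drill down" is just re-running this as the range gets narrower.
    public static List<Photo> byDateRange(List<Photo> all, LocalDate from, LocalDate to) {
        return all.stream()
                .filter(p -> !p.taken.isBefore(from) && !p.taken.isAfter(to))
                .collect(Collectors.toList());
    }
}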

I just wish more sites would do this.

Thinking in Ruby

So, I’m learning Ruby (it only took me a year to get started!).

I’m working my way through Programming Ruby and doing a few different scripts to see what it can and can’t do.

Most of it seems fairly straightforward stuff and I’m liking what I’m seeing for the most part.

One of the things that crops up from time to time in examples in books and online is something along these lines:

print total unless total.zero?

That’s it, the “unless construct”.

I’ve seen this before in Perl and I’ve always avoided using it – I personally find it unintuitive so I always write my code in the if x do y style.

Do x unless y has always seemed a little, errr, backwards.

Seeing it again in Ruby I again decided I’d avoid using it and carry on as I had before – then I began to wonder if I was simply imposing my “Java programming style” on to my Ruby code.

It’s an easy enough trap to fall into, much like early C++ programmers wrapping their C-style static methods up in a class and thinking they were doing OO.

Thinking about it, most of my Perl code is written in a similar style to my Java – I always apply “use strict” and enable warnings, always put code into methods, almost always have a main method etc.

But hold on, am I writing Perl in a Java style and thereby restricting my ability with the language, or am I simply applying sensible practices to my Perl code?

My Perl code never really extended much beyond occasional scripts to process photos so I have no clear answer to that.

I hope that my Ruby coding will move beyond that (possibly into the realm of Ruby on Rails), so as it does I’ll have to keep asking myself whether I’m thinking in Java or thinking in Ruby.

Sit up and don’t slouch

I’ve just been trying out the suggestions on Jeff Atwood’s post on Computer Workstation Ergonomics.

I’ve known for ages that my seating position when at my computer is wrong; I’m a terrible sloucher you see.

Fortunately I’ve never experienced any back pain from it, so I’ve just carried on doing it.

But at the back of my mind I’ve always suspected that at some point I will experience some ill effects from it and I’ll regret my years of slouching.

So, Jeff’s post provided me with the impetus (and information) to do something about it.

Following the graphics and suggestions on Jeff’s site, I adjusted my chair (the only thing I can adjust) to match the ideal image (back straight, knees bent at 90-degree angles, eyes in line with the top of the monitor, etc.).

I’m in that position now as I type this post.

Some initial observations:

  • It feels strange, not uncomfortable, but I do keep getting the urge to slouch downwards – fortunately the new set-up pretty much prevents me from slouching and still being able to type at the keyboard.

  • I’m a little achy from spending much of the bank holiday weekend riding my new bike, so it’ll take me a few days to determine if any aches and pains are bike related or chair related.

  • I’m finding new reasons to learn to touch type! One of the guidelines is to use my chair armrests to support my elbows – for me this means that my left hand can only comfortably reach the left side of the keyboard, and vice versa for my right hand. What I’m finding now is that as I type, sometimes my left hand instinctively tries to press a key on the right-hand side of the keyboard (and vice versa – I type with both hands), which means that my arm then lifts off the armrest. I need to learn to touch type properly so that my arm doesn’t have to lift up.

  • My left elbow no longer hurts from resting on the (stupid) curved desk – woohoo!

I need to do the same at home now, and then monitor the situation long term.

Unchecked Exceptions

On our new project at work we’re using JPA sitting on top of Hibernate.

I’ve used Hibernate several times now and am familiar with it.

JPA is mostly similar in use but there are a few gotchas.

One that got me the other day was what happens when you write a query that you expect to return a single result.

In Hibernate I’d have called query.uniqueResult();

The Javadoc for that method says:

Convenience method to return a single instance that matches the query, or null if the query returns no results.

So, the query either returns my object or null (an exception is thrown if my query returns more than one result – fair enough).

I had to do something similar in JPA-land so I looked at its Query class.

It offered a similarly named method: query.getSingleResult();.

All good, I wrote my code, compiled it and restarted my application server.

Unfortunately, when I ran the code, it fell over with a NoResultException.

For my particular query, there were no results in our test database.

Fine, my code can deal with that, but clearly the JPA method works quite differently from the Hibernate version.

Its Javadoc says:

Execute a SELECT query that returns a single result.

Returns: the result

Throws: NoResultException – if there is no result


So, unlike the Hibernate version this one will throw an exception if the query returns no results.

Hmmmm, I think I prefer the Hibernate version.
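
If I want the Hibernate-style behaviour back, one option is a small wrapper that catches the exception and returns null instead – a quick sketch (the helper class here is my own, not part of JPA):

import javax.persistence.NoResultException;
import javax.persistence.Query;

public final class JpaUtils {

    private JpaUtils() {
    }

    // Returns the query's single result, or null if there is none,
    // mirroring Hibernate's Query.uniqueResult() behaviour.
    public static Object singleResultOrNull(Query query) {
        try {
            return query.getSingleResult();
        } catch (NoResultException e) {
            return null;
        }
    }
}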

Of course, if it had thrown a checked exception my code would not have even compiled.

As it was, it was just luck that the database had no results, so I found the problem right away.

I’m not saying unchecked exceptions are bad, on the whole I prefer them.

But there’s a certain element of retraining your brain to no longer rely on the compiler to tell you that you’re dealing with all possible error conditions.

I know, I know, there wouldn’t be a problem if I’d read the Javadoc up front, but how many people can honestly say that they read the Javadoc for every new method the first time that they call it?

The continuing adventures of my Yashica

Carousel

I got my film back from my Yashica.

If anyone’s interested, I used Spectrum Imaging in Newcastle (mail order).

They are very fast and very cheap (yay!), but they don’t do B&W (boo!).

Of the 12 shots I took this is the one that I’m happiest with; it’s also the one with the best exposure.

Most of them were a bit over-exposed (nothing that can’t be rescued) but this one was pretty accurate.

I was trying to compensate for what I believed was the meter’s tendency to over-expose (based on a comparison with my SLR) but it looks like I need to compensate more.

Many people who own this camera ignore the built-in meter as it has a tendency to be wrong.

These people then either carry around an SLR or a light meter to meter with, or they guess.

Well, “guess” is not really an accurate description (although I imagine some people do genuinely guess – but I’m not referring to them here).

I’m referring to a form of educated guessing.

There’s a thing called the sunny f/16 rule that can be used to determine exposure.

As usual, Wikipedia knows all…

In photography, the sunny 16 rule (or, less often, the “sunny f/16 rule”) is a method to estimate correct daylight exposures without using a light meter.

The basic sunny 16 rule, applicable on a sunny day, is this:

Set aperture to f/16 and shutter speed (reciprocal seconds) to ISO film speed.

For example, for ISO 100 film, choose shutter speed of 1/100 second (or 1/125 second).
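
To put some numbers on that: with ISO 100 film on a sunny day you’d be at f/16 and 1/125; if you want a wider aperture you just trade stops, say f/11 at 1/250 or f/8 at 1/500, since each stop wider lets in twice the light.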

There’s also a page here about calculating exposure that looks quite interesting.

I don’t want to carry my SLR around with me every time I use the Yashica so it looks like I’ll be going down the “guessing” route (assuming I can’t trust the meter, that is).

Today though I went for a walk in the park and I did have my SLR with me so I used that to set my first reading.

The light didn’t change for a while so I left it at that.

Later the sun came out so I metered again with the SLR.

I’ll find out when I get the film back how it all worked out.

As I’m shooting negative film I can afford to be a bit lax with my metering as corrections can be made at processing time – if I shoot slide film I’ll have to be more accurate as it’s much less forgiving.

I’m not sure if that matters too much with scans though.

The above was shot with Fujicolor Superia film.

I have another 3 rolls of that.

I’ve also ordered a selection of black and white film.

Some Ilford HP5+ which I’ve heard good things about and some Kodak Tri-X.

Now I just have to hope that I don’t get bored with it all before I run out of film!