Tuesday 4 February 2014

A Most Practical Lesson in Clean Code

Our wireless thermostat recently shuffled off its digital coil and, I can only assume, is on its descent to robot hell. The design flaw in the device is so subtle that you can imagine even a half-competent engineer signing off the specification without noticing the error.

Just to be clear - this isn’t a metaphor or some contrived example to prove a point - although I’m hoping one day it will be. Our thermostat is broken, it’s the middle of winter and we are very cold.

The thermostat’s wireless receiver is a mains-powered device that sits in a chain of operations, in series with the timer control unit. Every few minutes the receiver polls the wireless thermostat and the input line from the timer. If both are true, it flips its output to TRUE - and the heating comes on.

As an additional feature the unit also has a manual override button - so if you want to ignore the thermostat and timer input you can force your heating on. I won’t go all UML on this right now, but you can imagine a reasonably simple diagram that explains this all.

But here is what I’m facing: a faulty thermostat and its equally faulty “Manual Override” button. I’ve seen the wiring diagram - an override switch that bypasses the circuitry isn’t very difficult to add. It’d literally be a switch that shorts two physical wires. I don’t want a “manual override” button that relies on the same faulty circuitry I’m trying to override - I want an external, independent mechanism.

The override should adhere to the same interface declaration as the normal thermostat and have the same dependencies at the factory level, but its operation should be driven entirely by a public method. That way, it could be injected into a test with an instantiated timer, or used if thermostasis isn’t a customer requirement for this sprint.
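
To make that concrete, here’s a minimal C++ sketch. Every name in it (ISignalGate, Thermostat, ManualOverride, Timer) is invented for illustration - my actual heating unit is obviously not built from C++ header files:

    #include <memory>
    #include <utility>

    // The shared contract: a programmable gate with one boolean output.
    class ISignalGate {
    public:
        virtual ~ISignalGate() = default;
        virtual bool Output() const = 0;
    };

    // The timer: a gate with no upstream input; true inside scheduled hours.
    class Timer : public ISignalGate {
    public:
        bool Output() const override { return scheduledOn_; }
    private:
        bool scheduledOn_ = true;  // would follow the programmed schedule on a real unit
    };

    // The normal thermostat: ANDs the timer line with its own temperature demand.
    class Thermostat : public ISignalGate {
    public:
        explicit Thermostat(std::shared_ptr<ISignalGate> timer)
            : timer_(std::move(timer)) {}
        bool Output() const override { return timer_->Output() && demandsHeat_; }
    private:
        std::shared_ptr<ISignalGate> timer_;
        bool demandsHeat_ = false;  // set by polling the wireless sensor on a real unit
    };

    // The override: same interface, same factory-level dependency,
    // but its state is driven entirely by a public method.
    class ManualOverride : public ISignalGate {
    public:
        explicit ManualOverride(std::shared_ptr<ISignalGate> timer)
            : timer_(std::move(timer)) {}
        void Force(bool on) { forcedOn_ = on; }             // the independent mechanism
        bool Output() const override { return forcedOn_; }  // bypasses timer and sensor
    private:
        std::shared_ptr<ISignalGate> timer_;  // kept so construction matches Thermostat
        bool forcedOn_ = false;
    };

The point is that ManualOverride honours the same contract and the same construction shape as Thermostat, but its output depends on Force() alone - not on the circuitry it exists to bypass.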

And then BAM! My thermostat was just a metaphor* for The Substitution Principle all along! It’s not always enough to assume that because you declared an interface and implemented it, you have covered all the bases. Zero-to-One is a great step, but Zero-to-One-to-Many is the path you are trying to lay down to make maintenance easier.

And that’s the take-home lesson for today, in a very practical sense. Clean-Code programming lessons in applied engineering:
  • The Thermostat, the Override and the Timer should all implement the signal control interface. (Substitution Principle)
  • The signal control interface should just be concerned with being a programmable gate: two input wires, one output wire. (Interface Segregation)
  • Each component should do what it says on the tin. (Ronseal’s Law)
  • The factory should be responsible for constructing and wiring them together - see the sketch after this list. (Factory Construction)
  • Each object should pass its signal on to the next (Dependency Injection)
  • The override responsibility and the thermostat responsibility should not live in the same object (Single Responsibility Principle)
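
To close the loop on the factory bullet - continuing the invented names from the earlier sketch - construction and wiring live in one place, and either gate is a drop-in substitute for the other:

    // Hypothetical factory: builds the chain and injects each stage's input.
    std::shared_ptr<ISignalGate> MakeHeatingControl(bool overrideRequired) {
        auto timer = std::make_shared<Timer>();
        if (overrideRequired) {
            return std::make_shared<ManualOverride>(timer);  // substitutable for Thermostat
        }
        return std::make_shared<Thermostat>(timer);
    }

    // The boiler relay only ever sees the interface:
    //     auto control = MakeHeatingControl(false);
    //     bool heatingOn = control->Output();
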
Until next time: Enjoy your warm house, keep coding, and remember to think inside the box!

 * = Actually not a metaphor, our house really is brass monkeys right now.

Monday 3 February 2014

Programming with the Man Flu (Comfort Zone Programming)


To cut the explanation short, last week I was taken out of action for four or five days with Man Flu. And like all sofa-bound individuals, I reached for the trusty laptop and immediately opened my IDE to get some uninterrupted* programming time in, determined that this wasn't going to be a total waste of time.

* = Programming time was frequently interrupted by naptime.

Attempts at developing software under these conditions were met with varying degrees of success - some tasks were quite possible, others not so much. Your brain uses some 20% of your energy, and in my energy-restricted flu-coma my immune system had clearly decided to shut down those pesky thought processes that were hogging system resources.

The end result was a pared-back subset of programming and refactoring skills: I focused on just doing the things I knew how to do - the core actions I could perform on autopilot. Coding in my Comfort Zone.

Comfort Zone programming is a reflection of your skillset, and of your confidence in those skills. I naturally tended toward the things I found easiest and away from 'difficult' tasks - when you have no brainwidth, go with what you know.

Over this time I performed several pre-emptive refactors of non-critical systems. Clearly I could still spot problems - Demeter violations, better injection patterns - but I lacked the judgement to ignore problems off the critical path. I was also quite happy spending a few hours writing packet processors for data bytestreams, which I attribute to all the image-processing and network code I've written in the past rising to the surface.

While dying from Man Flu isn't a great way to identify your comfort zone - and not an experience I'd recommend to anybody - the process of identifying what you are good at provides value not only because you can Keep Doing It, but also because you can stretch further, learn something and better yourself.

Comfort Zone programming has helped me learn which skills I default to when times are hard - what my reflex actions are - who my autopilot is.

Learning what to learn is one of the hardest skills to develop, and in this instance I've got a list of the "difficult" tasks my fever brain avoided, and my programming autopilot wasn't able to handle.  Over the coming weeks I'll be able to reflect upon that list and get practice at my weaker refactoring and programming skills, to drill them into my brain until even my autopilot can manage them.

The take-home lesson here - and your challenge, should you decide to accept it - is to get out of your comfort zone. And a great way to do that might just be to get into your comfort zone and see what it looks like. Identify the skills that you don't use enough; follow the advice you frequently ignore.

Sunday 26 January 2014

Drive By Refactoring


Today's post is a code post, nominally about homebrew programming but actually about programming mindsets in general. I've done some code review recently - professional, personal, and random internet people's code, where I just look at what they are doing and see how it differs from what I've seen before.

I'm going to talk about the style of code review I've been enjoying recently - the drive-by refactor. In a formal review or refactor process, it's sometimes difficult to know what to focus on, and it's easy to go into too much or too little detail. The drive-by refactor is a way of identifying which changes you need to make now and which can wait.

As an example, I was quickly browsing the API of a package, and about to use it when I noticed an unusual declaration...
public: virtual void MyMethod( int parameter ) {}
Spotting a method in an interface with a default implementation led to a drive-by refactor. Why a default implementation? Does the concrete class even implement it? This rather poor practice has probably turned a compile-time error into a runtime error.
Of course I changed it to a pure virtual / abstract declaration and boom! Compile error: could not instantiate abstract class. The game is afoot!
I followed the trail of breadcrumbs and found the problem. The method used to take a filename and now takes a file handle, but the interface had kept both, with the obsolete one providing a default implementation.
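
To illustrate, here's a reconstruction in C++ - the names are invented, not the actual package under review - showing both the smell and the one-line fix:

    #include <cstdio>

    // Reconstruction with invented names - not the code I was reviewing.
    class IPacketSource {
    public:
        virtual ~IPacketSource() = default;

        // The smell: an interface method with a silent default implementation.
        // A concrete class that never overrides it still compiles, so a stale
        // call site fails at runtime instead of at compile time.
        virtual void Open(const char* /*filename*/) {}  // obsolete signature

        virtual void Open(std::FILE* handle) = 0;       // current signature
    };

    // The drive-by fix: make the obsolete method pure virtual (or remove it)
    // so every implementer and caller must take a position at compile time:
    //
    //     virtual void Open(const char* filename) = 0;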

The drive-by refactor is fun because it is generally small, self-motivated and quality-driven. You get to flex one of the too-often underused refactoring tools - the gut feeling. Often referred to as code smells, your gut reaction to a bad fragment of code can count for a lot, and you get the chance to eye-spy a problem before it gets any worse.

Drive-by refactors are great for those moments when you spot a problem and want to investigate, but sometimes you don't need to take any action beyond noting that you've found a problem - in that case, leaving a tagged refactor comment and carrying on driving is enough.
A drive-by review is a quick overview of code where you don't get to see the detail, but do spot the sore thumbs. You are only going to notice things that are obviously wrong, and you'll find your tolerance for "wrong" shifts the more you review.

This brings us to the second thing a drive-by spots: review comments where past-you or somebody else has written "Law of Demeter violation", "method doesn't use its parameter", "method operates on side effects only" or "this class has too many constructors". You might see one of those and really agree with it, and decide now is the time to fix it - or you might decide it's not yet important enough to take action.

And that's the take-home lesson for today - when you spot a problem, you should make a note of it. Make the note in the code, because that's the only place it'll be read. Make it useful to the future programmer: be brief, descriptive, concise. To start, you only need to point out problems, and you can stop there. A typical drive-by code review for me involves spotting problems or responding to previous review comments as much as it does leaving review comments or making code changes.
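
As for the note itself, any tag your team can grep for will do. These are invented examples of the kind of one-liner I mean, echoing the comments above:

    // REFACTOR: Law of Demeter violation - reaches through order->customer->address.
    // REFACTOR: 'parameter' is never used - drop it or use it.
    // REFACTOR: operates on side effects only - consider returning the result.
    // REFACTOR: too many constructors - consider a factory or a builder.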

Typically I'll invest time in the refactor if the functionality is on the critical path to my current sprint's feature set, while if it's incidental I'll leave a review comment. One day I'll be working in that code anyway, see the previous review suggestion, and it might be a good time to deal with it.

By learning to do drive-by reviews, you practice quick decision making, and it's this practice that winds its way into your day-to-day programming. Instead of writing something you know to be second-rate, you might find yourself adding a review comment explaining what's wrong with it. Before long you find yourself writing the review comment, then stopping to fix the problem. It starts to become automatic, and you can do it at the smallest level.

Review fast, review often. Read a lot of code. Read what you wrote today, read what you wrote yesterday, and learn from it. See which shortcuts you took, and unlearn your bad habits. Study history or find yourself repeating it. Drive by yesterday's code, free your mind, and make different mistakes tomorrow.

I hope you've learned something from the drive-by refactor and are able to add it to your toolbox. Review code every time you read it. Drive by a lot of code, because the more you see, the more you learn - but stop when you need to.

Saturday 4 January 2014

Coffee Cup


Hello hello and happy new year!
The cup of the day is the Australian Skyberry - a brew I've had the pleasure of encountering before, but this time available in my local supermarket. It seems 2014 may see the dawn of civilized drinks - either that, or it was an overflowing seasonal specialty. Nonetheless it has been prepared, as have I, and together we will get acquainted.

The first warning sign is that it was one of a selection of ground coffees in the finest range - the Skyberry was in good company with an Ethiopian Sidamo and some South American thing - so I'm hopeful that the drink reaches me in prime condition.

The second warning sign - as if buying ground coffee isn't problematic enough - is that the Best Before is months in the future. This rather stretches the definition of "best". Technically, it'll be worse after the smugly printed date, and so, by process of elimination, better before it; but "best" before implies some objective yardstick by which you can measure the coffee, and if you want to get specific about "best" before rather than "better" before, then your process should be counted not on a calendar, but by a wristwatch.

Nonetheless, in a hopeless sea of optimism and a toxic cocktail of stubbornness and thirst, I purchased my beloved Skyberry. Preparation was a two-shot black americano, slightly strong. And I figured I'd take my time to decide on a sugar or biscuit accompaniment - my coffee-shop go-to at the moment is to dunk a spoon to leave a few drops of coffee on it, and set it aside on a saucer about half full of brown sugar. Once the drink is complete, take the sugar as required for a sweet sugar rush. It's a strange ritual, but it counteracts the occasional bitter aftertaste of some high-street blends.

Well, as could only be expected, the poor drink was doomed before it passed my lips. It can only have been a Christmas gimmick to pad the shelves, and it was as stale as a New Year turkey sandwich. I managed about a finger or so, but its flat, stale taste suffocated the taste buds with a dry, ashen finish. It wasn't terrible, but I own an instant coffee that's less unpleasant, and there didn't seem to be a reason to continue any longer than needed to finish this write-up.

So I bring you the disappointment that was supermarket ground coffee, defined by its remarkable ability to start with something delicious and completely ruin it. To continue the theme, next week I'll be playing SimCity!

I am Roger in Technology, thanks for reading.

Thursday 2 January 2014

Digital Rage Management

So I started writing a blog post about DRM but got bored and wandered off; the topic has stayed on my mind during the commercial season, though. It bugs me that we've gone from vinyl to tape to CD to MP3 to DRM. I don't feel like we won that last technological step; however, the sands are still shifting.

Television services have benefitted from the same three funding models that we have now - adverts, subscription, and pay-as-you-go (whether rental or owned).

Services like Netflix are subscription-based, while YouTube and Vevo are largely advert-funded - although this is changing. Media companies like HBO and Virgin offer subscription and rental options, while iTunes, GooglePlay et al. are offering the new model of PAYG-owned DRM.

Meanwhile, traditional piracy always had the edge over slow-delivery, quality-limited or poor-choice content providers, but it is struggling to keep up with HD and 4K on-demand PAYG services. It's the convenience, not the cost, driving the consumption market.

So, why do I think the consumer lost out on the last step? Well, as long as the content provider's service is more convenient than piracy, they will take our money. My background concern is a very shallow one - it's about limited access to a century of education and entertainment that we potentially - but not actually - have access to.
But this isn't a loss to the consumer; it's a loss to humanity, and far beside the road I'm travelling at the moment.

The loss to the consumer is the fractured nature of the market. Fractured, competing music companies retailing vinyl imposed little cost on the consumer, who could easily walk into a record shop and buy music. And actually, competition between artists and record labels was probably a good thing for the consumer, leading to musical and eventually technological revolution. OK, there were a few record shops, but retail was divided from production by physical distribution. The system was PAYG-owned media, but you had the freedom to store and use the media as you wanted, and it was easy to find and buy.

Under the new system, as I browse Netflix, Vevo, YouTube, Picturebox, Blinkbox, iPlayer, BSkyB, iTunes, GooglePlay, SteamPlay, Virgin, HBO and a dozen other services, they all have one thing in common - a limited subsection of content - and they are often moving toward a PAYG-owned model instead of a subscription or a PAYG consumable item cost. I don't mind a PAYG consumable item because I pay for it, watch it, and I'm done. The transaction is complete, like a cinema ticket or a bucket of popcorn. It's cheap, and once it's gone, it's gone.
But when I'm forced into a PAYG-owned purchase, it's going to cost a lot more than the rental, and I'm at the whim of the content provider still carrying and supplying my content to my device, OS and country. Some of the aforementioned providers are better at support than others, but each has a similar yet different selection of content.

Imagine having to go to the Virgin shop to buy Virgin-label records and to the EMI store to buy EMI vinyl. And some of your vinyl would self-destruct after X playbacks or a set number of days.
Or all of your HMV CDs would self-destruct when the HMV shop closed down, or your DVDs would skip and stutter during peak hours.

But this is where we are, led by the leash of modern DRM. If I buy a film on GooglePlay, I can later buy the same film on another service - and I might have to, if GooglePlay stops providing it or doesn't support my playback device. Digital rights - such as they are - are a stop-gap. I don't want to have to browse a dozen content providers, each asking for unique login credentials and push-button access to my wallet. I don't want to check which provider I bought Battlestar Galactica from, or stand puzzling over whether I should spend £1.89 on a single episode of Dr. Who or series-link it from a different provider.
For every preceding format my media could sit on a shelf - albeit a digital shelf in the case of MP3s - without this additional barrier between me and my content. But I wouldn't have gone this far without proposing a solution. The 21st-century solution is, as always, to go meta.

Let's assume that a cloud-hosted - or better yet, distributed - database with an authenticated token vending machine manages all of my accounts. I have a single login, and it records which media was bought from which service, providing me with sort, search and filter options for playback to the requested device.
Now, let's go multi-user. We each have a login and can watch our own media. Each encrypted token represents the digital right to watch an episode, movie or whatever, and is authenticated by the network.
My meta-ownership of digital media means my distribution network can publish new tokens for other users. Perhaps I want to gift you my copy of Ghostbusters - it's in my account and I can watch it, so I instruct the network to publish a new token for you and BAM! One set of Google, Netflix, iTunes et al. accounts, with a meta-layer of authentication to prevent abuse - my Ghostbusters token would of course be revoked. It would still exist in the cloud somewhere, on GooglePlay or Blinkbox or wherever, as a purchased digital right, but I'm abstracted from that by my single-sign-on meta-account layer.
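
Purely as a back-of-the-envelope illustration - every name here is invented, and a real system would need cryptographic signing and provider integration rather than a bare map - the ledger's issue/transfer/revoke behaviour might look like this:

    #include <map>
    #include <string>

    // Illustrative only: invented types sketching the meta-layer token ledger.
    struct Token {
        std::string mediaId;     // e.g. "ghostbusters-1984"
        std::string provider;    // the service holding the underlying right
        std::string owner;       // meta-account currently allowed to watch
        bool revoked = false;
    };

    class TokenLedger {
    public:
        // The vending machine: issue a token against a purchased right.
        int Issue(const std::string& owner, const std::string& mediaId,
                  const std::string& provider) {
            tokens_[nextId_] = Token{mediaId, provider, owner, false};
            return nextId_++;
        }

        // Gifting: revoke my token and publish a fresh one for you. The
        // underlying right still lives with the provider; only the
        // meta-layer's view of who may watch it changes.
        int Transfer(int tokenId, const std::string& newOwner) {
            Token old = tokens_.at(tokenId);     // copy the details
            tokens_.at(tokenId).revoked = true;  // revoke the giver's token
            return Issue(newOwner, old.mediaId, old.provider);
        }

        bool MayWatch(int tokenId, const std::string& user) const {
            auto it = tokens_.find(tokenId);
            return it != tokens_.end() && !it->second.revoked
                   && it->second.owner == user;
        }

    private:
        std::map<int, Token> tokens_;
        int nextId_ = 1;
    };
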
We *could* de-restrict it and just share accounts, of course, but the meta-layer with access to multiple content providers gives me a single sign-on and a single interface, so we might as well go one step further: respect the DRM, only allow me to watch the content I've paid for, and add sharing by issuing and revoking tokens. Lastly, this poses the question: what if we both pay for the same film? Well, the collection of accounts will already own the film. The cost could be divided equally between users - the second user gets it half-price and the value is credited to the first account.
Or the money could be given to the content provider, or directly to the artist. Or to a local children's charity. Or, my personal favorite, it goes into a fund that can be used to "upgrade" a purchase from SD to HD, or from HD to 4K - because I really hate having to buy the same film over and over again as each format changes. The fund would also be used to re-buy content if one content provider stops carrying something we've paid for, or if we have to switch content providers to support new hardware or codecs. This latter point is in the best interest of the content provider: if they have a better selection of long-lasting content, then we'll switch to them and the re-buy fund goes straight into their filthy gold-laden pockets.