Saturday, September 05, 2009

iPhone development using standard web technologies

Fresh after giving my talk at BarCampBright4, I wanted to get my notes down from my session on iPhone development using standard web technologies. This all started when I wanted to build an iPhone application for my Latest Scores Twitter app. So I bought a book and made my way through it slowly, trying to decode Objective-C before coming across the part I was really after: the UIWebView class.

What I ended up arriving at was the ability to load and render an HTML file with its own imported resources such as CSS, JavaScript and images. The real magic in Objective-C is this:


// Build a file URL to the bundled index.html and load it into the web view
NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"index" ofType:@"html"]];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
[webView loadRequest:request];


This creates a URL pointing at a bundled resource named 'index' with the file extension 'html', builds a request from that URL, and passes the request to the instance of UIWebView.

From that point, as long as all of the resources that the index page requires are bundled into the project, you can load the page and have that actually be the application.

So that's all nice if you have a certain type of application which doesn't require any device interaction. For example, if you had a blog/news site and wanted to make it into an application, you could just fetch the content and display it. You can also change a few settings in a file called Info.plist to set the icon image and application name.
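As a rough illustration, the relevant Info.plist entries might look something like this (the file name and display name are placeholders):

```xml
<!-- Illustrative Info.plist fragment: application icon and display name -->
<key>CFBundleIconFile</key>
<string>icon.png</string>
<key>CFBundleDisplayName</key>
<string>Latest Scores</string>
```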

That template project is available on GitHub, http://github.com/robb1e/iWeb

However, if you want device interaction you'll need a richer framework, and that's where something like PhoneGap.com comes in. With an abstracted JavaScript file that works across iPhone, BlackBerry and Android, functions are available to access the contact list, audio playback and other device features.

To play an embedded audio resource, for example, you can write the following JavaScript, provided you've included the PhoneGap.js file.


// Plays an audio file bundled with the application
new Media('beep.wav').play();


With both of these projects there is one caveat with Xcode: it wants to compile JavaScript files. You have to make sure you set those files as resources to copy, not to compile. This can be done in the target list on the left-hand side.

To extend the idea of using standard web technologies whilst not building web sites, this next example uses Adobe's AIR runtime and the HTML, CSS, JavaScript and images from the first example. This is also available on GitHub: http://github.com/robb1e/AirWeb

By creating the application.xml file, you can tell AIR to launch a file as the initial content, and that will be rendered using its WebKit engine.


<content>www/index.html</content>


To run the desktop app, install the Adobe AIR SDK and then run the command:


adl application.xml


This launches the application in debug mode. To create an AIR file, you need to compile it using ADT.

This shows that the same technologies and languages you'd use to build a web page can be used both for mobile devices, with or without device interaction, and for client applications installed on a computer.

There are other ways of achieving the same result, such as using Apple's web namespace tags in the markup with HTML5 tricks like offline storage. There are also other libraries like jQTouch, so check out what else is around too.

The "write once, run anywhere" idea may just be coming into reality.

Sunday, May 17, 2009

Metadata Cloud Gaming

So, what the hell am I talking about? Let me give you a concrete example, and a personal wish list of mine for many a long year.

When I was growing up, one of the most captivating games was Championship Manager. It allowed the player to experience running a club: transfers, tactics, training and more. However, the thing it always missed was being able to actually play the matches. In terms of actually playing the game, when I was younger the FIFA/EA Sports combo was hard to beat, although now I prefer the Konami Pro Evolution Soccer range.

I've always thought that it'd be amazing to merge these two together, and I think now, with Web 2.0/mashups/cloud computing and web APIs, this really should be possible. I could easily see a situation where as a gamer I could mash two of these together. For the tactical side and the running of the club, I could use Championship Manager, and as a simulator for the actual matches, I could use my game of choice. In this instance it really comes down to player stats. If I could load the fixture from Championship Manager, with the teams, players and 'stats' to boot, I could play out the match in my simulator and then upload the result along with the accompanying stats into the cloud, where my 'managerial' game could pick them up, and vice versa.

Really, this is ripe for the picking.

Sunday, April 26, 2009

The Great Twitter Camera Migration

I've just started a fun little side project called the 'Great Twitter Camera Migration', check it out: http://cameramigration.blogspot.com

Thursday, February 26, 2009

XFM Bot

Just put together an XFM bot to scrobble the xfm.co.uk radio station's recently played tracks and push them into Last.FM.

How compatible are you?

http://last.fm/user/xfmbot

Source code is here: http://github.com/robb1e/xfmscrobbler/tree/master

Friday, January 30, 2009

Vendor Driven Architecture

So, you have your off-the-shelf application driving some important business process. Maybe it doesn't quite do everything you need, so you think about altering the application through plugins, changing source code or otherwise. Some of these changes may drive you further away from that off-the-shelf product's road map, making upgrading later difficult and expensive. Perhaps you don't intend to alter the code, but you may think that later on there may be a better off-the-shelf product that suits the needs of the business, and that changing solutions is the best thing to do.

How these tools are used really drives the way they should be deployed and designed into a business process. If you only have one client talking to this application, you probably don't need to think too hard about how to solve this problem. However, if you have multiple clients wanting to talk to this application, you may want to give it some thought. If you have multiple clients accessing the application in multiple different ways, then you definitely want to think about this problem.

I'm not an advocate of big design up front, but what I'm proposing isn't quite that. What I'm suggesting is capturing the data model and the contract of interactions early, and using these to drive an abstraction layer between the off-the-shelf product and the clients using that product. By doing this you've isolated the application, meaning you can upgrade more easily even if there are breaking changes, and you can introduce additional application logic, all behind the safety of a contract and data model. After you've done this, anyone who uses your abstraction layer need not know what you're using underneath. The way it should be, really. This also defines a contract to work against, meaning that applications which talk to the abstraction layer can be tested more easily against known behaviours.
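As a rough sketch of what I mean, with all names invented for illustration, the contract can be captured as a small interface, with the vendor product hidden behind an adapter:

```java
// Hypothetical contract agreed with clients; nothing vendor-specific leaks out
interface OrderDirectory {
    String statusOf(String orderId);
}

// Adapter wrapping the off-the-shelf product behind the agreed contract.
// Swapping the vendor later only means writing a new adapter.
class VendorBackedDirectory implements OrderDirectory {
    public String statusOf(String orderId) {
        // A real implementation would call the vendor product's API here
        return "SHIPPED";
    }
}
```

Clients depend only on the OrderDirectory contract, so they can be tested against a stub implementation with known behaviours, and the product underneath can change without them noticing.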

We can start to push the boat out a little here and think about roles and policies for access. There may be different types of clients and users who wish to use the abstraction layer, and they may have different permissions and access to different resources, so how can you control access and protect your resources? You may be able to control that at the application layer, but what if you've added plug-ins or additional application logic layers? And what if you change the application which manages permissions, and how do you bind them to the resources at the abstraction layer? It may be better to think in terms of the abstraction layer and data model rather than what's behind the lines of the abstraction. This way, binding to resources will be easier to control. While we're dealing with authorisation, authentication doesn't stray far: you'll want a way to authenticate the clients and users attempting to use the abstraction layer, and you may want that authentication to come from multiple sources, say a database user store or an internal directory service. This will make it easier for those users to gain access to the systems, and easier to administer access through roles and policies for clients and users.

What we now have is a platform: an off-the-shelf application, with or without additional application logic, wrapped with an abstracted and understood data model which must be accessed via an authentication and authorisation mechanism. Put a bow on it, you're done.

Depending on your network security preferences, you could now expand this in two ways. For every off-the-shelf product you employ for different business processes, you could put an abstraction layer on it with an understood data model and give access to that abstraction via the same authentication and authorisation layer you've created for the first platform, or you could replicate the authentication and authorisation layer and have lots of self-contained, secured platforms. It doesn't particularly matter which is employed, but what should fall out of this is a common data model across several platforms, with a uniform authentication and authorisation layer.

It does raise the question of why you'd want to do this in the first place; you are, after all, creating more work than strictly necessary. There is a temptation to say "if we're building this abstraction layer, we may as well replace the off-the-shelf component itself". Well, let's not get ahead of ourselves. We have to remember where the business value comes from, and that's generally in products and services, not in business support and operational support systems themselves. However, by abstracting the applications used and exposing the data, the cost of integration, migration and maintenance is lowered. This model can be applied to internal exposures and to exposing data on the web.

I just want to close with what I believe is the most important aspect here, and that is the business case. There is a business case for abstraction, but it does come at a cost. What it brings are contractual interactions, which can lead to common, agreed-upon, resource-based exposure.

Saturday, December 27, 2008

The fall of Zavvi, along with several other soon-to-be-notable high street absentees, and the troubles the banking system has been going through have got me thinking.

First I want to tackle failure of business processes. Zavvi's main reason for going out of business, according to the press, was a domino effect from Woolworths going into administration. Zavvi relied heavily on Woolworths' wholesale arm as a source of DVDs and CDs. From the outside in, it seems Zavvi operated with wholesale distributors of goods rather than dealing directly with manufacturers. There's nothing wrong with that of course; however, if you deal with a middleman who goes out of business, then you're left scrambling for a replacement at the same costs. No doubt, if you have one middleman, you've probably aligned your prices closely with (albeit probably a given percentage above) the distributor's.

Why wasn't this adequately highlighted as a risk? Why only have one supplier? If you must have one supplier, at least have a good idea of alternatives at the drop of a hat if that risk is realised.

This leads nicely into my second point: it almost seems that Zavvi suffered a misapplication of the Toyota Production System, in that they pulled stock into their stores so as to minimise inventory, and therefore waste. Keeping the time between ordering from the wholesaler and getting goods into the customer's hands short is a worthy goal to aim for. After all, unsold stock is not money in the bank. But again, this raises the risk of having only one core supplier.

Moving on to Woolworths itself, I find myself asking what Woolworths actually sold, and the only answer I can fathom is that it was a general store. They never specialised in anything, meaning they never had the buying power to lower their prices and pass the savings on to their customers, leaving them with a breadth of stock. In my opinion that stock wasn't a breadth of high-end stock; it was a breadth of medium- to low-end goods, odds and ends, and overpriced electrical/entertainment goods. If I needed a new set of tea towels, brilliant, but for the floorspace they wouldn't be able to justify just selling bits and bobs like that.

This leaves me asking, who are your customers, and what differentiates you from your competitors?

Finally, how could they expect to survive charging the prices both Zavvi and Woolworths seemed happy to charge? In both stores I could find a product, say a DVD, cheaper at a fairly well known internet-only business, play.com, not even on sale, than on the high street in post-Christmas/liquidation sales. Play.com does advertise nationally on TV, radio and in print, and although I won't argue the relative brand awareness of Woolworths and Play.com, it's not like people couldn't find Play.com.

I guess after all is said and done, as a business you need to know your customers, know your competitors, know your business processes and understand your risks and how to mitigate those risks. On top of that, I'd like to add you should always be asking these questions, always looking for improvements and changes.

Just look at Tesco, boy have they changed! They seem to know what their customers want, and are constantly changing, adding new product lines in store and online.

Saturday, December 20, 2008

Doing the Right thing vs Doing it right

There was one lightning talk at XPDay that has stuck in my head, and I want to try to reproduce the discussion here. I'm sure I'll get some details wrong, but you'll just have to bear with me. The talk was on doing the right thing versus doing it right.

If you look at the image below, you'll see an XY graph. Along the Y axis is doing the right thing, i.e. building the right solution. Along the X axis is doing it right, i.e. building the solution well. There were some studies into these metrics, but I don't know what they are, so I'm just going to use pseudo numbers.

In the lower left quadrant, you'll see doing the wrong thing, and doing it badly. The cost of the project is greater than the average IT project, and the revenue is lower than expected. In the upper left quadrant you're doing the right thing, but you're doing it wrong. Your costs are still larger than average, but you are making more revenue as you're building the required product. In the bottom right you're doing the wrong thing, but you're doing it well, so your costs are lower than average, but your revenue isn't as good as expected. In the top right, you're doing the right thing and you're doing it well. Your costs are lower than average, and your revenue is higher than expected.

Now, of course, this is abstract and to be taken with a large pinch of salt, but there are some interesting deductions to be made. Firstly, we were told that according to the study, it's near impossible to go from building the right thing badly to building the right thing well. Looking at the rest, it seems intuitive that if you were doing the wrong thing badly, you'd want to do the right thing badly before doing the right thing well, but in actual fact, it's less costly and more efficient to do the wrong thing well and progress to building the right thing well.

So, where's your project on the quadrant, where do you want to be, and how do you want to get there?

Software Development: By Way

I tend to think the way libraries and frameworks are made could easily be applied to teams and projects. When building an application, you generally focus on solving the business problem with the best tools available. When you move on to the next problem, you might have that eureka moment where you find that sub-parts of the new problem overlap with the previous one. The fact that you've already solved this problem means you'd be silly not to reuse that work. Thus you have identified common functionality, and if you extract that into a new library, you have reusable code across more than one project. Brilliant.

The other way you'd come across common functionality is through conversation. In true water cooler fashion, this will often be coincidental, but again, if you can identify commonality and extract it, you are deriving business value. Not only are you reusing something, meaning time can be saved and all teams benefit from faster delivery, but with more eyes on the same code, the chances of it becoming more robust grow.

So far we've mentioned code, but this could just as easily be process and tools: build scripts, deployment practices, server provisioning and lots more too. The trick is to make discovery and sharing easy, and I think the key here is relationships and communication.

I think this model can be applied to projects too, where new projects are spawned to develop and manage commonality. This team doesn't necessarily have to consist of full-time staff; it could be part-time or ad hoc, but there should always be a project owner/lead, even if the project is sleeping. Anyone should feel they can contribute to the project, for the benefit of all the teams, even if there are full-timers on the project.

I think, over time, this will form a network of loosely joined teams surrounding a handful of core projects, almost like electrons around a nucleus. I see these core projects being things like:

* build scripts, including testing tools and reporting
* logging and monitoring scripts
* deployment processes
* network and hardware management and provisioning
* data access layers (especially if all projects share common relations like customers)
* service access layers, for accessing hosted services
* user interface asset management
* hosted tools support, e.g. source control, software repositories, wikis, mailing lists, story tracking, continuous integration boxes etc

There's probably a bunch more too. But as I mentioned earlier, relationships are probably the most important aspect here. Establishing a community between the project leads and another for the wider developer group across all the projects will help, but you have to support the community, let it flourish. The more the community collaborates and communicates the more they'll drive towards commonality and reuse. Of course, people will have different ideas and prefer different tools, and that should be supported as long as the principle of spinning out or contributing to projects carries on.

Monday, December 15, 2008

XPDay Review

On Thursday and Friday of last week I showed up at XPDay, and attended some cool sessions including:

Nat Pryce on TDD of async code: http://www.natpryce.com/articles/000755.html

Matt Wynne on Lean Engineering: http://blog.mattwynne.net/2008/12/14/slides-from-xp-day-talk/

Had some interesting discussions with some cool people including:

http://gojko.net/tag/xpday08/

So, I was interested in Gojko Adzic's open space session on the new FitNesse SLIM implementation. It looks like it's much, much easier to wrap domain objects instead of creating fixture classes for testing. It's not quite ready yet, but I'm looking forward to a stable release.

There was also an interesting keynote from Marc Baker of LeanUK.org (http://www.leanuk.org/pages/about_team_marc_baker.htm) on his experiences of introducing lean principles into the NHS.

Tuesday, December 09, 2008

XP Day 2008

If you're attending, I'm presenting with Rags and Fabs on Friday morning. Hope to see you there.

Friday, November 28, 2008

Why I'm beginning to dislike frameworks

I've been having discussions with David and Dr Pep over the last few days about the pros and cons of development frameworks. This is probably a measure of my increasing level of cynicism, but I found myself looking at a framework that T4 was considering and thought to myself, "what is this framework going to stop me doing?"

David raises the point of the productivity curve; in most cases when picking up a new library or framework doing the 'hello world' stuff is easy and your productivity is high. As you come across issues or want to solve more complicated puzzles your productivity drops as you struggle with the design choices of others.

I think one way these frameworks can be assessed is by asking how much it would cost you to remove one. We're currently using Symfony, a PHP MVC framework, and it follows this pattern exactly. It's easy to get started, and doing easy things is easy, but when we want to stray off the beaten path, we get beaten ourselves. The documentation is sparse, and if we decided to use something other than Symfony, we might as well rewrite our project.

Now, I'm not trying to start a Symfony-bashing dialogue; I'm just using it as an example. In fact, Milan makes the point that this particular framework is open source, and if we have problems, we should commit something back to the project. Which is the right way to view this.

It still doesn't stop me from having second thoughts about using a framework again. Give me swappable libraries every time.

Thursday, November 13, 2008

How Sainsburys are Coming into the 21st Century

I was in my local Sainsburys the other day, doing my shopping (what else?). As I was at the checkout I noticed a few things.

First, you've probably noticed gift cards at the checkout for iTunes or something similar; however, what I noticed this time was what I want to call a "student food card". This is a two-part card: one for topping up the account, one for using credit from the account. This is plainly aimed at the concerned parent who wants to make sure their kids are eating properly whilst at university; those parents who do not want to put money into their kids' account so they can spend it on nights out and Pot Noodles for dinner (though the flaw here being that Sainsburys sell booze too, but we'll brush over that). I did think, though, that this could be used the other way round too: what about someone with an ageing parent or perhaps a lowly paid sibling? Wouldn't you want to make sure they were eating properly too?

So, I thought that was cool, but there was a little more to come.

Second, there were no bags at the checkout. I told the cashier, who told me they'd stopped putting bags out. She pointed me to a sign, which I'd ignored as an 'out of bags' sign, saying as much. Sainsburys, in the name of environmental awareness, are trying to get their customers to reuse their bags, so you now have to ask for carrier bags. They've given away store points for ages for reusing bags, but that obviously isn't having the desired effect.

One of the things they're doing to help their customers is providing a free SMS reminder service. You send a message saying when you usually go shopping and it sends you a reminder a few hours before, saying not to forget your bags.

This could be a tentative first step, but I can see much more here. They're only providing this until the end of the year, but how about linking this to my store points card and, instead of charging me for SMS, removing some of those points? Also, by linking your mobile to your store card, just think of the data mining possibilities. You wouldn't have to tell them when you go shopping; they have that data. Also, what better direct marketing could you ask for? Sainsburys have about ten years of data on me; they have a pretty good idea of what I buy and when, and could send me special offers just for me, with some kind of code that a cashier could enter.

We'll see what happens. It's an odd combination of convenience and scariness.

Tuesday, October 28, 2008

Agile Software Development and Black Swans

Throughout reading the excellent book The Black Swan, by Nassim Nicholas Taleb, I was trying to think of scenarios and examples that I've seen that I could apply to what Taleb was talking about. Although I don't really follow economics, I did find myself thinking about agile software development.

I don't want to go into what Black Swans are, other than to say they represent a highly improbable event; if you want to know more, pick up a copy of the book. Essentially, Taleb says that when a Black Swan occurs the effects can be disastrous, because such events are often ignored when calculating risks. Of course, many Black Swans are unknown unknowns, so how can you measure something like that? Well, you can't, but as the saying goes, "it's better to be broadly right than precisely wrong".

So, how does this apply to agile? Well, the way I've practised it, it's about driving out uncertainty by planning a prioritised work stack for a limited scope (generally 3 months). This minimises risk by only addressing immediate and near-term issues, allowing a project to be flexible (think broadly right rather than precisely wrong). The daily stand-ups, acceptance demos and retrospectives are all forums for identifying risk early, in the hope of lowering the overall risk to a project's delivery. Finally, and above all, agile and its practices of test-driven design, continuous integration and burn-down charts display empirical evidence of delivery, rather than anecdotal.

In today's software world, where it's acceptable to deliver functional, if not complete, applications into service, why wouldn't you want to work with a limited, prioritised stack with empirical evidence to demonstrate readiness?

Tuesday, September 16, 2008

Re-Developing an Application using TDD against an Established API

The last few months have provided an interesting experiment to test our application testing. For several reasons that I won't go into, we decided to rebuild one of our web services, and we made the decision to try to keep as many of the tests we had as possible. We found out quite quickly that moving unit tests to a new project when you're redesigning a system just isn't feasible. Our unit tests were too tightly tied to the design (well, that's the point of test-driven design, right?). However, our acceptance tests, which test our API as a black box, were a perfect candidate for measuring our progress.

It turned out to be more useful than we'd realised. When you have working black-box tests, you can do whatever you want to the internals of the application, as you already have appropriate tests to validate it. This is exactly what we did.

We could happily start turning on our acceptance tests as and when we felt a feature was implemented. By insisting that we should not change these tests, we gave ourselves an excellent benchmark for completeness. Of course, this is quite a rare occurrence, but it goes to show that if you test at different layers of your application and regard APIs or interfaces as unchangeable, you can re-implement to your heart's content.
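The idea can be sketched in a few lines, with all names invented for illustration: the black-box check talks only to the published interface, so a complete rewrite behind that interface can be measured against the same unchanged tests.

```java
// The published API; by agreement, this never changes during the rewrite
interface ScoreService {
    int latestScore(String team);
}

// The rewritten implementation; its internals are invisible to the check below
class RewrittenScoreService implements ScoreService {
    public int latestScore(String team) {
        return team.isEmpty() ? 0 : 2; // stand-in logic for illustration
    }
}

class AcceptanceCheck {
    // A black-box acceptance check: it exercises behaviour via the interface
    // only, so it runs unchanged against both the old and new implementations
    static boolean passes(ScoreService service) {
        return service.latestScore("") == 0 && service.latestScore("Spurs") > 0;
    }
}
```

Turning a feature on then simply means the corresponding check starts passing against the new implementation.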

Wednesday, September 10, 2008

Convergence, Divergence, Convenience and Quality

When I got my first job, I was really keen to get a great stereo. At the time the thinking was that to get quality music playback, hi-fi separates were the way forward. Each component was stand-alone, loosely joined by standard interconnects (phono, coaxial or fibre optic), and was easily replaced, upgraded or repaired without affecting the rest of the system. This was opposed to an all-in-one hi-fi where, when, say, the CD player breaks, you need to replace the whole damn thing.

Now the trend is convergence. As Dr. Pep mentions, I have been tempting him, and myself, into buying an iPhone. The obvious comparison is that the iPhone is convergence personified. It's a mobile phone, a 3G device, wifi-enabled, an MP3 player, a GPS receiver and a pseudo-computer that fits in your pocket.

What happens if one of those features breaks, or you need to change the battery? Well, it seems you're in trouble. But that's the price of convenience. The alternative is a mobile phone (with or without 3G access), a laptop with wifi and a 3G dongle, an MP3 player and a GPS receiver. You can of course get other devices than the iPhone that do many of these things, but let's face it, they're not an iPhone.

So here's the dilemma, I want to upgrade/get new features like GPS and 3G access. But I already have an iPod, a mobile phone and a wifi enabled laptop. If I were to buy the iPhone, do I throw this all away? If so, it becomes an expensive upgrade.

Does it follow that more convergence means fewer devices, means more time on a single device, means wearing that device out sooner, means expensive upgrades more often? And how does having one be-all-and-end-all device affect my battery life? I don't want to have to plug something in every four hours.

Wednesday, August 13, 2008

Pushing the Boundaries of Testing and Continuous Integration

At Agile2008, I had the opportunity to present an experience report with two colleagues, Raghav and Fabrizio, about bringing robustness and performance testing into your development cycle and continuous build. Below is the video, along with the slides. Enjoy.

The submission can also be read here.


Pushing the Boundaries of Testing and Continuous Integration from Robbie Clutton on Vimeo

Sunday, August 03, 2008

Enterprise Agile vs Start Up Agile

Discussions in the office typically bring interesting conversation, and as I head to Agile2008 (I sit on the flight whilst writing this), I wanted to ask: what's the difference between enterprise agile and start-up agile, and how does that difference affect not just the maintainability of the products built, but also the innovation of the respective engineering group?

There is always the classic trade-off between feature delivery and quality, but is it really the case that start-ups are generally exempt from these quality metrics? If so, what other practices can be put in place instead of code metrics?

One of my memorable quotes from high school came from my physics teacher, Mr Shave. He said to the class, 'you can never prove anything, you can only disprove something'. This applies here with quality metrics. Code coverage, convention and duplication don't prove quality in code; even integration testing or robustness and performance testing do not prove anything, they can only show that your system doesn't fail under certain conditions. However, all of these steps do increase confidence in a product, and that is an integral part of any product development (confidence, that is).

You can still do none of this and be agile, right? These practices are generally encouraged, but agile is about customer engagement and acceptance, and if your customer knows what they're getting and they accept your stories, then it's OK, right? What is enterprise agile anyway? Do enterprise developers build a product, deliver it to an in-life team and then move on? Does that require higher confidence, as you're handing over? Is a start-up developer more likely to support that product in-life, and as such more likely to stick around, fix issues and add functionality?

GMail Log4J Appender

Application logging is vital for diagnostics on a server product, but there can be so much of it; how can you tell what to watch or follow? Through tools like Log4J, you can have separate logs for different levels (typically debug, info, error and fatal). In Log4J, these outputs are handled by log appenders, which can be anything from console or file based to custom log appenders. One of the custom appenders I've been playing with recently is email logging. Our team has some excellent scripts, written by Senor DB, which scan log files for patterns and send email reports based on their findings. I wanted to see if I could build that email reporting directly into the application. I wanted an email-based appender, and here's how I got it.

Now, there is an SMTPAppender in the Log4J package, but I wanted to use GMail, and that appender doesn't quite set up properly for GMail. I used Spring's implementation of JavaMail for sending emails and extended the SMTPAppender from the Log4J package. The method you'll be most interested in overriding is 'append'; this gets called when your logger is called into action, depending on the settings in your Log4J configuration.

public GmailAppender() {
    super();
    // Configure the Spring JavaMail sender for GMail's SMTP server
    notifier = new JavaMailSenderImpl();
    notifier.setHost("smtp.gmail.com");
    notifier.setPort(587);
    notifier.setUsername(yourGmailEmailAddress);
    notifier.setPassword(yourGmailEmailPassword);
    Properties props = new Properties();
    props.setProperty("mail.smtp.auth", "true");
    props.setProperty("mail.smtp.starttls.enable", "true");
    notifier.setJavaMailProperties(props);

    // Default the host name to the local machine's name; it can be
    // overridden from the log4j configuration
    try {
        InetAddress addr = InetAddress.getLocalHost();
        hostname = addr.getHostName();
    } catch (UnknownHostException e) {
        // fall back to whatever hostname is configured
    }
}

For brevity, I've used the constructor to build the notifier (JavaMailSenderImpl from Spring) with the desired properties; you could just as easily use properties in the log4j config, and I'll do just that for the host name (which for me identifies which server the application is running on). You can see above that we've set the SMTP host for GMail along with the port, plus the email address and password the email will be sent from. The little extra bit of magic is the pair of properties 'mail.smtp.auth' and 'mail.smtp.starttls.enable', which GMail requires. The last block gets the default machine name.
 
@Override
public void append(LoggingEvent event) {
    super.append(event);
    SimpleMailMessage message = new SimpleMailMessage();
    StringBuilder builder = new StringBuilder();
    builder.append(getLayout().format(event));
    // getThrowableInformation() is null when the event carries no exception
    if (event.getThrowableInformation() != null) {
        String[] stackTrace = event.getThrowableInformation().getThrowableStrRep();
        for (String line : stackTrace) {
            builder.append(line).append("\n");
        }
    }
    message.setText(builder.toString());
    message.setTo(emailAddress);
    message.setSubject(event.getLevel().toString() + " on host " + hostname);
    try {
        notifier.send(message);
    } catch (MailException ex) {
        // don't let a mail failure take down the logging path
    }
}

Using the SimpleMailMessage (again from Spring), we add the text, the address and the subject. The parts to grasp here are the layout and the stack trace. The layout is derived from your configuration; this is where you'd typically set the log message heading, including timestamp, thread name and other supporting information. Passing the event to 'getLayout().format(event)' returns this nicely formatted string. To get the stack trace, we grab the array (each element being one line of the trace) from the event's throwable information. Oddly, getThrowableStrRep(), despite its name suggesting a single string, returns that array without newline characters, so we have to join the lines ourselves before setting the text as the message body. Finally, we construct a useful subject containing the log level and host name.
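The body-building step can be sketched in isolation. The class and method names below are my own, not Log4J's: the idea is the layout-formatted line plus the joined stack trace, guarding against events that carry no throwable, since getThrowableInformation() returns null in that case.

```java
public class EventText {

    // Sketch: build the mail body from the already-formatted layout line
    // plus the stack trace array, tolerating a null trace (no throwable).
    static String body(String formatted, String[] traceOrNull) {
        StringBuilder builder = new StringBuilder(formatted);
        if (traceOrNull != null) {
            for (String line : traceOrNull) {
                builder.append(line).append("\n");
            }
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        String[] trace = {
            "java.lang.IllegalStateException: boom",
            "\tat com.example.Foo.bar(Foo.java:42)"
        };
        System.out.print(body("ERROR something broke\n", trace));
    }
}
```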

log4j.appender.mail=com.iclutton.GmailAppender
log4j.appender.mail.Threshold=ERROR
log4j.appender.mail.layout=org.apache.log4j.PatternLayout
log4j.appender.mail.layout.ConversionPattern=%d{ISO8601} [%t] %-5p [%C{1}.%M() %L] - %m%n
log4j.appender.mail.hostname=important-live-server

I've mentioned the configuration a few times, and here is a basic one. It comprises the full path to the class and the threshold, meaning the lowest level to log at (I don't want an email for every debug entry, after all). Then there's the pattern I mentioned before, which includes the date, thread name, level name, class, method and line number, followed by the message and a newline. Finally there's the host name: each instance of my application has a different name to identify it, typically something like development, staging or production. To enable that property in the class, simply add a Spring-style named setter:

public void setHostname(String hostname) {
    this.hostname = hostname;
}

Properties like this can be set if you want the username and password of your GMail account to be configurable, or anything else for that matter; just don't forget to set them on the JavaMailSenderImpl object before the email is sent in the append method.

Monday, July 14, 2008

Some Guiding Principles for Supporting Production Apps

I just finished a two-week rotation supporting our production web services and wanted to jot down some thoughts. This coincided with being sent a link to a Scott Hanselman blog post about guiding principles of software development, which I really enjoyed reading. I thought I'd extend a few of those now that I've come out of my support rotation an enlightened developer (I hope).

Although Scott's blog is for Windows/.NET development, there are lots of goodies in there which can be applied or ported to any language. I'd like to focus on the support-based principles: tracing (logging) and error handling.

Scott mentions that for critical/error/warn logging, your audience is not a developer. Well, in this case I am one; however, I may not know the application, so I need as much information as I can get. This leads me on to my first extension: transaction IDs.

Now, there are really two kinds of transaction IDs, especially when dealing with a service-based application. First, every invocation of your service should be given a unique ID, and that ID should be passed around the code (and logged against) until the path returns to the client. Depending on your application this could get really noisy, but a multi-threaded application can benefit greatly from such information. Second is a resource/task-based ID: if a service does something (let's call it A), but the execution path returns to the client before A is complete, the ID for A should be returned to the client, and that ID should live through the execution of A and be stored or archived accordingly. In terms of logging, whenever you come across such an ID, use it. This goes for all IDs: if you have an ID, use it when logging.
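The first kind of ID can be sketched very simply; the names below are hypothetical, but the idea is to mint a unique ID per invocation and prefix every log line with it, so one request can be traced through an interleaved multi-threaded log.

```java
import java.util.UUID;

public class TransactionIds {

    // One unique ID per service invocation
    static String newTransactionId() {
        return UUID.randomUUID().toString();
    }

    // Prefix every log line with the ID so a single request
    // can be grepped out of an interleaved log file
    static String logLine(String txId, String message) {
        return "[tx:" + txId + "] " + message;
    }

    public static void main(String[] args) {
        String txId = newTransactionId();
        System.out.println(logLine(txId, "request received"));
        System.out.println(logLine(txId, "lookup complete"));
        System.out.println(logLine(txId, "response sent"));
    }
}
```

In a real service you'd stash the ID in a thread-local or MDC rather than passing it by hand, but the logging principle is the same.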

With error handling, it's accepted that catching a generic exception is bad, although permissible at the boundary of your application. If you must catch 'Exception' or 'RuntimeException', log it at error level or above. It's a generic exception for a reason, and that reason is you don't know what happened.
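A minimal sketch of what I mean by the boundary, with made-up names and java.util.logging standing in for whatever framework you use: the generic catch lives only at the entry point, and it always logs at error severity.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class Boundary {

    private static final Logger LOG = Logger.getLogger(Boundary.class.getName());

    // The application boundary: the one place a generic catch is acceptable
    public static String handleRequest(String input) {
        try {
            return process(input);
        } catch (Exception e) {
            // We don't know what happened, so log at error severity with the trace
            LOG.log(Level.SEVERE, "Unhandled failure processing request", e);
            return "error";
        }
    }

    // Inner code throws specific exceptions and doesn't swallow anything
    private static String process(String input) {
        if (input == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        return input.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("ok"));
        System.out.println(handleRequest(null));
    }
}
```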

AOP is a great technique for logging; don't be frightened (and this goes for all logging) to use it everywhere.

Moving on to a topic touched upon, but not thoroughly explored: configuration. If you're building a server product, config is your friend. When things go bad and you need to re-wire your application, not having to make a code change is a massive benefit. If the config is read dynamically, that's even better, as it won't require a server restart.
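A rough sketch of dynamic config with plain java.util.Properties; the class and property names are invented for illustration. The point is that a reload swaps in a fresh snapshot, so re-wiring needs neither a code change nor a restart.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ReloadableConfig {

    // volatile so readers always see a fully-loaded snapshot
    private volatile Properties current = new Properties();

    // Re-read the config source on demand (in real life, a file or URL)
    void reload(String source) throws IOException {
        Properties fresh = new Properties();
        fresh.load(new StringReader(source));
        current = fresh; // atomic swap; no half-loaded state visible
    }

    String get(String key, String fallback) {
        return current.getProperty(key, fallback);
    }

    public static void main(String[] args) throws IOException {
        ReloadableConfig config = new ReloadableConfig();
        config.reload("upstream.url=http://primary.example.com");
        System.out.println(config.get("upstream.url", "none"));
        // Later, ops edits the source and triggers a reload: no restart needed
        config.reload("upstream.url=http://failover.example.com");
        System.out.println(config.get("upstream.url", "none"));
    }
}
```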

Lastly, think about your archive mechanism and think about what data you'll need to preserve and for how long. Ideally, from that data, execution paths should be interpretable, at least at a high level.

Oh, and please be kind to your support team, make it easy for them to gather information about your application. If an application is one of many, consider implementing a common health/management (and provisioning if you can) interface.
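What a common health interface might look like, sketched with invented names: every application implements the same contract, so the support team can poll each one the same way regardless of what it does internally.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The shared contract every application in the estate implements
interface HealthCheck {
    Map<String, String> status();
}

public class OrderService implements HealthCheck {

    @Override
    public Map<String, String> status() {
        // Each service reports its own dependencies, but the shape is common
        Map<String, String> report = new LinkedHashMap<>();
        report.put("service", "order-service");
        report.put("database", "ok");
        report.put("queueDepth", "3");
        return report;
    }

    public static void main(String[] args) {
        System.out.println(new OrderService().status());
    }
}
```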

I think putting together a list of principles like this is a great idea; each engineering group should think about the ways they work and how to drive commonality across products. Teams shouldn't be scared of thinking about support throughout the development process either, although they've been hearing the same thing about security for years ;)

Tuesday, July 08, 2008

Scott Hanselman podcast with the Poppendiecks

A great podcast which deserves its own post to comment on. Mary Poppendieck really came out with some cracking soundbites, including:

* product teams should be driven by profit and loss, not by on-time, on-budget, in-scope, customer satisfaction and quality
* engineering and business: it's not "them and us", it should be all of us
* there's no excuse for IT people to be organisationally separate from the business
* if IT is routine (email etc.), outsourcing is fine; how innovative can your email be? If it's building core competencies, outsourcing is generally a bad idea
* leverage workers' intelligence to deliver; management should be leaders, not micro-managers

Loads of good stuff, check it out!