Friday, June 25, 2010

Spider and Validate

One of the things we've been trying to achieve is automation in every sense. As my old mate Otu once said, "automate until it hurts!" Well, hopefully this will make it a little less painful. We wanted to point at a single page, have the application spider the entire site finding pages that return anything other than 200 OK, and validate all those that do return 200 OK.

By combining two PHP projects, PHPCrawl (http://sourceforge.net/projects/phpcrawl/) and the frontend-test-suite (http://github.com/NeilCrosby/frontend-test-suite), it's been possible to do just that. Throw a little Ant build script in there too, deploy into a continuous integration container like Hudson, and you start to get a feel for the state of websites in development, or even in production as part of a monitoring tool.
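The spidering half is only a few lines once PHPCrawl is in place: subclass the crawler and bucket URLs by their status code. Here's a rough sketch against PHPCrawl's callback API as I remember it (the include path and method names are assumptions, so check them against the version you download):

<?php
// path depends on where the PHPCrawl download is unpacked
require_once('phpcrawl/phpcrawler.class.php');

// collect crawled URLs, bucketed by HTTP status code
class StatusCrawler extends PHPCrawler {
    public $pages = array(200 => array(), 404 => array(), 500 => array());

    // PHPCrawl calls this once for every document it fetches
    function handlePageData(&$page_data) {
        $code = $page_data['http_status_code'];
        if (isset($this->pages[$code])) {
            $this->pages[$code][] = $page_data['url'];
        }
    }
}

$crawler = new StatusCrawler();
$crawler->setURL('@SITE_URL@'); // token replaced by the Ant copy filter
$crawler->go();

// $crawler->pages[200] is what gets handed to the frontend-test-suite
?>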

To make use of this, edit the build.xml file to change SITE_URL to the endpoint you want to test, then run 'ant validate'. This does an Ant copy with filtering to create a file called test.php. Ant then runs that script and captures its output, collecting which pages returned a 500, a 404 or 200 OK. Finally it passes the array of 200 OK pages into the frontend-test-suite, which uses PHPUnit to report on HTML validation.
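The validate target boils down to something like this sketch (the template filename and property value are illustrative):

<!-- edit this to point at the endpoint under test -->
<property name="SITE_URL" value="http://www.example.com/"/>

<target name="validate">
    <!-- copy with filtering: swaps the @SITE_URL@ token in the template -->
    <copy file="test.php.template" tofile="test.php" overwrite="true">
        <filterset>
            <filter token="SITE_URL" value="${SITE_URL}"/>
        </filterset>
    </copy>
    <!-- run the generated script and fail the build if PHP exits non-zero -->
    <exec executable="php" failonerror="true">
        <arg value="test.php"/>
    </exec>
</target>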

There's still plenty of work to do, mainly adding the ability to supply your own W3C validator endpoint and introducing CSS validation. It'll come though, so keep an eye out!

The result of this is over at GitHub - http://github.com/robb1e/Validator

As a side note, to get frontend-test-suite in as a Git submodule on GitHub I followed instructions from FND, thanks!

Tuesday, June 22, 2010

Grails multiple web sites, same web application

We recently needed one web application to serve many web sites: the same data schema was in place, with data shared across the sites, but we wanted to show different styling and different data depending on the URI. In the end it turned out to be fairly simple. With our naming conventions, it's possible to hit one of several URIs for the same site, such as:

* project.testing.com
* project.staging.com
* project.com

So I created a domain lookup service that, when passed the request's server name, could work out which site the request was targeting.


def targetUri = request.getServerName()


I then looked up that target name against a site object stored in the database and passed it back, making it available for querying the correct data to show and for the views to render the relevant CSS and JavaScript files.
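A minimal sketch of that service, assuming a hypothetical Site domain class with a host property:

class SiteLookupService {

    // return the site record matching the host name of the incoming request
    def siteForRequest(request) {
        def targetUri = request.getServerName()
        // e.g. project.testing.com, project.staging.com or project.com
        Site.findByHost(targetUri)
    }
}

A controller can then call siteLookupService.siteForRequest(request), use the result to scope its queries, and drop it into the model so the views know which CSS and JavaScript files to pull in.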

Thursday, June 17, 2010

Grails, Ant, Hudson and automated deployments

One of the things I like about Grails is the embedded server; it makes deployment and getting things up and running nice and simple. I've tried building a WAR and deploying into Jetty and Tomcat, but for some reason system resource usage seems to spike when deployed in this fashion compared to the built-in 'run-app' function of the Grails command line tools.

With this in mind, I wanted to use the Grails command line to do automated deployment based off our Hudson continuous integration container via Ant build scripts.

First I created an init.d start/stop script for the server. At theTeam we can be working on multiple projects at the same time, so we need conventions about port usage to manage our applications more easily; this script is the first step towards just that.
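A minimal sketch of the script, with hypothetical port, site name and paths:

#!/bin/sh
# set these per project by convention
PORT=8090
SITE=mysite
APP_DIR=/var/apps/$SITE

case "$1" in
start)
    # bomb out if something is already listening on our port
    if netstat -an | grep ":$PORT " | grep -q LISTEN; then
        echo "Port $PORT already in use"
        exit 1
    fi
    cd $APP_DIR
    # -Dsite.name is only there so ps -ef | grep shows which site this is
    nohup grails -Dserver.port=$PORT -Dsite.name=$SITE run-app > /dev/null 2>&1 &
    ;;
stop)
    # find the process started with our port and kill it
    PID=`ps -ef | grep "server.port=$PORT" | grep -v grep | awk '{print $2}'`
    [ -n "$PID" ] && kill $PID
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac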

This script is simple enough: on start it checks whether the port is in use and bombs out if so; if not, it nohups the Grails command, telling it which port to use and, just for ease of management, adding the name of the site. This means running commands like ps -ef and piping to grep shows what's running. Stop searches for a process running with that port and kills it.

That's the first step. Next is the Ant script to deploy the latest code after tests and static analysis. Here the target folder has the latest code: we stop any server that's running, do a Grails clean and then run the server. The script then waits to see if it can open a socket to the port the server is running on, and fails if it cannot, which usually means the server has failed to start up. The variables are gathered from a properties file and should be fairly obvious.
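A sketch of the deploy target, with illustrative property and script names:

<property file="deploy.properties"/>

<target name="deploy">
    <!-- stop any instance already running on this port -->
    <exec executable="${initd.script}">
        <arg value="stop"/>
    </exec>
    <exec executable="grails" dir="${app.dir}">
        <arg value="clean"/>
    </exec>
    <!-- start the server detached from the build -->
    <exec executable="${initd.script}" spawn="true">
        <!-- use the system JDK rather than the one Hudson runs under -->
        <env key="JAVA_HOME" value="${deploy.java.home}"/>
        <!-- stops Hudson killing the process when the build finishes -->
        <env key="BUILD_ID" value="dontKillMe"/>
        <arg value="start"/>
    </exec>
    <!-- fail if the server has not opened its port within five minutes -->
    <waitfor maxwait="5" maxwaitunit="minute" timeoutproperty="server.down">
        <socket server="${app.host}" port="${app.port}"/>
    </waitfor>
    <fail if="server.down" message="Server failed to start on ${app.host}:${app.port}"/>
</target>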

Now, you might have noticed a couple of oddities in the script, like setting JAVA_HOME and BUILD_ID. First, when running in Hudson, any processes are run using the JRE and JDK deployed with Hudson, and I wanted to ensure it was the system Java I had installed. BUILD_ID is a little trick that tells Hudson not to terminate the process after the build has finished, which it had a habit of doing in earlier versions.

When starting Hudson itself, a flag should also be set to tell it not to kill child processes. My start command is something like this (the exact ProcessTreeKiller property name has varied between Hudson versions, so treat it as an assumption and check yours):
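# disables Hudson's ProcessTreeKiller so spawned servers survive the build
java -Dhudson.util.ProcessTreeKiller.disabled=true -jar hudson.war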

Adding this all together means it's possible to get automated deployments with Grails added into your builds.

AWS S3 Integration with Grails

I've been playing with the S3 plugin for a Grails project and wanted to share how I set it up, as the documentation on the site doesn't include an example.

First, in my model I added a byte array field and made it transient. Transient means GORM doesn't persist the bytes to the database; the data is just kept in memory. This means you can take advantage of all the automatically generated controllers and views created from the model without having to really deal with the contents.

class ImageHolder {
    byte[] image
    String imageUri
    static transients = ['image']
}

In the controller we use the request object to get the file and save it to disk, removing any spaces that may have crept into the filename. Then we start using the plugin: we create an S3Asset from the file and put it into the asset service class. The s3AssetService must also be declared at the top; it's injected via Spring from the plugin.

def s3AssetService  // injected via Spring by the plugin
....
def imageholder = new ImageHolder(params)
....
def mhsr = request.getFile('image')
// strip spaces from the original filename before writing to disk
def fileName = mhsr.getOriginalFilename().replace(' ', '_')
def ext = fileName.substring(fileName.lastIndexOf(".") + 1)
def file = new File(fileName)
// only upload non-empty files under 200KB
if (!mhsr?.empty && mhsr.size < 1024 * 200) {
    mhsr.transferTo(file)
    def asset = new S3Asset(file)
    asset.mimeType = ext
    // hand the file to the plugin, which uploads it to S3 asynchronously
    s3AssetService.put(asset)
    // persist the S3 URL, but not the bytes, with the domain object
    imageholder.imageUri = asset.url()
    imageholder.save()
}

This uses the asset service to put the file into S3 asynchronously, so your request returns very quickly. You can see that the image URI is persisted in the domain object, but the content isn't.

One final thing is to configure AWS in Config.groovy. I set the flag to not add a prefix to the bucket name, as I'm using a reverse-DNS/package-style bucket name to ensure uniqueness.

aws {
    domain = "s3.amazonaws.com"
    accessKey = "YOURACCESSKEY"
    secretKey = "YOURSECRETKEY"
    bucketName = "your.bucket.name"
    prefixBucketWithKey = false
    timeout = 3000
}

Wednesday, June 16, 2010

Apple, standards and plugins

I hate to sound dramatic, but with Apple sounding off about web standards over plugins with their latest products, I was wondering just how much Apple was dragging the industry forward, or whether it was doing the opposite.

Some years ago, developing web applications meant having to know the details of the browsers your audience were using. We're still suffering from this today, with discussion over different box models and questions about just when Microsoft is going to release a standards-compliant browser. Anyone who uses Exchange for email might know the pain of using the web client, given how rich it is in Internet Explorer and how average it is in any other browser.

The advent of cross-browser JavaScript libraries helped reduce many of these pains, and it's great to see a lot of work going into the new HTML5 specs that takes lessons directly from those libraries. However, whilst singing the praises of a few websites that are using the new HTML5 audio and video elements, Apple seem to be creating their own events for touch that I can't see anywhere in the specs. It seems Mozilla are playing the same game too, and I'm worried that we're returning to a world where we have to tailor lots of code to target specific browsers.

You only have to go to Nike.com or log into Gmail to see iPad websites that have orientation and swipe support. It's all very nice, but how much extra work is going to have to take place to get a website working in different browsers and on different devices? What happens when a new device comes out with different events, or, as with the new iPhone and its different screen resolution, how will that change the work already in the wild?

This raises further questions for me, especially after reports of JavaScript execution speed on the iPad, even though some frameworks, such as SproutCore Touch, claim this isn't so much of a problem, and others go further, saying that no framework should be used at all to ensure performance.

Although Apple have created fairly complete documentation about building web content for their devices, it does beg the question of why there should be differences at all if we are supposed to live in a standards world. How long will it be before you hit a webpage and have it say "This website is best viewed on an iPad", or any other device or browser for that matter?

Should we ignore the devices and go for standards, like the BBC iPlayer for big screens such as the PS3 and iPad; use libraries to augment sites, like how apple.com/iphone allows the user to swipe the image carousel; tailor something specific for each device, such as Nike.com; or should we just be building native applications?

How should we progress? Should we let our analytics decide? Should we take the standards route or the device-specific route? This will certainly be one to keep an eye on.