Monday, December 20, 2010

Cache me if you can

From the producers who brought you 'Tengo and Cache', we present 'Cache me if you can'.

The concept is all about delivering HTML to iOS devices alongside a mechanism for updating the application without going through the Apple App Store. Loading HTML from a local file is not a new idea; a few companies and projects have sprung up around this type of mobile app development, most notably Phonegap and more recently Apparatio.

The original iOS app I had built, Tengo and Cache, took this idea in a slightly different direction. The projects above deliver HTML within an application and rely on Javascript to update any content within the application, using local storage to persist data. What Tengo and Cache offered was a way to download new HTML documents in their entirety, storing the files in writable areas of the file system on the device. This worked by providing a manifest file, based on the HTML5 manifest, on the domain the iOS app was downloading files from.
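
For anyone unfamiliar with the format, a cache manifest is just a plain text file listing every resource the app should fetch. Something like this sketch (the file names here are purely illustrative):

CACHE MANIFEST
# v1: changing this comment is enough to signal that clients should re-download
index.html
css/style.css
js/app.js
images/logo.png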

During a hack day for the Guardian, I extended this work to add a further downstream cache. Although every effort is made to cache resources before loading, some files may be requested which are not included in the manifest. By overriding the NSURLCache class, the application can intercept any calls which attempt to go out to the web. This cache retrieves the file, stores it, and serves it instead of continuing down the pipeline. The next time the file is requested, the cache serves the copy on disk instead of hitting the web at all.

Now that I had an iOS application that could pre-cache and intercept, I needed some content to install onto a device. I had wanted to use a Wordpress blog or an RSS feed, but felt that this content could prove difficult: without knowing what content would be delivered, I couldn't build a manifest file that captured all of it. I also thought it might make for a bad user experience if something that worked on a web page online didn't work offline. Even something as ubiquitous as search might appear to have failed miserably.

I decided to use something I knew I had control over: the Guardian Content API. Using a small NodeJS application, I could run a query and present the results as HTML, alongside an appropriate manifest file. Then all that was required was a small property change in the iOS app, and a native application was ready to be launched.


Tuesday, November 23, 2010

Samsung Galaxy Tab - Review

The first thing you notice when you start using a Samsung Galaxy Tab is that it clearly thinks it’s a phone. Most people I show the device to also think it’s a phone as they do their best Dom Joly impersonation. I had only intended to use the device over wifi, but I’m constantly reminded that I haven’t put a SIM card in and that the phone can only make emergency calls. I noticed at one point that a significant portion of battery life was being put to use on phone-related activities, so I put the device into flight mode and then enabled wifi to try and make the battery last longer. Trying not to use the device as a phone means many of the Samsung applications simply don’t work, as they require a SIM card, although why they need the SIM card I couldn’t tell you.

One of the things I was looking forward to doing with the device was using it around the office, as I’m on a desktop and wanted a way to take notes, look things up and do demos. My first snag was that the electronic keyboard is just too small for any note-taking at length, and when I held the device in portrait mode I was typing by thumbs alone. I’ve found that the screen on the Galaxy is just about right for web pages, and due to the size I can hold the device in one hand, not unlike the Kindle. Many websites redirect me to their mobile version, which, although it can look very nice at this screen size, sort of defeats the purpose of having a tablet as opposed to a phone.

Some of the plain oddities are exposed when trying to use Google Docs on the phone. Having been sent to the mobile version, you can edit documents and spreadsheets, but not presentations. When you try to switch to desktop mode you get a bizarre error about the browser not supporting web word processing. There was me thinking that all that was needed was HTML rendering, Javascript processing and an internet connection.

I have felt I’m getting real benefit from just how ‘cloud-enabled’ the device is. I’m yet to plug the device into my computer, apart from deploying an application I’d built onto it. For music I’ve been using Spotify, Last.FM or iPlayer for Radio 6. Tools like Google Listen have let me find, download and play podcasts, with varying success, without needing desktop software like iTunes. The speakers on the device are just about good enough to carry around the house with you, or you can plug the headphone jack into stereos in different rooms. I’ve used Dropbox to drop video files onto the device and they just play, without having to configure anything or install any codecs.

I’m generally enjoying being able to think of this as more of a computer than a locked-down device, seeing the running applications and navigating the file system. However, there seem to be simple things I just can’t find, like changing the auto-lock timeout, or a way to wake the device from being locked other than pushing a button at the top, when my hands tend to be at the other end of the device.

I’m happy that Apple has some competition in the tablet market, but I think the experience needs to improve a lot, through both the physical device and the software that powers it. I’d suggest a good start would be to get the device to stop thinking it’s a phone. The Amazon Kindle has a SIM card for 3G access, but it knows it isn’t a phone. It may be a small point, but I prefer my devices without the split personality.

Saturday, September 11, 2010

nodejs, ndistro and git submodules

On the (very early) tapas platform at theTeam I got a little stuck mixing nDistro and git submodules, and wanted to explain what I'd done to get around those issues. In the tapas-models module, MongoDB is used for data storage, with Mongoose for MongoDB integration, but nDistro doesn't download Mongoose's downstream dependencies.

This is because nDistro downloads the tarball of the project from Github, and that tarball doesn't include dependencies. It might be nice for Github to do this, and I'll search the Github Support site to see if I can find something about that. Anyhoo, the tarball doesn't contain its git bindings, so I can't go in there and update the git submodules.

The unfortunate thing is I've had to expose myself to some of the Mongoose inner workings to get the dependencies, but once I know them, it's a light touch for the next part. Using nDistro as normal, I include that dependency, with its revision number, in my .ndistro file. As nDistro is executed in a bash environment, shell scripting can be used alongside the module declarations. So I use a move command to put the dependency where Mongoose expects it, and everyone's happy.
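
As a sketch of what that looks like (the module names, revisions and paths here are illustrative, not the real tapas-models file):

# .ndistro: module declarations are executed as shell,
# so ordinary commands can sit alongside them
node 0.2.2
module learnboost mongoose 1.0.0
# mongoose normally pulls in the MongoDB driver as a git submodule,
# so declare it as a module too and move it to where mongoose expects it
module christkv node-mongodb-native 0.7.9
mv modules/node-mongodb-native modules/mongoose/support/mongodb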

It's a simple solution to what was a nagging issue, and it keeps me on my happy path toward NNNN.

Thursday, August 19, 2010

Recipe: Guardian Fried Chicken

This evening I made an attempt at the Guardian Fried Chicken, and although Tim Hayward gives the recipe in the associated video, I wanted to share how I got on.

Although the recipe is simple, it does take some time.



I also added some BBQ beans (just normal beans with brown sauce and lots of pepper) and some coleslaw for good measure.

I also had to make some compromises. I don't have a fryer, so I used a pan. Without a thermometer, I used a trick my Mum used to do, which is to drop a small piece of bread into the oil to get a feel for how hot it is.

Generally it all went well, although I had the oil too hot at first, so the coating became too crispy too early and the chicken looked slightly underdone when cut. I put those pieces in the oven for 10 minutes to finish them off, which cooked the meat but also dried out the coating. As I was cooking the pieces one at a time, I then turned the heat right down so that 5-6 minutes of frying cooked the meat all the way through.

The one thing missing from the recipe was MSG among the spices. It might have enhanced the flavour, though it was already pretty good. It still didn't have that KFC feel to it, but I think I'll be trying it again.

Check out the set for more photos.

Thursday, August 05, 2010

Tengo and Cache

This project was born from a desire to push on from some work I had done some time ago creating iPhone applications using only web technologies. That work can be seen at http://github.com/robb1e/iWeb. The first project was a way of deploying an HTML/CSS/Javascript application onto an iOS device and having it run as a native application.

The issue with that approach was that the only way to change the HTML, CSS, Javascript, images or other data was to depend on a live internet connection or push an application update. I was looking for a way to make it easier to cache those resources, with a specific eye on the offline experience.

After some thought, the lightbulb switched on and the HTML5 Cache Manifest file became the immediate choice as a basis for discovering which resources should be cached. This project aims to do just that.

Between the application bootstrapping and a webpage being rendered, the cache manifest is read and each file is downloaded and stored in the iOS application sandbox directory. After all the files have been saved, the UIWebView renders an index.html file from that sandbox.

Check it out: http://github.com/robb1e/Tengo-and-Cache

Thursday, July 08, 2010

Json and Grails

Recently I was trying to return JSON data from a Grails app and I was having some issues. Grails has some built-in conversion utilities that enable code like


render myObject as JSON


This is all well and good, but I have a few fundamental issues with it. First off, attributes like the fully-qualified class name and the database primary key are returned; I don't really want to expose internal information to client-side applications. It also doesn't allow any renaming of the attributes. Say you have a camel-case String called firstName in your domain object: the converter returns that name to the client. What if you want the API to return first_name?

The answer was a lot simpler than my searching could muster, and that is to use the template system, Sitemesh in this case, to render exactly what you want in JSON just as you would if you were rendering HTML. OK, it's a bit of a headache, but you have complete control over the content and style. Combine that with a one-liner in the controller to set the content type, and you're flying.


response.contentType = "text/javascript"
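
Putting that together, a minimal sketch of the approach (the controller, domain and field names are my illustration, not from the original code):

// in the controller: set the content type and render a JSON view
def show = {
    def person = Person.get(params.id)
    response.contentType = "text/javascript"
    render(view: 'show', model: [person: person])
}

// grails-app/views/person/show.gsp then contains exactly the JSON to
// return, renaming attributes however the API requires:
// {"first_name": "${person.firstName}", "last_name": "${person.lastName}"}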

Friday, June 25, 2010

Spider and Validate

One of the things we've been trying to achieve is automation in every sense. As my old mate Otu once said, "automate until it hurts!" Well, hopefully this will make it a little less painful. We wanted to point at a single page, have the application spider the entire site finding pages which return anything other than 200 OK, and validate all those that do return 200 OK.

By combining two PHP projects, PHPCrawl (http://sourceforge.net/projects/phpcrawl/) and the frontend-test-suite (http://github.com/NeilCrosby/frontend-test-suite), it's been possible to do just that. Throw a little Ant build script in there too, deploy into a continuous integration container like Hudson, and you start to get a feel for the state of websites in development, or even in production as part of a monitoring tool.

To make use of this, edit the build.xml file to change SITE_URL to the endpoint you want to test, and run 'ant validate'. This does an Ant copy with filtering to create a file called test.php, which Ant then runs, capturing the output. It collects the pages that returned a 500, a 404, or 200 OK, then passes an array of the 200 OK pages into the frontend-test-suite, which uses PHPUnit to report on HTML validation.
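
The relevant part of the build script looks something along these lines (a sketch; the file and property names are illustrative):

<target name="validate">
    <!-- copy with filtering: bake SITE_URL into the PHP test runner -->
    <copy file="test.php.template" tofile="test.php" overwrite="true">
        <filterset>
            <filter token="SITE_URL" value="${SITE_URL}"/>
        </filterset>
    </copy>
    <!-- run the crawler/validator and fail the build on errors -->
    <exec executable="php" failonerror="true">
        <arg value="test.php"/>
    </exec>
</target>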

There's still plenty of work to do, mainly introducing the ability to supply your own W3C validator endpoint and introducing CSS validation. It'll come though, so keep an eye out!

The result of this is over at Github - http://github.com/robb1e/Validator

As a side note, to get frontend-test-suite as a submodule on Github I followed instructions from FND, thanks!

Tuesday, June 22, 2010

Grails multiple web sites, same web application

We had a need recently to have one web application serving many web sites, where the same data schema was in place with data shared across multiple web sites, but we wanted to show different styling and different data depending on the URI. In the end it turned out to be fairly simple. With our naming conventions, it's possible to hit one of several URIs for the same site, such as:

* project.testing.com
* project.staging.com
* project.com

So I created a domain lookup service that, when passed the request URI, could work out which site the request was targeting.


def targetUri = request.getServerName()


The service looked up an object stored in the database relating to that targetUri and passed it back, making it available both for querying the correct data to show and for the views to render the relevant CSS and Javascript files.
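
As a sketch of that service (the class and property names are illustrative rather than the production code):

// grails-app/services/SiteLookupService.groovy
class SiteLookupService {

    // given request.getServerName(), return the Site record whose host
    // matches, e.g. project.testing.com, project.staging.com or project.com
    Site siteForRequest(String serverName) {
        Site.findByHost(serverName)
    }
}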

Thursday, June 17, 2010

Grails, Ant, Hudson and automated deployments

One of the things I like about Grails is the embedded server; it makes deployment and getting things up and running nice and simple. I've tried building a WAR and deploying into Jetty and Tomcat, but for some reason system resource usage seems to spike when deployed in this fashion, compared to the built-in 'run-app' function of the Grails command line tools.

With this in mind, I wanted to use the Grails command line to do automated deployment based off our Hudson continuous integration container via Ant build scripts.

First I created an init.d start/stop script for the server. At theTeam we can be working on multiple projects at the same time, so we need some conventions around port usage to make our applications easier to manage; this script is the first step towards just that.


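Something along these lines (the port, paths and names here are illustrative):

#!/bin/sh
# /etc/init.d/mysite: start/stop a Grails app on a fixed port
PORT=8090
NAME=mysite
APP_DIR=/var/apps/mysite

case "$1" in
start)
    # bomb out if something is already listening on our port
    if netstat -an | grep LISTEN | grep -q ":$PORT "; then
        echo "Port $PORT already in use"
        exit 1
    fi
    # nohup the Grails command; the site name is only there so that
    # ps -ef piped to grep shows which app is which
    cd $APP_DIR
    nohup grails -Dserver.port=$PORT -Dsite.name=$NAME run-app > /dev/null 2>&1 &
    ;;
stop)
    # find the process running with that port and kill it
    kill `ps -ef | grep "server.port=$PORT" | grep -v grep | awk '{print $2}'`
    ;;
esac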

This script is simple enough: on start it checks to see if the port is in use and bombs out if it is; otherwise it nohups the Grails command, telling it which port to use and, just for ease of management, adding the name of the site. This means running commands like ps -ef and piping to grep shows what's running. Stopping searches for a process running with that port and kills it.

That's the first step. Next is the Ant script to deploy the latest code after tests and static analysis. Here the target folder has the latest code; we stop any server that's running, do a Grails clean and then run the server. The script then waits to see if it can open a socket to the port that the server is running on, and fails if it cannot, which usually means the server has failed to start. The variables are gathered from a properties file and should be fairly obvious.


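The deploy target ends up looking something like this sketch (the property names would come from the properties file):

<target name="deploy">
    <!-- stop anything already running on the port -->
    <exec executable="${init.script}">
        <arg value="stop"/>
    </exec>
    <!-- clean with the system Java rather than Hudson's own -->
    <exec executable="grails" dir="${target.dir}" failonerror="true">
        <env key="JAVA_HOME" value="${java.home.dir}"/>
        <arg value="clean"/>
    </exec>
    <!-- start the server; changing BUILD_ID stops Hudson killing it post-build -->
    <exec executable="${init.script}" spawn="true">
        <env key="JAVA_HOME" value="${java.home.dir}"/>
        <env key="BUILD_ID" value="dontKillMe"/>
        <arg value="start"/>
    </exec>
    <!-- fail the build if the server never opens its socket -->
    <waitfor maxwait="2" maxwaitunit="minute" timeoutproperty="server.down">
        <socket server="localhost" port="${server.port}"/>
    </waitfor>
    <fail if="server.down" message="Server did not start on port ${server.port}"/>
</target>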

Now, you might have noticed a few oddities in the script, like setting JAVA_HOME and BUILD_ID. First, when running in Hudson, any processes are run using the JRE and JDK deployed with Hudson, and I wanted to ensure it was the system Java I had installed. Second, overriding BUILD_ID is a little trick that tells Hudson not to terminate the process after the build has finished, which it had a habit of doing in earlier versions.

When starting Hudson, a flag should also be set to tell it not to kill child processes. My start command is thus:


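For reference (the exact property name has varied between versions, so treat this as a sketch):

# stop Hudson's ProcessTreeKiller from reaping the spawned server
java -Dhudson.util.ProcessTree.disable=true -jar hudson.war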

Adding this all together means it's possible to get automated deployments with Grails added into your builds.

AWS S3 Integration with Grails

I’ve been playing with the S3 plugin for a Grails project and wanted to share how I set it up, as the documentation on the site doesn’t cover an example.

First, in my model I added a byte array field and made it transient. Transient means the bytes aren’t persisted to the database; the data is just kept in memory. This means you can take advantage of all the automatically generated controllers and views created from the model without having to really deal with the contents.

class ImageHolder {
    // the raw bytes live only in memory; imageUri is what gets persisted
    byte[] image
    String imageUri
    static transients = ['image']
}

In the controller we use the request object to get the file and save it to disk, removing any spaces that may have crept into the filename. Then we start using the plugin: we create an S3Asset from the file and put it via the asset service class. The s3AssetService must also be declared at the top; it’s injected via Spring by the plugin.

def s3AssetService
....
def imageholder = new ImageHolder(params)
....
// grab the uploaded file from the multipart request
def mhsr = request.getFile('image')
// strip spaces so the filename makes a clean S3 key
def fileName = mhsr.getOriginalFilename().replace(' ', '_')
def ext = fileName.substring(fileName.lastIndexOf(".") + 1)
def file = new File(fileName)
// only accept non-empty uploads under 200KB
if (!mhsr?.empty && mhsr.size < 1024 * 200) {
    mhsr.transferTo(file)
    def asset = new S3Asset(file)
    asset.mimeType = ext
    s3AssetService.put(asset)
    imageholder.imageUri = asset.url()
    imageholder.save()
}

This now uses the asynchronous service to put the file into S3, so your request returns very quickly. You can see that the image URI is persisted in the domain object, but the content isn’t.

One final thing is to configure AWS in Config.groovy. I set the flag to not add a prefix to the bucket name, as I’m using a reverse-DNS/package-style bucket name to ensure uniqueness.

aws {
    domain = "s3.amazonaws.com"
    accessKey = "YOURACCESSKEY"
    secretKey = "YOURSECRETKEY"
    bucketName = "your.bucket.name"
    prefixBucketWithKey = false
    timeout = 3000
}

Wednesday, June 16, 2010

Apple, standards and plugins

I hate to sound dramatic, but with Apple sounding off about web standards over plugins with their latest products, I was wondering just how much Apple is dragging the industry forward, or whether it is doing the opposite.

Some years ago, developing web applications meant having to know the details of the browsers your audience were using. We're still suffering from this today, with discussions over different box models and questions about just when Microsoft are going to release a standards-compliant browser. Anyone who uses Exchange for email might know the pain of using the web client, given how rich it is in Internet Explorer and how average it is in any other browser.

The advent of cross-browser JavaScript libraries helped reduce many of these pains, and it's great to see a lot of work going into the new HTML5 specs that takes lessons directly from those libraries. However, whilst singing the praises of a few websites that are using the new HTML5 audio and video elements, Apple seem to be creating their own events for touch that I can't see anywhere in the specs. It seems Mozilla are playing the same game, and I'm worried that we're returning to a world where we have to tailor lots of code to target specific browsers.

You only have to go to Nike.com or log into Gmail to see iPad websites that have orientation and swipe support. It's all very nice, but how much extra work is going to have to take place to get a website working in different browsers and on different devices? What happens when a new device comes out with different events, or, as with the new iPhone and its different screen resolution, how will that change the work already in the wild?

This raises further questions for me, especially after reports of JavaScript execution speed on the iPad, even though some frameworks, such as SproutCore Touch, claim this isn't so much of a problem, and others go further, saying that no framework should be used at all to ensure performance.

Although Apple have created fairly complete documentation about building web content for their devices, it does beg the question of why there should be differences at all if we are supposed to live in a standards world. How long will it be before you hit a webpage and have it say "This website is best viewed on iPad", or any other device or browser for that matter?

Should we ignore the devices and go for standards, like the BBC iPlayer for big screens such as the PS3 and iPad? Should we use libraries to augment sites, like how apple.com/iphone allows the user to swipe the image carousel? Should we tailor something specific for each device, as Nike.com does? Or should we just be building native applications?

How should we progress? Should we let our analytics decide? Should we take the standards route or the device-specific route? This will certainly be one to keep an eye on.