Monday, October 17, 2011

CSV parser in JavaScript

I wanted a JavaScript-based CSV parser. I couldn't find a good one (one that allowed multi-character delimiters and text qualifiers), so I wrote my own. The textToArray function should run under any JS implementation, but the demonstration of it is written for a ringojs environment.

Call textToArray with a line of delimited text, and the delimiter and text qualifier you're expecting to find; it will give back an array of the elements it finds.
var textToArray = function (txtLine, del, txtQual) {
    "use strict";
    var datArr = [], newStr = "";
    while (txtLine.length > 0) {
        if (txtLine.substr(0, txtQual.length) === txtQual) {
            // get quoted block: opening qualifier, content, closing qualifier
            newStr = txtLine.substr(0, txtQual.length + txtLine.indexOf(txtQual, txtQual.length));
            // push the content with the qualifiers stripped off
            datArr.push(newStr.substr(txtQual.length, newStr.length - txtQual.length * 2));
        } else {
            // get unquoted data block: everything up to the next delimiter
            if (txtLine.indexOf(del) !== -1) {
                newStr = txtLine.substr(0, txtLine.indexOf(del));
            } else {
                newStr = txtLine;
            }
            datArr.push(newStr);
        }
        // consume the block just read, plus the delimiter that follows it
        txtLine = txtLine.substr(newStr.length + del.length, txtLine.length);
    }
    return datArr;
};

var fs = require('fs');
var con = require('console');
var del = ";;";
var txtQual = "\"\""; // a pair of quotes
var file = fs.open('D:/ringojs-0.8/test.txt');
file.forEach(function (line) {
    con.log(textToArray(line, del, txtQual).join("---"));
});

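As a quick sanity check outside of ringojs, the parser can also be exercised under plain Node (or any engine with console.log). This standalone sketch inlines the same textToArray logic so it runs by itself; the sample input lines are invented for illustration:

```javascript
// Standalone check of the parser (same logic as above), runnable in
// plain Node. The sample input lines are invented for illustration.
var textToArray = function (txtLine, del, txtQual) {
    "use strict";
    var datArr = [], newStr = "";
    while (txtLine.length > 0) {
        if (txtLine.substr(0, txtQual.length) === txtQual) {
            // quoted block: strip the qualifiers, keep any embedded delimiters
            newStr = txtLine.substr(0, txtQual.length + txtLine.indexOf(txtQual, txtQual.length));
            datArr.push(newStr.substr(txtQual.length, newStr.length - txtQual.length * 2));
        } else {
            // unquoted block: everything up to the next delimiter
            newStr = (txtLine.indexOf(del) !== -1) ? txtLine.substr(0, txtLine.indexOf(del)) : txtLine;
            datArr.push(newStr);
        }
        // consume the block just read, plus the delimiter that follows it
        txtLine = txtLine.substr(newStr.length + del.length, txtLine.length);
    }
    return datArr;
};

// A ";;" delimiter and a "" (two double-quotes) text qualifier:
console.log(textToArray('""a;;b"";;c', ';;', '""')); // [ 'a;;b', 'c' ]
console.log(textToArray('a;;b;;c', ';;', '""'));     // [ 'a', 'b', 'c' ]
```

Note how the first call shows the point of the text qualifier: the ";;" inside the quoted block survives as data rather than splitting the field.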
Tuesday, September 27, 2011

Sane coding templates for javascript

It's just craziness. Apparently it's possible to treat JavaScript like a real programming language. Very exciting stuff, given that it seems to be becoming the Ethernet of programming languages - despite its many and varied shortcomings, its versatility, ubiquity, and low barriers to entry are likely to ensure it sees off any competitors.

Here ( ) is a handy clip-and-keep guide to implementing common object orientation patterns. Douglas Crockford's excellent musings continue through to inheritance too.
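The linked guide isn't reproduced here, but as a flavour of what "sane" looks like, here is one widely-cited pattern from that school of thought - prototypal inheritance via Object.create, with Crockford's well-known shim for older engines. The object names are illustrative, not taken from the guide:

```javascript
// A sketch of one "sane" object-orientation pattern: prototypal
// inheritance via Object.create. Names (animal, dog) are illustrative.
if (typeof Object.create !== 'function') {
    // Crockford's shim for pre-ES5 engines
    Object.create = function (o) {
        function F() {}
        F.prototype = o;
        return new F();
    };
}

var animal = {
    name: 'generic animal',
    describe: function () {
        return this.name + ' says ' + this.sound;
    }
};

// dog delegates to animal via the prototype chain - no 'new', no classes
var dog = Object.create(animal);
dog.name = 'dog';
dog.sound = 'woof';

console.log(dog.describe()); // "dog says woof"
```

The appeal of this style is that it leans on what JavaScript actually does (delegation along a prototype chain) instead of simulating classical classes.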

Make no mistake, I don't love Javascript, but it's got a lot of momentum. We (the technically involved population) should embrace patterns of Javascript use that will make it sustainable into the future. We need to embrace the sane subset of the language's use.

node.js is good; mongodb and couchdb are stubbornly continuing to exist, and web browsers (whether on smart phones or desktops) aren't going anywhere. JavaScript is a fact; get used to it.

Friday, July 22, 2011

Should I buy a new graphics card?

I entered into a "more detailed than necessary" examination of this. The basic questions are:
  • What will it cost?
  • What sort of improvement to performance can I expect (can I play Shogun 2)?
  • Will any power savings pay for some new card?
Given that my current rig is a pair of nVidia 8800 GT cards (2 x 512MB) in an SLI config, and Shogun 2 reports its graphics requirements as "AMD Radeon HD 5000 and 6000 series graphics cards or equivalent", I had no idea. I went looking for benchmarks, and couldn't find everything (my old/current rig, and some equivalent to the Radeons) on the same page.

I compiled some stats from previous years, did some (very) rough math and came up with this worksheet:
GPU decision worksheet. It turns out that my pair of 8800 GTs offers performance identical to a (more) modern GeForce GTX 460 (1GB RAM), which is sufficient to play Shogun 2.
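The "very rough math" is simple enough to sketch. All the figures below (power draws, hours of use, electricity price, card cost) are illustrative assumptions, not the worksheet's actual numbers:

```javascript
// A rough GPU-upgrade payback calculation. Every figure here is an
// illustrative assumption, not taken from the actual worksheet.
var oldRigWatts = 2 * 105;   // assumed draw of two 8800 GT cards under load
var newCardWatts = 160;      // assumed draw of a single GTX 460
var hoursPerDay = 4;         // assumed hours of use per day
var pricePerKWh = 0.25;      // assumed electricity price ($/kWh)
var newCardCost = 230;       // assumed card price ($)

var wattsSaved = oldRigWatts - newCardWatts;                 // 50 W
var kWhSavedPerYear = wattsSaved * hoursPerDay * 365 / 1000; // 73 kWh
var dollarsSavedPerYear = kWhSavedPerYear * pricePerKWh;     // $18.25
var yearsToPayback = newCardCost / dollarsSavedPerYear;

console.log(yearsToPayback.toFixed(1) + " years to break even"); // "12.6 years to break even"
```

Even with generous assumptions, a payback period measured in decades makes the conclusion easy: the power savings won't come close to covering the card.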

The upshot of this? I can play Shogun 2 on my current rig, and it's very unlikely that I can recover the cost of the new GPU through power savings. No new kit for me :-(

Tuesday, February 08, 2011

The cloud - public, private, and the appliances within

I talked to a sales guy the other day, one of the better ones. Not only was he forthcoming, helpful, and charming, he was knowledgeable and experienced; a rare breed. I had been out of this vendor's loop for a bit and asked what was going on; he mentioned all kinds of things he thought I'd be interested in, including a big cloud push for the vendor's software. Great. I segued into software appliances and he told me they were dead. Not limping, not re-purposed - DEAD.

From a techie's point of view (which is, to be fair, not the same as a salesman's) I see these as points along a continuum, or at least as pre-requisites. "The Cloud" is all about economies of scale, and the industrialisation of information technology. In order to achieve the desired outcomes from The Cloud, standardisation and mechanisation are pre-requisites. A software appliance is a standardised "unit" (when properly built), an abstraction of an underlying mechanisation; it's the thing you want in The Cloud.

Amazon knows this, rPath knows this, VMware knows this, and Google has been doing this implicitly (as have all big web shops) since its inception; but the commentariat seem to have completely missed it, the analysts have missed it, and most of the vendors have missed it. It's a joke.

The Cloud is only a cloud while the hard bits are hidden. When the hard bits start peeking out, The Cloud becomes The Mess. When the hard bits get hidden, you're running someone else's software, on someone else's hardware (or your own - pick a public or private cloud as suits you), and we all become a lot happier with the result.

The Cloud == Appliances.

Thursday, February 03, 2011

TIBCO fiddles

In brief:
  1. When fiddling with TIBCO Software, use an OpenSUSE virtual machine, it hurts less.
  2. Oracle XE is very handy for fiddling with TIBCO software - Here's a great web page explaining how to set it up on OpenSUSE -
  3. Give your VM about 2GB RAM, or more.
  4. Create a different Unix user for each 'build' (ActiveMatrix v2 suite, ActiveMatrix v3 suite, BusinessWorks/Administrator suite, etc.). Don't let them read/write to each other's files, ever.
  5. Do not install gcj, OpenJDK, or anything remotely like Java and not made by Sun and/or Oracle.

Wednesday, January 05, 2011

Further Time Machine-esque behaviour

My earlier post on Time Machine-esque backups ( ) has some useful links for getting regular differential backups going. However, it can be done more neatly, particularly when it comes to Windows, and I've come across a nifty solution here

This approach requires the vshadow.exe application, which is its own can of worms. Typically, vshadow ships with the Windows SDK - ServerFault has some info ( ) which suggests getting the Vista-era SDK (v6.1, from ). You will only need to install the "Win32 Developer Tools" component; you can ignore everything else.

Using VSS (Volume Shadow Service) will allow an internally-consistent copy to be made of a Windows drive. All I then need to do is port the script here ( ) to Windows. I could use the script to push the image onto the file server, where it can update an SVN repository (a physical backup, for bare-metal recovery). Handling the (Windows) shadow drive can be achieved using the Windows-native 'dd' from . The same dd tool can be used for an 'easy' recovery. If I get this right, it should be Windows-native, and easy to install/maintain.

A separate job will pick out user directories for the Time Machine treatment (more of a logical backup, also on my file server, for basic file recovery) via the mercifully short script I previously linked to, thus:

Technically, I could use Zumastor/ddsnap on the file server to get snapshots/revisions, and I still might...