Last Infinite Tentacle: The bits of the IT world that apply to me right now. Blogged in the hope that someone (even me) will find them useful.<br />
<br />
A forking tiny web server (2019-02-26)<br />
<br />
This is the uncommonly low denominator in HTTP servers. It does very little, and conforms to the HTTP spec in only the most absurdly rudimentary way. Clearly, it's very insecure and poorly coded.<br />
<br />
This toy HTTP server is supposed to be used as a tool of last resort. It's implemented in perl because perl version 5 remains ubiquitous - it comes with git on Windows!?! When you need a web server to have a particular behaviour, to test a scenario, to break a client in <i>just </i>the right way, it's perfect - all it requires is more code. About the only thing it abstracts is raw socket handling (a bit - thanks perl), and the actual implementation of fork().<br />
<br />
However, it's small enough to type in if you're in one of those annoyingly secure sites where you can't just download random rubbish from the Internet and execute it (where's the excitement in that?).<br />
<br />
Features:<br />
<br />
<ul>
<li>Serves multiple HTTP clients <i>simultaneously</i></li>
<li>Will run in the most constrained environments (like ancient Unixes, or Visual Studio with git).</li>
<li>Simple enough to be easily re-written for different test scenarios.</li>
<li>Implements enough of the HTTP spec that curl won't complain.</li>
<li>It's moderately secure (because it doesn't do much).</li>
<li>No bugs (because it doesn't claim to do much)</li>
<li>Is small enough to be typed in on a coffee break.</li>
<li>Does some logging</li>
<li>Serves up 10 GiB fast enough to flood some networks and crash some clients.</li>
<li>Has enough problems that everyone can find something to fix</li>
<li>Supports CPUs (more than one!)</li>
<li>Has comments(? - ok, I'm stretching here)</li>
</ul>
<br />
<br />
<br />
<br />
<br />
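The gist below is in perl, but the fork-per-connection idea is language-agnostic. Purely as an illustrative sketch (Python 3 here, with invented response text; this is not the gist itself), it looks something like:

```python
import os
import signal
import socket

def http_response(body: bytes) -> bytes:
    # A bare-minimum HTTP/1.0 response - just enough that curl won't complain.
    return (b"HTTP/1.0 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"\r\n" + body)

def serve(port: int = 8080) -> None:
    # fork() a child per connection, so multiple clients are served simultaneously.
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # let the kernel reap children
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print("connection from", addr)              # "does some logging"
        if os.fork() == 0:                          # child: one request, then exit
            srv.close()
            conn.recv(4096)                         # read (and mostly ignore) the request
            conn.sendall(http_response(b"hello\n"))
            conn.close()
            os._exit(0)
        conn.close()                                # parent: keep accepting

# serve()  # uncomment to run; loops forever accepting connections
```

Like the perl version, it abstracts nothing beyond the sockets, so breaking a client in just the right way only requires more code.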
<script src="https://gist.github.com/richard087/a1d3ed5234a2ddf5c90c0b686678f7a1.js"></script><br />
<br />
VoltDB first steps - schema (2014-12-11)<br />
<br />
The first, and most critical, step for working with VoltDB is figuring out what table(s) will be partitioned, and how. So, the schema...<br />
<br />
The case I'm looking at is for a hypothetical ticket booking agency, for very large events (rock bands playing large stadiums, etc.). There are three basic tables:<br />
<ol>
<li>seat_info</li>
<li>booking_info</li>
<li>booking_seats</li>
</ol>
seat_info is a list of all of the seats, and whatever metadata is required.<br />
booking_info is a list of the bookings, so far, and whatever metadata is required.<br />
booking_seats relates seats to bookings, and events.<br />
<br />
booking_seats is the table that's interesting; it will be the largest, and hottest, by far. The schema elements for booking_seats are below:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">create table booking_seats (<br /> booking_id varchar(10) not null,<br /> seat_number smallint not null,<br /> seat_row int not null,<br /> event_id int not null,<br /> constraint booking_seats_unique_hash unique (seat_number, seat_row, event_id)<br />);<br /><br />partition table booking_seats on column seat_row;<br /><br />create index event_booking_hash_idx on booking_seats (event_id);</span><br />
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span>booking_id is a foreign key for booking_info.<br />
seat_number and seat_row form a composite key into the seat_info table.<br />
event_id will be a foreign key relationship "in the future".<br />
<br />
seat_row is used for partitioning the booking_seats table.<br />
<br />
The VoltDB user manual has an <a href="http://docs.voltdb.com/UsingVoltDB/ChapAppDesign.php">example of a flight booking</a> schema that's been partitioned by flight_id, analogous to event_id in my example. The use case here is different, and so the partitioning scheme needs to be different.<br />
<br />
<ul>
<li>event_id would cause all of the traffic for the ticket booking system to be directed to a single partition when tickets for that event were put on sale - which sort of defeats the purpose of partitioning the data in the first place.</li>
<li>booking_id could be used for partitioning, but as it's not a part of the unique constraint, the constraint would be expensive to enforce.</li>
<li>seat_number isn't a good choice for partitioning because buying multiple tickets is normal, and each transaction should be kept within a single partition, if possible.</li>
<li>seat_row is my choice for partitioning, because most bookings tend to be in the same row, and because it's a part of the unique constraint that prevents seats being double-booked.</li>
</ul>
<br />
In theory booking_id, or even event_id, might be a sound choice for partitioning. For this example I really would like to have aggressive partitioning of the data to <strike>see what breaks</strike> stress VoltDB's capabilities.<br />
<br />
This is a "not bad" schema; but the query for finding available seats is a bit ugly. I added the index on event_id, to optimise the query for available seats for a given event, but testing is really essential to see if it helps or hinders performance overall.</div>
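As a sketch of that "a bit ugly" availability query (Python's built-in sqlite3 stands in for VoltDB here, since the query itself is plain SQL; the sample seats and event_id 42 are invented for illustration):

```python
import sqlite3

# Stand-in for the schema above; sqlite3 is used only so the query runs anywhere.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table seat_info (seat_number integer, seat_row integer);
create table booking_seats (
    booking_id text not null,
    seat_number integer not null,
    seat_row integer not null,
    event_id integer not null,
    unique (seat_number, seat_row, event_id)
);
""")
conn.executemany("insert into seat_info values (?, ?)",
                 [(1, 1), (2, 1), (3, 1)])        # (seat_number, seat_row)
conn.execute("insert into booking_seats values ('B0001', 2, 1, 42)")

# Available seats for an event: every seat with no booking_seats row for it.
# The event_id index in the post is there to support this subquery.
available = conn.execute("""
    select s.seat_row, s.seat_number
    from seat_info s
    where not exists (
        select 1 from booking_seats b
        where b.event_id = ?
          and b.seat_row = s.seat_row
          and b.seat_number = s.seat_number)
    order by s.seat_row, s.seat_number
""", (42,)).fetchall()
print(available)
```

In VoltDB the same anti-join would fan out across partitions (it can't be satisfied from a single seat_row partition), which is exactly why it needs testing.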
Explorations of VoltDB (2014-12-07)<br />
<br />
I've started on a little project to see how <a href="http://voltdb.com/">VoltDB</a> lines up against its promises.<br />
<br />
The Unique Selling Point (sort of) of VoltDB is that you can horizontally scale an OLTP database, in a shared-nothing environment. That's quite a trick.<br />
<br />
Horizontally-scalable OLAP systems are well-known, even passé; but getting an OLTP system that scales elegantly has been hard/impossible at a reasonable cost. Could you scale out an Oracle RAC? Sure, if you have a few million bucks for licences and hardware. Can you scale out MySQL? Sure, if you have an army of programmers willing and able to implement sharding in their applications.<br />
<br />
The idea is that VoltDB takes the headache of managing sharding and scalable storage, from the <strike>children</strike> developers, and hands it to the <strike>crazy people</strike> DBAs.<br />
<br />
VoltDB has a base price of Free, which I like, so I decided to give it a go. The idea is to come up with a little benchmark problem, to stress its ACIDity. My example case is for a ticketing agency, the kind that manages major events - rock bands that will sell out a 100,000-person stadium. At this point, I'm just mocking up a little data, and a few queries. I hope I'll get around to building a little erlvolt app to go with it.<br />
<br />
Getting up and running was a breeze. The documentation and build are excellent in this respect, I can't fault them. I took the hardest possible route and compiled the code from scratch (on Ubuntu) without reading the documentation (which I've since read) - and it went brilliantly well, with an idiot-resistant build that coaxed me towards the correct answer when I got things wrong (generally missing dependencies, namely "ant" and "g++").<br />
<br />
For sane people, you can just register on their website, download a binary, and install it. There's an (oh-so-cool) docker image available, but I got bored of playing with that in less time than it took to download; just install the binary, it's easy.<br />
<br />
I've found that by being badly behaved, I can crash the VoltDB server (and/or my browser) on my ancient little laptop. This worries me (only) a little. I have been pretty rough on the DB, throwing all kinds of broken rubbish at it, but a DB really does need to be bullet-proof. In terms of running queries, procedures, etc., VoltDB works just fine, and does what it says on the tin.<br />
<br />
If anyone from VoltDB reads this, please address bug <a href="https://issues.voltdb.com/browse/ENG-2526">ENG-2526</a> - it forces developers to build misleading interfaces, and that's a <i>really</i> bad idea.<br />
<br />
VoltDB looks promising, so far, and has all kinds of interesting features. I'd like to see VoltDB extend with more data types, and functions, and maybe referential integrity enforcement... and a pot of gold, and a pony...<br />
<br />
<br />
So.... Google made a mess of your AdWords profile? (2014-08-08)<br />
<br />
Google makes a mess of accounts, pretty regularly. With the reliability that comes with extensive automation, Google also ignores any and all appeals for intervention to fix busted accounts.<br />
<br />
My problem: My Google account can do all kinds of stuff on Google, except manage the ads on my blog. I had to create a new account, exclusively to manage ads on my account.<br />
<br />
Salvation came in the form of fiddling about in the AdWords/AdSense console. I poked about here, on the basis of "follow the money". While publishers provide content, which is nice; advertisers provide cash money, which is life itself for Google. In a not-at-all-surprising twist, the console for placing ads (rather than hosting) has heaps of functions, is beautiful, and actually works! One of the AdWords functions is to allow multiple users for a given AdWords account, and additional users can be administrators! Woo!<br />
<br />
To re-enable my account with AdSense/AdWords, I followed these steps:<br />
<br />
<ol>
<li>If you haven't already, go make a new Google account.</li>
<li>If you haven't already, go make a blog. It may need a published post, but the content doesn't matter.</li>
<li>Login to http://adwords.google.com/ using your new account.</li>
<li>Go to the settings menu (under that little cog thing in the top-right corner of the screen).</li>
<li>Go to the "Access and Authorization" menu (listed down the left side of the screen).</li>
<li>Go to the "User Management" menu (listed down the left side of the screen).</li>
<li>Add the Email address for your broken GMail account. DO NOT check the "Administrator" checkbox, yet.</li>
<li>In your GMail account, open the email, and accept the invitation.</li>
<li>Back in the "User Management" menu (for the new Google account), you can now tick the "Administrator" checkbox next to your older, previously broken, Google account.</li>
</ol>
<div>
You'll need to have two browsers (not just windows, but separate browsers, like IE and Chrome) to make this easier, one for each Google account. You can do this with one browser, but there'll be a crazy amount of logout/login processes, and that gets boring, and can cause weird problems.</div>
<br />
I hope this works for you. After this, I now have One Google account managing all of my blog stuff, including AdWords/AdSense; which is nice.<br />
<br />
Adventures in Ruby and Capistrano (2014-08-06)<br />
<br />
My current project has seen a revisit of an old role - automating the deployment of distributed systems. Owing to the vagaries of consultancy, I found myself the developer and maintainer of a Ruby-themed deployment environment designed by someone else. I was thrown in the deep end, which is ok - the water was warm, and the current, gentle.<br />
<br />
The (non-Ruby) components:<br />
GitLab - centralised git repository management.<br />
Jenkins - schedules, catalogues, and organises builds, and test runs.<br />
Puppet - setup infrastructure and install underlying software. I'm aware that Puppet is built using Ruby, but as a user, this is well-hidden.<br />
<br />
The Ruby bits:<br />
rvm - The Ruby Version Manager. Installs and maintains the Ruby environment.<br />
gem - a packaged up Ruby library. Analogous to a lightweight rpm, or (more of a stretch) msi file.<br />
geminabox - server and caching proxy for gems built, and used.<br />
bundle - a tool for managing gems.<br />
rake - make in Ruby, can use gems.<br />
Capistrano - a set of tools to do stuff (mostly rake) on other machines (via ssh).<br />
<br />
Light touch configuration (2014-07-10)<br />
<br />
This is a great little blog post on how to automatically fill out templates for system configuration. What it lacks in 'completeness', it makes up for in flexibility and simplicity. Sadly, it's Ruby-based, but that's ok on my current project.<br />
<br />
http://findingscience.com/linux/sysadmin/ruby/2010/10/27/config-template-class.html<br />
<br />
<br />
Oracle JDK on Linux, done right! (2014-06-03)<br />
<br />
http://d.stavrovski.net/blog/post/how-to-install-and-setup-oracle-java-jdk-in-centos-6<br />
<br />
Wooo! There's a way forward.<br />
<br />
avconv doesn't love my webcam :( (2013-08-22)<br />
<br />
I had a way to get a view of my DCS-932L, but now it's broken! Someone's gone and fixed (or introduced) a bug in the Ubuntu 12.04 build of avconv, and it's dead in the water. The IP cam didn't change, and the Ubuntu machine is updated regularly, so I'm calling it Ubuntu's change. The output from the DCS-932L is buggy over any interface, so I suspect the original fault lies there, but... Well, I just want the picture back.
The original article is <a href="http://lastinfinitetentacle.blogspot.com.au/2013/06/viewing-d-link-dcs-932l-ip-camera-from.html">here</a>, and I'll update it when I get to it...
Bug reporting time.<br />
<br />
Clock synchronisation approaches (2013-08-06)<br />
<br />
Further to my earlier thoughts about Spanner.
http://krzyzanowski.org/rutgers/notes/pdf/06-clocks.pdf<br />
<br />
Shared directories in Ubuntu using ACLs (2013-07-06)<br />
<br />
A nifty article on how to use ACLs in Ubuntu to share directories effectively between user accounts.<br />
<br />
<a href="http://brunogirin.blogspot.com.au/2010/03/shared-folders-in-ubuntu-with-setgid.html">http://brunogirin.blogspot.com.au/2010/03/shared-folders-in-ubuntu-with-setgid.html</a><br />
<br />
I read, in another blog on the way to the above link, that Ubuntu has a fairly prescriptive user permissions model, and I just wish they'd spell out what it is... Instead, users and administrators are left to pull together blogs, like this...<br />
<br />
Yahoo Pipe to Google maps integration (2013-07-02)<br />
<br />
Yahoo Maps are a bit rubbish, so I looked for a way to easily pull together a map I don't dislike using.<br />
<br />
The map's <a href="https://maps.google.com/maps?q=http://pipes.yahoo.com/pipes/pipe.run%3F_id%3D34fd690dbfd606bfbfeb62ee501bc624%26_render%3Dkml" target="">here</a>; the original data's <a href="http://www.health.vic.gov.au/foodsafety/regulatory_info/register.htm">here</a>; and the Yahoo pipe between is <a href="http://pipes.yahoo.com/pipes/pipe.info?_id=34fd690dbfd606bfbfeb62ee501bc624">here</a>. <br />
<br />
This map is a handy reference to the Victorian state government's public register of convictions for food safety offences. It's also a nice example of how cloud services can be integrated, mixed, and matched - Yahoo does provide a map interface to this data, but their maps are terrible.<br />
<br />
<br />
<iframe frameborder="0" height="350" marginheight="0" marginwidth="0" scrolling="no" src="https://maps.google.com/maps?q=http:%2F%2Fpipes.yahoo.com%2Fpipes%2Fpipe.run%3F_id%3D34fd690dbfd606bfbfeb62ee501bc624%26_render%3Dkml&ie=UTF8&t=h&ll=-37.387951,144.67173&spn=1.6586,1.594528&output=embed" width="425"></iframe><br />
<small><a href="https://maps.google.com/maps?q=http:%2F%2Fpipes.yahoo.com%2Fpipes%2Fpipe.run%3F_id%3D34fd690dbfd606bfbfeb62ee501bc624%26_render%3Dkml&ie=UTF8&t=h&ll=-37.387951,144.67173&spn=1.6586,1.594528&source=embed" style="color: blue; text-align: left;">View Larger Map</a></small><br />
<br />
Viewing D-Link DCS-932L IP camera from Ubuntu (2013-06-27)<br />
<br />
In a word (line) ...<br />
<div>
<br /><span style="font-family: Courier New, Courier, monospace;">avconv -r 15 -f mjpeg -i http://USERNAME:PASSWORD@IP-ADDRESS/video.cgi -i http://USERNAME:PASSWORD@IP-ADDRESS/audio.cgi -vcodec mpeg4 -f mpegts file:///dev/stdout | vlc file:///dev/stdin</span><br />
<div>
<br />
<br />
Now, if only the audio didn't crash avconv, we'd be cooking with gas! The silent version is below, and seems stable.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">avconv -r 15 -f mjpeg -i http://USERNAME:PASSWORD@IP-ADDRESS/video.cgi -vcodec mpeg4 -f mpegts file:///dev/stdout | vlc file:///dev/stdin</span><br />
<br /></div>
</div>
SOA version management (2013-02-26)<br />
<br />
I was looking for a way to not write this whole report, and fortunately, Google found this report for me. Yay! I didn't really want to write this stuff, so I'm glad I don't have to, now.<br />
<br />
The title of "Best Practices for Artifact Versioning in Service-Oriented Systems" is pretty accurate. For the TIBCO geeks, this is required knowledge for the (deprecated) TIBCO SOA Architect qualification.<br />
<br />
<a href="http://www.sei.cmu.edu/library/abstracts/reports/11tn009.cfm">http://www.sei.cmu.edu/library/abstracts/reports/11tn009.cfm</a><br />
<br />
<br />
File upload and copy pattern (2013-02-06)<br />
<br />
I was asked a different question; but then ended up with this instead. Shame to waste it...<br />
<br />
The pattern that I've used in the past has directories like this (under some root directory like /var/lib/uploads):<br />
<br />
/partial<br />
/ready<br />
/working<br />
/success<br />
/error<br />
<br />
This set of directories is required for all traffic to a given recipient. Essentially, a file makes its way through the directories, from top to bottom. All directories should be on a single file system. File uploaders/clients should be able to write to /partial and /ready. File receivers/servers should be able to write to everything, except /partial.<br />
<br />
Step 1: A file is uploaded/copied into the /partial directory; with a (globally) unique file name. This step completes when there's sufficient confidence that the file has been copied (usually that just means that the expected number of bytes has been written without an error being thrown).<br />
Step 2: The file uploader/client moves the newly uploaded file into the /ready directory. DO NOT COPY THE FILE!!!! In general, moving/renaming a file within a file system is guaranteed to be an atomic operation, but copying is not. This signifies that the file is ready (from the perspective of the client).<br />
Step 3: When the recipient application/process is ready to process a file in /ready, it should first move the file into its /working directory.<br />
Step 4: When the recipient application has finished processing a file (due to completion, or error) it should move the file into the /success, or /error, directory.<br />
<br />
Things to watch:<br />
- More than one file in the working directory is likely to indicate a failure.<br />
- Any file in the error directory is likely to indicate a failure.<br />
- "Old" files in the partial directory indicate unsuccessful copies/uploads.<br />
- "Old" files in the ready directory indicate that processing has failed/slowed.<br />
- "Old" files in the working directory indicate a failure/ABEND.<br />
- Make sure the file system doesn't fill.<br />
- Archiving is not covered here; that's a different pattern.<br />
<br />
Other notes:<br />
- If there is more than one recipient (e.g.: a multi-process server) there should be multiple working directories (working01, working02, working03, etc.) - one for each process.<br />
- This is essentially a queue implementation with single-phase commit transactions.<br />
- You can implement exactly the same pattern using file renaming, rather than separate directories. I prefer directories.<br />
<div>
<br /></div>
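A minimal sketch of steps 1 to 4 (Python here for brevity; the function names and the single-process layout are mine, not part of the pattern):

```python
import os
import tempfile
import uuid

DIRS = ("partial", "ready", "working", "success", "error")

def make_dirs(root):
    for d in DIRS:
        os.makedirs(os.path.join(root, d), exist_ok=True)

def upload(root, data):
    # Step 1: write into /partial under a (globally) unique name.
    name = uuid.uuid4().hex
    partial = os.path.join(root, "partial", name)
    with open(partial, "wb") as f:
        f.write(data)
    # Step 2: move (don't copy!) into /ready - an atomic rename within one file system.
    os.rename(partial, os.path.join(root, "ready", name))
    return name

def process(root, name, ok=True):
    # Step 3: the recipient claims the file by moving it into /working.
    working = os.path.join(root, "working", name)
    os.rename(os.path.join(root, "ready", name), working)
    # Step 4: on completion (or error), move it into /success or /error.
    final = os.path.join(root, "success" if ok else "error", name)
    os.rename(working, final)
    return final

root = tempfile.mkdtemp()
make_dirs(root)
name = upload(root, b"payload")
final = process(root, name)
```

The renames are what make it a queue: each move is the single-phase "commit" that hands the file to the next stage.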
Google Spanner reflections (2012-11-27)<br />
<br />
<a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//archive/spanner-osdi2012.pdf" target="_blank">Google Spanner</a> (warning: PDF) has made some noise recently, and I got around to giving their paper a quick read. I was happy to discover that it describes a solution to a problem I've been thinking about for a while, and that my musings were heading in the right direction. The key (as the Googlers are at pains to point out, and I was also considering) is how to handle distributed co-ordination of commits (in a scalable manner).<br />
<br />
Spanner introduces the concept of tracking time by a combination of instants, intervals, and assertions about the two - on the understanding that there's uncertainty about the "current" time. As I read through the paper, I came to think that there's no particular <em>need</em> for atomic and GPS clocks (although they're definitely desirable), so long as the confidence in the current time can be established. The effect of rising uncertainty in the current time would be a reduction in performance, I imagine...<br />
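A toy sketch of the interval idea (not the paper's algorithm, and the uncertainty figures are invented): a commit timestamp is only released once the clock's earliest possible reading has passed it, so a larger uncertainty bound directly means a longer wait.

```python
import time

def now_interval(uncertainty_s):
    # TrueTime-style TT.now(): an (earliest, latest) pair rather than an instant.
    # uncertainty_s is the clock's error bound, however it's obtained (GPS,
    # atomic clocks, or plain NTP statistics).
    t = time.time()
    return (t - uncertainty_s, t + uncertainty_s)

def commit_wait(commit_ts, uncertainty_s):
    # Block until commit_ts is definitely in the past: earliest > commit_ts.
    # Rising uncertainty lengthens this loop - the performance cost imagined above.
    while now_interval(uncertainty_s)[0] <= commit_ts:
        time.sleep(uncertainty_s / 10)

start = time.time()
_, latest = now_interval(0.05)   # pick a commit timestamp at the latest bound
commit_wait(latest, 0.05)        # waits roughly 2 x 0.05 seconds
elapsed = time.time() - start
```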
<br />
It turns out that the venerable <a href="http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm" target="_blank">NTP protocol is able to provide measures of uncertainty</a>; so one could re-implement Spanner without atomic clocks (at least, initially), and could then assault the various questions of electing "leaders" and manging reads and writes in a scalable manner.<br />
<br />
<a href="http://zookeeper.apache.org/" target="_blank">Apache Zookeeper</a> is the first thought for the leader election, but they talk about clusters of 4 machines being a large deployment, and about "within this data centre", etc. Let's reserve Zookeeper for a use case that fits better; something without Spanner's global ambitions. Spanner's architecture means that this is a real concern. A closer reading of the paper is necessary to figure out what's involved here, and how intimately it's related to uncertain timing.<br />
<br />
The distributed reads and writes problem does look a bit like <a href="http://ceph.com/" target="_blank">Ceph</a>...<br />
<br />
Spanner may not be so intractable, after all.<br />
<br />
VMware player 4.0.3 on Ubuntu 12.04 (2012-05-22)<br />
<br />
VMware player has some well-known issues with setting up the network for player under Ubuntu 12.04 (that is, where Ubuntu is the host OS).<br />
<br />
With one small change, I followed the great instructions here: <a href="http://www.kartook.com/2012/05/vmware-virtual-network-device-unable-to-loadcompile-vm-player-4-0-2-in-ubuntu-12-04/">http://www.kartook.com/2012/05/vmware-virtual-network-device-unable-to-loadcompile-vm-player-4-0-2-in-ubuntu-12-04/</a><br />
<br />
The change is that the script you download has the recognised version numbers hardcoded in the header.<br />
<br />
Before you run the script (patch-modules_3.2.0.sh), open the file using your preferred text editor, and change the 4.0.2 to 4.0.3, so it looks like:<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;"> plreqver=4.0.3</span><br />
<br />
The script (and VMware player) seems to run fine for VMware Player 4.0.3 on Ubuntu 12.04. If it should make a difference, I'm running x86_64 Ubuntu 12.04.<br />
<br />
Empty an Oracle schema, leave an empty schema (2012-03-28)<br />
<br />
What it says on the tin. Sufficient code analysis will reveal that some bizarre schema dependencies may not be destroyed by this; but experience will show that's not actually a problem :-)<br />
<br />
The code lives as a <a href="https://gist.github.com/richard087/542206a7e74202d628b7" target="_blank">gist over on github</a>. A snapshot is below, for simplicity:<br />
<br />
<br />
<br />
<pre>set echo off
set verify off
set serveroutput on size 100000
-- Hosted at http://lastinfinitetentacle.blogspot.com/2012/03/empty-oracle-schema-leave-empty-schema.html
-- Disable all constraints
BEGIN
FOR c IN
(SELECT c.owner, c.table_name, c.constraint_name
FROM user_constraints c, user_tables t
WHERE c.table_name = t.table_name
AND c.status = 'ENABLED'
ORDER BY c.constraint_type DESC)
LOOP
dbms_utility.exec_ddl_statement('alter table ' || c.owner || '.' || c.table_name || ' disable constraint ' || c.constraint_name);
END LOOP;
END;
/
-- remove all objects
declare
cursor dropObjectsCursor is
select 'drop ' || object_type || ' ' || object_name as sqlDropStmt
from user_objects
where object_type <> 'TABLE' and object_type <> 'INDEX'
order by object_type;
cursor dropTablesCursor is
select 'truncate table ' || object_name as sqlTruncTbl,
'drop table ' || object_name || ' cascade constraints' as sqlDropTbl
from user_objects
where object_type = 'TABLE'
order by object_type;
begin
for ob in dropTablesCursor
loop
begin
execute immediate ob.sqlTruncTbl;
exception when others then dbms_output.put_line('Could not truncate a table.');
end;
begin
execute immediate ob.sqlDropTbl;
exception when others then dbms_output.put_line('Could not drop a table.');
end;
end loop;
for ob in dropObjectsCursor
loop
begin
execute immediate ob.sqlDropStmt;
exception when others then dbms_output.put_line('Could not drop some object.');
end;
end loop;
end;
/
purge recyclebin;
</pre>
CSV parser in JavaScript (2011-10-17)<br />
<br />
I wanted a JavaScript-based CSV parser. I couldn't find a good one (that would allow multi-character delimiters, and text qualifiers), so I wrote it. The textToArray function should run under any js implementation; but the demonstration of it is written for a ringojs environment.<br />
<br />
Call textToArray with a line of delimited text, and the delimiter and text qualifier you're expecting to find; it will give back an array of the elements it finds.<br />
<pre>var textToArray = function (txtLine, del, txtQual) {
"use strict";
var datArr = [], newStr = "";
while (txtLine.length > 0) {
if (txtLine.substr(0, txtQual.length) === txtQual) {
// get quoted block
newStr = txtLine.substr(0, txtQual.length + txtLine.indexOf(txtQual, txtQual.length));
datArr.push(newStr.substr(txtQual.length, newStr.length - txtQual.length * 2));
}
else {
// get data block
if (txtLine.indexOf(del) !== -1) {
newStr = txtLine.substr(0, txtLine.indexOf(del));
} else {
newStr = txtLine;
}
datArr.push(newStr);
}
txtLine = txtLine.substr(newStr.length + del.length, txtLine.length);
}
return datArr;
};
var fs = require('fs');
var con = require('console');
var del = ";;";
var txtQual = "\"\""; // a pair of quotes.
var file = fs.open('D:/ringojs-0.8/test.txt');
var line = "";
for (line in file) {
con.log(textToArray(line, del, txtQual).join("---"));
}
</pre>
<br />
Sane coding templates for javascript (2011-09-27)<br />
<br />
It's just craziness. Apparently it's possible to treat JavaScript like a real programming language. Very exciting stuff, given that it seems to be becoming the ethernet of programming languages - despite many and varied shortcomings, its versatility, ubiquity, and low barriers to entry are likely to ensure it sees off any competitors.<br />
<br />
Here ( <a href="http://www.crockford.com/javascript/private.html">http://www.crockford.com/javascript/private.html</a> ) is a handy clip-and-keep guide to implementing common object orientation patterns. Douglas Crockford's excellent musings continue through to inheritance too: <a href="http://javascript.crockford.com/inheritance.html">http://javascript.crockford.com/inheritance.html</a><br />
<br />
Make no mistake, I don't love Javascript, but it's got a lot of momentum. We (the technically involved population) should embrace patterns of Javascript use that will make it sustainable into the future. We need to embrace the <a href="http://c2.com/cgi/wiki?SaneSubset">sane subset</a> of the language's use.<br />
<br />
node.js is good; mongodb and couchdb are stubbornly continuing to exist, and web browsers (whether on smart phones, or desktops) aren't going anywhere. Javascript is a fact, get used to it.<br />
<br />
Should I buy a new graphics card? (2011-07-22)<br />
<br />
I entered into a "more detailed than necessary" examination of this. The basic questions are:<br />
<ul><li>What will it cost?</li>
<li>What sort of improvement to performance can I expect (can I play Shogun 2)?</li>
<li>Will any power savings pay for some new card?</li>
</ul>Given my current rig is a pair of nVidia 8800 GT (2 x 512MB) in an SLI config; and Shogun 2 reports graphics requirements as "AMD Radeon HD 5000 and 6000 series graphics cards or equivalent" (<a href="http://www.pcgamer.com/2011/01/06/total-war-shogun-2-system-specs-revealed/">pcgamer.com</a>), I had no idea. I went looking for benchmarks, and couldn't find everything (my old/current rig, and some equivalent to the Radeons) on the same page.<br />
<br />
I compiled some stats from previous years, did some (very) rough math and came up with this worksheet:<br />
<a href="https://spreadsheets.google.com/spreadsheet/ccc?key=0AqjnfvEY10rddDFUMkxlMVhmTHp1cEphaXF6TjhzQXc&hl=en_GB">GPU decision worksheet</a>. It turns out that my pair of 8800GT offers identical performance to a (more) modern <a href="http://www.nvidia.com/object/product-geforce-gtx-460-us.html">GeForce GTX 460</a> (1GB RAM), which offers sufficient performance to play Shogun 2.<br />
<br />
The upshot of this? I can play Shogun 2 on my current rig, and it's very unlikely that I can recover the cost of the new GPU through power savings. No new kit for me :-(<br />
<br />
How to build a (modern) Windows Server (2011-06-20)<br />
<br />
<a href="http://jeremywaldrop.wordpress.com/2008/10/28/how-to-build-a-windows-2008-vmware-esx-vm-template/">http://jeremywaldrop.wordpress.com/2008/10/28/how-to-build-a-windows-2008-vmware-esx-vm-template/</a><br />
<br />
Excellent. I think some of the decisions are a little odd (turn off the firewall?!?), but this covers a lot of ground.<br />
<br />
The cloud - public, private, and the appliances within (2011-02-08)<br />
<br />
I talked to a sales guy the other day, one of the better ones. Not only was he forthcoming, helpful, and charming, he was knowledgeable and experienced; a rare breed. I had been out of this vendor's loop for a bit and asked what was going on; he mentioned all kinds of things he thought I'd be interested in, including a big cloud push for the vendor's software. Great. I segued into software appliances and he told me they were dead. Not limping, not re-purposed - DEAD.<br />
<br />
From a techie's point of view (which is, to be fair, not the same as a salesman's) I see these as points along a continuum, or at least as pre-requisites. "The Cloud" is all about economies of scale and the industrialisation of information technology. In order to achieve the desired outcomes from The Cloud, standardisation and mechanisation are pre-requisites. A software appliance (when properly built) is a standardised "unit", an abstraction of an underlying mechanisation - it's the thing you want in The Cloud.<br />
<br />
Amazon knows this, rPath knows this, VMware knows this, and Google has been doing this implicitly (as have all big web shops) since its inception; but the commentariat seem to have completely missed it, the analysts have missed it, and most of the vendors have missed it. It's a joke.<br />
<br />
The Cloud is only a cloud while the hard bits are hidden. When the hard bits start peeking out, The Cloud becomes The Mess. When the hard bits get hidden, you're running someone else's software, on someone else's hardware (or your own - pick a public or private cloud as suits you), and we all end up a lot happier with the result.<br />
<br />
The Cloud == Appliances.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-27590761.post-33854251500983312922011-02-03T17:01:00.001+11:002011-02-08T15:19:51.919+11:00TIBCO fiddlesIn brief:<br />
<ol><li>When fiddling with TIBCO Software, use an OpenSUSE virtual machine, it hurts less.</li>
<li>Oracle XE is very handy for fiddling with TIBCO software - here's a great web page explaining how to set it up on OpenSUSE - <a href="http://forums.opensuse.org/install-boot-login/414654-how-install-oraclexe-opensuse-11-1-a.html">http://forums.opensuse.org/install-boot-login/414654-how-install-oraclexe-opensuse-11-1-a.html</a></li>
<li>Give your VM about 2GB RAM, or more.</li>
<li>Create a different Unix user for each 'build' (ActiveMatrix v2 suite, ActiveMatrix v3 suite, BusinessWorks/Administrator suite, etc.). Don't let them read/write to each other's files, ever.</li>
<li>Do not install gcj, OpenJDK, or anything remotely like Java and not made by Sun and/or Oracle.</li>
</ol>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-27590761.post-62957454708158104692011-01-05T11:18:00.007+11:002011-01-06T14:37:07.770+11:00Further Time Machine-esque behaviourMy earlier post on Time Machine-esque backups ( <a href="http://lastinfinitetentacle.blogspot.com/2009/06/backups-for-lazy.html">http://lastinfinitetentacle.blogspot.com/2009/06/backups-for-lazy.html</a> ) has some useful links for getting regular differential backups going. However, it can be done more neatly, particularly when it comes to Windows, and I've come across a nifty solution here: <a href="http://www.robgolding.com/blog/2009/01/14/leveraging-vss-and-robocopy-for-robust-backups/">http://www.robgolding.com/blog/2009/01/14/leveraging-vss-and-robocopy-for-robust-backups/</a><br />
<br />
This approach requires the vshadow.exe application, which is its own can of worms. Typically, vshadow ships with a/the Windows SDK - ServerFault has some info <a href="http://serverfault.com/questions/137126/vss-error-521-when-attempting-backup/137254#137254">( http://serverfault.com/questions/137126/vss-error-521-when-attempting-backup/137254#137254</a> ) which suggests getting the Vista era SDK (v6.1, from <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=e6e1c3df-a74f-4207-8586-711ebe331cdc&displaylang=en">http://www.microsoft.com/downloads/details.aspx?FamilyID=e6e1c3df-a74f-4207-8586-711ebe331cdc&displaylang=en</a> ). You will only need to install the "Win32 Developer Tools" component; you can ignore everything else.<br />
<br />
Using VSS (Volume Shadow Copy Service) will allow an internally-consistent copy to be made of a Windows drive. All I then need to do is port the script here ( <a href="http://serverfault.com/questions/27397/sync-lvm-snapshots-to-backup-server/168034#168034">http://serverfault.com/questions/27397/sync-lvm-snapshots-to-backup-server/168034#168034</a> ) to Windows. I could use the script to push the image onto the file server, where it can update an SVN repository (a physical backup, for bare-metal recovery). Handling the (Windows) shadow drive can be achieved using the Windows-native 'dd' from <a href="http://www.gmgsystemsinc.com/fau/">http://www.gmgsystemsinc.com/fau/</a> . The same dd tool can be used for an 'easy' recovery. If I get this right, it should be Windows native, and easy to install/maintain.<br />
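The snapshot-and-image step can be sketched as a pair of tiny batch scripts. This is an untested sketch, not the finished port: the script names, the UNC share path, and the dd option spelling are my own assumptions to adapt. The real parts are vshadow's -script flag, which writes the shadow device name into a variables file (as SHADOW_DEVICE_1), and its -exec flag, which runs a command while the non-persistent shadow copy still exists:

```shell
:: backup.cmd -- create a non-persistent shadow copy of C: and image it.
:: (sketch only; paths and filenames are assumptions)
vshadow.exe -script=shadow-vars.cmd -exec=image-shadow.cmd C:

:: image-shadow.cmd -- invoked by vshadow while the shadow copy exists.
:: shadow-vars.cmd sets SHADOW_DEVICE_1 to the \\?\GLOBALROOT\... device.
call shadow-vars.cmd
dd if=%SHADOW_DEVICE_1% of=\\fileserver\backups\c-drive.img bs=1M
```

Once vshadow exits, the non-persistent shadow is discarded automatically, so there's no cleanup step to get wrong.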
<br />
A separate job will pick out user directories for the Time Machine treatment (more of a logical backup, also on my file server, for basic file recovery) via the mercifully short script I previously linked to, thus: <a href="http://blog.interlinked.org/tutorials/rsync_time_machine.html">http://blog.interlinked.org/tutorials/rsync_time_machine.html</a><br />
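The core of that mercifully short rsync approach is --link-dest: files unchanged since the previous snapshot become hard links into it, so every snapshot looks like a full copy but costs almost nothing in disk. A sketch along those lines (the paths and naming scheme here are my own assumptions):

```shell
#!/bin/sh
# Time Machine-style rotating snapshots with rsync --link-dest.
# SRC and DEST are placeholder paths - adjust to taste.
SRC=/home/users/
DEST=/srv/backups
date=$(date +%Y-%m-%dT%H-%M-%S)

# Unchanged files are hard-linked against the previous snapshot
# ("current"); only changed files are actually copied.
rsync -a --delete \
  --link-dest="$DEST/current" \
  "$SRC" "$DEST/back-$date"

# Repoint "current" at the newest snapshot for the next run.
rm -f "$DEST/current"
ln -s "back-$date" "$DEST/current"
```

On the first run "current" doesn't exist yet; rsync just warns and does a plain full copy, which is exactly what you want.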
<br />
Technically, I could use Zumastor/ddsnap on the file server to get snapshots/revisions, and I still might...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-27590761.post-79441005198793605652010-11-22T14:30:00.001+11:002011-01-05T14:53:57.713+11:00MDX for the hard of thinkingThat includes me. Between Mondrian's exciting behaviour (at least that of the schema editor) and the vagaries of MDX, I'm finding this OLAP stuff hard to get into. This is a nice rundown on what MDX is. I'm still hoping there's a nifty Mondrian intro I haven't found yet. <a href="http://www.databasejournal.com/features/mssql/article.php/10894_1495511_6/MDX-at-First-Glance-Introduction-to-SQL-Server-MDX-Essentials.htm">http://www.databasejournal.com/features/mssql/article.php/10894_1495511_6/MDX-at-First-Glance-Introduction-to-SQL-Server-MDX-Essentials.htm</a>Unknownnoreply@blogger.com0