Irrepressibility in the face of popularity, pixies and other illnesses

I’m home sick today, so of course it seemed like a good idea to update my blog…

After adding that new irrepressible.info banner to the sidebar it occurred to me that I should activate the neat Sidebar Widgets feature… Of course, this required me to either give up my non-standard sidebars, or widgetise them…

Firstly, I modified the Top 10 posts plugin (original version) to have a widget mode for the sidebar, and also to do everything in a plugins_loaded hook, since it may load before the widget support plugin itself.

I then knocked up a quick irrepressible.info widget, based on the Google Search widget that comes with the plugin, which produces random chunks of censored material from irrepressible.info. These widget things are quite easy. ^_^

I also quickly knocked up a Weatherpixie widget for the Weatherpixie, although I didn’t go as far as to offer a drop-down for the troopers or countries. This way people still go visit the pixie-chooser page, where the author’s Google Ads are. I don’t feel bad this way, given I emailed the author a few months ago about this, and didn’t get any reply.

I did all my uploading/editing in lftp, but I intend to set up an FTP mount of some kind on my machine, now that I’m trying to avoid sshing into the webhosting box. I’m also looking into chroot’d sshd or sftp, but the quick answer so far is “hard”.

I started doing this post in the Performancing Toolbar for Firefox, but after two paragraphs I decided I much preferred the normal WordPress interface. I’ll be uninstalling Performancing pretty soon.

I’ll stick any updates or other WordPress stuff in http://www.tbble.net/wp/ until such time as I can be assed putting them into version control, and fixing the bzr browser on bzr.tbble.net.


CeBIT: Prologue

Yup, that’s right. I’m in beautiful Sydney for CeBIT. The joys of trains mean that not only am I out of the office for Tuesday, Wednesday and Thursday, but also Monday and most of Friday.

At this rate, all my holiday leave will be spent on trade shows and suchlike. In my first three months at CBIT, I spent a week at LCA2006. This quarter, a week out for CeBIT. (Well, actually, that’s the Feb–Mar–Apr quarter’s week.) So now I’m looking for a tradeshow or other event that’s local, cheap and in the next three months… Then again, Clare’s MedRevue is coming up, so I’ll need to save a day or two for that.

Anyway, CeBIT. Having just spent an evening on dial-up Internet planning my schedule, here’s how it looks:

Tuesday

10:15 – 10:55
Hall 6: Telstra
11:05 – 11:50
Hall 6: Disney
12:00 – 12:50
Hall 6: Music
13:00 – 13:45
Hall 6: LG – Mobility/Convergence
2 pm
Stand P1, Hall 4: Digital Broadcasting
3 pm
Stand D50 – Hall 2: The realities of Fibre to the Home
4 pm
Stand D50 – Hall 2: Digital Media and Convergence
5 pm
Stand D50 – Hall 2: Utilities and Broadband Power Line

Wednesday

Morning
Stand J1, Front of Hall 3: Future parc launch
11am
Stand P1, Hall 4: IT Services
12 pm
Stand D50 – Hall 2: Next Generation Networks, IP and VoIP
1pm
Stand P1, Hall 4: Open Source 1
2pm
Stand P1, Hall 4: Venture Capital
3 pm
Stand D50 – Hall 2: Fixed wireless broadband developments
4 pm
Stand D50 – Hall 2: $3 billion for Regional Telecoms
5 pm
Stand D50 – Hall 2: Mobile voice still the killer application

Thursday

10:30 – 10:45
Stand J1, Front of Hall 3: Ontologies and topic maps for smart information use
11am
Hall 3: CompTIA
12 pm
Stand D50 – Hall 2: The battle between 3G HSDPA and WiMAX
13:00 – 13:15
Stand J1, Front of Hall 3: Disaster prediction, response and recovery
13:30 – 13:45
Stand J1, Front of Hall 3: Health data integration
2pm
Hall 3: CompTIA
3pm
Stand P1, Hall 4: Open Source 2
4pm – 5pm
Stand J50, Rear of Hall 3: The BlackBerry Advantage for Small & Medium Businesses

In summary: All of the keynotes, all of the open-source stuff I can manage, and what time’s left for Internet and Blackberry stuff. I think I’ll have a short period to wander around the stands too, visit the Linux Australia guys.

I’m also meeting a vendor down here, with luck, so I’m feeling all well-travelled-businessmany today. ^_^

The disadvantage of being in Sydney is I’m on dialup, and also a few hundred kilometres away, so logging in to the office Terminal Server for email is a painfully slow experience.

Hopefully tomorrow I’ll find an Internet cafe in the city where I can plug my laptop in and get some work done. ^_^

Sin, Certs and Wans; or Sun Tzu VS Bikinis

I pre-ordered Sin Episode 1: Emergence on the weekend. It was cheap (AU$23 or so) and included a Steam version of the original Sin. This is partly my fault: I was hoping for a steamy version of Original Sin… (Sorry if you were hoping for a different original sin joke. ^_^)

I actually own Sin, but I don’t know where the CD is. The original box is still on my shelf. So I’m taking the opportunity to actually finish the game, since the new one is set four years later. And it’s still as I remember it: one of the most fun first-person shooters I’ve played… It dragged me right away from Half-Life and its expansion packs. (Although I’ve finished Half-Life and Blue Shift now, and I think I’m close to the end of Opposing Force.)

A recent topic on Slashdot covered the changing value of certifications. Beyond the somewhat inaccurate summarising of the article on Slashdot (certifications still attract a pay premium; they don’t actively hurt your career), I think a rather important oversight was made in much of the discussion (ie. that bit which survived my threshold): for some jobs a certification doesn’t attract a premium, because it’s a necessity. Maybe this was covered in the original research; I didn’t bother trying to track down the report mentioned in the article.

Certainly the terms of employment at CBIT require that I hold a certification of some kind within six months of joining. It originally specified MCSE, but they happily let me substitute my LPIC-1. I’ve since discovered that my Windows NT4 MCSE is still valid, so I’m putting the MCSE upgrade on hold to get my CCNA done.

Then a lot of the posters proceeded to confuse certifications with qualifications. Having both, I’m amazed that this happens. On the other hand, the people generating this confusion were usually on the “I didn’t need stuffy boring university or a do-in-my-sleep MCSD, I just walked in and told them how I’ve been running Windows since I was six and they hired me” side of the debate.

I’m going to get condescending here. I’ll let you know when it’s over. I really think these attitudes go hand in hand, and are usually closely followed by “Why won’t <large company> hire me as their CTO? I know as much as all these highly qualified lawyers and managers. They’ll fail now, and it’ll be all their fault for not hiring me,” and then later by “I’ve been working this same $30k/year first-level support role for ten years now, because management are too short-sighted to realise that I was just too smart to waste three years on a degree.” Done with the condescending bit.

And sure, I myself have been guilty of this. I still am, frequently. I think most of us in IT do it to some extent. This is also how we end up with the armchair lawyers, armchair managers, armchair accountants and armchair linguists that pervade our community. (I pick those because I’ve done them all myself. Ranter, berate thyself. ^_^) It might be a symptom of the type of person who succeeds in IT (self-confident, multi-skilled and widely read/educated) as compared to those who fail (obstinate, unfocussed and arrogant).

So why certify? I do it partly because I love training and learning, and having something to show for it (ignore that I waited five years to graduate my B.Sc), and partly because it makes financial sense. I like to read when I go to bed… It settles me down and clears my mind. However, a $20 novel will only last two or three days. My CCNA INTRO book has taken me over a month to get about half-way into… I think because it’s so dry, I can’t read more than a few minutes at a time. Either way, good value for $50.

Flicking through Planet Linux Australia as I do when I forget how much time it sucks up… Between the sordid tales of a Power5 lying with a SunFire — Oh I wish I had a project to throw at them… Where’s my multi-threaded Sudoku solver? — I came across this gem of an idea for a Canberra-wide wireless mesh network. This is something I’d heartily endorse, and help with where I could… I’ll have to dig out my old Gungahlin-wireless-mesh plans…

A quick aside: Another Rich Web with PHP talk from Rasmus Lerdorf. The content aside, the template is sweet. The template aside, the content is fascinating.

Another quick aside: Digital cameras have their own digital signature. I’m not sure if this is even vaguely practical for any use, but here’s the original article’s PDF. Someone in the discussion pointed me at a US Government scheme to convince colour laser printer manufacturers to encode printer identification data onto each page. It’s a scary world we live in.
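The core of the camera-signature idea is just averaging: every sensor has a fixed per-pixel gain pattern, and random noise averages away across many photos while the pattern doesn’t. Here’s a toy simulation of that averaging idea (entirely my own illustration; the actual paper uses proper denoising and pattern-noise estimation, not this flat-field trick):

```python
import random

rng = random.Random(0)
N = 4096  # pixels, flattened into a list

def shoot(pattern):
    """One flat-field exposure: a uniform grey scene scaled by each pixel's
    fixed gain error, plus random shot noise that differs every exposure."""
    return [0.5 * (1 + g) + rng.gauss(0, 0.02) for g in pattern]

def fingerprint(pattern, shots=50):
    """Average the residuals of many exposures: random noise cancels out,
    the camera's fixed pattern remains."""
    total = [0.0] * N
    for _ in range(shots):
        img = shoot(pattern)
        for i in range(N):
            total[i] += img[i] - 0.5
    return [t / shots for t in total]

def corr(x, y):
    """Plain Pearson correlation, so no libraries are needed."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Two cameras, each with its own hidden per-pixel gain pattern.
cam_a = [rng.gauss(0, 0.02) for _ in range(N)]
cam_b = [rng.gauss(0, 0.02) for _ in range(N)]
fp_a, fp_b = fingerprint(cam_a), fingerprint(cam_b)

# Residual of a single new photo from camera A.
probe = [px - 0.5 for px in shoot(cam_a)]

same = corr(probe, fp_a)    # clearly positive
other = corr(probe, fp_b)   # near zero
```

Even with the noise per shot twice as strong as the pattern, one photo correlates visibly with the right camera’s fingerprint and not with the wrong one, which is all the identification needs.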

Last aside… Two more links: One to make you cry: Even if the author is being facetious, someone somewhere believes this; and one to make you laugh: Every workplace should have a plan on its books for how to deal with Viking Raids.

But what of the bikinis, I hear you ask… Or would, were I narrating this out loud instead of in my head… And you cared about bikinis more than you care about Sun Tzu… OK. This is an IRC story. You’ve been warned.

On one of the IRC channels I hang out on, discussion turned to bikinis, and one of the channel regulars (Io) mentioned that she doesn’t wear bikinis.

Pseudo-aside: This is Io. I knocked up this image that same evening, as it happens. MSPaint’s so much easier to use than The Gimp… I did the whole image in the time it took me to give up the other day on finding a “Draw a square” button in The Gimp.

Anyway, it seemed sensible at this point to consult with my new TzuBBot. TzuBBot being my Art-Of-War-based-advice-bot. Here are the results.


03:00 < TBBle> TzuBBot: Any advice on the best way to get Io out of Brazil and into a bikini?
03:00 < TzuBBot> THE NINE SITUATIONS: 55. Hence he does not strive to ally himself with all and sundry, nor does he foster the power of other states. He carries out his own secret designs, keeping his antagonists in awe. Thus he is able to capture their cities and overthrow their kingdoms.
03:01 < TBBle> TzuBBot: Any advice on the best way to get Io out of a bikini and into my lap?
03:01 < TzuBBot> THE ARMY ON THE MARCH: 45. If a general shows confidence in his men but always insists on his orders being obeyed, the gain will be mutual.

The TzuBBot gives useful advice so often, it’s scary…
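Part of why it works so well is that there’s no cleverness to hide: an Art-Of-War-advice-bot’s core is presumably nothing more than a random chapter-and-verse picker, something like this sketch (the function name and data layout here are my guesses, not TzuBBot’s actual code):

```python
import random

# A couple of passages as samples; a real bot would load the whole translation.
ART_OF_WAR = {
    "THE NINE SITUATIONS": [
        "55. Hence he does not strive to ally himself with all and sundry, "
        "nor does he foster the power of other states.",
    ],
    "THE ARMY ON THE MARCH": [
        "45. If a general shows confidence in his men but always insists on "
        "his orders being obeyed, the gain will be mutual.",
    ],
}

def tzubbot_advice(rng=random):
    """Any question gets the same treatment: a chapter and verse at random."""
    chapter = rng.choice(sorted(ART_OF_WAR))
    return f"{chapter}: {rng.choice(ART_OF_WAR[chapter])}"
```

The scary aptness is pure apophenia: Sun Tzu is vague enough that any verse reads as advice for any question.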

Final note: My goodness, but the move from Yurika to Dino has made this website soooo much snappier!

Internet on Internet action: Routing around the bad to (speech-)recognise the good

Hmm, time to resurrect an old posting format…

Good
The Internet — automatically routing around damage such as a DMCA from Apple.
Bad
Storyline patents — Every time I think the world has dug itself to rock-bottom, someone hits me on the back of the head with a shovel.
Educational
Kotodama, a video game research prototype for teaching Japanese to anime fans — Now this is where I’d like to be taking my university education… I wonder where the project’s going, and how I can get onboard… And of course, this led me to Julius, a speech-recognition system that I wish I had time to play with.

Thanks to Hellblazer via Slashdot for the heads-up on the patent.

Slashdot is prolly also the viaduct via which I got the Kotodama link, as well as a reminder about the Linux-based GP2X portable gaming doodad, and AnoNet, like FreeNet but built from VPN and SSH tunnels which leave you in control of your own machine’s actions. I guess the difference is that on AnoNet, if someone does work out who you are and they seize your equipment, you don’t have the I didn’t know that was on there defense you get from FreeNet. There’s also the issue that, if you do something heinous enough, such that international authorities can co-operate on it, then you can be tracked down.

One of the things AnoNet’s Wikipedia entry suggests would be a good thing to protect on AnoNet is bnetd, the Battle.net server that Blizzard Entertainment had shut down in the US. Mind you, even on the regular Internet finding bnetd source was as easy as following the link from the bnetd Wikipedia entry, once again demonstrating how the Internet routes around damage. ^_^

Niagara and network stacks, TCP and talloc: LCA Presentations Day 1 Morning

Overnight interlude: I spent all evening installing WordPress 2.0, and fixing up a few old posts for XHTML compliance. The new WYSIWYG editor is neat, but will lose chunks of unparsable markup (ie. missed quotes and brackets). New posts will prolly be fine to use it for, but for the moment I’m sticking with writing straight HTML.

The whole AJAX interface thing is cool. I’m looking forward to the PHP5 talk this afternoon.

Of course, once I had that done, I decided to grab a new theme. This one’s pretty cool, although the whole lens thing is a bit weird…

And I’m now appearing on Planet linux.conf.au 2006, although because I use the excerpt in all my posts to produce a clarification (or declarification) in Chinese Kung Fu Novel Chapter Synopsis Style, my posts end up being quite short on the site, while long on my page.

This morning’s keynote by David Miller was interesting. He maintains the Linux networking stack, and is also the sole maintainer of the Sparc64 port. So he actually gave three presentations: an overview of the recent changes in the Linux networking stack, a presentation about the Linux port to the new Sun Niagara CPU line, and a brief talk about how to actually deal with kernel maintainers. Lack of wireless there meant I didn’t get my laptop out, so I don’t have much more to say about it.

Well, I’ll talk about the new Sun chip, known as Niagara, UltraSPARC T1 or CoolThreads depending on whose marketing department you ask. It’s an 8-core CPU; each core runs four threads in a round-robin fashion, scheduling the ones that are runnable and skipping those waiting on main memory, the FPU or anything else. This means that any task which can actually _use_ 32 threads for integer-only code will be able to run fast. Kernel compiles are a prime example (I’m looking forward to the kernels-per-second numbers for comparison to the 128-CPU PowerPC G5 box talked about at LCA05). This would also be very nice for video encoding, I suspect. Mind you, the small Sun Fire T1000 Server (shipping March 2006) lists at US$3495, so I doubt I’ll have an array of these to play with anytime soon… Imagine a Beowulf cluster of these things. ^_^

Morning tea interlude: Posters have gone up. There’s Thousand Parsec, WorldForge and FAI. I’ve looked at WorldForge and Thousand Parsec before, at LCA05, but if I have time tonight (Ha!) I might see where they’re up to these days. FAI on the other hand I’ve only been vaguely aware of, since I never seem to deploy more than one box at a time… But now it’s in my blog, so I’ll be able to find the link when I do want it.

Congestion Advancements with Ian McDonald. A technically-oriented delve into the new congestion control algorithm module structure for TCP, as touched upon by Dave Miller.

Ian presented the recent work to generalise and modularise the congestion control algorithms for TCP in the Linux kernel, which had originally been kind of ad-hoc and wide-ranging in what they touched. The interface they use is fairly simple (if you know TCP backwards, that is ^_^) and the algorithms turn out to be per-socket switchable. This will allow much easier use of different algorithms, which are optimised for various combinations of high and low bandwidth, high and low latency, and timeout vs loss vs congestion vs drop situations.
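The per-socket switch is exposed to userspace as the Linux-specific TCP_CONGESTION socket option; a quick sketch of using it (assuming a reasonably recent 2.6 kernel, and that the named algorithm is built in — Reno always is):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Switch just this socket to plain Reno; every other socket on the box
# keeps the system-wide default algorithm.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"reno")

# Read it back; the kernel returns a NUL-padded algorithm name.
raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
current = raw.split(b"\x00", 1)[0].decode()
s.close()
```

No privileges needed as long as the algorithm is on the kernel’s allowed list, which is what makes the per-socket design so handy for experimenting.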

He then presented his current research project, which is a TCP-like protocol (I think… Or was it a congestion-control algorithm?) called TCP-Nice, which is designed to back off from congestion so that the rest of the network functions as if it wasn’t there, while it uses all the left-over bandwidth… I like this, I’d love to see BitTorrent ported to use it. Then I could give free TCP-Nice traffic, and lower my TCP quotas significantly. ^_^ A vast improvement over my previous Second-Class Traffic plan.

He then presented a further, already-live use of the modularised congestion control code in Linux: DCCP. This is a session-based, congestion-controlled (like TCP), unreliable (like UDP) protocol, mainly intended for multimedia traffic, where you want as much data as possible to get through and to back off (somewhat) under congestion, while skipping retransmits and re-ordering, since retransmitting live data is a pain.

It’s in the final call for the RFC, and he’s already gotten it working. It’s in the 2.6.14 Linux kernel, with a NAT fix to come in 2.6.16. However, they still haven’t got the perfect congestion control algorithm for multimedia streams… The TCP-like CCID2 isn’t very good, the smoothed and slower-falling version CCID3/TFRC didn’t help much, and the latest attempt, MFRC, is currently too aggressive, and needs tuning to avoid killing other traffic under congestion conditions. But it’s getting there, and shows a lot of promise.

Netem: A last-five-minutes gem… It introduces loss, delay, reordering and duplication of packets on an intermediate box. It can currently only work on output queues.

Finally for the morning, Rusty Russell presented Talloc. Talloc was touched upon by Tridge in his “non-junk” code tour at LCA05, but he didn’t spend too much time on it, looking mainly instead at tdb and ldb…

Basically, talloc is a hierarchical pool allocator, which gives destructors, pools and hierarchy to your memory allocation calls. This means that managing your memory usage in C becomes sensible. It’s mainly been driven by Samba, which in fact produces huge whacks of memory allocation… Rusty showed a graph of it; I’ve no idea where to find it. (There was also a URL to the program that makes such graphs; I missed that too. >_<) Anyway, it’s pretty impressive.
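The hierarchy-plus-destructors idea is simple enough to model in a few lines. This is a toy Python sketch of the concept only, not the real talloc C API (the class and method names are mine): allocations form a tree, and freeing a node frees its whole subtree, running destructors on the way.

```python
class Context:
    """Toy model of talloc's idea: every allocation hangs off a parent
    context, and freeing a context frees its whole subtree, running any
    destructors the children registered."""

    def __init__(self, parent=None, name="root", destructor=None):
        self.name = name
        self.destructor = destructor
        self.children = []
        # One shared log across the tree, purely to make the order visible.
        self.log = [] if parent is None else parent.log
        if parent is not None:
            parent.children.append(self)

    def free(self):
        for child in self.children:
            child.free()          # children die before their parent
        if self.destructor is not None:
            self.destructor(self)
        self.log.append(self.name)

conn = Context(name="connection")
req = Context(conn, "request")
Context(req, "buffer", destructor=lambda c: c.log.append("destructor:" + c.name))

conn.free()
# conn.log is now: destructor:buffer, buffer, request, connection
```

One free() at the top of the tree and every temporary hanging off it is gone, destructors and all — which is exactly why the lifetime problems below stop being problems.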

Andrew Bartlett pointed out last week that he’s using talloc in Samba 4 to trivially wrap the krb5-allocated blobs coming out of the kerberos libraries. This basically gets him free destructors, solving the nasty lifetime problems kerberos’s allocation and free activities otherwise bring.

nfsim uses talloc to simulate kmalloc, providing simple and easy kernel memory leak detection in the netfilter modules being tested. Also has a very neat graphical live talloc allocation tree display. I think that is really neat!

Now to find myself a project to use talloc on… That’s also what I said last year about tdb, as it happens. I actually have one for the latter… I want to unbone FreeRADIUS’s IP Pool module, specifically so I don’t have to kill FreeRADIUS to make changes to the pools. I just didn’t get it done in the last 9 months. Gah.

In the more general programming talk at the beginning of the talloc presentation, Rusty suggested that interfaces should be hard to misuse first, easy to use second. He also suggested the following list of tools as being of great importance:

  • distcc
  • ccache
  • ccontrol – This one’s new to me. In fact, I’m still not clear what it does…
  • Mercurial – Source control tool. I’ve not tried it, but Alan DeKok from FreeRADIUS uses it for his own development, and then breaks up the patches for shoving into CVS for the rest of us…

Oscar Wilde VS the Robots of Dawn, in stunning 3D!

The fact is, that civilization requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends. – Oscar Wilde

Now there’s a man who saw the computer revolution coming…

Yesterday, a customer noted that he couldn’t follow what I was doing to his computer, and that they’re really not that user-friendly, despite all the advances in technology. This is of course correct, because advances in technology aren’t making things user-friendlier, they’re just getting faster at being unintuitive.

As happens every time this comes up, I spent some daydreaming time trying to work out what _is_ the best user interface. There’s not a single true answer to this, so I’m limiting myself to what would work best for _me_.

The first and most defining aspect of my workflow is that I tend to switch between focussed single-tasking and frantic multi-tasking. So I need a system which lets me run multiple single-tasking areas and switch between them quite quickly. I’m of course visualising a projected 3D space via goggles, where I look around to change workspaces. The longer I’m focussed in a single space, the further other spaces move away from my direct field of view, while when I flick quickly from space to space, the ones I’m moving between hang around the center of my vision to be switched to.

This, I think, highlights a very important general principle: a workflow must be adaptive. You must be able to deal with each task in the way that is most appropriate, and the system must adjust to what you’re doing at the time, removing extraneous things where possible without hiding the things that should be at your fingertips.

When I work with papers on my desk, they’re in one of two states: an entire document or set of documents spread out so I can see all parts without moving things, or stacked up into a single pile of only the relevant pages, which I can flick back and forth as needed. I daresay these are relevant models for my workspaces. As I focus on a workspace, it expands across my field of vision, giving me more room to array objects in a manner most pleasing. My most extreme behaviour in this way is when I need to think big, and drag a whiteboard out. What I do on a whiteboard could easily be done on an A4 sheet of paper. The point is that I can fill my field of vision with it, and that focusses me marvellously. Also, the whole standing-up thing gets the blood pumping. ^_^

I guess I also expect a keyboard at my fingertips. I mean, I was prolly the only 8-year old in my primary school to ask to submit printed assignments rather than handwritten. I never did earn my pen-license, so it’s not all good news. ^_^

Rather than have my fingers leave the keyboard, I guess within a workspace I’d like where I’m looking to be the target window for windowing operations, but keyboard focus to remain until I press the keyboard-to-current-window key. There’d also be a bring-window-to-top key and a push-window-to-bottom key. Those three windowing operations are sufficient on the keyboard, and support the various modes I work in, where I’m either just typing into a box, typing into a box based on something in a different box (which I’d obviously be looking at) or in fact flicking my eyes all over the place as I type stuff into a box I might not even be able to see.

Thank goodness for touch-typing lessons from a typewriter-era guide. I suspect modern keyboardists (ie. anyone younger than me) who learnt to type from computer-based tutor programs have probably missed out on the practice of typing without visual feedback. Although it’s most commonly needed when transcribing a document from the document holder that was once a prominent attachment to any respectable secretary’s monitor, it’s also useful (with the right window focus model) when you only need to type with 95% accuracy but need the maximum possible viewable area for whatever is causing you to type.

Anyway, I’d also expect to be able to manipulate documents in the workspace spatially with my finger or fingers. Moving, flipping pages, expanding and contracting, etc. Generally, I’d lay out my workspace, and then return to the keyboard. So full functionality for zooming, pageupping and pagedowning is probably also important for windowing operations. So that makes six.

By zooming I mean full-screening (or full-field-of-viewing). Visualising to the exclusion of all else. See the bit above about whiteboards. ^_^

Although I’m talking about a 3D sphere or something here, I don’t know yet how depth/distance would fit in. Although I stack things now, that’s mainly due to a lack of space (or a tendency for things to fall off my desk). I guess I would have to try it, or at least think about it some more.

So how is this relevant to now? Well, frankly, a lot of what I’m looking for is available via wmii. I’m on the wmii-2 release, and I use it in maximised mode (which is not the author’s recommendation by a long shot), mainly because I’m too lazy to poke around harder. The default mode (tiled) is actually quite clever, but I deal with too many apps that don’t handle resizing well (xterm, specifically: it doesn’t unwrap text when the text area grows, but does wrap it when the area shrinks. Or at least that’s the default; maybe I can change that). I can switch windows and spaces with the keyboard quite quickly, to the point where I don’t mind that I can’t see a list of open windows; I just go switch-switch-switch until I get the one I want.

Many people would find my workflow suboptimal. Many people would be able to point out ways _I_ could work more efficiently. But I like my workflow, and it does work for me. Until I can put on a pair of augmented glasses and be projected into a virtual world of my own imagination, there’s always going to be something that could be done better.

As always, I’m open to suggestions but reserve the right to initially reject them obstinately without good reason. I will come ’round eventually. Heck, I even eventually conceded that there are times when handwriting is better than a laptop. It did take over 10 years, but I couldn’t even begin to imagine trying to take lecture notes on a laptop. ^_^

As far as user-friendliness goes, I’m fond of telling people that computers won’t be user-friendly until they are as easy as toasters. On more careful consideration, that’s not true. Toasters are not user-friendly. If you do it wrong, your toast burns. Look away at the wrong moment and, more catastrophically, the burning toast sets off the fire sprinkler. Combine burning toast with another accident involving a kitchen paper roll falling over, and your kitchen’s on fire. That’s not user-friendly. It’s downright user-dangerous. But the point is that it’s a simple device that does one thing (apply heat to bread), with only a few knobs to adjust (how long to do so, and a button if you want it sooner).

Can computers be like toasters? Sure. They used to be just as dangerous, but with hundreds of vacuum tubes to twiddle and adjust. OK, can they be simple like toasters: single-tasking and low-button-count? Sure. Of course, you have to balance that against computers being very versatile. They have to be able to do multiple single tasks (that sounds familiar… ^_^), or there’s a whole lot of wasted silicon going on. Sure, you can get some efficiencies by moving away from the desktop PC model to having a site-based computing cluster, providing CPU and memory resources to whichever thin terminal (which may be running on a desktop PC, but which may not have all of that PC’s resources dedicated to it at that moment) is being used. Network-transparent filesystems, operating systems and the like let us do this sort of thing without a lot of bother.

Then it comes down to the applications. It’s widely held folklore that the overwhelming majority of computer users only use a tiny minority of the functionality in a program. And not only that, the ones I’ve watched generally don’t multitask. When Quickbooks is open, the computer’s a bookkeeping thing. When it’s not a bookkeeping thing, Quickbooks is closed or iconised. You don’t end up with half a screen of Quickbooks, and half a screen of MS Word with letters to customers describing new products. Well, you might. I don’t, and the users I’ve watched don’t either. And if you _do_ do it, I bet you do it because you think you’re being more efficient. I doubt you actually are, but only a stopwatch and a professor of efficiencology would know for sure. And two professors of efficiencology would mean we’d never know for sure. ^_^

So try it for a day. Run everything full-screened. If you’re a Microsoft Windows user, hold down Alt and hit tab repeatedly to change between programs. You’ll save a _lot_ of time running for the mouse. On the other hand, it’s not optimal. Because you do often work in multiple documents. As I mentioned above, I often type into a different place than I’m reading. Everyone does. Well, authors of fiction maybe not so much, but try writing a budget report looking only at the program you’re typing it into, rather than looking at the actual budget you’re reporting on.

So we come back to workspaces. X11 window managers generally implement workspaces. Apple finally got on board with Exposé in Mac OS X. nVidia’s Windows drivers provide the same thing on Microsoft Windows. I imagine ATI’s drivers do too.

But having them isn’t enough; you have to be able to use them efficiently. Remember, they must be sensible for operation that stays long periods in a single window, so hitting the edge of the screen with your mouse is too sensitive a trigger for changing workspaces. Much time has been spent demonstrating that people’s mouse cursors hit the edges of the screen more often than just coming close to the edge. On the other hand, unless you’ve an excellent recall, or like to go hunting through spaces, or just _always_ lay things out the same way (I’m one of these ones), you need some kind of visual representation of what’s in what page, so you can go to it efficiently.

One of the neatest pagers I’ve read about is the Live Updating Workspace Switcher from Luminocity (video of it in action here), which takes advantage of modern graphics card features to provide a little, real-time view of your entire work area, so you know where you are, and where the task you’re about to switch to is. It also means you could leave a task processing something, and be able to see visually when it’s done. Seriously, this is _good_. Granted, in my job I do only a few tasks (compiling, downloading) where I’d like to see them complete without having them clutter up my current display, but surely there are other things people’d like this sort of feature for?

Of course, in my 3D worksphere, it’s unnecessary. I can just look over and see what my apps are doing, without having to reach for the mouse, stop typing and start meta-keying, or whatever else. It’d become like checking your blind spot when driving. Sure, your eyes are off the road for a moment, but you don’t automatically slam on the brakes when you do it.

Yes, some people slam on the brakes when they headcheck. Seriously, I barely got my driver’s license, how can I not be the worst driver on the road?

Maybe driving a car is a better metaphor for computing than making toast? It’s not great, still, so I’ll keep looking for the perfect metaphor for how computers _should_ be able to be used.

For the mentioned customer, and anyone else who feels that computers are running our lives, you’re probably right. _I_ expect that, that’s what I do, it’s both my job and my passion.

To put it in perspective, I wouldn’t feel it was right to have to help bump-start an aeroplane any time I flew anywhere, but many people accept similarly onerous demands from their computers. Somewhere along the line, instead of enslaving our machines, we became slaves of the computers.

Would you believe all this came from me visiting a quotes website and, seeing the search-by-author box, deciding Oscar Wilde was a good candidate… I nearly put Groucho Marx in instead, in which case this post would have been rather different, I expect. ^_^

You might notice the complete lack of speech discussion here, which might seem weird coming from a linguist. But a computer you can instruct in natural human language is (a) a long way off, further off than my projected worksphere, I think. It’s a hard, hard challenge, and it has nothing to do with user interfaces or user friendliness, or even workflows; instead it obsoletes an enormous part of all that. And it’s (b) a robot, of the Asimov mould. (The books, not the dang movie.) Seriously, once you have a computer you can converse with and instruct in a meaningful way, articulating some joints onto it and hooking up some blinking lights for eyes is relatively trivial. Although that might just be because the only roboticists I know happen to be grade-A geniuses. ^_^

(Edit: “Time flies like an arrow. Fruit flies like a banana.” turns out to be attributed to Groucho Marx. Wow. ^_^)

A little planet is a dangerous thing

I had a quick wander through Planet Debian and it took me on to such interesting things as progress shots of a graphical Debian-Installer (not actually from Planet Debian, but I can’t work out where I saw that now), some very funny Sinfest mods (if you’re a Debian person…), an absolute dream-sounding job (yes, those two are the same blog. She’s got some good stuff there, including a captcha that apparently expects you to type ϖ…), a commentary against the patch-management systems that have started to become quite common in Debian, and to which I converted FreeRADIUS as my first post-Sarge task, personally implanted RFID chips, and musical breast implants.

The weirdest thing about that last one is the idea that fifteen years from now, we’ll still be playing mp3s. Hell, an observable percentage of people I know are either .ogg or .flac already. I myself stopped downloading mp3s because I’ve had two hard disks fail from what I suspect was the weight of my mp3 collections, and my laptop only has the most essential 100 MB or so of mp3s (Cowboy Bebop, Andrew Denton’s Musical Challenge and a couple of random bits like the Blues Brothers’ Everybody Needs Somebody and Abbott and Costello’s Who’s On First. And ガガガSP’s 卒業 single, but I don’t listen to that very often). In fact, I don’t listen to any of these mp3s much anymore: my desktop machine’s no longer in front of a west-facing window, and I’m not towing my laptop to work in the upstairs basement at TransACT anymore.

I’ve also ripped my new Hitchhiker’s Guide To The Galaxy soundtrack to flac, because either mplayer or copy protection (it doesn’t say CD Audio on the cover! Aha! Treachery uncloaked!) means it skips every second in my DVD drive. I don’t have a CD-audio cable, so analog playback isn’t an option, but cdparanoia was happy to extract it perfectly to the hard disk. Analog mode works fine in my laptop, but I avoid doing that because I’m sure the laptop’s DVD drive is dodgy and just waiting to eat something important.
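For the record, the rip is only a couple of commands. This is a sketch rather than a recipe: the device path and track filenames are illustrative, and it obviously needs the disc in the drive.

```
# Digital extraction with cdparanoia: it reads the audio over the drive's
# data interface, with error correction, so no CD-audio cable is required.
cdparanoia -d /dev/cdrom -B          # batch mode: rip every track to trackNN.cdda.wav
for f in track*.cdda.wav; do
    flac --best "$f"                 # lossless compression to trackNN.cdda.flac
done
```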

Oh, and I scored a new TV. Well, technically my dad gave me an old TV of his, but it’s an improvement over my old one: it’s larger, it has an OSD, it has a remote, and it has AV inputs. So I plugged my GameCube in and played Resident Evil Zero for a couple of hours. I only seem to get Resident Evil Zero out when I change TVs (it was still in the GameCube from when I was getting my TV tuner working in Linux). The new TV doesn’t do PAL60, though, so I can’t have another burl at The Ocarina Of Time, although I could try to finish Metroid Prime at long last.

On a more personal note, it’s looking more and more like the work at TransACT’s dried up, and I’m starting to think I should start seriously exploring my Melbourne options. I’ve got the JET information evening on Wednesday night, so I’ll have an idea of how many people I’m going up against.

I prolly should talk more about the Melbourne plan here. As it happens, I dropped out of everything else to focus on BU and TransACT, and now that work looks like it’s going to dry up. I can do my BU work as easily from interstate as I do now (technically, I do the work from my flat in Queanbeyan, so I’m already interstate), and frankly I’d like to try living somewhere with trains and other such public transport, and try getting a job I actually like (TransACT’s nice, but I need a change). So I figure either Melbourne or Sydney fits so far. I’ve friends in both cities, as well as family in Melbourne, so it’ll come down to the job opportunities. Melbourne’s main advantages are Cybersource, whom a friend of mine mentioned are likely to be looking for people, as well as a project a friend of mine is looking into which I’d love to get involved in. When I thought I’d have TransACT work until the end of the year, I was thinking I’d go to Melbourne in February (after linux.conf.au 2006) and find a five-month job until JET blasts off in July. Now I’m thinking maybe I should be looking to go in December or January… The problem with this plan is that I’ve got a possible opportunity coming up in Canberra in online shops, and I’d have to break the lease on my current flat. And I don’t have any savings to afford to be in Melbourne without a job. And it’s already mid-October. So I’d better get on with it.

On the “actually getting things done” front, I finally submitted a FreeRADIUS 1.0.5-2, which should clear the logjam that 1.0.5-1 became when libltdl3-dev started conflicting with libtool1.4 without warning. I’m disappointed in this back-door method of forcing libtool1.4 out, where either a Replaces in libltdl3-dev or a diversion in libtool1.4 would have allowed the libltdl3-dev/libtool transfer of ltdl.m4 without boning me unnecessarily. As it is, the solution became to drag in the relevant parts of the libtool1.4 package to update the in-tree versions of the files. This is bad, but I can’t NMU libtool1.4, and the patch I was given to upgrade FreeRADIUS to libtool 1.5 was unnecessarily intrusive to my mind, and I couldn’t distill the libtool parts from the ‘change how we build the package’ parts.
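For what it’s worth, the Replaces route I mean would have been a one-line addition to libltdl3-dev’s stanza in debian/control, telling dpkg it’s allowed to take over ltdl.m4 from installed libtool1.4 packages instead of erroring out on the file conflict. Something like this (the version constraint is hypothetical, not the actual archive version):

```
Package: libltdl3-dev
Replaces: libtool1.4 (<< 1.5)
```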

I’ve also been actively hunting bugs in packages I’m using, leading to patches to libpam-mount (so I can mount my home directory from Keitarou on Mutsumi from XDM, and be safe from segfaults due to configuration), lftp (so it doesn’t abort when a download finishes ^_^ Upstream didn’t use my patch, but it _was_ a minimal, if not optimal, solution which neatly explicated the problem, I think) and xmame (so I can use xmame with games that use CHD files). In the process, I also submitted bugs to pam and liblircclient0 for simple non-crashers that valgrind picked up. I’m so glad I started using valgrind; it’s the absolute bee’s knees for finding any kind of memory-misuse bug which might otherwise lead to a segfault much later. I also used it on libnifi, which majorly improved my memory management and stopped a whole bunch of segfaults. ^_^ I also took the opportunity tonight to point out to the php4 team that libcurl3-dev had disappeared during their autobuild, much as libltdl3-dev broke FreeRADIUS during its autobuild. It happened a week ago, so I expect they knew about it, but I was surprised to see absolutely no bug about it.