Thursday, April 09, 2009

AIR command line arguments

I've been working on the next release of the Dojo Toolbox - which is an Adobe AIR app, using the Dojo Toolkit. I'm taking a TDD kind of approach to get on a better footing for evolving this thing, and needed a quick way to run a particular set of unit tests. I wanted to be able to do something like this:
$ adl runTests.xml testModule=toolbox.tests.SomeThing
You get command line arguments in AIR via the invoke event; the handler looks something like this:
air.NativeApplication.nativeApplication.addEventListener(
  air.InvokeEvent.INVOKE, function(evt){ 
    window.scriptArgs = getScriptArgs(evt);
  }
)
...But when I tried it, I just got a console error:
initial content not found
It turns out that command line arguments to an AIR application are expected to be filenames - like if you dropped a file onto the app's icon. To pass through parameters and switches you first need --, like so:
$ adl runTests.xml -- testModule=toolbox.tests.all
The event your handler is passed has an arguments array property, and from there it's straightforward to process what you've got and do the right thing. I'll get more into how I'm doing the unit tests in another post, but as this took a little digging I thought I'd share.
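For completeness, the getScriptArgs function I call above could look something like this. This is only a sketch - the name and the files/options split are my own convention, not part of the AIR API - and it takes the evt.arguments array directly rather than the event, purely for illustration:

```javascript
// Split an array of command line arguments (as found on
// evt.arguments in the AIR InvokeEvent) into an options object.
// "name=value" entries become properties; anything else is
// treated as a filename and collected in a files array.
function getScriptArgs(args) {
  var scriptArgs = { files: [] };
  for (var i = 0; i < args.length; i++) {
    var arg = args[i];
    var eq = arg.indexOf("=");
    if (eq > -1) {
      scriptArgs[arg.substring(0, eq)] = arg.substring(eq + 1);
    } else {
      scriptArgs.files.push(arg);
    }
  }
  return scriptArgs;
}
```

So with `adl runTests.xml -- testModule=toolbox.tests.all`, you'd end up with scriptArgs.testModule set to "toolbox.tests.all".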


Friday, February 27, 2009

A rhino prompt

I'm tinkering with another ill-conceived Friday night project that may or may not see the light of day. But in the meantime, I just put together this little snippet that illustrates a lot of what there is to like about Rhino:
var getInput = function() {
  var br = new java.io.BufferedReader(
    new java.io.InputStreamReader(java.lang.System["in"])
  );
  return br.readLine();
};

var greetUser = function() {
  // use out.print for the prompt
  // instead of rhino's print - which is really println
  java.lang.System.out.print("Your name? ");
  var name = getInput();
  print("Hello " + name);
};

greetUser();
quit();
Arguably (and laughably if that's your attitude) there's exactly nothing special here - a whole lot of lines of code to do what in a browser (not to mention any other self-respecting scripting language) is done with one:
alert("Hello: " + prompt("Your name? ", ""))
But the point is that Rhino doesn't provide a prompt function, and it doesn't matter because you can easily make one. I'll take flexibility and potential like this over cute predefined functions every time.


Thursday, January 08, 2009

Setting up Rhino

I've been using the Rhino engine more and more to run command-line scripts, fiddle and try things out. But my setup has taken shape slowly, and to be honest it wasn't much fun when I first got started.

I'm on a mac (Leopard), and here's how I've got it now:

  1. Download the Rhino .jar file; you'll find it inside the latest (binary) release.
  2. Drop it in your {user}/Library/java/Extensions folder (create it if it doesn't already exist). That way it's automatically added to your classpath whenever you run java, so there's no need for -classpath cmdline params to pull it in (and the classpath insanity that brings).
  3. I made a shell script to invoke rhino and aliased that, but actually you could simply alias the one-liner it contains:
    java jline.ConsoleRunner org.mozilla.javascript.tools.shell.Main "$@"
    To make an alias, in my ~/.profile I've got the following:
    alias rhino='~/utils/runjs.sh'
  4. Now, in your terminal, you just type 'rhino' and it puts you into an interactive shell where you can load files, write js statements and see the results instantly.

But Rhino's shell is, frankly, a crappy user experience. It's got no history, no cursor movement at all. You can just type and hit enter. If you screw something up you have to type it all over again. And the whole beauty of writing *javascript* like this is that you can load a .js library repeatedly as you work on it and try calling its functions. But it's a PITA out of the box.

Look again at that java cmd-line I'm using to run rhino and you'll see it's using jline. This enlightening post on Taming the Rhino finally brought happiness to my rhino shell by introducing me to jline.

You download jline and again, drop the .jar file into your Library/java/Extensions folder. Now the interactive shell is much more shell-like. You have a history on the up arrow, you can back up and move around the current line to edit, and do more scripting and less typing in general.

To run a particular .js file and exit, you do 'rhino yourfile.js'. Further cmd-line parameters populate the arguments object in your script, so rhino script.js filename1 filename2 myoption=true would populate arguments like this:

[
  "filename1",
  "filename2",
  "myoption=true"
]

FWIW I use a pattern like this to wrap my script body:

(function(scriptArgs) {
  function main() {
    // process scriptArgs,
    // e.g. split on = to get name/value pairs
    // and populate an options object
  }
  main();
})(Array.prototype.slice.apply(arguments));
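Filled out, the option-splitting that main() alludes to only takes a few lines. Here's one way to do it as a standalone function (parseOptions is my own hypothetical helper, and treating bare arguments as boolean flags is just one choice):

```javascript
// Turn ["myoption=true", "verbose"] style arguments into an
// options object; arguments without an "=" become flags set to true.
function parseOptions(scriptArgs) {
  var options = {};
  for (var i = 0; i < scriptArgs.length; i++) {
    var pair = scriptArgs[i].split("=");
    options[pair[0]] = (pair.length > 1) ? pair[1] : true;
  }
  return options;
}
```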

Back in the shell, you can load, try, load again, try again:

jeltz:trunk sfoster$ rhino
Rhino 1.7 release 1 2008 03 06
js> load("lib/docblock.js");
js: "lib/docblock.js", line 7: uncaught JavaScript runtime exception: ReferenceError: "lang" is not defined. at lib/docblock.js:7 at lib/docblock.js:5 at <stdin>:2
js> load("lib/langUtils.js");
js: "<stdin>", line 3: Couldn't open file "lib/langUtils.js".
js> load("lib/langUtil.js");
js> load("lib/docblock.js");
js> var p = docblock.getParser();
js> p.parse("/** @author sfoster */");
TAG:author:sfoster
js> quit();
jeltz:trunk sfoster$

Try it, I think you'll like it.


Monday, December 08, 2008

Barcamp Liverpool

I took in the first day of BarCamp Liverpool. It was Liverpool's first, and my second, and went off well I thought. I learnt some things, saw some new and familiar stuff and felt it was time extremely well spent. It was great to see people coming out of the woodwork and talking about what they are doing. There was much talk of iPhone apps, and the economic opportunities (or not) they present for the developer, lots of startup hobnobbing and general feel-good about being in the industry.

Something was missing though, and I've been trying to put my finger on it. In the evening there was a partly-for-fun "pitching" event in which people presented startup ideas to a panel. I think it was significant that the idea that got everyone most excited was a hardware project - to monitor power consumption of a device at the socket, and chart and aggregate the data to build consumer awareness of energy usage and perhaps drive usage and purchase decisions. The Web 2.0 narcissism is wearing thin I think - the web is maturing as a platform for real work and useful stuff to take place, but it needs more of this grounding in the practical, tangible and meaningful. I hope future barcamps and other events are able to draw more from the "fringes" of the web community, where it's less about web technology and culture as a topic in itself, and more about the internet as a component in projects that touch people's lives in tangible and practical ways.

My talk (in direct contradiction to all that) was on maturing client-side development techniques and practices, to introduce more rigour to the discipline. It was a response to the need for repeatable, reliable client-side output that is highlighted by ever more complex demands in web-based UIs.
The front-end is a part of the product development process that needs just as much attention as server-side development - and as Steve Souders has been pointing out, in a lot of cases, when you actually break down the experience from the user's point of view, it warrants more. I introduced a few techniques and tools for testing and profiling client-side tech, but the topic was too big to fit in the 45 minute slot and probably left more questions than answers. I need to either break it out into complementary pieces or take a different approach with a high-level overview. My slides are posted to geekup though, and if you were there or have thoughts, please comment.

Thursday, November 06, 2008

String repetition in javascript

This is about a little snippet I came up with the other month, while a colleague and I were talking about string building and its performance in javascript. I was looking for a neat way to front-pad or indent a string, and missing the x operator in perl. It turns out there is a succinct, one-line idiom:
var indent = new Array(10).join(" ")
That gets you a 9-spaces-long string. Using the formal Array constructor (rather than the array literal [ ]) you can specify the initial length of the array, and we leverage that when joining our otherwise empty array. The string we provide to join with ends up being repeated n-1 times. Cute huh. It also turns out to be pretty fast. Not quite as fast as the equivalent loop using += to build the string, but much faster in many cases than pushing each character into an array and joining to produce the string. Of course, if you already know something about the string you are looking to create here, it's definitely fastest to declare a long string like
var spacePadding = "          ";
and then var indent = spacePadding.substring(0, indentLevel), and cross your fingers that spacePadding will always be long enough. That's going to be a pain, though, when you need to output something like
»»»»level 1
»»»»»»level 2
Or, worse (or better), if you need to repeat something like <span class="indent">. It's trivial with our join trick:
var indent = new Array(indentLevel+1).join('<span class="indent">');
You can see the test file I used to time the various options. new Array(n).join performs worst when n is a small number - like 20 - and it's a short or single-character string you are repeating. Frankly, for the use case I've outlined - front-padding something - that's also the likely range. As you scale up, and start repeating 100s or 1000s of times, and also increase the length of the string you are repeating, that margin disappears.
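Wrapped up as a reusable function (the name repeat is my own), the whole trick is:

```javascript
// Repeat str n times: an array of length n+1 has n "gaps"
// between its elements, so joining on str repeats it exactly n times.
function repeat(str, n) {
  return new Array(n + 1).join(str);
}
```

So repeat(" ", 9) gets you the 9-space indent from above, and repeat('<span class="indent">', indentLevel) covers the markup case.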

Saturday, August 16, 2008

BarCampLeeds 08

What a day. Who knew that Leeds was this nascent tech hub? I had a hunch of course (and followed it by moving back here), but there's energy and talent oozing out the edges here, and it's surely only a matter of time before it reaches critical mass and becomes a real tech ecosystem.

What'd I learn? I started the day with an intro to geocaching. Which I'd of course heard of, but somehow had lumped in my mind with orienteering and other things which I'm not likely to ever do regularly, if at all. But I'm sold - what fun, and the kids will love it - a modern day treasure hunt.

While I held onto a vain hope that I might get my demo done in time for my talk in the afternoon, I stuck around in the main hall half-listening, half working. There were the beginnings of an interesting discussion around usability and (graphic) design, but the crowd didn't really bite. I was also treated to a crash course in viral something... marketing? self-promotion? Mostly how to have a laugh on the web, it seemed, and share it with as many people as possible. That I didn't quite see the point I'm sure says more about me than the speaker - who did a great job.

I missed what looked like an interesting session on the success of wikis, but caught a great session discussing SaaS (Software as a Service). That looks like a big slice of pie for the taking, if you can get it right.

The presenter of another session introduced me to the lazy web, and a clearing-house kind of offering (vagueware.com). I was trying to remember the other site I used a while back - it was a more general (i.e. non-tech) ideas/inventions site, but it's gone.

My talk was on building desktop apps with Dojo and AIR. There wasn't really enough time to dig in, so I was left skimming over materials and gesturing vaguely at examples I pulled up. "Dojo" on AIR was a little arbitrary - Dojo /is/ a good fit, and does give you a real leg up when developing an HTML/JS based AIR app, but not to the exclusion of all other libraries. It's good for the same reasons Dojo is good on the web, and that there are a couple of specific affordances for AIR (e.g. dojox.storage.AirFileStorageProvider) is nice, but not in and of itself a reason to switch. The compelling case for AIR is that you don't have to re-learn anything to start developing desktop apps, and from that point of view, if you're already comfortable with a javascript library you should probably stick with it. Anyhow, I hope folks took something away from it.

Finally I caught two great sessions with a more business slant. The first was tips on dealing with large organizations as a small vendor/contractor, and negotiating the bureaucratic hazards organizations of that size seem to favor. It is so easy to get burnt as these cultures clash, and it was great food for thought. And right on its heels was a discussion on preparing proposals and RFP responses. The stand-out point for me: ask if there's a weighting and scoring sheet for the RFP - people will often share it freely, and that will tell you what you need to know about where to spend your time in the proposal.

So I'm left with a warm fuzzy feeling about Leeds. People had travelled in - several from Manchester and one or two from London - and although actual attendance didn't quite match registration numbers, it was a respectable crowd.

I'm back for more tomorrow...


Tuesday, August 12, 2008

doh.robot

New automated UI testing goodies just landed in dojo, and I'm moved to blog about them. Getting test coverage of the messy stuff - user interactions, mouse-movement, clicks, drags - has always been an Achilles heel of testing web UIs. Any kind of automated testing is better than none (provided you are testing the right things and keeping a good testing/developing balance), but for UI testing so much can go wrong when your page/app loads up in a real browser and has a real user start poking at it that you're left with a giant hole to fill with tedious, manual testing.

doh.robot (and its dojo and dijit descendants) offers the ability to write tests that include real keyboard and mouse events, that can mimic actual use of interactive widgets. Let's not kid ourselves - manual testing isn't going away anytime soon, and nor should it - but having a suite of tests that can catch regressions, that can run through some basic load, click, move, enter input, clear input kind of interactions across key components of your app, is a huge plus. It allows you some confidence to make otherwise punishing changes - changes that would normally require real humans to step through a test script in all your target browsers to confirm nothing got broken.

Some people may love the precarious feeling of developing without unit tests, but personally it gives me the willies. It can be paralyzing as the codebase grows and each addition and change multiplies the potential for error. When you do stop and test, how much will you have to tear back when an error pops up? You don't see construction workers free-climbing as a building goes up. You build a stable scaffolding, and anchor it against each successive floor that you add. So with testing. Unit tests let you look forwards at what remains to be done, rather than worrying constantly about what you've already built.
And the accurate, non-synthetic automated UI interaction testing that doh.robot offers can only be a good thing for the growth of the web as an application platform.


Friday, March 07, 2008

dijit.byNode and firebug fun

Here's a little tip if you're working with dojo widgets. In firebug you can select an element in the HTML view. Back in the firebug console, your selected node is available as $1. So, $1.tagName shows you the element name, etc. If you've got dojo on your page you can use anything dojo has provided in the console, and if you're using dijit, you have that stuff too. So, in the HTML view click on the element that represents your widget. It'll have a widgetId attribute. Now, in the console, try:
dijit.byNode($1)
This is a pattern I repeat so often that I actually printed and read the firebug manual to learn my way around in there with keyboard shortcuts. I recommend you do the same. Now you can quickly and intuitively explore the state of your widget:
dijit.byNode($1)._started
console.dir(dijit.byNode($1).getChildren())
dojo.getObject(dijit.byNode($1).declaredClass).prototype
And/or, you can go back and forth between console/HTML:
dojo.query("[region]", $1).filter(function(n) { return dijit.byNode(n).declaredClass.indexOf("ContentPane") > -1; });
.. gets you all the ContentPane domNodes that are descendants of your selected BorderContainer node. Click on one, and:
dijit.byNode($1).setContent("boo!"); 
It's your page, have fun!


Friday, February 22, 2008

Restoring SVN repositories from disk - a story

I recently had to move off a company laptop I'd been using for a while, and (thanks to the flu) didn't have much time to do it. So, I backed up those directories I knew had any personal projects and data in, and crossed my fingers that I'd be able to get what I needed out of there when the time came.

One of the directories I grabbed housed my subversion repositories (I'd been using the flat-file db option). When I started setting up again on a new laptop (and a new-to-me platform in OSX) I was faced with a little problem: how to let the new install of subversion know about the old data.

A little reading around in the SVN book soon told me that what you are /supposed/ to do is run a svnadmin dump command, then create the new repos and

    svnadmin load /path/to/reponame < /path/to/my/repo1.dump  

Nice to know - for next time. After some searching and asking around, I finally got a pointer (from the evolt list):

   svnserve --daemon --root "C:\Path\to\Subversion Repository"  
That wouldn't work as-is, but it pointed me in the right direction. Subversion of course has a notion of a root directory where it expects to find everything. I was running subversion via apache though. I'd gone through the basic setup to configure it, and it was busily serving up an entirely fresh and empty repo. To cut a long story short, I finally ended up with this:

LoadModule dav_svn_module /usr/libexec/apache2/mod_dav_svn.so  
<Location /svn>
     DAV svn
     SVNParentPath /my/laptop/backup/SVN
     AuthType Basic
     AuthName "Subversion repository"
     AuthUserFile /etc/apache2/svn-auth-file
     Require valid-user 
</Location>

SVNParentPath was the magical incantation I needed. After that I was left with only permissions to fix. That's apparently where most people stumble, and it's not my strong point. I can chmod 755 myscript.cgi like the rest of them, but don't ask me to explain what it (and all its possible variations) does exactly. I dug briefly into a nice tutorial on chmod, to grok the details once and for all, but found this walkthrough, and this one, which instead supplied all the information I needed.

On OSX/Leopard I needed to use the www user; otherwise that was basically it. sudo apachectl restart picks up the new apache config. http://localhost/svn/my-repo shows a directory listing of the repo, and:

svn co --username me http://localhost/svn/clients-repo \
/path/to/clients

...finally got me pay dirt - the full directory tree, with version history.

Tuesday, January 15, 2008

A Parable

A man was asked to do some renovation on a house. He worked steadily at it for several weeks and finally called his client to come take a look around.

He said, "I was able to keep a lot of the original flooring. I got a good match for the wood and finish where I had to patch and extend the floor. I'm nearly done. I just need to pick up my offcuts and sweep up."

"Oh good" said the client. "So I'll arrange to start moving my family back in tomorrow. Sound OK?"

"Sure".

The man picked up some of the debris, and noticed an ugly dark spot under a piece of paper. As he scraped at it, it revealed itself to be the end of a protruding steel rod. By scraping away around it, the man was able to get pliers on it and wiggle it. As he did, something clicked below the floor, and now several of the boards creaked as he walked on them. Creaking was one of the things he'd been called in to fix, so he pried up those boards to take a look. There he saw that the steel rod was actually a pin that had been rigged to hold together some structural beams. It had been driven down to sit flush with the floor, but was now rusted away and very fragile. As he stared in disbelief, the beams began to drift apart, the house sagged, and it collapsed in a pile of rubble.

Moral: A job is done or not done, never nearly done.


Monday, January 14, 2008

dijit.Declaration and its mixins

I love the dijit.Declaration widget introduced into the dojo toolkit around version 0.9+. It lets you declare a new widget class inline in your html - which can be very useful, especially when you want the widget templateString to be dynamic output from the server.

Just a little tip - I had been getting a m._findMixin is not a function error when instantiating widgets from my Declaration. If you've ever tried to step through widget instantiation you have an idea how sigh-worthy this particular error was. After a little debugging and poking through Declaration.js I noticed that the mixins property was typed as an array. When using dojo.declare directly, the parent class is the 2nd argument, and if you need mixins (like dijit._Templated) you make that 2nd argument an array and list them out in order. So there it can be either a string or an array, and dojo.declare will do the right thing. It turns out that with dijit.Declaration, the mixins property must be an array, even if you only have one value to put in there:
<div
 dojoType="dijit.Declaration"
 widgetClass="myns.widget.WidgetClassName"
 mixins="[dijit._Widget]"
 ...
>...</div>
Which put me back in action, and I hope steers you around this particular hole in the road.


Friday, November 30, 2007

Regexp to match only html filenames and directory names

I've been working on a script to create filtered directory tree listings. It can be configured with both include conditions and exclude conditions. If something passes the include filters, it then checks to see if it's explicitly excluded. For example, I want to exclude cgi-bin, but include all other directories and files. So it's useful to have a good catch-all pattern for including only the good stuff. I needed only html files this time, so after some head scratching I came up with: /(^.*\.htm(l?)$)|(^[^\.]+$)/ I don't like using .*, so to improve it maybe I could use a character range instead, or a negated range like [^\.]+, but this works beautifully.
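A quick sanity-check of that pattern against a few names (a sketch in javascript; the sample names and the shouldInclude wrapper are mine, not from the actual script):

```javascript
// Include .htm/.html files, plus anything with no dot at all
// (which catches directory-style names like cgi-bin - those then
// get knocked out by the explicit exclude filter).
var includePattern = /(^.*\.htm(l?)$)|(^[^\.]+$)/;

function shouldInclude(name) {
  return includePattern.test(name);
}
```

So shouldInclude("index.html"), shouldInclude("page.htm") and shouldInclude("cgi-bin") all pass, while shouldInclude("style.css") does not.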


Wednesday, November 28, 2007

Simple Clocks with the Dojo Toolkit

Something I was playing with - this page shows a couple of javascript clock/countdown treatments. None are as whizzy as the dojox.gfx (vector graphics) clock you might have seen around, or your various dashboard widgets - but this is just dojo core + 6k (uncompressed) of code.


Tuesday, November 27, 2007

string.replace with substitution by function

It may or may not be news to you that in Javascript you can do:
someString.replace(
  /\w+/g, 
  function(match) {
    return "blah"
  }
);
Which in this case turns "the original string" into "blah blah blah". Your function is passed the match, and you return whatever you want. That's pretty handy, as you can run the match through transformations, or even use it to look up or generate some entirely new value. You can also have side-effects from the match, which has interesting possibilities, not least of which is debugging: function(match) { console.log("I matched %s", match); return match; }. If you're like me, you might even think you could do something like this:
someString.replace(
  /<(\w+)[^>]*>/g, 
  function(tag) {
    var tagName = RegExp.$1;
    return tagName.toLowerCase(); // or whatever...
  }
);
.. but sadly you'd be wrong. At least as soon as you went to check in IE. It seems IE doesn't populate the properties from the match into the static RegExp object until after the whole replace is done. And your substitution function only gets passed the matched string, not the array of match + sub-matches you might be used to :( Still, there's plenty of mileage there. How about this:
  function format(str, obj) {
    return str.replace(/\$\{[^}]+\}/g, function(mstr) {
      var key = mstr.substring(2, mstr.length - 1);
      return (typeof obj[key] == "undefined") ? "" : obj[key];
    });
  }
.. which takes a string like "<h1>${firstname} ${lastname}</h1>", and an object like: { firstname: "Bob", lastname: "Smith" } to produce <h1>Bob Smith</h1>


Wednesday, October 24, 2007

TAE keynote and CSS layout

The Ajaxian folks are doing their little summary of the past year or so as it pertains to Ajax. One of the "trends" or opportunities they observe is doing layout using javascript. CSS got a panning. Now, I understand that CSS layout is complex - no argument there. But doing layout in javascript is hardly new, and it has proven to be a dead-end - at least with current browsers. In Dojo 0.9 the widgets and widget infrastructure are taking a step back from javascript layout - it was one of the real performance killers in 0.4.

Some sites like the International Herald Tribune have been using javascript to lay out and control the flow of the text with some success for a while. But as a strategy it's risky and expensive - in terms of browser performance. Some things you can't effectively do in CSS; for others, CSS is by far your best tool. What we need is more maturation of CSS solutions, and micro-format-like standardization, so you can drop in a new stylesheet to apply style and layout - without all the hair pulling. Oh, and the browser support to make it all feasible.

Meantime - there's a hybrid: you get as far as you feasibly/easily can with CSS, and use javascript as a sharp tool to tweak and shuffle elements on the screen.

Thursday, September 06, 2007

Sortable list with dojo 0.9

This is a quick proof of concept of a sortable, data-store backed list (using dojo 0.9). The store (a dojo.data.ItemFileReadStore instance) does the sorting. Each list-item has an id that is mapped to the store item identifier. So when the re-sorted list comes back I just look up the list-item node and appendChild it to move it in the list. The store has 52 items (it's a list of states) and seems snappy enough - faster certainly than re-creating the list-item elements and re-rendering the whole thing each time.

Saturday, July 21, 2007

Adobe AIR tour

I made a quick trip to Dallas to catch the Adobe AIR bus tour there. You know, it was pretty interesting. I was braced for a 4 hour long vendor sales demo, and it kind of was that, but with enough hands-on detail to keep my attention. Plus, it looks like a sweet product. A few misunderstandings that got cleared up for me:
  • AIR is specifically the runtime. Think of the .NET runtime - it's a cross-platform way of running desktop apps that are built to it. You can get AIR apps from the web, and those apps can use web content, but it is not really a web technology as such.
  • You can build AIR apps from Flash/Flex or HTML/JS - but the runtime itself doesn't require or tie directly to one more than the other.
  • The runtime is a free download, the "Product" that Adobe is selling is the authoring tools in this case. The SDK is free, and you can author and build AIR apps without the fancy IDE.
I've begun to play with it. The APIs that javascript can now reach are going to be fun. It's like WSH (Windows Scripting Host) and scripting Rhino... but more, er, integrated.


Wednesday, July 11, 2007

Moving your Firefox profile

Little tip: you can easily move your FF profile directory to another local drive/directory - e.g. a thumb drive, or a backed-up directory. In my case it was to a directory that doesn't have every read/write scanned by anti-virus software. (I can see how this might be prudent and all, but it was making for a very slow browsing experience on my pc.)

Thursday, February 22, 2007

RefreshAustin is 1yr Old

I'm prompted to post because, almost a year after we disbanded the Austin Web Standards Meetup and merged with RefreshAustin, I'm still getting email from meetup.com with people signing up for future web standards meetups in Austin. Folks, the group is alive and well; we meet at least monthly, but we now fly a "Refresh" banner instead of the web standards meetup one. The Refresh format is simple, and with a wider remit than the web standards one we've had some great presentations and discussions, all around the topics of the web, web design, 2.0, accessibility, and javascript/ajax. Keep an eye on the site and/or upcoming.org for details of events, and sign up to the refreshaustin google group.

Friday, January 12, 2007

Terminal funkiness with ActiveState perl

I just finally got on top of an annoyance I've had for a while: on my work machine I use MKS Toolkit - which provides a lot of the common unix tools for developers using windows. It provides its own Perl build, but I've been using ActiveState's for a while, and didn't want to complicate synchronizing my work across the different machines I use perl on. An unhappy side-effect of installing MKS Toolkit though is that the CPAN interactive shell suddenly started spitting gobbledygook to the terminal.

I looked around in the CPAN config, remembering there were some terminal questions in the setup, but nothing I changed had any effect. Not having had to mess with terminal settings / shell environment much before, I was stumped until I googled a little and found this:

(MKSSoftware KB Article: Why do console escape sequences appear when using ActiveState Perl in debug mode?)

Turns out MKS Toolkit changes the TERM environment variable to nutc, which ActiveState's perl doesn't support. They suggest using ansi or vt100, which worked for me: set TERM=ansi.

Thursday, October 19, 2006

Javascript shell with Rhino

The Rhino javascript interpreter (from Mozilla, sister to SpiderMonkey) has an interactive mode:

C:\dojo\buildscripts>java -jar lib/js.jar
Rhino 1.5 release 3 2002 01 27
js> print('boo');
boo
js>

The Rhino jar is a part of the dojo distribution, so if you've got dojo (from SVN, not the pre-built releases) you already have it.
The Scripting Java page from Mozilla has these and other details. After all these years, javascript is finally able to script java. I find that amusing even if no-one else does :)

(Yes, I know LiveConnect has been around for almost as many years; probably what I mean is that finally there's a practical environment that enables useful work to come from scripting java.) Also, as running my javascript scripts is now possible from the command line, I can invoke them easily from perl, my editor, SlickRun, etc. Which, as redundant as that sounds, is bound to be handy.

Here's another article on the Javascript CommandLine, and lots more

Friday, September 01, 2006

Soft-wrapping long words

The issue of long, non-breaking words like urls has been around for a while on the web - and the impact this can have on layouts and other places where width is constrained for whatever reason. I've been going back and forth on this, and dug up an old test page on No-wrapping and Soft-wrapping. This has some test cases using <nobr>, <wbr> and the soft-hyphen character &shy;. The results aren't too pretty:
Mozilla/Firefox:
Ignores &shy;,
supports <wbr>, but not when contained in <nobr>
Windows IE 6:
Wraps correctly with &shy;,
displays '-' only at a wrap-point.
Supports <wbr> solo, and when contained in <nobr>
Safari 2.0:
Wraps correctly with &shy;,
displays '-' only at a wrap-point.
Seems to only pay attention to <wbr> in the context of <nobr>, where there are spaces to wrap on.
Opera 9.0:
Wraps correctly with &shy;
Ignores <wbr> completely, supports <nobr>
(TODO: need to add tests to the page for CSS scenarios like white-space: nowrap.) (NOTE: yes, both nobr and wbr have been deprecated. Unfortunately, unless I missed it, there's no good replacement in CSS for wbr, or in xhtml for either.) Meanwhile, I took a stab at a javascript-y solution. This is an implementation that looks to insert a wrap-point in a character string to enforce a (given) maximum text column length. So, if you want to make sure that all words wrap at or before 24 characters, it will insert a soft-hyphen (or <wbr/> for mozilla/firefox). It tries (a little) to favor wrapping after punctuation, and to be somewhat smart about recognizing words (e.g. by recognizing escape entities and markup). Anyhow, here's a Softwrap test page. It could definitely suffer some optimization, and I suspect could be vastly simplified with some better regexps. But it's working ok for now.
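The core of the idea, stripped of the punctuation and entity smarts, can be sketched in a few lines (softWrap and its defaults are my naming for illustration, not the actual implementation on the test page):

```javascript
// Insert a break character into any unbroken run of non-space
// characters longer than maxCols. Defaults to the soft-hyphen
// (&shy;); pass "<wbr/>" for mozilla/firefox.
function softWrap(str, maxCols, breakChar) {
  breakChar = breakChar || "\u00ad";
  // every maxCols consecutive non-space chars that are followed
  // by another non-space char get a break appended
  var re = new RegExp("(\\S{" + maxCols + "})(?=\\S)", "g");
  return str.replace(re, "$1" + breakChar);
}
```

So softWrap("abcdefghij", 4, "-") gives "abcd-efgh-ij", while strings whose words already fit within maxCols come back untouched.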

Monday, August 28, 2006

Re: Has accessibility been taken too far?

(Jeff Croft posted this provocative article which seemed to tap a common feeling that accessibility is a pain in the ass, strictly optional and web designers should be cut some slack)

If you wade through the slop of the first round of comments to this post, there's actually some reasonable debate that follows. Jeff came out saying he wanted to provoke discussion and (eventually) seems to have done so. It would be nice to think he was playing devil's advocate, but that's probably too generous. He does however seem to withdraw some of the more provocative statements and wind up saying you do what you can, when you can, and don't beat yourself up too badly about it.

It seems to me that accessibility is being treated as one big lump you have to swallow. Particularly in the article heading "Has accessibility been taken too far". What does that even mean? From where I sit, in practice it hasn't actually budged much in the last 5 years, though an awareness of what you could do if you cared to might have improved.

There are some aspects of making an accessible design that present real difficulties to a designer, and some that do not. Making layout and content scale and flow sensibly across a useful range of font-size and effective window width is tricky. Making complex forms accessible can also add significant time to a project. But using semantic markup, and good page structure are not really hard at all.

So it seems there might be a useful distinction to be made between "not-meaningless" design, and "accessible" design. Where the former just implies the application of common sense and basic good practices, and the latter actually includes specific accommodations for some particular minority group/environment/technology.

Some is better than none. And if the brow-beating that Jeff refers to is real, it might be counter-productive. What is critical is awareness. And his post and lots of the comments that follow demonstrate some big gaps. There's a difference between a site looking/sounding crummy and being actually broken. Designers, developers and content authors need a better understanding of the impact of their decisions. Creating valid xhtml is useful, but not critical to accessibility. Css layout too - nice to have. Good alt tags? Only strictly /necessary/ in some cases, though without them it might be confusing and exasperating.

On a typical web project, that a site launches at all is usually a major accomplishment. You have to keep it simple, don't sweat the small stuff, and so on to get there. Accessibility competes with a host of other requirements for attention. Here's my scale of 0:10 for accessibility:
0: Not published at all, in any format. You just had to be there.
5: Published widely in an available format: perhaps a magazine, or a completely inaccessible web format like .gif or a downloadable WordPerfect document. At least I might hear about it, and get someone to help me read it.
8: Published in semantic, sensible html. But there are no alt tags, and no form field labels.
10: All the above, plus all our favorite shortcuts and conventions that make quickly grokking the content a breeze in every conceivable browser, screen-reader, device and context.

If you are a brow-beater (and I'd guess anyone with an interest in accessibility has been guilty at some point), this might be a good perspective to keep.

Stepping back a little, awareness of accessibility on the web does seem to have grown to the point that it is one of the criteria I hear being used when assessing quality. And this might be a simpler way to think of it. If a site blows up in IE 5, is mute or unintelligible in Jaws, invisible to googlebot and strains the eyes on a projector - maybe it's just plain bad. When "Good" includes being accessible, and inaccessible is "Bad", I think accessibility on the web has finally arrived.

Thursday, August 17, 2006

"Surveying OS Ajax Toolkits" article on infoworld

This is well worth a read. Unlike most reviews I've seen, this author obviously spent some time with each of the libraries he includes - enough to get a meaningful impression of the strengths and weaknesses. For me, he's right on the money with Dojo, YUI, Rico, Atlas. I differ a little on GWT, but in truth it sounds like he spent more time with it than I did. Being a dojo guy at present, I think there's lots in there he missed or didn't mention, but the overviews are fair IMO. Oddly, he didn't review Prototype/Scriptaculous at all.

Friday, June 23, 2006

Krugle - open source code search engine

I bumped into one of Krugle's developers at the Ajax Experience conference. Looks like they just came out of beta and are open to the public. This is sweet, I can't emphasize enough how useful this is already proving. 90% of all code (I reckon) is boiler-plate, but by the time you've tracked down an implementation (and possibly ported it to your language of choice) it's easier (say, 75% of the time) to just code it up yourself. Krugle changes that equation. This is going to be fun. Gripes - the frames and use of javascript: links make it difficult going on impossible to bookmark pages, blow out new tabs etc. Time to fire up Greasemonkey and fix it.

Tuesday, May 16, 2006

Javascript conflicts and portlet namespaces

First - javascript doesn't actually have "namespaces". But the idea is there - unless functions, objects and variables are designated otherwise, they exist in the global scope - properties of the window object. In a portal - where portlets might want to include script libraries to facilitate interaction within the portlet - there's a risk of conflicts with objects using the same name being redefined by portlet-imported code. There is no "portlet scope" - it would have to be artificially defined. This is doable for instance variables, but more problematic for libraries.
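That artificially-defined scope can be approximated with a function wrapper, so each portlet's instance variables live in a closure rather than on the window object (a sketch of the pattern; the `portlets` registry and the ids here are hypothetical):

```javascript
// One shared, well-known registry object - the only global we add.
var portlets = {};

(function (id) {
  // everything declared in here is private to this portlet instance
  var clickCount = 0;
  portlets[id] = {
    click: function () { return ++clickCount; }
  };
})("portlet_42");
```

This works for instance variables, but as noted it doesn't help with whole libraries that assume global scope.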

Joe Walker (author of DWR - a remoting framework for javascript/java) blogs about one face of this problem here - the $ function

Conflict can exist between code in/required by different portlets on a page, or between portlet code and the portal wrapper (style). The $ function is just one example of how code can compete. The other issue is that as javascript is a dynamic, prototype-based language, it's quite possible for a script to redefine core features of the language mid-stream:
Function.prototype.bind = function(context) {
 var fn = this; // the function being bound
 return function() {
  return fn.apply(context, arguments);
 };
};
This is increasingly common as library authors seek to give developers a familiar environment to code in and add syntactical sugar for common tasks. The problem is that method signatures, return values and behavior may differ between definitions, so redefinition is a real issue. This is where I think there's a need for guidelines and accepted best practice. Defensive coding will get you around most of it.. but not all.
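Defensive coding here mostly means testing before you define (a sketch; `bindTo` is a made-up method name, chosen to avoid colliding with anything real):

```javascript
// Only install our version if nothing else has claimed the slot -
// if another library got there first, we defer to its definition.
if (typeof Function.prototype.bindTo !== "function") {
  Function.prototype.bindTo = function (context) {
    var fn = this; // the function being bound
    return function () {
      return fn.apply(context, arguments);
    };
  };
}
```

Of course this only helps when both definitions are compatible; if the earlier one has a different signature, deferring to it is exactly the problem described above.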

Finally, there's the even thornier issue of 2 portlets using different versions of the same library. Even if the library was coded to be backward compatible, it would require a way to ensure the earlier version was not included last.

Wednesday, April 26, 2006

Playing nice with others in javascript

Andrew Dupont has written a very interesting article on Prototype, and the recent $() extensions that allow things like $(someelement).hide() and so on.

This is actually a really nice solution. It's syntactical sugar without actually extending the element itself. Andrew argues that as an object oriented language, it's reasonable to want to be able to extend objects like HTMLElement in javascript. I know where he's coming from, but out in the wild - in a world where my code has to co-exist with that of other authors (co-workers, customers, and potentially end-users via GreaseMonkey and similar) - these core objects and data-types are shared property, not something to take liberties with.
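The idea can be sketched as a wrapper: $() returns a new object that knows about the element, rather than grafting methods onto the element itself (a simplified sketch of the technique, not Prototype's actual code):

```javascript
// Return a disposable wrapper around the element; hide() and
// friends live on the wrapper, so the element is left untouched.
function $(el) {
  if (typeof el === "string") {
    el = document.getElementById(el); // accept an id, for convenience
  }
  return {
    element: el,
    hide: function () {
      el.style.display = "none";
      return this; // allow chaining: $(el).hide()...
    }
  };
}
```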

A portal is the extreme case, where the portal framework ("style") has its client-side scripts, and included portlets can have their scripts. Portlets might be by the same author or vendor as the portal, or not. The potential for collision and overlapping is huge. This is made worse when you consider that portlets using the same libraries might co-exist on the same page; do we download Prototype.js twice? What if the portlet code was developed against a different version of the same library? And finally, to make a difficult problem basically impossible, it's theoretically possible (I gather) to place the exact same portlet (id and all) twice on the same page.

Much of this is just hypothetical. In practice the portal owner has to take some responsibility for what goes on a page, and should enforce some basic conditions like requiring portlets not to tromp around in the window object and global namespace (and not mess with the fundamental data types that other scripts have to share). And in truth - how much scripting is really necessary on these kinds of portal pages? Most functionality will be a click away when the user actually selects a link from the portal and goes there. But there are some interesting use cases where you'd legitimately want to bubble up richer interactions to the aggregated portal page: client-side form validation, tooltips, context-menus, productivity controls (e.g. select all/none).

The Dojo Toolkit goes some way to addressing these issues. It minimizes its footprint in the global namespace - with just the dojo object itself, and a djConfig object. It also has checks in place to safeguard against the unexpected properties of objects that can show up when the core data type object prototypes have been extended (such as Array, Object, Function).
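Those safeguards usually boil down to a single global namespace object plus hasOwnProperty checks in for-in loops (a sketch of the pattern, not Dojo's actual code; `myLib` is a hypothetical name):

```javascript
// A single global namespace object holding everything we export
var myLib = myLib || {};

myLib.forIn = function (obj, fn) {
  for (var key in obj) {
    // skip properties that arrived via an extended Object.prototype
    if (Object.prototype.hasOwnProperty.call(obj, key)) {
      fn(key, obj[key]);
    }
  }
};
```

With the guard in place, iteration sees only the object's own properties, even when some other script on the page has extended Object.prototype.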

Note, none of this is new or unique to Ajax and the wave of more responsive UI we're seeing recently. The issue existed long before in even the simplest client-side scripting, as well as CSS (most pronouncedly when a stylesheet defines styles directly on an element such as P, TD, UL etc.) The advent of richer browser-based UI does quickly bring the problem to the fore though...

Monday, April 24, 2006

WCAG 2.0 (as compared to section 508)

I've been really impressed with the WCAG 2.0 guidelines. This is a big improvement in the way the guidelines are presented and worded that I think will make adoption much, much more likely. This here page answers the inevitable question: how does WCAG 2.0 line up against section 508? On one hand it adds specifics and details: success criteria and techniques for achieving success. On the other, WCAG 2.0 doesn't assume the use of HTML to present web-based content - so some detail that is explicit in section 508 has been moved to other sections within the document that provide technology-specific details.

Thursday, April 20, 2006

Accessible maps at ALA

This is a nice write-up of making a point-map (a map with information relating to points on that map) in a semantic and accessible manner. I read a lot of articles, and rarely feel compelled to blog them. I was impressed with this one though. It doesn't shy from digging right into the details, and does an admirable job of working through some complexity to present a real, viable solution. So many introduce a single technique and leave "the rest" as an exercise to the reader. The author also uses a definition list, which is one of my favorite constructs. My only hesitation is that DTs don't feature in the heading hierarchy, and so as semantically appropriate as they are, you lose that implied structure in most UAs.

Thursday, March 09, 2006

Object vs. View-centric apps (RoR vs. PHP)

This post starts out as an account of trying out PHP and Ruby on Rails to build a simple app, and comparing the experience. The thread builds though into an interesting discussion of differing approaches to web application development, and differing needs from the framework you use.