
I spent the last few days shoehorning Windows 7 into one of the laptops I’ve got around the house. My day-to-day desktop for development is a MacBook Pro, but I spend some time testing on Windows. I haven’t had a chance to update my Windows knowledge from where I left it at the XP level, so I figured it would be a good time to give Vista’s successor a shot.

The laptop I’m upgrading is old, but not so old that it’s obsolete. It’s a Gateway MX7337 with a 3.0GHz P4 (back when Hyperthreading, rather than multi-core, was all the rage!). It’s got a reasonable 1GB of RAM, enough that a basic Linux desktop would fly and development isn’t impossible. In fact, I did all of the Windows development for StumbleUpon’s IE toolbar on this machine!

The install

My experience begins a few days ago, shortly after we signed up for BizSpark. I downloaded their Windows 7 RC ISO and burnt it to a DVD. Popping it into the laptop resulted in the most pleasant Windows install I’ve encountered.

I’ll digress for a short moment here. I’ve been installing operating systems for a while now and Windows has always had the worst experience. I was very surprised when Microsoft decided to launch XP with the old Windows-NT text-mode installer. For the last few years I’ve been installing Fedora boxes with the pretty VESA-based Anaconda, while every XP machine I boot up starts off with the classic blue-screen of installation and “Press F6 to load drivers”.

There’s not much to say about Windows 7’s installation. It was fast, pretty and over with before I could really think about it.

Hardware

My biggest concern before I started down the Windows 7 path was hardware support. The Gateway laptop was a pain to get working under Windows XP. It uses a bog-standard Broadcom wireless chipset whose drivers, for some strange reason, Microsoft never shipped on its XP install disks. This always left me with an XP machine unable to connect to the internet, forcing me to use either a burnt DVD or a USB key to port over Gateway’s poorly packaged driver bundles.

Windows 7 surprised me here. With the exception of the sound hardware, everything worked out of the box. This is a pretty big improvement over XP on this particular laptop, though I can’t vouch for the experience of a user with newer, potentially unsupported out-of-the-box hardware. The graphics were a bit disappointing, since Intel’s “extreme” laptop chipset (852GME) was only supported by Windows’s default VESA modes.

… And Hardware Issues

The sound hardware was a mystery. The drivers were installed, device manager said everything was OK, but nothing was coming out of the laptop’s speakers. I eventually found some forum posts that suggested I try a set of Vista drivers – from a different manufacturer. That did the trick!

My next task was attempting to get graphics working at a level beyond basic VESA support. This was a lot trickier, since Intel’s last driver update was in 2006 and there was never an official driver release for Windows 7. I ended up using the device manager’s ability to install legacy drivers and pointing it at the latest driver release from Intel. A forum post suggested the following convoluted, but successful workaround:

  1. Remove all previous installations of the hardware (just keep Vista’s standard VGA driver).
  2. Download the latest drivers from Intel and extract them to the hard drive.
  3. In Device Manager, go to Action - Add legacy hardware.
  4. Select the device manually. (If the 82852/82855 GM/GME doesn’t show up in the list of display drivers, you need to point to the directory where you extracted the drivers.)
  5. THIS IS THE IMPORTANT STEP. Don’t select 82852/82855 GM/GME from the list of drivers; select the 945GM driver. It will install and you will need to reboot.

It took me a few tries to get that working, and I ended up with two 82852/82855 display adapters in Device Manager, but I can’t believe it worked at all! While the drivers are somewhat faster for general use, they’re still too old to support Aero.

Bonjour Printing (no pun intended)

The final part of the adventure dealt with trying to set up my Bonjour-available printers. I had installed iTunes on the laptop previously, which usually installs the Bonjour software and the Bonjour Printer Wizard. In this case, however, the printer wizard was missing. I had to uninstall and reinstall the software to make this available.

Once the wizard was ready to go, it found the two printers on the network. When I selected the printers I was confused - it couldn’t find the appropriate drivers for either of them. It turns out that Windows 7 doesn’t ship with the whole gamut of printer support on the installation disk like XP did. Instead, you need to either download the drivers directly from catalog.update.microsoft.com (thanks to @herkyjerky for the tip), or use the workaround that I did: setting up a fake printer on LPT1 with the correct drivers (which are pulled from Windows Update automatically), then deleting the printer and letting the Bonjour wizard add it.

Conclusions

Overall, I’m happy with the Windows 7 experience on this older machine. It certainly boots faster (fresh install notwithstanding) and it feels less clunky than XP on the same machine did.

The whole configuration experience is pretty overwhelming. Most of the options have moved since XP. Thankfully, the search available in the start menu is able to find most of the settings that I couldn’t find myself: showing file extensions and hidden files, for example.

UAC is a new beast for me. It’s somewhat annoying, but as long as you are a member of the local Administrators group you can just keep clicking “Yes” to its prompts. If you aren’t part of the admin group, you’ll need to enter the username and password of an admin every time you want to perform an admin-level task. If Microsoft had provided a way to remember the admin credentials for a short period of time, I’d probably run as a regular user rather than an Administrator (thanks to @liltude for some UAC tips!). Quick tip: if you get frustrated with entering your admin credentials for all the prompts and add yourself to the Administrators group, don’t forget to log out. If you don’t, you’ll still have your old “standard user” token and UAC will keep prompting you for a username/password!

Read full post

Quick round-up/review of the books I’ve read this first half of the year.

Mars Trilogy, Kim Stanley Robinson

Red Mars, Green Mars, Blue Mars

Interesting thoughts on the colonization of Mars, longevity treatments and the eventual conflicts between the ancestral and new homes of humanity. I recommend reading them all back-to-back, as the characters and story in each book pick up right where the previous one left off. The books can occasionally get a little dry, but it’s a good, inspiring read if you enjoy stories about politics and terraforming. Overall, more of a story for hardcore sci-fi lovers - not something you’ll want to dabble in. If you’re less of a committed sci-fi fan but interested in a good colonization story, The Moon Is a Harsh Mistress might be a better place to start.

Spook Country, William Gibson


An approximate sequel to Pattern Recognition, at least through the involvement of Hubertus Bigend and his Blue Ant company. Interesting characters and plotlines: Gibson weaves together three major characters, all of them believable and well-developed. The story mostly follows Hollis Henry, a former rock star writing for Bigend’s virtual Wired-clone magazine, Node; Milgrim, an anti-anxiety-drug addict; and Tito, a Cuban teenager. A light book overall - I enjoyed it.

Zoe’s Tale, John Scalzi


I got this one in PDF format as part of my 2009 Hugo Awards Voter’s packet. I haven’t read anything else in the series, but this was entertaining. It’s an alternate point of view (Ender’s Shadow-style) on another of John Scalzi’s books, The Last Colony, written from the perspective of a colonist couple’s daughter, Zoe. Since I hadn’t read the book that this one complements, it felt too light - too much was missing. At some point I hope to pick up the other book and see whether the pair makes more sense together.

Revelation Space, Alastair Reynolds


Of the books in this entry, this was by far my favorite. Deep and engrossing, along the lines of Hyperion. It follows a post-plague human culture set in the far future, with lots of high technology and a galaxy full of dead alien races. It begins with an archaeological dig into an alien race wiped out in a solar event 990,000 years prior. Lots of hints of a bigger universe, though many of the details seem to be left for sequels. It falls apart a little near the end, but I’m excited to read the next books in the series.

Next on my reading list:

Read full post

I’ve been working on a small project to bring support for the HTML5 <video> tag to older browsers, hoping to encourage use of this tag.  The idea is to use Flash’s video/mp4 support as a “downlevel” emulator for the video tag.

It uses an HTC binding in IE and an XBL binding in Mozilla to create a flash video in place of the video tag itself. The flash video support is provided by the excellent FlowPlayer, which supports playing mp4 videos out-of-the-box.
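
(As background: HTC and XBL bindings are generally attached to elements through CSS along these lines - the file names below are made up for illustration and aren’t video4all’s actual ones.)

/* Attach the downlevel implementation to every video element:
   an HTC behavior for IE, an XBL binding for Gecko. */
video {
    behavior: url(video4all.htc);           /* IE */
    -moz-binding: url(bindings.xml#video);  /* Mozilla/Gecko */
}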

Right now, video4all only supports videos that are statically added to your page. I hope to add support for dynamic addition of videos soon. The videos must be encoded in both video/mp4 and video/ogg formats to properly support Firefox, Safari and the Flash fallbacks. You’ll need to ensure that your video sources are properly tagged with the correct MIME types so that the script can pick them up.
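
For example, markup that the script can pick up might look something like this (file names and dimensions are placeholders):

<video width="480" height="270" controls>
    <!-- mp4 feeds Safari and the Flash fallback; Ogg feeds Firefox 3.5 -->
    <source src="clip.mp4" type="video/mp4">
    <source src="clip.ogv" type="video/ogg">
</video>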

The currently released browsers (that I know of) that support <video> are:

  • Firefox 3.5
  • Safari 4
  • iPhone 3.0

This project extends support for <video> to:

  • IE6+
  • Firefox 3.0
  • Safari 2-3
  • Opera (9.x)

For more info, visit the project page.

Read full post

If you’re not familiar with video4all, let me start off with a quick intro: It allows you to use the standards-compliant HTML5 <video> tag on any browser, freeing you from the complexity of configuring markup for multiple video formats.

I’ve been tweaking the video4all source a bit since last night’s late release to fix some issues with other browsers and clean up some of the code. Adding support for browsers without binding languages was pretty simple - a setInterval runs and checks for new video elements every few seconds, converting them to flash embeds as needed. It’s not ideal (DOM mutation events would be great here), but it does a decent enough job.
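
A minimal sketch of that polling loop might look something like this (an illustration only, not the actual video4all source; convertToFlash here is a stand-in for the real replacement logic):

// Every couple of seconds, look for <video> elements that haven't been
// handled yet and pass them to the conversion routine.
function convertToFlash(video) {
    // Stand-in for the code that swaps the element for a Flash embed.
}

setInterval(function() {
    var videos = document.getElementsByTagName("video");
    for (var i = videos.length - 1; i >= 0; i--) {
        if (!videos[i].getAttribute("data-converted")) {
            videos[i].setAttribute("data-converted", "true");
            convertToFlash(videos[i]);
        }
    }
}, 2000);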

One problem that I’ve run up against is that Safari 4 under Windows will actually eat your <video> element’s tags if QuickTime isn’t installed! They are no longer available in the document once eaten by the parser; in fact, I can’t find any way to recover these elements. I’ve been trying to report a bug to Apple, but their bug reporter fails with a mysterious error every time I try to log in with my ADC credentials. I might consider adding a hack property to the video element to support this ultra-minority browser (-x-safari-win-mp4-src?), but I’ll keep researching ways to rescue the missing tags first.

So, what’s next for video4all? First of all, I’d like to remove the hard-coded FlowPlayer control bar that the player uses. It affects the aspect ratio of the video, making it difficult to size these things properly. Secondly, I’d like to start work on binding the rich video JS interface to the flash control behind the scenes. Even making the simple methods to start and stop the video available would be a big help!

(unfortunately this demo is no longer available, sorry!)

If you are interested in helping make this project better, visit us at the video4all project site and join the discussion. I’d love to hear some feedback about potential methods to fix Safari 4’s broken parser, even if they are glorious hacks!

Read full post

Every developer has a different opinion of stored procedures. If you’ve worked on a few projects that involve databases, you’ll inevitably come across them as part of a project.

I’ve personally had a love/hate relationship with them. They require maintenance scripts just like your regular table structure, but they aren’t structure. Generally, databases don’t give you good introspection into stored procedures either - management is fire-and-forget.

Classically, stored procedures have been written in a bastardized version of the database’s native SQL. Take the basic SQL, add in some cursors, looping constructs and function invocation - that’s your stored procedure language. Depending on your database vendor, be prepared to deal with different syntaxes, punctuation and functionality.

No matter what, you’ll be writing part of your application in a different language, making the number of languages involved in any project with stored procs N + 1. If you choose to evaluate a different database vendor for your next project, or for version 2.0 of your current one, your previous stored procedures may or may not translate.

Over the last decade and a bit, database vendors have been adding native language bindings to the database itself, allowing developers to write code in languages they are comfortable in, regardless of the database storage technology behind the scenes.

I’ll step back for a bit here - the role of application storage has been changing recently, pushed by a number of factors:

  1. Data-binding frameworks, like Hibernate, SQLAlchemy, Ruby on Rails and a number of others have been creating an abstraction layer over different database brands, as well as managing the flow of data between the live objects in the system.
  2. SQL-free large stores like Google’s BigTable, CouchDB and others have been pushing complexity out of the datastore and into the application itself.
  3. Languages with dynamic typing are in vogue. Python, Ruby and PHP are popular, and Java will be getting dynamic dispatch soon.
  4. New accessible structured storage formats, like Apache Thrift and Google’s Protobuf, are allowing developers to create highly versionable “blobs” that are easy to write to, but hard to index and query.

Databases are going to have to evolve over the near future to suit the way that applications and hardware are developing today. Memory is cheap, disks are huge and time-to-market is one of the primary driving forces for developers.

I believe that the ideal role for a database is to be a high-powered host for serialization framework stubs. Instead of having to map application objects from database tables using clumsy SQL generation techniques and reconstructing objects from deep JOINs, the serialization frameworks should be injecting small stubs next to the database process itself. These stubs can coordinate data retrieval with the database itself - managing, grooming and traversing indexes, altering storage as needed for client-side data and sending appropriately structured data to the application.

Drizzle is a large step in the right direction. Instead of providing a smörgåsbord of features like the other DB vendors on the market, it provides a lightweight, modular architecture that lets you swap and re-plug components just as you would in your application itself.

Here’s my plea to database developers: Let me compose my database server like I do my Java code, from appropriately tailored components optimized for my use. Let me run my Java/Python/Ruby code on the database server, right next to the data itself, in a language that I’m familiar and comfortable with - running on both sides of the database’s TCP connection.

Read full post

After receiving my Google Voice invite tonight and picking a phone number, seemingly at random, I discovered that I had picked a number with the same last four digits as one of my friends’ numbers.  If you are familiar with the Birthday Paradox, you might recognize the form of the problem.

So, what are the chances that, given a number of friends n, you don’t pick a number whose last four digits match those of another friend’s number? Well, if it’s just you, the probability that you pick a number no one else has is 1. With a single friend, the chances that you pick a safe number are 9999/10000. With two friends, the chances are (9998/10000) * (9999/10000), modeling it as one trial after another.

It turns out that you can expand this sequence out and wrap it up in a nice factorial equation. I won’t bore you with the details - it’s identical to the technique used in the Wikipedia entry. You’ll end up with the following equation:
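
p(n) = 10000! / (10000^n * (10000 - n)!)

(This is the same form as the classic birthday-problem equation, with 10,000 possible four-digit endings standing in for 365 birthdays.)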

While having the equation on hand is nice, computing BigInt-magnitude factorials is beyond both Google’s calculator queries and Apple’s Calculator application. Fortunately, Wolfram Alpha comes to the rescue (the first time I’ve been able to use it for a real question!):

Enter “10000! / (10000^n * (10000 - n)!)” and you’ll get a detailed analysis of the equation, along with a pretty graph:

After some research, I figured out how to limit the plot to get a better idea by using the “from” keyword: 10000! / (10000^n * (10000 - n)!) from n=0 to 300:

So, as you can see, if you’ve got 120 friends, your chances are pretty much 50/50 that you’ll have the same last four digits as one of them.
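
For a quick sanity check without Wolfram Alpha, something along these lines computes the same probability as a running product, which sidesteps the enormous factorials entirely (a throwaway sketch, not code from this post):

// Probability that n people all end up with distinct four-digit suffixes.
function noCollisionProbability(n) {
    var p = 1;
    for (var k = 0; k < n; k++) {
        p *= (10000 - k) / 10000;
    }
    return p;
}

// noCollisionProbability(120) comes out around 0.49 - right at the 50/50 mark.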

Read full post

UPDATE: It’s live! The open-source project is up on Google Code and I’ve blogged more about it.

I’m getting closer to having the GWT bindings that we wrote for Firefox ready for public release. What we’ve got is more than enough to write a complex extension. The bindings were even complete enough to write a prototype of an OOPHM server, itself written in GWT!

For now, just a taste of what extension development is like with GWT, complete with strong typing, syntax checks, auto-completion and hosted mode support:

protected nsIFile createTempFile() {
    // Ask the directory service for the system temp directory (TmpD)
    nsIFile file = nsIProperties.getService("@mozilla.org/file/directory_service;1")
        .get("TmpD", nsIFile.iid());

    // Create a "logs" subdirectory if it doesn't already exist
    file.append("logs");
    if (!file.exists()) {
        file.create(nsIFile.DIRECTORY_TYPE, 0777);
    }

    // Create a uniquely-named log file inside it
    file.append("log.txt");
    file.createUnique(nsIFile.NORMAL_FILE_TYPE, 0666);

    return file;
}

protected void write(String value, nsIFile file) {
    // Open an output stream on the file and write the string out
    nsIFileOutputStream foStream = nsIFileOutputStream.createInstance("@mozilla.org/network/file-output-stream;1");
    foStream.init(file, 0x02 | 0x08 | 0x10, 0666, 0);
    foStream.write(value, value.length());
    foStream.close();
}

The bindings are all generated from the xulrunner SDK’s IDL files and include documentation, parameter names and constants:

/**
 * @param file          - file to write to (must QI to nsILocalFile)
 * @param ioFlags       - file open flags listed in prio.h
 * @param perm          - file mode bits listed in prio.h
 * @param behaviorFlags - flags specifying various behaviors of the class
 *                        (currently none supported)
 */
public final native void init(nsIFile file, int ioFlags, int perm, int behaviorFlags) /*-{
    return this.init(file, ioFlags, perm, behaviorFlags);
}-*/;
Read full post

I’ve been using this trick for a while and I thought I’d share it. For those who live by Eclipse’s quick fixes, it’s not entirely obvious that it’s even legal Java.

If you have legacy code like the code below, where foo.list() is a Java 1.4-compatible method returning a raw java.util.List, you’ll normally see an unchecked conversion warning on the assignment like so:

public void doSomeListStuff(Foo foo) {
    List<Blah> list = foo.list(); // Warning: unchecked conversion
    for (Blah blah : list) {
        frobinate(blah);
    }
}

Normally, Eclipse offers to fix it for you like this:

@SuppressWarnings("unchecked")
public void doSomeListStuff(Foo foo) {
    List<Blah> list = foo.list(); // Everything is A-OK!
    for (Blah blah : list) {
        frobinate(blah);
    }
}

A better solution takes advantage of Java’s less-touted ability to annotate local variable declarations. By annotating the declaration and assignment instead of the method, the warning suppression is limited in scope to the assignment expression itself:

public void doSomeListStuff(Foo foo) {
    @SuppressWarnings("unchecked")
    List<Blah> list = foo.list();

    for (Blah blah : list) {
        frobinate(blah);
    }
}

This keeps your code under the maximum protection of Java’s generic strong typing. By annotating a whole method with @SuppressWarnings("unchecked"), you may inadvertently introduce a later, unsafe cast that could cause a bug down the line.

Read full post

Here’s a neat CSS-only panel that you can use to disable controls within a given <div> element.  It uses an absolutely-positioned <div> within the container of controls you’d like to disable. The glass panel is positioned using CSS to overlap the controls, and set partially transparent to give the controls a disabled look.

Note that this only works in standards mode (<!DOCTYPE html>), due to IE’s painfully outdated quirks mode. Additionally, your container of controls needs to be styled with overflow: hidden to work around the limitations of the height expression, and position: relative so that it becomes the CSS positioning parent for the glass panel.

Click here to view the demo (works in standards-compliant browsers + IE).

<!DOCTYPE html>
<html>
<head>
<style>
    .disablable {
        position: relative;
        overflow: hidden;
    }

    .disablable-disabled .glasspanel {
        display: block;
        position: absolute;
        top: 0px;
        bottom: 0px;
        opacity: 0.4;
        filter: alpha(opacity=40);
        background-color: green;
        /* IE expression to stretch the panel to the parent's full height */
        height: expression(parentElement.scrollHeight + 'px');
        width: 100%;
    }

    .disablable-enabled .glasspanel {
        display: none;
    }
</style>
</head>
<body>

<button onclick="document.getElementById('control').className='disablable disablable-disabled';">Disable</button>
<button onclick="document.getElementById('control').className='disablable disablable-enabled';">Enable</button>

<div id="control" class="disablable disablable-enabled" style="border: 1px solid black;">
    <div class="glasspanel"></div>
    These are the controls to disable:
    <br>
    <button>Hi!</button>
    <select><option>Option 1</option><option>Option 2</option></select>
</div>

<button>Won't be disabled</button>
</body>
</html>
Read full post

I came across the Mix Gestalt project tonight and I thought I’d share some thoughts. It’s a bit of script that effectively sucks code snippets in languages other than Javascript out of your page and converts them to programs running on the .NET platform.

While interesting, it has a number of drawbacks that make it far less interesting than the HTML5-based approach that works in the standards-compliant browsers based on WebKit, Gecko and Opera, as well as the improved IE8.

First of all, it has to bootstrap .NET into Firefox (or whichever browser you are running it in).  This adds a few milliseconds to your page’s cold load time if it’s not already loaded. In the day and age of fast websites, any additional page time is just a no-go.

Once it’s up and running, the code that Gestalt compiles has to talk to the browser over the NPRuntime interface. Imagine pushing the volume of calls required for 3D rendering or real-time video processing over that interface - it becomes very difficult. To offer a comparison, the Javascript code that runs in Firefox is JIT’d to native code, and when that native code has to interact with the DOM, it gets dispatched through a set of much faster quickstubs. For browsers that run plugins out-of-process, like Chrome and future Mozilla builds, NPRuntime will be even worse!

One of the other claims about Gestalt is that it preserves the integrity of “View Source”. I’d argue that View Source is dead - and has been for some time now. I rarely trust the View Source representation of the page; dynamic DOM manipulation has all but obsoleted it. The web is still open, but it’s more about inspecting elements and runtime styles and being able to tweak those. Firebug provides this for Firefox, while Chrome and Safari come with an advanced set of developer tools out of the box. Even IE8 provides a basic, though buggy, set of inspection tools.

The last unfortunate point for the Gestalt project is that it requires a plugin installation on Windows and Mac, and is effectively unsupported under Linux. You won’t see any of these Gestalt apps running on an iPhone or Android device any time soon either.

So where do I see the right path?  HTML5 as a platform is powerful. Between <canvas>,  SVG, and HTML5 <video> you get virtually the same rendering power as the XAML underlying Gestalt, but a significantly larger reach.

As for the scripting languages, Javascript is the only language that you’ll be able to use on every desktop and every device on the market today. Why interpret the <script> blocks on the client when you can compile the Python and Ruby to Javascript itself, allowing it to work on any system?

Regular readers of my blog will know that I’m a big fan of GWT - a project that effectively compiles Java to Javascript. For those interested in writing in Python, Pyjamas is an equivalent project. I’m sure that there must be a Ruby equivalent out there as well.

Javascript is the lingua franca of the web, so any project that hopes to bring other languages to the web will have to take advantage of it. I’d hope that the Gestalt project evolves into one that leverages, rather than tries to replace, the things that the browser does well.

Read full post

Using window.name as a transport for cooperative cross-domain communication is a reasonably well-known and well-researched technique. I came across it via two blog posts by members of the GWT community that were using it to submit GWT FormPanels to endpoints on other domains.

For our product, I’ve been looking at various ways we can offer RPC for our script when it is embedded in pages that don’t run on servers under our control.  Modern browsers, like Firefox 3.5 and Safari 4.0 support XMLHttpRequest Level 2.  This standard allows you to make cross-domain calls, as long as the server responds with the appropriate Access-Control header.  Internet Explorer 8 supports a proprietary XDomainRequest that offers similar support.
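
As a rough sketch of how that feature detection plays out (not code from our product; the URL and callbacks are placeholders, and the server still has to send the appropriate Access-Control headers for the XHR path to work):

function crossDomainGet(url, onSuccess, onError) {
    if (window.XDomainRequest) {
        // IE8's proprietary cross-domain object
        var xdr = new XDomainRequest();
        xdr.onload = function() { onSuccess(xdr.responseText); };
        xdr.onerror = function() { onError(); };
        xdr.open("GET", url);
        xdr.send();
    } else if (window.XMLHttpRequest && "withCredentials" in new XMLHttpRequest()) {
        // Firefox 3.5 / Safari 4: XMLHttpRequest Level 2 can go cross-origin
        // when the server opts in via Access-Control headers
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function() {
            if (xhr.readyState == 4) {
                (xhr.status == 200 ? onSuccess : onError)(xhr.responseText);
            }
        };
        xhr.open("GET", url, true);
        xhr.send(null);
    } else {
        // Downlevel browsers (Firefox 2/3, Safari 2/3, IE6/7) need a
        // fallback like the window.name transport discussed below
        onError();
    }
}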

When we’re looking at “downlevel” browsers, like Firefox 2/3, Safari 2/3 and IE 6/7, the picture isn’t as clear. The window.name transport works well in every downlevel browser but IE6 and 7. In those IE versions, each RPC request made across the iframe is accompanied by an annoying click sound. As you can imagine, a page that has a few RPC requests that it requires to load will end up sounding like a machine gun. The reason for this is IE’s navigation sound which plays on every location change for any window, including iframes. The window.name transport requires a POST and a redirect back to complete the communication, triggering this audio UI.
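
For reference, the “redirect back” half of the transport is just a couple of lines of inline script on the page the remote endpoint returns for the POST (a sketch only - the payload and blank-page URL are placeholders). It’s this extra navigation, on top of the form POST itself, that sets off IE’s click:

// Inline script on the remote endpoint's response page (sketch only):
// stash the serialized response in window.name, then send the iframe back
// to an empty page on the caller's origin so window.name becomes readable.
window.name = "...serialized RPC response...";
location.replace("http://caller.example.com/blank.html");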

I spent a few hours hammering away on the problem, trying to find a solution. It turns out that IE6 can be fooled with an element that masks the clicking sound; this doesn’t work in IE7, however. My research then led to an interesting discovery: the GTalk team was using an ActiveX object named "htmlfile" to work around a similar problem - navigation sounds that would play during their COMET requests. The htmlfile object is basically a UI-disconnected HTML document that works, for the most part, the same way as a browser document. The question was now how to use this for a cross-domain request.

The interesting thing about the htmlfile ActiveX object is that not all HTML works as you’d expect it to. My first attempt was to use the htmlfile object, creating an iframe element with it, attaching it to the body (along with an accompanying form inside the htmlfile document) and POSTing the data. Unfortunately, I couldn’t get any of the events to register. The POST was happening, but none of the iframe’s onload events were firing:

if ("ActiveXObject" in window) {
    var doc = new ActiveXObject("htmlfile");
    doc.open();
    doc.write("<html><body></body></html>");
    doc.close();
} else {
    var doc = document;
}

var iframe = doc.createElement('iframe');
doc.body.appendChild(iframe);
iframe.onload = ...
iframe.onreadystatechange = ...

The second attempt was more fruitful. I tried writing the iframe out as part of the document, getting the iframe from the htmlfile and adding event handlers to this object. Success!  I managed to capture the onload event, read back the window.name value and, best of all, the browser did this silently:

if ("ActiveXObject" in window) {
    var doc = new ActiveXObject("htmlfile");
    doc.open();
    doc.write("<html><body><iframe id='iframe'></iframe></body></html>");
    doc.close();
    var iframe = doc.getElementById('iframe');
} else {
    var doc = document;
    var iframe = doc.createElement('iframe');
    doc.body.appendChild(iframe);
}

iframe.onload = ...
iframe.onreadystatechange = ...

I’m currently working on cleaning up the ugly proof-of-concept code to integrate as a transport in the Thrift-GWT RPC library I’m working on. This will allow us to transparently switch to the cross-domain transport when running offsite, without any visible side-effects to the user.

Read full post