Archive for the ‘web’ Category

The Next Decade: part 2

Saturday, December 6th, 2014

Five years ago I wrote a set of predictions about 2020. In some ways I was too unambitious about the pace of technology: browser technology has probably already reached the levels I expected a full decade out. Other predictions look like they might be off-target, like the increase in storage capacity of mobile devices. We still have another five years to see how they all pan out.

I thought I’d take another swing at a set of predictions. This time I’m looking out towards 2025.

Manufacturing

3D printing is the next major transformative technology wave. In 2025 we’ll be seeing 3D printers in the homes of at least a quarter of North Americans. Home 3D printer quality in 2025 will be amazing. A consumer will be able to print in multiple colors and plastic/rubber-like materials, with a final print quality that will often rival that of mold-formed products.

There will be multiple marketplaces for things to print: some open-source ones that will contain basic shapes and designs, and a number of licensed repositories for printing designer items like toys and kitchenware from well-known names. There will also be a healthy 3D piracy community.

A lot of what you see when you look around a room in 2014 will, by 2025, be printed in the home or in a rapid-manufacturing facility. 3D printing will eat the manufacturing lifecycle from the bottom up. Products where the first 100 units might be built with rapid prototyping today will have their first 100,000 produced on-demand. Companies like IKEA will choose to print and cut their designs on-demand in warehouses rather than stocking parts built elsewhere.

Local distribution centers will replace overseas manufacturing almost completely. North America will import processed materials from China, India and Africa for use in the new local centers: plastics and rubbers, and metals with binder. The local manufacturing centers will use a combination of 3D printing, laser/plasma cutting and generic robot assembly to build everything from one-off orders to runs of thousands, as needed.

To support rapid manufacturing at these levels, generic robotics will replace the manufacturing line of today. A generic robot will be teachable to do a task the way a human does, and will coordinate with other generic robots to build a product the way a fixed line does today. Generic robotics will also find a home in other fields that currently employ humans for repetitive tasks like food preparation, laundry and cleaning.

Amusingly, 3D printers will be displacing regular document printers, which will have mostly died out by that point. Paper will take much longer than 2025 to replace, but the majority of financial, legal and educational work will be entirely electronic. Electronic signatures will be commonplace, and some form of e-signature using mobile devices and biometrics will likely be used to validate documents. Paper in the home will be well on its way out, and even children will be coloring on electronic devices with a stylus rather than in coloring books.

Home

In 2025 we’ll have maxed out our technologies for 2D display and audio. Screens on cheap devices will have the same effective DPI as paper, and the pixel will be something very few people ever see. Virtual reality won’t be pervasive yet, but it’ll be in a significant number of homes for entertainment purposes, and there will be devices small enough for portable use.

The convergence of devices will continue, swallowing a number of the devices used for entertainment at home. The cable box and dedicated game consoles will be almost dead, replaced with Chromecast-style streaming from personal devices. TV will still be going strong, and advancements in display technology will allow basic glasses-free 3D using light-field manipulation.

Transport

Vehicles will be radically transformed by advances in self-driving technology. While we still won’t have pervasive self-driving vehicles, there will be a large number of autonomous vehicles on the road. These will mainly replace long-haul trucking and transport, but some higher-end vehicles will be autonomous enough to allow individuals to safely commute to work or run errands hands-free.

Car ownership will be declining dramatically by 2025. With the ability to summon transport, autonomous or human-driven, at their fingertips, it won’t make sense for most people to own vehicles. High-end cars will continue to be a status symbol, but the low-end market will be decimated in the transition.

Electric vehicles will have captured around half of the market of vehicles sold, including everything from passenger cars to transport trucks. Fossil fuels and hydrogen will be included as “backup” options on some models, used only to increase range if needed by charging the batteries.

Energy

With solar getting cheaper and more efficient year by year, we’ll see some changes in the way that energy is supplied to homes. Neighborhood grids will replace nationwide grids for home use: dozens of houses will pool their solar power into a cooperative with battery storage that they’ll use for electricity, heating and charging of vehicles.

More nuclear power will come online in the 2020s, mainly driven by new reactor designs that produce very little waste and by advances in fail-safe technology. Thorium-based reactors will be just starting to appear, and safe designs like the CANDU plant will become significantly more popular.

Much of the new power brought online will fuel desalination and water-reclamation facilities that stabilize the fresh-water supply. Fresh water will dominate the news cycle the way energy does today.

Conclusion

With all of the changes described above, North America will see a significant effect on the number of low-end jobs. The manufacturing, food and transport industries will be radically changed, with jobs moving from many low-skilled positions to a few high-end ones.

This will require some thought on how we re-architect the economy. I think we’ll see concepts like a guaranteed minimum income replacing welfare and the minimum wage. This is much more difficult to predict than the pace of technology, but we’ll have to do something to ensure that everyone is able to live in the new world we’re building.

On Google Chromebooks

Wednesday, May 30th, 2012

The Google Chromebook is an interesting product to watch. I’ve been a fan (and user) of them since the early Cr-48 days. In fact, two Chromebook laptops were in service in our household until just a few weeks ago, when the Samsung Chromebook broke (although I hope to repair it soon).

These laptops sit in a stack next to our couch as a set of floater laptops we use for random surfing. If any of us are just looking for a quick bite of information, we generally pull out the Chromebook rather than walking over to the MacBook that sits on our kitchen counter. The Chromebook is also great for our son to use when building LEGO from PDF instructions.

Browsing is far better on the Chromebook than on any Android or iOS device I’ve used, hands down. I find the browsing experience frustrating on an iPad or my Galaxy 10″, while the Chromebook experience is flawless. The device is basically ready to use for browsing as soon as you lift the lid, in contrast to the fair amount of time it takes to get logged into the MacBook (especially if another user has a few applications open in their session).

The hardware itself in the early models was slightly underpowered, but that doesn’t really seem to matter much unless you’re playing a particularly intensive Flash video or HTML5 game. Scrolling is fairly slow on complex sites like Google+ as well, but it’s never been a showstopper. The touchpads have also been hit-and-miss in the early models. For what we use it for, the hardware is pretty decent. I imagine that the next generations will gradually improve on these shortcomings.

What makes these devices a hard sell is the price point. The cheapest Chromebook experience you can get today is the Acer (at $300). Considering that you are buying a piece of hardware that effectively does less than a laptop, I would find it hard to justify spending that amount if I were shopping for hardware today. Even though I prefer the Chromebook over the tablets or the full laptop for surfing, the cost is just too much for a single-purpose device like this.

For Chromebooks to really take off in the home market, I think that a device with the equivalent power to the Samsung Chromebook 5 needs to be on the market at a $199 price point. I could see myself buying them without a second thought at that price. Alternatively, if we saw some sort of Android hybrid integration with the Chromebook, I think that this could radically change the equation and add significant perceived value to the device.

I don’t see the Chromebox ever being popular in households – I believe we’ll see the decline of the non-portable computer in the home going forward. Now, if I were running a business where a large subset of employees could get by with just web access, I would definitely consider rolling these out. The Chromebox looks like it could be a real game-changer for business IT costs.

Abusing the HTML5 History API for fun (and chaos)

Monday, March 7th, 2011

Disclaimer: this post is intended to be humorous, and the techniques described in it are not recommended for any website that wishes to keep its visitors.

I’ve been waiting for a chance to play with the HTML5 History API since seeing it on Google’s 20thingsilearned.com. After reading Dive into HTML5’s take on the History API today (thanks to a pointer from Mark Trapp), I finally came up with a great way to play around with it.

Teaser: it turns out that this API can be used for evil on WebKit browsers. I’ll get to that after the fun part.

The web of the ’90s was a much more “interesting” place. Animated GIF icons were all the rage, loud pages and <blink> were the norm, and our friend the <marquee> tag gave us impossible-to-read scrolling messages.

If you can see this moving, your browser is rocking the marquee tag

Over time, the web grew up and these animations died out. Marquees made a short comeback as animated <title>s, but thankfully those never caught on.

The Modern Browser

The world of the 2011 browser is much different from the browser of the ’90s. Early browsers had a tiny, cramped location bar with terrible usability. In modern browsers, the location bar is now one of the primary focuses of the UI.

Given that, it seems like this large browser UI is ripe for some exploitation. What if we could resurrect marquee, but give it all of the screen real estate of today’s large, modern location bar?

With that in mind, I give you THE MARQUEE OF TOMORROW:

<script>
function beginEyeBleed() {
        var i = 0;
        var message = "Welcome to the marquee of tomorrow!   Bringing the marquee tag back in HTML5!   ";
        // Return the interval id so the caller can stop the madness later.
        return setInterval(function() {
                // Rotate the message by i characters to fake a marquee scroll.
                var scroller = message.substring(i) + message.substring(0, i);
                // Spaces would be percent-encoded in the URL; dashes read better.
                scroller = scroller.replace(/ /g, "-");
                // Rewrite the visible URL without adding a history entry.
                window.history.replaceState('', 'scroller', '/' + scroller);
                i++;
                i %= message.length;
        }, 100);
}
</script>

(You can try it live on my blog if you’re viewing this in an HTML5-capable browser.)
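For the record, here’s a companion sketch of my own (not part of the original demo) showing how you might undo the damage: keep the interval id that beginEyeBleed returns, cancel it, and put the original path back with another replaceState call. The startEyeBleed/stopEyeBleed names are mine.

<script>
// Sketch only: start/stop wrappers around beginEyeBleed.
var marqueeTimer = null;
var originalPath = window.location.pathname; // captured before the marquee starts

function startEyeBleed() {
        if (marqueeTimer === null) {
                marqueeTimer = beginEyeBleed();
        }
}

function stopEyeBleed() {
        if (marqueeTimer !== null) {
                clearInterval(marqueeTimer);
                marqueeTimer = null;
                // Restore the URL the page was loaded with.
                window.history.replaceState('', '', originalPath);
        }
}
</script>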

And now for the evil part.

It turns out that some browsers weren’t designed to handle this level of awesomeness. Trying to navigate away from a page that turns this on can be tricky. Firefox 4 works well: just type something into the location bar and the bar stops animating. WebKit-based browsers are a different story.

On Safari, every replaceState call removes what you’ve typed into the location bar and replaces it with the new data. This makes it impossible to navigate away by typing something else into the bar. Clicking one of your bookmarks will take you away from the page, however.

On Chrome, the same thing happens, but it’s even worse. Every replaceState call not only wipes out the location bar, but it cancels navigate events too. Clicking on bookmarks has no effect. The only recourse is to close the tab at this point.

Edit: commenters on this post have noted that you can refresh or go back to get off the page. You can also click a link on the page to navigate away.

Edit: the location bar seems to be fine in Chrome on non-Mac platforms. Ubuntu is reportedly OK and Windows 7 performed OK in my test just now. The bookmark navigation issue is still a problem on the other platforms.

Overall, the new HTML5 history marquee is a pretty effective way of drawing eyeballs and, in some cases, forcing longer pageview times.

I filed issue 75195 for Chromium on this. I’ll file one for Safari as well.

Follow me (@mmastrac) on Twitter and let us know what you think of Gri.pe!


(this is Thing A Week #7)


AppEngine: “Peace-of-mind scalability”

Thursday, November 25th, 2010

I had a lot of great feedback on my AppEngine post the other day. We put our trust in the AppEngine team to keep things running smoothly while we work on our app and live the rest of our lives. Today is pretty quiet around the Gripe virtual offices (aka Skype channels): it’s Thanksgiving in the US, and I’m hit too hard by a cold to do much today besides write a short post.

We had a great surprise this morning: the episode of The View that we were featured on was re-run, and we had a huge traffic spike. Nobody noticed until about 30 minutes in, since everything was scaling automatically and the site was still serving up as quickly as ever.

This is a whole new way to build a startup: no surprise infrastructure worries, no pager duty, no getting crushed by surprise traffic. It’s peace-of-mind scalability.

Now back to fighting my cold, without the worry of our site’s load hanging over my head. For those of you in the US, enjoy your Thanksgiving and watch out for turkey drops!

(Video: “WKRP Turkey Drop” from Mitch Cohen on Vimeo.)

Why we’re really happy with AppEngine (and not going anywhere else)

Tuesday, November 23rd, 2010

There’s been a handful of articles critical of Google’s AppEngine that have bubbled up to the top of Hacker News lately. I’d like to throw our story into the ring as well, but as a story of a happy customer rather than a switcher.

We’ve been building out our product, Gri.pe, since last May, after pivoting from our previous project, DotSpots. In the early days, the only resource available to develop Gripe was myself. I’d been playing with AppEngine on and off since its initial Python release, creating a few small applications to get a feel for AppEngine’s strengths and weaknesses. We were also invited to the early Java pre-release at DotSpots, but it would have been too much effort to make the switch from our Spring/MySQL platform to AppEngine.

My early experiments on AppEngine near its first release showed that it was promising, but was still an early release product. Earlier this year, I started work on a small personal-use aggregator that I’ve always wanted to write. I targeted AppEngine again and I was pleasantly surprised at how far the platform had matured. It was ready for us to test further if we wanted to tackle projects with it.

Shortly after that last experiment, one of our interns at DotSpots came to us with an interesting idea: a social, mobile complaint application that we eventually named Gri.pe. We picked AppEngine as our target platform for the new product, given the platform’s new maturity. It also helped that, as the sole developer on the first part of the project, I wanted to focus on building the application rather than on the EC2 infrastructure and ops work that go along with productizing your startup idea. I prototyped it on the side for a few months with our designer, and once we determined that it was a viable product, we decided to focus more of the company’s effort on it.

There were a number of great bonuses to choosing AppEngine as well. We’d been wanting to get off the release-cycle treadmill that was killing us at DotSpots and move to a continuous-deployment environment, and AppEngine’s one-liner deployment and automated versioning made this a snap (I hope to detail our Hudson integration in another blog post). The new task-queue functionality in AppEngine let us do work asynchronously, as we’d always wanted to at DotSpots but found awkward to automate with existing tools like Quartz. The AppEngine Blobstore does the grunt work of dealing with our image attachments without us having to worry about signing S3 requests (in fairness, we’re using S3 signed requests for our new video-upload feature, but the Blobstore let us launch image attachments with a single day’s work).

When it came time for us to launch at TechCrunch 50 this year, I was a bit concerned about how AppEngine would deal with the onslaught of traffic. The folks on the AppEngine team assured me that as long as we weren’t doing anything to cause a lot of write contention on single entity groups, we’d scale just fine. And scale we did.

In the days after our launch, AppEngine hit a severe bit of datastore turbulence. There was the occasional latency spike on AppEngine while we were developing, but late September/early October was much rougher. Simple queries that normally took 50ms would skyrocket up to 5s. Occasionally they would even time out. Our application was still available, but we were seeing significant error rates all over. We considered our options at that point, and decided to stick it out.

Shortly after the rough period started, the AppEngine team fixed the issues. And shortly after that, a bit of datastore maintenance chopped the already-good latencies down even further. It’s been smooth sailing since then, and the AppEngine team has been committed to improving the datastore situation even more as time goes on.

We didn’t jump ship on AppEngine for one rough week, because we knew that their team was committed to fixing things. We’ve also had our rough weeks with the other cloud providers. In 2009, Amazon lost one of our EBS drives while we were prototyping DotSpots infrastructure on it. Not just a crash with data loss, but actually lost: the whole EBS volume was no longer available at all. We’ve also had weeks where EC2 instances had random routing issues between instances, or instances would lock up or get wedged with no indication of problems on Amazon’s side. Slicehost had problems with our virtual machines losing connectivity to various parts of the globe.

“Every cloud provider has problems” isn’t an excuse for any provider to get sloppy. It’s an understanding I have as the CTO of a company that bases its future on a platform: no matter which provider we choose, we are putting some faith in a third party to solve problems as they come up. As a small startup, it makes more sense for us to outsource management of IT issues than to spend half an engineer’s time dealing with them. We’ve effectively hired the AppEngine team to manage the hard parts of our scalability infrastructure, and they’re working for a fraction of what it would cost to do it ourselves.

Putting more control in the hands of a third party means that you have to give up the feeling of being in control of every aspect of your startup. If your self-managed colo machine dies, you might be down for hours while you get your hands dirty fixing, reinstalling or repairing it. When you hand this off to Google (or Amazon, or Slicehost, or Heroku), you give up the ability to work through a problem yourself. It took some time for me to get used to this feeling, but the AppEngine team has done amazing work in gaining the trust of our organization.

Since that rough week in September, we’ve had fantastic service from AppEngine. Our application has been available 100% of the time and our page rendering times are way down. We’re committing 100% of our development resources to banging out new features for Gri.pe and not having to worry about infrastructure.

On top of the great steady-state service, we were mentioned on ABC’s The View and had a massive surge in traffic, which was handled flawlessly by AppEngine. It transparently scaled us up to 20+ instances that handled all of the traffic without breaking a sweat. In the end, the surge cost us less than $1.00.

There’s a bunch of great features for AppEngine in the pipeline, some of which you can see in the 1.4 prerelease SDK and others that aren’t publicly available yet but address many of the issues and shortcomings of the AppEngine platform.

If you haven’t given AppEngine a shot, now is a great time.

Postscript: we do still use EC2 as part of our gri.pe infrastructure. We have a small nginx instance set up to redirect our naked domain from gri.pe to www.gri.pe and to deal with some other minor items. We also have EC2 boxes running Solr for part of our search infrastructure; we talk to Solr from GAE using standard HTTP. As mentioned above, we also use S3 for video uploads and transcoding.

Follow @mmastrac on Twitter and let us know what you think of Gri.pe!

It’s clear that a lot of people have stopped testing IE6

Monday, January 18th, 2010

IE6 has been getting a bad rap lately. France and Germany are both warning their citizens not to use it. The recent intrusions at Google were purportedly caused by holes in IE6. Even Microsoft is warning users to steer clear of it.

To get an idea of who’s given up on testing against this ancient browser, I’ve been running around browsing with IE6 in a VM. From what I’ve seen so far, it’s not getting a lot of love.

I’m pretty happy about this: as more and more sites break in IE6, it drives more people to browsers that don’t suck, like Chrome or Firefox. It’s the last part of the browser death spiral: completely busted on the new and exciting sites.

The web browsing experience is pretty poor overall. Many sites serve transparent PNG files to the browser, which often render with miscolored backgrounds. Elements gain random padding, throwing nicely formatted designs for a loop. Other elements that are shrink-wrapped in other browsers sometimes end up with extremely large widths.

Note: I tried not to test sites too deeply; I figured that deeply nested screens occasionally break in modern browsers too.

Facebook

Facebook was the first site I visited. I figured that it would be the site most likely to regularly test against IE6, and it’s clear that they do some testing: the interface works reasonably well overall, with only a few small glitches that might annoy a user.

The most glaring issue happens when you click the “more” link on the left: the UI gets kicked down to the bottom of the screen. It looks like a classic IE float bug.

The settings popup menu occasionally gets itself into a state where it pops as you mouse over the popup part of it, becoming entirely useless (it also randomly gains padding from time to time).

Twitter

Twitter’s site is pretty simple, but it obviously hasn’t been tested in IE6. The CSS image sprites they use as controls on each tweet are totally busted.

Google Reader

Google Reader is a rich application and works reasonably well in IE6. There are lots of little formatting issues: buttons that are cut off, icons that float in the wrong place, stuff pushed down a line. Overall it’s still usable, albeit slow.

FriendFeed

FriendFeed’s audience is probably not using IE6, and I imagine they don’t put a lot of work into testing against it. Many of the sidebar links don’t work, and the site sends transparent PNGs to IE6, which happily poops all over them.

Brizzly

An up-and-coming Twitter client, Brizzly looks pretty stellar in modern browsers. The front page is a bit mangled in IE6, however.

When you try to log in, it’s pretty obvious that IE6 isn’t in their plans.

Final thoughts

There’s a pretty big part of the web that still works in IE6, but a lot of the popular sites are getting pretty buggy in it. If your startup decides that it’s not worth supporting IE6, I wouldn’t blame you. IE6 is rockin’ a 13% market share on StatCounter, and it’ll likely drop to 4-5% by the end of next year.

Is it worth all the trouble for me to support IE6? It depends. I’m not going to worry about fixing every little visual glitch in IE6. I’ll probably start sending transparent PNGs down to every browser, even if the images don’t look great in IE6. If some bit of functionality breaks in IE6, I’ll repair it to the point where it works well enough to get the job done.

Full support for IE6? No thanks.

More on window.name Cross-Domain Transports

Monday, January 4th, 2010

I previously discussed using window.name as a transport last year, but that post didn’t have enough information and theory to let you implement your own transport. I’ll fill in the gaps a bit more here to help developers looking to implement this themselves. You can also see the implementation we’re using at DotSpots in the gwt-rpc-plus project: CrossDomainFrameTransport.java and CrossDomainFrameTransportRequest.java.

The process of making a window.name request is described below. Note that you can inspect the HTML and HTTP traffic on my blog to see how our DotSpots plugin performs the cross-domain requests.

  1. Create the iframe and form to post. In IE, we use ActiveXObject, as previously discussed, to work around the machine-gun navigation sounds. You’ll need to stash a reference to the ActiveXObject somewhere where IE won’t garbage-collect it.
    if ("ActiveXObject" in window) {
        var doc = new ActiveXObject("htmlfile");
        doc.open();
        doc.write("<html><body><iframe id='iframe'></iframe></body></html>");
        doc.close();
        var iframe = doc.getElementById('iframe');
    } else {
        var doc = document;
        var iframe = doc.createElement('iframe');
        doc.body.appendChild(iframe);
    }
    
    var form = doc.createElement('form');
    doc.body.appendChild(form);
    
  2. Set the iframe’s name to a globally unique request ID. Using a unique request ID is very important: if two frames have the same name in Safari, one of them will be given a random name.
    iframe.contentWindow.name = requestId;
    form.target = requestId;
  3. POST the form to the endpoint, including the unique request ID and a return URL on the host domain. You can usually use /favicon.ico as the local path to return to. This is what gets posted by the DotSpots plugin on my blog:
    serial=0-8908858
    data=... (truncated JSON data)
    type=window.name
    redirect=http%3A%2F%2Fgrack.com%2Ffavicon.ico
  4. At this point, the iframe is in a different security domain, so onload events won’t fire and window.name is inaccessible.
  5. The server generates code to set the window name and redirect back to the original domain. Since the iframe is in a different security domain, the parent page won’t be able to tell that it was loaded. We redirect this iframe back to the URL requested once we’ve set the name.
    <html><body>
    <script>
    window.name='wnr-0-8908858(truncated JSON data)';
    window.location.replace('http://grack.com/favicon.ico');
    </script></body></html>
  6. The browser redirects back. When the iframe arrives back at the other domain’s favicon, onload will successfully fire and the window.name previously set by the server will be accessible. As multiple onload and onreadystatechange events might occur during the request/response process, you should always check window.name for the appropriate prefix (a sketch of this handler follows the list).
  7. Clean up the iframe and form. Remove them after the request completes, but do so after a short delay: if you remove the iframe immediately after onload fires, the Firefox loading throbber will continue to animate indefinitely.
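Putting steps 6 and 7 together, here’s a minimal sketch of the client-side completion handler. This is my own illustration rather than the gwt-rpc-plus code: it assumes the iframe, form and request ID from steps 1 and 2, and the “wnr-” response prefix shown above; attachCompletionHandler is a hypothetical name.

<script>
// Sketch only: wait for the iframe to return to our domain, read the
// window.name payload, then clean up after a short delay.
// (Older IE may need onreadystatechange instead of onload.)
function attachCompletionHandler(iframe, form, requestId, callback) {
    var prefix = 'wnr-' + requestId;
    iframe.onload = function() {
        var name;
        try {
            // Throws while the iframe is still on the foreign domain; only
            // readable once we've redirected back to our own favicon.
            name = iframe.contentWindow.name;
        } catch (e) {
            return;
        }
        // Multiple onload events can fire; only accept a prefixed response.
        if (name && name.indexOf(prefix) == 0) {
            callback(name.substring(prefix.length));
            // Delay removal so Firefox's loading throbber stops correctly.
            setTimeout(function() {
                iframe.parentNode.removeChild(iframe);
                form.parentNode.removeChild(form);
            }, 100);
        }
    };
}
</script>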

If you are considering using a transport like this for your own project, you should also consider using postMessage for the uplevel browsers that support it. window.name requires two HTTP requests, adding a fair bit of latency to each cross-domain request; postMessage reduces the number of requests to one, cutting the round-trip latency by 50%.

Your server responses when using postMessage should look like this instead:

<html><body>
<script>
window.parent.postMessage('wnr-0-8908858(truncated JSON data)', '*');
</script></body></html>
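On the receiving side, the parent page needs a message listener. Here’s a minimal sketch (again mine, not from the post; real code should also verify event.origin against the endpoint’s domain before trusting the data, and handleResponse is a hypothetical dispatch function):

<script>
// Sketch: receive the postMessage response from the cross-domain iframe.
window.addEventListener('message', function(event) {
    // Production code: check event.origin against the expected endpoint first.
    var data = event.data;
    if (typeof data == 'string' && data.indexOf('wnr-') == 0) {
        handleResponse(data); // hypothetical dispatch to the pending request
    }
}, false);
</script>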

If you’re using GWT, all of this work is already done for you in our open-source library, gwt-rpc-plus. We’ve tested it against all of the current desktop browsers (all the way back to IE6) and the mobile browsers too (Android and iPhone).

GWT 2.0 on OSX and the ExternalBrowser RunStyle

Saturday, December 12th, 2009

GWT 2.0 ships with a new HtmlUnit-based testing system that makes testing faster, but HtmlUnit isn’t a full web browser under the hood. It can’t perform layout, so any tests that rely on layout measurements won’t succeed.

There are a number of useful alternate run styles, however. Manual mode, which outputs a URL that you can point any browser at, works well for quick testing, but is a pain when you need to keep re-running tests. RemoteWeb and Selenium automate the process of starting and stopping browsers, but require you to start an external server before testing.

There’s another run style that isn’t well documented but that I’ve found to be the most useful for testing locally: ExternalBrowser. It takes the name of an executable on the command line: the browser to start. On OSX, you can specify the fully-qualified name of the executable beneath the application bundle, or you can use a shell script to launch the browser of your choice.

Place the following script under /usr/bin, named ‘safari’, and make it executable with chmod a+x. This allows you (and ExternalBrowser) to launch Safari from the command line’s PATH. The argument to “open” is the name of any browser that lives under your /Applications directory, including subdirectories.

#!/bin/sh
# Launch Safari (-a) and wait (-W) for it to exit before returning.
open -W -a Safari "$1"
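The same trick should work for any browser under /Applications. For example, a hypothetical ‘chrome’ wrapper (assuming Google Chrome is installed there):

#!/bin/sh
open -W -a "Google Chrome" "$1"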

Add a Java system property “gwt.args” to your testing launch configuration in Eclipse, Ant script or other IDE. You can then specify the run style like so:

-Dgwt.args="-runStyle ExternalBrowser:safari"

Now, when you run your unit tests, you’ll see GWT launch Safari and navigate to the appropriate URL on its own. Tests will leave windows up on your screen after they complete, but you can safely close them after the run terminates.

Javascript primitive conversion quick-reference

Wednesday, September 16th, 2009

Javascript has three primitive types: number, string and boolean. You can quickly coerce values between the primitive types using some simple expressions.

There are a few different coercion expressions, depending on how you want to handle some of the corner cases. I’ve automatically generated a table below:

Conversion: | To Number | To Number | To Number | To String         | To Boolean | To Boolean
Expression: | +x        | (+x)||0   | +(x||0)   | ""+x              | !!x        | !!+x
------------+-----------+-----------+-----------+-------------------+------------+-----------
null        | 0         | 0         | 0         | "null"            | false      | false
(void 0)    | NaN       | 0         | 0         | "undefined"       | false      | false
NaN         | NaN       | 0         | 0         | "NaN"             | false      | false
Infinity    | Infinity  | Infinity  | Infinity  | "Infinity"        | true       | true
-Infinity   | -Infinity | -Infinity | -Infinity | "-Infinity"       | true       | true
0           | 0         | 0         | 0         | "0"               | false      | false
"0"         | 0         | 0         | 0         | "0"               | true       | false
1           | 1         | 1         | 1         | "1"               | true       | true
"1"         | 1         | 1         | 1         | "1"               | true       | true
2           | 2         | 2         | 2         | "2"               | true       | true
"2"         | 2         | 2         | 2         | "2"               | true       | true
[]          | 0         | 0         | 0         | ""                | true       | false
({})        | NaN       | 0         | NaN       | "[object Object]" | true       | false
true        | 1         | 1         | 1         | "true"            | true       | true
"true"      | NaN       | 0         | NaN       | "true"            | true       | false
false       | 0         | 0         | 0         | "false"           | false      | false
"false"     | NaN       | 0         | NaN       | "false"           | true       | false
""          | 0         | 0         | 0         | ""                | false      | false
"null"      | NaN       | 0         | NaN       | "null"            | true       | false

The above table was generated with the code below (note: it uses uneval, which is Firefox-specific).

<table id="results" style="border-collapse: collapse; border: 1px solid black;">
 <tr id="header">
 <th>Conversion:</th>
 </tr>
 <tr id="header2">
 <th>Expression:</th>
 </tr>
</table>

<script>
function styleCell(cell) {
    cell.style.border = '1px solid black';
    cell.style.padding = '0.2em';
    return cell;
}

var values = [
    null, undefined, NaN, +Infinity, -Infinity, 0, "0", 1, "1", 2, "2",
    [], {}, true, "true", false, "false", "", "null"
];

// Pairs of [column header, coercion expression].
var coercions = [
    ["To Number", "+x"],
    ["To Number", "(+x)||0"],
    ["To Number", "+(x||0)"],
    ["To String", "\"\"+x"],
    ["To Boolean", "!!x"],
    ["To Boolean", "!!+x"]
];

var results = document.getElementById('results');
var trHeader = document.getElementById('header');
var trHeader2 = document.getElementById('header2');

// Build the two header rows: the conversion type, then the expression itself.
for (var i = 0; i < coercions.length; i++) {
    var th = trHeader.appendChild(styleCell(document.createElement('th')));
    th.textContent = coercions[i][0];
    th = trHeader2.appendChild(styleCell(document.createElement('th')));
    th.textContent = coercions[i][1];
}

// One row per input value, one cell per coercion expression.
for (var i = 0; i < values.length; i++) {
    var tr = results.appendChild(document.createElement('tr'));
    var rowHeader = tr.appendChild(styleCell(document.createElement('th')));
    rowHeader.textContent = uneval(values[i]); // uneval is Firefox-only

    for (var j = 0; j < coercions.length; j++) {
        var td = tr.appendChild(styleCell(document.createElement('td')));
        // Evaluate the expression with x bound to the current value.
        td.textContent = uneval(eval("(function(x) { return " + coercions[j][1] + "})")(values[i]));
    }
}
</script>
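If you want to run the generator outside Firefox, here’s a minimal stand-in for uneval. This is my own sketch (the repr name is hypothetical), and it is only good enough for the values used in this table:

<script>
// Hypothetical replacement for Firefox's uneval(), covering only the
// values used above: primitives, null, undefined, [] and {}.
function repr(v) {
    if (typeof v === 'string') return '"' + v + '"';
    if (v === undefined) return '(void 0)';
    if (v !== null && typeof v === 'object') {
        return Object.prototype.toString.call(v) === '[object Array]' ? '[]' : '({})';
    }
    return String(v); // numbers, booleans, null, NaN, Infinity
}
</script>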

Time for Atom to Put RSS Out to Pasture?

Tuesday, September 15th, 2009

Is it time for the world to move on from RSS to its successor, Atom? Some considerations:

  1. Atom has an IETF standard for syndication.
  2. Atom has an IETF standard for publication.
  3. Atom was designed for modularity.
  4. Atom supports rich, well-defined activities within feeds.

RSS is effectively frozen at 2.0:

RSS is by no means a perfect format, but it is very popular and widely supported. Having a settled spec is something RSS has needed for a long time. The purpose of this work is to help it become a unchanging thing, to foster growth in the market that is developing around it, and to clear the path for innovation in new syndication formats. Therefore, the RSS spec is, for all practical purposes, frozen at version 2.0.1. We anticipate possible 2.0.2 or 2.0.3 versions, etc. only for the purpose of clarifying the specification, not for adding new features to the format. Subsequent work should happen in modules, using namespaces, and in completely new syndication formats, with new names.

It is full of legacy tags and archaic design decisions:

The purpose of the <textInput> element is something of a mystery. You can use it to specify a search engine box. Or to allow a reader to provide feedback. Most aggregators ignore it.

We are spending all this time duplicating effort. Every feed reader needs to deal with Atom and RSS. Every blog provides an Atom feed and an RSS feed. Users trying to subscribe to blog feeds are presented with an unnecessary choice.

RSS solved a need at the time, even though it was crufty, difficult to use and difficult to parse (remember when RSS didn’t have to be well-formed XML?). It served as an inspiration for millions of sites to open up their content to new methods of reading. And it inspired a great successor, Atom, which has surpassed it many times over.

We dropped Gopher when its time ran out. It’s time to make Atom the primary format for blogs.