The Next Decade: part 2

December 6th, 2014

Five years ago I wrote a set of predictions about 2020. In some ways I was too unambitious about the pace of technology: browser technology has probably already reached the levels I expected a decade out. Other predictions look like they might be off-target, like the increase in storage capacity of mobile devices. We still have another five years to see how they all pan out.

I thought I’d take another swing at a set of predictions. This time I’m looking out towards 2025.

Manufacturing

3D printing is the next major transformative technology wave. In 2025 we’ll be seeing 3D printers in the homes of at least a quarter of North Americans. Home 3D printer quality in 2025 will be amazing. A consumer will be able to print in multiple colors and plastic/rubber-like materials, with a final print quality that will often rival that of mold-formed products.

There will be multiple marketplaces for things to print: some open-source ones that will contain basic shapes and designs, and a number of licensed repositories for printing designer items like toys and kitchenware from well-known names. There will also be a healthy 3D piracy community.

A lot of what you see when you look around a room in 2014 will, by 2025, be printed in the home or in a rapid-manufacturing facility. 3D printing will eat the manufacturing lifecycle from the bottom up. Products where the first 100 might be built with rapid prototyping today will have the first 100,000 produced on-demand. Companies like IKEA will choose to print and cut their designs on-demand in warehouses rather than stocking parts built elsewhere.

Local distribution centers will replace overseas manufacturing almost completely. North America will import processed materials from China, India and Africa for use in the new local centers: plastics and rubbers, and metals with binder. The local manufacturing centers will use a combination of 3D printing, laser/plasma cutting and generic robot assembly to build everything from one-off orders to runs of thousands as needed.

To support rapid manufacturing at these levels, generic robotics will replace the manufacturing line of today. A generic robot will be able to be taught to do a task like a human does and will coordinate with other generic robots to build a product like a fixed line would do today. Generic robotics will also find itself at home in other fields currently employing humans doing repetitive tasks like food preparation, laundry and cleaning.

Amusingly, 3D printers will be displacing regular document printers, which will have mostly died out by that point. Paper will take much longer than 2025 to replace, but the majority of financial, legal and educational work will be entirely electronic. Electronic signatures will be commonplace and some form of e-signature using mobile devices and biometrics will likely be used to validate documents. Paper in the home will be well on its way out and even children will be coloring on electronic devices with a stylus rather than in coloring books.

Home

In 2025 we’ll have maxed out our technologies for 2D display and audio. Screens on cheap devices will have the same effective DPI as paper and the pixel will be something very few people will see. Virtual reality won’t be pervasive yet, but it’ll be in a significant number of homes for entertainment purposes. There will be devices small enough for portable use.

The convergence of devices will continue, swallowing a number of the dedicated entertainment devices in the home. The cable box and dedicated game consoles will be almost dead, replaced with streaming from personal devices Chromecast-style. TV will still be going strong and there will be advancements in display technology that will allow basic glasses-free 3D using light-field manipulation.

Transport

Vehicles will be radically transformed by advances in self-driving technology. While we still won’t have pervasive self-driving vehicles, there will be a large number of autonomous vehicles on the road. These will mainly replace long-haul trucking and transport, but some higher-end vehicles will be autonomous enough to allow individuals to safely commute to work or run errands hands-free.

Car ownership will be dramatically declining by 2025. With the ability to summon an autonomous or human-driven vehicle at everyone’s fingertips, it won’t make sense for most people to own vehicles. High-end cars will continue to be a status symbol, but the low-end market will be decimated in the transition.

Electric vehicles will have captured around half of the market of vehicles sold, including everything from passenger cars to transport trucks. Fossil fuels and hydrogen will be included as “backup” options on some models, used only to increase range if needed by charging the batteries.

Energy

With solar getting cheaper and more efficient year by year, we’ll see some changes in the way that energy is supplied to the home. Neighborhood grids will replace nation-wide grids for home use. Dozens of houses will pool their solar power into a cooperative with battery storage that they’ll use for electricity, heating and charging of vehicles.

More nuclear power will come online in the 2020s, mainly driven by new reactor designs that produce very little waste and by advances in fail-safe technology. Thorium-based reactors will be just starting to appear and safe designs like the CANDU plant will become significantly more popular.

Much of the new power brought online will be fueling desalinization and water-reclamation facilities to stabilize the fresh water supply. Fresh water will dominate the news cycle as energy does today.

Conclusion

With all of the changes described above, North America will see a significant effect on the number of low-end jobs. The manufacturing, food and transport industries will be radically changed, with jobs moving from many low-skilled positions to a few high-skilled ones.

This will require thought on how we can re-architect the economy. I think we’ll see concepts like minimum guaranteed income replacing welfare and minimum wage. This is much more difficult to predict than the pace of technology, but we’ll have to do something to ensure that everyone is able to live in the new world we’re building.

On Google Chromebooks

May 30th, 2012

The Google Chromebook is an interesting product to watch. I’ve been a fan of them, and a user, since the early Cr-48 days. In fact, two Chromebook laptops were in service in our household until just a few weeks ago, when the Samsung Chromebook broke (although I hope to repair it soon).

These laptops sit next to our couch in a stack as a set of floater laptops we use for random surfing. If any of us are just looking for a quick bite of information, we generally pull out the Chromebook rather than walking over to the Macbook that sits on our kitchen counter. The Chromebook is also great for our son to use when building LEGO from PDF instructions.

Browsing is far better on the Chromebook than it is on any Android or iOS device I’ve used, hands down. I find the browsing experience to be frustrating on an iPad or my Galaxy 10″, while the Chromebook experience is flawless. The device is basically ready-to-use for browsing as soon as you lift the lid, in contrast to the fair amount of time it takes to get logged into the Macbook (especially if another user has a few applications open in their session).

The hardware itself in the early models was slightly underpowered, but that doesn’t really seem to matter much unless you’re playing a particularly intensive Flash video or HTML5 game. Scrolling is fairly slow on complex sites like Google+ as well, but it’s never been a showstopper. The touchpads have also been hit-and-miss in the early models. For what we use it for, the hardware is pretty decent. I imagine that the next generations will gradually improve on these shortcomings.

What makes these devices a hard sell is the price point. The cheapest Chromebook experience you can get today is the Acer (@ $300). Considering the fact that you are buying a piece of hardware that effectively does less than a laptop, I would find it hard to justify spending that amount if I were looking at hardware today. Even though I prefer to use the Chromebook when surfing over the tablets or the full laptop, I feel like the cost is just too much for a single-purpose device like this.

For Chromebooks to really take off in the home market, I think that a device with the equivalent power to the Samsung Chromebook 5 needs to be on the market at a $199 price point. I could see myself buying them without a second thought at that price. Alternatively, if we saw some sort of Android hybrid integration with the Chromebook, I think that this could radically change the equation and add significant perceived value to the device.

I don’t see the Chromebox ever being popular in households – I believe we’ll see the continued decline of the non-portable computer in the home. Now, if I were running a business where a large subset of employees could get by with just web access, I would definitely consider rolling these out. The Chromebox looks like it could be a real game changer for business IT costs.

Five minutes with the Kobo Vox

November 5th, 2011

Today I had a chance to play with the Canadian equivalent of the Kindle Fire, the Kobo Vox. It’s an Android 2.3 device, which means it can in principle run the broad ecosystem of Android apps. What it lacks, unfortunately, is the official Android Market application. It did appear to have access to the Gmail app, which makes the absence of the Market all the more surprising.

The Vox is a bit lackluster in the graphics department. Full-screen animations like zooms and fades are choppy: 5-10 frames per second. The same animations in the Kobo application on my Galaxy Tab 10 are fluid and smooth. This makes the Kobo Vox feel like a really cheap bit of hardware. It’s not a big deal while reading books in the Kobo application: paging is lightning fast, although it doesn’t have any sort of animation to indicate page flips.

One thing you get with the Vox that you won’t get with the plain Kobo application on other devices is the “Kobo Voice” social reading experience. You can annotate passages in books and share them with other readers. I don’t find this to be a big loss. The Vox also offers a way to lay out books in two-page landscape mode, which would be amazing on the Galaxy Tab 10, but feels a bit cramped on the smaller Vox screen.

The Kobo Vox does have a nice screen. Unlike the Dell Streak 7″ tablet, which suffers from narrow viewing angles in portrait mode, the Vox looked beautiful in both portrait and landscape orientation. The overall quality of the display feels pretty good.

Based on the five minutes I played with it, I don’t think it’s worth me buying. I’m tempted to look at the Kindle Fire for use in Canada, but I suspect Amazon’s less-than-perfect support for its services in Canada will make it a less interesting piece of hardware. If you don’t already have a tablet, however, this might not be a bad device to purchase.

Comparable devices:

  • Kindle Fire: $200
  • Kobo Vox: $200
  • Dell Streak 7″: $399 (terrible for reading in portrait!)
  • Galaxy Tab 8.9: $400-600 (couldn’t find it for sale in Canada)
  • Galaxy Tab 10.1: $649

Google +1 Chrome extension tracks your https traffic

November 3rd, 2011

EDIT: In a response to this post on Google+, Louis Gray says that he’s notified the team. I’ll update this post as I get more information.

The Google +1 extension for the Chrome browser sends an RPC event to Google for every page you visit, https or not.

I hate to be a downer on cool stuff like this, but I really don’t think this is acceptable. It’s even sending the querystring, which could potentially contain a secure session token. All of the communication to the Google servers happens over https, but I don’t think that excuses this. https:// traffic needs to be off-limits for auto-tracking like this.

I’d be OK if the button allowed you to disable auto-reporting of the current +1 count (this can default to ‘on’), and added a default-off option to show +1 counts for https sites.

Below is a screenshot of the RPC request sent to Google’s RPC endpoint, showing the https URL of a bank’s login page, complete with query string.

Automatic file transfer in iTerm2 via ZModem

October 26th, 2011

scp is a great way to securely transfer files from computer to computer, but wouldn’t it be nice if you could just automatically send files over the existing SSH connection you’ve already opened?

Back in the days of modem-based BBSes and dial-up machine access, file transfers were forced to run over the same TTY as your interaction with the system. A number of solutions evolved for this, starting with the grandfather of transfer protocols, XModem. Others followed, some built from the ground up like Kermit, while YModem and ZModem built on XModem’s foundation.

The latest version of iTerm 2 added support for two very interesting features: Triggers, which match a regular expression against each line of terminal output, and coprocesses, which can feed input directly into a terminal session. With these two features, we can add the ability to stream files to and from any server over an existing ssh session. As ZModem is the most modern of these protocols with wide support (lrzsz is well-supported and packaged on both OSX and Linux), I’ll show you how to use it to automate piggy-backed file uploads and downloads in your iTerm sessions.

Setup

First of all, install lrzsz via brew. This will install the sz and rz binaries in /usr/local/bin/:

macbook-pro-2:~ matthew$ brew install lrzsz
==> Downloading http://www.ohse.de/uwe/releases/lrzsz-0.12.20.tar.gz
==> ./configure --prefix=/usr/local/Cellar/lrzsz/0.12.20 --mandir=/usr/local/Cellar/lrzsz/0.12.20/share/man
==> make
==> make install
/usr/local/Cellar/lrzsz/0.12.20: 13 files, 376K, built in 21 seconds

Secondly, grab the scripts from my iterm2-zmodem github repo, and save them in /usr/local/bin/.

Next, we’ll add a Trigger to your iTerm 2 profile that will trigger on the signature of the rz and sz commands. The setup for these commands differs based on the iTerm 2 version you have:

Build newer than 1.0.0.20111026

    Regular expression: \*\*B0100
    Action: Run Coprocess
    Parameters: /usr/local/bin/iterm2-send-zmodem.sh

    Regular expression: \*\*B00000000000000
    Action: Run Coprocess
    Parameters: /usr/local/bin/iterm2-recv-zmodem.sh

Build older than 1.0.0.20111026 (only receive supported)

    Regular expression: [\$#] rz( -v)?$
    Action: Run Coprocess
    Parameters: /usr/local/bin/iterm2-send-zmodem.sh

Note: ideally we’d match on the ZModem initial packet signature, \*\*\u0018B01, in all versions of iTerm 2, but earlier versions had a bug that broke detection of this pattern. For those older builds we instead match against the pattern of the rz command typed at a shell.

Sending files to the server

To send a file from your desktop to the server, type the following at a shell prompt on the server:

# rz

A file-picker dialog will then pop up asking you for the file to send. Once you choose the file to send, it will automatically transfer the file across your existing console session.
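
The heavy lifting is done by the coprocess scripts from the github repo, and those are the canonical versions. As a rough sketch only, a sender script along these lines would do the job, assuming lrzsz in /usr/local/bin and an osascript dialog for picking the file:

#!/bin/bash
# Sketch of a send coprocess: runs on the Mac when the remote side starts rz.
# Ask for a file via an AppleScript dialog; an empty result means the user cancelled.
FILE="$(osascript -e 'POSIX path of (choose file with prompt "File to send via ZModem:")')"
if [ -z "$FILE" ]; then
    # Cancelled: emit a run of CAN characters to abort the ZModem transfer.
    printf '\030\030\030\030\030\n'
else
    # sz writes the ZModem stream to stdout, which iTerm feeds into the session.
    /usr/local/bin/sz -e -b "$FILE"
fi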

Receiving files from the server

To send files from your server to your desktop, type the following:

# sz file1 file2 file3 /folder/file*

A folder picker will show up, asking where you want to drop the files. If you send multiple files, they will all appear in this folder.
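
The receiving side is the mirror image. Again, this is just a sketch under the same assumptions; use the scripts from the repo for the real thing:

#!/bin/bash
# Sketch of a receive coprocess: runs on the Mac when the remote side starts sz.
# Ask where to drop the incoming files.
DIR="$(osascript -e 'POSIX path of (choose folder with prompt "Folder for received files:")')"
if [ -z "$DIR" ]; then
    # Cancelled: emit a run of CAN characters to abort the ZModem transfer.
    printf '\030\030\030\030\030\n'
else
    # rz reads the ZModem stream from stdin and writes the files into the chosen folder.
    cd "$DIR" && /usr/local/bin/rz -e -b
fi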

Wrap-up

This is a pretty rough first pass at this, but the shell scripts are available on github if you’ve got ideas for improvement.

Comments: discuss this on Hacker News

On the advancement of science and the useful arts

August 24th, 2011

(this is an expanded version of my Google+ post here)

Apple is quickly burning my goodwill towards them with these silly patent fights. Two of the three patents were found not to be infringed, while the last is a software patent that basically describes the functioning of a mobile device that deals with photos.

At this point, it’s probably worth pointing out that Apple’s notification bar is pretty much a rip-off of the Android one. And you know what? I really don’t care.

Companies should be riffing off each other’s designs and improving them as they do. We’ll get a lot further than if we give one company total control over a single domain. Apple has taken the Android notification bar and improved it, just as Google has done with various iPhone features. Both companies have built their mobile operating systems on the prior art of thousands of other inventions from the last thirty years.

As many people have stated, patents are a monopoly to advance science and the useful arts. They are not a monopoly to advance the profits of any given company, though that may be a side-effect of their existence.

Copyright and trademark law already exist to prevent direct copying of design. Would Apple have released the iPhone in the absence of software patents? Very likely. Would the iPhone/Android rivalry have shaped up the same way without software patents? Very likely.

In their current form, software patents have been hindering the progress of computing. With that in mind, I say it’s time for them to go.

See this post on Hacker News

Follow me on Twitter: @mmastrac

How Apple can make use of ARM (and Intel) in its laptop line

May 7th, 2011

There’s been some speculation about Apple moving to ARM in some of its MacBook products. This has been largely dismissed as pure rumour by a number of folks: the cost to developers of adding another platform to the universal binary format (“universal trinaries”) would be prohibitive. On top of that, emulating x86- or x64-compiled code on a purely ARM platform with reasonable performance would be a very challenging task.

That’s not to say that it’s totally impossible for Apple to take advantage of the potential power savings of making an ARM switch.

The focus has been on Apple switching entirely from the x64 platform to the ARM platform. I don’t think that’s a feasible approach, however. It requires a bit of out-of-the-box thinking instead. Here’s how I think it could happen.

There is a fair bit of space inside a MacBook compared to an iPad or iPhone. Apple could use some of this space to drop an A5 chip on the motherboard next to the Intel chip, effectively building itself a hybrid ARM/x64 system.

This A5 would be an integral part of the new MacBook design. In fact, the entire OSX kernel would run on it. It would also be capable of running most of the lighter applications on the system: Mail.app, Finder.app, Dock.app. Developers would be able to compile x64/ARM-capable binaries that could take advantage of this low-power processor as well. Twitter.app doesn’t need a full-fledged x64 processor to run; it would be perfectly happy living on the lower-power ARM chip.

The x64 chip would still play an important role. It would be useful for running applications that need more power than the ARM chip would be able to provide: games, web browsing and the like. It also provides the architecture necessary for VirtualBox, Parallels and BootCamp to make the OSX platform more interesting to switchers and those with Windows-only apps.

There’s no reason to think this hybrid approach wouldn’t work. The x64 processor would talk with the kernel running on the ARM chip via one of the many high-speed interconnects available on the Intel architecture. The two chips would share the framebuffer, so applications running on either chip would seamlessly render to the same desktop. They could potentially share the same RAM the same way that two Intel chips on a multi-socket motherboard do. Think of it as an “asymmetric multi-processor” setup, versus the normal SMP you’d see in servers.

There are huge advantages to running in this hybrid mode. The x64 processor could be powered down far more aggressively than it is today, adding precious hours to the runtime of the system. If users didn’t need anything more than simple ARM-able applications, the system could potentially run on the ARM processor alone for a significant part of the day, waking the Intel side only for “legacy” x86/x64 apps.

With this approach, I wouldn’t be surprised if we saw MacBooks boasting 20-hour runtimes when used in low-power mode.

Abusing the HTML5 History API for fun (and chaos)

March 7th, 2011

Disclaimer: this post is intended to be humorous and the techniques described in it are not recommended for any website that wishes to keep its visitors.

I’ve been waiting for a chance to play with the HTML5 History API since seeing it on Google’s 20thingsilearned.com. After reading Dive into HTML5’s take on the History API today (thanks to a pointer by Mark Trapp), I finally came up with a great way to play around with it.

Teaser: it turns out that this API can be used for evil on WebKit browsers. I’ll get to that after the fun part.

The web of the 90’s was a much more “interesting” place. Animated GIF icons were all the rage, loud pages and <blink> were the norm, and our friend the <marquee> tag gave us impossible-to-read scrolling messages.

If you can see this moving, your browser is rocking the marquee tag

Over time, the web grew up and these animations died out. Marquees made a short comeback as animated <title>s, but thankfully those never caught on.

The Modern Browser

The world of the 2011 browser is much different than the browser of the 90’s. Early browsers had a tiny, cramped location bar with terrible usability. In modern browsers, the location bar is now one of the primary focuses of the UI.

Given that, it seems like this large browser UI is ripe for some exploitation. What if we could resurrect marquee, but give it all of the screen real-estate of today’s large, modern location bar?

With that in mind, I give you THE MARQUEE OF TOMORROW:

<script>
function beginEyeBleed() {
        var i = 0;
        var message = "Welcome to the marquee of tomorrow!   Bringing the marquee tag back in HTML5!   ";
        setInterval(function() {
                // Rotate the message by i characters to produce the scrolling effect.
                var scroller = message.substring(i) + message.substring(0, i);
                // Spaces aren't visible in the location bar, so swap them for dashes.
                scroller = scroller.replace(/ /g, "-");
                // Rewrite the current URL in place without adding history entries.
                window.history.replaceState('', 'scroller', '/' + scroller);
                i++;
                i %= message.length;
        }, 100);
}
</script>

(if you’re viewing this on my blog in an HTML5-capable browser, you should see it running in your location bar)

And now for the evil part.

It turns out that some browsers weren’t designed to handle this level of awesomeness. Trying to navigate away from a page that turns this on can be tricky. Firefox 4 works well: just type something into the location bar and the bar stops animating. WebKit-based browsers are a different story.

On Safari, every replaceState call removes what you’ve typed into the location bar and replaces it with the new data. This makes it impossible to navigate away by typing something else into the bar. Clicking one of your bookmarks will take you away from the page, however.

On Chrome, the same thing happens, but it’s even worse. Every replaceState call not only wipes out the location bar, but it cancels navigate events too. Clicking on bookmarks has no effect. The only recourse is to close the tab at this point.

Edit: comments on this post have noted that you can refresh, or go back to get off the page. You can also click a link on the page to navigate away.

Edit: the location bar seems to be fine in Chrome on non-Mac platforms. Ubuntu is reportedly OK and Windows 7 performed OK in my test just now. The bookmark navigation issue is still a problem on the other platforms.

Overall, the new HTML5 history marquee is a pretty effective way of drawing eyeballs and, in some cases, forcing longer pageview times.

I filed issue 75195 for Chromium on this. I’ll file one for Safari as well.

Follow me (@mmastrac) on Twitter and let us know what you think of Gri.pe!

View/vote on HackerNews

(this is Thing A Week #7)

 

GWT 2.2 and java.lang.IncompatibleClassChangeError

March 3rd, 2011

I’ve been updating Gri.pe to the latest versions of the various libraries we use and ran into some trouble while attempting to update GWT to 2.2. There were some incompatible bytecode changes made: the classes in the com.google.gwt.core.ext.typeinfo package were converted to interfaces. Java has a special invoke opcode for interface types, so libraries compiled against the old classes fail to load with this obscure error when the verifier processes them:

java.lang.IncompatibleClassChangeError: Found interface com.google.gwt.core.ext.typeinfo.JClassType, but class was expected

There are updated libraries available for some third-party GWT libraries, but others (like the gwt-google-apis) haven’t been updated yet.

The solution suggested by others was to manually recompile these against the latest GWT libraries to fix the problem. I thought I’d try a different approach: bytecode rewriting. The theory is pretty simple. Scan the opcodes of every method in a jar and replace the opcode used to invoke a method on a class, INVOKEVIRTUAL, with the opcode for invoking a method on interfaces, INVOKEINTERFACE. More info on Java opcodes is available here.

The ASM library makes this sort of transformation trivial to write. Compile the following code along with the asm-all-3.3.1.jar and plug in the file you want to transform in the main method. It’ll spit out a version of the library that should be compatible with GWT 2.2. You might need to add extra classes to the method rewriter, depending on the library you are trying to rewrite:

package com.gripeapp.bytecoderewrite;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

import org.objectweb.asm.ClassAdapter;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodAdapter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class BytecodeRewrite {
  static class ClassVisitorImplementation extends ClassAdapter {
    public ClassVisitorImplementation(ClassVisitor cv) {
      super(cv);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
        String signature, String[] exceptions) {
      MethodVisitor mv = cv.visitMethod(access, name, desc, signature, exceptions);
      if (mv != null) {
        mv = new MethodVisitorImplementation(mv);
      }
      return mv;
    }
  }

  static class MethodVisitorImplementation extends MethodAdapter {

    public MethodVisitorImplementation(MethodVisitor mv) {
      super(mv);
    }

    @Override
    public void visitMethodInsn(int opcode, String owner, String name,
        String desc) {
      // These typeinfo types became interfaces in GWT 2.2, so calls against
      // them must use INVOKEINTERFACE rather than INVOKEVIRTUAL.
      if (owner.equals("com/google/gwt/core/ext/typeinfo/JClassType")
          || owner.equals("com/google/gwt/core/ext/typeinfo/JPackage")
          || owner.equals("com/google/gwt/core/ext/typeinfo/JMethod")
          || owner.equals("com/google/gwt/core/ext/typeinfo/JType")
          || owner.equals("com/google/gwt/core/ext/typeinfo/JParameterizedType")
          || owner.equals("com/google/gwt/core/ext/typeinfo/JParameter")) {
        super.visitMethodInsn(Opcodes.INVOKEINTERFACE, owner, name, desc);
      } else {
        super.visitMethodInsn(opcode, owner, name, desc);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    ZipInputStream file = new ZipInputStream(new FileInputStream("/tmp/gwt-maps-1.1.0/gwt-maps.jar"));
    ZipEntry ze;

    ZipOutputStream output = new ZipOutputStream(new FileOutputStream(("/tmp/gwt-maps.jar")));

    while ((ze = file.getNextEntry()) != null) {
      output.putNextEntry(new ZipEntry(ze.getName()));
      if (ze.getName().endsWith(".class")) {
        ClassReader reader = new ClassReader(file);
        ClassWriter cw = new ClassWriter(0);
        reader.accept(new ClassVisitorImplementation(cw), ClassReader.SKIP_FRAMES);
        output.write(cw.toByteArray());
      } else {
        byte[] data = new byte[16*1024];
        while (true) {
          int r = file.read(data);
          if (r == -1)
            break;
          output.write(data, 0, r);
        }
      }
    }

    output.flush();
    output.close();
  }
}

AppEngine vs. EC2 (an attempt to compare apples to oranges)

March 1st, 2011

This is an expanded version of my answer on Quora to the question “You’ve used both AWS and GAE for startups: do they scale equally well in terms of availability and transaction volume?”:

I’ve had an opportunity to spend time building products on both EC2 (three years at DotSpots) and AppEngine (the last year working on Gri.pe). At DotSpots, we started using EC2 in the early days, back when it was just a few services: EC2, S3, SQS and SDB. It grew a great deal in those years, tacking on a number of useful services, some of which we used (<3 CloudFront) and a number that we didn’t. Last year, around April, we switched over to building Gri.pe on AppEngine and I’ve come to appreciate how great this platform is and how much more time I spend building product rather than infrastructure. Other developers enjoy building custom infrastructure, but I’m happy to outsource it to Google.

Given these two technologies, it’s difficult to compare them directly because they are two different beasts: EC2 is a general-purpose virtual machine host, while AppEngine is a very sophisticated application host. AppEngine comes with a number of services out-of-the-box. The Amazon Web Services suite tacks on a number of utilities to EC2 that give you access to structured, queryable storage, automatic scaling at the VM level, monitoring and other goodies too numerous to mention here.

Transaction Volume

When dealing with AppEngine, the limit to your scaling is effectively determined by how well you program to the AppEngine environment. This means that you must be aware of how transactions are processed at the entity group level and make judicious use of the AppEngine memcache service. If you program well against the architecture and best practices of AppEngine, you have the potential of scaling as well as some of Google’s own properties.

One of the more expensive pages we render at Gri.pe illustrates this: our persistence code automatically keeps entities hot in memcache to avoid hitting the datastore more than a few times while rendering a page.
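
Our persistence layer is home-grown, but the core of the pattern is a simple read-through cache. Here’s a minimal sketch against the low-level AppEngine Java APIs (the class name and the 10-minute expiry are illustrative choices, not our actual code):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class CachedEntityLoader {
  private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
  private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

  /** Read-through cache: try memcache first, fall back to the datastore. */
  public Entity get(Key key) throws EntityNotFoundException {
    Entity cached = (Entity) memcache.get(key);
    if (cached != null) {
      return cached;
    }
    Entity fetched = datastore.get(key);
    // Keep the entity hot for subsequent page renders (10-minute expiry).
    memcache.put(key, fetched, Expiration.byDeltaSeconds(600));
    return fetched;
  }
}

The point is that a hot page render touches memcache a handful of times and the datastore rarely, which keeps per-request cost and latency flat as traffic grows.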

On EC2, scaling is entirely in your hands. If you are a MySQL wizard, you can scale that part of the stack well. Scaling is limited half by the constraints of the boxes you rent from Amazon and half by your skill and creativity in making them perform well. At DotSpots, we spent a fair bit of time scaling MySQL up for our web crawling activities. We built out custom infrastructure for serving key/value data fast. There was infrastructure all over the place just to keep up with what we wanted to do. Our team put a lot of work into it, but at the end of the day, it was fast.

It’s my opinion that your potential for scaling on AppEngine is much higher for a given set of resources, if your application fits within the constraints of the “ideal application set” for AppEngine. There are some upcoming technologies that I’m not allowed to talk about right now that will expand this set of ideal applications for AppEngine dramatically.

Reliability and availability

As for reliability and availability, it’s not an exact comparison here again. On EC2, an instance will fail from time to time for no reason. In some cases it’s just a router failure and the instance comes back a few minutes to a few hours later. Other times, the instance just dies, taking its ephemeral state with it. In our large DotSpots fleet, we’d see a machine lock up or disappear around once a month or so. The overall failure rate was pretty low, but enough that you need to keep on your toes for monitoring and backups. We did have a catastrophic failure while using Elastic Block Store that effectively wiped out all of the data we were storing on it – that was the last time we used EBS (in fairness, that was in the early days of EBS and it’s probably not as likely to happen again).

On AppEngine, availability is a bit of a different story. Up until the new High-replication datastore, you’d be forced to go down every time the Datastore went into maintenance – a few hours a month. With the new HR datastore, this downtime is effectively gone, in exchange for slightly higher transaction processing fees on Datastore operations. These fees are negligible overall and definitely worth the tradeoff for increased reliability.

AppEngine had some rough patches for Datastore reliability around last September, but these have pretty much disappeared for us. Google’s AppEngine team has been working hard behind the scenes to keep it ticking well. There are some mysterious failures in application publishing from time-to-time on AppEngine. They happen for a few minutes to a few hours at a time, then get resolved as someone internally at Google fixes it. These publish failures don’t affect your running code – just your ability to publish new code. We’re doing continuous deployment on AppEngine, so this affects us more than others.

If you measure reliability in terms of the stress imposed on the developers keeping the application running, AppEngine is a clear winner in my mind. If you measure it by the amount of time your application is unavailable due to forces beyond your control, EC2 wins (but only by a small amount, and by a much smaller margin when compared against the HR datastore).

Follow me (@mmastrac) on Twitter and let us know what you think of Gri.pe!

View/vote on HackerNews

(this is Thing A Week #6)