Archive for June, 2008

ContextFree.js & Algorithm Ink: Making Art with Javascript

Monday, June 30th, 2008

In honor of the release of Firefox 3, I’m releasing ContextFree.js today, along with the demo site Algorithm Ink. ContextFree.js is about drawing striking images—and making art—with minimal amounts of code.

An introduction to ContextFree.js & Algorithm Ink

Computer programs lost something important when displaying a splash of color stopped being one line of code. As a kid, I remember being able to type “plot x,y” on the Apple II to throw up a phosphorescent splotch. When the simplicity of the one-line plotter went away, so did the delight at being so effortlessly generative—in a visual way—on the computer. ContextFree.js is a stab at making it easy again. It’s like a grown-up version of Logo (or at least the Turtle Graphics part of Logo).

Go ahead and play with it on Algorithm Ink. It has goodies like a gallery to see other people’s art, the ability to view the source-code of any piece of art, and the ability to save the art to your desktop with a single right-click (that comes along for free, more on this in a second). It works best in Firefox 3, but should also work in Opera and Safari.

The inspiration and kick-in-the-pants motivation for this project came from the always-excellent John Resig (also of Mozilla) finally releasing Processing.js.

It’s All About the Javascripts

ContextFree.js is a port of Context Free Art by Chris Coyne. Algorithm Ink uses open-web, standards-based tools: no plugins required. It uses Canvas to do the drawing, and Javascript to compile and interpret the Context Free grammar. The code that does the compiling and rendering is open-source and available here. Handling thumbnails and other manipulations of images is just a couple lines of Javascript. No need to muck around with server-side black magic. With the addition of arbitrary transforms to Safari 3.1, all modern browsers, except IE, support Canvas fully enough to run ContextFree.js.

One of the great things about using open-web standards is that it plays nicely with user expectations. If you like a particular piece of art, you can just right-click to save it as an image. It would also be possible to use an image as the background for a blog—imagine a unique piece of art adorning your site, one that’s different for every visitor. The bleeding-edge versions of Webkit have, in fact, implemented the ability to use Canvas with CSS. Open-web technology helps to break down the silos of inaccessibility: background images done with Flash were never adopted because of their heavyweight feel, as well as the lack of interoperability with the rest of the DOM. As Canvas becomes more widespread, I’m looking forward to an explosion of infographics and other data visualizations that weren’t possible before without a heavy server-side setup, or without a compile step that breaks the view-source-and-learn cycle of the web (Flash, Java, &c, I’m looking at you).

A Peek at the Syntax

The grammar is pretty simple. Let’s start by making a circle:

startshape shape

rule shape{
 CIRCLE{}
}

This says start by drawing the rule named “shape”. The rule shape says to draw a circle.

Now let’s make things more interesting:

startshape shape

rule shape{
 CIRCLE{}
 shape{ s .5 b .2 x -.25 }
}

We’ve added a single line which says that part of the rule for “shape” is to draw itself, scaled down by half, with a slightly increased brightness, drawn on the left side of its parent circle. This makes an off-centered bulls-eye pattern.

Now let’s go to the right, as well as the left:

startshape shape

rule shape{
 CIRCLE{}
 shape{ s .5 b .2 x -.25 }
 shape{ s .5 b .2 x .25 }
}

For comparison, this is what the equivalent code in Processing.js looks like:

// Processing.js code to make the same circle pattern.
void setup() {
  size(200, 200);
}

void draw() {
  drawCircle(126, 170, 6);
}

void drawCircle(int x, int radius, int level) {
  float tt = 126 * level/4.0;
  fill(tt);
  ellipse(x, 100, radius*2, radius*2);
  if(level > 1) {
    level = level - 1;
    drawCircle(x - radius/2, radius/2, level);
    drawCircle(x + radius/2, radius/2, level);
  }
}
It’s much longer, involves more setup, and requires more fiddly calculations. In fact, the entire ContextFree.js source is as long as the setup function. Of course, by simplifying so much in ContextFree.js, we’ve also removed the flexibility Processing.js affords.

Adding one more line of code gives an unexpected result: The Sierpinski Triangle. I’ll leave it as an exercise for the reader to figure out what that line of code is, but I doubt that it will be much of a puzzle.

Follow the mouse

The equivalent of “Hello, World!” for a graphical environment is a dot that follows the cursor, replete with a fading trail. This is what it looks like in ContextFree.js:

startshape DOT

rule DOT{
  SQUARE{ s 4 b 1 a -.9 }
  SQUARE{ s .3 }
}

rule MOUSEMOVE{
  DOT{ }
}

We start by defining the rule “DOT” which draws two things: A large square (scaled up 4x), which is white (brightness of 1), and mostly transparent (alpha of -.9), and then a smaller black square. The large transparent white square acts to fade out more and more of the previous DOTs that have been drawn (it’s a standard trick for doing motion blur effects). The rule MOUSEMOVE simply draws a dot at the current mouse position, whenever the mouse moves.

Here’s the result.

There’s one more example, at the end of this post, that demonstrates the rest of the features that make ContextFree.js powerful.

The Big Picture

Besides being pretty, why is ContextFree.js interesting? Because it shows the power of open-web technologies for making graphically rich, compelling interaction. The true power of the web revolves around anyone being able to dive in, see what someone else has done, and expand upon it. Canvas lowers the cost of entry to creating graphical mashups and other dynamic, graphical content. It also shows the progress the web has made: a year ago, this demo would not have been possible. Canvas wasn’t ready, and Javascript interpreters weren’t fast enough. The qualitative difference in speed from Firefox 2 to Firefox 3 shows the substantial progress made toward speeding up Javascript since the last major browser release cycle.


There are three major components to ContextFree.js: a tokenizer, a compiler, and a renderer. Each can be instantiated and used separately. The compiler returns a JSON object of the parsed code, which makes it easy to write a new front-end. I’d be interested to see if a Flash implementation of ContextFree.js would be faster than the pure Javascript, and if so, how much faster. As a side note, I’d also like to see an implementation of the Canvas API done in Flash, as a better replacement for excanvas for IE.
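To make the pipeline concrete, here is a toy sketch (not the actual ContextFree.js source; `tokenize` and `compileRule` are illustrative names) of how a single rule might be tokenized and compiled into the JSON form described above:

```javascript
// Toy sketch (not the actual ContextFree.js source): tokenize a single
// rule and compile it into a JSON "bytecode" object keyed by rule name.
function tokenize(src) {
  // Space out the braces, then split on whitespace.
  return src.replace(/([{}])/g, " $1 ").trim().split(/\s+/);
}

function compileRule(tokens) {
  let i = 0;
  if (tokens[i++] !== "rule") throw new Error("expected 'rule'");
  const name = tokens[i++];
  let weight = 1;
  if (tokens[i] !== "{") weight = parseFloat(tokens[i++]); // e.g. "rule spiral .01 {"
  i++; // consume the rule's opening "{"
  const draw = [];
  while (tokens[i] !== "}") {
    const shape = { shape: tokens[i++] };
    i++; // consume the shape's opening "{"
    while (tokens[i] !== "}") { // adjustment pairs like "s 4", "a -.9"
      shape[tokens[i]] = parseFloat(tokens[i + 1]);
      i += 2;
    }
    i++; // consume the shape's closing "}"
    draw.push(shape);
  }
  return { [name]: [{ weight, draw }] };
}
```

Running it on `rule DOT{ SQUARE{ s 4 b 1 a -.9 } }` yields `{ DOT: [{ weight: 1, draw: [{ shape: "SQUARE", s: 4, b: 1, a: -0.9 }] }] }`, which is exactly the kind of object a new front-end could consume.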

Because the computation and drawing of fractals can be intensive, the rendering occurs in its own threaded queue. How quickly it iterates over the shapes depends on how long the last drawing operation took. This helps to keep ContextFree.js from freezing the browser.

There were a couple problems with browser inconsistencies and Canvas deficiencies that make the renderer interesting.

The first problem I ran into was that while the Canvas specification does support setting the transformation matrix, it does not support getting it. This means that if you want to know how large a shape will be when you draw it (for instance, to know when an element in a fractal is too small to be seen and is therefore safe to skip), or want to draw in a threaded environment where you can’t guarantee a global transformation state, you need to re-implement the entire transformation infrastructure. This is a bad breakage of DRY, and slow from a development standpoint. According to Ian Hickson, the reason for the lack of a getter seems to stem from a possible implementation detail, although I don’t understand the argument. In personal communications with irrepressible Mozillian Vlad Vukićević, it appears that there is a possibility of adding the getter to the Firefox Canvas API.
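To give a feel for what that re-implementation involves, here is a minimal sketch (my names, not ContextFree.js’s actual internals) of shadowing the matrix in Javascript:

```javascript
// A 2D affine transform stored as [a, b, c, d, e, f], matching the
// argument order of ctx.setTransform(a, b, c, d, e, f).
const IDENTITY = [1, 0, 0, 1, 0, 0];

// Compose two transforms: the result applies n first, then m.
function multiply(m, n) {
  return [
    m[0] * n[0] + m[2] * n[1],
    m[1] * n[0] + m[3] * n[1],
    m[0] * n[2] + m[2] * n[3],
    m[1] * n[2] + m[3] * n[3],
    m[0] * n[4] + m[2] * n[5] + m[4],
    m[1] * n[4] + m[3] * n[5] + m[5],
  ];
}

// Approximate on-screen size of a unit shape under m: when this drops
// below a fraction of a pixel, the fractal element is safe to skip.
function scaleOf(m) {
  return Math.max(Math.hypot(m[0], m[1]), Math.hypot(m[2], m[3]));
}
```

With the matrix mirrored like this, the renderer can both cull invisible shapes and call setTransform with a known state, no matter which queue the draw happens on.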

The second problem I ran into was that versions of Safari older than 3.1 do not support setting the transformation matrix. The workaround isn’t pretty. It involves taking the desired transformation matrix, de-convolving it into an equivalent set of rotations, scales, and translations, and then applying those base transforms to the Canvas. That’s a lot of math and eigenvalues. Luckily, Dojo already did the heavy lifting and I was able to stand on their shoulders for ContextFree.js.
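For matrices built only from rotations, scales, and translations, the decomposition can be sketched briefly (the fully general case, with skew, is where Dojo’s heavy lifting comes in; `decompose` is an illustrative name, not Dojo’s API):

```javascript
// Recover an equivalent translate -> rotate -> scale sequence from an
// affine matrix [a, b, c, d, e, f], assuming no skew.
function decompose(m) {
  const [a, b, c, d, e, f] = m;
  const rotation = Math.atan2(b, a); // radians
  const sx = Math.hypot(a, b);
  const sy = (a * d - b * c) / sx;   // signed scale, via the determinant
  return { translate: [e, f], rotation, scale: [sx, sy] };
}

// On a pre-3.1 Safari you would then apply the base transforms in order:
//   ctx.translate(e, f); ctx.rotate(rotation); ctx.scale(sx, sy);
```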

The last problem I ran into is that even the latest editions of Safari do not support .setTransform, which means that there is no call to reset the current transformation matrix to the identity. I used lead Webkit developer Maciej Stachowiak’s solution: save the pristine transformation state (using .save), perform the desired transformation and draw step, and then restore the pristine state (using .restore).

The end result is that while there are inconsistencies, JS libraries can tide us over until user-agents get fully in sync. That said, I recommend the addition of .getTransform to the Canvas specification: it will save a lot of unnecessary code rewriting, most of which is matrix multiplication best done in a low-level language.


There are a number of improvements to be made to both ContextFree.js and Algorithm Ink. The former isn’t yet a full port (it’s missing things like comments, z-index, and loop notation). With the latter, there is still a strong disconnect between a text-based input and a graphical output. I would love the ability for authors to indicate which variables are interesting in their code, and have the UI expose that variability through sliders and other graphical means. I’d also like to graphically teach the system rules: for example, draw a couple shapes, group them, copy and scale them, and have Algorithm Ink generalize that as a rule, translate that rule to code, and draw the full fractal.

The ever-inspiring Bret Victor had some excellent suggestions that I hope someone takes up:

* Highlight a section of code (or “mark” a rule somehow), and the parts of the picture that were generated by that code/rule are marked somehow—by applying a tint or a glow, perhaps. Because things are so recursive, you should be able to tell when a part of the picture has been marked multiple times—with a darker tint, e.g. The idea is to be able to quickly explore a program and see what’s doing what.

* The opposite: put the mouse over a pixel, and see what code is responsible for that pixel. Ideally, you could see the entire call stack. Perhaps with annotated pictures so you can see what was drawn at each level of the call stack. Let the artist answer the question, “Why did this [part of the picture] happen?”

* Scrub through the construction of the picture. At each timestep, both the part being added and the code responsible for it are highlighted. The current transform matrix is shown. I can step through time and understand what is happening and why.

Given that ContextFree.js’s “bytecode” is simply a JSON object, as I mentioned above, it wouldn’t be hard to write a new front-end. For example, here’s the “bytecode” for the “follow the mouse” example:

    {
      DOT:[{ weight:1, draw:[
        {shape:"CIRCLE", s:4, b:1, a:-0.9},
        {shape:"CIRCLE", s:0.33, b:1, a:-0.5},
        {shape:"CIRCLE", s:0.3, b:0.5, sat:0}
      ]}],
      MOUSEMOVE:[{ weight:1, draw:[{shape:"DOT"}] }]
    }

Get Involved

ContextFree.js is open source and hosted on Google Code. Feel free to jump in, or email me at my first name at

Appendix A: Full Example

The last example demonstrates a couple more features that make ContextFree.js powerful:

startshape scale

rule scale{
  spiral{s .1 y -.5 x 1}
}

rule spiral{
 SQUARE{ a -.5 }
 spiral{ y 1.5 r 10 s .99}
}

rule spiral .01 {
 spiral{ y 1.5 r 10 s .95}
 spiral{ y 1.5 r 10 s .95 flip 90}
}

Let’s step through the code. The “scale” rule simply makes everything smaller and translates it down and to the right. Now look at the first rule “spiral”. It creates a half-transparent square, then draws a slightly smaller copy of itself, translated up and rotated ten degrees. Repeating this gets a stepping-stone spiral. There are, however, two rules for “spiral”. This is where randomness comes into the design: whenever the rule “spiral” is encountered, ContextFree.js randomly chooses one definition among all definitions of that rule. The number after the rule name indicates the weight of the choice, where the default weight is 1. Thus, this rule draws something like this:
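In Javascript, the weighted choice might be implemented like this (an illustrative sketch, not the actual ContextFree.js code; definitions follow the JSON “bytecode” shape shown earlier):

```javascript
// Pick one definition of a rule with probability proportional to its
// weight, so a definition weighted .01 fires roughly once per hundred
// expansions when the other definition has the default weight of 1.
function chooseDefinition(definitions, random = Math.random) {
  const total = definitions.reduce((sum, d) => sum + d.weight, 0);
  let r = random() * total;
  for (const d of definitions) {
    r -= d.weight;
    if (r <= 0) return d;
  }
  return definitions[definitions.length - 1]; // guard against float rounding
}
```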

Modifying the code slightly, we can get an almost hand-drawn look that makes for a compelling background image.

startshape scale

rule scale{
  shape{s .1 y -.5 x 1}
}

rule shape{
 SQUARE{ b 1 s 1.1 }
 SQUARE{ a -.5 }
 shape{ y 1.5 r 10 s .99}
}

rule shape{
 SQUARE{ b 1 s 1.1 }
 shape{ y 1.5 r 5 s .99}
}

rule shape .03 {
 shape{ y 1.5 r 10 s .95 b .1}
 shape{ y 1.5 r 10 s .95 flip 90 b .1}
}

A large version of this image is also available. You can also play with this pattern live on Algorithm Ink. And for those who missed it, here’s the ContextFree.js code. Email me, or comment below to get involved.

Cross Browser Canvas Support & Speed

Thursday, June 19th, 2008

It used to be that if you wanted to draw—curves, shapes, and polyhedra—with Javascript, you were out of luck. However, almost all modern browsers now support Canvas, a nifty technology that finally lets Javascript out of its DOMy prison.

For the most part it works well. To see it in action, take a look at the always-excellent John Resig’s Processing.js, Ilmari Heikkinen’s CAKE, and my own Algorithm Ink (which I’ll write more about soon).

Standards Compliance

In the grand tradition of browser technologies, Canvas isn’t fully supported by everyone, yet.

Firefox 3 and Opera 9.5 both get good marks. Ironically, it’s Apple—who originated the technology—that hasn’t gotten around to implementing some of the basics. In particular, Safari isn’t yet standards compliant and doesn’t implement the fundamental setTransform and transform methods. The transform method is, however, supported in the nightly build of Webkit.

IE is the only browser that doesn’t support Canvas at all (although, with Google’s excanvas you can pretend that it does).

To see exactly how well your favorite browser does, head over to the test report generator. Do note some of the tests are fairly pedantic and don’t have much real-world meaning. To see how your browser stacks up against other browsers, go here. The same caveat still applies here as well, so take them with a grain of salt.

Speed Tests

In general, Safari is always the fastest, followed closely by Firefox (in most cases), with Opera lagging in far third. Excanvas-enabled IE wasn’t really aware that the race had started. In general, it looks like what slows Firefox down is stroked arcs, while for Opera it’s filled arcs.

You can find the code for the tests here. I averaged the render times over 10,000 draws.

Excanvas-enabled IE uses VML under the hood, so every object you draw remains as a live, unrasterized object that eats memory and potentially CPU time. This means that I wasn’t able to test IE in the same way I tested the other browsers: making 10,000 smiley faces causes IE to collapse into an unresponsive mess and die (IE can’t handle all the happiness?).

You can also see how the time increases as more and more objects are added in IE: drawing 100 circles took only 1.1ms per circle, drawing 10,000 circles took 2.14ms per circle, and drawing 50,000 took a whopping 21.77ms per circle.

Firefox Mobile Concept Video

Wednesday, June 11th, 2008

Firefox is coming to mobile. The innovation, usability, and extensibility that has propelled Firefox to 200 million users is set to do the same for Firefox in a mobile setting.

User experience is the most important aspect of having a compelling mobile product. Every bit of interaction and pixel of presentation counts when typing is laborious and screen sizes are minuscule. Many of the standard interaction models, like menus, always-present chrome, and having a cursor, don’t necessarily make sense on mobile. It’s a wickedly exciting opportunity but there are myriad challenges to getting it right.

One avenue for exploring this opportunity is through Mozilla Labs, which is about pushing the envelope towards better and brighter interaction horizons, as well as incorporating a wider community into the innovation process. This concept video explains one direction we are thinking in, and we’d love to inspire participation in thinking about other directions.

Standard Mockup/Experimental UI Disclaimer
All of the images and videos are only conceptual mockups of an experimental UI for Firefox Mobile; any particular feature may end up looking entirely different, or may not even make it into the final release.


Design Principles

Touch. This concept prototype for Firefox mobile (code name Fennec) is being designed for a touch screen. Why not multitouch? Because Firefox should be able to run on the least common denominator of touch devices. Especially for touch-enabled interfaces, direct manipulation is key. Along that line of thought, the interface should be operable with a finger. Switching between input methods is time-consuming and annoying, so the user shouldn’t have to switch to a stylus or other secondary form of input. Firefox will work on non-touchscreen devices, but that’s out of scope for this demo.

Large targets are good. The same fingertip that controls the interface takes up between 1/5th and 1/10th of the height or width of a mobile touch-screen. In other words, fingers are fat: hitting small targets is like trying to touch-type with your elbow. All actions should be represented by targets that are large enough to be fast, easy, and (at the very least) not aggravating to hit.

Visual Momentum and Physics are compelling. Nothing shouts “sexy!” like pretty animations and a physics engine. Beyond marketing appeal, there is a strong argument that such physicality helps the user build a mental model of the interface, and that interface physics yields consistency. We are wired to track the movement of things and to be able to remember where they’ve gone, as long as they don’t appear and disappear, which doesn’t happen in the real world. Of course, copying every physical metaphor blindly gets you interfaces like the multi-million dollar blunder that was Microsoft Bob, so we need to select our metaphors carefully.

Typing is difficult. This means we want to minimize the number of keystrokes required to get anywhere or do anything.

Content is king. With restricted screen size, every pixel counts. As much of the screen as possible should always be dedicated to content, not controls or cruft.

Opening Screen

Let’s start with the basics. When you first open the browser, you’ll see two things. Your bookmarks (right), and a big plus button (left) which opens a new tab/page.

Clicking on bookmarks zooms the browser to the page. You can scroll the page by dragging as well as flicking. Scrolling has momentum (like the iPhone), so it’s easy to go long distances.

You zoom out again by dragging the page past its border in any direction. One of the cool things about this is that the gesture for zooming out to the home screen is simply throwing the page in any direction. It’s singularly therapeutic! It’s also discoverable: in the panning process you discover the visual clues that there is more past the edge of the page. In informal user tests every user figured out the interaction without instructions or help.

New Tabs

Creating a new tab is easy. You click on the big plus, and the browser finds an open spot, puts the tab there, and zooms in on it. If you have a homepage, the new tab opens there. Even as the page zooms further out, the new-tab button remains in the same logical spot.

Placing the cursor in the URL bar immediately gives a set of results (a la the awesome bar) generated from your history, sorted by both frequency and recency. To do an immediate search on what you’ve typed, you can just tap a search provider at the bottom of the results.


The standard controls (like back and forward) are located to the left of the page. You get there by gently dragging the page in the appropriate direction. When the controls are visible, the URL bar fades in as well. This makes for a discoverable way to access the chrome, with a huge Fitts’s-law-abiding target. To get rid of the chrome, you just drag the page back to the center, and the controls slide away.

Because the controls are accessed by panning, for the most part every single pixel of the screen is filled with what you care about most: the content.

By using horizontal panning to access the controls, we avoid the iPhone’s problem of needing to go to the very top of the page to enter URLs. (Yes, I know about tapping on the top of the screen to auto-scroll, but few iPhone users even here at Mozilla knew the trick). Horizontal panning does introduce the problem of accessing the controls when the page requires a lot of horizontal scrolling, but this is a rare case, and with kinetic scrolling moving a long distance is fast. Even long left-right pages will require only one flick to get to the edge, and then one push to open the controls.

To increase the discoverability of the controls even further, it might make sense to show the controls in the zoom-out view as well (appropriately scaled with the page).

Spatial View

In the zoomed-out view, you can drag the pages around to group them in a way that makes sense to you. If you have a couple pages open to your email, and a couple pages open to some vacation planning material, you can place the similarly-themed sites physically together. This makes excellent use of our spatial memory and muscle memory: finding where you put a page on a two-dimensional plane is remarkably swift and requires little cognitive load. With the addition of semantic placement, it’s even better. Spatial view uses space in a near-optimal fashion—all pages are displayed at once at near-maximum size, so that extraneous interaction is not required to browse through your open pages.

This is in contrast with the standard list-based, pop-up method of choosing pages, which is fast for a small number of items but becomes increasingly cumbersome as you have to peck-hunt-and-scroll for the page you want. Such interfaces retain no sense of physicality or visual momentum, and thus lose the benefits they confer.

One of the big benefits of the Spatial view is that it is easily extensible. The bookmarks page is a good example, but think bigger. Imagine if I wanted to transfer info from one cellphone to another: I’d bring the new phone close to mine, and it would appear as an area in the spatial view. I could zoom into it (assuming I had the appropriate permissions) and see what the other phone is looking at, collaborate, and drag tabs/files/contacts back to my phone. I’m sure there are other ways of using the metaphor, too.

There are a couple ways of augmenting this view. For example, the pages can be sized based on the frequency/recency with which you view a site. That way, sites you visit more often will be larger targets. It also helps to combat the all-page-thumbnails-look-like-white-squares problem. The spatial view has some potential scaling problems on the desktop where you can have upwards of 100 tabs open, but on mobile you’ll never have more than 10ish tabs open. Even so, I think there are solutions to the desktop problem, but that will have to wait until another post.

Spatial view has some passing similarities to Apple’s Expose. Both mechanisms are examples of zooming user interfaces (ZUIs), which have been around for almost as long as multitouch. The biggest difference is that with Expose, the spatial relationship between windows is broken every time the windows reshuffle. With the spatial view I know that the tab I want is down and to the left, whereas that mapping doesn’t exist with Expose. It’s an important distinction that helps the user to feel comfortable in the space and not get lost.

Get Involved

The code for the demo is open-source and available here. You can even play with it, but be warned that it is a hacked-up prototype and there are lots of edge-cases that aren’t handled. Labs is interested in turning this concept into a working Firefox extension, so if you are interested in helping to spearhead that effort, drop me a line (my first name at). If you’d like to get involved with Mozilla Labs more generally, check out our forums.

If you’d like to get involved on a broader level with the Mozilla mobile effort, you can join us on IRC (either #labs or #mobile on, join the mailing list, or take a look at the Mobile wiki page.


Creativity thrives best when not in a vacuum. There are a lot of folks I’ve bounced ideas off of, but I wanted to call out special thanks to crazy-in-the-best-way-possible Mark Finkle, drier-wit-than-thou Madhava Enros, unsung-hero Alex Faaborg, and I’ll-get-you-next-time Jenny Boriss for their invaluable input.