Wednesday, June 15, 2016

Unboxing Packages: path

I want to do something a little different with my blog post this week. When I’ve written about packages in the past, I’ve mostly done a high-level overview of their APIs and how they fit into the Dart ecosystem as a whole. But path is one of the very oldest packages in the ecosystem, and any Dart user who’s written any server-side or command-line apps is probably already familiar with the API.

So instead of a high-level overview, I want to do a deep dive. I want to talk about why we made the design decisions we made when writing path, and how we implemented our design effectively and efficiently. This post will be as much about how the package was constructed as it is about what the final product looks like.

Initial Design

It first became clear that Dart needed a solid solution for path manipulation when Bob Nystrom and I started working on pub. Paths may seem simple on their face, but there’s a lot of hidden complexity when you need to make them work with all the edge-case formats that can crop up across all the operating systems we support.

This became our first design constraint: make something that handles all the edge cases. This is less obvious than it sounds: a lot of times, good design involves sacrificing some edge-case behavior to make the common case better, or even just simpler to implement. But we knew that path would be widely used across the ecosystem, and we wanted users to be totally confident in it. If an application had to sanitize its paths before handing them off to us, we weren’t doing our job.

Another important early decision was to make the core API use top-level methods. We often look at other languages’ APIs for inspiration, but they were split on this point. Node uses top-level functions, whereas Java uses instance methods on a Path class. Ruby uses static methods for simple manipulation and a Pathname class for more complex ones. This didn’t provide clear guidance.

We decided to rely on a rule of thumb: only create a class when it’s the canonical representation of its data type¹. There were already a bunch of APIs, both in the core and in external code, that logically took paths and accepted only strings, not our hypothetical Path objects. Certainly everywhere the end user supplied a path, that path would be made available to the program as a string.

So we decided to go with the flow of the existing APIs and continue representing paths as strings. All the path manipulation APIs now take strings and return strings, and the world is simpler for it.

We chose functions for the package based on a combination of our own needs and APIs that were common among other languages’ path manipulation suites. Some of them, like join() and relative(), were pretty obvious. Others like rootPrefix() only became apparent because they filled holes in actual code. And a few, like prettyUri(), only got added well after the package was released.
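To make that concrete, here’s a quick sketch of a few of those functions in action. It uses the posix context explicitly so the results don’t depend on the host OS:

```dart
import "package:path/path.dart" as p;

void main() {
  // join() glues path components together with the style's separator.
  print(p.posix.join("usr", "local", "bin")); // usr/local/bin

  // relative() re-expresses a path relative to another directory.
  print(p.posix.relative("/usr/local/bin", from: "/usr")); // local/bin

  // rootPrefix() returns the root portion of an absolute path.
  print(p.posix.rootPrefix("/usr/local/bin")); // /

  // extension() returns the last extension, even with several present.
  print(p.posix.extension("archive.tar.gz")); // .gz
}
```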

The Quest for Correctness

We wanted to make our users confident in the correctness of path’s logic, which meant we had to be confident ourselves first. To do this, we wrote tests. Lots and lots of tests. Today, the package has 2.5 times more lines of test code than implementation code, and that’s how we like it.

Writing tests isn’t trivial, though. We had to be careful to include all the cases that came up in practice. This meant that, for every function where they were relevant, we tested combinations of:

  • Directory paths that did or did not end in separators.
  • Paths with zero, one, or two extensions.
  • Paths with multiple separators in a row.
  • Paths containing, or entirely composed of, the directory traversal operators "." and "..".
  • Absolute and relative paths.
  • Different formats of the current working directory.

We wrote those tests first for Posix-style paths, which are used by OS X and Linux. Then we ported them over to Windows paths², and added even more cases:

  • Windows supports both / and \ as separators, so we tested both and their combinations.
  • Not only does Windows support C:\-style path roots, it supports \\server\share\-style UNC paths as well.
  • You can also start a path with \ in Windows to indicate that it’s relative to the current working directory’s root.

Determining the proper behavior for all of these involved looking up specifications online, manually testing path behavior on the command line, and a healthy amount of discussion about exactly the right way to handle edge cases. These discussions led to our next round of design decisions.

Not all paths are valid. Sometimes they run afoul of an operating system’s rules for valid characters, and sometimes they just don’t make sense at all—consider the path /.., for example, or just an empty string. I initially advocated for throwing errors in these cases since in general failing fast is good, but we discussed options and Bob convinced me that path operations should never fail³.

While failing fast can make errors easier to track down, it also means that a defensive programmer has to be aware of the potential for failure anywhere it could occur. Path operations are frequently used in small utility methods that aren’t expected to fail, and most of the time their output is ultimately passed to IO operations which already need error handling.

So instead of throwing an error, the path operations just do the best they can on meaningless input. For most operations, /.. is considered the same as / and the empty path is considered the same as ., but we don’t work too hard to adhere to these definitions if it would get in the way of efficiently processing valid paths.

We also had to figure out what to do with paths that contained irrelevant characters, like foo//bar or foo/./bar, both of which are semantically identical to foo/bar. We ended up deciding to preserve the existing format as much as possible. The user would be able to explicitly call normalize() if they wanted clean paths, but otherwise they’d get something like what they passed in.

This decision made it easier to interoperate with other software that did, for whatever reason, care about the exact format of a path. For example, code using less-robust path manipulation logic might not be able to tell that foo/baz/qux was within foo/bar/../baz, so it’s useful for p.join("foo/bar/../baz", "qux") to return "foo/bar/../baz/qux".
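A sketch of the difference between the format-preserving functions and an explicit normalize() call, using the posix context for OS-independent results:

```dart
import "package:path/path.dart" as p;

void main() {
  // join() preserves the caller's formatting, ".." and all:
  print(p.posix.join("foo/bar/../baz", "qux")); // foo/bar/../baz/qux

  // normalize() is the explicit opt-in for cleaned-up paths:
  print(p.posix.normalize("foo//bar"));       // foo/bar
  print(p.posix.normalize("foo/./bar"));      // foo/bar
  print(p.posix.normalize("foo/bar/../baz")); // foo/baz
}
```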

Platforms and Customization

Paths are unusual in that their semantics are deeply platform-specific, but following those semantics mostly doesn’t actually require running the code on the platform in question. We wanted to take advantage of this to allow users to do path manipulations for platforms they weren’t using, but we also wanted to make the easy default use the current platform. This called for more design.

We came up with the idea of a Context object, which would take a style of path (Posix, Windows, or eventually URI) and the only OS-specific piece of data path manipulation used—the current directory path. Context had a set of methods that exactly mirrored the top-level functions in path. In fact, path’s functions just forward to a context!
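For example, a context for Windows-style paths can be created and used on any OS. The current directory here is an arbitrary value chosen for the example:

```dart
import "package:path/path.dart" as p;

void main() {
  // A Windows-style context, independent of the OS the code runs on.
  var windows = new p.Context(style: p.Style.windows, current: r"C:\root");

  print(windows.join("foo", "bar"));    // foo\bar
  print(windows.rootPrefix(r"C:\foo")); // C:\
  print(windows.absolute("foo"));       // C:\root\foo
}
```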

We used contexts heavily in our own tests. They allowed us to run Windows path tests on Linux, for example, and to test operations like relative() without having to make any assumptions about the current directory.

While adding contexts, we also made a design decision that turned out to be a mistake in retrospect. We’d defined a Style enum for determining which platform style a context should use, which would have been fine if we hadn’t decided to make public the methods Context called on the style.

We had a vague notion that this would allow third-party packages to define custom styles, but no one ever did. Even if they’d wanted to, different path styles are so idiosyncratic that they probably couldn’t have encoded all the custom logic in the methods we provided. So instead we had a bunch of public API surface that was tightly coupled to the internal implementation of path manipulation.

Eventually the implementation needed tweaking in a way that affected the Style methods. We couldn’t change those methods, so instead we deprecated them and added an internal implementation of Style where we could add new methods privately. The lesson here is that sometimes maximal extensibility isn’t worth the pain.

Making it Fast

When we first implemented path, we were primarily concerned with correctness and not speed. Our philosophy was (and is) to avoid optimizing packages until we have a clear idea of what parts are slowest and used most heavily. If the package started out correct and well-tested, we could be sure that any performance improvements later on preserved the necessary behavior.

But eventually the time came to make those changes. Users were doing path manipulations in performance-critical loops, and that meant it had to be fast. We set up a benchmark so we could track our progress, and used Observatory to see exactly what parts of our code were taking the most time. Then we called in Anders Johnsen, one of our resident performance experts who’s since moved on from Google, to see what he could do.

It turned out he could do a lot! Not only did our code get faster, we learned quite a bit about strategies for keeping it fast.

The first change was to avoid parsing the whole path. Our original code heavily used an internal ParsedPath class that eagerly parsed the entire path and exposed its components as fields. We still use this class for particularly complex functions, but for anything simple and performance-critical, we now deal with the string directly. This removes a lot of extra unnecessary work and allocations.

The second change was to stop using regular expressions. At the time, Dart’s regular expression engine was very slow. It’s since been dramatically improved, but explicit string operations still tend to involve a lot less overhead. We had been using regexps for very simple operations anyway, so switching away from them ended up being pretty straightforward.

Finally, we had to short-circuit early when possible. A lot of path operations were very complex in the worst case—they required a lot of logic and maybe even iteration over the whole path. But the worst case didn’t actually come up all that often, and it turned out to be pretty easy to detect when it didn’t. For example, Windows paths can have a lot of different roots, which makes finding the root difficult. But if the path starts with /, then it’s guaranteed to be a root-relative path, so the root is "/". These sorts of checks may seem nitpicky, but they helped a lot.
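Here’s a simplified sketch of those last two techniques—explicit code-unit checks instead of regular expressions, and short-circuiting the easy cases early. This is illustrative, not path’s actual implementation:

```dart
// A simplified sketch, NOT path's actual implementation.

const _slash = 0x2f; // "/"
const _backslash = 0x5c; // "\"

// Comparing code units directly avoids regular-expression overhead.
bool isWindowsSeparator(int codeUnit) =>
    codeUnit == _slash || codeUnit == _backslash;

/// Returns the root prefix of a Windows-style [path], or "" if the path is
/// relative. (A real implementation would also handle \\server\share roots.)
String windowsRootPrefix(String path) {
  if (path.isEmpty) return "";

  // Short-circuit: a leading separator means a root-relative path, so the
  // root is just that separator.
  if (isWindowsSeparator(path.codeUnitAt(0))) return path[0];

  // Drive-letter roots like "C:\".
  if (path.length >= 3 &&
      path.codeUnitAt(1) == 0x3a /* ":" */ &&
      isWindowsSeparator(path.codeUnitAt(2))) {
    return path.substring(0, 3);
  }

  return "";
}

void main() {
  print(windowsRootPrefix(r"C:\Users")); // C:\
  print(windowsRootPrefix(r"\Users"));   // \
  print(windowsRootPrefix("foo/bar"));   // (empty string)
}
```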

Coming Up For Air

I tried something new today, and I’m curious what you thought. Leave me feedback in the comments if you’d like to see more deep dives, or if you prefer my previous articles that gave a high-level overview of package APIs. For my part, I like writing both of them, so I’d be happy to continue doing a mix in the future.

Join me again in two weeks when I cover a very generalized package that’s only used in one place so far.

Natalie is still doing the writing here. I’m just hitting publish. – Kevin

1: If it were up to me, the File class’s methods would be top-level functions as well.

2: We also support URL paths, but these were added later on.

3: There’s one exception to this rule: path calls Uri.base to figure out the working directory for operations like relative(), which is an IO operation and so could theoretically fail. In practice, though, it basically never does.

Thursday, June 9, 2016

Dart 1.17: More performance improvements

Dart 1.17 is now available. Get it now!

We continued the work from 1.16 of working closely with some of our key users at Google to make sure Dart is even more productive for developers.

We've additionally optimized how our core tools deal with large applications, and have seen significant improvement over the last two releases. Dartium is now much more stable. We have improved the speed of the Dart Analyzer by more than 200% on large codebases. Last, but not least, Dartium startup time on large applications at Google has improved by a factor of 4.

We also made a number of smaller changes to core SDK APIs; for details, please refer to the SDK changelog.

Tuesday, May 17, 2016

Unboxing Packages: vm_service_client

Three weeks ago, I wrote about the stream_channel package. Two weeks ago, I wrote about the json_rpc_2 package which is built on top of stream_channel. This week I’ll complete the trifecta by writing about the vm_service_client package, which uses json_rpc_2 in turn—and is a really cool package in its own right!

One of the lesser-known corners of the Dart VM is its service protocol, but it’s one of its most powerful components. It uses JSON-RPC 2.0 over WebSockets to allow clients to connect to the VM, inspect its internal state, set breakpoints, and all sorts of neat stuff. If you’ve ever used Observatory for debugging or profiling your Dart code, you’ve been using the service protocol under the hood: that’s how the Observatory web app talks to the VM’s internals.

Because the protocol is fully documented and based on a standard underlying protocol, it’s possible for anyone to use from their code. And the vm_service_client package makes it downright easy: it provides a Dart-y object-oriented API for (more or less) everything in the protocol. And that turns out to be a lot of stuff: I count 108 classes in the API documentation, with more added over time as the protocol adds new features.

Because the client’s surface area is so broad, I’m not even going to attempt to cover all of it. I’ll discuss the most important classes, of course, but I also want to touch on the broader picture of how we took a language-independent RPC protocol and created a native-feeling API to use it.

Connecting the Client

Before we go deep into the API, though, let’s start at the beginning: actually establishing a connection with a running VM instance. The first step is to get the VM to actually run the service protocol at all. If you run dart --enable-vm-service, it will listen for WebSocket connections on ws://localhost:8181/ws (by default).¹ You can also force a running Dart process to start the VM service by sending it a SIGQUIT signal, as long as you’re not on Windows, but that’s a lot less reliable and customizable.

Once the service is running, you can connect a client to it using new VMServiceClient.connect(). This takes the service protocol’s WebSocket URL as either a string or a Uri—and, for convenience, it can also take Observatory’s HTTP URL, which it will use to figure out the corresponding WebSocket URL. If anything goes wrong, it’ll be reported through the done future.

import "package:vm_service_client/vm_service_client.dart";

main(List<String> args) async {
  var url = args.isEmpty ? "ws://localhost:8181/ws" : args.first;
  var client = new VMServiceClient.connect(url);

  // ...


Almost every piece of data the service protocol provides comes in two varieties: the reference and the full version. The reference contains a little bit of metadata, as well as enough information to make an RPC that will provide the full version, which contains all available information.

This split accomplishes two things. It makes the responses much more compact by avoiding unnecessary metadata. More importantly, though, it allows for circularity. A library can refer to the classes it contains, which can in turn refer back to the library that contains them.

In the client, all reference classes end in Ref, whereas their full versions do not. So VMLibraryRef is a reference to a VMLibrary. The full versions extend their corresponding references, so you can pass a VMLibrary to a method that expects a VMLibraryRef. Every reference has a load() method that returns its full version.

This is an important place where the client makes the API feel more native. Because the service protocol can’t really overload an RPC based on its argument type, it has to have different calls for resolving references to different types of objects. But as a Dart client, we can provide an extra degree of uniformity. This is a pattern that comes up several times throughout the client.

Runnable Isolates

Isolates are a special case: they have three states instead of just two. There’s VMIsolateRef and VMIsolate, but neither of these will provide all the isolate’s information. For that you need VMRunnableIsolate.

The extra layer exists because the VM loads an isolate in stages. First it creates the isolate with simple metadata like its name and its start time. But it needs to do a bunch of additional work, loading libraries and classes and stuff like that, before it can actually run any code in the isolate. Only once all that work is done is a VMRunnableIsolate available.

You can use the VMServiceClient.onIsolateRunnable stream to get a notification when the isolate you care about is runnable, but if you already have a reference to an unrunnable version there’s an easier way. VMIsolateRef.loadRunnable() returns the runnable version once it’s available, and does so in a way that’s guaranteed to be safe from race conditions.
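A minimal sketch of the stream-based approach, assuming a VM service at the default URL. It uses the onIsolateRunnable stream and loadRunnable() described above; the name getter on the isolate is assumed from the metadata the text mentions:

```dart
import "package:vm_service_client/vm_service_client.dart";

void main() {
  var client = new VMServiceClient.connect("ws://localhost:8181/ws");

  // Fires for each isolate once it becomes runnable.
  client.onIsolateRunnable.listen((isolateRef) async {
    var isolate = await isolateRef.load();
    print("Isolate ${isolate.name} is ready to run code.");
  });
}
```

This needs a live VM service to run against, so treat it as a starting point rather than a standalone program.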

Once you have a connection to the service, you need to be able to find the part you want to interact with. I don’t mean looking up its API docs, I mean actually getting to it from your VMServiceClient object. For the most part, the VM service is organized hierarchically: the client’s getVM() method returns the VM, the VM contains isolates, isolates contain libraries, and libraries contain classes.

Note that many of these getters are maps. This comes entirely from the client: the service protocol always sends collections down as lists for compactness. But it’s useful for users to be able to look up libraries by their URIs or classes by their names, so the client converts the lists into the data structure that best fits with how we expect users to interact with the data.

Let’s take a look at how you’d use this hierarchy to load the File class from dart:io:

var vm = await client.getVM();
var isolate = await vm.isolates.first.loadRunnable();
var library = await isolate.libraries[Uri.parse("dart:io")].load();
var fileClass = await library.classes["File"].load();


Code deals with data, so the VM service needs a way to represent data, and that way is the Instance class. Like most types, Instances have references and full values—these are represented in the client as VMInstanceRef and VMInstance.

On its own, VMInstance only provides two pieces of information. Its klass getter returns the VMClassRef representing the instance’s class, and its fields getter returns its fields and their values. In practice, though, many instances include more information—the service provides extra information for many core library types, which the client represents as classes like VMIntInstance and VMListInstance.

The client also provides VMInstanceRef.getValue(), a convenience method for converting instances to local Dart objects. Every VMInstanceRef subclass overrides this to recreate their particular type of object. It also takes an optional onUnknownValue() callback to which plain instances—including those in data structures—are passed to be converted into local values based on the caller’s logic.
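A sketch of getValue() in use; the method is described above, but treat the details here as approximate:

```dart
import "package:vm_service_client/vm_service_client.dart";

/// Sketch: rebuild a remote instance as a local Dart object. For core
/// library types like numbers, strings, and lists, getValue() does the
/// conversion itself.
Future printLocally(VMInstanceRef ref) async {
  var local = await ref.getValue();
  print("The remote object's local value is $local.");
}
```

Like everything in this package, this only does something useful against a live VM service connection.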

Evaluating Code

The VM service doesn’t just let you look at the VM’s state, it lets you run code on it as well! A few different classes have evaluate() methods that take strings to evaluate and return VMInstanceRefs. Where you evaluate the code determines what names are accessible.

  • VMLibraryRef.evaluate() runs code in the context of a library. This lets the code access anything the library has imported, as well as any of its private names.
  • VMClassRef.evaluate() runs code in the context of a class. This is mostly the same as the library context, except that the code can refer to static class members without needing to prefix them with the class’s name.
  • VMInstanceRef.evaluate() runs code in the context of an instance. This means the code can refer to the instance’s fields and to this.
  • VMFrameRef.evaluate() can be used when an isolate is paused to run code in the same context as one of the current stack frames.
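The contexts above can be sketched as follows. The expressions being evaluated are hypothetical, standing in for anything visible in each scope:

```dart
import "package:vm_service_client/vm_service_client.dart";

/// Sketch: the same evaluate() shape in two different scopes.
Future evaluateExamples(
    VMLibraryRef library, VMInstanceRef instance) async {
  // Library scope: top-level and imported names (even private ones) resolve.
  // "_somePrivateTopLevel" is a made-up name for the example.
  var fromLibrary = await library.evaluate("_somePrivateTopLevel");

  // Instance scope: the instance's fields and `this` resolve.
  // "someField" is likewise made up.
  var fromInstance = await instance.evaluate("this.someField + 1");

  print([fromLibrary, fromInstance]);
}
```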

This is another example of the client using the same API to draw similarities between different parts of the underlying protocol. Because the client has the well-known unifying metaphors of objects and methods, it’s able to take disparate APIs and expose them in a consistent way that isn’t possible using raw RPCs.

Go Forth and Make Something Cool

In some ways, the VM service client is the most exciting package I’ve written about yet. It’s designed to make available a whole bunch of internal VM functionality, and the only limit on what can be done with that functionality is your imagination. So take this package and make something cool. Make a REPL or a visual object inspector. Heck, make an entirely new debugger!

Join me again in two weeks when I write about one of the oldest and most fundamental packages in the entire Dart ecosystem.

1: Humans interacting directly with Observatory usually pass the --observe flag instead of --enable-vm-service. This will also enable the service, but it turns on a handful of other options as well, the exact set of which is subject to change. It’s much safer to use --enable-vm-service when writing code to interact with the VM.

Wednesday, May 4, 2016

Unboxing Packages: json_rpc_2

Last week I wrote about the stream_channel package for two-way communication, so this week it seemed natural to move to a package that uses it: json_rpc_2. This is an implementation of the JSON-RPC 2.0 specification, which is a popular protocol for providing structure and standardization to WebSocket APIs.

Although it’s most commonly used with WebSockets, the protocol itself is explicitly independent of the underlying transport mechanism. This makes it a great fit for stream channels, which can be used to represent a two-way stream of JSON objects in a way that works with any underlying mechanism. Thanks to stream channels, JSON-RPC 2.0 can be used across WebSockets, isolates, or any channel a user chooses to wrap.

Shared APIs

There are three main classes in json_rpc_2: Client makes requests and receives responses, Server handles requests and returns responses, and Peer does both at once. Because all of these involve two-way communication, they all have the same two constructors. The default constructor takes a StreamChannel<String> where each string is an encoded JSON object, and automatically decodes incoming objects and encodes outgoing ones. On the other hand, if you want to communicate using decoded maps and lists, you can use the withoutJson() constructor, which only requires that the objects be JSON-compatible.

The three classes also have the same lifecycle management. In order to give the user time to set up request handlers or enqueue request batches, they don’t start listening to the stream channel until listen() is called. Once it is, it returns a future that completes once the channel has closed—also accessible as the done getter. And if the user wants to close the channel themselves, they can call close().
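This lifecycle can be demonstrated end-to-end without any network at all, by wiring a client and a server together over a local stream_channel loopback. The "ping" method here is invented for the example:

```dart
import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:stream_channel/stream_channel.dart";

Future main() async {
  // A local loopback channel, so the whole lifecycle runs in-process.
  var controller = new StreamChannelController<String>();

  // Handlers can be registered before listen(); nothing is processed
  // until listen() is called.
  var server = new rpc.Server(controller.local)
    ..registerMethod("ping", () => "pong")
    ..listen();

  var client = new rpc.Client(controller.foreign)..listen();

  print(await client.sendRequest("ping")); // pong

  await client.close();
}
```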


Client

The Client class is in charge of making requests of a server. The core method for this is sendRequest(), which takes a method (the name of the remote procedure to call) and parameters to pass to that method.

The structure of these parameters depends on what the server accepts. JSON-RPC 2.0 allows both positional parameters, which are passed as an Iterable of JSON-safe objects, and named ones, which are passed as a Map from string names to JSON-safe values. The parameters can also be omitted entirely if the method doesn’t take any.
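A sketch of the three parameter shapes, again over a local loopback so it actually runs; all three method names are invented for the example:

```dart
import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:stream_channel/stream_channel.dart";

Future main() async {
  // A loopback server implementing the three invented methods.
  var controller = new StreamChannelController<String>();
  new rpc.Server(controller.local)
    ..registerMethod("sum", (params) => params[0].asNum + params[1].asNum)
    ..registerMethod("greet", (params) => "Hello, ${params["name"].asString}!")
    ..registerMethod("time", () => "now")
    ..listen();

  var client = new rpc.Client(controller.foreign)..listen();

  // Positional parameters: an Iterable of JSON-safe objects.
  print(await client.sendRequest("sum", [1, 2])); // 3

  // Named parameters: a Map from string names to JSON-safe values.
  print(await client.sendRequest("greet", {"name": "Natalie"}));

  // Or no parameters at all.
  print(await client.sendRequest("time")); // now

  await client.close();
}
```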

The call to sendRequest() returns a future that completes with the server’s response. The protocol defines two types of response: “success” and “error”. On a success, the server returns a JSON-safe object which the sendRequest() future emits. On a failure, the server returns an error object with associated metadata. This metadata is wrapped up as an RpcException and thrown by the future.

import 'package:json_rpc_2/json_rpc_2.dart' as rpc;
import 'package:web_socket_channel/web_socket_channel.dart';

/// Uses the VM service protocol to get the Dart version of a Dart process.
/// The [observatoryUrl] should be a `ws://` URL for the process's VM service.
Future<String> getVersion(Uri observatoryUrl) async {
  var channel = new WebSocketChannel.connect(observatoryUrl);
  var client = new rpc.Client(channel)..listen();

  // getVM() returns an object with a bunch of metadata about the VM itself.
  var vm = await client.sendRequest("getVM");
  return vm["version"];
}

If you don’t care whether the request succeeds, you can also call sendNotification(). JSON-RPC 2.0 defines a notification as a request that doesn’t require a response, and a compliant server shouldn’t send one at all. Notifications are commonly used by peers for emitting events, but I’ll get to that later.

JSON-RPC 2.0 also has a notion of batches, where a bunch of requests are sent as part of the same underlying message. The server is allowed to process batched requests in whatever order it wants, but it’s required to send the responses back as a single message as well. This can use less bandwidth if you have a bunch of requests that don’t have strong ordering needs.

The json_rpc_2 client lets the user create batches using the withBatch() method. This takes a callback (which may be asynchronous), and puts all requests that are sent while that callback is running into a single batch. This batch is sent once the callback is complete.
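A sketch of batching in action, counting the raw messages the client actually puts on the channel. The notification name is invented, and withBatch()’s return value isn’t awaited here since its exact shape isn’t described above:

```dart
import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:stream_channel/stream_channel.dart";

Future main() async {
  var controller = new StreamChannelController<String>();
  var messages = <String>[];

  // Record every raw message the client sends over the channel.
  controller.local.stream.listen(messages.add);

  var client = new rpc.Client(controller.foreign)..listen();

  // Both notifications below travel in a single underlying message.
  client.withBatch(() {
    client.sendNotification("logLine", ["starting up"]);
    client.sendNotification("logLine", ["ready"]);
  });

  // Give the batch a moment to be flushed onto the channel.
  await new Future.delayed(new Duration(milliseconds: 20));
  print(messages.length); // 1: one JSON array holding both requests
}
```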


Server

The Server class handles requests from one or more clients. Its core API is registerMethod(), which controls how those requests are handled. It just takes a method name and a callback to run when that method is called. The value returned by that callback becomes the result returned to the client.

import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:shelf/shelf_io.dart" as io;
import "package:shelf_web_socket/shelf_web_socket.dart";

var _i = 0;

main() async {
  io.serve(webSocketHandler((webSocketChannel) {
    var server = new rpc.Server(webSocketChannel);

    // Increments [_i] and returns its new value.
    server.handleMethod("increment", () => ++_i);
  }), 'localhost', 1234);

The server presents an interesting API design challenge. Most methods require certain sorts of parameters—one might need exactly three positional parameters, one might need two mandatory named and one optional parameter, and another might not allow any parameters at all. JSON-RPC 2.0 is pretty clear about how to handle this at the protocol level, but how do we let the user specify it?

We could have users manually validate the parameters—and in fact, for complex validations we do. Users can always manually throw new RpcException.invalidParams() based on whatever logic they code. But it’s a huge pain to manually validate the presence and type of every parameter, so Server uses a couple clever tricks to figure out requirements with minimal user code.

The first trick is that the callback passed to registerMethod() can take either zero or one parameters. This is how Server figures out whether the method allows parameters at all. In the example above, if a client tried to call increment with parameters of any kind, they would get an “invalid parameters” error. But the most clever trick is how parameters that are passed are parsed, and it involves an entirely new class.


Parameters

The Parameters class wraps a JSON-safe object and provides methods to access it in a type-safe way that will automatically throw RpcExceptions if the object isn’t the expected format. It’s what gets passed to the registerMethod() callback, if it takes a parameter at all.

If you call asList and the caller passed the parameters by name, it’ll throw an RpcException. If you call asMap and the parameters were passed by position? RpcException as well. Or you can just call value and get the underlying parameter no matter what form it takes.

Parameters also lets you verify the parameter values themselves. The [] operator can be used for either positional parameters (with int arguments) or named parameters (with string arguments), and returns a Parameter object which extends Parameters with a bunch of methods for validating types beyond just lists and maps.

All of the native JSON types have getters like asString, asNum, and similar. Just like asList and asMap, these getters return the parameter values if they’re the correct types and throw RpcExceptions if they aren’t. There are also derived getters like asDateTime and asUri which ensure that the value can be parsed as the appropriate type, and asInt which ensures that a number is an integer.

// Sets [_i] to the given value.
server.registerMethod("set", (parameters) {
  _i = parameters[0].asInt;
  return _i;
});

It’s important to note that the [] operator will return a parameter even if it doesn’t exist, either because there weren’t enough positional parameters passed or because a parameter with that name wasn’t passed. This makes it easy to support optional parameters.

A parameter that doesn’t exist will always throw an RpcException for its asType getters, and even for value. But there are methods that won’t throw. If you call asStringOr() for a parameter that exists, it behaves just like asString, but for a non-existent parameter it returns the defaultValue argument instead. Every asType getter has a corresponding asTypeOr() method; even value has valueOr().

// Returns the logarithm of [_i].
// If the `"base"` named parameter is passed, uses that as the base. Otherwise,
// uses `e`.
server.registerMethod("log", (parameters) {
  return math.log(_i)/math.log(parameters["base"].asNumOr(math.E));
});


Peer

The Peer class works as both a Server and a Client over the same underlying connection. In terms of API, it’s exactly the sum of those two classes. It adds no methods of its own, so in that sense you already know everything about it. But it’s still instructive to talk about why it exists.

While I can easily imagine a structure where two endpoints are truly peers, each invoking methods on the other and receiving results, in practice most of the peer-structured protocols I’ve seen exist for the sake of event dispatch. You see, JSON-RPC 2.0 doesn’t include an explicit mechanism for the server to push events to the client. It can only respond to requests made by the client. This is intentional, since it makes the protocol much simpler, and the peer structure is the standard way around it.

To support server events, both the client and server must act as peers, able to send and receive requests. In this world, events are modeled as requests sent from the server to the client—or more specifically, notifications, since the server doesn’t expect a response. The client registers a method for each type of event it wants to handle, and the server sends a request for every dispatch.

/// Uses the VM service protocol to print the VM name.
/// Prints the VM name again every time it's changed.
Future printVersions(Uri observatoryUrl) async {
  var channel = new WebSocketChannel.connect(observatoryUrl);
  var peer = new rpc.Peer(channel)..listen();

  peer.registerMethod("streamNotify", (parameters) async {
    if (parameters["streamId"].asString != "VMUpdate") {
      throw new rpc.RpcException.invalidParams(
          "Only expected VMUpdate events.");
    }

    print("VM name is ${(await peer.sendRequest("getVM"))["name"]}.");
  });

  // Subscribe to VM events so the service starts sending streamNotify
  // notifications.
  await peer.sendRequest("streamListen", {"streamId": "VMUpdate"});

  print("VM name is ${(await peer.sendRequest("getVM"))["name"]}.");
}

RPC Home

Next time you need to communicate with a JSON-RPC 2.0 server, you know where to turn. Next time you need to create an RPC server, I hope you look to JSON-RPC 2.0 as the underlying protocol. It’s clean and straightforward, and best of all, it’s got a great implementation already written and ready to use.

I wrote about stream_channel in my last article. In this article, I wrote about json_rpc_2, which uses stream_channel. Join me in two weeks when I build this layer cake a little higher and write about a package that uses json_rpc_2!

Friday, April 29, 2016

Dart in Education: Interview with Prof. Dr. Nane Kratzke

Nane Kratzke is a professor of Computer Science at the Lübeck University of Applied Sciences in North Germany. He conducts cloud computing research at the university’s Center of Excellence for Communications, Systems, and Applications (CoSA).

He also gives Computer Science courses — one of which uses Dart as a vehicle to teach web programming. We asked Nane about this.


[Off-topic] Is it true you were researching network warfare at some point in your past?

Yes, that is true. I was enlisted in the German Navy as a Navy officer, and during my military time I studied Computer Science at the University of the Federal Armed Forces in Munich, Germany. For about six years after my studies, I was involved as a software engineer, team leader, and project leader in several programs for command and control systems of German frigates.

After that, I worked as a consulting software architect for a German think tank consulting mainly the German Ministry of Defence. I did some research with the University of German Federal Armed Forces, Munich, concerning network-centric warfare (especially agent-based simulations) and consulted the German Ministry of Defence in questions of network-centric warfare and enterprise architecture management.

You lead a web technologies course in Lübeck. How did you design this course?

I would not say that the current course is ‘designed.’ It was more like an evolutionary process.

It started as a really standard course focusing mainly on HTML and server-side programming using PHP. Forms encoded in HTML, server-side form handling and that kind of thing. This setting is very limiting in terms of designing some fun or challenging tasks from a teaching perspective. No student ever told me that, but I am afraid this course was very boring.

I decided to let the course evolve by concentrating more on the client side using JavaScript. It turned out that focusing on the client side is much more interesting and challenging for students. It simply provides much more flexibility to create interesting tasks for students. Like developing a game for instance.

One year later, Dart came to my attention, and it seemed like a great language for a web technology course. I decided to give it a try.

The primary intent of the course is to let students develop something fun by applying some useful concepts of software engineering. Furthermore, the course follows a problem- and project-oriented way of learning and teaching. The theory lectures about web technologies are more like an introductory crash course on the very basics - just to draw the big picture. The students have to understand the web technology concepts in detail on their own, by solving a concrete problem as a team. So, an educational theoretician would likely categorize the course as a problem-based teaching approach embedded in a project-based context (or something like that).

Games are a good way to do this, in my opinion. Games let students forget that they are studying and that they are applying a lot of "boring stuff" like logic or modeling concepts. Notwithstanding, they learn a lot about using:

  • patterns (which they get to know in their 3rd semester),
  • modeling (game logic, which is a lot of formal logic — of which students are often afraid),
  • applying object orientation (which they get to know in their 1st and 2nd semesters but don’t have the chance to apply in a more complex setting),
  • separation of concerns,
  • and so on.

If learning can be fun, it should be. This is not possible in all computer science courses of course. But in web technology, there is a clear opportunity.

Your course introduces students to HTML, CSS, DOM, HTTP, REST, ... and Dart. Why not the more obvious choice — JavaScript?

The very basic idea dates back to my own studies of computer science. In my first semester, we had to attend a programming course. Some of us could already program; some of us could not. Our professors decided to ground us all. We learned to program using the "non-obvious" Haskell. None of us had even heard of that language at the time (nowadays, Haskell is much better known, even to students).

All of us had to start at the beginner level because of that. Looking back, this was one of the best decisions. Taking a non-obvious choice of programming language makes your students more equal. Grading is fairer, because you do not grade how good a student is in a specific programming language. Instead, students get graded for their general understanding of programming and software development. In my experience, knowledge of a specific programming language does not say very much most of the time.

How does Dart fare as a programming language for education?

I would not say that Dart fares better than any other language. Each language has its strengths and weaknesses. A preference for a programming language is a very personal point of view. I like Dart, I love Ruby (but it would be cool if Ruby had an optional type system ;-), I would like to do more with Haskell after decades of abstinence, and I am no real friend of Java (although I teach it in introductory programming courses).

But there is no objective reason for my preferences; they are personal preferences. It is simply not worth fighting these "language wars" in my eyes. A programming language has to fulfill a purpose.

So, the main reason for me experimenting with Dart was its "non-obviousness", to ground everyone at the same level. Besides that, there are some further, maybe more objective benefits:

  • You can use the same programming language on client and server side (which is a major advantage for web technology courses — you don’t need to introduce two or even more languages, which tends to be very time intensive).
  • You have a working dependency management system (pub). This solves a lot of nitty-gritty problems you have in a JavaScript course. Yes, there are solutions to do the same in JavaScript.
  • Dart is inspired in a lot of aspects by Java, and Java is our teaching language in the 1st and 2nd semester.
  • Dart is a class-based, object-oriented language and therefore much more familiar to most students (compared with the prototyping style of object-oriented programming known from JavaScript).

What are some ways in which Dart could be better at this?

Of course, more tutorials would be helpful for students. Especially in the German language.

Dart has some strange concepts (from a student perspective) like implicit interfaces and a missing "protected" access modifier for methods and data fields in object-oriented programming. So Dart might not be the best choice in these details. Also, concurrent programming and isolates are not easy to use from my point of view. I would really appreciate some more pragmatic concepts for parallelism in Dart, like a parallel map for streams. I even wrote a blog post about that some time ago.

However, I am afraid there never will exist a perfect programming language. ;-)

From a teaching perspective, it is very sad that the Dart team has stopped development of the Dart Editor. It was simply great to install an IDE with all dependencies working out of the box. Of course, the recommended IDE — WebStorm — is a great IDE as well, and it provides many more features. There are good reasons that the Dart team does not want to concentrate on supporting and developing an IDE. However, for a beginner, this introduces starting obstacles. A student new to Dart or even to web programming has to install an IDE, install the Dart SDK, and get the IDE configured to work with the SDK. These are not severe obstacles, but it was much easier to do the same with a one-click Dart Editor install. Students were instantly ready to program Dart.

Of course, there is the online IDE — DartPad — which is a great starting point for diving into Dart. But as a teaching institution, we need an offline-capable IDE installable on the machines in our labs and on students’ laptops or desktop systems at home. A solution must work even if the network connection is down for whatever reason. It would be really great if there were a one-click installable desktop version of DartPad. I love JetBrains — the makers of WebStorm and a lot of other great IDEs. But it is never a good feeling for a teacher to wonder whether the academic and classroom licenses will still be free next semester. I do not think JetBrains would drop its support for education. However, I like my courses to be self-sufficient, not relying on outside support and goodwill.

For those students who have had experience with another language (but who are — presumably — still junior), what is their approach to Dart?

Our students learn to program using Java in their first and second semester. So, the transfer to Dart is quite smooth. But implicit interfaces and a missing "protected" modifier are a bit hard for some students to accept.

Some confusion arises with the optional type system at the beginning of each course. A lot of students know dynamically typed languages like PHP or JavaScript and of course, all students know statically typed languages like C or Java. Because Java is our teaching language in the first and second semester, students are used to programming in a statically typed way.

The Dart code of my students makes it evident that their programming style changes over time. They start with a lot of static types and, week by week, the types disappear more and more, almost completely. But at the end of the course, types reappear in method definitions.

It is a fascinating process from a teaching perspective. This Dart style of coding is not requested for the course and is not considered for grading. It just happens by insight.

Another aspect is asynchronous programming. Because the code is still arranged in a sequence of lines, it is hard for many students to understand at first how async methods work. This feature is not as widespread in other languages, so a lot of students cannot transfer the async concept from another language to Dart, which makes it tricky for them to understand. However, using exactly this feature to implement action handlers for DOM-tree events is astonishingly intuitive for most students. This is still a bit weird for me, because students use a feature intuitively without understanding its concept. I think the reason might be that the Dart API is designed very carefully and with programmers’ intuitions in mind. But I do not know how much is intentional design and how much is just API designers’ "luck." However, it works for most students.

Part of the course is applying skills by building a simple, DOM-based web game. Why a game (as opposed to, say, a TODO app)?

I think a game is a great object of investigation for a computer science student. You have a non-trivial logic, which I think is much more complex than in a TODO app. You need an elegant user interface, hiding all irrelevant complexities of the game logic. And you have to separate these kinds of concerns.

Of course, you can develop a TODO app. But no one "loves" a TODO app. Almost everybody loves gaming.

I do not think that my students would share their TODO apps. But they are proud of their games and share them with their friends, family and via social networks. I started a hall of fame for the best games of every semester in my last course because I was asked to do so by my students. They simply want to share their game outcomes.

So, students are simply much more motivated when implementing a game. I have absolutely no clue how to motivate with a TODO app.

How many students do you have each semester? What is their feedback on the course?

There are about forty to sixty students each semester (steadily increasing over the last years). We have three or four corresponding practical classes and teams of three or four students within these courses. Each team has to develop a game which is often inspired by classic arcade games like Tetris, Boulder Dash, Pac-Man and so on. The teams can choose the game concept they want to implement.

As a supervisor, I only look in at five sprint meetings (concept, general architecture, alpha, beta, final) to ensure that all games across all teams have comparable complexity and are on track.

The course is very well accepted despite the fact that developing a game is hard work and you have to apply all of your software engineering skills to build a good game. I think the students do not see the work as intense, thanks to the fact that it’s a game they’re making.

I have a habit of spontaneously asking each team in their final presentation of the game whether they could add an additional feature before release. They only have one or two days left at that time. That way, I check how extensible and resilient their architecture is. It is a bit like the real world, where your customer has a new idea just when you were so happy to finish the project on time. The students do not have to do it. But most teams take the challenge. They are really motivated. Some of them even solved the problem by the time of the final sprint meeting. That impressed me, because it shows that they not only built a game and solved a problem, they know how to build software in an extensible and resilient way.

What would you recommend to other educators who are thinking about teaching web technologies (or programming in general) with Dart?

Personally, I think Dart would be a great language to learn programming. But a lot of curricula in computer science depend on statically typed languages for freshman students. That might be why most study programs rely on Java, I think. So, I am afraid there is little acceptance of such an approach.

Using Dart in a web technology course is much more practical for a teacher, for the simple reason that you can work with the same language on the client and the server side. And the language is intentionally designed to support both sides. As a teacher, you do not have to introduce two or even more languages.

Especially in web technology, there is a lot of previous - but heterogeneous - knowledge of technologies among students. Some students are PHP wizards; other students are capable of programming but have no experience in any web technology at all. This difference in skills is really challenging for a teacher. However, at the end of the course, all students should have roughly the same level of skills. I do not know any other course in a computer science study program (with the exception of introductory programming courses) where this issue is so distinctive.

A general recommendation is to keep the experienced students motivated by providing something new for them. This can be done using the "non-obvious" Dart instead of the well-known PHP/JS language combination. And let experienced students share their knowledge with the more inexperienced students. Project-based learning is a great way to do this.

However, I am afraid that over the years Dart will become more and more widespread. The first students are arriving in my courses having already programmed in Dart. So, it might be that I have to find a new, non-obvious language in a few years ;-)

Tuesday, April 26, 2016

Dart 1.16: Faster tools, updated HTML APIs

Dart 1.16 is now available. This release includes important updates to our tools.

Faster developer tools

In this release, we've been working closely with our users at Google to make sure Dart is even more productive for developers. We've optimized how Dartium loads applications, cutting the time it takes to open an application by up to 40%. We also continue to invest in faster code analysis and quicker JavaScript compile times. You should see improved performance in this and future releases.

Updated HTML APIs

In Dart 1.15 we updated Dartium from Chrome 39 to Chrome 45. In this release, we've updated our browser APIs – dart:html, dart:svg, etc. – to align with these changes. While most of these changes involve new and lesser-used APIs, you should check your application code to find and fix possible breakages.

And more...

The SDK changelog has details about all of the updates in Dart 1.16 SDK. Get it now.

Unboxing Packages: stream_channel

The stream_channel package is the youngest I’ve written about so far—the first version was published on 28 January 2016, only three months ago as I write this! But despite its youth, it fills a very important role in the Dart ecosystem by providing a common abstraction for two-way communication.

In my article on source_span, I wrote about how important it is for a package ecosystem to provide common conventions that can be used throughout the language. stream_channel is another great example of that. The core API it provides is extremely simple, just two getters and a set of rules for them to follow, but the ability for Dart code to implement protocols independent of the underlying implementation is profound.

abstract class StreamChannel<T> {
  Stream<T> get stream;
  StreamSink<T> get sink;
}
The test package uses StreamChannel to implement a protocol for running tests that works whether the tests are in an isolate, a separate process, or even an iframe in a browser window. The web_socket_channel package uses it to define a common API for WebSockets that works the same on all platforms. And having a common API means that it’s also possible to create common utility classes in the style of the async package.

The Rules

As nice and simple as the API is, it’s important to note that there’s more to being a stream channel. A channel is logically a single entity, which means that the two APIs it exposes—its stream and its sink—have to work in concert with one another. So there are a set of rules that all valid implementations of StreamChannel must follow. These rules are designed to make it easy for users to interact with stream channels without leaving resources dangling or errors unhandled.

They also model the behavior of the dart:io WebSocket class almost exactly. Since it’s the only core library API that provides a connected stream and sink, it inspired a lot of the initial stream channel design.


Single Subscription

The core SDK defines two types of streams: listen() may be called any number of times on a broadcast stream, but only once on a single-subscription stream. The first rule of stream channels is that the stream must be single-subscription. This is the default for streams, so most users will assume it unless stated otherwise, but it’s important to be explicit.
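As a quick illustration of what single-subscription means in practice (this snippet is mine, not from the package), a second listen() call throws a StateError:

```dart
import 'dart:async';

void main() {
  // A StreamController's stream is single-subscription by default, just
  // as a stream channel's stream is required to be.
  var controller = new StreamController<int>();

  controller.stream.listen((event) => print("got $event"));

  try {
    // A second listen() on a single-subscription stream is an error.
    controller.stream.listen(print);
  } on StateError catch (error) {
    print("second listen() failed: $error");
  }

  controller.add(1);
  controller.close();
}
```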

Stream Closure

Most of the rules have to do with the stream and/or the sink closing. This is no coincidence: the moment a channel is closed is one of the highest-risk times for logic errors, because resources can fail to be freed and events can happen in unexpected orders. The second rule addresses this: once the sink is closed, the stream closes without emitting any more events.

This means that the callback passed to stream.listen() won’t ever be called once sink.close() is called. It eliminates a whole class of potential bugs where an event could sneak in before the underlying channel was fully closed.
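Here’s a hedged sketch of that guarantee in action, using StreamChannelController (covered later in this article) as a stand-in for a real channel:

```dart
import 'package:stream_channel/stream_channel.dart';

void main() {
  var controller = new StreamChannelController();
  var channel = controller.foreign;

  channel.stream.listen((event) => print("event: $event"),
      onDone: () => print("stream closed"));

  // Rule two: after this close(), the listener above is guaranteed to
  // receive its done event and no further data events.
  channel.sink.close();
}
```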

Sink Closure

The third rule is the inverse of the second: once the stream closes, the sink silently drops all events. This means that if your code receives an onDone event from the stream, it doesn’t need to do any extra work to avoid calling sink methods; they just automatically don’t do anything, guaranteed.

This and the stream closure rule work together to ensure that both components of the channel agree about whether it’s open or closed. Having a single canonical state makes channels more straightforward to work with and reason about, and gives the user a consistent way to control and react to that state.
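A small sketch of the third rule, again using StreamChannelController (covered later in this article) so that both endpoints are local:

```dart
import 'package:stream_channel/stream_channel.dart';

void main() {
  var controller = new StreamChannelController();

  // The "remote" endpoint closes the connection...
  controller.local.sink.close();

  // ...so once the foreign stream reports done, rule three says these
  // sink calls are silently ignored rather than throwing.
  controller.foreign.stream.listen(null, onDone: () {
    controller.foreign.sink.add("dropped on the floor");
    controller.foreign.sink.close();
  });
}
```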

Subscription Canceling

The fourth rule is unusual in that it’s about a connection that shouldn’t exist between the stream and the sink. It says that canceling the stream’s subscription has no effect on the sink. This means that the only way for the channel to be closed locally is by calling sink.close()—code that’s dealing with the sink can be confident that only it (and the remote endpoint) is in charge of the channel’s connection.
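To sketch the distinction (again with StreamChannelController standing in for a real channel):

```dart
import 'package:stream_channel/stream_channel.dart';

void main() async {
  var controller = new StreamChannelController();
  var channel = controller.foreign;

  // Canceling the subscription stops local delivery of events...
  var subscription = channel.stream.listen(print);
  await subscription.cancel();

  // ...but by rule four the channel itself is still open: the remote
  // endpoint sees this event, and only sink.close() ends the connection.
  channel.sink.add("still connected");
  channel.sink.close();
}
```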

Error Bouncing

The fifth rule only applies to channels that don’t have a way of transmitting arbitrary errors to the remote endpoint. If a channel can’t transmit an error, it closes and forwards the error to the sink—in particular, to the sink.done future. Errors sent to a channel that can’t handle them are probably caused by bugs in the program, and forwarding them to done makes it possible to handle them without making them look like events from the remote endpoint.
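For example, a StreamChannelController created with allowForeignErrors: false produces a foreign channel that can’t transmit errors, so error bouncing kicks in:

```dart
import 'package:stream_channel/stream_channel.dart';

void main() {
  // With allowForeignErrors: false, the foreign channel has no way to
  // transmit errors, so an error added to its sink closes the channel
  // and surfaces through sink.done instead of reaching the other side.
  var controller = new StreamChannelController(allowForeignErrors: false);

  controller.foreign.sink.done.catchError((error) {
    print("channel-level error: $error");
  });

  controller.foreign.sink.addError(new StateError("oops"));
}
```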

Early Closure

The sixth and final rule is less of a requirement and more of a guideline. If the stream closes before it has a listener, the sink should silently drop all events, if possible. This is tricky because the connection may not be established at all until the stream has a listener, and if it’s not established there might be no way to tell when the stream closes. But where it is possible, this ensures that no events are sent over a channel that the user expects to be closed.

Premade Stream Channels

Even in the few short months that stream_channel has existed, there are already a few classes that implement the interface to wrap commonly-used two-way communication channels.


IsolateChannel

The IsolateChannel class is part of the stream_channel package. Its default constructor, new IsolateChannel(), simply wraps an existing ReceivePort and SendPort in a stream channel.

That’s great if you already have both ports, but what if you’re establishing the initial connection? Anyone who’s written a bunch of isolate code knows what a pain it is to correctly do the dance of sending a port that sends back another port to establish a two-way connection. To make that easier, IsolateChannel provides two utility constructors. new IsolateChannel.connectReceive() takes a ReceivePort, and new IsolateChannel.connectSend() takes the attached SendPort. They then use an internal protocol to connect ports going the other direction so you don’t have to worry about it.

/// Spawns a worker isolate and returns a [StreamChannel] for communicating with
/// it.
Future<StreamChannel> spawnWorker() async {
  var port = new ReceivePort();
  await Isolate.spawn(worker, port.sendPort);
  return new IsolateChannel.connectReceive(port);
}

/// The entrypoint for the worker isolate.
void worker(SendPort port) {
  var channel = new IsolateChannel.connectSend(port);
  // ...
}


WebSocketChannel

This isn’t strictly part of the stream_channel package, but the WebSocketChannel class defined in the web_socket_channel package is another great example of a stream channel. It has two concrete implementations: IOWebSocketChannel wraps the WebSocket class from dart:io, whereas HtmlWebSocketChannel wraps the one from dart:html.

Both implementations have one constructor that wraps the underlying class, as well as another that opens a new connection. Otherwise, they provide pretty much the same API (with one platform-specific getter).

WebSocketChannel is particularly interesting because it’s not just a vanilla StreamChannel. It provides a few additional APIs: protocol reports the subprotocol if one was negotiated, and if the socket is closed by the remote endpoint, closeCode and closeReason indicate why. Its sink is also a custom subclass whose close() method allows the user to specify their own code and reason.
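To make that concrete, here’s a hedged sketch using IOWebSocketChannel; the URL and subprotocol are placeholders, not a real service:

```dart
import 'package:web_socket_channel/io.dart';

void main() {
  // Hypothetical server URL and subprotocol, for illustration only.
  var channel = new IOWebSocketChannel.connect("ws://localhost:8080",
      protocols: ["chat.example.v1"]);

  channel.stream.listen(print, onDone: () {
    // The negotiated subprotocol (if any), plus the remote endpoint's
    // close code and reason once the socket shuts down.
    print("protocol: ${channel.protocol}");
    print("closed: ${channel.closeCode} ${channel.closeReason}");
  });

  // The custom sink's close() lets us supply our own code and reason.
  channel.sink.close(3000, "done talking");
}
```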

Creating New Stream Channels

If you need a stream channel for a different kind of underlying communication channel, you may need to create your own. You could just call new StreamChannel() with a stream and a sink, but be careful: if your stream and sink don’t satisfy the rules I described above, you’re liable to run into some really tricky bugs.

The new StreamChannel.withGuarantees() constructor is much safer, at the cost of providing some extra layers of wrapping. It ensures that, regardless of the behavior of the stream and sink that are passed in, the ones exposed by the channel satisfy all the rules.
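For example, a hypothetical helper that wraps a raw stream and sink from some transport we don’t control might look like this:

```dart
import 'dart:async';

import 'package:stream_channel/stream_channel.dart';

/// Wraps a raw [stream]/[sink] pair from some underlying transport.
///
/// However the underlying objects behave, the wrappers added by
/// [StreamChannel.withGuarantees] make the resulting channel obey all
/// the stream channel rules.
StreamChannel<String> wrapTransport(
    Stream<String> stream, StreamSink<String> sink) {
  // new StreamChannel(stream, sink) would trust the caller completely;
  // withGuarantees() is the safer default for unvetted transports.
  return new StreamChannel.withGuarantees(stream, sink);
}
```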


StreamChannelController

If you don’t have a preexisting stream and sink, you can create a channel from scratch using the StreamChannelController class. A controller exposes two stream channels. The code managing the controller interacts directly with the local channel, which is connected to the foreign channel, meant to be returned for external code to use.

/// Returns a [StreamChannel] that communicates over [port].
StreamChannel messagePortChannel(MessagePort port) {
  var controller = new StreamChannelController(allowForeignErrors: false);

  // Pipe all events from the message port into the local sink...
  port.onMessage.listen((message) => controller.local.sink.add(message.data));

  // ...and all events from the local stream into the message port.
  controller.local.stream.listen(port.postMessage, onDone: port.close);

  // Then return the foreign channel for your users to use.
  return controller.foreign;
}
StreamChannelController automatically ensures that the stream channel rules are satisfied for both the local and remote channels. The allowForeignErrors parameter to the constructor controls how error bouncing is handled. By default, errors are passed straight from the foreign channel to the local one. But if there’s no way to deal with those errors, allowForeignErrors: false can be passed to forward those errors to foreign.sink.done instead.


MultiChannel

I’ll finish this article by talking about one of the coolest stream channel utility classes. MultiChannel allows multiple independent virtual channels to communicate over a single underlying channel—it’s similar to having multiple SendPorts all communicating with different parts of the same isolate, but it works across any channel at all.

The MultiChannel is itself a stream channel, and it’s usually used to establish the initial connection. But the most important part of the class is its virtual channels, which are created using the virtualChannel() method. Each VirtualChannel provides an opaque id that the remote endpoint can pass to virtualChannel() to create its own virtual channel connected to the local one.

/// Serializes [test] into a JSON-safe map.
Map serializeTest(MultiChannel channel, Test test) {
  // Create a virtual channel for the test so that the remote endpoint can tell
  // us to run it.
  var testChannel = channel.virtualChannel();
  testChannel.stream.listen((message) async {
    assert(message['command'] == 'run');
    testChannel.sink.add({"result": await test.run()});
  });

  return {
    "type": "test",
    "name": test.name,
    "channel": testChannel.id
  };
}

/// Deserializes [test] into a concrete [Test] class.
Test deserializeTest(MultiChannel channel, Map test) {
  // Create a virtual channel connected to the one created in [serializeTest].
  var testChannel = channel.virtualChannel(test['channel']);
  return new Test(test['name'], testChannel);
}

The underlying stream is only closed once the initial MultiChannel and all virtual channels are closed. This lets the channels remain fully independent, but it also means it’s important to be scrupulous about closing them when their job is done.

Sink or Stream

This isn’t quite everything in the stream_channel package, but it covers the most important parts. You’ll just have to check out the API docs for the rest! And next time you need two-way communication, you know where to look.

Join me next week when I talk about a package that’s built on top of stream_channel.