Thursday, July 28, 2016

Dart 1.18: Laying foundations

Dart 1.18 is now available. Go get it!

The team has been focused on implementation details over the last six weeks. The API changes to the SDK are very light – see the CHANGELOG – but we have been working hard laying the foundation for a number of important projects.

Download the latest release. Let us know what you think!

Wednesday, July 20, 2016

AngularDart is going all Dart

Until now, the multiple language flavors of Angular 2 were written as TypeScript source, and then automatically compiled to both JavaScript and Dart. We're happy to announce that we’re splitting the Angular 2 codebase into two flavors – a Dart version and a TypeScript/JavaScript version – and creating a dedicated AngularDart team.

This is amazing news for Dart developers because:

  • The framework will feel more like idiomatic Dart.
  • It will make use of Dart features that couldn't work with the TypeScript flavor.
  • It will be faster.

This is equally great news for our TypeScript and JavaScript developers, by the way. Cleaner API, performance gains, easier contributions. Read more on the Angular blog.

Angular 2 for Dart is used by many teams at Google. Most famously by the AdWords team, but many other Google teams build large, mobile-friendly web apps. Some of the top requests from these teams were: make the API feel like Dart, provide a faster edit-refresh cycle, and improve application performance.

That’s exactly what we aim to do. We believe we can significantly improve both the performance and usability of AngularDart. For example, in the two weeks since we started work on the pure Dart version, we were already able to unleash strong mode on the code, significantly improving its quality (fixing 1000+ warnings).

One more thing

We're also happy to announce plans to release our library of Material Design components for Angular 2 Dart. These components are built purely in Dart, and they’re used in production Google apps. Watch for updates.

Here’s just one of the many Angular 2 Dart components we plan to release:

Dart was designed "batteries included" – it’s not just a programming language, but also a set of stable libraries, solid tools, a great framework — and soon, a repository of battle-tested UI widgets.

The components aren’t ready to be publicly released yet, but if you were thinking about learning Dart, now is a good time to start. By the time you’re ready to build production apps, you’ll have the building blocks at your disposal.

Dig into AngularDart

angular2 2.0.0-beta.18 is now available on the pub site. You can look at the source, file issues and create pull requests at dart-lang/angular2. If you already use AngularDart in a project, you can use pub upgrade now to get the latest version. Please join the angular2 Dart group to ask general questions.

Wednesday, July 13, 2016

Changes at dartlang.org

Today we simplified dartlang.org, making it reflect the current state of the project a little bit better.

We have dartlang.org for the fundamental Dart technologies—the language itself and the core libraries. And then we have separate websites for the different targets.
Some other changes we made:
  • Feature the pages that people visit most often.
  • Show the core goals of the project on the homepage.
  • Completely rework the information architecture, from domains down to individual pages and sections.
  • Set up for hosting event-related micro-sites.
  • Reimplement the sites to make maintenance easier.
More significant changes will come, but we needed to land these changes before going further.
If you notice that something's broken or could just be better, please let us know using the relevant issue tracker.

Wednesday, June 15, 2016

Unboxing Packages: path

I want to do something a little different with my blog post this week. When I’ve written about packages in the past, I’ve mostly done a high-level overview of their APIs and how they fit into the Dart ecosystem as a whole. But path is one of the very oldest packages in the ecosystem, and any Dart user who’s written any server-side or command-line apps is probably already familiar with the API.

So instead of a high-level overview, I want to do a deep dive. I want to talk about why we made the design decisions we made when writing path, and how we implemented our design effectively and efficiently. This post will be as much about how the package was constructed as it is about what the final product looks like.

Initial Design

It first became clear that Dart needed a solid solution for path manipulation when Bob Nystrom and I started working on pub. Paths may seem simple on their face, but there’s a lot of hidden complexity when you need to make them work with all the edge case formats that can crop up across all the operating systems we support.

This became our first design constraint: make something that handles all the edge-cases. This is less obvious than it sounds: a lot of times, good design involves sacrificing some edge-case behavior to make the common case better, or even just simpler to implement. But we knew that path would be widely-used across the ecosystem, and we wanted users to be totally confident in it. If an application had to sanitize its paths before handing them off to us, we weren’t doing our job.

Another important early decision was to make the core API use top-level methods. We often look at other languages’ APIs for inspiration, but they were split on this point. Node uses top-level functions, whereas Java uses instance methods on a Path class. Ruby uses static methods for simple manipulation and a Pathname class for more complex ones. This didn’t provide clear guidance.

We decided to rely on a rule of thumb: only create a class when it’s the canonical representation of its data type¹. There were already a bunch of APIs, both in the core and in external code, that logically took paths and accepted only strings, not our hypothetical Path objects. Certainly everywhere the end user supplied a path, that path would be made available to the program as a string.

So we decided to go with the flow of the existing APIs and continue representing paths as strings. All the path manipulation APIs now take strings and return strings, and the world is simpler for it.

We chose functions for the package based on a combination of our own needs and APIs that were common among other languages’ path manipulation suites. Some of them, like join() and relative(), were pretty obvious. Others like rootPrefix() only became apparent because they filled holes in actual code. And a few, like prettyUri(), only got added well after the package was released.
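A few of those functions in action. This sketch uses the package's posix context so the results don't depend on the host platform:

```dart
import "package:path/path.dart" as p;

void main() {
  // Joining and relativizing paths.
  print(p.posix.join("foo", "bar", "baz")); // foo/bar/baz
  print(p.posix.relative("/root/a/b.dart", from: "/root")); // a/b.dart

  // The root prefix of an absolute path, and the last component of any path.
  print(p.posix.rootPrefix("/usr/lib")); // /
  print(p.posix.basename("/usr/lib/libfoo.so")); // libfoo.so
}
```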

The Quest for Correctness

We wanted to make our users confident in the correctness of path’s logic, which meant we had to be confident ourselves first. To do this, we wrote tests. Lots and lots of tests. Today, the package has 2.5 times more lines of test code than implementation code, and that’s how we like it.

Writing tests isn’t trivial, though. We had to be careful to include all the cases that came up in practice. This meant that, for every function where they were relevant, we tested combinations of:

  • Directory paths that did or did not end in separators.
  • Paths with zero, one, or two extensions.
  • Paths with multiple separators in a row.
  • Paths containing, or entirely composed of, the directory traversal operators "." and "..".
  • Absolute and relative paths.
  • Different formats of the current working directory.

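Here's a sketch of what such cases look like in test form. These specific expectations are illustrative, not copied from path's actual suite:

```dart
import "package:path/path.dart" as p;
import "package:test/test.dart";

void main() {
  test("join ignores a trailing separator", () {
    expect(p.posix.join("foo/", "bar"), "foo/bar");
  });

  test("split collapses repeated separators", () {
    expect(p.posix.split("foo//bar"), equals(["foo", "bar"]));
  });

  test("extension reports only the last extension", () {
    expect(p.posix.extension("archive.tar.gz"), ".gz");
  });
}
```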
We wrote those tests first for the Posix style of path, which is used by OS X and Linux. Then we ported them over to Windows paths², and added even more cases:

  • Windows supports both / and \ as separators, so we tested both and their combinations.
  • Not only does Windows support C:\-style path roots, it supports \\server\share\-style UNC paths as well.
  • You can also start a path with \ in Windows to indicate that it’s relative to the current working directory’s root.
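All three Windows root formats show up in path's rootPrefix() (using the windows context, so this runs on any OS):

```dart
import "package:path/path.dart" as p;

void main() {
  print(p.windows.rootPrefix(r"C:\Windows"));          // C:\
  print(p.windows.rootPrefix(r"\\server\share\file")); // \\server\share
  print(p.windows.rootPrefix(r"\relative-to-root"));   // \
  print(p.windows.rootPrefix("relative"));             // (empty string)
}
```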

Determining the proper behavior for all of these involved looking up specifications online, manually testing path behavior on the command line, and a healthy amount of discussion about exactly the right way to handle edge-cases. These discussions led to our next round of design decisions.

Not all paths are valid. Sometimes they run afoul of an operating system’s rules for valid characters, and sometimes they just don’t make sense at all—consider the path /.., for example, or just an empty string. I initially advocated for throwing errors in these cases since in general failing fast is good, but we discussed options and Bob convinced me that path operations should never fail³.

While failing fast can make errors easier to track down, it also means that a defensive programmer has to be aware of the potential for failure anywhere it could occur. Path operations are frequently used in small utility methods that aren’t expected to fail, and most of the time their output is ultimately passed to IO operations which already need error handling.

So instead of throwing an error, the path operations just do the best they can on meaningless input. For most operations, /.. is considered the same as / and the empty path is considered the same as ., but we don’t work too hard to adhere to these definitions if it would get in the way of efficiently processing valid paths.

We also had to figure out what to do with paths that contained irrelevant characters, like foo//bar or foo/./bar, both of which are semantically identical to foo/bar. We ended up deciding to preserve the existing format as much as possible. The user would be able to explicitly call normalize() if they wanted clean paths, but otherwise they’d get something like what they passed in.

This decision made it easier to interoperate with other software that did, for whatever reason, care about the exact format of a path. For example, code using less-robust path manipulation logic might not be able to tell that foo/baz/qux was within foo/bar/../baz, so it’s useful for p.join("foo/bar/../baz", "qux") to return "foo/bar/../baz/qux".
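Both decisions are easy to see in the package's behavior (again using the posix context for reproducibility):

```dart
import "package:path/path.dart" as p;

void main() {
  // Meaningless input degrades gracefully instead of throwing.
  print(p.posix.normalize("/..")); // /
  print(p.posix.normalize(""));    // .

  // Redundant characters are preserved until you explicitly normalize.
  print(p.posix.join("foo/bar/../baz", "qux"));   // foo/bar/../baz/qux
  print(p.posix.normalize("foo/bar/../baz/qux")); // foo/baz/qux
}
```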

Platforms and Customization

Paths are unusual in that their semantics are deeply platform-specific, but following those semantics mostly doesn’t actually require running the code on the platform in question. We wanted to take advantage of this to allow users to do path manipulations for platforms they weren’t using, but we also wanted to make the easy default use the current platform. This called for more design.

We came up with the idea of a Context object, which would take a style of path (Posix, Windows, or eventually URI) and the only OS-specific piece of data path manipulation used—the current directory path. Context had a set of methods that exactly mirrored the top-level functions in path. In fact, path’s functions just forward to a context!

We used contexts heavily in our own tests. They allowed us to run Windows path tests on Linux, for example, and to test operations like relative() without having to make any assumptions about the current directory.
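For example, a Windows-style context with a fixed current directory behaves identically on any OS (the directory here is arbitrary):

```dart
import "package:path/path.dart" as p;

void main() {
  var context = new p.Context(
      style: p.Style.windows, current: r"C:\Users\natalie");

  print(context.join("Documents", "notes.txt")); // Documents\notes.txt
  print(context.absolute("Documents")); // C:\Users\natalie\Documents
  print(context.relative(r"C:\Users\natalie\Documents")); // Documents
}
```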

While adding contexts, we also made a design decision that turned out to be a mistake in retrospect. We’d defined a Style enum for determining which platform style a context should use, which would have been fine if we hadn’t decided to make public the methods Context called on the style.

We had a vague notion that this would allow third-party packages to define custom styles, but no one ever did. Even if they’d wanted to, different path styles are so idiosyncratic that they probably couldn’t have encoded all the custom logic in the methods we provided. So instead we had a bunch of public API surface that was tightly coupled to the internal implementation of path manipulation.

Eventually the implementation needed tweaking in a way that affected the Style methods. We couldn’t change those methods, so instead we deprecated them and added an internal implementation of Style where we could add new methods privately. The lesson here is that sometimes maximal extensibility isn’t worth the pain.

Making it Fast

When we first implemented path, we were primarily concerned with correctness and not speed. Our philosophy was (and is) to avoid optimizing packages until we have a clear idea of what parts are slowest and used most heavily. If the package started out correct and well-tested, we could be sure that any performance improvements later on preserved the necessary behavior.

But eventually the time came to make those changes. Users were doing path manipulations in performance-critical loops, and that meant it had to be fast. We set up a benchmark so we could track our progress, and used Observatory to see exactly what parts of our code were taking the most time. Then we called in Anders Johnsen, one of our resident performance experts who’s since moved on from Google, to see what he could do.

It turned out he could do a lot! Not only did our code get faster, we learned quite a bit about strategies for keeping it fast.

The first change was to avoid parsing the whole path. Our original code heavily used an internal ParsedPath class that eagerly parsed the entire path and exposed its components as fields. We still use this class for particularly complex functions, but for anything simple and performance-critical, we now deal with the string directly. This removes a lot of unnecessary work and allocations.

The second change was to stop using regular expressions. At the time, Dart’s regular expression engine was very slow. It’s since been dramatically improved, but explicit string operations still tend to involve a lot less overhead. We had been using regexps for very simple operations anyway, so switching away from them ended up being pretty straightforward.

Finally, we had to short-circuit early when possible. A lot of path operations were very complex in the worst case—they required a lot of logic and maybe even iteration over the whole path. But the worst case didn’t actually come up all that often, and it turned out to be pretty easy to detect when it didn’t. For example, Windows paths can have a lot of different roots, which makes finding the root difficult. But if the path starts with /, then it’s guaranteed to be a root-relative path, so the root is "/". These sorts of checks may seem nit-picky, but they helped a lot.
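Pulling those techniques together, here's a sketch of what such a fast root check might look like: direct character comparisons instead of regexps, and an early return for the cheap root-relative case. This is illustrative only, not path's actual source:

```dart
/// A sketch (not path's real implementation) of finding a Windows
/// path's root prefix, returning "" for relative paths.
String windowsRootSketch(String path) {
  bool isSep(int c) => c == 0x2f /* / */ || c == 0x5c /* \ */;

  if (path.isEmpty) return "";

  // Short-circuit: a path starting with "/" is root-relative, so the
  // root is just "/". No further parsing needed.
  if (path.codeUnitAt(0) == 0x2f) return "/";

  // Drive-letter root, e.g. C:\ or C:/.
  if (path.length >= 3 &&
      path.codeUnitAt(1) == 0x3a /* : */ &&
      isSep(path.codeUnitAt(2))) {
    return path.substring(0, 3);
  }

  // UNC root, e.g. \\server\share.
  if (path.length > 2 &&
      isSep(path.codeUnitAt(0)) &&
      isSep(path.codeUnitAt(1))) {
    var i = 2;
    while (i < path.length && !isSep(path.codeUnitAt(i))) i++;
    i++; // past the separator between server and share
    while (i < path.length && !isSep(path.codeUnitAt(i))) i++;
    return path.substring(0, i > path.length ? path.length : i);
  }

  // A lone "\" is also root-relative.
  if (isSep(path.codeUnitAt(0))) return r"\";

  return "";
}
```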

Coming Up For Air

I tried something new today, and I’m curious what you thought. Leave me feedback in the comments if you’d like to see more deep dives, or if you prefer my previous articles that gave a high-level overview of package APIs. For my part, I like writing both of them, so I’d be happy to continue doing a mix in the future.

Join me again in two weeks when I cover a very generalized package that’s only used in one place so far.

Natalie is still doing the writing here. I’m just hitting publish. – Kevin

1: If it were up to me, the File class’s methods would be top-level functions as well.

2: We also support url paths, but these were added later on.

3: There’s one exception to this rule: path calls Uri.base to figure out the working directory for operations like relative(), which is an IO operation and so could theoretically fail. In practice, though, it basically never does.

Thursday, June 9, 2016

Dart 1.17: More performance improvements

Dart 1.17 is now available. Get it now!

We continued the work from 1.16, working closely with some of our key users at Google to make sure Dart is even more productive for developers.

We've additionally optimized how our core tools deal with large applications, and have seen significant improvement over the last two releases. Dartium is now much more stable. We have improved the speed of the Dart Analyzer by more than 200% on large codebases. Last, but not least, Dartium startup time on large applications at Google has improved by a factor of 4.

We also made a number of smaller changes to core SDK APIs; please refer to the SDK changelog.

Tuesday, May 17, 2016

Unboxing Packages: vm_service_client

Three weeks ago, I wrote about the stream_channel package. Two weeks ago, I wrote about the json_rpc_2 package which is built on top of stream_channel. This week I’ll complete the trifecta by writing about the vm_service_client package, which uses json_rpc_2 in turn—and is a really cool package in its own right!

One of the lesser-known corners of the Dart VM is its service protocol, but it’s one of its most powerful components. It uses JSON-RPC 2.0 over WebSockets to allow clients to connect to the VM, inspect its internal state, set breakpoints, and all sorts of neat stuff. If you’ve ever used Observatory for debugging or profiling your Dart code, you’ve been using the service protocol under the hood: that’s how the Observatory web app talks to the VM’s internals.

Because the protocol is fully documented and based on a standard underlying protocol, it’s possible for anyone to use from their code. And the vm_service_client package makes it downright easy: it provides a Dart-y object-oriented API for (more or less) everything in the protocol. And that turns out to be a lot of stuff: I count 108 classes in the API documentation, with more added over time as the protocol adds new features.

Because the client’s surface area is so broad, I’m not even going to attempt to cover all of it. I’ll discuss the most important classes, of course, but I also want to touch on the broader picture of how we took a language-independent RPC protocol and created a native-feeling API to use it.

Connecting the Client

Before we go deep into the API, though, let’s start at the beginning: actually establishing a connection with a running VM instance. The first step is to get the VM to actually run the service protocol at all. If you run dart --enable-vm-service, it will listen for WebSocket connections on ws://localhost:8181/ws (by default).¹ You can also force a running Dart process to start the VM service by sending it a SIGQUIT signal, as long as you’re not on Windows, but that’s a lot less reliable and customizable.

Once the service is running, you can connect a client to it using new VMServiceClient.connect(). This takes the service protocol’s WebSocket URL as either a string or a Uri—and, for convenience, it can also take Observatory’s HTTP URL, which it will use to figure out the corresponding WebSocket URL. If anything goes wrong, it’ll be reported through the done future.

import "package:vm_service_client/vm_service_client.dart";

main(List<String> args) async {
  var url = args.isEmpty ? "ws://localhost:8181/ws" : args.first;
  var client = new VMServiceClient.connect(url);

  // ...
}

References

Almost every piece of data the service protocol provides comes in two varieties: the reference and the full version. The reference contains a little bit of metadata, as well as enough information to make an RPC that will provide the full version, which contains all available information.

This split accomplishes two things. It makes the responses much more compact by avoiding unnecessary metadata. More importantly, though, it allows for circularity. A library can refer to the classes it contains, which can in turn refer back to the library that contains them.

In the client, all reference classes end in Ref, whereas their full versions do not. So VMLibraryRef is a reference to a VMLibrary. The full versions extend their corresponding references, so you can pass a VMLibrary to a method that expects a VMLibraryRef. Every reference has a load() method that returns its full version.

This is an important place where the client makes the API feel more native. Because the service protocol can’t really overload an RPC based on its argument type, it has to have different calls for resolving references to different types of objects. But as a Dart client, we can provide an extra degree of uniformity. This is a pattern that comes up several times throughout the client.

Runnable Isolates

Isolates are a special case: they have three states instead of just two. There’s VMIsolateRef and VMIsolate, but neither of these will provide all the isolate’s information. For that you need VMRunnableIsolate.

The extra layer exists because the VM loads an isolate in stages. First it creates the isolate with simple metadata like its name and its start time. But it needs to do a bunch of additional work, loading libraries and classes and stuff like that, before it can actually run any code in the isolate. Only once all that work is done is a VMRunnableIsolate available.

You can use the VMServiceClient.onIsolateRunnable stream to get a notification when the isolate you care about is runnable, but if you already have a reference to an unrunnable version there’s an easier way. VMIsolateRef.loadRunnable() returns the runnable version once it’s available, and does so in a way that’s guaranteed to be safe from race conditions.

Once you have a connection to the service, you need to be able to find the part you want to interact with. I don’t mean looking up its API docs, I mean actually getting to it from your VMServiceClient object. For the most part, the VM service is organized hierarchically: the client’s getVM() returns the VM, the VM exposes its isolates, each isolate exposes its libraries, and each library exposes its classes.

Note that many of these getters are maps. This comes entirely from the client: the service protocol always sends collections down as lists for compactness. But it’s useful for users to be able to look up libraries by their URIs or classes by their names, so the client converts the lists into the data structure that best fits with how we expect users to interact with the data.

Let’s take a look at how you’d walk down that hierarchy to reach the File class from dart:io, the first step in setting a breakpoint in one of its methods:

var vm = await client.getVM();
var isolate = await vm.isolates.first.loadRunnable();
var library = await isolate.libraries[Uri.parse("dart:io")].load();
var file = await library.classes["File"].load();

Instances

Code deals with data, so the VM service needs a way to represent data, and that way is the Instance class. Like most types, Instances have references and full values—these are represented in the client as VMInstanceRef and VMInstance.

On its own, VMInstance only provides two pieces of information. Its klass getter returns the VMClassRef representing the instance’s class, and its fields getter returns its fields and their values. In practice, though, many instances include more information—the service provides extra information for many core library types, which the client represents as classes like VMIntInstance and VMListInstance.

The client also provides VMInstanceRef.getValue(), a convenience method for converting instances to local Dart objects. Every VMInstanceRef subclass overrides this to recreate their particular type of object. It also takes an optional onUnknownValue() callback to which plain instances—including those in data structures—are passed to be converted into local values based on the caller’s logic.

Evaluating Code

The VM service doesn’t just let you look at the VM’s state, it lets you run code on it as well! A few different classes have evaluate() methods that take strings to evaluate and return VMInstanceRefs. Where you evaluate the code determines what names are accessible.

  • VMLibraryRef.evaluate() runs code in the context of a library. This lets the code access anything the library has imported, as well as any of its private names.
  • VMClassRef.evaluate() runs code in the context of a class. This is mostly the same as the library context, except that the code can refer to static class members without needing to prefix them with the class’s name.
  • VMInstanceRef.evaluate() runs code in the context of an instance. This means the code can refer to the instance’s fields and to this.
  • VMFrameRef.evaluate() can be used when an isolate is paused to run code in the same context as one of the current stack frames.
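As a sketch of how evaluation and getValue() fit together, here's a helper that evaluates an expression in an isolate's root library and pulls the result back as a local Dart value. This assumes you already have a connected client and a loaded VMRunnableIsolate:

```dart
import "dart:async";

import "package:vm_service_client/vm_service_client.dart";

/// Evaluates [expression] in [isolate]'s root library and returns the
/// result converted to a local Dart value.
Future evalInRootLibrary(VMRunnableIsolate isolate, String expression) async {
  var library = await isolate.rootLibrary.load();
  var instanceRef = await library.evaluate(expression);
  return instanceRef.getValue();
}
```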

This is another example of the client using the same API to draw similarities between different parts of the underlying protocol. Because the client has the well-known unifying metaphors of objects and methods, it’s able to take disparate APIs and expose them in a consistent way that isn’t possible using raw RPCs.

Go Forth and Make Something Cool

In some ways, the VM service client is the most exciting package I’ve written about yet. It’s designed to make available a whole bunch of internal VM functionality, and the only limit on what can be done with that functionality is your imagination. So take this package and make something cool. Make a REPL or a visual object inspector. Heck, make an entirely new debugger!

Join me again in two weeks when I write about one of the oldest and most fundamental packages in the entire Dart ecosystem.

  1. Humans interacting directly with Observatory usually pass the --observe flag instead of --enable-vm-service. This will also enable the service, but it also turns on a handful of other options, the exact set of which is subject to change. It’s much safer to use --enable-vm-service when writing code to interact with the VM.

Wednesday, May 4, 2016

Unboxing Packages: json_rpc_2

Last week I wrote about the stream_channel package for two-way communication, so this week it seemed natural to move to a package that uses it: json_rpc_2. This is an implementation of the JSON-RPC 2.0 specification, which is a popular protocol for providing structure and standardization to WebSocket APIs.

Although it’s most commonly used with WebSockets, the protocol itself is explicitly independent of the underlying transport mechanism. This makes it a great fit for stream channels, which can be used to represent a two-way stream of JSON objects in a way that works with any underlying mechanism. Thanks to stream channels, JSON-RPC 2.0 can be used across WebSockets, isolates, or any channel a user chooses to wrap.

Shared APIs

There are three main classes in json_rpc_2: Client makes requests and receives responses, Server handles requests and returns responses, and Peer does both at once. Because all of these involve two-way communication, they all have the same two constructors. The default constructor takes a StreamChannel<String> where each string is an encoded JSON object, and automatically decodes incoming objects and encodes outgoing ones. On the other hand, if you want to communicate using decoded maps and lists, you can use the withoutJson() constructor, which only requires that the objects be JSON-compatible.

The three classes also have the same lifecycle management. In order to give the user time to set up request handlers or enqueue request batches, they don’t start listening to the stream channel until listen() is called. Once it is, it returns a future that completes once the channel has closed—also accessible as the done getter. And if the user wants to close the channel themselves, they can call close().
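Here's a sketch of that lifecycle using a Client (a Server or Peer works identically). The channel can come from any transport, and "ping" is a hypothetical method name:

```dart
import "dart:async";

import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:stream_channel/stream_channel.dart";

Future useClient(StreamChannel<String> channel) async {
  var client = new rpc.Client(channel);

  // Nothing is sent or received until listen() is called.
  client.listen();
  client.done.then((_) => print("connection closed"));

  // "ping" is hypothetical; whatever the server returns comes back
  // through the future.
  print(await client.sendRequest("ping"));

  // Close the channel from our side when we're finished.
  await client.close();
}
```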

Client

The Client class is in charge of making requests of a server. The core method for this is sendRequest(), which takes a method (the name of the remote procedure to call) and parameters to pass to that method.

The structure of these parameters depends what the server accepts. JSON-RPC 2.0 allows both positional parameters, which are passed as an Iterable of JSON-safe objects, and named ones, which are passed as a Map from string names to JSON-safe values. The parameters can also be omitted entirely if the method doesn’t take any.

The call to sendRequest() returns a future that completes with the server’s response. The protocol defines two types of response: “success” and “error”. On a success, the server returns a JSON-safe object which the sendRequest() future emits. On a failure, the server returns an error object with associated metadata. This metadata is wrapped up as an RpcException and thrown by the future.

import 'dart:async';

import 'package:json_rpc_2/json_rpc_2.dart' as rpc;
import 'package:web_socket_channel/web_socket_channel.dart';

/// Uses the VM service protocol to get the Dart version of a Dart process.
/// The [observatoryUrl] should be a `ws://` URL for the process's VM service.
Future<String> getVersion(Uri observatoryUrl) async {
  var channel = new WebSocketChannel.connect(observatoryUrl);
  var client = new rpc.Client(channel);
  client.listen();

  // getVM() returns an object with a bunch of metadata about the VM itself.
  var vm = await client.sendRequest("getVM");
  return vm["version"];
}

If you don’t care whether the request succeeds, you can also call sendNotification(). JSON-RPC 2.0 defines a notification as a request that doesn’t require a response, and a compliant server shouldn’t send one at all. Notifications are commonly used by peers for emitting events, but I’ll get to that later.

JSON-RPC 2.0 also has a notion of batches, where a bunch of requests are sent as part of the same underlying message. The server is allowed to process batched requests in whatever order it wants, but it’s required to send the responses back as a single message as well. This can use less bandwidth if you have a bunch of requests that don’t have strong ordering needs.

The json_rpc_2 client lets the user create batches using the withBatch() method. This takes a callback (which may be asynchronous), and puts all requests that are sent while that callback is running into a single batch. This batch is sent once the callback is complete.
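For example, assuming a connected client and hypothetical "sum" and "product" methods on the server, both requests below go out in a single batch:

```dart
var sum, product;
await client.withBatch(() {
  sum = client.sendRequest("sum", [1, 2, 3]);
  product = client.sendRequest("product", [4, 5, 6]);
});

// The batch was sent when the callback finished; the responses resolve
// the individual futures as usual.
print(await sum);
print(await product);
```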

Server

The Server class handles requests from one or more clients. Its core API is registerMethod(), which controls how those requests are handled. It just takes a method name and a callback to run when that method is called. The value returned by that callback becomes the result returned to the client.

import "package:json_rpc_2/json_rpc_2.dart" as rpc;
import "package:shelf/shelf_io.dart" as io;
import "package:shelf_web_socket/shelf_web_socket.dart";

var _i = 0;

main() async {
  io.serve(webSocketHandler((webSocketChannel) {
    var server = new rpc.Server(webSocketChannel);

    // Increments [_i] and returns its new value.
    server.registerMethod("increment", () => ++_i);
    server.listen();
  }), 'localhost', 1234);
}

The server presents an interesting API design challenge. Most methods require certain sorts of parameters—one might need exactly three positional parameters, one might need two mandatory named and one optional parameter, and another might not allow any parameters at all. JSON-RPC 2.0 is pretty clear about how to handle this at the protocol level, but how do we let the user specify it?

We could have users manually validate the parameters—and in fact, for complex validations we do. Users can always manually throw new RpcException.invalidParams() based on whatever logic they code. But it’s a huge pain to manually validate the presence and type of every parameter, so Server uses a couple clever tricks to figure out requirements with minimal user code.

The first trick is that the callback passed to registerMethod() can take either zero or one parameters. This is how Server figures out whether the method allows parameters at all. In the example above, if a client tried to call increment with parameters of any kind, they would get an “invalid parameters” error. But the most clever trick is how parameters that are passed are parsed, and it involves an entirely new class.

Parameters

The Parameters class wraps a JSON-safe object and provides methods to access it in a type-safe way that will automatically throw RpcExceptions if the object isn’t the expected format. It’s what gets passed to the registerMethod() callback, if it takes a parameter at all.

If you call asList and the caller passed the parameters by name, it’ll throw an RpcException. If you call asMap and the parameters were passed by position? RpcException as well. Or you can just call value and get the underlying parameter no matter what form it takes.

Parameters also lets you verify the parameter values themselves. The [] operator can be used for either positional parameters (with int arguments) or named parameters (with string arguments), and returns a Parameter object which extends Parameters with a bunch of methods for validating types beyond just lists and maps.

All of the native JSON types have getters like asString, asNum, and similar. Just like asList and asMap, these getters return the parameter values if they’re the correct types and throw RpcExceptions if they aren’t. There are also derived getters like asDateTime and asUri which ensure that the value can be parsed as the appropriate type, and asInt which ensures that a number is an integer.

// Sets [_i] to the given value.
server.registerMethod("set", (parameters) {
  _i = parameters[0].asInt;
  return _i;
});

It’s important to note that the [] operator will return a parameter even if it doesn’t exist, either because there weren’t enough positional parameters passed or because a parameter with that name wasn’t passed. This makes it easy to support optional parameters.

A parameter that doesn’t exist will always throw an RpcException for its asType methods, and even for value. But there are methods where it won’t throw. If you call asStringOr() for a parameter that exists, it behaves just like asString, but for a non-existent parameter it’ll return the defaultValue parameter. Every asType getter has a corresponding asTypeOr() method. Even value has valueOr().

// Returns the logarithm of [_i].
// If the `"base"` named parameter is passed, uses that as the base. Otherwise,
// uses `e`.
server.registerMethod("log", (parameters) {
  return math.log(_i) / math.log(parameters["base"].asNumOr(math.E));
});

Peer

The Peer class works as both a Server and a Client over the same underlying connection. In terms of API, it’s exactly the sum of those two classes. It adds no methods of its own, so in that sense you already know everything about it. But it’s still instructive to talk about why it exists.

While I can easily imagine a structure where two endpoints are truly peers, each invoking methods on the other and receiving results, in practice most of the peer-structured protocols I’ve seen have existed for the sake of event dispatch. You see, JSON-RPC 2.0 doesn’t include an explicit mechanism for the server pushing events to the client. It can only respond to requests made by the client. This is intentional, since it makes the protocol much simpler, and the peer structure is the standard way around it.

To support server events, both the client and server must act as peers, able to send and receive requests. In this world, events are modeled as requests sent from the server to the client—or more specifically, notifications, since the server doesn’t expect a response. The client registers a method for each type of event it wants to handle, and the server sends a request for every dispatch.

/// Uses the VM service protocol to print the VM name.
/// Prints the VM name again every time it's changed.
void printVersions(Uri observatoryUrl) async {
  var channel = new WebSocketChannel.connect(observatoryUrl);
  var peer = new rpc.Peer(channel);

  peer.registerMethod("streamNotify", (parameters) async {
    if (parameters["streamId"].asString != "VMUpdate") {
      throw new rpc.RpcException.invalidParams(
          "Only expected VMUpdate events.");
    }

    var vm = await peer.sendRequest("getVM");
    print("VM name is ${vm["name"]}.");
  });
  peer.listen();

  // Subscribe to VMUpdate events, which fire whenever the VM's metadata
  // (including its name) changes.
  await peer.sendRequest("streamListen", {"streamId": "VMUpdate"});

  var vm = await peer.sendRequest("getVM");
  print("VM name is ${vm["name"]}.");
}

RPC Home

Next time you need to communicate with a JSON-RPC 2.0 server, you know where to turn. Next time you need to create an RPC server, I hope you look to JSON-RPC 2.0 as the underlying protocol. It’s clean and straightforward, and best of all, it’s got a great implementation already written and ready to use.

I wrote about stream_channel in my last article. In this article, I wrote about json_rpc_2, which uses stream_channel. Join me in two weeks when I build this layer cake a little higher and write about a package that uses json_rpc_2!