Benjamin van der Veen

Thoughts on OWIN, Buffering, and Async

Early on, we decided that OWIN should be an asynchronous interface. There are many possible approaches to this design goal, and ultimately it is important that whatever we come up with accommodates performant host implementations across platforms, and provides application and framework developers a sane programming model which takes advantage of upcoming and current “first-class” async features of C# and F#.

Buffering Streaming Data

Consider the situation where data is coming at an application from a backend server faster than the client connected to the OWIN host can receive it, as when streaming data from a backend service to a mobile device. At some point, the OWIN host streaming data to the client will need to communicate to the OWIN app providing the data that it ought to back off, slowing the rate at which it supplies data to the host. Of course, this means that the app will in turn need to communicate this to the backend service.

Ultimately, the app is talking to the backend service over TCP, and it can slow down the data coming over the connection by temporarily ceasing to send TCP ACK packets to the backend service. Once the slow client empties the OWIN host’s outgoing buffer, the host must then communicate to the OWIN app that it ought to resume sending ACK packets to the backend service, thereby triggering the service to send more data. I refer to this concept as “back-pressure”. Back-pressure is key to preventing the host’s outgoing buffers from growing indefinitely as the backend service puts data in faster than the slow client takes it out. Of course, this problem can also work in the opposite direction: a client connected to an OWIN host might send data faster than the app can deal with it.

This buffering problem adds a special twist to the basic goal of making an interface whereby a producer can huck data at a consumer. The consumer of the data must be able to somehow apply back-pressure to the producer—to communicate that it should “back off”.

In our discussions, we have identified two primary approaches to implementing an asynchronous interface. What follows is a discussion of each mechanism and how back-pressure could be implemented within it.

The Pull Approach

In this model, calling code asks that a value be retrieved or an operation be carried out, and later, that code is called back (by way of an interface or delegate) with a means of accessing the return value (or exception) of the operation. The callback is guaranteed to be invoked exactly once, with one value.

With language support like async/await, the compiler takes care of the callback plumbing, and what results is a construct which mirrors a traditional function: the user calls a function with some arguments, gets a single value back, and his code continues where it left off. This form of async is a readily grok-able conceptual model.

async void ShuttleData()
{
	while (true)
	{
		var data = await inputSocket.Read();

		if (data.Count == 0)
			break;

		await outputSocket.Write(data);
	}
}

The back-pressure mechanism is implicit, just like in a traditional synchronous model. The incoming data is (for all intents and purposes) not “ACK’d” until the user reads the input socket. Thus, until the intervening write operation succeeds and read is called again, no more data is read into user-space. The outgoing write mechanism can wait to return until its buffer has been emptied, thereby providing back-pressure by delaying further reads.
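
The same pull-style loop can be sketched in JavaScript with promise-returning read and write methods. The socket objects here are illustrative toys standing in for real async IO, but they show why the back-pressure is implicit: the await on write is what delays the next read.

```javascript
// Pull-style shuttle loop: nothing more is read until the previous,
// possibly slow, write has completed.
async function shuttleData(input, output) {
  while (true) {
    const data = await input.read();
    if (data.length === 0) break;  // empty chunk signals end-of-stream
    await output.write(data);
  }
}

// Toy input: yields its chunks in order, then "" for end-of-stream.
function makeInput(chunks) {
  let i = 0;
  return { read: async () => (i < chunks.length ? chunks[i++] : "") };
}

// Toy output: records what was written.
function makeOutput(log) {
  return { write: async (data) => { log.push(data); } };
}

const log = [];
const done = shuttleData(makeInput(["a", "b"]), makeOutput(log));
done.then(() => console.log(log.join(",")));  // prints "a,b"
```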

In .NET, asynchronous IO has evolved in stages: the old Win32 APIs were first wrapped in the APM (Begin/End) pattern in the BCL, then came the Task Parallel Library, and now the C# team is working on C# 5’s async/await syntax; F#’s async workflows are equivalent and available today. These language features make using this “pull” model very easy.

The Push Approach

This approach more closely mirrors that of the Reactive Framework. For an example implementation, we look to Node. Node’s approach to asynchronicity is different from the pull model: the user must wire up callbacks and deal with incoming values passed to the callbacks “immediately”. The callbacks cannot block on IO; this is enforced by the APIs the Node libraries expose to user code.

In Node, whenever you write outgoing data using a write function, it may be copied into the kernel’s outgoing network buffer immediately, or placed into a user-space buffer to be sent later. The write function returns immediately. If the user-space buffer has grown larger than some configured maximum, the write operation returns false; otherwise, true.

inputSocket.on("data", function (data) {
	if (data.length == 0)
		outputSocket.end();
	else if (outputSocket.write(data) === false)
		inputSocket.pause();
});

outputSocket.on("drain", function () {
	inputSocket.resume();
});

Based on this flag, the user can apply back-pressure to the source of the data by “pausing” it. Later, the outgoing buffer will fire a “drain” event, informing the user that the size of the user-space buffer has fallen to a configured minimum and that data may be added to it without causing undue memory pressure in the process. In response to this “drain” event, the user “resumes” the incoming data source, and continues to push data to the client with the write function, perhaps until it returns false again.

Because JavaScript lacks the expressive power of C# and F# (which will, with the release of C# 5, both have first-class asynchronous programming primitives), Node has to introduce these additional “pause” and “resume” functions.

Why I’m Leaning Toward the Pull Approach

It may be possible to construct an adapter from a push-style interface into a pull-style interface, and it may be the case that a push-style interface is the way to go for OWIN. But ultimately, it is important that OWIN’s asynchronous interface be easy and intuitive to use with the asynchronous language features provided by C# and F#, and these language features map cleanly onto a pull-style interface.

Copyright © 2015 Benjamin van der Veen.