In the first part of What’s New in Go 1.20, we looked at language changes. For part two, I would like to introduce three changes to the standard library that address problems that the community has been thinking about and debating solutions to for years.


First of all, a whole new package has been added. But you can’t import it by default, and you probably shouldn’t be using it at all. It’s the new experimental arena package.

The arena package was proposed by Dan Scales and has been added to the Go standard library in 1.20. But if you just try to add import "arena" to a program, you get the following, somewhat cryptic error message:

imports arena: build constraints exclude all Go files in GOROOT/src/arena

To opt into using arenas, you need to set GOEXPERIMENT=arenas when calling the go tool: for example, GOEXPERIMENT=arenas go build . builds the package in the current directory with arenas enabled.

EDIT from the morning after publishing this post: The Go team have removed the arena experiment from the official release notes with this explanation:

The arena goexperiment contains code used inside Google in very limited use cases that we will maintain, but the discussion on #51317 identified serious problems with the very idea of adding arenas to the standard library. In particular the concept tends to infect many other APIs in the name of efficiency, a bit like sync.Pool except more publicly visible.

It is unclear when, if ever, we will pick up the idea and try to push it forward into a public API, but it’s not going to happen any time soon, and we don’t want users to start depending on it: it’s a true experiment and may be changed or deleted without warning.

I believe the arena package will still work in the final release, but this is just to emphasize once again that it is an experiment, and it may be removed at any time. It seems like it will certainly need some changes before it can become a permanent addition to the Go standard library. END OF EDIT

So what are arenas and why is the Go team trying so hard to keep you from using them? I asked ChatGPT, and this is what it said (this is the equivalent of quoting Webster’s Dictionary for the 21st century):

Memory arenas are a memory management technique used in some programming languages and libraries to allocate and deallocate large blocks of memory efficiently. They are typically used in situations where the program needs to frequently allocate and deallocate a large number of small objects. By allocating and deallocating memory in large blocks, rather than individually for each object, memory arenas can reduce the overhead associated with memory management and improve performance.

If you want to go more in-depth, Uptrace has a nice guide to the arena package (presumably written by a human, but who knows nowadays), but I’ll try to just give a basic overview here.

As you probably know, Go is a garbage-collected language. This means that the compiler and the runtime automatically keep track of each variable’s uses to see when it comes into use and when it is no longer being used. Once a variable is no longer used, it is “garbage” waiting to be collected.

For many kinds of applications, garbage comes in waves. For example, a web server may allocate a lot of memory in order to build up a response to some user request, but once it responds, it no longer needs any of the memory it allocated, so it can all be returned to the system at once. Another example: a game might want to free all of the objects created for a level once the level is over. The arena package lets Gophers opt into this approach to memory management in performance-critical code. Instead of having the garbage collector start at a root and then travel down to “mark and sweep” the live memory and return the dead objects, the whole arena can be marked as dead all at once. The release notes for Go 1.20 claim that

When used appropriately, [using package arena] has the potential to improve CPU performance by up to 15% in memory-allocation-heavy applications.

This is highly efficient, but also highly dangerous. What if the programmer makes a mistake and, for example, adds some strings to a logger call that outlives the request? The memory might be overwritten by a subsequent request and the strings replaced with junk data, leading to crashes or, worse, security exploits.

To mitigate the risk of these kinds of bugs, the arena package will deliberately cause a panic if it can detect memory being used after it has been freed. Dan Scales explains,

  • Each arena A uses a distinct range in the 64-bit virtual address space
  • A.Free unmaps the virtual address range for arena A
  • The physical pages for the arena can then be reused by the operating system for other arenas.
  • If a pointer to an object in arena A still exists and is dereferenced, it will get a memory access fault, which will cause the Go program to terminate. Because the implementation knows the address ranges of arenas, it can give an arena-specific error message during the termination.

There is a similar comment in the Go runtime package that implements memory arenas:

// What makes the arenas here safe is that once they are freed, accessing the
// arena's memory will cause an explicit program fault, and the arena's address
// space will not be reused until no more pointers into it are found. There's one
// exception to this: if an arena allocated memory that isn't exhausted, it's placed
// back into a pool for reuse. This means that a crash is not always guaranteed.

So, it is still possible to write buggy code with arenas, but hopefully the bugs will translate into simple crashes rather than full-blown memory corruption or security exploits.
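To make that concrete, here is a minimal sketch of my own (not taken from the Go documentation or tests); built with GOEXPERIMENT=arenas, the final dereference should normally terminate the program with an arena-specific fault rather than silently reading junk, although, as the runtime comment above notes, a crash is not guaranteed in every case:

package main

import (
    "arena"
    "fmt"
)

func main() {
    a := arena.NewArena()
    p := arena.New[int](a) // p points into the arena's memory
    *p = 42
    a.Free()        // the arena's virtual address range is unmapped
    fmt.Println(*p) // use after free: expect a crash, not quietly reused memory
}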

The arena package has a fairly simple API. Here’s some example code from arenas_test.go:

a := arena.NewArena()
defer a.Free() // release all of the arena's memory at once when we're done

tt := arena.New[T1](a) // allocate a single T1 inside the arena
tt.n = 1

ts := arena.MakeSlice[T1](a, 99, 100) // a []T1 with length 99 and capacity 100, backed by the arena
// …

There is also an arena.Clone function for when you want to move an object out of an arena and onto the regular Go memory heap.
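For example (my own sketch, reusing the T1 struct from the test snippet above), a value that needs to outlive the arena can be cloned onto the heap before the arena is freed:

a := arena.NewArena()
tt := arena.New[T1](a)
tt.n = 1

keep := arena.Clone(tt) // shallow copy of *tt, allocated on the ordinary heap
a.Free()
fmt.Println(keep.n) // still safe: keep no longer points into the freed arena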

With luck, the arena experiment will succeed, and we will see it introduced as a regular package in a future version of Go.


While most Go programmers probably will never need to use the arena package directly, I suspect virtually all Go programmers will have some occasion to use a different new feature in Go 1.20: multierrors.

The concept of multierrors in Go is not new. Hashicorp’s go-multierror package goes all the way back to 2014 and there was at least one proposal to add multierrors to the standard library by 2017.

Multierrors also exist in other languages. Python, for example, added exception groups in Python 3.11. In Python’s case, although there was a popular third-party MultiError class, the feature ultimately needed to be added to the language itself in order to work fully with the existing exception-handling machinery:

Changes to the language are required in order to extend support for exception groups in the style of existing exception handling mechanisms. At the very least we would like to be able to catch an exception group only if it contains an exception of a type that we choose to handle. Exceptions of other types in the same group need to be automatically reraised, otherwise it is too easy for user code to inadvertently swallow exceptions that it is not handling.

Unlike Python, in Go, errors are just values, so it was easy enough to create your own multierror type and expose it using errors.As. Indeed, I wrote my own multierror package that worked this same way. This was clearly an idea that was being created and recreated by the community, so did it really need to be solved at the level of the standard library?

Suppose I have some code like this:

a := errors.New("a")
b := errors.New("b")
c := join(a, b)
d := fmt.Errorf("more context: %w", c)
e := errors.New("e")
f := join(d, e)

If join flattens the error list, then d (“more context”) will be lost from the resulting multierror. Inside of f will be a, b, and e, but not d.

f
├── a
├── b
└── e

If you are just using your own multierror package in your own application, this is basically a theoretical concern, because you can make sure not to add context around a multierror wherever it might be lost. But if multierrors are part of the standard library and can be expected to be used regularly, then losing d is a real problem that could pop up at unwanted times and places. This problem is what sank one of the more recent multierror proposals.

The reason that Damien Neil’s latest multierror proposal has succeeded where other multierror proposals did not is that it creates a tree of errors, rather than a slice. The accepted errors.Join code instead represents the error hierarchy like this:

f
├── d
│   └── c
│       ├── a
│       └── b
└── e

This means that all of the nodes in the tree are still available for inspection using errors.Is or errors.As.

In Go 1.20, multierrors can be created either with errors.Join(errs ...error) error or by using multiple %w verbs with fmt.Errorf like fmt.Errorf("%w: %w", notFoundErr, dbErr). Once created, errors.Is and errors.As can extract any of the values in the tree, whether they are in a leaf like a, b and e or a branch like d. Users can also create their own multierror types by adding an Unwrap() []error method to their custom error type.
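Here is a small sketch of my own (the error values are made up) showing both ways of building a multierror and errors.Is seeing through to the leaves:

package main

import (
    "errors"
    "fmt"
)

var errNotFound = errors.New("not found")
var errDB = errors.New("database unavailable")

func main() {
    // Wrap two errors at once with multiple %w verbs.
    wrapped := fmt.Errorf("lookup failed: %w: %w", errNotFound, errDB)
    fmt.Println(errors.Is(wrapped, errNotFound)) // true
    fmt.Println(errors.Is(wrapped, errDB))       // true

    // The same idea with errors.Join.
    joined := errors.Join(errNotFound, errDB)
    fmt.Println(errors.Is(joined, errNotFound)) // true
}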

One wrinkle in the implementation is that for now at least, there is no errors.Split(error) []error function. This is a deliberate omission, since that would flatten the tree. Instead, if you need to inspect every node in a tree (for example, for logging purposes), you are encouraged to traverse the tree yourself. I suspect that if we ever see a generic iterator type in Go, something that automatically walks the tree might be added then.
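In the meantime, a manual traversal might look something like this (a hypothetical helper of my own, not a standard library function), handling both the single-error and multierror forms of Unwrap:

// walk visits err and every error it wraps, depth first.
func walk(err error, visit func(error)) {
    if err == nil {
        return
    }
    visit(err)
    switch u := err.(type) {
    case interface{ Unwrap() []error }:
        for _, e := range u.Unwrap() {
            walk(e, visit)
        }
    case interface{ Unwrap() error }:
        walk(u.Unwrap(), visit)
    }
}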


Finally, let’s look at http.ResponseController.

Interfaces are one of Go’s defining features as a language. With an interface, you can specify that your code can accept any value that has a certain set of methods, no matter what its concrete type is.

From the early days of Go, interfaces have also been used for what Chris Siebenmann has called “interface smuggling”:

In interface smuggling, the actual implementation is augmented with additional well known APIs, such as io.ReaderFrom and io.WriterTo. Functions that want to work more efficiently when possible, such as io.Copy(), attempt to convert the io.Reader or io.Writer they obtained to the relevant API and then use it if the conversion succeeded:

if wt, ok := src.(WriterTo); ok {
   return wt.WriteTo(dst)
}
if rt, ok := dst.(ReaderFrom); ok {
   return rt.ReadFrom(src)
}
[... do copy ourselves ...]

I call this interface smuggling because we are effectively smuggling a different, more powerful, and broader API through a limited one. In the case of types supporting io.WriterTo and io.ReaderFrom, io.Copy completely bypasses the nominal API; the .Read() and .Write() methods are never actually used, at least directly by io.Copy (they may be used by the specific implementations of .WriteTo() or .ReadFrom(), or more interface smuggling may take place).

Russ Cox gave this pattern the somewhat less pejorative-sounding name “extension interface pattern”. Whatever you call it, this pattern can be a great way to expose a simple API while still leaving room to add more complicated extensions later.

Extension interfaces are useful, but not without their pitfalls. There can be cases where it would be good to implement an extended interface, but the implementation author isn’t aware of it, so they fail to implement it. This can be addressed with documentation, but that’s not as clear as using the type system directly.

What if you want to create a wrapper for a simple interface that you know might also implement an extended interface? How to do this depends on what exactly you’re wrapping and why. If you were creating a new version of io.LimitedReader, it wouldn’t make sense to add a WriteTo method: the whole point is to put a cap on how much can be read from the source reader, not to bypass it and hook it up to the writer directly. As an implementation author, you need to think carefully about how extended interfaces interact with your type.

Another problem arises when the extended interface can only be tested for through a type assertion. In that case, as the author of a wrapper, you need to provide two wrapping types: one that has the extended method and passes calls through to the underlying type, and one that doesn’t have the method, so it won’t spuriously satisfy the type assertion. Worse still, if there are multiple extended interfaces you want to be able to pass through, you need to provide 2ᴺ types to cover every combination of extended interfaces coexisting! This may seem absurd, but it is exactly the situation that library authors who wanted to wrap http.ResponseWriter found themselves in.

http.ResponseWriter is a fairly simple interface used for HTTP servers in the Go standard library:

type ResponseWriter interface {
    Header() Header
    Write([]byte) (int, error)
    WriteHeader(statusCode int)
}

You can set the headers on a response. You can set the status code on the response (which also causes the headers to be written on the wire). And you can write the body of a response. Simple!
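A minimal handler (a sketch of my own) exercises all three:

package main

import (
    "fmt"
    "net/http"
)

// hello sets a header, writes the status code, and then writes the body.
func hello(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain; charset=utf-8")
    w.WriteHeader(http.StatusOK)
    fmt.Fprintln(w, "Hello, client!")
}

func main() {
    http.HandleFunc("/", hello)
    http.ListenAndServe(":8080", nil)
}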

But of course, the Go http package also supports a number of extended interfaces for ResponseWriter. These are http.Flusher (which lets you flush an in-progress write out to the client), http.Pusher (which lets you do HTTP/2 server push requests), http.Hijacker (which provides access to the underlying net.Conn), and io.ReaderFrom (which allows for nice things like automatic sendfile support). As a result, the go-chi project has six different types to implement its WrapResponseWriter interface type. (This is cut down from 2⁴ to 2³ on the theory that anything which implements HTTP/2 server push must be maximally fancy.)
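To see why the combinations pile up, here is a sketch of my own (the names are made up; this is not go-chi’s actual code) of the two wrapper types needed just to preserve http.Flusher; every additional optional interface doubles the number of types required:

package middleware

import "net/http"

// plainWriter forwards only the core ResponseWriter methods, so a type
// assertion for http.Flusher on it will correctly fail.
type plainWriter struct {
    http.ResponseWriter
}

// flushWriter additionally passes Flush through to the underlying writer.
type flushWriter struct {
    http.ResponseWriter
}

func (w flushWriter) Flush() {
    w.ResponseWriter.(http.Flusher).Flush()
}

// wrap picks the wrapper that matches what rw actually supports.
func wrap(rw http.ResponseWriter) http.ResponseWriter {
    if _, ok := rw.(http.Flusher); ok {
        return flushWriter{rw}
    }
    return plainWriter{rw}
}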

So then all the way back in 2016, Filippo Valsorda opened an issue about setting timeouts in an http.Handler. This was clearly a real need the Go community had, but it was hard to see how to make it work while retrofitting it into the existing http.ResponseWriter interface. It would be great to set server timeouts for a client based on what we know about that client, but how can this functionality be exposed? Do we just go straight from 3 to 5 optional methods defined in the http package?

Despite a lot of careful thought about the problem, this was the situation until Go 1.20, when Damien Neil successfully landed the http.ResponseController proposal. As he wrote,

A problem is that we have no good place at the moment to add functions that adjust these timeouts. We might add methods to the ResponseWriter implementation and access them via type assertions (as is done with the existing Flush and Hijack methods), but this proliferation of undiscoverable magic methods scales poorly and does not interact well with middleware which wraps the ResponseWriter type.

The solution he came up with based on prior discussions was to add a new concrete type, http.ResponseController:

// NewResponseController creates a ResponseController for a request.
//
// The ResponseWriter should be the original value passed to the Handler.ServeHTTP method,
// or have an Unwrap method returning the original ResponseWriter.
//
// If the ResponseWriter implements any of the following methods, the ResponseController
// will call them as appropriate:
//
//  Flush()
//  FlushError() error // alternative Flush returning an error
//  Hijack() (net.Conn, *bufio.ReadWriter, error)
//  SetReadDeadline(deadline time.Time) error
//  SetWriteDeadline(deadline time.Time) error
//
// If the ResponseWriter does not support a method, ResponseController returns
// an error matching ErrNotSupported.
func NewResponseController(rw ResponseWriter) *ResponseController

There are two aspects of http.ResponseController that fix the problems with the earlier extension interfaces. One, the addition of an Unwrap() http.ResponseWriter method allows middleware to wrap a ResponseWriter without needing to reimplement all of the extended interface methods. Two, the addition of ErrNotSupported makes it easy for types that do have extended methods to signal to their callers when the types they wrap don’t support the same extensions. These are best practices for extended interfaces that have emerged from experience using them: if you provide an extended interface, also provide an escape hatch for wrapper types that don’t know whether their wrapped types will have the extended methods or not.
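Here is a sketch of my own (the names countingWriter and slow are made up) showing both halves in action: a middleware wrapper that stays compatible by exposing Unwrap, and a handler that uses ResponseController to extend its write deadline:

package main

import (
    "errors"
    "log"
    "net/http"
    "time"
)

// countingWriter is a hypothetical middleware wrapper. Its Unwrap method
// lets http.ResponseController reach the original ResponseWriter and any
// optional methods it has.
type countingWriter struct {
    http.ResponseWriter
    n int
}

func (w *countingWriter) Write(p []byte) (int, error) {
    n, err := w.ResponseWriter.Write(p)
    w.n += n
    return n, err
}

func (w *countingWriter) Unwrap() http.ResponseWriter { return w.ResponseWriter }

// slow gives one response more time than the server-wide WriteTimeout.
func slow(w http.ResponseWriter, r *http.Request) {
    rc := http.NewResponseController(w)
    if err := rc.SetWriteDeadline(time.Now().Add(5 * time.Minute)); err != nil {
        if errors.Is(err, http.ErrNotSupported) {
            log.Print("underlying ResponseWriter cannot set deadlines")
        }
    }
    // ... stream a large response ...
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/slow", slow)
    // Wrap every ResponseWriter in the counting middleware.
    wrapped := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        mux.ServeHTTP(&countingWriter{ResponseWriter: w}, r)
    })
    srv := &http.Server{Addr: ":8080", WriteTimeout: 10 * time.Second, Handler: wrapped}
    log.Fatal(srv.ListenAndServe())
}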

This is my blog, so I will immodestly mention that I had a proposal for adding an Unwrap method to ResponseWriter back in 2020, but my proposal didn’t have a concrete type to handle the unwrapping or ErrNotSupported, and I’m not sufficiently in the weeds of the http package to have been able to implement read/write deadlines if I had known to suggest it as a motivating problem. The 3 line long http.MaxBytesHandler is about the limit of my ability to contribute. 😆


For all three of the changes above, the moral of the story is that when it comes to the Go standard library, it can sometimes take years for all the pieces of a good solution to come together in one place, but when they do, the result can solve a longstanding problem in a way that makes things easier for everyone going forward. Even the authors of Go didn’t know what idiomatic Go code looked like when they wrote the standard library, but working together today we can evolve the standard library in a way that adds new capabilities while preserving backwards compatibility, so that everyone benefits.