Go is a language designed by Google for use in a modern internet environment. It comes with a highly capable standard library and a built-in HTTP client. I’m on the record as being against including a lot of dependencies. So why did I end up writing my own HTTP client helper library for Go?

I am also the author of a lot of command line tools, many of which make API requests or other HTTP calls. The more you try to write command line tools that handle errors robustly, the more you see the limitations of using net/http without some kind of convenience wrapper.

Brad Fitzpatrick, longtime maintainer of the net/http package, laid out the problems very well in a file included with his experimental repository for a new HTTP client library. (The prototype new client library was kicked off in 2018 but never finished for whatever reason.) He wrote:

Consider the following typical code you see Go programmers write:

func GetFoo() (*T, error) {
  res, err := http.Get("http://foo/t.json")
  if err != nil {
    return nil, err
  }
  t := new(T)
  if err := json.NewDecoder(res.Body).Decode(t); err != nil {
    return nil, err
  }
  return t, nil
}

This code looks fine at first but has several major problems:

  • Too easy to not call Response.Body.Close.
  • Too easy to not check return status codes.
  • Context support is oddly bolted on.

As a result,

  • Proper usage is too many lines of boilerplate

A proper GetFoo function ends up looking like this:

func GetFoo(ctx context.Context) (*T, error) {
  req, err := http.NewRequest("GET", "http://foo/t.json", nil)
  if err != nil {
    return nil, err
  }
  req = req.WithContext(ctx)
  res, err := http.DefaultClient.Do(req)
  if err != nil {
    return nil, err
  }
  defer res.Body.Close()
  if res.StatusCode < 200 || res.StatusCode > 299 {
    return nil, fmt.Errorf("bogus status: got %v", res.Status)
  }
  t := new(T)
  if err := json.NewDecoder(res.Body).Decode(t); err != nil {
    return nil, err
  }
  return t, nil
}

Code that was never particularly short before is now over a dozen lines long. And this is a simple GET request, not a complicated POST of an object with authentication or response validation.

For comparison, here’s that same code in requests, the HTTP client library that I authored:

func GetFoo(ctx context.Context) (*T, error) {
  var t T
  if err := requests.
    URL("http://foo/t.json").
    ToJSON(&t).
    Fetch(ctx); err != nil {
    return nil, err
  }
  return &t, nil
}

When faced with the prospect of writing a dozen or more lines of boilerplate for even the simplest HTTP request, the obvious answer is to use some kind of helper function. But that answer raises a further question: should I write one-off helpers for myself as needed, or find a suitable client helper library?

Surveying the existing libraries, I found that each seemed to have at least one of two major flaws. One is that support for context.Context tends to be bolted on, if it exists at all. The other is hiding the underlying http.Client in such a way that it is difficult or impossible to replace or mock out. Beyond that, I believe that none has the same core simplicity that my requests library eventually achieved.

It’s one thing to be able to write a simple helper that handles a particular case, like fetching JSON, but if a library is really worth sharing, it should be more than something you could just dash off in an afternoon. A good tool should be able to work with requests in general, whether simple or complex, and make all of them more declarative and easier to understand. It’s nice to write a simple dozen line helper function to clear up one kind of boilerplate, but it’s truly great to be able to find a set of abstractions that eliminate whole categories of boilerplate in one stroke.

Take the grabxkcd command line app I refactored in a previous post. In that case, there was no auth and the app only made GET requests, but even for such a simple app, an HTTP library would need to fulfill two very separate use cases: fetching JSON from the XKCD API and saving the images of comics. There are lots of libraries that can help with one case, but very few that can handle both. Requests can.


So, for years I found myself copying snippets of HTTP client helpers from project to project. There had to be a better way, but what was it?

[Image: woman struggling to make HTTP calls]

There has to be a better way

Unlike some other languages, Go does not support optional arguments or keyword arguments. As a result, there are just a handful of strategies for encapsulating complicated code behind a simple API call. First and most obviously, one can use a configuration struct. This might look something like:

err := requests.Fetcher{
  URL: "http://foo/t.json",
  ToJSON: &t,
}.Do(ctx)

But given the way that the parts of a request interact, this ends up being a poor fit. In a POST request, do you have separate fields for BodyBytes vs. BodyJSON? What happens if they both get set? What about the parts of a URL (host, path, query)? How do you build a URL using a config struct? Should you break it down into a URL helper struct that feeds into a request helper? Keeping it simple gets complicated. Config structs are a good next step for code too long to put into a simple function parameter, but they break down when there are numerous overlapping sub-fields.

Next there is the functional option pattern in Go, popularized by Dave Cheney. In this pattern, a constructor takes any number of arguments, each of which is a function that configures the object. It would look something like:

err := requests.Fetcher(
  ctx,
  requests.URL("http://foo/t.json"),
  requests.ToJSON(&t),
)

This is better, but it ends up repeating the name of the package over and over again because the options themselves need to be imported.
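
For readers unfamiliar with the pattern, here is a minimal stdlib-only sketch of how functional options are defined; the names (config, Option, WithURL, WithTimeout) are invented for illustration, not taken from any real library:

```go
package main

import "fmt"

// config is the object being built up; an Option mutates it.
type config struct {
	url     string
	timeout int
}

type Option func(*config)

func WithURL(u string) Option {
	return func(c *config) { c.url = u }
}

func WithTimeout(seconds int) Option {
	return func(c *config) { c.timeout = seconds }
}

// New applies each option in order over the defaults.
func New(opts ...Option) config {
	c := config{timeout: 30} // defaults
	for _, opt := range opts {
		opt(&c)
	}
	return c
}

func main() {
	c := New(WithURL("http://foo/t.json"), WithTimeout(5))
	fmt.Println(c.url, c.timeout) // http://foo/t.json 5
}
```

Note that every call site must qualify each option with the package name, which is the repetition complained about above.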

The pattern I ended up going with instead is the builder pattern with fluent method chaining, in which an object modifies and returns itself, so that methods can be repeatedly called to build up a final object for use:

req, err := requests.
    URL("http://foo/t.json").
    Request(ctx)

One quirk of Go’s syntax is that the dot between methods has to go at the end of the line above, not the start of the line below, as is customarily done in other languages. It’s a little odd, but you get used to it.
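
The mechanics behind the chaining are simple: each method mutates the receiver and returns it. A toy sketch of the idea (not the real requests.Builder, whose internals differ):

```go
package main

import "fmt"

// Builder is a toy illustration of fluent chaining, not the real
// requests.Builder.
type Builder struct {
	url    string
	method string
}

// Each setter returns the receiver so calls can be chained.
func (b *Builder) URL(u string) *Builder {
	b.url = u
	return b
}

func (b *Builder) Method(m string) *Builder {
	b.method = m
	return b
}

func main() {
	b := new(Builder).
		URL("http://foo/t.json").
		Method("GET")
	fmt.Println(b.method, b.url) // GET http://foo/t.json
}
```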

In most cases, one wants to send the request right away, so instead of calling a final Request(ctx), you can call Fetch(ctx) instead, which also triggers the response validators and handler:

err := requests.
    URL("http://foo/t.json").
    ToJSON(&t).
    Fetch(ctx)

Just having a consolidated Fetch() method is a huge reduction in boilerplate because it lets you collect all the parts of making a request that can fail into a single if err != nil statement instead of repeating if err != nil for each step along the way.

Once I settled on using the builder pattern, I quickly found the ingredients I needed to cook up a truly sweet abstraction. Essentially, all connections have three phases: build the request, validate the response, and handle the response. Validating and handling a response share the same function signature: look at a response and return an error if anything goes wrong. That leaves building the request. A request has a URL, method, headers, and a body. I added convenience methods for constructing a URL, setting common headers, and building a body, and the whole thing clicked into place.

To validate that I had the right abstractions in place, I went back and changed many of my existing applications to use requests instead of whatever helpers they were using before. That led to some refinements, but no major rethinking of the core concept. I knew I had found what I was looking for when get-headers and linkrot, two applications completely different from a typical JSON API client, both benefited from their refactorings.

Going back to the grabxkcd app, the existing code looks like this:

func (app *appEnv) fetchJSON(url string, data interface{}) error {
    resp, err := app.hc.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    return json.NewDecoder(resp.Body).Decode(data)
}

func (app *appEnv) fetchAndSave(url, destPath string) error {
    resp, err := app.hc.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    f, err := os.Create(destPath)
    if err != nil {
        return err
    }
    defer f.Close()

    _, err = io.Copy(f, resp.Body)
    return err
}

With requests, the fetchJSON() and fetchAndSave() methods become basically trivial:

func (app *appEnv) fetchJSON(url string, data interface{}) error {
  return requests.
    URL(url).
    Client(&app.hc).
    ToJSON(data).
    Fetch(context.Background())
}

func (app *appEnv) fetchAndSave(url, destPath string) error {
  return requests.
    URL(url).
    Client(&app.hc).
    ToFile(destPath).
    Fetch(context.Background())
}

At this point, the methods are so simple they can probably just be inlined without a loss of readability.


The real advantage of the requests library can be seen when making an API client for a complex API. Here for example is a client for MailChimp’s ListCampaigns endpoint that I use at work:

package mailchimp

type V3 struct {
  listCampaignBuilder *requests.Builder
}

func NewV3(apiKey, listID string, c *http.Client) V3 {
  // API keys end with 123XYZ-us1, where us1 is the datacenter
  _, datacenter, _ := strings.Cut(apiKey, "-")

  return V3{
    requests.URL("https://dc.api.mailchimp.com/3.0/campaigns?count=10&offset=0&status=sent&fields=campaigns.archive_url,campaigns.send_time,campaigns.settings.subject_line,campaigns.settings.title,campaigns.settings.preview_text&sort_field=send_time&sort_dir=desc").
      Hostf("%s.api.mailchimp.com", datacenter).
      BasicAuth("", apiKey).
      Param("list_id", listID).
      Client(c),
  }
}

func (v3 V3) ListCampaigns(ctx context.Context) (*ListCampaignsResp, error) {
    var data ListCampaignsResp
    if err := v3.listCampaignBuilder.
        Clone().
        ToJSON(&data).
        Fetch(ctx); err != nil {
        return nil, fmt.Errorf("could not list MC campaigns: %w", err)
    }
    return &data, nil
}

There are a couple of typically atypical things to notice about this API. One, MailChimp uses different base URLs for different users. Your API key ends with a dash and a datacenter code, and based on that, you need to send requests to datacenter.api.mailchimp.com. The requests library makes dealing with this easy with its Hostf(string, ...interface{}) method. Similarly, MailChimp protects its API endpoints with Basic Auth; since that is a commonly used HTTP header, requests has a convenience method for it. The list ID is passed to the API as a query parameter along with a bunch of other parameters, so requests lets you add a query parameter without stripping any existing parameters listed in the base URL.

A nice thing about the requests.Builder type is that it is often possible to store everything needed for making and authenticating a request directly inside the builder itself, so there’s no need to store the base URL and API key separately as strings alongside the HTTP client. Each call to V3.ListCampaigns() just clones the builder and has everything it needs to make a fully authenticated request.


An important thing that the requests library does is allow for overriding the http.Client used for making a request. This lets users set custom timeouts and cookie jars, and, more importantly, custom transports. A transport can be anything that fulfills the http.RoundTripper interface: take a request in, put a response or error out. In this case, it lets us test our MailChimp API client programmatically without making any actual API requests to MailChimp. This is crucial for testing any kind of third party API.

I first became familiar with the experience of using specialized transports in testing while working on a Ruby project that used VCR as an easy way to mock out calls to external HTTP services in tests. There is a Go clone of VCR that acts as an http.RoundTripper, but I found it somewhat more complicated than my needs required, so requests provides simple requests.Record and requests.Replay transports that can be dropped into a project. Of course, it is still compatible with the Go VCR clone, since both projects just use the standard library http.RoundTripper interface to customize client behavior.

As an illustration, here’s what it looks like to mock out a call using the requests.ReplayString transport:

const res = `HTTP/1.1 200 OK

An example response.`

myclient := &http.Client{
  Transport: requests.ReplayString(res),
}

var s string
if err := requests.
  URL("http://response.example").
  Client(myclient).
  ToString(&s).
  Fetch(context.Background()); err != nil {
  return err
}
// s == "An example response."

So that’s why I ended up writing my own requests library for Go. This blog post just scratches the surface of what it’s capable of. It defaults to validating that responses have a 2XX status, but it can also check content types and peek at content lengths. There are handlers for treating a response as HTML or various kinds of buffers. The requests.RoundTripperFunc type makes it easy to create custom transports that transform requests in arbitrary ways. If it sounds interesting to you, check it out and let me know what you think.