
Organising Database Access in Go

A few weeks ago someone created a thread on Reddit asking:

In the context of a web application what would you consider a Go best practice for accessing the database in (HTTP or other) handlers?

The replies it got were a genuinely interesting mix. Some people advised using dependency injection, a few favoured the simplicity of using global variables, others suggested putting the connection pool pointer into the request context.

Me? I think the right answer depends on the project.

What's the overall structure and size of the project? What's your approach to testing? How is it likely to grow in the future? All these things and more should play a part when you pick an approach to take.

So in this post we're going to take a look at four different methods for organising your code and structuring access to your database connection pool, and explain when they may — or may not — be a good fit for your project.

Application setup

I like concrete examples, so let's set up a simple book store application to help illustrate the four different approaches. If you'd like to follow along, you need to create a new bookstore database and then execute the following SQL to create a books table and add some sample records.
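
If you're running PostgreSQL locally (the rest of this post assumes PostgreSQL, via the pq driver), you can create the database and open a session to run the SQL below with commands along these lines, adjusting them to suit your own setup:

$ createdb bookstore
$ psql bookstore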

CREATE TABLE books (
    isbn char(14) NOT NULL,
    title varchar(255) NOT NULL,
    author varchar(255) NOT NULL,
    price decimal(5,2) NOT NULL
);

INSERT INTO books (isbn, title, author, price) VALUES
('978-1503261969', 'Emma', 'Jayne Austen', 9.44),
('978-1505255607', 'The Time Machine', 'H. G. Wells', 5.99),
('978-1503379640', 'The Prince', 'Niccolò Machiavelli', 6.99);

ALTER TABLE books ADD PRIMARY KEY (isbn);

You'll also need to run the following commands to scaffold a basic application structure and initialise a Go module:

$ mkdir bookstore && cd bookstore
$ mkdir models
$ touch main.go models/models.go
$ go mod init bookstore.alexedwards.net
go: creating new go.mod: module bookstore.alexedwards.net

At this point, you should have a bookstore directory on your machine with a structure exactly like this:

bookstore/
├── go.mod
├── main.go
└── models
    └── models.go

1. Using a global variable

OK, let's start by looking at storing the database connection pool in a global variable.

This approach is arguably the simplest thing that works. You initialise the sql.DB connection pool in your main() function, assign it to a global variable, and then access the global from anywhere that you need to execute a database query.

In the context of our book store application, the code would look something like this:

File: models/models.go
package models

import (
    "database/sql"
)

// Create an exported global variable to hold the database connection pool.
var DB *sql.DB

type Book struct {
    Isbn   string
    Title  string
    Author string
    Price  float32
}

// AllBooks returns a slice of all books in the books table.
func AllBooks() ([]Book, error) {
    // Note that we are calling Query() on the global variable.
    rows, err := DB.Query("SELECT * FROM books")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var bks []Book

    for rows.Next() {
        var bk Book

        err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price)
        if err != nil {
            return nil, err
        }

        bks = append(bks, bk)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }

    return bks, nil
}               
File: main.go
package main

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"

    _ "github.com/lib/pq"
)

func main() {
    var err error

    // Initialise the sql.DB connection pool and assign it to the models.DB
    // global variable.
    models.DB, err = sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
    if err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/books", booksIndex)
    http.ListenAndServe(":3000", nil)
}

// booksIndex sends an HTTP response listing all books.
func booksIndex(w http.ResponseWriter, r *http.Request) {
    bks, err := models.AllBooks()
    if err != nil {
        log.Print(err)
        http.Error(w, http.StatusText(500), 500)
        return
    }

    for _, bk := range bks {
        fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
    }
}           

At this point, if you run this application and make a request to the /books endpoint you should get the following response:

$ curl localhost:3000/books
978-1503261969, Emma, Jayne Austen, £9.44
978-1505255607, The Time Machine, H. G. Wells, £5.99
978-1503379640, The Prince, Niccolò Machiavelli, £6.99

Using a global variable to store the database connection pool like this is potentially a good fit when:

  • Your application is small and simple, and keeping track of globals in your head isn't a problem.
  • Your HTTP handlers are spread across multiple packages, but all your database-related code lives in one package.
  • You don't need to mock the database for testing purposes.

The drawbacks of using global variables are well-documented, but in practice I've found that for small and simple projects using a global variable like this works just fine, and it's (arguably) clearer and easier to understand than some of the other approaches we'll look at in this post.

For more complex applications — where your handlers have more dependencies beyond just the database connection pool — it's generally better to use dependency injection instead of storing everything in global variables.

The approach we've taken here also doesn't work if your database logic is spread over multiple packages, although — if you really want to — you could create a separate config package containing an exported DB global variable and import "yourproject/config" into every file that needs it. I've provided a basic example in this gist.
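
For illustration, a minimal sketch of that config package idea could look like this (the package name and comments are purely illustrative, and the linked gist may differ in the details):

File: config/config.go
package config

import "database/sql"

// DB is an exported global that holds the connection pool. main() assigns
// to config.DB after calling sql.Open(), and any other package can import
// this package and call config.DB.Query() and friends directly.
var DB *sql.DB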

1b. Global variable with an InitDB function

A variation on the 'global variable' approach that I sometimes see uses an initialisation function to set up the connection pool, like so:

File: models/models.go
package models

import (
    "database/sql"

    _ "github.com/lib/pq"
)

// This time the global variable is unexported.
var db *sql.DB

// InitDB sets up the connection pool global variable.
func InitDB(dataSourceName string) error {
    var err error

    db, err = sql.Open("postgres", dataSourceName)
    if err != nil {
        return err
    }

    return db.Ping()
}

type Book struct {
    Isbn   string
    Title  string
    Author string
    Price  float32
}

func AllBooks() ([]Book, error) {
    // This now uses the unexported global variable.
    rows, err := db.Query("SELECT * FROM books")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var bks []Book

    for rows.Next() {
        var bk Book

        err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price)
        if err != nil {
            return nil, err
        }

        bks = append(bks, bk)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }

    return bks, nil
}  
File: main.go
package main

import (
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"
)

func main() {
    // Use the InitDB function to initialise the global variable.
    err := models.InitDB("postgres://user:pass@localhost/bookstore")
    if err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/books", booksIndex)
    http.ListenAndServe(":3000", nil)
}

...             

This is a small tweak to the global variable pattern, but it gives us a few nice benefits:

  • All the database-related code now lives in a single package, including the code to set up the connection pool.
  • The global db variable is not exported, which removes the possibility of it being accidentally mutated by other packages at runtime.
  • During testing, you can reuse the InitDB() function to initialise a connection pool to your test database (by calling it from TestMain() before your tests run, as in the sketch below).
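
To illustrate that last point, here's a rough sketch of what calling InitDB() from TestMain() could look like. The test database DSN is just a placeholder:

File: models/models_test.go
package models

import (
    "log"
    "os"
    "testing"
)

func TestMain(m *testing.M) {
    // Initialise the unexported db global against a test database before
    // any tests in this package run. The DSN below is only an example.
    err := InitDB("postgres://user:pass@localhost/bookstore_test")
    if err != nil {
        log.Fatal(err)
    }

    os.Exit(m.Run())
}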

2. Dependency injection

In a more complex web application there are probably additional application-level objects that you want your handlers to have access to. For example, you might want your handlers to also have access to a shared logger, or a template cache, as well as your database connection pool.

Rather than storing all these dependencies in global variables, a neat approach is to store them in a single custom Env struct like so:

type Env struct {
    db *sql.DB
    logger *log.Logger
    templates *template.Template
}

The nice thing about this is that you can then define your handlers as methods against Env. This gives you an easy and idiomatic way of making the connection pool (and any other dependencies) available to your handlers.

Here's a full example:

File: models/models.go
package models

import (
    "database/sql"
)

type Book struct {
    Isbn   string
    Title  string
    Author string
    Price  float32
}

// Update the AllBooks function so it accepts the connection pool as a 
// parameter.
func AllBooks(db *sql.DB) ([]Book, error) {
    rows, err := db.Query("SELECT * FROM books")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var bks []Book

    for rows.Next() {
        var bk Book

        err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price)
        if err != nil {
            return nil, err
        }

        bks = append(bks, bk)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }

    return bks, nil
}     
File: main.go
package main

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"

    _ "github.com/lib/pq"
)

// Create a custom Env struct which holds a connection pool.
type Env struct {
    db *sql.DB
}

func main() {
    // Initialise the connection pool.
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
    if err != nil {
        log.Fatal(err)
    }

    // Create an instance of Env containing the connection pool.
    env := &Env{db: db}

    // Use env.booksIndex as the handler function for the /books route.
    http.HandleFunc("/books", env.booksIndex)
    http.ListenAndServe(":3000", nil)
}

// Define booksIndex as a method on Env.
func (env *Env) booksIndex(w http.ResponseWriter, r *http.Request) {
    // We can now access the connection pool directly in our handlers.
    bks, err := models.AllBooks(env.db)
    if err != nil {
        log.Print(err)
        http.Error(w, http.StatusText(500), 500)
        return
    }

    for _, bk := range bks {
        fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
    }
}                 

One of the advantages of this pattern is how clear it is to see what dependencies our handlers have and what values they take at runtime. All the dependencies for our handlers are explicitly defined in one place (the Env struct), and we can see what values they have at runtime by simply looking at how it is initialised in the main() function.

Another benefit is that any unit tests for our handlers can be completely self-contained. For example, a unit test for booksIndex() could create an Env struct containing a connection pool to a test database, then call its booksIndex() method in order to test the handler behaviour. There's no need to rely on any global variables outside of the test.
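
As a rough sketch, such a test could look something like this. It assumes that a separate bookstore_test database exists and contains known fixture data:

File: main_test.go
package main

import (
    "database/sql"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    _ "github.com/lib/pq"
)

func TestBooksIndex(t *testing.T) {
    // Connect to a test database rather than the live one. The DSN is just
    // an example.
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore_test")
    if err != nil {
        t.Fatal(err)
    }
    defer db.Close()

    // Build the Env by hand, just like main() does, but using the test pool.
    env := &Env{db: db}

    rec := httptest.NewRecorder()
    req, _ := http.NewRequest("GET", "/books", nil)

    env.booksIndex(rec, req)

    if !strings.Contains(rec.Body.String(), "Emma") {
        t.Errorf("expected response body to contain %q; got %q", "Emma", rec.Body.String())
    }
}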

In general, dependency injection in this way is quite a nice approach when:

  • There is a common set of dependencies that your handlers need access to.
  • All your HTTP handlers live in one package, but your database-related code may be spread across multiple packages.
  • You don't need to mock the database for testing purposes.

2b. Dependency injection via a closure

If you don't want to define your handlers as methods on Env, an alternative approach is to put your handler logic into a closure and close over the Env variable like so:

File: main.go
package main

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"

    _ "github.com/lib/pq"
)

type Env struct {
    db *sql.DB
}

func main() {
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
    if err != nil {
        log.Fatal(err)
    }

    env := &Env{db: db}

    // Pass the Env struct as a parameter to booksIndex().
    http.Handle("/books", booksIndex(env))
    http.ListenAndServe(":3000", nil)
}

// Use a closure to make Env available to the handler logic.
func booksIndex(env *Env) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        bks, err := models.AllBooks(env.db)
        if err != nil {
            log.Print(err)
            http.Error(w, http.StatusText(500), 500)
            return
        }

        for _, bk := range bks {
            fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
        }
    }
}    

This pattern makes our handler functions a bit more verbose, but it can be a useful technique if you want to use dependency injection when your handlers are spread across multiple packages. Here's a gist demonstrating how that can work.
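
If you don't want to click through, the core of the idea is that the handler package exports functions which accept their dependencies as parameters and return the handler as a closure over them. A rough sketch (the handlers package name is just illustrative, and the gist differs in the details) could look like this:

File: handlers/handlers.go
package handlers

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"
)

// BooksIndex accepts the dependencies it needs as parameters and returns
// the actual handler as a closure over them.
func BooksIndex(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        bks, err := models.AllBooks(db)
        if err != nil {
            log.Print(err)
            http.Error(w, http.StatusText(500), 500)
            return
        }

        for _, bk := range bks {
            fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
        }
    }
}

In main() you would then register it with http.Handle("/books", handlers.BooksIndex(db)).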

3. Wrapping the connection pool

The third pattern we'll look at uses dependency injection again, but this time we're going to wrap the sql.DB connection pool in our own custom type.

Let's jump straight in to the code:

File: models/models.go
package models

import (
	"database/sql"
)

type Book struct {
	Isbn   string
	Title  string
	Author string
	Price  float32
}

// Create a custom BookModel type which wraps the sql.DB connection pool.
type BookModel struct {
	DB *sql.DB
}

// Use a method on the custom BookModel type to run the SQL query.
func (m BookModel) All() ([]Book, error) {
	rows, err := m.DB.Query("SELECT * FROM books")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var bks []Book

	for rows.Next() {
		var bk Book

		err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price)
		if err != nil {
			return nil, err
		}

		bks = append(bks, bk)
	}
	if err = rows.Err(); err != nil {
		return nil, err
	}

	return bks, nil
}
File: main.go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	"bookstore.alexedwards.net/models"

	_ "github.com/lib/pq"
)

// This time make models.BookModel the dependency in Env.
type Env struct {
	books models.BookModel
}

func main() {
    // Initialise the connection pool as normal.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
	if err != nil {
		log.Fatal(err)
	}

    // Initialise Env with a models.BookModel instance (which in turn wraps
    // the connection pool).
	env := &Env{
		books: models.BookModel{DB: db},
	}

	http.HandleFunc("/books", env.booksIndex)
	http.ListenAndServe(":3000", nil)
}

func (env *Env) booksIndex(w http.ResponseWriter, r *http.Request) {
    // Execute the SQL query by calling the All() method.
	bks, err := env.books.All()
	if err != nil {
		log.Print(err)
		http.Error(w, http.StatusText(500), 500)
		return
	}

	for _, bk := range bks {
		fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
	}
}

At first glance this pattern might feel more confusing than the other options we've looked at — especially if you're not very familiar with Go. But it has some distinct advantages over our previous examples:

  • The database calls are succinct and read very nicely from the perspective of our handlers: env.books.All() versus the previous models.AllBooks(env.db).
  • In a complex application, your database access layer might have more dependencies than just the connection pool. This pattern allows us to store all those dependencies in the custom BookModel type, rather than having to pass them as parameters with every call (there's a short sketch of this after the list).
  • Because the database actions are now defined as methods on our custom BookModel type, it opens up the opportunity to replace any references to BookModel in our application code with an interface. And in turn, that means that we can create a mock implementation of our BookModel which can be used during testing.
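
To expand on that second point, here's a purely illustrative sketch of a BookModel that carries a logger alongside the connection pool. The InfoLog field and the Exists() method are hypothetical, not part of the running example:

File: models/models.go (illustrative variation)
package models

import (
    "database/sql"
    "log"
)

// BookModel now holds an extra dependency (a logger) alongside the pool.
type BookModel struct {
    DB      *sql.DB
    InfoLog *log.Logger
}

// Exists reports whether a book with the given ISBN is in the books table.
func (m BookModel) Exists(isbn string) (bool, error) {
    m.InfoLog.Printf("checking for book %s", isbn)

    var count int
    err := m.DB.QueryRow("SELECT COUNT(*) FROM books WHERE isbn = $1", isbn).Scan(&count)
    if err != nil {
        return false, err
    }

    return count > 0, nil
}

Handler code stays as succinct as before (env.books.Exists(isbn)), and the logger never needs to be passed around explicitly.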

The final point here is probably the most important, so let's take a look at what it could look like in practice:

File: main.go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	"bookstore.alexedwards.net/models"

	_ "github.com/lib/pq"
)

type Env struct {
    // Replace the reference to models.BookModel with an interface 
    // describing its methods instead. All the other code remains exactly 
    // the same. 
	books interface {
		All() ([]models.Book, error)
	}
}

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
	if err != nil {
		log.Fatal(err)
	}

	env := &Env{
		books: models.BookModel{DB: db},
	}

	http.HandleFunc("/books", env.booksIndex)
	http.ListenAndServe(":3000", nil)
}

func (env *Env) booksIndex(w http.ResponseWriter, r *http.Request) {
	bks, err := env.books.All()
	if err != nil {
		log.Print(err)
		http.Error(w, http.StatusText(500), 500)
		return
	}

	for _, bk := range bks {
		fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
	}
}

Once you've made that change, you should be able to create and run a unit test for the booksIndex() handler using a mockBookModel like so:

$ touch main_test.go
File: main_test.go
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"bookstore.alexedwards.net/models"
)

type mockBookModel struct{}

func (m *mockBookModel) All() ([]models.Book, error) {
	var bks []models.Book

	bks = append(bks, models.Book{"978-1503261969", "Emma", "Jayne Austen", 9.44})
	bks = append(bks, models.Book{"978-1505255607", "The Time Machine", "H. G. Wells", 5.99})

	return bks, nil
}

func TestBooksIndex(t *testing.T) {
	rec := httptest.NewRecorder()
	req, _ := http.NewRequest("GET", "/books", nil)

	env := Env{books: &mockBookModel{}}

	http.HandlerFunc(env.booksIndex).ServeHTTP(rec, req)

	expected := "978-1503261969, Emma, Jayne Austen, £9.44\n978-1505255607, The Time Machine, H. G. Wells, £5.99\n"
	if expected != rec.Body.String() {
		t.Errorf("\n...expected = %v\n...obtained = %v", expected, rec.Body.String())
	}
}
$ go test -v
=== RUN   TestBooksIndex
--- PASS: TestBooksIndex (0.00s)
PASS
ok      bookstore.alexedwards.net       0.003s

Wrapping the connection pool with a custom type and combining it with dependency injection via an Env struct is quite a nice approach when:

  • There is a common set of dependencies that your handlers need access to.
  • Your database layer has more dependencies than just the connection pool.
  • You want to mock the database during unit tests.

4. Request context

Finally let's look at using request context to store and pass around the database connection pool. Just to be clear upfront, I don't recommend using this approach, and the official documentation advises against it too:

Use context Values only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

In other words, that means request context should only be used to store values which are created during an individual request cycle and are no longer needed after the request has completed. It's not really intended to store long-lived handler dependencies like connection pools, loggers or template caches.

That said, some people do use request context in this way, and it's worth being aware of in case you ever come across it.

The pattern works like this:

File: main.go
package main

import (
    "context"
    "database/sql"
    "fmt"
    "log"
    "net/http"

    "bookstore.alexedwards.net/models"

    _ "github.com/lib/pq"
)

// Create some middleware which swaps out the existing request context
// with a new context.Context value containing the connection pool.
func injectDB(db *sql.DB, next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), "db", db)

        next.ServeHTTP(w, r.WithContext(ctx))
    }
}

func main() {
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
    if err != nil {
        log.Fatal(err)
    }

    // Wrap the booksIndex handler with the injectDB middleware,
    // passing in the new context.Context with the connection pool.
    http.Handle("/books", injectDB(db, booksIndex))
    http.ListenAndServe(":3000", nil)
}

func booksIndex(w http.ResponseWriter, r *http.Request) {
    // Pass the request context onto the database layer.
    bks, err := models.AllBooks(r.Context())
    if err != nil {
        log.Print(err)
        http.Error(w, http.StatusText(500), 500)
        return
    }

    for _, bk := range bks {
        fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.Isbn, bk.Title, bk.Author, bk.Price)
    }
}                      

Essentially, what's happening here is that the injectDB middleware replaces the request context for every request with one that contains the connection pool. Then, in our handlers, we pass the request context on to our database layer.

In the database layer, we can then retrieve the connection pool from the context and use it like this:

File: models/models.go
package models

import (
    "context"
    "database/sql"
    "errors"
)

type Book struct {
    Isbn   string
    Title  string
    Author string
    Price  float32
}

func AllBooks(ctx context.Context) ([]Book, error) {
    // Retrieve the connection pool from the context. Because ctx.Value()
    // always returns an interface{} type, we need to type assert it to a
    // *sql.DB before using it.
    db, ok := ctx.Value("db").(*sql.DB)
    if !ok {
        return nil, errors.New("could not get database connection pool from context")
    }

    rows, err := db.Query("SELECT * FROM books")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var bks []Book

    for rows.Next() {
        var bk Book

        err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price)
        if err != nil {
            return nil, err
        }

        bks = append(bks, bk)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }

    return bks, nil
}               

If you go ahead and run this code it'll work just fine. But this pattern has some big downsides:

  • Each time we retrieve the connection pool from the context we need to type assert it and check for any errors. This makes our code more verbose, and we lose the compile-time type safety that we have with the other approaches.
  • Unlike the dependency injection patterns, it's not clear to see what dependencies a function has just by looking at its signature. Instead, you have to read through the code to see what it is retrieving from the request context. In a small application this isn't a problem — but if you're trying to get to grips with a large, unfamiliar, codebase then it's not ideal.
  • It's not idiomatic Go. Using the request context in this way goes against the advice in the official documentation, and that means the pattern might be surprising or unfamiliar to other Go developers.

So, is there ever a scenario where this pattern is a good fit? It's tempting to be glib here and say "no", but the truth is that it can be an easy-ish way to pass around the connection pool if you have a sprawling codebase with handlers and database logic spread across many different packages.

But if you're considering using it for that reason, then it's probably a sign that you should refactor your codebase to have a simpler, flatter, package structure. Or, alternatively, I would suggest taking a closer look at the closure pattern we talked about earlier instead.