
Introduction to BoltDB


Why use BoltDB

Bolt plays well with Go's concurrency model: multiple reads can run concurrently.

For pros and cons, I will contrast it with a similar database, SQLite, which I used when I worked at Xentrix developing GUI tools for artists.

Pro

  1. It is native Go.

This means you don't need a database server running, just like SQLite, and as opposed to Redis and memcached.

  2. Cross-compilation

One thing I see mentioned a lot on the internet about Bolt is its ability to cross-compile.

  3. Concurrent reads.

Read operations do not lock the database. Writes, however, are serialized: one write transaction must finish before the next can proceed.

  4. Available on platforms where Go is available.

One thing to note: BoltDB is a key-value store, unlike SQLite, which is an RDBMS.

Cons

  • Go only.

Because Bolt is native Go, you can't use it from other languages. In contrast, SQLite has bindings for roughly two dozen languages.

If you are here, you are probably curious about how to start using Bolt in your personal projects. BoltDB is also used in production at Shopify and HashiCorp (the company behind Consul). Bolt is as reliable as its underlying infrastructure.

Buckets

Bolt stores data in buckets. A bucket is similar to a table in an RDBMS, and to a collection in document-based stores.

With that covered, the first thing we can do is connect to the database.

Connecting to a Database

db, err := bolt.Open("bolt.db", 0600, nil)
if err != nil {
    log.Fatal(err)
}
defer db.Close()

From the docs:

// Open creates and opens a database at the given path.
// If the file does not exist then it will be created automatically.
// Passing in nil options will cause Bolt to open the database with the default options.
func Open(path string, mode os.FileMode, options *Options) (*DB, error) {

The third argument to bolt.Open is Options. The most widely used option is Timeout. From the documentation:

// Timeout is the amount of time to wait to obtain a file lock.
// When set to zero it will wait indefinitely. This option is only
// available on Darwin and Linux.
Timeout time.Duration
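For instance, opening the same database with a one-second lock timeout might look like this (a sketch; the file name is carried over from the earlier example):

```go
// Fail with an error if another process holds the file lock for
// more than a second, instead of blocking indefinitely.
db, err := bolt.Open("bolt.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
if err != nil {
    log.Fatal(err)
}
defer db.Close()
```

This is handy in development, where a forgotten second process would otherwise make the program hang silently on startup.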

Initialize the Database

Before we start writing to the database, we need to set things up: we must make sure the bucket exists. We're gonna do that in our first transaction.

db.Update(func(tx *bolt.Tx) error {
    _, err := tx.CreateBucketIfNotExists([]byte("todo"))
    if err != nil {
        return fmt.Errorf("create bucket: %s", err)
    }
    // returning nil commits the transaction
    return nil
})

Inside the closure, you have a consistent view of the database. You commit the transaction by returning nil at the end. You can also rollback the transaction at any point by returning an error. All database operations are allowed inside a read-write transaction (the Update func).

Writing and Updating to the Database

The same db.Update API can also be used to write new entries to the bucket using Bucket.Put. Something like this:

db.Update(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    return b.Put([]byte("your key"), []byte("your value"))
})

Batch read-write transactions

Each DB.Update() waits for the disk to commit its writes. This overhead can be minimized by combining multiple updates with the DB.Batch() function:

db.Batch(func(tx *bolt.Tx) error {
    // ...
    return nil
})
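Batch only pays off when several goroutines call it around the same time, since Bolt can then coalesce their changes into fewer disk commits. A sketch (the "todo" bucket is assumed from earlier, and the keys are made up for illustration):

```go
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
    wg.Add(1)
    go func(n int) {
        defer wg.Done()
        // Concurrent Batch calls may be combined into a single commit.
        _ = db.Batch(func(tx *bolt.Tx) error {
            b := tx.Bucket([]byte("todo"))
            return b.Put([]byte(fmt.Sprintf("task-%d", n)), []byte("pending"))
        })
    }(i)
}
wg.Wait()
```

Called from a single goroutine, Batch behaves just like Update, so there is no benefit in that case.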

If you are looking for an autoincrement solution, you can use the Bucket.NextSequence API. There is a whole example at https://github.com/boltdb/bolt#autoincrementing-integer-for-the-bucket

As we know, Bolt stores keys and values as byte slices. Converting to byte slices is harder for some types than others: strings round-trip easily with []byte(str) and string(byt), but integers need some special care. This helper is from the docs:

// itob returns an 8-byte big endian representation of v.
func itob(v int) []byte {
    b := make([]byte, 8)
    binary.BigEndian.PutUint64(b, uint64(v))
    return b
}

The inverse of this function would be:

func btoi(b []byte) int {
    return int(binary.BigEndian.Uint64(b))
}
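Pulling the two helpers together as one self-contained snippet (repeated here so it compiles on its own) also shows why big endian is the right choice: Bolt keeps keys in byte-wise sorted order, and big endian preserves numeric order for non-negative integers.

```go
import "encoding/binary"

// itob encodes an int as 8 big endian bytes (as in the Bolt docs).
// Big endian keeps byte-wise key order equal to numeric order
// for non-negative integers.
func itob(v int) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, uint64(v))
	return b
}

// btoi is the inverse of itob.
func btoi(b []byte) int {
	return int(binary.BigEndian.Uint64(b))
}
```

For example, itob(42) is the bytes 00 00 00 00 00 00 00 2a, and btoi brings them back to 42.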

Not to mention, the same Put API can be used to update existing pairs. It's CRUD, you know. 🤷
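Combining NextSequence with the itob helper above, an autoincrementing insert could look roughly like this (a sketch modeled on the upstream example; the "todo" bucket and the value are assumptions):

```go
err := db.Update(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    // NextSequence returns a monotonically increasing uint64 for this bucket.
    id, err := b.NextSequence()
    if err != nil {
        return err
    }
    // Big endian ids as keys keep the entries in insertion order.
    return b.Put(itob(int(id)), []byte("buy milk"))
})
```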

Reading from the Database

Until now we were using db.Update; for reading we'll use db.View, which supports concurrent reads.

Getting Value by Key

We can use Bucket.Get to get the value for a specific key. As always, pass a byte slice.

db.View(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    v := b.Get([]byte("your key"))
    fmt.Printf("The value field for 'your key' is: %s\n", v)
    return nil
})

The Get() function does not return an error because its operation is guaranteed to work (unless there is some kind of system failure). If the key exists then it will return its byte slice value. If it doesn’t exist then it will return nil. It’s important to note that you can have a zero-length value set to a key which is different than the key not existing.
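So to distinguish a missing key from an empty value, check specifically for nil (a sketch reusing the bucket and key from the earlier examples):

```go
db.View(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    v := b.Get([]byte("your key"))
    switch {
    case v == nil:
        fmt.Println("key does not exist")
    case len(v) == 0:
        fmt.Println("key exists with an empty value")
    default:
        fmt.Printf("value: %s\n", v)
    }
    return nil
})
```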

Iterating over the Keys in Bucket

In some situations you might want to iterate over all the items. To iterate over the keys, you first have to acquire a Bucket.Cursor. Something like this:

db.View(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    // we need cursor for iteration
    c := b.Cursor()
    for k, v := c.First(); k != nil; k, v = c.Next() {
        // do stuff with each key value pair
    }
    // should return nil to complete the transaction
    return nil
})
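If you don't need the cursor's positioning control, Bucket.ForEach is a shorter way to visit every pair (a sketch, again assuming the "todo" bucket):

```go
db.View(func(tx *bolt.Tx) error {
    b := tx.Bucket([]byte("todo"))
    // ForEach stops early and returns the first non-nil error.
    return b.ForEach(func(k, v []byte) error {
        fmt.Printf("%s = %s\n", k, v)
        return nil
    })
})
```

The cursor is still the tool of choice when you want to start from a specific key or iterate backwards.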

At last, I recommend reading the docs, as there are more goodies there. Reading the source code is also worthwhile; in fact, it is full of documentation.


Written by Santosh Kumar, Fullstack Developer at Method Studios