Tags: #go #guides
NOTE: Refer to the specification if ever confused about what the expected behaviour is.
Prefer short, conventional names:

- `i` to index.
- `r` to reader.
- `buf` to buffer.
- `cfg` to config.
- `dst`, `src` to destination, source.
- `in`, `out` when referring to stdin/stdout.
- `rx`, `tx` when dealing with channels (i.e. receiver, transmitter).
- `data` when referring to file content (whether a `string` or `[]byte`).
- `ok` instead of longer alternatives.
- `<T>Error` (example: `type ExitError struct {...}`).
- `Err<T>` (example: `var ErrFormat = errors.New("image: unknown format")`).
- `Set<T>` vs `Register<T>`.

NOTE: Refer also to https://github.com/kettanaito/naming-cheatsheet
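A minimal sketch of the error-naming conventions above (the package, type, and variable names here are hypothetical, not from the standard library):

```go
package store

import (
	"errors"
	"fmt"
)

// Err<T>: sentinel error values are prefixed with "Err".
var ErrNotFound = errors.New("store: key not found")

// <T>Error: error types are suffixed with "Error".
type DecodeError struct {
	Key string
}

func (e *DecodeError) Error() string {
	return fmt.Sprintf("store: unable to decode %q", e.Key)
}
```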
The Go standard library has no strong conventions or idioms for how to handle whitespace, so try to be concise without leaving the reader with a wall of text to digest. Additionally, you can use block syntax (`{...}`) to help group related logic:
```go
// Simple code is fine to condense the whitespace.
if ... {
	foo
	for x := range y {
		...
	}
	bar
}

// Complex code could benefit from some whitespace
// (also separate block syntax for grouping related logic).
if ... {
	...

	{
		...grouping of related logic...
	}

	...
}
```
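A concrete (if contrived) sketch of the bare-block grouping, assuming a small CLI-style program:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	level := "info"

	// Bare block: group the flag-handling logic so it reads as one unit
	// and its temporary variables don't leak into the rest of main.
	{
		verbose := len(os.Args) > 1 && os.Args[1] == "-v"
		if verbose {
			level = "debug"
		}
	}

	fmt.Println("log level:", level)
}
```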
It's not always obvious, but be wary of returning concrete types when building a package to be used as a library.

Here is an example of why this might be problematic: we had a library whose constructor returned a struct of type `*T`. The struct had methods attached, and inside those methods were API calls. We built a separate CLI that consumed the library and realised our CLI's test suite wasn't able to mock the type appropriately, as some of the fields on the struct were private and determined whether an attached method would make an API call.

The solution was to return an interface instead. This made it simple to mock the behaviours we wanted (e.g. pretend there was an API error and check how our CLI handles it).
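A minimal sketch of the idea (the package, type, and method names here are hypothetical):

```go
package api

import "context"

// Doer describes the behaviour consumers depend on; tests can
// provide their own implementation (e.g. one that always errors).
type Doer interface {
	Do(ctx context.Context, id string) error
}

// client is unexported; its private fields decide whether Do makes
// a real API call, which is exactly what made it hard to mock.
type client struct {
	apiKey  string
	baseURL string
}

func (c *client) Do(ctx context.Context, id string) error {
	// ... real HTTP/API call would live here ...
	return nil
}

// New returns the interface rather than *client, so callers (and
// their tests) depend only on the behaviour, not the concrete type.
func New(apiKey string) Doer {
	return &client{apiKey: apiKey, baseURL: "https://api.example.com"}
}
```

In a test the CLI can then swap in a stub `Doer` that returns a canned error.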
When you wrap errors your message should include:

- information the caller doesn't already have (e.g. dynamic values that only your function knows about).

And your message should NOT include:

- information the caller already has, such as the name of your function or the arguments they passed in.
Here is a BAD example where the caller of a function that fails is seeing duplicate information:
```go
// Source
func MightFail(id string) error {
	err := sqlStatement()
	if err != nil {
		return fmt.Errorf("mightFail failed with id %v because of sql: %w", id, err)
	}
	...
	return nil
}

// Caller
func business(ids []string) error {
	for _, id := range ids {
		err := MightFail(id)
		if err != nil {
			return fmt.Errorf("business failed MightFail on id %v: %w", id, err)
		}
	}
	return nil
}
```
The resolution to the above bad code is: only include information the caller doesn't have. The caller is free to annotate your errors with information such as the name of your function, the arguments they passed in, etc. There is no need for you to provide that information, as it's obvious up front. If this logic is applied consistently, you'll end up with error messages that are high-signal and to the point.
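For contrast, a sketch of the same shape with the duplication removed (the SQL statement and table are made up for illustration); each layer only adds what it alone knows:

```go
package example

import (
	"database/sql"
	"fmt"
)

// MightFail annotates the error only with context the caller can't
// know (which internal operation failed), not its own name or id.
func MightFail(db *sql.DB, id string) error {
	if _, err := db.Exec("UPDATE users SET active = TRUE WHERE id = $1", id); err != nil {
		return fmt.Errorf("update active flag: %w", err)
	}
	return nil
}

// The caller adds the context it uniquely has: which call failed
// and with which argument.
func business(db *sql.DB, ids []string) error {
	for _, id := range ids {
		if err := MightFail(db, id); err != nil {
			return fmt.Errorf("MightFail(%q): %w", id, err)
		}
	}
	return nil
}
```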
See also the article “When life gives you lemons, write better error messages”, which contrasts an example of a bad error message with a good one.
panic

- `panic` is reserved for when an error is unrecoverable.
- `bytes.Truncate` is an example: it panics when a caller violates its documented contract (truncating out of range) rather than returning an error.
- Any use of `panic` should be documented (example: `bytes.Truncate`).
- `recover` is for when you disagree with the library author's use of `panic`.
- Otherwise, avoid `panic` and return an error for the caller to handle.
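A minimal sketch of that last pair of points, converting a documented standard-library panic back into an error (the helper name is made up):

```go
package main

import (
	"bytes"
	"fmt"
)

// safeTruncate recovers from bytes.Buffer.Truncate's documented
// panic (truncation out of range) and hands the caller an error
// instead of crashing the program.
func safeTruncate(buf *bytes.Buffer, n int) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("truncate: %v", r)
		}
	}()
	buf.Truncate(n)
	return nil
}

func main() {
	var buf bytes.Buffer
	buf.WriteString("hello")
	if err := safeTruncate(&buf, 10); err != nil {
		fmt.Println(err) // the library's panic is now an ordinary error value
	}
}
```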
When taking a slice of a slice you might stumble into behaviour which appears confusing at first. The `cap`, `len` and `data` fields of the slice header might change, but the underlying array is not re-allocated nor copied over, so modifications through the new slice will modify the original backing array.

Refer to the Go language specification's section on “full slice expressions” (the `[low : high : max]` syntax) for controlling the capacity of a slice.
The underlying array is modified after updating an element on the slice:
```go
a := []int{1, 2}
b := a[:1]     /* [1] */
b[0] = 42      /* b is now [42], written through to a's backing array */
fmt.Println(a) /* [42 2] */
```
When data gets appended to `b` (a slice of the `a` slice), the underlying array has enough capacity to hold two more elements, so `append` will not re-allocate. This means that appending to `b` might not only change `a` but also `c` (another slice of the `a` slice).
```go
a := []int{1, 2, 3, 4}
b := a[:2] /* [1 2], cap 4 */
c := a[2:] /* [3 4] */

b = append(b, 5) /* writes 5 into a's backing array at index 2 */

fmt.Println(a) /* [1 2 5 4] */
fmt.Println(b) /* [1 2 5] */
fmt.Println(c) /* [5 4] */
```
The fix is `b := a[:2:2]`, which sets the capacity of the `b` slice such that `append` will cause a new array to be allocated. This means `a` will not be modified, nor will the `c` slice of `a`.
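Re-running the example above with the full slice expression shows the difference:

```go
a := []int{1, 2, 3, 4}
b := a[:2:2] /* len 2, cap 2: the next append must allocate a new array */
c := a[2:]   /* [3 4] */

b = append(b, 5) /* b now points at a brand new backing array */

fmt.Println(a) /* [1 2 3 4] */
fmt.Println(b) /* [1 2 5] */
fmt.Println(c) /* [3 4] */
```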
NOTE: there are more examples/explanations in https://blogtitle.github.io/go-slices-gotchas/
Reference articles: goinbigdata.com and dave.cheney.net.
In essence, when people say ‘pass by reference’, the point they're trying to get across is: “this isn't a copy of the value being passed”. Whereas actual pass-by-reference is a very specific behaviour (and one Go doesn't have).

All primitive/basic types (int and its variants, float and its variants, bool, string) as well as arrays and structs in Go are passed by value.

Maps and slices are effectively passed by pointer (sometimes incorrectly called pass-by-reference): a new copy of the pointer to the same underlying memory is created, so mutations made through the copy are visible to the caller.
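A small sketch of that distinction: the struct copy is unaffected, while the map shares its underlying data with the caller.

```go
package main

import "fmt"

type counter struct{ n int }

func bumpStruct(c counter)     { c.n++ }       // operates on a copy of the struct
func bumpMap(m map[string]int) { m["hits"]++ } // the copied map value points at the same data

func main() {
	c := counter{}
	m := map[string]int{}

	bumpStruct(c)
	bumpMap(m)

	fmt.Println(c.n)       // 0 — the struct was copied
	fmt.Println(m["hits"]) // 1 — the map's backing data is shared
}
```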
Go does not have pass-by-reference semantics because Go does not have ‘reference variables’ (which are something you'd find in C++).

In C++ you can create `a = 10` and then declare `b` as an alias of `a` (`int &b = a`) such that updating `b` also updates `a`. Go doesn't have this behaviour: every variable is stored in its own memory location. So if we had `b := &a` and then updated the variable `b` itself (pointed it at something else), we wouldn't cause any change to `a`.
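A sketch of that point: reassigning `b` itself never touches `a` (only writing through the pointer, `*b = ...`, would).

```go
package main

import "fmt"

func main() {
	a := 10
	b := &a // b is its own variable, holding a's address

	c := 20
	b = &c // updating b just points it somewhere else

	fmt.Println(a, *b) // 10 20 — a is unchanged
}
```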
When we define a function that accepts a pointer (e.g. `changeName(p *Person)`) and we pass a pointer to it (e.g. `changeName(&person)`), the `person` variable is modified inside the `changeName` function. This happens because `&person` and `p` are two different pointers to the same struct, which is stored at the same memory address. This is quite different to C++'s reference variables.
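Spelling that out as code (the `Person` type is the hypothetical one from the prose):

```go
package main

import "fmt"

type Person struct{ Name string }

// p is a copy of the pointer passed in, but both copies point at
// the same Person, so the write is visible to the caller.
func changeName(p *Person) {
	p.Name = "Alice"
}

func main() {
	person := Person{Name: "Bob"}
	changeName(&person)
	fmt.Println(person.Name) // Alice
}
```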
Your functions should have concise, relevant arguments passed in.

Don't, for example, pass in a large object that the function then has to know the internal structure of, as that violates the Law of Demeter. Instead pass in the specific field the function needs, as it'll likely have a simpler type (like a `string` or `int`).
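A sketch of the difference (the config type and its fields are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

type Config struct {
	Server struct {
		Host string
		Port int
	}
	Timeout time.Duration
}

// Avoid: the function has to know how Config is structured internally.
func dialKnowsTooMuch(cfg Config) string {
	return fmt.Sprintf("%s:%d", cfg.Server.Host, cfg.Server.Port)
}

// Prefer: pass only what the function actually needs.
func dial(host string, port int) string {
	return fmt.Sprintf("%s:%d", host, port)
}

func main() {
	var cfg Config
	cfg.Server.Host, cfg.Server.Port = "localhost", 8080

	fmt.Println(dialKnowsTooMuch(cfg))
	fmt.Println(dial(cfg.Server.Host, cfg.Server.Port))
}
```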
Three approaches to dealing with functions that could potentially have a large number of arguments…

1. Pass the required arguments individually.
2. Pass a single `<T>Options` struct.
3. Accept a variable number of ‘functional options’.

I would say go with option 1 whenever possible, and almost never choose option 2 over option 3, as the latter is much more flexible.

The problem with option 2 is that it can become quite cumbersome to construct an object with lots of fields, and more importantly it can be hard to know which fields are required and which are optional. Yes, it's nice that you can easily omit optional fields, but option 3 also provides that benefit while solving the problem of knowing which arguments are required vs optional.

Using option 3 can be helpful when you want to make the function signature clear, by accepting a couple of concrete arguments that are required for the function to work, while shifting optional arguments into separate functions, as demonstrated below…
```go
type Client struct {
	host, proxy string
	port        int
}

type Option func(*Client) // call this function to apply the option

func WithPort(port int) Option {
	return func(c *Client) { c.port = port }
}

func WithProxy(proxy string) Option {
	return func(c *Client) { c.proxy = proxy }
}

func NewClient(host string, options ...Option) *Client {
	c := &Client{host: host, port: 80} // default values

	for _, option := range options {
		option(c) // apply the options by calling each one of them
	}

	return c
}
```
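Callers then pass only the options they care about (the host and proxy values below are made up):

```go
// Required host up front, optional settings as functional options.
c := NewClient("example.com", WithPort(8080), WithProxy("proxy.internal:3128"))

// Or rely entirely on the defaults.
d := NewClient("example.com")

fmt.Println(c.port, d.port) // 8080 80
```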