Since PostgreSQL 9.2 we have been able to store and query ranges of values with specific operators and functions. This is a super nice feature that removes complexity when handling logic around two pieces of information that together represent a third one: a range.
Let's say that we have a model with price_range as one of its attributes. Instead of handling two fields like minimum_price and maximum_price, we can use price ranges like [0, 45.67], [30.04, 98.50], [100, 500] and so on. This way, we can perform queries using range operators, such as 2.5 <@ numrange(1.50, 7), which asks whether the element 2.5 is contained in the range from 1.50 to 7 (true in this example).
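As a quick sanity check, you can try the operator with a raw query straight through Postgrex (the connection options below are illustrative, adjust them to your setup):

# Illustrative: run a raw range query through the Postgrex driver.
{:ok, pid} = Postgrex.start_link(hostname: "localhost", database: "myapp_dev")

Postgrex.query!(pid, "SELECT 2.5 <@ numrange(1.50, 7)", [])
# => %Postgrex.Result{rows: [[true]], ...}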
PostgreSQL comes with the following built-in range types:

- int4range, a range of integer
- int8range, a range of bigint
- numrange, a range of numeric
- tsrange, a range of timestamp without time zone
- tstzrange, a range of timestamp with time zone
- daterange, a range of date
Since Ecto has a PostgreSQL adapter called Postgrex, it seems like it should be trivial to take advantage of the range data type, like any other common data type such as string, integer or boolean. However, that is not the case: Ecto doesn't provide an out-of-the-box range data type. The good news is that Ecto allows you to create custom data types that map to the data types available in the adapter, and it is really simple to do.
Postgrex comes with a %Postgrex.Range{} struct, which will be the reference for our custom Ecto data type. %Postgrex.Range{} accepts 4 attributes: lower, upper, lower_inclusive and upper_inclusive (both inclusive attributes are set to true by default), which maps pretty much one-to-one to how Postgres defines ranges.
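To make that concrete, the numrange [1.50, 7) would arrive from the adapter looking roughly like this (the Decimal values are what Postgrex hands back for numrange bounds):

%Postgrex.Range{
  lower: Decimal.new("1.50"),  # lower bound
  upper: Decimal.new("7"),     # upper bound
  lower_inclusive: true,       # the "[" in [1.50,7)
  upper_inclusive: false       # the ")" in [1.50,7)
}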
Let's take a look at our custom Ecto data type for ranges:
defmodule MyApp.Numrange do
  @behaviour Ecto.Type

  # Name of the underlying database type.
  def type, do: :numrange

  # Casts external input; we accept a [lower, upper] list.
  def cast([lower, upper]) do
    {:ok, [lower, upper]}
  end
  def cast(_), do: :error

  # Loads the adapter struct; a nil upper bound means an unbounded range.
  def load(%Postgrex.Range{lower: lower, upper: nil}) do
    {lower, _} = lower |> to_float
    {:ok, [lower, nil]}
  end
  def load(%Postgrex.Range{lower: lower, upper: upper}) do
    {lower, _} = lower |> to_float
    {upper, _} = upper |> to_float
    {:ok, [lower, upper]}
  end

  # Dumps our [lower, upper] list into the adapter struct.
  def dump([lower, upper]) do
    {:ok, %Postgrex.Range{lower: lower, upper: upper, upper_inclusive: false}}
  end
  def dump(_), do: :error

  # numrange bounds arrive as Decimals; convert them to floats.
  defp to_float(value) do
    value |> Decimal.to_string |> Float.parse
  end
end
Let's break down the code above.
The first thing we notice is that our custom data type module behaves like an Ecto.Type.
@behaviour Ecto.Type
That means that it expects 4 functions to be implemented (from the Ecto documentation):

- type/0 should output the name of the db type
- cast/1 should receive any type and output your custom Ecto type
- load/1 should receive the db type and output your custom Ecto type
- dump/1 should receive your custom Ecto type and output the db type

The cast/1 function defines what input the data type will receive and what to pass along to the adapter.
def cast([lower, upper]) do
  {:ok, [lower, upper]}
end
def cast(_), do: :error
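For example, in an iex session:

iex> MyApp.Numrange.cast([1.5, 7.0])
{:ok, [1.5, 7.0]}
iex> MyApp.Numrange.cast("not a range")
:error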
The load/1 function will receive a struct called %Postgrex.Range{} from the adapter. Notice that depending on the case we can have ranges using infinity (passed as nil), so we pattern match those cases. The return is a tuple whose second element is a list of 2 values. Also, any transformation can happen at this stage; in the case below I am converting the value the adapter sends, which is a Decimal, to a float.
def load(%Postgrex.Range{lower: lower, upper: nil}) do
  {lower, _} = lower |> to_float
  {:ok, [lower, nil]}
end
def load(%Postgrex.Range{lower: lower, upper: upper}) do
  {lower, _} = lower |> to_float
  {upper, _} = upper |> to_float
  {:ok, [lower, upper]}
end
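So values coming back from the adapter load like this:

iex> MyApp.Numrange.load(%Postgrex.Range{lower: Decimal.new("1.50"), upper: Decimal.new("7.0")})
{:ok, [1.5, 7.0]}
iex> MyApp.Numrange.load(%Postgrex.Range{lower: Decimal.new("100"), upper: nil})
{:ok, [100.0, nil]}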
The dump/1 function prepares the data to be sent to our adapter. In our case, since I want to override the adapter's default of an inclusive upper boundary, I set upper_inclusive to false.
def dump([lower, upper]) do
  {:ok,
   %Postgrex.Range{
     lower: lower, upper: upper, upper_inclusive: false
   }}
end
def dump(_), do: :error
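In iex, the result looks roughly like:

iex> MyApp.Numrange.dump([1.5, 7.0])
{:ok, %Postgrex.Range{lower: 1.5, upper: 7.0, lower_inclusive: true, upper_inclusive: false}}
iex> MyApp.Numrange.dump(%{lower: 1.5, upper: 7.0})
:error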
To learn more about inclusive boundaries, I suggest the PostgreSQL documentation.
Having that in place, the migration can use the new data type, as in the example below:
defmodule MyApp.Repo.Migrations.Product do
  use Ecto.Migration

  def change do
    create table(:products) do
      add :price_range, :numrange
      timestamps()
    end
  end
end
The next step is to set our model to use the custom type in its schema.
defmodule MyApp.Product do
  use MyApp.Web, :model

  schema "products" do
    field :price_range, MyApp.Numrange
    timestamps()
  end

  # Rest of the model omitted.
end
Also, part of the model's job is ensuring that the contract between the model and the custom data type is valid. In our case, a valid price_range is a two-element list, [lower, upper], where lower is a number and upper is either a number greater than or equal to lower, or nil for an unbounded range.
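As an illustration, the changeset could enforce that contract. The validate_price_range/1 helper below is hypothetical, just one possible way to write it:

def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:price_range])
  |> validate_price_range()
end

# Hypothetical helper: accepts [lower, upper] where upper may be nil
# (unbounded) and must otherwise be >= lower.
defp validate_price_range(changeset) do
  validate_change(changeset, :price_range, fn :price_range, value ->
    case value do
      [lower, nil] when is_number(lower) ->
        []
      [lower, upper] when is_number(lower) and is_number(upper) and lower <= upper ->
        []
      _ ->
        [price_range: "must be a [lower, upper] list with lower <= upper"]
    end
  end)
end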
Of course, this validation and even the contract could be different. We could use maps or structs to pass along the values if preferred.
With that in place you can take advantage of this data type and perform queries or functions over the range type. Below is an example:
defmodule MyApp.ProductQuery do
  import Ecto.Query

  alias MyApp.{Repo, Product}

  def within_price_range(price) do
    query =
      from p in Product,
        where: fragment("?::numeric <@ ?", ^price, p.price_range)

    query |> Repo.all
  end
end
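Calling it is then straightforward (an illustrative iex session; the results depend on your data):

iex> MyApp.ProductQuery.within_price_range(2.5)
# => every %MyApp.Product{} whose price_range contains 2.5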
Another nice feature of Ecto is that you can use the fragment/1 function to execute database-specific operators or functions that are not available in Ecto.Query.API.
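The same approach works for any other range operator. For instance, a hypothetical overlap query inside the same ProductQuery module, using the && operator, could look like:

# Hypothetical: products whose price_range overlaps [lower, upper).
def overlapping_price_range(lower, upper) do
  query =
    from p in Product,
      where: fragment("? && numrange(?::numeric, ?::numeric)", p.price_range, ^lower, ^upper)

  query |> Repo.all
end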
Besides all its great features and thoughtful design, Ecto provides everything we need to extend it, which makes it really powerful and enjoyable to work with.