To do interesting things with a database you need actual tables in that database.
Unfortunately this means we also need to come up with something for this hypothetical app to work with. I am choosing to go with caveman-themed examples.
migrate new
For this and other commands to work you need to be in the migrations
directory.
$ migrate new create_cave_table
------------------------------------------------------------------------
-- MyBatis Migrations - new
------------------------------------------------------------------------
Your migrations configuration did not find your custom template. Using the default template.
Creating: 20241022232155_create_cave_table.sql
Done!
------------------------------------------------------------------------
-- MyBatis Migrations SUCCESS
-- Total time: 1s
-- Finished at: Tue Oct 22 19:21:55 EDT 2024
-- Final Memory: 5M/504M
------------------------------------------------------------------------
Edit this first migration so that we are creating a schema in the up migration and dropping it in the down migration.
CREATE SCHEMA prehistoric;
-- //@UNDO
DROP SCHEMA prehistoric;
The reason to do this instead of putting tables in the default schema is simple: the default schema gets cluttered. A server of any significant size will likely end up with hundreds of tables. It's nice to at least be able to group those into broad categories.
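As a quick illustration of that grouping, Postgres's built-in pg_tables view lets you list the tables in a single schema (here, the prehistoric schema this article is about to create):

```sql
-- List every table that lives in the prehistoric schema.
SELECT tablename
FROM pg_catalog.pg_tables
WHERE schemaname = 'prehistoric'
ORDER BY tablename;
```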
Next, create the cave table in the prehistoric schema. Most tables I make in Postgres get a uuid primary key, a created_at column, and an updated_at column. After that it's whatever is unique about the data stored in the table. For our cave example, let's say that, to start, all a cave has is a description.
CREATE SCHEMA prehistoric;
CREATE TABLE prehistoric.cave(
id uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now(),
description text
);
-- //@UNDO
DROP TABLE prehistoric.cave;
DROP SCHEMA prehistoric;
updated_at
Manually setting updated_at
is error-prone. Ideally we can use a database
trigger to keep it up to date automatically.
To pull that off, we first create a function within the schema that sets a row's updated_at
to the current time, then a trigger that executes that function whenever a row changes.
CREATE SCHEMA prehistoric;
CREATE FUNCTION prehistoric.set_current_timestamp_updated_at()
RETURNS TRIGGER AS $$
DECLARE
_new record;
BEGIN
_new := NEW;
_new."updated_at" = NOW();
RETURN _new;
END;
$$ LANGUAGE plpgsql;
CREATE TABLE prehistoric.cave(
id uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now(),
description text
);
CREATE TRIGGER set_prehistoric_cave_updated_at
BEFORE UPDATE ON prehistoric.cave
FOR EACH ROW
EXECUTE PROCEDURE prehistoric.set_current_timestamp_updated_at();
-- //@UNDO
DROP TABLE prehistoric.cave;
DROP FUNCTION prehistoric.set_current_timestamp_updated_at;
DROP SCHEMA prehistoric;
We only need to create the function once per schema. We can then reuse it for any future tables with a similar shape (which will be most of them).
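A quick way to sanity-check the trigger is a hypothetical psql session (not part of the migration itself): insert a row, update it, and confirm updated_at has moved past created_at.

```sql
-- Assumes the prehistoric.cave table and its trigger from above exist.
INSERT INTO prehistoric.cave (description) VALUES ('damp but roomy');

UPDATE prehistoric.cave SET description = 'damp but roomy, great echo';

-- With each statement in its own transaction (psql's default autocommit),
-- updated_at should now be later than created_at.
SELECT updated_at > created_at AS trigger_fired FROM prehistoric.cave;
```

Note that if you run all three statements inside a single transaction, now() returns the transaction start time for both columns, so do the check with autocommit on.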
If you are creating the trigger to set updated_at
in the same migration as creating the table (which you should) then
you don't need anything special in the down migration to remove it. Just dropping the table will be enough.
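For example, a later migration adding a hypothetical tool table (not part of this article's schema, just a sketch) could reuse the existing function instead of defining a new one:

```sql
CREATE TABLE prehistoric.tool(
    id uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    updated_at timestamptz NOT NULL DEFAULT now(),
    name text NOT NULL
);

-- Reuses the function created in the earlier migration.
CREATE TRIGGER set_prehistoric_tool_updated_at
    BEFORE UPDATE ON prehistoric.tool
    FOR EACH ROW
    EXECUTE PROCEDURE prehistoric.set_current_timestamp_updated_at();

-- //@UNDO

-- Dropping the table drops its trigger; the shared function stays.
DROP TABLE prehistoric.tool;
```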
migrate up
This should create the table in the database.
$ migrate up
------------------------------------------------------------------------
-- MyBatis Migrations - up
------------------------------------------------------------------------
========== Applying: 20241022232155_create_cave_table.sql ======================
CREATE SCHEMA prehistoric;
CREATE TABLE prehistoric.cave(
id uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now(),
description text
);
CREATE FUNCTION prehistoric.set_current_timestamp_updated_at()
RETURNS TRIGGER AS $$
DECLARE
_new record;
BEGIN
_new := NEW;
_new."updated_at" = NOW();
RETURN _new;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_prehistoric_cave_updated_at
BEFORE UPDATE ON prehistoric.cave
FOR EACH ROW
EXECUTE PROCEDURE prehistoric.set_current_timestamp_updated_at();
------------------------------------------------------------------------
-- MyBatis Migrations SUCCESS
-- Total time: 0s
-- Finished at: Tue Oct 22 19:42:43 EDT 2024
-- Final Memory: 14M/504M
------------------------------------------------------------------------
migrate down
This is just to verify you wrote the down migration sensibly. You can migrate up
again after confirming this works.
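The round trip looks something like this (output omitted; migrate status shows which migrations are applied and which are pending):

```shell
migrate down     # reverts the most recently applied migration
migrate status   # create_cave_table should now show as pending
migrate up       # re-applies it
```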