MySQL, Database table history using Schemas and triggers

Assume that we have to trace every intervention on a database called "stats".

As a result we will have a new database with the "history_" prefix, which will contain the same tables as our source database "stats" but with a different column definition: four new columns are added.

  1. history_auto_increment: an auto-increment primary key.
  2. history_timestamp: when the operation happened.
  3. history_user: the MySQL user who made the operation.
  4. history_operation: the operation itself.

Creating the history database:

To continue, we need to be sure that the MySQL user has all the privileges required to create databases and triggers.
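As a minimal sketch, a grant like the following would cover what we need (the user name and host here are placeholders; adapt them to your setup):

```sql
-- Hypothetical user: adjust the name and host to your environment.
GRANT CREATE, TRIGGER, SELECT, INSERT ON *.* TO 'history_admin'@'localhost';
FLUSH PRIVILEGES;
```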

Now we will create the database :

SELECT   concat(
    'CREATE SCHEMA',char(10)
,   'IF NOT EXISTS history_',s.schema_name,char(10)
,   'DEFAULT CHARACTER SET ',s.default_character_set_name,char(10)
,   'DEFAULT COLLATE ',s.default_collation_name,char(10)
,   ';',char(10)
,   'USE history_',s.schema_name,char(10)
,   ';',char(10)
)
FROM     information_schema.schemata s
WHERE    s.schema_name = SCHEMA();
schema_name is the source database name; default_character_set_name is its current encoding.

A question is: why are we using such a complicated query to create a simple database? The answer is that in some cases we may want to create several history databases, and with this query we avoid rewriting the statements by hand!

This is the result of the query:

CREATE SCHEMA
IF NOT EXISTS history_stats
DEFAULT CHARACTER SET utf8
DEFAULT COLLATE utf8_general_ci
;
USE history_stats
;

As we can see, it is SQL code. At this point we have not created anything yet: we are only generating SQL code to copy and paste into the mysql command line (for example).

Creating history tables:

History tables contain the same column definitions as the source tables, plus some additional information:

  history_auto_increment: auto-increment primary key;
  history_timestamp: timestamp of the operation;
  history_user: the MySQL user who performed the operation;
  history_operation: the operation itself: enum('DELETE','INSERT','UPDATE').

Important: history_user is always the same here, but some systems have multiple MySQL users, so we keep it in case it becomes useful later.

SELECT       concat(
'CREATE TABLE ',tables.table_name,'(',char(10)
,   '    history_auto_increment',char(10)
,   '        int unsigned',char(10)
,   '        not null',char(10)
,   '        auto_increment',char(10)
,   '        primary key',char(10)
,   ',   history_timestamp',char(10)
,   '        timestamp',char(10)
,   '        not null',char(10)
,   ',   history_user',char(10)
,   '        varchar(16)',char(10)
,   '        not null',char(10)
,   ',   history_operation',char(10)
,   '        enum(''DELETE'',''INSERT'',''UPDATE'')',char(10)
,   '        not null',char(10)
,   ','
,   group_concat(
        '   '
    ,   columns.column_name,' '
    ,   columns.column_type,' '
    ,   case
            when columns.character_set_name is null then ''
            else concat(
                ' CHARACTER SET ',columns.character_set_name
            ,   ' COLLATE ',columns.collation_name
            )
        end
    ,   case columns.is_nullable
            when 'NO' then ' NOT NULL'
            else ''
        end
    ,   char(10)
        order by columns.ordinal_position
        separator ','
    )
,   ')',char(10)
,   'ENGINE='
,   case tables.engine
        when 'InnoDB' then 'InnoDB'
        else 'MyISAM'
    end
,   char(10)
,   'DEFAULT CHARACTER SET ',collations.character_set_name,char(10)
,   'COLLATE ',tables.table_collation,char(10)
,   ';',char(10)
)
FROM         information_schema.tables
INNER JOIN   information_schema.columns
ON           tables.table_schema    = columns.table_schema
AND          tables.table_name      = columns.table_name
INNER JOIN   information_schema.collations
ON           tables.table_collation = collations.collation_name
WHERE        tables.table_schema    = SCHEMA()
AND          tables.table_type      = 'BASE TABLE'
GROUP BY     tables.table_name
,            tables.engine
,            tables.table_collation
,            collations.character_set_name;

This is an example of the result of the Query :

CREATE TABLE reporting_indicator(
    history_auto_increment
        int unsigned
        not null
        auto_increment
        primary key
,   history_timestamp
        timestamp
        not null
,   history_user
        varchar(16)
        not null
,   history_operation
        enum('DELETE','INSERT','UPDATE')
        not null
,   id_reporting int(3) unsigned  NOT NULL
,   id_indicator int(3) unsigned  NOT NULL
)
ENGINE=InnoDB
DEFAULT CHARACTER SET utf8
COLLATE utf8_general_ci;

At this point we have all the tables ready to use! But when a table contains many columns, the generated SQL can be truncated. To avoid this, increase the value of group_concat_max_len and, if that is not enough, max_allowed_packet:

SET [GLOBAL | SESSION] group_concat_max_len = val;
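For example, to raise the limit for the current session (1000000 is an arbitrary value here; pick one large enough for your widest table):

```sql
SET SESSION group_concat_max_len = 1000000;
-- If the generated statements are still cut off, raise the packet limit too
-- (global change, requires the SUPER privilege):
SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB
```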

Creating Triggers.

Insertions into the history tables can be made in two ways: in the application, which means programming every insertion manually... or automatically, using the powerful trigger mechanism:

select     concat(
'create trigger air_',tables.table_name,char(10)
,   'after insert on ',tables.table_schema,'.',tables.table_name,char(10)
,   'for each row',char(10)
,   'begin',char(10)
,   '    insert',char(10)
,   '    into    history_',tables.table_schema,'.',tables.table_name,'(',char(10)
,   '            history_timestamp',char(10)
,   '    ,       history_user',char(10)
,   '    ,       history_operation',char(10)
,   '    ,'
,   group_concat(
        '       '
    ,   columns.column_name
    ,   char(10)
        order by columns.ordinal_position
        separator '    ,'
    )
,   '    ) values (',char(10)
,   '            CURRENT_TIMESTAMP',char(10)
,   '    ,       CURRENT_USER',char(10)
,   '    ,       ''INSERT''',char(10)
,   '    ,'
,   group_concat(
        '       '
    ,   'new.',columns.column_name
    ,   char(10)
        order by columns.ordinal_position
        separator '    ,'
    )
,   '    );',char(10)
,   'end',char(10)
,   '$$',char(10)
)
from       information_schema.tables
inner join information_schema.columns
on         tables.table_schema        = columns.table_schema
and        tables.table_name          = columns.table_name
where      tables.table_schema        = SCHEMA()
and        tables.table_type          = 'BASE TABLE'
group by   tables.table_schema
,          tables.table_name;

And here is an example of generated SQL code :

create trigger aur_reporting_indicator
after update on stats.reporting_indicator
for each row
begin
    insert
    into    history_stats.reporting_indicator(
            history_timestamp
    ,       history_user
    ,       history_operation
    ,       id_reporting
    ,       id_indicator
    ) values (
            CURRENT_TIMESTAMP
    ,       CURRENT_USER
    ,       'UPDATE'
    ,       new.id_reporting
    ,       new.id_indicator
    );
end
$$

To create triggers for the other operations, we have to modify the generating query with the correct trigger name prefix and operation name (UPDATE, DELETE). Note that for DELETE the row values must be read from old.* instead of new.*, since there is no new row.
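For example, a DELETE trigger for the table from the previous examples would look like this (a sketch following the same naming convention, with an adr_ prefix for "after delete row"):

```sql
create trigger adr_reporting_indicator
after delete on stats.reporting_indicator
for each row
begin
    insert
    into    history_stats.reporting_indicator(
            history_timestamp
    ,       history_user
    ,       history_operation
    ,       id_reporting
    ,       id_indicator
    ) values (
            CURRENT_TIMESTAMP
    ,       CURRENT_USER
    ,       'DELETE'
    ,       old.id_reporting   -- old.* : values of the row being deleted
    ,       old.id_indicator
    );
end
$$
```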

Useful commands :


– Generate query to drop all triggers :

select     concat('DROP TRIGGER stats.air_',tables.table_name, ';',char(10))
from       information_schema.tables
where      tables.table_schema        = SCHEMA();
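The generated code looks like this, one DROP per table (note that the statement fails for a table that never had the trigger, so remove those lines before running it):

```sql
DROP TRIGGER stats.air_reporting_indicator;
```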

8 Replies to “MySQL, Database table history using Schemas and triggers”

  1. Not bad, but how do you approach the maintenance of these ?

    Seeing how you use the mysql metadata tables, you could probably consider adding a trigger on some of these that would automatically update the triggers on your tables when they are modified (i.e. field type change, field add, etc.)


  2. hi,
    I googled and didn't find another solution for tracking database changes, so this seems to be the only way to implement it.
    I have an enterprise database with 450 tables and I want to track changes of tables in a separate database with the same structure (adding some new fields). One problem is that changing a table structure in the main database means changing the history database structure too.
    I need to create 430*3 = 1290 triggers;
    the other problem: MariaDB and MySQL do not support multiple triggers of the same type on one table.
    by the way your idea is very good.

  3. I forgot explain my main problem.
    I have a users table in my database and my application always logs in with a single database user and password, so how can I record the application user with a trigger? If I add add_user and chg_user columns to the tables in my main database, I think this solves my problem for insert and update operations, but for delete I can't find a solution.

