Project author: azuqua

Project description: A knexjs relational database migrations and schema scraping library

Language: JavaScript

Project address: git://github.com/azuqua/dumptruck.git

Created: 2016-10-31T16:16:19Z

Project community: https://github.com/azuqua/dumptruck

License: MIT License



NB! This repository is currently WIP and as such is extremely unstable.

rdb-migrate

A knexjs relational database framework for safe migrations and database schema scraping,
usable either from the CLI or programmatically for automated schema changes.

Installation

```
$ npm install --save rdb-migrate
```

Index

Quickstart

Running a database migration

From the terminal:

```
$ node rdb-migrate run -c <pathToConfigDir>
```

In code:

```js
var migrate = require("rdb-migrate");

var args = {
  client: <initialized knexjs client>
};

migrate(args, function (err) {
  if (!err) {
    console.log("Migration task complete!");
  } else {
    console.log("Error in migrate task!", err);
    console.log(err.stack);
  }
});
```

Example config.json file:

```js
// <pathToConfigDir>/config.json
{
  "dialect": "pg",
  "debug": false,
  "migrationsPath": "./migrations/",
  "dumpPath": "./dump/",
  "pg": {
    "host": "127.0.0.1",
    "port": 5432,
    "pass": "",
    "user": "rdb-migrate",
    "database": "rdb-migrate"
  }
}
```

Example migration .js file:

```js
// <pathToMigrations>/helloTables.js
module.exports = function (m) {
  // Run in `safe` mode.
  // Perform the migration iff the specified tables do not exist.
  m.safe();
  m.create.table({
    table: "foo",
    timestamps: true,
    columns: [
      {
        type: "integer",
        args: ["id"]
      },
      {
        type: "string",
        args: ["fooString"]
      }
    ]
  });
  m.create.table({
    table: "bar",
    timestamps: true,
    columns: [
      {
        type: "integer",
        args: ["id"]
      },
      {
        type: "json",
        args: ["barJson"],
        default: "'{}'::json"
      }
    ]
  });
  // Create a `bar_foo` join table w/ appropriate foreign keys.
  m.create.joinTable(["foo", "bar"]);
};
```

Cloning a database schema

First generate a dump.json file by scraping your original database:

```
$ node rdb-migrate dump --configFile <pathToConfigFile-database1>
```

```js
// <configFile-database1>
{
  "dialect": "pg",
  "debug": false,
  "migrationsPath": "./migrations/",
  "dumpPath": "./dump_db_1/",
  "pg": {
    "host": "127.0.0.1",
    "port": 5432,
    "pass": "",
    "user": "db_user",
    "database": "db_1"
  }
}
```
```js
// dump file structure / template
{
  // table array
  "tables": [
    {
      // table meta-data
      "table": "table name",
      "sequences": [
        {
          // sequences on table meta-data
        }
      ],
      "columns": [
        {
          // columns on table meta-data
        }
      ],
      "indexes": [
        {
          // index on table meta-data
        }
      ],
      "primary": {
        "columns": [
          // columns in primary key
        ]
      },
      "constraints": [
        {
          // constraint meta-data
        }
      ]
    }
  ],
  "users": [
    {
      // database users
    }
  ]
  // sql dialect specific items: e.g. postgres extensions, etc.
}
```

Now run a full `--schema` migration with a config file whose `dumpPath` references the dump
directory from the first step and whose database connection points at an empty database.

```
$ node rdb-migrate run --configFile <pathToConfigFile-database2> --schema
```

```js
// <configFile-database2>
{
  "dialect": "pg",
  "debug": false,
  "migrationsPath": "./migrations/",
  "dumpPath": "./dump_db_1/",
  "pg": {
    "host": "127.0.0.1",
    "port": 5432,
    "pass": "",
    "user": "db_user",
    "database": "db_2"
  },
  // optional parameter
  "version": 9.5
}
```

Schema Migration Flow

  1. Compile migrations.
  2. Compare migrations against current schema history.
  3. Run unapplied migrations according to specified order.
  4. Update migration history.
  5. Scrape database and provide dump.json file for subsequent schema cloning.
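The five steps above can be sketched in plain JavaScript. Note that this is an illustrative sketch only: the function name `runSchemaMigration`, the migration/history arrays, and the returned dump shape are assumptions for demonstration and are not part of the rdb-migrate API.

```js
// Hypothetical sketch of the schema migration flow described above.
function runSchemaMigration(allMigrations, appliedHistory) {
  // 1. Compile migrations (here: simply order them by filename).
  var compiled = allMigrations.slice().sort();

  // 2. Compare compiled migrations against the current schema history.
  var unapplied = compiled.filter(function (name) {
    return appliedHistory.indexOf(name) === -1;
  });

  // 3. Run unapplied migrations in order, and
  // 4. record each one in the migration history.
  unapplied.forEach(function (name) {
    appliedHistory.push(name);
  });

  // 5. Scrape the database into a dump.json-style object
  //    for subsequent schema cloning.
  return {
    tables: [],
    users: [],
    applied: appliedHistory.slice()
  };
}

var dump = runSchemaMigration(
  ["001_init.js", "002_add_bar.js"],
  ["001_init.js"]
);
console.log(dump.applied);
```

After the run, only the previously unapplied migration (`002_add_bar.js`) is executed, and the history records both.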

ChangeLog

  • 0.0.1 [WIP]