TRANSCRIPT
Load Data Fast!
Bill Karwin
Percona Live Open Source Database Conference 2017
Bill Karwin
Software developer, consultant, trainer
Using MySQL since 2000
Senior Database Architect at SchoolMessenger
SQL Antipatterns: Avoiding the Pitfalls of Database Programming
https://pragprog.com/titles/bksqla/sql-antipatterns
Oracle ACE Director
Load Data Fast!
Common chores:
§ Dump and restore
§ Import third-party data
§ Extract, Transform, Load (ETL)
§ Test data that needs to be reloaded repeatedly
[Image: kitten with laptop, https://commons.wikimedia.org/wiki/File:Kitten_with_laptop_-_278017185.jpg]
Is it done yet?
How to Speed This Up?
1. Query Solutions
2. Schema Solutions
3. Configuration Solutions
4. Parallel Execution Solutions
Example Table
CREATE TABLE TestTable (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  intCol INT UNSIGNED DEFAULT NULL,
  stringCol VARCHAR(100) DEFAULT NULL,
  textCol TEXT
) ENGINE=InnoDB;
Let’s load 1 million rows!
Best Case Performance
Running a test script that loops over 1 million rows without inserting into a database.
$ php test-bulk-insert.php --total-rows 1000000 --noop
This speed is the upper bound for any subsequent test.
Time: 2 seconds (00:00:02)
1000000 rows = 432435.24 rows/sec
1000000 stmt = 432435.24 stmt/sec
1000000 txns = 432435.24 txns/sec
1000000 conn = 432435.24 conn/sec
Worst Case Performance
INSERT INTO TestTable (id, intCol, stringCol, textCol)
VALUES (?, ?, ?, ?);
Run a test script that, for each row, connects, executes one INSERT, commits, and disconnects.
$ php test-bulk-insert.php --total-rows 10000
Time: 34 seconds (00:00:34)
10000 rows = 290.29 rows/sec
10000 stmt = 290.29 stmt/sec
10000 txns = 290.29 txns/sec
10000 conn = 290.29 conn/sec
Inserting One Row: Overhead
[Chart: relative overhead of Connecting, Sending query, Parsing, Inserting row, Closing query]
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
Query Solutions
Inserting One Row at a Time
INSERT INTO TestTable (id, intCol, stringCol, textCol)
VALUES (?, ?, ?, ?);
Run a test script that executes one INSERT per row and commits each one, all over a single connection.
$ php test-bulk-insert.php --total-rows 1000000 \
    --txns-per-conn 1000000
Time: 527 seconds (00:08:47)
1000000 rows = 1894.67 rows/sec
1000000 stmt = 1894.67 stmt/sec
1000000 txns = 1894.67 txns/sec
1 conn = 0.00 conn/sec
Inserting One Row: Overhead
[Chart: relative overhead of Sending query, Parsing, Inserting row, Closing query]
Inserting Multiple Rows
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?);
Q: How many rows can you insert in one statement?
A: As many as fit in max_allowed_packet bytes.
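Before batching aggressively, it can help to check and, if needed, raise that limit. A minimal sketch; 64M is just an example value, and the global change only affects connections opened afterward:

-- Check the current packet limit
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it to 64MB (example value); requires a privileged account,
-- and only new connections pick up the new global value
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;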
Inserting Multiple Rows: Overhead
[Chart: relative overhead of Sending query, Parsing, Inserting row, Closing query]
Inserting Multiple Rows: Results
$ php test-bulk-insert.php --total-rows 1000000 \
    --rows-per-stmt 100 --txns-per-conn 10000
Time: 85 seconds (00:01:25)
1000000 rows = 11680.98 rows/sec
10000 stmt = 116.81 stmt/sec
10000 txns = 116.81 txns/sec
1 conn = 0.01 conn/sec
Transactions
START TRANSACTION;
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
COMMIT;
Q: How many statements can you do in one transaction?
A: In theory this is constrained by undo log segments, but it's a lot.
Transactions: Results
$ php test-bulk-insert.php --total-rows 1000000 \
    --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100
Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec
10000 stmt = 157.45 stmt/sec
100 txns = 1.57 txns/sec
1 conn = 0.02 conn/sec
Inserting with Prepared Queries
START TRANSACTION;
PREPARE INSERT INTO TestTable …
EXECUTE …
EXECUTE …
EXECUTE …
EXECUTE …
COMMIT;
Q: How many times can you execute a given prepared statement?
A: There is no limit, as far as I can tell.
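The client library's prepared-statement API does this under the hood; the same idea expressed in MySQL's SQL-level PREPARE/EXECUTE syntax looks roughly like this (the statement name ins and the sample values are only illustrative):

PREPARE ins FROM
  'INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?)';

-- Parse once, then bind new values and execute repeatedly
SET @id = 1, @i = 42, @s = 'forty-two', @t = 'some text';
EXECUTE ins USING @id, @i, @s, @t;
SET @id = 2, @i = 43, @s = 'forty-three', @t = 'more text';
EXECUTE ins USING @id, @i, @s, @t;

DEALLOCATE PREPARE ins;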
Prepared Queries: Overhead
[Chart: relative overhead of Sending query, Parsing, Inserting row (repeated per EXECUTE), Closing query]
Prepared Queries: Results
$ php test-bulk-insert.php --total-rows 1000000 \
    --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100

Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec

$ php test-bulk-insert.php --total-rows 1000000 \
    --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 \
    --emulate-prepares

Time: 95 seconds (00:01:35)
1000000 rows = 10518.97 rows/sec
Load Data Infile
mysql> LOAD DATA LOCAL INFILE 'TestTable.csv' INTO TABLE TestTable;
https://dev.mysql.com/doc/refman/8.0/en/load-data.html
Flat-file data load in a single transaction.
Works with replication.
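The one-liner above relies on the default format (tab-separated fields, newline-terminated lines). For a comma-separated file you would spell out the format; a sketch, assuming the CSV columns match the table's column order:

LOAD DATA LOCAL INFILE 'TestTable.csv'
INTO TABLE TestTable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, intCol, stringCol, textCol);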
Overhead: Load Data Infile
[Chart: relative overhead of Sending query, Parsing, LOAD DATA INFILE, Closing query]
Load Data Infile: Results
$ php test-bulk-insert.php --total-rows 1000000 --load-data
Time: 25 seconds (00:00:25)
1000000 rows = 39563.53 rows/sec
1 stmt = 0.04 stmt/sec
1 txns = 0.04 txns/sec
1 conn = 0.04 conn/sec
Load XML Infile: Results
LOAD XML LOCAL INFILE 'TestTable.xml' INTO TABLE TestTable;
https://dev.mysql.com/doc/refman/8.0/en/load-xml.html
$ php test-bulk-insert.php --total-rows 1000000 --load-xml
Time: 77 seconds (00:01:17)
1000000 rows = 12858.16 rows/sec
1 stmt = 0.01 stmt/sec
1 txns = 0.01 txns/sec
1 conn = 0.01 conn/sec
What about Load JSON Infile?
Sorry, the hypothetical LOAD JSON INFILE is not supported by MySQL yet.
😭 But it has been proposed as a feature request:
https://bugs.mysql.com/bug.php?id=79209
Go vote for it!
Or better yet, implement it and contribute a patch!
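Until then, one possible workaround (not from the talk) is to load newline-delimited JSON into a staging table with a single JSON column via LOAD DATA, then extract fields with SQL. A rough sketch, assuming one document per line and no literal tab characters inside the JSON; the JsonStaging table is hypothetical:

CREATE TABLE JsonStaging (doc JSON NOT NULL);

LOAD DATA LOCAL INFILE 'TestTable.json'
INTO TABLE JsonStaging
LINES TERMINATED BY '\n'
(doc);

-- Requires MySQL 5.7.13+ for the ->> extraction operator
INSERT INTO TestTable (id, intCol, stringCol, textCol)
SELECT doc->>'$.id', doc->>'$.intCol', doc->>'$.stringCol', doc->>'$.textCol'
FROM JsonStaging;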
Schema Solutions
Indexes
How much overhead for one index? Two indexes?
1. mysql> ALTER TABLE TestTable ADD INDEX (intCol);
2. mysql> ALTER TABLE TestTable ADD INDEX (stringCol);
Indexes: Overhead
[Chart: relative overhead of Sending query, Parsing, Inserting row, Inserting indexes, Closing query]
Indexes: Results
$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
    --stmts-per-txn 100 --txns-per-conn 100

Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec

$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
    --stmts-per-txn 100 --txns-per-conn 100 --indexes 1

Time: 71 seconds (00:01:11)
1000000 rows = 13993.81 rows/sec

$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
    --stmts-per-txn 100 --txns-per-conn 100 --indexes 2

Time: 95 seconds (00:01:35)
1000000 rows = 10473.64 rows/sec
Index Deferral
What if we insert with no indexes, and build the indexes at the end?
§ This is what Percona's mysqldump --innodb-optimize-keys does.
§ Load time is the same as when you have no indexes:

Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec

Then create the indexes after the data load:

mysql> ALTER TABLE TestTable ADD INDEX (intCol);
Query OK, 0 rows affected (7.02 sec)

mysql> ALTER TABLE TestTable ADD INDEX (stringCol);
Query OK, 0 rows affected (8.54 sec)

This reduces the effective data load rate:

Time: 63 + 7 + 8.5 seconds (≈ 00:01:19)
1000000 rows = 12738.85 rows/sec (effective data load rate)
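To do the deferral by hand rather than through Percona's mysqldump option, drop the secondary indexes, load, then add them back. A sketch; the index names below assume MySQL's auto-generated names (verify with SHOW CREATE TABLE first):

ALTER TABLE TestTable DROP INDEX intCol, DROP INDEX stringCol;

-- ... bulk load the data here ...

ALTER TABLE TestTable ADD INDEX (intCol), ADD INDEX (stringCol);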
Triggers
How much overhead for a trigger?
mysql> CREATE TRIGGER TestTrigger
    BEFORE INSERT ON TestTable
    FOR EACH ROW
    SET NEW.stringCol = UPPER(NEW.stringCol);
This is a very simple trigger. If you have more complex code, like subordinate INSERT statements, the cost will be higher.
Triggers: Results
$ php test-bulk-insert.php --total-rows 1000000 \
    --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 \
    --trigger
Time: 69 seconds (00:01:09)
1000000 rows = 14296.91 rows/sec
10000 stmt = 142.97 stmt/sec
100 txns = 1.43 txns/sec
1 conn = 0.01 conn/sec
CSV Storage Engine
mysql> CREATE TABLE TestTable (
  id INT UNSIGNED NOT NULL,
  intCol INT UNSIGNED NOT NULL,
  stringCol VARCHAR(100) NOT NULL,
  textCol TEXT NOT NULL
) ENGINE=CSV;
# ls -l /usr/local/mysql/data/test
total 24
-rw-r----- 1 _mysql _mysql 5824 Apr 22 20:10 TestTable_429.SDI
-rw-r----- 1 _mysql _mysql   35 Apr 22 20:10 testtable.CSM
-rw-r----- 1 _mysql _mysql    0 Apr 22 20:10 testtable.CSV
CSV Storage Engine
Move the CSV file into the datadir:
# time cp data.csv /usr/local/mysql/data/test/testtable.CSV
real 0m8.359s
# ls -l /usr/local/mysql/data/test/
total 6350872
-rw-r----- 1 _mysql _mysql       5824 Apr 22 20:18 TestTable_431.SDI
-rw-r----- 1 _mysql _mysql         35 Apr 22 20:18 testtable.CSM
-rw-r----- 1 _mysql _mysql 3251630334 Apr 22 20:19 testtable.CSV
Time: 8.359 seconds (00:00:08)
1000000 rows = 119631.53 rows/sec
CSV into InnoDB Storage Engine
Use the CSV storage engine, then alter the table to InnoDB (and add a primary key):
ALTER TABLE TestTable ADD PRIMARY KEY (id), ENGINE=InnoDB;
Query OK, 1000000 rows affected (1 min 37.73 sec)
Time: 8.359 + 97.73 seconds (00:01:46)
1000000 rows = 9426.05 rows/sec (effective data load rate)
Partitioning
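One way partitioning relates to bulk loading is that rows are spread across several smaller index trees. A minimal sketch of a hash-partitioned variant of the example table; the partitioning scheme here is illustrative, not taken from the talk:

CREATE TABLE TestTablePart (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  intCol INT UNSIGNED DEFAULT NULL,
  stringCol VARCHAR(100) DEFAULT NULL,
  textCol TEXT
) ENGINE=InnoDB
PARTITION BY HASH(id) PARTITIONS 4;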
Transportable Tablespaces
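The fastest load can be no load at all: transportable tablespaces let you copy an already-built .ibd file from another server instead of re-inserting rows. A rough sketch of the standard workflow from the MySQL manual; the file-copy steps are shown as comments:

-- On the source server: quiesce the table and flush it to disk
FLUSH TABLES TestTable FOR EXPORT;
-- (copy TestTable.ibd and TestTable.cfg out of the source datadir)
UNLOCK TABLES;

-- On the destination server: the same table definition must already exist
ALTER TABLE TestTable DISCARD TABLESPACE;
-- (copy the .ibd and .cfg files into the destination datadir)
ALTER TABLE TestTable IMPORT TABLESPACE;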
Configuration Solutions
Increase Buffering, Decrease Durability
innodb_buffer_pool_size = 4G  (default 128M)
innodb_log_buffer_size = 1G  (default 16M)
innodb_log_file_size = 4G  (default 48M)
innodb_flush_log_at_trx_commit = 0  (default 1)
# log-bin = mysql-bin
Time: 56 seconds (00:00:56)
1000000 rows = 17697.29 rows/sec
Increase Buffering, Decrease Durability
Same, but at least flush the log buffer:
innodb_flush_log_at_trx_commit = 2  (default 1)
Time: 60 seconds (00:01:00)
1000000 rows = 16564.26 rows/sec
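Some of these settings can be applied without a restart for a one-off load; a sketch (innodb_log_file_size and the binary log still have to be changed in my.cnf and need a restart, and online buffer pool resizing assumes MySQL 5.7 or later):

SET GLOBAL innodb_flush_log_at_trx_commit = 0;
-- Online buffer pool resize (MySQL 5.7+); the value is in bytes
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;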
Tuning + Load Data
$ php test-bulk-insert.php --total-rows 1000000 --load-data
Time: 22 seconds (00:00:22)
1000000 rows = 43873.50 rows/sec
Config for More Buffering
innodb_buffer_pool_size = 4G  (default 128M)
Time: 82 seconds (00:01:22)
1000000 rows = 12161.69 rows/sec

innodb_change_buffering = none  (default all)
innodb_log_buffer_size = 1G  (default 16M)
Time: 81 seconds (00:01:21)
1000000 rows = 12291.17 rows/sec

binlog_cache_size = 256K  (default 32K)
Config for Greater Throughput
innodb_log_file_size = 4G  (default 48M)
Time: 80 seconds (00:01:20)
1000000 rows = 12488.30 rows/sec

innodb_io_capacity = 2000  (default 200)
Time: 80 seconds (00:01:20)
1000000 rows = 12432.38 rows/sec

innodb_lru_scan_depth = 8192  (default 1024)
Time: 81 seconds (00:01:21)
1000000 rows = 12269.61 rows/sec
Config for Lower Durability
innodb_doublewrite = OFF  (default ON)
Time: 85 seconds (00:01:25)
1000000 rows = 11740.06 rows/sec

innodb_flush_log_at_trx_commit = 0  (default 1)
Time: 84 seconds (00:01:24)
1000000 rows = 11768.51 rows/sec

# log_bin
Time: 82 seconds (00:01:22)
1000000 rows = 12087.97 rows/sec

sync_binlog = 0  (default 1)
Time: 83 seconds (00:01:23)
1000000 rows = 11906.84 rows/sec
Config for Fewer Checks
innodb_checksum_algorithm = none  (default crc32)
Time: 84 seconds (00:01:24)
1000000 rows = 11807.99 rows/sec

innodb_log_checksums = OFF  (default ON)
Time: 84 seconds (00:01:24)
1000000 rows = 11893.64 rows/sec

foreign_key_checks = 0  (default 1)
unique_checks = 0  (default 1)
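The last two are session-scoped and are commonly toggled just around a bulk load (mysqldump output does the same). A minimal sketch:

SET SESSION foreign_key_checks = 0;
SET SESSION unique_checks = 0;

-- ... bulk load here ...

SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;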
Parallel Execution Solutions
Parallel Import
mysqlimport is like LOAD DATA INFILE, but supports multi-threaded import:
$ mysqlimport --local --use-threads 4 \
    dbname table1 table2 table3 table4
Runs a fixed number of threads, imports one table per thread.
If an import finishes and there are more tables, the first available thread takes the next one.
https://dev.mysql.com/doc/refman/8.0/en/mysqlimport.html
Parallel Import
Connecting to localhost
Connecting to localhost
Connecting to localhost
Connecting to localhost
Selecting database test
Selecting database test
Selecting database test
Selecting database test
Loading data from LOCAL file: TestTable2.csv into TestTable2
Loading data from LOCAL file: TestTable3.csv into TestTable3
Loading data from LOCAL file: TestTable1.csv into TestTable1
Loading data from LOCAL file: TestTable4.csv into TestTable4
test.TestTable3: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable1: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable2: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable4: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
mysqlimport: Results
$ php test-bulk-insert.php --total-rows 1000000 --load-data \
    --use-threads 4
Time: 31 seconds (00:00:31)
1000000 rows = 32205.28 rows/sec
4 stmt = 0.13 stmt/sec
4 txns = 0.13 txns/sec
4 conn = 0.13 conn/sec
Conclusions
[Chart: rows per second for each loading method, annotated "why are you still doing this?"]
Want to Try the Tests Yourself?
The test-bulk-insert.php script is available here:
https://github.com/billkarwin/bk-tools
One Last Thing… What Was Our Solution?
We cheated:
§ Load the database once.
§ Take a filesystem snapshot.
§ Run tests.
§ Restore from snapshot.
§ Re-run tests.
§ etc.
This is not a good solution for everyone. It worked for one specific use case.
License and Copyright
Copyright 2017 Bill Karwin
http://www.slideshare.net/billkarwin
Released under a Creative Commons 3.0 License: http://creativecommons.org/licenses/by-nc-nd/3.0/
You are free to share—to copy, distribute, and transmit this work, under the following conditions:
Attribution. You must attribute this work to Bill Karwin.
Noncommercial. You may not use this work for commercial purposes.
No Derivative Works. You may not alter, transform, or build upon this work.