Python sqlite3.Connection.iterdump Method
Last modified April 15, 2025
This comprehensive guide explores Python's sqlite3.Connection.iterdump
method, which generates SQL commands to recreate database contents. We'll cover
basic usage, practical examples, and common patterns.
Basic Definitions
The iterdump method returns an iterator that yields SQL commands as strings. These commands can recreate the database schema and data when executed.
Key characteristics: it produces valid SQL statements, preserves the complete database structure, and includes all data. It's useful for backups, migrations, and database sharing.
Basic Database Dump
Here's the simplest usage of iterdump to export a database's contents to SQL commands.
import sqlite3

# Create and populate a sample database
with sqlite3.connect('example.db') as conn:
    cursor = conn.cursor()
    cursor.execute('''CREATE TABLE users
                      (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)''')
    cursor.execute("INSERT INTO users (name, age) VALUES ('Alice', 30)")
    cursor.execute("INSERT INTO users (name, age) VALUES ('Bob', 25)")
    conn.commit()

# Dump the database contents
with sqlite3.connect('example.db') as conn:
    for line in conn.iterdump():
        print(line)
This example creates a simple database with one table and two records. The iterdump method then generates SQL commands to recreate this.
The output includes CREATE TABLE statements, INSERT commands, and other SQL needed to rebuild the database exactly as it was.
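For the database above, the generated script looks roughly like this (a sketch; the exact quoting and statement order come from SQLite itself):

BEGIN TRANSACTION;
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
INSERT INTO "users" VALUES(1,'Alice',30);
INSERT INTO "users" VALUES(2,'Bob',25);
COMMIT;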
Saving Dump to File
This example shows how to save the dump output to a file for backup purposes.
import sqlite3

# Connect to the database and save the dump to a file
with sqlite3.connect('example.db') as conn:
    with open('backup.sql', 'w') as f:
        for line in conn.iterdump():
            f.write(f"{line}\n")

print("Database backup saved to backup.sql")
The script writes each SQL command from iterdump to a text file. This file can later be used to restore the database.
This is a simple but effective backup strategy for small to medium SQLite databases. The resulting file is human-readable and portable.
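Because iterdump yields ordinary strings, it also combines naturally with the standard library's gzip module. A minimal sketch that writes the backup compressed (the name backup.sql.gz is just an example):

import gzip
import sqlite3

# 'wt' opens the compressed file in text mode so the plain strings
# from iterdump can be written directly
with sqlite3.connect('example.db') as conn:
    with gzip.open('backup.sql.gz', 'wt', encoding='utf-8') as f:
        for line in conn.iterdump():
            f.write(f"{line}\n")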
Restoring from Dump
This example demonstrates how to restore a database from a dump file created with iterdump.
import sqlite3

# Create a new database from the dump file
with sqlite3.connect('restored.db') as conn:
    cursor = conn.cursor()
    with open('backup.sql', 'r') as f:
        sql_script = f.read()
    cursor.executescript(sql_script)

print("Database restored from backup.sql")
The script reads the SQL commands from the backup file and executes them against a new database. This recreates the original database structure and data.
Note that executescript is used here as the dump file contains multiple SQL statements separated by semicolons.
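The intermediate file is optional. Since executescript accepts any string of statements, one connection's dump can be fed straight into another. A minimal sketch, assuming example.db exists and copy.db starts out empty:

import sqlite3

# Copy one database into another without an intermediate file
with sqlite3.connect('example.db') as src, \
        sqlite3.connect('copy.db') as dst:
    dst.executescript("\n".join(src.iterdump()))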
Dumping Specific Tables
This example shows how to dump only specific tables from a database.
import sqlite3

def dump_table(conn, table_name):
    cursor = conn.cursor()
    cursor.execute(f"SELECT * FROM {table_name}")
    rows = cursor.fetchall()

    # Get the stored CREATE TABLE statement from the schema table
    cursor.execute("SELECT sql FROM sqlite_master "
                   "WHERE type='table' AND name=?", (table_name,))
    create_stmt = cursor.fetchone()[0]
    yield create_stmt + ";"

    # Generate INSERT statements, leaving numbers unquoted and
    # doubling single quotes inside text values
    for row in rows:
        values = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"
            for v in row)
        yield f"INSERT INTO {table_name} VALUES ({values});"

# Example usage
with sqlite3.connect('example.db') as conn:
    for line in dump_table(conn, 'users'):
        print(line)
This custom function mimics iterdump but for a single table. It first gets the table schema, then generates INSERT statements for all rows.
This approach is useful when you only need to backup or transfer specific tables from a database rather than the entire contents.
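Note that on Python 3.13 and later, iterdump itself accepts an optional filter keyword argument, a LIKE pattern matched against object names, which covers the single-table case without custom code:

import sqlite3

# Python 3.13+ only: dump objects whose names match a LIKE pattern
with sqlite3.connect('example.db') as conn:
    for line in conn.iterdump(filter='users'):
        print(line)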
Comparing Databases
This example demonstrates how to compare two databases by diffing their iterdump outputs.
import sqlite3
import difflib

def get_dump_lines(conn):
    return list(conn.iterdump())

# Connect to both databases and collect their dumps
with sqlite3.connect('db1.db') as conn1, sqlite3.connect('db2.db') as conn2:
    dump1 = get_dump_lines(conn1)
    dump2 = get_dump_lines(conn2)

# Generate the diff; lineterm='' because iterdump lines carry no newline
diff = difflib.unified_diff(dump1, dump2, fromfile='db1.db',
                            tofile='db2.db', lineterm='')
print('\n'.join(diff))
The script generates dumps from two databases and uses Python's difflib to show differences between them. This helps identify schema or data changes.
This technique is valuable for debugging, version control, or verifying database migrations between different environments.
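When you only need to know whether two databases differ, not how, hashing each dump is cheaper than building a full diff. A minimal sketch using hashlib:

import hashlib
import sqlite3

def dump_digest(path):
    """Return a SHA-256 digest of a database's SQL dump."""
    h = hashlib.sha256()
    with sqlite3.connect(path) as conn:
        for line in conn.iterdump():
            h.update(line.encode('utf-8'))
            h.update(b'\n')
    return h.hexdigest()

print(dump_digest('db1.db') == dump_digest('db2.db'))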
In-Memory Database Dump
This example shows how to dump an in-memory database to a file using iterdump.
import sqlite3

# Create an in-memory database
with sqlite3.connect(':memory:') as mem_conn:
    cursor = mem_conn.cursor()
    cursor.execute('''CREATE TABLE test (id INTEGER, data TEXT)''')
    cursor.execute("INSERT INTO test VALUES (1, 'Memory data')")
    mem_conn.commit()

    # Dump to file while the connection (and its data) still exists
    with open('memory_dump.sql', 'w') as f:
        for line in mem_conn.iterdump():
            f.write(f"{line}\n")

print("In-memory database saved to memory_dump.sql")
The script creates a temporary in-memory database, populates it with data, then uses iterdump to save its contents to a file before it disappears.
This pattern is useful for preserving the state of temporary databases or for creating templates from dynamically generated databases.
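If you need a byte-for-byte database file rather than an SQL script, the standard library also provides Connection.backup (Python 3.7+), which copies a database directly between connections. A minimal sketch (memory_copy.db is just an example name):

import sqlite3

# Snapshot an in-memory database to a real database file on disk,
# skipping the SQL-text round trip entirely
with sqlite3.connect(':memory:') as mem_conn, \
        sqlite3.connect('memory_copy.db') as disk_conn:
    mem_conn.execute("CREATE TABLE test (id INTEGER, data TEXT)")
    mem_conn.execute("INSERT INTO test VALUES (1, 'Memory data')")
    mem_conn.backup(disk_conn)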
Dumping with Progress Tracking
This advanced example shows how to track progress while dumping a large database.
import sqlite3
import sys

def dump_with_progress(conn, output_file):
    cursor = conn.cursor()

    # Count the tables up front so progress can be shown as a percentage
    cursor.execute("SELECT COUNT(*) FROM sqlite_master WHERE type='table'")
    table_count = cursor.fetchone()[0]

    tables_done = 0
    with open(output_file, 'w') as f:
        for line in conn.iterdump():
            f.write(f"{line}\n")

            # Each CREATE TABLE statement marks the start of a new table
            if line.startswith("CREATE TABLE"):
                tables_done += 1
                progress = (tables_done / table_count) * 100
                sys.stdout.write(f"\rProgress: {progress:.1f}%")
                sys.stdout.flush()

    print("\nDump complete")

# Example usage
with sqlite3.connect('large_db.db') as conn:
    dump_with_progress(conn, 'large_dump.sql')
This script enhances the basic dump functionality by adding progress tracking. It counts tables first, then updates progress as each table is dumped.
For very large databases, this feedback is valuable to estimate remaining time and confirm the process is working correctly.
Best Practices
- Use transactions: Wrap dump operations in transactions for consistency
- Handle large dumps: Process line by line to avoid memory issues
- Validate backups: Test restoring from dumps periodically (see the round-trip sketch after this list)
- Consider compression: Compress dump files for storage efficiency
- Document dumps: Include metadata about when dumps were created
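As a concrete illustration of the validation advice above, this sketch restores a dump into a throwaway in-memory database and checks that dumping it again reproduces the original (assuming both dumps emit statements in the same order, which they normally do):

import sqlite3

# Round-trip check: dump, restore into :memory:, dump again, compare
with sqlite3.connect('example.db') as conn:
    original = list(conn.iterdump())

with sqlite3.connect(':memory:') as check:
    check.executescript("\n".join(original))
    restored = list(check.iterdump())

print("Backup validated" if original == restored else "Backup mismatch!")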