DuckDB provides seamless data import and export with native support for multiple file formats. You can query files directly without creating tables first, making data analysis fast and efficient.
Native File Format Support
DuckDB natively supports reading from and writing to several file formats:

- CSV - Comma-separated values with automatic type detection
- Parquet - Columnar format for efficient analytical queries
- JSON - Newline-delimited JSON and standard JSON arrays
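For instance, each of these formats can be queried straight from a file path (a minimal sketch; the file names are hypothetical):

```sql
-- Query files directly; no CREATE TABLE needed.
SELECT * FROM 'sales.csv';       -- CSV with automatic type detection
SELECT * FROM 'events.parquet';  -- Parquet
SELECT * FROM 'logs.json';       -- newline-delimited JSON
```

DuckDB dispatches on the file extension, so the same `SELECT * FROM '<path>'` shape works for all three formats.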
Reading Data
The simplest way to read data is by referencing the file directly in your SQL query.

Writing Data with COPY

The COPY statement exports query results or table data to files:
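A minimal sketch of both forms of export, assuming a hypothetical sales table:

```sql
-- Export a whole table to Parquet
COPY sales TO 'sales.parquet' (FORMAT parquet);

-- Export the result of a query to CSV with a header row
COPY (SELECT region, SUM(amount) AS total
      FROM sales GROUP BY region)
  TO 'totals.csv' (HEADER, DELIMITER ',');
```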
Import Data with COPY
You can also use COPY to import data into existing tables:
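A minimal sketch, assuming a hypothetical sales table and a matching CSV file:

```sql
-- The target table must already exist
CREATE TABLE sales (id INTEGER, region VARCHAR, amount DOUBLE);

-- Load the file's rows into the table
COPY sales FROM 'sales.csv' (HEADER);
```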
COPY loads the file's rows into the sales table, automatically mapping columns by position or name.
Automatic Type Detection
DuckDB automatically detects data types when reading files. When detection does not match your needs, you can override individual columns with the dtypes parameter:
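A sketch of overriding one detected column type; the sales.csv file and its id column are hypothetical:

```sql
-- Automatic detection infers each column's type from the data
SELECT * FROM read_csv('sales.csv');

-- Force the id column to be read as text instead
SELECT * FROM read_csv('sales.csv',
                       dtypes = {'id': 'VARCHAR'});
```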
Multiple Files and Globs
Query multiple files at once using glob patterns.

Export Database

Export an entire database with all tables and schemas:
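Hedged sketches of both: a glob over a hypothetical directory of CSV files, and a whole-database export to a hypothetical target directory:

```sql
-- Read every CSV in a directory as one result set
SELECT * FROM 'data/2024/*.csv';

-- Write all tables and schemas out as Parquet files
EXPORT DATABASE 'backup_dir' (FORMAT parquet);
```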
Format-Specific Options

Each file format supports specific options for fine-tuned control:

- CSV Options - Delimiters, quotes, headers, null handling
- Parquet Options - Compression codecs, row group size, encryption
- JSON Options - Format detection, record handling, type inference
Performance Tips
- Use Parquet for large analytical datasets (columnar format enables efficient querying)
- Use CSV for interoperability with other tools
- Use JSON for semi-structured or nested data
- Enable Hive partitioning for partitioned datasets to skip unnecessary file reads
- Use COPY instead of INSERT for bulk data loading
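The last two tips can be sketched as follows (the dataset layout and table names are hypothetical):

```sql
-- Hive partitioning: partition columns come from the directory names,
-- and the WHERE clause lets DuckDB skip non-matching files entirely
SELECT * FROM read_parquet('orders/*/*/*.parquet',
                           hive_partitioning = true)
WHERE year = 2024;

-- Bulk-load with COPY rather than row-by-row INSERT statements
COPY orders FROM 'orders.csv' (HEADER);
```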