Cloud storage and filesystem
The filesystem destination stores data in remote file systems and cloud storage services like AWS S3, Google Cloud Storage, or Azure Blob Storage. Underneath, it uses fsspec to abstract file operations. Its primary role is to be used as a staging area for other destinations, but you can also quickly build a data lake with it.
Install dlt with filesystem
Install the dlt library with filesystem dependencies:
pip install "dlt[filesystem]"
This installs the s3fs and botocore packages.
You may also install the dependencies independently:
pip install dlt
pip install s3fs
This way, pip does not fail on backtracking.
Destination capabilities
The following table shows the capabilities of the Filesystem destination:
| Feature | Value | More |
|---|---|---|
| Preferred loader file format | jsonl | File formats |
| Supported loader file formats | jsonl, insert_values, parquet, csv, model, reference | File formats |
| Supported table formats | delta, iceberg | Table formats |
| Has case sensitive identifiers | True | Naming convention |
| Supported merge strategies | upsert, insert-only | Merge strategy |
| Supported replace strategies | truncate-and-insert, insert-from-staging | Replace strategy |
| Sqlglot dialect | duckdb | Dataset access |
| Supports tz aware datetime | True | Timestamps and Timezones |
| Supports naive datetime | True | Timestamps and Timezones |
Initialize the dlt project
Let's start by initializing a new dlt project as follows:
dlt init chess filesystem
This command will initialize your pipeline with chess as the source and the filesystem (here, an AWS S3 bucket) as the destination.
Set up the destination and credentials
AWS S3
The command above creates a sample secrets.toml and requirements file for an AWS S3 bucket. You can install those dependencies by running:
pip install -r requirements.txt
To edit the dlt credentials file with your secret info, open .dlt/secrets.toml, which looks like this:
[destination.filesystem]
bucket_url = "s3://[your_bucket_name]" # replace with your bucket name
[destination.filesystem.credentials]
aws_access_key_id = "please set me up!" # copy the access key here
aws_secret_access_key = "please set me up!" # copy the secret access key here
If you have your credentials stored in ~/.aws/credentials, just remove the [destination.filesystem.credentials] section above, and dlt will fall back to your default profile in local credentials. If you want to switch the profile, pass the profile name as follows (here: dlt-ci-user):
[destination.filesystem.credentials]
profile_name="dlt-ci-user"
You can also specify an AWS region:
[destination.filesystem.credentials]
region_name="eu-central-1"
You need to create an S3 bucket and a user who can access that bucket. dlt does not create buckets automatically.
- You can create the S3 bucket in the AWS console by clicking on "Create Bucket" in S3 and assigning the appropriate name and permissions to the bucket.
- Once the bucket is created, you'll have the bucket URL. For example, if the bucket name is dlt-ci-test-bucket, then the bucket URL will be: s3://dlt-ci-test-bucket
- To grant permissions to the user being used to access the S3 bucket, go to IAM > Users, and click on "Add Permissions".
- Below you can find a sample policy that grants the minimum permissions dlt requires on the bucket created above: listing files in the bucket and getting, putting, and deleting objects. Remember to place your bucket name in the Resource section of the policy!
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DltBucketAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:GetObjectAttributes",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::dlt-ci-test-bucket/*",
"arn:aws:s3:::dlt-ci-test-bucket"
]
}
]
}
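If you provision several buckets, the policy JSON can be generated from the bucket name. A minimal stdlib sketch (the helper name is ours, not part of dlt):

```python
import json

def dlt_bucket_policy(bucket_name: str) -> str:
    """Build the minimal S3 policy dlt needs for a given bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DltBucketAccess",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:GetObjectAttributes",
                    "s3:ListBucket",
                ],
                # object-level ARN first, bucket-level ARN second
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}/*",
                    f"arn:aws:s3:::{bucket_name}",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(dlt_bucket_policy("dlt-ci-test-bucket"))
```

Note that both ARN forms are needed: the bucket-level ARN covers s3:ListBucket, while the /* form covers the object-level actions.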
- To obtain the access and secret key for the user, go to IAM > Users and in the “Security Credentials”, click on “Create Access Key”, and preferably select “Command Line Interface” and create the access key.
- Obtain the “Access Key” and “Secret Access Key” you created; these are the values used in secrets.toml.
Using S3 compatible storage
To use S3-compatible storage other than AWS S3, such as MinIO, Cloudflare R2, or Google Cloud Storage, you may supply an endpoint_url in the config. This should be set along with AWS credentials:
[destination.filesystem]
bucket_url = "s3://[your_bucket_name]" # replace with your bucket name
[destination.filesystem.credentials]
aws_access_key_id = "please set me up!" # copy the access key here
aws_secret_access_key = "please set me up!" # copy the secret access key here
endpoint_url = "https://<account_id>.r2.cloudflarestorage.com" # copy your endpoint URL here
Adding additional configuration
To pass any additional arguments to fsspec, you may supply kwargs and client_kwargs in the TOML config:
[destination.filesystem.kwargs]
use_ssl=true
auto_mkdir=true
[destination.filesystem.client_kwargs]
verify="public.crt"
To pass additional arguments via environment variables, use a stringified dictionary:
DESTINATION__FILESYSTEM__KWARGS='{"use_ssl": true, "auto_mkdir": true}'
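dlt parses such values as JSON. To sanity-check a stringified dictionary before exporting it, you can round-trip it with the stdlib (a small sketch):

```python
import json
import os

# the same stringified dictionary you would export in your shell
os.environ["DESTINATION__FILESYSTEM__KWARGS"] = '{"use_ssl": true, "auto_mkdir": true}'

# json.loads will raise ValueError if the string is not valid JSON,
# e.g., if you used single quotes or Python's True instead of true
kwargs = json.loads(os.environ["DESTINATION__FILESYSTEM__KWARGS"])
print(kwargs)  # {'use_ssl': True, 'auto_mkdir': True}
```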
You can also override default fsspec settings used by dlt:
[destination.filesystem.kwargs]
use_listings_cache=false # listing cache disabled by default as you typically add files
listings_expiry_time=60.0
skip_instance_cache=false # instance cache enabled by default, it is thread isolated anyway
There is, however, no good reason to change these defaults except to debug fsspec internals. You could enable the listings cache, but it is not shared across the threads that dlt's load step uses to parallelize writes, so you may get unpredictable cache invalidation behavior.
Google storage
Run pip install "dlt[gs]", which will install the gcsfs package.
To edit the dlt credentials file with your secret info, open .dlt/secrets.toml. You'll see AWS credentials by default; replace them with the Google Cloud credentials you may know from the BigQuery destination:
[destination.filesystem]
bucket_url = "gs://[your_bucket_name]" # replace with your bucket name
[destination.filesystem.credentials]
project_id = "project_id" # please set me up!
private_key = "private_key" # please set me up!
client_email = "client_email" # please set me up!
Note that you can share the same credentials with BigQuery: replace the [destination.filesystem.credentials] section with the less specific [destination.credentials], which applies to both destinations.
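For example, with the credentials moved to the shared section, both the filesystem and BigQuery destinations resolve the same service account:

```toml
[destination.filesystem]
bucket_url = "gs://[your_bucket_name]" # replace with your bucket name

# less specific credentials section, shared by filesystem and bigquery
[destination.credentials]
project_id = "project_id" # please set me up!
private_key = "private_key" # please set me up!
client_email = "client_email" # please set me up!
```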
If you have default Google Cloud credentials in your environment (i.e., on cloud function), remove the credentials sections above and dlt will fall back to the available default.
Use Cloud Storage admin to create a new bucket. Then assign the Storage Object Admin role to your service account.
Azure Blob Storage
Run pip install "dlt[az]" which will install the adlfs package to interface with Azure Blob Storage.
Edit the credentials in .dlt/secrets.toml; you'll see AWS credentials by default. Replace them with your Azure credentials.
Supported schemes
dlt supports both forms of blob storage URLs:
[destination.filesystem]
bucket_url = "az://<container_name>/path" # replace with your container name and path
and
[destination.filesystem]
bucket_url = "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/path"
You can use the az, abfss, azure, and abfs URL schemes.
If you need to use a custom host for your storage account, you can set it up like below:
[destination.filesystem.credentials]
# The storage account name is always required
azure_account_host = "<storage_account_name>.<host_base>"
Remember to include the storage account name with your base host, e.g., dlt_ci.blob.core.usgovcloudapi.net. dlt will use this host to connect to Azure Blob Storage without any modifications.
For Microsoft Fabric OneLake, use the Blob endpoint (azure_account_host = "onelake.blob.fabric.microsoft.com").
IMPORTANT: OneLake bucket URLs must use GUIDs for the workspace and lakehouse, not display names:
bucket_url = "abfss://<workspace_guid>@onelake.dfs.fabric.microsoft.com/<lakehouse_guid>/Files"
You can find the GUIDs in your browser URL when viewing the workspace or lakehouse in the Fabric portal.
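Assembling the OneLake URL in code is a simple f-string over the two GUIDs (the GUIDs below are made-up placeholders, not real workspace or lakehouse IDs):

```python
# build a OneLake bucket_url from workspace and lakehouse GUIDs
# (both GUIDs are hypothetical placeholders)
workspace_guid = "11111111-2222-3333-4444-555555555555"
lakehouse_guid = "66666666-7777-8888-9999-000000000000"

bucket_url = (
    f"abfss://{workspace_guid}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse_guid}/Files"
)
print(bucket_url)
```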
Two forms of Azure credentials are supported:
SAS token credentials
Supply the storage account name and either a SAS token or a storage account key:
[destination.filesystem.credentials]
# The storage account name is always required
azure_storage_account_name = "account_name" # please set me up!
# You can set either account_key or sas_token, only one is needed
azure_storage_account_key = "account_key" # please set me up!
azure_storage_sas_token = "sas_token" # please set me up!
If you have the correct Azure credentials set up on your machine (e.g., via Azure CLI),
you can omit both azure_storage_account_key and azure_storage_sas_token and dlt will fall back to the available default.
Note that azure_storage_account_name is still required as it can't be inferred from the environment.
Service principal credentials
Supply a client ID, client secret, and a tenant ID for a service principal authorized to access your container.
[destination.filesystem.credentials]
azure_storage_account_name = "account_name" # please set me up!
azure_client_id = "client_id" # please set me up!
azure_client_secret = "client_secret"
azure_tenant_id = "tenant_id" # please set me up!
Concurrent blob uploads
dlt limits the number of concurrent connections for a single uploaded blob to 1. By default, adlfs (which dlt uses) splits blobs into 4 MB chunks and uploads them concurrently, which can lead to gigabytes of used memory and thousands of connections for larger load packages. You can increase the maximum concurrency as follows:
[destination.filesystem.kwargs]
max_concurrency=3
Hugging Face
The filesystem destination supports loading into Hugging Face Datasets using the hf:// protocol. See the Hugging Face destination page for setup and configuration details.
Local file system
If for any reason you want to store those files in a local folder, set up the bucket_url as follows (you are free to use config.toml for this, as no secrets are required):
[destination.filesystem]
bucket_url = "file:///absolute/path" # three slashes (file:///) for an absolute path
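If you construct the URL in code, pathlib produces the three-slash form for you; a small sketch (assuming a POSIX system):

```python
from pathlib import Path

# Path.as_uri() requires an absolute path and emits the file:/// form
bucket_url = Path("/var/local/data").as_uri()
print(bucket_url)  # file:///var/local/data
```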
For handling deeply nested layouts, consider enabling automatic directory creation for the local filesystem destination. This can be done by setting kwargs in secrets.toml:
[destination.filesystem]
kwargs = '{"auto_mkdir": true}'
Or by setting an environment variable:
export DESTINATION__FILESYSTEM__KWARGS='{"auto_mkdir": true}'
dlt also handles native local file paths correctly; indeed, the file:// scheme may not be intuitive, especially for Windows users.
[destination.unc_destination]
bucket_url = 'C:\a\b\c'
In the example above, we specify bucket_url using TOML's literal strings that do not require escaping of backslashes.
[destination.unc_destination]
bucket_url = '\\localhost\c$\a\b\c' # UNC equivalent of C:\a\b\c
[destination.posix_destination]
bucket_url = '/var/local/data' # absolute POSIX style path
[destination.relative_destination]
bucket_url = '_storage/data' # relative POSIX style path
In the examples above, we define a few named filesystem destinations:
- unc_destination demonstrates a Windows UNC path in native form.
- posix_destination demonstrates a native POSIX (Linux/Mac) absolute path.
- relative_destination demonstrates a native POSIX (Linux/Mac) relative path. In this case, the filesystem destination will store files in the $cwd/_storage/data path, where $cwd is your current working directory.
dlt supports Windows UNC paths with the file:// scheme. They can be specified using host or purely as a path component.
[destination.unc_with_host]
bucket_url="file://localhost/c$/a/b/c"
[destination.unc_with_path]
bucket_url="file:////localhost/c$/a/b/c"
Windows supports paths up to 255 characters. When you access a path longer than 255 characters, you'll see a FileNotFound exception.
To overcome this limit, you can use extended paths. dlt recognizes both regular and UNC extended paths.
[destination.regular_extended]
bucket_url = '\\?\C:\a\b\c'
[destination.unc_extended]
bucket_url='\\?\UNC\localhost\c$\a\b\c'
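The two extended forms above follow a mechanical rule: prefix \\?\ for drive paths and \\?\UNC\ plus the host for UNC paths. A small sketch of that rule (the helper name is ours, not part of dlt):

```python
def to_extended_path(path: str) -> str:
    """Prefix a native Windows path so it bypasses the legacy length limit."""
    if path.startswith("\\\\?\\"):
        return path  # already extended
    if path.startswith("\\\\"):
        # UNC path: \\host\share\... -> \\?\UNC\host\share\...
        return "\\\\?\\UNC\\" + path[2:]
    # regular drive path: C:\... -> \\?\C:\...
    return "\\\\?\\" + path

print(to_extended_path(r"C:\a\b\c"))              # \\?\C:\a\b\c
print(to_extended_path(r"\\localhost\c$\a\b\c"))  # \\?\UNC\localhost\c$\a\b\c
```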
SFTP
Run pip install "dlt[sftp]" which will install the paramiko package alongside dlt, enabling secure SFTP transfers.
Configure your SFTP credentials by editing the .dlt/secrets.toml file. By default, the file contains placeholders for AWS credentials. You should replace these with your SFTP credentials.
Below are the possible fields for SFTP credentials configuration:
sftp_port # The port for SFTP, defaults to 22 (standard for SSH/SFTP)
sftp_username # Your SFTP username, defaults to None
sftp_password # Your SFTP password (if using password-based auth), defaults to None
*sftp_pkey* # Your private key for key-based authentication, defaults to None
sftp_key_filename # Path to your private key file for key-based authentication, defaults to None
sftp_key_passphrase # Passphrase for your private key (if applicable), defaults to None
sftp_timeout # Timeout for establishing a connection, defaults to None
sftp_banner_timeout # Timeout for receiving the banner during authentication, defaults to None
sftp_auth_timeout # Authentication timeout, defaults to None
sftp_channel_timeout # Channel timeout for SFTP operations, defaults to None
sftp_allow_agent # Use SSH agent for key management (if available), defaults to True
sftp_look_for_keys # Search for SSH keys in the default SSH directory (~/.ssh/), defaults to True
sftp_compress # Enable compression (can improve performance over slow networks), defaults to False
*sftp_sock* # Custom socket to use for communication to target host, defaults to None
sftp_gss_auth # Use GSS-API for authentication, defaults to False
sftp_gss_kex # Use GSS-API for key exchange, defaults to False
sftp_gss_deleg_creds # Delegate credentials with GSS-API, defaults to True
sftp_gss_host # Host for GSS-API, defaults to None
sftp_gss_trust_dns # Trust DNS for GSS-API, defaults to True
*sftp_disabled_algorithms* # Disable specific algorithms for security, defaults to None
*sftp_transport_factory* # Custom transport factory, defaults to None
*sftp_auth_strategy* # Authentication strategy, defaults to None
The parameters marked with * cannot be set through .dlt/secrets.toml and must be set in code when instantiating the destination.
For more information about credentials parameters: https://docs.paramiko.org/en/3.3/api/client.html#paramiko.client.SSHClient.connect
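Conceptually, dlt strips the sftp_ prefix from these options and forwards the result as keyword arguments to SSHClient.connect. A stdlib-only sketch of that mapping (illustrative only, not dlt's actual implementation):

```python
def to_connect_kwargs(config: dict) -> dict:
    """Strip the sftp_ prefix and drop unset values, yielding
    keyword arguments in the shape SSHClient.connect expects."""
    return {
        key.removeprefix("sftp_"): value
        for key, value in config.items()
        if key.startswith("sftp_") and value is not None
    }

config = {
    "sftp_username": "foo",
    "sftp_key_filename": "/path/to/id_rsa",
    "sftp_password": None,  # unset values are dropped
}
print(to_connect_kwargs(config))  # {'username': 'foo', 'key_filename': '/path/to/id_rsa'}
```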
Authentication methods
SFTP authentication is attempted in the following order of priority:
1. Key-based authentication: If you provide a key_filename containing the path to a private key or a corresponding OpenSSH public certificate (e.g., id_rsa and id_rsa-cert.pub), these will be used for authentication. If the private key requires a passphrase, specify it via sftp_key_passphrase and it will be used to unlock the key.
2. SSH agent-based authentication: If allow_agent=True (the default), Paramiko will look for any SSH keys stored in your local SSH agent (such as id_rsa, id_dsa, or id_ecdsa keys stored in ~/.ssh/).
3. Username/password authentication: If a password is provided (sftp_password), plain username/password authentication will be attempted.
4. GSS-API authentication: If GSS-API (Kerberos) is enabled (sftp_gss_auth=True), authentication will use the Kerberos protocol. GSS-API may also be used for key exchange (sftp_gss_kex=True) and credential delegation (sftp_gss_deleg_creds=True). This method is useful in environments where Kerberos is set up, often in enterprise networks.
1. Key-based authentication
If you use an SSH key instead of a password, you can specify the path to your private key in the configuration.
[destination.filesystem]
bucket_url = "sftp://[hostname]/[path]"
file_glob = "*"
[destination.filesystem.credentials]
sftp_username = "foo"
sftp_key_filename = "/path/to/id_rsa" # Replace with the path to your private key file
sftp_key_passphrase = "your_passphrase" # Optional: passphrase for your private key
2. SSH agent-based authentication
If you have an SSH agent running with loaded keys, you can allow Paramiko to use these keys automatically. You can omit the password and key fields if you're relying on the SSH agent.
[destination.filesystem]
bucket_url = "sftp://[hostname]/[path]"
file_glob = "*"
[destination.filesystem.credentials]
sftp_username = "foo"
sftp_key_passphrase = "your_passphrase" # Optional: passphrase for your private key
The loaded key must be one of the following types stored in ~/.ssh/: id_rsa, id_dsa, or id_ecdsa.
3. Username and password authentication
This is the simplest form of authentication, where you supply a username and password directly.
[destination.filesystem]
bucket_url = "sftp://[hostname]/[path]" # The hostname of your SFTP server and the remote path
file_glob = "*" # Pattern to match the files you want to upload/download
[destination.filesystem.credentials]
sftp_username = "foo" # Replace "foo" with your SFTP username
sftp_password = "pass" # Replace "pass" with your SFTP password
Notes:
- Key-based authentication: Make sure your private key has the correct permissions (chmod 600), or SSH will refuse to use it.
- Timeouts: It's important to adjust timeout values based on your network conditions to avoid connection issues.
This configuration allows flexible SFTP authentication, whether you're using passwords, keys, or agents, and ensures secure communication between your local environment and the SFTP server.