ipfs/kubo - v0.35.0

Published: May 21, 2025

Release Summary

This release of Kubo brings significant improvements focused on data onboarding, performance, retrieval methods, and overall configuration flexibility.

A major highlight is the dramatically improved performance of adding data, particularly large directories when the daemon is running online. For example, adding a 10GiB file now takes roughly 30 seconds, a massive reduction from the previous 24 minutes. This is paired with an optimized, dedicated queue that ensures newly added CIDs are announced to the network much faster.

Retrieval capabilities are expanded with experimental support for an opt-in HTTP retrieval client. This allows Kubo to fetch blocks over standard HTTPS connections using delegated routing results that include /tls/http multiaddrs. This can simplify infrastructure for providers and potentially leverage HTTP caching.

Managing data in the Mutable File System (MFS) becomes easier with new Reprovider.Strategy options (mfs and pinned+mfs) that allow automatically announcing data placed in MFS, removing the need for manual pinning and unpinning for updates. Additionally, MFS now includes experimental support for a read/write FUSE mount point accessible via ipfs mount.

Users gain more fine-grained control over how data is structured when added to IPFS. New ipfs add options and persistent configuration settings allow customizing UnixFS DAG shaping, such as setting maximum links per file chunk or directory, enabling optimization for different network conditions and client capabilities.

The WebUI receives a user interface improvement with the addition of a grid view on the Files screen.

Other notable changes include making datastore metrics opt-in by default to reduce overhead in typical configurations, new configuration options for Bitswap (e.g., disabling the server component) and Routing (e.g., ignoring specific providers or customizing delegated HTTP routers), and a new setting for Pebble datastore users to pin the database format version, preventing automatic upgrades for easier potential downgrades. New environment variables provide more control over log output destinations and add an optional wait time for acquiring the repository lock.

Release Notes

<a href="http://ipshipyard.com/"><img align="right" src="https://github.com/user-attachments/assets/39ed3504-bb71-47f6-9bf8-cb9a1698f272" /></a>

This release was brought to you by the [Shipyard](http://ipshipyard.com/) team.

- [πŸ”¦ Highlights](#-highlights)
  - [Opt-in HTTP Retrieval client](#opt-in-http-retrieval-client)
  - [Dedicated `Reprovider.Strategy` for MFS](#dedicated-reproviderstrategy-for-mfs)
  - [Experimental support for MFS as a FUSE mount point](#experimental-support-for-mfs-as-a-fuse-mount-point)
  - [Grid view in WebUI](#grid-view-in-webui)
  - [Enhanced DAG-Shaping Controls](#enhanced-dag-shaping-controls)
    - [New DAG-Shaping `ipfs add` Options](#new-dag-shaping-ipfs-add-options)
    - [Persistent DAG-Shaping `Import.*` Configuration](#persistent-dag-shaping-import-configuration)
    - [Updated DAG-Shaping `Import` Profiles](#updated-dag-shaping-import-profiles)
  - [`Datastore` Metrics Now Opt-In](#datastore-metrics-now-opt-in)
  - [Improved performance of data onboarding](#improved-performance-of-data-onboarding)
    - [Fast `ipfs add` in online mode](#fast-ipfs-add-in-online-mode)
    - [Optimized, dedicated queue for providing fresh CIDs](#optimized-dedicated-queue-for-providing-fresh-cids)
      - [Deprecated `ipfs stats provider`](#deprecated-ipfs-stats-provider)
  - [New `Bitswap` configuration options](#new-bitswap-configuration-options)
  - [New `Routing` configuration options](#new-routing-configuration-options)
  - [New Pebble database format config](#new-pebble-database-format-config)
  - [New environment variables](#new-environment-variables)
    - [Improved Log Output Setting](#improved-log-output-setting)
    - [New Repo Lock Optional Wait](#new-repo-lock-optional-wait)
  - [πŸ“¦οΈ Important dependency updates](#-important-dependency-updates)
- [πŸ“ Changelog](#-changelog)
- [πŸ‘¨β€πŸ‘©β€πŸ‘§β€πŸ‘¦ Contributors](#-contributors)

### Overview

This release brings significant UX and performance improvements to the data onboarding, providing, and retrieval systems.

New configuration options let you customize the shape of UnixFS DAGs generated during data import, control the scope of DAGs announced on the Amino DHT, select which delegated routing endpoints are queried, and choose whether to enable HTTP retrieval alongside Bitswap over Libp2p.

Continue reading for more details.


### πŸ”¦ Highlights

#### Opt-in HTTP Retrieval client

This release adds experimental support for retrieving blocks directly over HTTPS (HTTP/2), complementing the existing Bitswap over Libp2p.

The opt-in client enables Kubo to use [delegated routing](https://github.com/ipfs/kubo/blob/master/docs/config.md#routingdelegatedrouters) results with `/tls/http` multiaddrs, connecting to HTTPS servers that support [Trustless HTTP Gateway](https://specs.ipfs.tech/http-gateways/trustless-gateway)'s Block Responses (`?format=raw`, `application/vnd.ipld.raw`). Fetching blocks via HTTPS (HTTP/2) simplifies infrastructure and reduces costs for storage providers by leveraging HTTP caching and CDNs.

To enable this feature for testing and feedback, set:

```console
$ ipfs config --json HTTPRetrieval.Enabled true
```

See [`HTTPRetrieval`](https://github.com/ipfs/kubo/blob/master/docs/config.md#httpretrieval) for more details.

#### Dedicated `Reprovider.Strategy` for MFS

The [Mutable File System (MFS)](https://docs.ipfs.tech/concepts/glossary/#mfs) in Kubo is a UnixFS filesystem managed with [`ipfs files`](https://docs.ipfs.tech/reference/kubo/cli/#ipfs-files) commands. It supports familiar file operations like `cp` and `mv` within a folder-tree structure, automatically updating a MerkleDAG and a "root CID" that reflects the current MFS state. Files in MFS are protected from garbage collection, offering a simpler alternative to `ipfs pin`. This makes it a popular choice for tools like [IPFS Desktop](https://docs.ipfs.tech/install/ipfs-desktop/) and the [WebUI](https://github.com/ipfs/ipfs-webui/#readme).

Previously, the `pinned` reprovider strategy required manual pin management: each dataset update meant pinning the new version and unpinning the old one. Now, new strategiesβ€”`mfs` and `pinned+mfs`β€”let users limit announcements to data explicitly placed in MFS. This simplifies updating datasets and announcing only the latest version to the Amino DHT.

Users relying on the `pinned` strategy can switch to `pinned+mfs` and use MFS alone to manage updates and announcements, eliminating the need for manual pinning and unpinning. We hope this makes it easier to publish just the data that matters to you.
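As a minimal sketch, switching strategies is a single config change (strategy names come from this release; a daemon restart is assumed to be needed for the new strategy to take effect):

```console
$ ipfs config Reprovider.Strategy pinned+mfs
```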

See [`Reprovider.Strategy`](https://github.com/ipfs/kubo/blob/master/docs/config.md#reproviderstrategy) for more details.

#### Experimental support for MFS as a FUSE mount point

The MFS root (filesystem behind the `ipfs files` API) is now available as a read/write FUSE mount point at `Mounts.MFS`. This filesystem is mounted in the same way as `Mounts.IPFS` and `Mounts.IPNS` when running `ipfs mount` or `ipfs daemon --mount`.

Note that the operations supported by the MFS FUSE mountpoint are limited, since MFS doesn't store file attributes.
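A sketch of enabling the mount (the `/mfs` mount path here is an assumption for illustration; see the linked docs for the actual default and platform-specific caveats):

```console
$ ipfs config Mounts.MFS /mfs
$ mkdir -p /mfs
$ ipfs daemon --mount
```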

See [`Mounts`](https://github.com/ipfs/kubo/blob/master/docs/config.md#mounts) and [`docs/fuse.md`](https://github.com/ipfs/kubo/blob/master/docs/fuse.md) for more details.

#### Grid view in WebUI

The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for the grid view on the _Files_ screen:

> ![image](https://github.com/user-attachments/assets/80dcf0d0-8103-426f-ae91-416fb25d32b6)

#### Enhanced DAG-Shaping Controls

This release advances CIDv1 support by introducing fine-grained control over UnixFS DAG shaping during data ingestion with the `ipfs add` command.

Wider DAG trees (more links per node, higher fanout, larger thresholds) are beneficial for large files and directories with many files, reducing tree depth and lookup latency in high-latency networks, but they increase node size, straining memory and CPU on resource-constrained devices. Narrower trees (lower link count, lower fanout, smaller thresholds) are preferable for smaller directories, frequent updates, or low-power clients, minimizing overhead and ensuring compatibility, though they may increase traversal steps for very large datasets.

Kubo now allows users to act on these tradeoffs and customize the width of the DAG created by the `ipfs add` command.

##### New DAG-Shaping `ipfs add` Options

Three new options allow you to override default settings for specific import operations:

- `--max-file-links`: Sets the maximum number of child links for a single file chunk.
- `--max-directory-links`: Defines the maximum number of child entries in a "basic" (single-chunk) directory.
  - Note: Directories exceeding this limit or the `Import.UnixFSHAMTDirectorySizeThreshold` are converted to HAMT-based (sharded across multiple blocks) structures.
- `--max-hamt-fanout`: Specifies the maximum number of child nodes for HAMT internal structures.
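For a one-off import with wider DAGs, the flags can be combined on a single `ipfs add` invocation (the values and the `./dataset` path below are illustrative, not recommendations):

```console
$ ipfs add --max-file-links=1024 --max-directory-links=512 --max-hamt-fanout=1024 -r ./dataset
```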

##### Persistent DAG-Shaping `Import.*` Configuration

You can set default values for these options using the following configuration settings:
- [`Import.UnixFSFileMaxLinks`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfsfilemaxlinks)
- [`Import.UnixFSDirectoryMaxLinks`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfsdirectorymaxlinks)
- [`Import.UnixFSHAMTDirectoryMaxFanout`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfshamtdirectorymaxfanout)
- [`Import.UnixFSHAMTDirectorySizeThreshold`](https://github.com/ipfs/kubo/blob/master/docs/config.md#importunixfshamtdirectorysizethreshold)
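To make such settings persistent instead of passing flags on every `ipfs add`, the equivalent configuration can be set once. A sketch (the numeric values match the `test-cid-v1-wide` profile mentioned in this release; treat them as an example, not a recommendation):

```console
$ ipfs config --json Import.UnixFSFileMaxLinks 1024
$ ipfs config --json Import.UnixFSHAMTDirectoryMaxFanout 1024
$ ipfs config Import.UnixFSHAMTDirectorySizeThreshold 1MiB
```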

##### Updated DAG-Shaping `Import` Profiles

The release updated configuration [profiles](https://github.com/ipfs/kubo/blob/master/docs/config.md#profiles) to incorporate these new `Import.*` settings:
- Updated Profile: `test-cid-v1` now makes the current defaults explicit: `Import.UnixFSFileMaxLinks=174`, `Import.UnixFSDirectoryMaxLinks=0`, `Import.UnixFSHAMTDirectoryMaxFanout=256` and `Import.UnixFSHAMTDirectorySizeThreshold=256KiB`
- New Profile: `test-cid-v1-wide` adopts experimental directory DAG-shaping defaults, increasing the maximum file DAG width from 174 to 1024, HAMT fanout from 256 to 1024, and raising the HAMT directory sharding threshold from 256KiB to 1MiB, aligning with 1MiB file chunks.
  - Feedback: Try it out and share your thoughts at [discuss.ipfs.tech/t/should-we-profile-cids](https://discuss.ipfs.tech/t/should-we-profile-cids/18507) or [ipfs/specs#499](https://github.com/ipfs/specs/pull/499).

> [!TIP]
> Apply one of CIDv1 test [profiles](https://github.com/ipfs/kubo/blob/master/docs/config.md#profiles) with `ipfs config profile apply test-cid-v1[-wide]`.

#### `Datastore` Metrics Now Opt-In

To reduce overhead in the default configuration, datastore metrics are no longer enabled by default when initializing a Kubo repository with `ipfs init`.
Metrics prefixed with `<dsname>_datastore` (e.g., `flatfs_datastore_...`, `leveldb_datastore_...`) are not exposed unless explicitly enabled. For a complete list of affected default metrics, refer to [`prometheus_metrics_added_by_measure_profile`](https://github.com/ipfs/kubo/blob/master/test/sharness/t0119-prometheus-data/prometheus_metrics_added_by_measure_profile).

Convenience opt-in [profiles](https://github.com/ipfs/kubo/blob/master/docs/config.md#profiles) can be enabled at initialization time with `ipfs init --profile`: `flatfs-measure`, `pebbleds-measure`, `badgerds-measure`.
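For example, to initialize a new repository with flatfs and datastore metrics enabled (profile name from the list above):

```console
$ ipfs init --profile=flatfs-measure
```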

It is also possible to manually add the `measure` wrapper. See examples in [`Datastore.Spec`](https://github.com/ipfs/kubo/blob/master/docs/config.md#datastorespec) documentation.

#### Improved performance of data onboarding

This Kubo release significantly improves both the speed of ingesting data via `ipfs add` and announcing newly produced CIDs to the Amino DHT.

##### Fast `ipfs add` in online mode

Adding a large directory of data when `ipfs daemon` was running in online mode took a long time. A significant amount of this time was spent writing to and reading from the persisted provider queue. Due to this, many users had to shut down the daemon and perform data import in offline mode. This release fixes this known limitation, significantly improving the speed of `ipfs add`.

> [!IMPORTANT]
> Performing `ipfs add` of a 10GiB file used to take about 24 minutes.
> Now it takes close to 30 seconds.

Kubo v0.34:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real	0m6.464s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1000.00 MiB / 1000.00 MiB [===================================================================] 100.00%
real	1m10.542s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real	24m5.744s
```

Kubo v0.35:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real	0m0.326s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1.00 GiB / 1.00 GiB [=========================================================================] 100.00%
real	0m2.819s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real	0m28.405s
```

##### Optimized, dedicated queue for providing fresh CIDs

Since `kubo` [`v0.33.0`](https://github.com/ipfs/kubo/releases/tag/v0.33.0),
Bitswap no longer advertises newly added and received blocks to the DHT;
`boxo/provider` is responsible for both first-time provides and the recurring
reprovide logic. Prior to `v0.35.0`, provides and reprovides were handled
together in batches, leading to delays in initial advertisements (provides).

Provides and reprovides now have separate queues, allowing immediate
provides of new CIDs and optimized batching of reprovides.

###### New `Provider` configuration options

This change introduces new configuration options:

- [`Provider.Enabled`](https://github.com/ipfs/kubo/blob/master/docs/config.md#providerenabled) is a global flag for disabling both the [Provider](https://github.com/ipfs/kubo/blob/master/docs/config.md#provider) and [Reprovider](https://github.com/ipfs/kubo/blob/master/docs/config.md#reprovider) systems (announcing new and old CIDs to the Amino DHT).
- [`Provider.WorkerCount`](https://github.com/ipfs/kubo/blob/master/docs/config.md#providerworkercount) limits the number of concurrent provide operations, allowing fine-tuning of the trade-off between announcement speed and system load when announcing new CIDs.
- Removed `Experimental.StrategicProviding`. Superseded by `Provider.Enabled`, `Reprovider.Interval` and [`Reprovider.Strategy`](https://github.com/ipfs/kubo/blob/master/docs/config.md#reproviderstrategy).

> [!TIP]
> Users who need to provide large volumes of content immediately should consider setting `Routing.AcceleratedDHTClient` to `true`. If that is not enough, consider adjusting `Provider.WorkerCount` to a higher value.
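A sketch of that tuning (the worker count value is illustrative only; see the linked `Provider.WorkerCount` docs for defaults and valid ranges):

```console
$ ipfs config --json Routing.AcceleratedDHTClient true
$ ipfs config --json Provider.WorkerCount 64
```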

###### Deprecated `ipfs stats provider`

Because the `ipfs stats provider` command displayed statistics for both
provides and reprovides, it is no longer relevant now that the two queues
are separate.

The successor command is `ipfs stats reprovide`, showing the same statistics,
but for reprovides only.

> [!NOTE]
> `ipfs stats provider` still works, but is marked as deprecated and will be removed in a future release. Be mindful that the command provides only statistics about reprovides (similar to `ipfs stats reprovide`) and not the new provide queue (this will be fixed as a part of wider refactor planned for a future release).

#### New `Bitswap` configuration options

- [`Bitswap.Libp2pEnabled`](https://github.com/ipfs/kubo/blob/master/docs/config.md#bitswaplibp2penabled) determines whether Kubo will use Bitswap over libp2p (both client and server).
- [`Bitswap.ServerEnabled`](https://github.com/ipfs/kubo/blob/master/docs/config.md#bitswapserverenabled) controls whether Kubo functions as a Bitswap server to host and respond to block requests.
- [`Internal.Bitswap.ProviderSearchMaxResults`](https://github.com/ipfs/kubo/blob/master/docs/config.md#internalbitswapprovidersearchmaxresults) adjusts the maximum number of providers the Bitswap client should find before it stops searching for new ones.
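For instance, a retrieval-only node that should not serve blocks to others could disable the Bitswap server while keeping the client (a sketch based on the options above):

```console
$ ipfs config --json Bitswap.ServerEnabled false
```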

#### New `Routing` configuration options

- [`Routing.IgnoreProviders`](https://github.com/ipfs/kubo/blob/master/docs/config.md#routingignoreproviders) allows ignoring specific peer IDs when returned by the content routing system as providers of content.
  - Simplifies testing `HTTPRetrieval.Enabled` in setups where Bitswap over Libp2p and HTTP retrieval are served under different PeerIDs.
- [`Routing.DelegatedRouters`](https://github.com/ipfs/kubo/blob/master/docs/config.md#routingdelegatedrouters) allows customizing HTTP routers used by Kubo when `Routing.Type` is set to `auto` or `autoclient`.
  - Users can now adjust the default routing system and directly query custom routers for increased resiliency, or when a dataset is too big and its CIDs are not announced on the Amino DHT.

> [!TIP]
>
> For example, to use Pinata's routing endpoint in addition to IPNI at `cid.contact`:
>
> ```console
> $ ipfs config --json Routing.DelegatedRouters '["https://cid.contact","https://indexer.pinata.cloud"]'
> ```
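Similarly, `Routing.IgnoreProviders` takes a JSON list of peer IDs to skip; the ID below is a placeholder for illustration, not a real peer:

```console
$ ipfs config --json Routing.IgnoreProviders '["12D3KooWExamplePeerIdPlaceholder"]'
```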

#### New Pebble database format config

This Kubo release provides node operators with more control over [Pebble's `FormatMajorVersion`](https://github.com/cockroachdb/pebble/tree/master?tab=readme-ov-file#format-major-versions). This allows testing a new Kubo release without automatically migrating Pebble datastores, keeping the ability to switch back to older Kubo.

When IPFS is initialized to use the pebbleds datastore (opt-in via `ipfs init --profile=pebbleds`), the latest pebble database format is configured in the pebble datastore config as `"formatMajorVersion"`. Setting this in the datastore config prevents automatically upgrading to the latest available version when Kubo is upgraded. If a later version becomes available, the Kubo daemon prints a startup message to indicate this. The user can then update the config to use the latest format when they are certain a downgrade will not be necessary.

Without `"formatMajorVersion"` in the pebble datastore config, the database format is automatically upgraded to the latest version. If this happens, a downgrade to the previous version of Kubo may not work, since the new format may not be compatible with the pebble datastore in that older version.

When installing a new version of Kubo while `"formatMajorVersion"` is configured, automatic repository migration (`ipfs daemon --migrate=true`) does not upgrade it to the latest available version. This is intentional: a user may have reasons not to upgrade the pebble database format, and may want to keep the option to downgrade Kubo if something else is not working in the new version. If the pebble database format configured in the old Kubo is not supported by the new Kubo, then the configured version must be updated and the old Kubo run once, before installing the new Kubo.

See other caveats and configuration options at [`kubo/docs/datastores.md#pebbleds`](https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds)

#### New environment variables

The [`environment-variables.md`](https://github.com/ipfs/kubo/blob/master/docs/environment-variables.md) documentation was extended with two new features:

##### Improved Log Output Setting

When stderr and/or stdout outputs are specified via the `GOLOG_OUTPUT` environment variable, logs are written only to the specified output(s). For example:

- `GOLOG_OUTPUT="stderr"` logs only to stderr
- `GOLOG_OUTPUT="stdout"` logs only to stdout
- `GOLOG_OUTPUT="stderr+stdout"` logs to both stderr and stdout
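For example, to run the daemon with logs going only to stderr (standard per-process environment-variable syntax):

```console
$ GOLOG_OUTPUT="stderr" ipfs daemon
```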

##### New Repo Lock Optional Wait

The environment variable `IPFS_WAIT_REPO_LOCK` specifies the amount of time to wait for the repo lock. Set the value of this variable to a string that can be [parsed](https://pkg.go.dev/time#ParseDuration) as a Go `time.Duration`. For example:
```
IPFS_WAIT_REPO_LOCK="15s"
```

If the lock cannot be acquired because someone else has the lock, and `IPFS_WAIT_REPO_LOCK` is set to a valid value, then acquiring the lock is retried every second until the lock is acquired or the specified wait time has elapsed.
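This is handy in scripts that restart the daemon, where the old process may briefly still hold the lock (a sketch combining the variable with the daemon start):

```console
$ IPFS_WAIT_REPO_LOCK="15s" ipfs daemon
```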

#### πŸ“¦οΈ Important dependency updates

- update `boxo` to [v0.30.0](https://github.com/ipfs/boxo/releases/tag/v0.30.0)
- update `ipfs-webui` to [v4.7.0](https://github.com/ipfs/ipfs-webui/releases/tag/v4.7.0)
- update `go-ds-pebble` to [v0.5.0](https://github.com/ipfs/go-ds-pebble/releases/tag/v0.5.0)
  - update `pebble` to [v2.0.3](https://github.com/cockroachdb/pebble/releases/tag/v2.0.3)
- update `go-libp2p-pubsub` to [v0.13.1](https://github.com/libp2p/go-libp2p-pubsub/releases/tag/v0.13.1)
- update `go-libp2p-kad-dht` to [v0.33.1](https://github.com/libp2p/go-libp2p-kad-dht/releases/tag/v0.33.1) (incl. [v0.33.0](https://github.com/libp2p/go-libp2p-kad-dht/releases/tag/v0.33.0), [v0.32.0](https://github.com/libp2p/go-libp2p-kad-dht/releases/tag/v0.32.0), [v0.31.0](https://github.com/libp2p/go-libp2p-kad-dht/releases/tag/v0.31.0))
- update `go-log` to [v2.6.0](https://github.com/ipfs/go-log/releases/tag/v2.6.0)
- update `p2p-forge/client` to [v0.5.1](https://github.com/ipshipyard/p2p-forge/releases/tag/v0.5.1)

### πŸ“ Changelog

<details><summary>Full Changelog</summary>

- github.com/ipfs/kubo:
  - chore(version): 0.35.0
  - fix: go-libp2p-kad-dht v0.33.1 (#10814) ([ipfs/kubo#10814](https://github.com/ipfs/kubo/pull/10814))
  - fix: p2p-forge v0.5.1 ignoring /p2p-circuit (#10813) ([ipfs/kubo#10813](https://github.com/ipfs/kubo/pull/10813))
  - chore(version): 0.35.0-rc2
  - fix(fuse): ipns error handling and friendly errors (#10807) ([ipfs/kubo#10807](https://github.com/ipfs/kubo/pull/10807))

</details>
