Tutorial: Setting up an IPFS peer, part IV

Peeking under the hood of IPFS daemon profiles


In today’s follow-up post in our ongoing tutorial series on setting up an IPFS peer, we’re going to go over some of the intricacies of IPFS repository (repo) profiles. In our previous posts, we essentially stuck with the defaults of the server profile. This is fine for getting started quickly, but for those looking to tweak their IPFS peer node, it is worth digging into the details a bit more. Additionally, certain profiles are more useful in certain contexts, so it’s important to figure out what will work best for your particular application.

Let’s take a look under the hood

Configuration profiles allow a user to tweak their IPFS repository configuration quickly and easily. Profiles can be applied with the --profile (or -p) flag to the ipfs init command, or with the ipfs config profile apply command. It is possible to switch between profiles even after you have initialized your IPFS repo; once a new profile has been applied, a backup of the original configuration file is created in your $IPFS_PATH.
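For example, the two approaches look something like the following. This is a sketch, and assumes you have the go-ipfs binary installed; the exact backup filename in your repo directory may vary by version:

```shell
# Apply a profile while initializing a brand-new repo:
ipfs init --profile server

# Or apply a profile to an already-initialized repo:
ipfs config profile apply lowpower

# The previous configuration file is backed up alongside the
# active config in your repo directory (~/.ipfs by default):
ls "${IPFS_PATH:-$HOME/.ipfs}"
```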

But what exactly is a profile, and what does it do? In our previous tutorials, we initialized our repo with the server profile:

ipfs init -p server

But there are other options as well. In fact, there are seven in total: server, local-discovery, test, default-networking, badgerds, default-datastore, and lowpower. So let’s briefly cover what each of these profile configurations means… under the hood.

Server

From the IPFS docs, we learn that the server profile disables local host discovery and is recommended when running IPFS on machines with public IPv4 addresses. So what exactly is happening here? Digging into the go-ipfs code a bit gives us some more information. Essentially, the server profile adds a default set of non-routable IPv4 prefixes (according to this registry) to the Addresses.NoAnnounce and Swarm.AddrFilters entries, and turns MDNS and NAT discovery off (Discovery.MDNS.Enabled = false, Swarm.DisableNatPortMap = true). That’s pretty much it. All other manually-specified config options are left unchanged.
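One quick way to see this for yourself is to query the affected entries with the ipfs config command after initializing with the server profile. This is a sketch, and the exact prefix list printed may vary between go-ipfs versions:

```shell
ipfs init -p server

# Local discovery should now be off:
ipfs config Discovery.MDNS.Enabled
ipfs config Swarm.DisableNatPortMap

# And the non-routable IPv4 prefixes should show up in both lists:
ipfs config Addresses.NoAnnounce
ipfs config Swarm.AddrFilters
```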

Local discovery

The local-discovery profile is essentially the opposite of the server profile. When applied, it restores default values for the fields affected by the server profile and enables discovery on local networks. In other words, it removes the above non-routable IPv4 prefixes from Addresses.NoAnnounce and Swarm.AddrFilters, sets Discovery.MDNS.Enabled to true, and sets Swarm.DisableNatPortMap to false.
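If you’ve already initialized with the server profile, switching back is just another apply. A sketch, assuming an initialized repo:

```shell
ipfs config profile apply local-discovery

# MDNS discovery should be enabled again...
ipfs config Discovery.MDNS.Enabled
# ...and NAT port mapping re-enabled (i.e., this should be false):
ipfs config Swarm.DisableNatPortMap
```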

Test and Default Networking

Next we have the test profile, which is designed to reduce external interference with the IPFS daemon, and is useful when using the daemon in test environments. In practice, this means the API, Gateway, and Swarm addresses are set to "/ip4/127.0.0.1/tcp/0", Swarm.DisableNatPortMap is set to true, all Bootstrap nodes are removed (set to []), and MDNS discovery is disabled (Discovery.MDNS.Enabled = false). The counterpart to this profile, default-networking, restores default network settings and is the inverse of the test profile.
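Again, you can verify the effect yourself. A sketch, assuming an initialized repo:

```shell
ipfs config profile apply test

# Each address should point at an ephemeral localhost port:
ipfs config Addresses.API
ipfs config Addresses.Gateway
# ...and the bootstrap list should be empty:
ipfs config Bootstrap

# Undo it all with the inverse profile:
ipfs config profile apply default-networking
```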

Badger/Default Datastore

If you are feeling particularly adventurous, you might want to try out this next profile. The badgerds profile replaces the default datastore configuration with the experimental badger datastore. Be warned: if you apply this profile after ipfs init, you will need to convert your datastore to the new configuration. You can do this using ipfs-ds-convert (see ipfs-ds-convert --help and ipfs-ds-convert convert --help). A further warning: the badger datastore is experimental, so make sure you are backing up your data frequently (which you should probably be doing anyway). Under the hood, this profile edits your Datastore.Spec, so make sure you know what you're doing. As always, there’s a yin to this profile’s yang: the default-datastore profile restores the default datastore configuration. Since you’d be changing the datastore again, the same caveat about converting your datastore applies.
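Putting those warnings into practice, a typical switch-and-convert flow might look like the following. This is a sketch only; it assumes you have installed ipfs-ds-convert, that your daemon is stopped, and that the ~/ipfs-backup path is just a placeholder of our choosing:

```shell
# Back up your repo first!
cp -r "${IPFS_PATH:-$HOME/.ipfs}" ~/ipfs-backup

# Point the config at the badger datastore...
ipfs config profile apply badgerds
# ...then migrate the existing data to match the new Datastore.Spec:
ipfs-ds-convert convert

# To switch back later, apply the inverse profile and convert again:
ipfs config profile apply default-datastore
ipfs-ds-convert convert
```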

Low power

Finally, we have the lowpower profile. This one is designed to reduce daemon overhead on the system. With that in mind, it may affect node functionality to some degree; in particular, the performance of content discovery and data fetching may be degraded. If you read our previous post on configuration options, then you might have guessed that this profile sets Routing.Type to "dhtclient", Reprovider.Interval to "0", and adjusts Swarm.ConnMgr.LowWater to 20, .HighWater to 40, and .GracePeriod to 1 minute.
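As before, you can confirm the new values after applying the profile. A sketch, assuming an initialized repo:

```shell
ipfs config profile apply lowpower

ipfs config Routing.Type               # should report "dhtclient"
ipfs config Reprovider.Interval        # "0" disables re-providing
ipfs config Swarm.ConnMgr.LowWater
ipfs config Swarm.ConnMgr.HighWater
ipfs config Swarm.ConnMgr.GracePeriod
```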

What’s next?

That was quick and painless! Hopefully you now have a better idea of when and why you might want to use profiles when spinning up your IPFS daemon, whether locally or as part of a server setup. As in our previous post, there’s a lot to unpack here, so feel free to jump back to a particular section, refer back here later, and generally use this post as a guide for tweaking your peer node.

In the meantime, why not check out some of our other stories, sign up for our Textile Photos waitlist to see what we’re building with IPFS, or drop us a line and tell us what cool distributed web projects you’re working on. We’d love to hear about it!