go-ipfs 0.4.20 released

by Steven Allen on 2019-04-16

We’re excited to release go-ipfs 0.4.20, trail-blazing the way to the distributed web. This release includes some critical performance and stability fixes so all users should upgrade ASAP! As of release four-twenty, the CLI should be more usable due to improved commands, Bitswap should be more reliable, and the relays should be much more performant. Enjoy!

This is also the first release to use the budding Go modules system instead of GX. While GX has been a great way to dogfood an IPFS-based package manager, building and maintaining a custom package manager is a lot of work, and we haven’t been able to dedicate enough time to bring the user experience of GX to an acceptable level. You can read #5850 for some discussion on this matter.

🔦 Highlights

⛴ Docker

As of this release, it’s now much easier to run arbitrary IPFS commands within the Docker container:

> docker run --name my-ipfs ipfs/go-ipfs:v0.4.20 config profile apply server # apply the server profile
> docker start my-ipfs # start the daemon

This release also reverts a change that caused some significant trouble in 0.4.19. If you’ve been running into Docker permission errors in 0.4.19, please upgrade.

🕸 WebUI

This release contains a major WebUI release with some significant improvements to the file browser and new opt-in, privately hosted, anonymous usage analytics.

🕹 Commands

As usual, we’ve made several changes and improvements to our commands. The most notable changes are listed in this section.

New: ipfs version deps

This release includes a new command, ipfs version deps, to list all dependencies (with versions) of the current go-ipfs build. This should make it easy to tell exactly how go-ipfs was built when tracking down issues.

New: ipfs add URL

The ipfs add command has gained support for URLs. This means you can:

  1. Add files with ipfs add URL instead of downloading the file first.
  2. Replace all uses of the ipfs urlstore command with a call to ipfs add --nocopy. The ipfs urlstore command will be deprecated in a future release.

Changed: ipfs swarm connect

The ipfs swarm connect command has a few new features:

It now marks the newly created connection as “important”. This should ensure that the connection manager won’t come along later and close the connection if it doesn’t think it’s being used.

It can now resolve /dnsaddr addresses that don’t end in a peer ID. For example, you can now run ipfs swarm connect /dnsaddr/bootstrap.libp2p.io to connect to one of the bootstrap peers at random. NOTE: This could connect you to an arbitrary peer as DNS is not secure (by default). Please do not rely on this except for testing or unless you know what you’re doing.

Finally, ipfs swarm connect now returns all errors on failure. This should make it much easier to debug connectivity issues. For example, one might see an error like:

Error: connect QmYou failure: dial attempt failed: 6 errors occurred:
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/127.0.0.1/tcp/4001) dial attempt failed: dial tcp4 127.0.0.1:4001: connect: connection refused
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/::1/tcp/4001) dial attempt failed: dial tcp6 [::1]:4001: connect: connection refused
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2604::1/tcp/4001) dial attempt failed: dial tcp6 [2604::1]:4001: connect: network is unreachable
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2602::1/tcp/4001) dial attempt failed: dial tcp6 [2602::1]:4001: connect: network is unreachable
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/150.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->150.0.1.2:4001: i/o timeout
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/200.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->200.0.1.2:4001: i/o timeout

Changed: ipfs bitswap stat

ipfs bitswap stat no longer lists bitswap partners unless the -v flag is passed. That is, it will now return:

> ipfs bitswap stat
bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 79
	data received: 0
	data sent: 672706
	dup blocks received: 0
	dup data received: 0 B
	wantlist [0 keys]
	partners [197]

Instead of the old default output, which listed every partner (still available via the -v flag):

> ipfs bitswap stat -v
bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 79
	data received: 0
	data sent: 672706
	dup blocks received: 0
	dup data received: 0 B
	wantlist [0 keys]
	partners [203]
		QmNQTTTRCDpCYCiiu6TYWCqEa7ShAUo9jrZJvWngfSu1mL
		QmNWaxbqERvdcgoWpqAhDMrbK2gKi3SMGk3LUEvfcqZcf4
		QmNgSVpgZVEd41pBX6DyCaHRof8UmUJLqQ3XH2qNL9xLvN
        ... omitting 200 lines ...

Changed: ipfs repo stat --human

The --human flag in the ipfs repo stat command now intelligently picks a size unit instead of always using MiB.
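For illustration, this kind of unit selection can be sketched in Go. This is not the actual go-ipfs code; the function name and output format here are made up:

```go
package main

import "fmt"

// humanSize picks the largest binary unit that keeps the value at or above 1,
// mirroring the behaviour described for `ipfs repo stat --human`.
func humanSize(n uint64) string {
	units := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB"}
	val := float64(n)
	i := 0
	for val >= 1024 && i < len(units)-1 {
		val /= 1024
		i++
	}
	if i == 0 {
		return fmt.Sprintf("%d %s", n, units[i])
	}
	return fmt.Sprintf("%.1f %s", val, units[i])
}

func main() {
	fmt.Println(humanSize(512))      // 512 B
	fmt.Println(humanSize(6 * 1024)) // 6.0 KiB
	fmt.Println(humanSize(3 << 30))  // 3.0 GiB
}
```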

Changed: ipfs resolve (ipfs dns, ipfs name resolve)

All of the resolve commands now:

  1. Resolve recursively (up to 32 steps) by default to better match user expectations (these commands used to be non-recursive by default). To turn recursion off, pass -r false.
  2. When resolving non-recursively, these commands no longer fail when partially resolving a name. Instead, they simply return the intermediate result.
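To illustrate the bounded recursion, here is a small Go sketch. The name table, function names, and data are hypothetical; the real resolver does DNSLink and IPNS lookups over the network:

```go
package main

import (
	"fmt"
	"strings"
)

// table stands in for a single resolution step (DNSLink or IPNS record).
// These entries are made-up example data.
var table = map[string]string{
	"/ipns/example.com": "/ipns/QmPeer",
	"/ipns/QmPeer":      "/ipfs/QmData",
}

// resolve follows the chain for up to maxSteps hops (the new default caps
// this at 32). If it stops early, it returns the intermediate result
// instead of failing, matching the behaviour described above.
func resolve(name string, maxSteps int) string {
	for i := 0; i < maxSteps; i++ {
		if strings.HasPrefix(name, "/ipfs/") {
			break // fully resolved
		}
		next, ok := table[name]
		if !ok {
			break // can't resolve further; return what we have
		}
		name = next
	}
	return name
}

func main() {
	fmt.Println(resolve("/ipns/example.com", 32)) // /ipfs/QmData
	fmt.Println(resolve("/ipns/example.com", 1))  // /ipns/QmPeer (intermediate result)
}
```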

Changed: ipfs files flush

The ipfs files flush command now returns the CID of the flushed file.

Performance And Reliability

This release has the usual collection of performance and reliability improvements.

🍺 Badger Memory Usage

Those of you using the badger datastore should notice reduced memory usage in this release due to some upstream changes. Badger still uses significantly more memory than the default datastore configuration, but it is also much faster, and memory usage will hopefully continue to improve.

🔍 Bitswap

We fixed some critical CPU utilization regressions in bitswap for this release. If you’ve been noticing CPU regressions in go-ipfs 0.4.19, especially when running a public gateway, upgrading to 0.4.20 will likely fix them.

🏃 Relays

After AutoRelay was introduced in go-ipfs 0.4.19, the number of peers connecting through relays skyrocketed to over 120K concurrent peers. This highlighted some performance issues that we’ve now fixed in this release.

If you’ve enabled relay hop (Swarm.EnableRelayHop) in go-ipfs 0.4.19 and it hasn’t burned down your machine yet, this release should improve things significantly. However, relays are still under heavy load so running an open relay will continue to be resource intensive.

We’re continuing to investigate this issue and have a few more patches on the way that, unfortunately, won’t make it into this release.

😱 Panics

We’ve fixed two notable panics in this release.

📖 Content Routing

IPFS announces and finds content by sending and retrieving content routing (“provider”) records to and from the DHT. Unfortunately, sending out these records can be quite resource intensive.

This release has two changes to alleviate this: 1. a reduced number of initial provide workers and 2. a persistent provider queue.

We’ve reduced the number of parallel initial provide workers (workers that send out provider records when content is initially added to go-ipfs) from 512 to 6, to avoid additional performance overhead from simultaneous provide requests. Currently, due to some issues in our DHT, each provide request tries to establish hundreds of connections, significantly impacting the performance of go-ipfs and crashing some routers.
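The bounded-worker idea can be sketched in Go. This is an illustrative sketch only; the real go-ipfs workers perform DHT network I/O, and the function names here are invented:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// provideAll fans CIDs out to a fixed pool of workers and returns how many
// provider records were "sent". With workers = 6, at most six provides are
// ever in flight at once, however many CIDs are queued.
func provideAll(cids []string, workers int) int {
	var sent int64
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				// Real code would call into the DHT here; we just count.
				atomic.AddInt64(&sent, 1)
			}
		}()
	}
	for _, c := range cids {
		jobs <- c
	}
	close(jobs)
	wg.Wait()
	return int(sent)
}

func main() {
	cids := make([]string, 100)
	fmt.Println(provideAll(cids, 6)) // 100
}
```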

We’ve introduced a new persistent provider queue for files added via ipfs add and ipfs pin add. When new directory trees are added to go-ipfs, go-ipfs will add the root/final CID to this queue. Then, in the background, go-ipfs will walk the queue, sequentially sending out provider records for each CID.

This ensures that root CIDs are sent out as soon as possible, and are announced even when files were added while the go-ipfs daemon wasn’t running.

As an example, let’s add a directory tree to go-ipfs:

> # We're going to do this in "online" mode first so let's start the daemon.
> ipfs daemon &
...
Daemon is ready
> # Now, we're going to create a directory to add.
> mkdir foo
> for i in {0..1000}; do echo $i > foo/$i; done
> # finally, we're going to add it.
> ipfs add -r foo
added QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9 foo/0
...
added QmQac2chFyJ24yfG2Dfuqg1P5gipLcgUDuiuYkQ5ExwGap foo/990
added QmQWwz9haeQ5T2QmQeXzqspKdowzYELShBCLzLJjVa2DuV foo/991
added QmQ5D4MtHUN4LTS4n7mgyHyaUukieMMyCfvnzXQAAbgTJm foo/992
added QmZq4n4KRNq3k1ovzxJ4qdQXZSrarfJjnoLYPR3ztHd7EY foo/993
added QmdtrsuVf8Nf1s1MaSjLAd54iNqrn1KN9VoFNgKGnLgjbt foo/994
added QmbstvU9mnW2hsE94WFmw5WbrXdLTu2Sf9kWWSozrSDscL foo/995
added QmXFd7f35gAnmisjfFmfYKkjA3F3TSpvUYB9SXr6tLsdg8 foo/996
added QmV5BxS1YQ9V227Np2Cq124cRrFDAyBXNMqHHa6kpJ9cr6 foo/997
added QmcXsccUtwKeQ1SuYC3YgyFUeYmAR9CXwGGnT3LPeCg5Tx foo/998
added Qmc4mcQcpaNzyDQxQj5SyxwFg9ZYz5XBEeEZAuH4cQirj9 foo/999
added QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G foo

In 0.4.19, we would have sent out provider records for files foo/{0..1000} before sending out a provider record for foo. If you were to ask a friend to download /ipfs/QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9, they would (barring other issues) be able to find it pretty quickly, as this is the first CID you’ll have announced to the network. However, if you ask your friend to download /ipfs/QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G/0, they’ll have to wait for you to finish telling the network about every file in foo first.

In 0.4.20, we immediately tell the network about QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G (the foo directory) as soon as we finish adding the directory to go-ipfs without waiting to finish announcing foo/{0..1000}. This is especially important in this release because we’ve drastically reduced the number of provide workers.

The second benefit is that this queue is persistent. That means go-ipfs won’t forget to send out this record, even if it was offline when the content was initially added. NOTE: go-ipfs does continuously re-send provider records in the background twice a day, it just might be a while before it gets around to sending out any specific one.

🔂 Bitswap Reliability

Bitswap now periodically re-sends its wantlist to connected peers. This should help work around some race conditions we’ve seen in bitswap where one node wants a block but the other doesn’t know for some reason.

You can track this issue here: https://github.com/ipfs/go-ipfs/issues/5183.

🤠 Improved NAT Traversal

While NATs are still p2p enemy #1, this release includes slightly improved support for traversing them.

Specifically, this release now:

  1. Better detects the “gateway” NAT, even when multiple devices on the network claim to be NATs.
  2. Better guesses the external IP address when port mapping, even when the gateway lies.

📡 Reduced AutoRelay Boot Time

The experimental AutoRelay feature can now detect NATs much faster as we’ve reduced initial NAT detection delay to 15 seconds. There’s still room for improvement, but this should make nodes that have enabled this feature dialable earlier on start.

❤️ Contributors

We’d like to thank all the users who contributed to this release including contributors to ipfs, ipld, libp2p, and multiformats.

Contributor Commits Lines ± Files Changed
Raúl Kripalani 60 +5489/-1104 163
Jakub Sztandera 55 +4891/-514 145
Steven Allen 130 +2075/-1563 246
vyzo 63 +474/-268 92
Michael Avila 13 +458/-74 22
Matt Joiner 14 +323/-172 32
hannahhoward 5 +158/-130 6
Łukasz Magiera 10 +120/-167 19
Anton Petrov 4 +135/-4 8
Andrew Nesbitt 3 +57/-64 4
Yusef Napora 14 +63/-49 14
Marten Seemann 3 +24/-78 11
tg 1 +82/-15 5
jmank88 4 +88/-4 6
Richard Littauer 3 +64/-1 3
whyrusleeping 2 +55/-9 3
Kevin Atkinson 3 +34/-1 5
aschmahmann 1 +33/-0 1
Evgeniy Kulikov 2 +7/-17 3
Anatoliy Basov 1 +11/-6 1
Bob Potter 1 +3/-10 1
Lars Gierth 1 +1/-9 1
Dominic Della Valle 1 +5/-2 1
Leonhard Markert 1 +2/-2 1
gukq 2 +2/-2 2
Alex Browne 1 +1/-1 1
Hector Sanjuan 1 +1/-1 1
Tomas Virgl 1 +1/-1 1
lnykww 1 +1/-1 1
monotone 1 +1/-1 1

We’d also like to thank users who’ve reported and helped diagnose bugs and all those community members who’ve helped out on the forums and IRC.

📜 Changelog

For those of you who like to look through the PRs that went into this release, here is the complete changelog.
