IPFS – a hidden gem of web3 development

Mention the term “web3” to a group of developers and you’ll inevitably see a handful of them reflexively cringe. Some still stubbornly argue that it’s mostly unwarranted hype – a way for programmers to waste their time building glorified Ponzi schemes with sub-par UX.

What they’re missing is that new decentralised protocols have actually been a catalyst for developers to rethink years of assumptions about how the web should work. One of these is IPFS, which – despite being 7 years old – I think is still an overlooked technology that merits more attention.

After using IPFS for storing images in my new projects, I never want to go back to the old way again. Here’s why.


How things typically work in web2

Let’s imagine that you’re building a simple web app where users can create profiles and upload a profile picture. We’ll start off by building this in a fairly naive MVP way, and see what happens as complexity grows.

The first thing we need is a place to store static files that users upload. Easy – we can use a storage service like AWS S3 that’s cheap and reliable. We create our bucket, change some permissions, and write some uploading logic like this:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"

const s3Client = new S3Client({ region: "us-east-1" })

// fileStream is the user's uploaded file; userId identifies the uploader
const uploadParams = {
  Bucket: "mybucket",
  Key: `avatars/${userId}.jpg`,
  Body: fileStream,
}

await s3Client.send(new PutObjectCommand(uploadParams))

When User 1 uploads a profile picture, it will now be stored at https://mybucket.s3.amazonaws.com/avatars/1.jpg

We will of course also want to store a reference to this file in our users database table:

ID    username     profile_picture_s3_key
1     tristan      avatars/1.jpg
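
In code, that’s just one more write after the upload succeeds – a rough sketch, assuming a hypothetical db client with a parameterised query method:

// Hypothetical db client: persist the S3 key alongside the user record
await db.query(
  "UPDATE users SET profile_picture_s3_key = $1 WHERE id = $2",
  [`avatars/${userId}.jpg`, userId]
)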

Seems good enough. We can now deploy our site and start getting some traction!

Some time later, a user requests a new feature: they want to be able to upload cover photos too! No problem. We can just create a new covers directory in our S3 bucket and upload the files there! User 1 will then have an avatar stored at /avatars/1.jpg and a cover photo stored at /covers/1.jpg. All is good.
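
The code change is tiny – something like this sketch, which reuses the s3Client from before (the uploadImage helper is made up for illustration):

import { PutObjectCommand } from "@aws-sdk/client-s3"

// Hypothetical helper: the same upload logic, parameterised by image type
async function uploadImage(userId, type, fileStream) {
  // type is "avatars" or "covers"
  await s3Client.send(new PutObjectCommand({
    Bucket: "mybucket",
    Key: `${type}/${userId}.jpg`,
    Body: fileStream,
  }))
}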

Sometime later, a couple of the app’s power users start requesting organisation accounts. They’re so happy with your amazing service that they want to invite their co-workers to it. Oh, and organisations should of course also be able to have profile pictures and cover photos.

Hmm, now you’re starting to doubt your directory structure in S3 a bit. Orgs are different to users, so we can’t store the avatar of Org 1 as /avatars/1.jpg, because then it would clash with the avatar of User 1.

Should we create new org-avatars and org-covers directories?

Or maybe it would have made more sense to have users and orgs as top-level directories? That way we could store everything related to User 1 under the same directory, e.g. /users/1/avatar.jpg + /users/1/cover.jpg. That seems more elegant.

Actually, what if we just drop directory structures completely and have a single /images directory with randomly generated file names?

Ugh, decisions, decisions… well, it’s too late now. Migrating would mean not only moving all the files in S3, but also updating every path reference in our database. Sounds like something for our future devops hire to take care of. 👍


A couple of weeks later, your app is hit by an awful AWS outage (this actually happens every now and then, so it’s not that unrealistic a scenario). Every image on your site turns into a broken-image icon, and your users are not happy.

Darn it. We should probably have a backup service to serve our images if AWS goes down. Maybe Google Cloud or Microsoft Azure? Buuut… then we’d have to copy all our files from S3 to the alternative service, send every future upload to both services, and keep the same file structure in sync across two places as it evolves. Sounds like a lot of work. Let’s leave that for the future devops hire. 👍

When using a traditional storage provider, there are tons of little annoyances like this, but ultimately they all boil down to the same problem: the files you upload are identified by their URL path, not by their content.

It seems clear that in a world where feature requirements change quickly and technologies like edge networking become more prominent, this location-based file system starts to feel sub-optimal. Wouldn’t it be nice if, when fetching a resource, instead of querying a specific path on a specific server, we could just describe what we’re looking for and fetch it from whatever server is closest to us?

The magic that IPFS enables ✨

With IPFS, files are no longer identified by their URL – they are addressed by a hash of their content, known as a CID (content identifier).

The CID is generated through a deterministic process – the same file will always produce the same hash. This enables us to create a new web where it doesn’t matter what service provider is storing your file, or how or where they do it – if you give them the hash, they will know what file you’re referring to and give it to you.
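
You can see this for yourself without talking to any server. Here’s a sketch using the ipfs-only-hash npm package (one of several libraries that can compute a CID locally):

import Hash from "ipfs-only-hash"

// Hashing the same bytes twice always produces the same CID
const cid1 = await Hash.of("hello world")
const cid2 = await Hash.of("hello world")
console.log(cid1 === cid2) // true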

The best thing is that it’s really easy to get started with this. When reading articles on IPFS, you might get the impression that you need to run your own node, learn various CLI tools, or join the Filecoin network. In reality, there are tons of services with APIs that are just as easy to use as the AWS SDK! Here’s how you can upload and pin files with Pinata, for example:

import pinataSDK from "@pinata/sdk"

const pinata = pinataSDK(PINATA_API_KEY, PINATA_SECRET_API_KEY)

// Pin the file to IPFS; the response includes the file's CID
const { IpfsHash } = await pinata.pinFileToIPFS(fileStream)

Now I can store the returned CID instead of an S3 path in my database:

ID    username     profile_picture_cid
1     tristan      QmVeg59S5tUgQEPHDaWFVsAXx4VW1ctAWNkXK6Mt5bCQV6

Call me a nerd, but there’s a certain elegance to this that’s profoundly satisfying. Content addressing just instinctively seems superior to location-based addressing, because once I have the hash, I can choose from a wide range of service providers to store this file for me and serve it upon request through different gateways.

As an example, here’s my file on Cloudflare’s IPFS gateway: https://cloudflare-ipfs.com/ipfs/QmVeg59S5tUgQEPHDaWFVsAXx4VW1ctAWNkXK6Mt5bCQV6
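
Fetching it is just an ordinary HTTP request against whichever gateway you prefer – a quick sketch using the standard fetch API:

// The same CID resolves on any public gateway – only the base URL changes
const cid = "QmVeg59S5tUgQEPHDaWFVsAXx4VW1ctAWNkXK6Mt5bCQV6"
const res = await fetch(`https://cloudflare-ipfs.com/ipfs/${cid}`)
const imageBytes = await res.arrayBuffer()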

There’s a growing ecosystem of services that will store and hash your file for you, including Pinata, Filebase and Web3Storage. You could even decide to go fully decentralised and ask the Filecoin or Arweave network to pin your files (just make sure you understand the underlying token-based economic incentives first).

The point is that, when all services use the same underlying protocol, it becomes trivially easy to replicate your content across different storage providers, or change your preferred gateway for accessing the files.

Found a cheaper pinning service than your current storage provider? No problem – just give the CIDs to your new provider, let them pin your files, and stop paying your old one.

Worried that your provider might be hit by a server outage? Just replicate your content to a different IPFS node in another location.

Concerned about the loading speed for your users in Hong Kong? Ask a provider there to pin your files and serve them using a different gateway based on the user’s IP address.
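
Taking the first of those scenarios as an example: with a pinning service like Pinata, migration is roughly a loop over the CIDs you already store – a sketch using its pinByHash method (cidsFromDatabase and newProvider are hypothetical placeholders):

// Ask the new provider to fetch and pin the content behind each CID
for (const cid of cidsFromDatabase) {
  await newProvider.pinByHash(cid)
}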

If you’re a web developer and haven’t checked out IPFS yet, give it a try!


At Layer3, we store all our uploaded files with IPFS. If you enjoyed this post and love the idea of building new types of applications that take advantage of the decentralised web, you should join our team!
