Adding fingerprints to SASS files

I have some scss files on my site, and I’d like to add their hash/fingerprint to their URL so unique versions have unique URLs.

That is, main.scss would be built as main.d8d7b2392345bc1949bf3d6cb2bcd16843d783375dc365fa572c9ce1df45cabc.css (I guess the hash can be shorter; I don’t really care).
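To make it concrete, here’s a minimal sketch of how such a fingerprinted name could be derived from the compiled CSS. This is not existing Zola code: the function name is made up and I’m assuming the sha2 crate for the hash.

```rust
// Sketch only: derive a fingerprinted filename from compiled CSS.
// Assumes the sha2 crate; this function does not exist in Zola today.
use sha2::{Digest, Sha256};

fn fingerprinted_name(stem: &str, compiled_css: &str) -> String {
    let digest = Sha256::digest(compiled_css.as_bytes());
    let hash: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    format!("{}.{}.css", stem, hash) // e.g. "main.<sha256>.css"
}
```

The hash could of course be truncated to the first few characters, as mentioned above.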

This has just a couple of very useful effects:

  • If someone loads a page during deployment, the page points to the right version of the CSS file. For example, if the new version of a page hasn’t been deployed yet, the old version of the page is served, but that points to the old CSS file, so that’s fine.
  • Each version has a distinct URL, and that URL only ever serves that distinct version. This allows cache headers to be set to cache the file indefinitely (max-age=31536000 actually, but close enough).

Side note: I’ve noticed the cachebust option for get_url. It doesn’t really provide either of the two features I’ve mentioned above. During deployments, new pages might include main.css?newhash, but if the new file has not yet been copied, the old version will be served and will poison any caches and proxies indefinitely.

This doesn’t seem to be an option right now. I dug through the code a bit, and I think it could be implemented: mostly, a new flag on the compile_sass function should do it, though I still need to figure out how to make get_url actually return the right value too.
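Roughly what I have in mind, as a very rough sketch. Every name here is made up (compile_sass_file, compile_one, the stylesheets map) and only stands in for whatever the real code does:

```rust
use sha2::{Digest, Sha256};
use std::collections::HashMap;
use std::fs;
use std::path::Path;

/// Hypothetical sketch of the proposed flag. Compiles a Sass file as today,
/// but when `fingerprint` is set, writes the output under a hashed name and
/// records the mapping so that get_url could later resolve "main.css" to it.
/// None of these names come from Zola's code; `compile_one` is a stand-in
/// for the real compilation step.
fn compile_sass_file(
    sass_path: &Path,
    out_dir: &Path,
    fingerprint: bool, // the proposed new flag
    stylesheets: &mut HashMap<String, String>,
) -> std::io::Result<()> {
    let css = compile_one(sass_path);
    let stem = sass_path.file_stem().unwrap().to_string_lossy();
    let plain_name = format!("{}.css", stem);

    let out_name = if fingerprint {
        let digest = Sha256::digest(css.as_bytes());
        let hash: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
        format!("{}.{}.css", stem, hash)
    } else {
        plain_name.clone()
    };

    fs::write(out_dir.join(&out_name), &css)?;
    // get_url(path="main.css") would consult this map to return the hashed URL.
    stylesheets.insert(plain_name, out_name);
    Ok(())
}

fn compile_one(_path: &Path) -> String {
    // Stand-in for the existing Sass compilation.
    String::new()
}
```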

Do you think that putting css files into GetUrl.permalinks is okay? Or would a separate field GetUrl.stylesheets be preferable?
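To make the question concrete, here’s roughly what the two options could look like. The field layout is guessed from the names above, not taken from the actual struct:

```rust
use std::collections::HashMap;

// Rough sketch of the two options discussed above; the fields are guessed
// from the names mentioned, not copied from Zola's source.
struct GetUrl {
    // Option 1: reuse the existing map and insert "main.css" -> hashed URL there.
    permalinks: HashMap<String, String>,
    // Option 2: keep a dedicated map just for compiled stylesheets.
    stylesheets: HashMap<String, String>,
}

impl GetUrl {
    fn resolve(&self, path: &str) -> Option<&String> {
        // With option 2, stylesheets would be checked before ordinary permalinks.
        self.stylesheets.get(path).or_else(|| self.permalinks.get(path))
    }
}
```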

I mean, it’s been that way for years and no one has reported an issue with it so far. The window of time where this can happen is minuscule or nonexistent, depending on the deployment strategy. That argument was added specifically for CSS files.

I’ve seen this cause issues on other sites. If someone opens the page while rsync is still copying files onto the server, they get the new HTML page, which references the new CSS URL/hash. The server then returns the old CSS (since the new file hasn’t been copied yet), and the client caches that.

The main issue is that, at this point, the client has cached an old version under the new URL. Reloading the page won’t help; the problem will linger until the next deployment.

I haven’t seen this on my personal blog (not enough traffic), but I have seen it on sites for small stores.

I guess I should just write a proof of concept; perhaps it makes sense to drop the existing hashing system in favour of hashed URLs.