Proposal: Plugin

This is a feature request for plugin support. Since zola is written in Rust and distributed only as a compiled binary, direct access for e.g. inserting custom Tera functions is strongly limited, if not unavailable. While this poses no problem for many of my use cases, I would propose a way to enable plugin support via dynamic loading, so that advanced users can extend zola without modifying the code itself.

Motivation
While zola has a myriad of well-built features, it still lacks support for some more advanced ones, e.g. LaTeX rendering. There may also be use cases where specific extensions of e.g. Tera, or support for a new data type, may be required but can't be added easily, since zola lacks a way to load plugins or extensions at runtime.

Solution
Expose some sort of plugin system. One way to do this would be to allow the user to dynamically load libraries at runtime, from which specific functions are invoked by zola. Ideally, those libraries could access some sort of API for extending zola.

Alternative Solution
Allow some sort of (e.g. Lua) scripts for extending zola. Those scripts may either need to talk to some API or listen to hooks. I don't really have experience with the internals of such systems, but it might be easier to develop for.

Known Problems
Allowing the user to load plugins also always exposes inexperienced users to some risk, e.g. from libraries that corrupt data. In addition to potentially harmful libraries, the proposed solution could hurt stability and reliability, since dynamic loading is, anecdotally at least, very fragile.

I would love to hear what the team behind zola thinks of this idea. My solution is most certainly not the best, and I would love to hear your feedback and suggestions on this topic. While I'm rather inexperienced with Rust, I would still love to help with the implementation.

Addressing GitHub comment:
I’m aware that “single binary” distribution is a goal/feature of zola. A dynamic loading approach wouldn’t impact this feature, since the plugins would be loaded from files as shared libraries. This means the plugins would be distributed separately too, but zola’s binary isn’t affected. Since plugins are additional, I don’t see a conflict with “all you need”: plugins are, as the name suggests, extensions and not part of the original application.

Moved from: zola#1544

6 Likes

Bevy is running into a need for this (or something equivalent) too. In particular, we would like rustdoc shortcode links that are more legible.

The existing HTML template approach is very frustrating to both read and write. But without the ability to talk to a server (or perform complex logic), we need to include all of the information required to make the link inside the shortcode, without access to any external information.

Having looked into this briefly for another personal project, I hope I can provide a few insights. The problem with plugin systems is that, because Rust doesn’t have a stable ABI (even between compiler runs), plugins have to go through the C ABI even when everything you talk to is Rust, and that means adding unsafe code. While plugins would enable a bunch of really cool things, I doubt that the maintainers will be happy about including unsafe code, since it takes a fairly big hit to maintainability.

Exposing an external API to enable scripting sounds a bit more doable, since we could do that without unsafe code, but depending on whether it needs to be sync or async, it could require a big rewrite of the current program flow to embed it into zola itself. Maintaining an API that is flexible enough to serve enough different use cases would also be quite a lot of work.

However, Zola does actually provide a way to talk to a server: load_data / remote_content. So especially if you’re willing to maintain a (minimal) web server next to it (which it sounds like you’re already doing) that responds with whatever data you need, you get the plugin system essentially for free. If you want to keep the workflow as close to vanilla Zola as possible, I’d imagine something like this:

  1. Use cargo-make to spin up the web server that will respond with whatever data you need. If this server exposes a REST API, you can fairly easily get up to arbitrarily complex logic. On the configuration side, I imagine you could put anything you need, like URLs or ports, into the [extra] section of config.toml.
  2. Run zola build. This does have the slight drawback that the data only gets refreshed when the page gets rerendered, so watching for changes server-side would not be trivial (although if the server also runs locally, you can have it signal zola to rerender if that is important to you).
  3. Use either cargo-make or pre-commit to do any post-processing and validation that you need to do before you push things (this last point is what my workflow looks like).

I think this is a really cool idea, but honestly, from experience I’m not sure how big the chances are that something other than the workflow I described above would get implemented. I don’t mean to be a downer; the things you describe sound really cool and are something I’d be interested in myself, but they do pose significant engineering problems. Out of interest, is there a part of your use case that can’t be solved by the solution above, and if so, can you tell us more about it?

1 Like

Given that you already support loading data from an endpoint, you could in a way also abstract this away without adding any complexity. In theory, you could execute any executable file that outputs data to stdout. This flow is fairly similar to that of a request to an endpoint, and whether it’s txt or json would have to be identified by the user using the format= parameter, just as is already the case.

If we treat these as shell scripts, we can even just respect the shebang, so that not only pure bash scripts but basically any script (e.g. Python) could be used. This would essentially allow you to inject data from anywhere without really any additional overhead. Waiting for output from an executable is pretty much the same (abstractly speaking) as waiting for the output from a file 🙂 Even passing arguments to it shouldn’t be complex and could easily be facilitated.

If you are up for that idea, I wouldn’t mind contributing this to the codebase. I came close to writing my own static site generator, and being able to inject data from anywhere, using any kind of source, was one of the missing requirements. Sure, your idea of having a server can work as well, but it is a lot harder to set up than simply being able to use any kind of script with hardly any setup work on your side. And those who do want the server can already use one today, as you said yourself.

This might be a good choice for writing plugins: rhaiscript/rhai on GitHub (Rhai, an embedded scripting language for Rust).

So one thing I want to chime in here with is that I really appreciate how focused Zola is. It does a few really important things very well. Yes, not being able to write custom tera functions, as in your example, is limiting, but what is gained is a system that does everything else very well.

A lot of other SSGs go insane with modularity, and as a consequence they don’t really do the main 90% so well (e.g. builds are slow, there’s a steep learning curve, lots of dependencies are needed for simple stuff, or, very often, they’re just broken).

I’m in a situation now myself where I would like to be able to register a tera function, but I have ways to work around my specific use case. That being said, if there was a focused way to do that, and it didn’t detract from the overall appeal of Zola, then that would be really cool. Otherwise, I think most features should just get merged into main or be out of scope.

1 Like

I have to say this is a pretty genius solution. It would be cool if someone took this idea and abstracted away some of the boilerplate around building a server like this, so that from the user’s perspective, they’re just implementing the interface of a function or something.

1 Like

I guess one question I have is how this would work out in environments other than your local machine. For example, if you’re using Cloudflare Pages for your builds, would running basically the equivalent of a little language server work?

1 Like

This was an idea I had a while back and never really implemented, but rereading it, I might look into it again. If I manage to get it working, I’ll see if I can publish it as a GitHub template or something.

I don’t have a ton of experience with things like GitHub Actions or Cloudflare Pages (or language servers, for that matter). However, just thinking about it, I’d say this: all you need is that the requested file/URL responds with the correct data at build time. I don’t know how something like Cloudflare Pages handles external requests; as I understand it, they use some kind of containerisation, which could make external requests a problem, but I’m not sure about that. If it isn’t a problem, that’s basically your problem solved: you just need to put your tiny server somewhere the build system can reach it during build time. If external requests are out of the question, you’ll have to embed the server into the container you use to build the site somehow. I’m again not very sure how these systems work, but I imagine you can get pretty far with some custom container shenanigans.

1 Like