We anticipate that there will be places where it makes sense to employ various messaging systems (e.g. RabbitMQ, Kafka, ActiveMQ, Kestrel) to allow outside applications to receive notifications from FOLIO. There are also likely to be use cases where external systems will want to push a message into a FOLIO system, and some kind of gateway will be needed for that; these requirements are not well explored at this time.
The primary use case I’ve been thinking about is allowing a discovery system to update itself based on catalog changes, without having to continuously poll the catalog with “What has changed in the last two minutes?” queries. I’m not sure whether the code that builds our discovery Solr index should be pulled into a FOLIO module or remain external and be updated to talk to FOLIO instead of our previous catalog.
Knowing that publish/subscribe will be available for external systems to receive notifications assures me that this use case can probably be met.
Is it fair to assume that publish/subscribe will be one of the core models for communication between FOLIO modules internally? It seems like it must be in order to support a marketplace of modules, where core modules are developed without knowing all of the modules that might, in a particular setup, want to receive updates about catalog changes, circulation events, and so on.
This question and comment generated substantial discussion on the developers channel of the FOLIO Project Slack team. (If you need an invite to the FOLIO Project Slack team, please send me a private message.)
API communication in FOLIO will be handled through a piece of software we’re calling OKAPI. Modules in FOLIO will not call each other directly; instead, they will call endpoints registered by other modules. (More on that in the response to the question about how OKAPI differs from traditional microservices.) Multiple modules can be registered at an OKAPI endpoint, and OKAPI processes each request as a prioritized pipeline of the requests and responses registered at that endpoint. (Said another way, the output of one module is the input of the next module in the pipeline.)
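The pipeline idea can be sketched in a few lines. This is an illustrative sketch of the concept only, not OKAPI’s actual code or API; the class and method names here are invented:

```python
# Sketch of a prioritized request/response pipeline at one endpoint,
# in the spirit of the OKAPI description above. Invented names; not OKAPI code.

class Pipeline:
    def __init__(self):
        # (priority, handler) pairs registered at this endpoint
        self._handlers = []

    def register(self, priority, handler):
        # Lower priority numbers run earlier in the pipeline.
        self._handlers.append((priority, handler))
        self._handlers.sort(key=lambda pair: pair[0])

    def handle(self, request_body):
        # The output of one module is the input of the next.
        body = request_body
        for _, handler in self._handlers:
            body = handler(body)
        return body

pipeline = Pipeline()
pipeline.register(10, lambda body: body + " | auth-checked")
pipeline.register(20, lambda body: body + " | enriched")
print(pipeline.handle("GET /instances"))
# → "GET /instances | auth-checked | enriched"
```

The key point for the pub/sub question below is that each module sees the previous module’s output, not the original message.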
The discussion on Slack centered on these topics:
A true subscription model would have all subscribers receive the same original message; can that be achieved with OKAPI’s current pipelining?
The architecture already includes a ‘request-only’ mode, which signals that the module will not return a response body containing different data. (In fact, OKAPI will discard any response body received from a ‘request-only’ module and continue forwarding the original response body to subsequent modules in the pipeline.)
There was discussion about having a true pub/sub model, and whether we would be better off with a real message queue (something like exposing the Vert.x message bus via STOMP or AMQP). There is a need to identify real use cases, though, before that development work can be prioritized.
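The distinction the discussion turned on can be made concrete with a sketch. The following contrasts a pipeline with a ‘request-only’ module (whose response body is discarded) against a true pub/sub fan-out, where every subscriber receives the same original message. All names are invented for illustration; this is not OKAPI or Vert.x code:

```python
# Contrast: OKAPI-style pipelining (with a 'request-only' module)
# versus true pub/sub fan-out. Invented names; a conceptual sketch only.

def run_pipeline(handlers, body):
    """Each handler receives the previous handler's output, except that
    a request-only handler's response body is discarded and the prior
    body is forwarded unchanged."""
    for handler, request_only in handlers:
        result = handler(body)
        if not request_only:
            body = result  # normal module: its output replaces the body
        # request-only module: result is discarded, body passes through
    return body

def fan_out(subscribers, message):
    """True pub/sub: every subscriber receives the same original message."""
    for subscriber in subscribers:
        subscriber(message)

seen = []
handlers = [
    (lambda b: b.upper(), False),                   # transforms the body
    (lambda b: seen.append(b) or "ignored", True),  # request-only observer
]
print(run_pipeline(handlers, "item.checked-out"))
# → "ITEM.CHECKED-OUT"
print(seen)
# → ["ITEM.CHECKED-OUT"]  (the observer saw the transformed body,
#    not the original message a true subscriber would receive)
```

In the pipeline, the request-only observer sees whatever the earlier modules produced; with `fan_out`, every subscriber would get the original `"item.checked-out"` message, which is the behavior a true subscription model guarantees.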