One of the new features of Mule EE 3.3 is the cache scope. The idea behind it is to speed up your flows by caching frequently reused data, instead of fetching that data again from external resources or re-running a chain of message processors that produces the same result for the same input.
How is this done? Well, first you declare the message processors whose result you want to cache inside a new XML element called: drum roll … <cache>, which lives under the ee namespace. Here is a simple example:
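The original snippet is not reproduced here; the following is a minimal sketch of what such a cache scope could look like, where the connector name, query key, and column name are illustrative assumptions rather than the post's original code:

```xml
<ee:cache>
    <!-- Logged only on a cache miss, i.e. when the chain below actually runs -->
    <logger level="INFO" message="No cached value found for #[payload], querying the database"/>
    <!-- Query the database, using the current payload as the query parameter -->
    <jdbc-ee:outbound-endpoint exchange-pattern="request-response"
                               queryKey="selectValue"
                               connector-ref="jdbcConnector"/>
    <!-- Extract the 'value' column of the first returned row -->
    <expression-transformer expression="#[payload[0].value]"/>
</ee:cache>
```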
When Mule does not find cached data, it goes through the message processors defined in the cache scope and populates the cache with the final result. If the data already resides in the cache, it is returned immediately without going through the nested message processors. In the cache scope above, we have three message processors. The first is a logger that informs us when Mule encounters a cache miss and, consequently, goes through the processor chain; on a cache hit, nothing is logged. The second is a JDBC outbound endpoint that queries the database using the payload as input. The following snippet shows the definition of the JDBC connector together with the SQL SELECT statement.
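The connector snippet itself is missing here; a plausible reconstruction, assuming a connector named jdbcConnector, a datasource bean named dataSource, and an illustrative table my_table, might look like:

```xml
<jdbc-ee:connector name="jdbcConnector" dataSource-ref="dataSource">
    <!-- The query key is what the outbound endpoint inside the cache scope refers to -->
    <jdbc-ee:query key="selectValue"
                   value="SELECT value FROM my_table WHERE id = #[payload]"/>
</jdbc-ee:connector>
```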
The last message processor is an expression transformer. From the JDBC result, we are extracting the value column of the first returned row.
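Since a Mule JDBC query returns a list of row maps, the transformer could be sketched as follows (the column name value is an illustrative assumption):

```xml
<!-- payload is a List of Maps; take the 'value' column of row 0 -->
<expression-transformer expression="#[payload[0].value]"/>
```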
Each item in the cache is a key/value pair: the key represents the payload at the entry point of the cache scope, and the value is the result at the end of the cache scope. In our case, the cached entry value would be the result of the expression transformer. By default, Mule creates the cache key by computing an MD5 digest of the payload. For the cache entry value, Mule caches not just the payload at the end of the cache scope, but the whole MuleEvent. The reasoning behind this is that, apart from the payload, you might need other information such as message properties.
The default store Mule uses as a cache is an InMemoryObjectStore. All items are stored in memory with a configurable time-to-live and other basic options. More advanced behavior is available by configuring a custom object store and a caching strategy. We’ll look into this in a future blog post, where we’ll explore how to configure EhCache as our object store for Mule caching.
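As a rough sketch of what such a configuration could look like, a caching strategy wrapping a tuned in-memory object store might be declared and then referenced from the cache scope (the names and numbers below are illustrative assumptions):

```xml
<ee:object-store-caching-strategy name="cachingStrategy">
    <!-- Keep at most 500 entries, each for at most 60 seconds -->
    <in-memory-store name="cacheStore" maxEntries="500"
                     entryTTL="60000" expirationInterval="30000"/>
</ee:object-store-caching-strategy>

<!-- Then point the cache scope at it -->
<ee:cache cachingStrategy-ref="cachingStrategy">
    ...
</ee:cache>
```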
Now let us explore a simple example where we receive a request on an HTTP endpoint. Depending on whether the payload is cached, we retrieve the information from the database or use the cache to avoid hitting the database for each request.
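Such a flow could be sketched roughly as follows; the flow name, path-extraction expression, and JDBC details are assumptions for illustration, not the post's original configuration:

```xml
<flow name="cacheDemoFlow">
    <http:inbound-endpoint host="localhost" port="8081" path="cache"
                           exchange-pattern="request-response"/>
    <!-- Keep only the part of the request path after the last '/' -->
    <expression-transformer
        expression="#[message.inboundProperties['http.request.path'].substring(message.inboundProperties['http.request.path'].lastIndexOf('/') + 1)]"/>
    <ee:cache>
        <logger level="INFO" message="No cached value found for #[payload], querying the database"/>
        <jdbc-ee:outbound-endpoint exchange-pattern="request-response"
                                   queryKey="selectValue"
                                   connector-ref="jdbcConnector"/>
        <expression-transformer expression="#[payload[0].value]"/>
    </ee:cache>
</flow>
```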
After we receive an HTTP request on the inbound endpoint, the data following the last “/” is extracted from the URL. For example, if the user hits http://localhost:8081/cache/25, the payload after the expression transformer will be 25. Using this payload, Mule goes through the cache scope and either (1) invokes the message processor chain if no entry is found in the cache for that key, or (2) returns the cached result directly. Let’s have a look at our log entries. The following are the logs when hitting http://localhost:8081/cache/25 for the first time.
As you can see, the first line is the output of the logger we defined, followed by the JDBC log entries. When we hit the same URL again, no new log entries appear in the log file.
One important thing to note about Mule caching is that, to use the cache scope, the payload must be cacheable. Payloads that can be read only once (consumable payloads), such as streams, are not cacheable.
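One common workaround, assuming the stream content is small enough to hold in memory, is to materialize the stream into a String before it enters the cache scope, for instance:

```xml
<!-- Consume the stream into a String so the payload becomes cacheable -->
<object-to-string-transformer/>
<ee:cache>
    ...
</ee:cache>
```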
In future blog posts, we’ll delve into more detail about caching and its advanced options, such as integrating Mule with EhCache and using a bootstrap loader to pre-populate the cache on startup.