## Overview

The `json_cache_rw` package exports a single constructor `JSONCacheRW(diag)`
which must be called with the `new` operator. The resulting cache object stores
arbitrary node.js JSON objects, which are read from disk files and modified
(repeatedly) during the execution of your program. The cache tracks the on-disk
pathname of the object, and writes it back there after a delay time. A simple
locking algorithm is implemented to support atomic modifications.
## Calling API

Suppose one has a `JSONCacheRW` instance named `jcrw`. It behaves somewhat like
an ES6 `Map` instance that maps pathname strings to JSON objects, except that
it has `jcrw.read()`, `jcrw.write()`, and `jcrw.modify()` functions instead of
`get` and `set`, and new objects are added to the cache by attempting to `read`
them.

The interfaces for the `JSONCacheRW`-provided instance functions are:
`await jcrw.read(key, default_value)` — retrieves the object stored under
`key`, which must be the on-disk path to the `*.json` or similarly-named file
that will eventually store the JSON object. If the `default_value` is provided
and the on-disk file does not exist, then the `default_value` is added to the
cache and returned. Otherwise, the file is read from disk with `utf-8`
encoding, parsed with `JSON.parse()`, and then cached and returned. Disk file
reading or JSON parsing errors result in exceptions being thrown.
`await jcrw.write(key, value, timeout)` — caches the given `value` under
the given `key`, and dirties it so that it will be written after `timeout` ms
has elapsed. If the `key` already exists in the cache and is dirty, the new
`value` will be written after the original timeout elapses, and the timeout
is not restarted. When the timeout elapses, the value is written to disk
with `utf-8` encoding and `JSON.stringify()` plus a newline. The function
returns immediately (before the write is attempted), and any later disk file
writing error is logged to the console. Despite this, the interface to the
function is specified as `async` because concurrent `jcrw.read()` or
`jcrw.modify()` operations on the same `key` must be `await`ed before updating
the cache.
`await jcrw.modify(key, default_value, modify_func, timeout)` first does a
`jcrw.read()` call with the given `key` and `default_value`, then passes the
result of this to the user-specified `modify_func` callback, and then does a
`jcrw.write()` call with the given `key`, the `modify_func` result, and the
given `timeout`. In the meantime, the given cache entry is locked to prevent
any other accesses, thus allowing atomic modification of a given cache entry
(or equivalently, a given JSON file). The `modify_func` is specified as
`async`, so it can perform activities such as disk I/O, but this should not be
lengthy, since other cache accesses to the same key will block during the
`modify_func`.
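The per-key locking that makes `modify` atomic can be implemented as a chain of
promises, where each modification awaits its predecessor. A minimal sketch of
that technique, with our own naming rather than the package's internals:

```
// Hypothetical sketch of per-key promise-chain locking: each key maps to the
// tail of a promise chain, and every modification appends itself to the tail.
const locks = new Map() // key -> promise resolving when the key is free

async function withLock(key, func) {
  const prev = locks.get(key) || Promise.resolve()
  // Append this modification to the chain; swallow the predecessor's
  // exception so one failure does not poison later modifications
  const next = prev.catch(() => {}).then(func)
  locks.set(key, next)
  return next
}
```

Concurrent calls with the same key are serialized in arrival order, while
calls with different keys proceed independently.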
The interface for the user-provided callback function `modify_func()` is:

`await modify_func(result)` — receives a `result` object whose `value`
property holds the current cached object; the callback should modify or
replace `result.value`, and the updated value is then written back by
`jcrw.modify()`.

The following example maps each `slug` value to a counter. The counter for a
page is incremented each time the code executes. The code creates a new file
and/or a new counter as required.
```
let JSONCacheRW = require('@ndcode/json_cache_rw')

let json_cache_rw = new JSONCacheRW()
let hit = async slug => {
  let hit_count = await json_cache_rw.read('hit_count.json', {})
  if (
    !Object.prototype.hasOwnProperty.call(hit_count, slug)
  )
    hit_count[slug] = 0
  ++hit_count[slug]
  await json_cache_rw.write('hit_count.json', hit_count)
}
```
In the above example, the update is not done atomically, since it does not
matter in which order hits are recorded for a page. It could be done atomically
like:
```
let JSONCacheRW = require('@ndcode/json_cache_rw')

let json_cache_rw = new JSONCacheRW()
let hit = slug => {
  json_cache_rw.modify(
    'hit_count.json',
    {},
    async result => {
      if (
        !Object.prototype.hasOwnProperty.call(result.value, slug)
      )
        result.value[slug] = 0
      ++result.value[slug]
    }
  )
}
```
Atomic modifications can also be nested, in order to update several JSON files
consistently; in that case, the order of lock acquisition should be chosen to
avoid deadlock. In this example we will acquire `transactions.json` and then
`balances.json`:
```
let JSONCacheRW = require('@ndcode/json_cache_rw')

let json_cache_rw = new JSONCacheRW()
let deposit = (account, amount) => {
  json_cache_rw.modify(
    'transactions.json',
    [],
    async transactions => {
      transactions.value.push({account, amount})
      await json_cache_rw.modify(
        'balances.json',
        {},
        async balances => {
          if (
            !Object.prototype.hasOwnProperty.call(balances.value, account)
          )
            balances.value[account] = 0
          balances.value[account] += amount
        }
      )
    }
  )
}
```
## About asynchronicity

JSON files are read and written with `fs.readFile()` and `fs.writeFile()`;
thus `jcrw.read()` is fundamentally an asynchronous operation and therefore
returns a `Promise`, which we showed as `await jcrw.read()` above. Other
functions are also asynchronous as they may have to wait for a concurrent
`jcrw.read()` to complete.
Also, the atomic modification may be asynchronous, and so `modify_func()` is
also expected to return a `Promise`. Obviously, `jcrw.modify()` must wait for
the `modify_func()` promise to resolve, indicating that the new object is
safely stored in the cache, so that it can resolve the `jcrw.modify()` promise
in turn.
## About exceptions
Exceptions during atomic modification are handled by reflecting them through
both `Promise`s. The user should ensure that the `result.value` is not modified
in this case — exceptions should be caught and any `result.value` changes
undone before the exception is rethrown from `modify_func` to `jcrw.modify()`.
Note that if several callers are requesting the same key simultaneously and an
exception occurs during reading or parsing the JSON, each caller receives a
reference to the same shared exception object; thus, when the `jcrw.read()`
`Promise` rejects, the rejection value (exception object) should be treated as
read-only.
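The shared-rejection behaviour follows naturally from caching a single
in-flight `Promise` per key: every concurrent caller awaits the same promise,
so a failure delivers the very same exception object to all of them. A sketch
of the pattern (our naming, not the package's internals):

```
// Hypothetical sketch: all concurrent readers of a key share one promise, so
// a read or parse failure is delivered as one shared exception object.
const reads = new Map() // key -> in-flight or settled read promise

function sharedRead(key, loader) {
  if (!reads.has(key))
    reads.set(key, loader(key))
  return reads.get(key)
}
```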
## About deletions
## About on-disk modification
Do not modify the on-disk version of the file while the server is running and
the `json_cache_rw` may be active for a file. It will not be detected, and
cannot be handled in a consistent way. If read-only access to JSON files is
required, please use our `json_cache` module instead of `json_cache_rw`. Then,
on-disk changes to the file will be detected and visible to the application.
Also, do not run multiple node.js instances, or multiple `JSONCacheRW`
instances in the same node.js instance, that refer to the same file. Modifying
the file in such circumstances counts as an on-disk modification, which is not
allowed.
## About diagnostics
Setting `diag` to `true` in the constructor causes a diagnostic message to be
printed to `stdout` for each cache access, except in the common case of
retrieval when the object is already in cache. A `diag` value of `undefined` is
treated as `false`, thus it can be omitted in the usual case.
The `diag` output is handy for development, and can also be handy in
production, e.g. our production server is started by `systemd` which
automatically routes `stdout` output to the system log, and the cache access
diagnostic acts somewhat like an HTTP server's `access.log`, albeit cache hits
are not logged. It is particularly handy that write failures, such as disk-full
errors, are logged.
We have not attempted to provide comprehensive logging facilities or
log-routing, because the simple expedient is to turn off the built-in
diagnostics in complex cases and just do your own. In our server we use a
single `JSONCacheRW` instance for all `*.json` files with `diag` set to `true`.
## To be implemented
It is intended that we will shortly add a timer function (or possibly just a
function that the user should call periodically) to flush objects from the
cache after a stale time, on the assumption that the object might not be
accessible or wanted anymore. This will be able to occur between a
`jcrw.read()` and a corresponding `jcrw.write()` call, hence the API for
`jcrw.write()` specifies that the `value` is mandatory, even if the cached
object was modified in-place.
## GIT repository
The development version can be cloned, downloaded, or browsed with `gitweb` at:
https://git.ndcode.org/public/json_cache_rw.git
## License
## Contributions
We would greatly welcome your feedback and contributions. The `json_cache_rw`
package is under active development (and is part of a larger project that is
also under development) and thus the API is considered tentative and subject to
change. If this is undesirable, you could possibly pin the version in your
`package.json`.