It is curious to see the dictionary meaning of the word Caching defined as 'to store things in a secret place'. The cache should really be a hidden place - not only hidden from the user's perspective but also from the program logic.

Let's go beyond caching architectures and layouts and explore a few scenarios where things may not work as expected.

One of the first reactions to an overloaded database in a distributed system is to introduce a cache, the idea being that the cache will shield the database from frequent, repetitive calls. And it works, too - until the caching server is the one that starts drowning under the load. Developers often assume a cache call is cheap, overlooking the fact that it is only cheap compared to a database call. Abuse it enough and the caching server starts running out of CPU and/or memory. One can scale the caching server up, but that is not a lasting solution and only delays the pain: if scaling up (vertically) were the answer, the database itself could have been scaled - why introduce a caching server at all?

One balanced approach is a two-level cache: a local cache backed by a shared/distributed cache. Since caching architectures are not really the topic of discussion here, let's leave them for self-discovery and learning.

If done correctly, then for a given key, most traffic to the cache should be read operations, and most traffic towards the database should be write operations. Certain cache architectures, like a write-back cache, do help in batching write operations - which is why the important qualifier is 'for a given key'. If there are too many write operations for the same key, there is likely a design issue, because at the very least the application will have to deal with concurrency problems.

Concurrency is a very critical aspect of a cache configuration, especially when multiple workers access the same keys in a distributed cache. Having to lock a cache entry for updates should be the last resort; instead, strive for a design where eventual consistency suffices, and in such a design try, as much as possible, to keep the cached data immutable. If there are frequent updates to a distributed cache, think again about whether the design is correct: updating the record in the cache instead of in the database means the database calls must be either really costly or too frequent.

One obvious option is to update the cache after updating the database. If a write-through or write-back strategy is being used, be careful: when the cache is updated first and the transaction then fails for logical reasons - for example, to prevent a dirty write - the cache ends up out of sync with the database. Consider specifically those transactions which involve more than one update to the database. Take master-detail records, where within one transaction a master record is entered first and then one or more detail records; of course, in a transaction everything should go in or nothing. It is better not to mix the two: let the database transaction commit, and only then update the cache.

Where asynchronous processing is needed, a proper messaging queue should be used. The purpose of messaging queues goes beyond decoupling components; they can also bring higher availability to the application and even help with a scalable architecture. Some messaging frameworks (Celery, for example) can even use a caching server like Redis as their broker, but that should not be a motivation to skip a proper messaging queue such as RabbitMQ and use a caching server as one directly. Who would do that? In the real world, non-technical challenges often force a technical compromise: one might be pushed to avoid a messaging queue for various reasons - to save costs, to avoid adding a new component to the infrastructure (with its security audits and certifications), lack of skills to use or manage one, or a cumbersome approval process.
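The two-level layout mentioned above can be sketched in a few lines. This is a minimal, single-threaded illustration, not a production design: `TwoLevelCache` and its `local_ttl` parameter are names invented here, and the shared tier is modeled as a plain dict standing in for a distributed cache such as Redis or Memcached.

```python
import time

class TwoLevelCache:
    """A small in-process cache in front of a shared tier (read path only)."""

    def __init__(self, shared, local_ttl=5.0):
        self.shared = shared       # stand-in for the distributed tier
        self.local = {}            # per-process tier: key -> (value, expires_at)
        self.local_ttl = local_ttl # keep this short so local copies stay fresh

    def get(self, key):
        entry = self.local.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value       # served locally, no network hop
            del self.local[key]    # local copy expired
        value = self.shared.get(key)   # fall through to the shared tier
        if value is not None:
            self.local[key] = (value, time.monotonic() + self.local_ttl)
        return value

shared = {"user:42": "alice"}
cache = TwoLevelCache(shared)
cache.get("user:42")   # first read fills the local tier
cache.get("user:42")   # second read never touches `shared`
```

The short local TTL is the design trade-off: the local tier absorbs repeated reads of hot keys, at the cost of serving values that may be a few seconds stale, which is exactly the eventual-consistency posture argued for above.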
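The transaction pitfall described above can be made concrete. In this sketch the 'database' and 'cache' are plain dicts, `TxFailed` stands in for a rollback, and both helper names are invented for illustration:

```python
class TxFailed(Exception):
    """Stands in for a database rollback, e.g. to prevent a dirty write."""

def cache_first(db, cache, key, value, fail=False):
    # Naive ordering: the cache is written before the transaction
    # commits, so a rollback leaves the cache out of sync.
    cache[key] = value
    if fail:
        raise TxFailed
    db[key] = value

def commit_first(db, cache, key, value, fail=False):
    # Safer ordering: touch the cache only after the commit succeeds.
    if fail:
        raise TxFailed
    db[key] = value
    cache[key] = value

db, cache = {}, {}
try:
    cache_first(db, cache, "order:7", "shipped", fail=True)
except TxFailed:
    pass
# cache now holds {'order:7': 'shipped'} while db is empty - out of sync
```

With `commit_first`, a failed transaction leaves both stores untouched; the worst case is a brief window where the database is ahead of the cache, which a normal cache read-through will heal.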
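The point about write-back caches batching writes 'for a given key' can also be sketched. The dict again stands in for the database, and `WriteBackCache`/`flush` are invented names; a real write-back cache must additionally worry about losing buffered writes on a crash.

```python
class WriteBackCache:
    """Buffers writes and flushes each dirty key to the database once."""

    def __init__(self, db):
        self.db = db            # stand-in for the real database
        self.data = {}          # current cached values
        self.dirty = set()      # keys written since the last flush
        self.db_writes = 0      # how many writes actually hit the database

    def put(self, key, value):
        self.data[key] = value  # absorb the write in memory
        self.dirty.add(key)

    def get(self, key):
        return self.data.get(key)

    def flush(self):
        # Many put() calls for the same key collapse into one database write.
        for key in self.dirty:
            self.db[key] = self.data[key]
            self.db_writes += 1
        self.dirty.clear()

db = {}
wb = WriteBackCache(db)
for i in range(100):
    wb.put("counter", i)   # 100 writes to the same key...
wb.flush()                 # ...but only one reaches the database
```

This is exactly why a hot key with many writes is less alarming under write-back than under write-through - and why it still signals a design smell: the buffered value is the only copy until `flush` runs.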
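One way to keep cached data immutable, as suggested above, is to never overwrite an entry: each update writes a fresh versioned key and then swaps a small pointer. The class and key scheme below are invented for illustration; in a real distributed cache the pointer swap would use the server's atomic primitives rather than a local dict assignment.

```python
import itertools

class VersionedCache:
    """Entries are immutable; an update creates a new version and moves a pointer."""

    def __init__(self):
        self.store = {}
        self._versions = itertools.count(1)

    def put(self, key, value):
        vkey = f"{key}:v{next(self._versions)}"
        self.store[vkey] = value   # the versioned entry itself is never mutated
        self.store[key] = vkey     # readers follow this pointer

    def get(self, key):
        vkey = self.store.get(key)
        return None if vkey is None else self.store.get(vkey)

vc = VersionedCache()
vc.put("profile:42", {"name": "alice"})
vc.put("profile:42", {"name": "alice", "city": "Pune"})
```

Because no entry is ever modified in place, concurrent readers see either the old complete value or the new complete value - never a half-written one - which removes the motivation to lock cache entries at all.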