Updating two tables
We can use a TTL on cache entries to bound staleness, but the lower the TTL, the less effective the cache is. These problems would be solved if we could just update the materialized views in the cache whenever any data changed in those source tables.
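To make the TTL trade-off concrete, here is a minimal sketch of a TTL cache (the class name and interface are illustrative, not from the article): every entry older than the TTL is treated as a miss, so staleness is bounded by the TTL, but a lower TTL means more misses.

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire after ttl_seconds, bounding staleness."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, stored_at)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self.entries[key] = (value, now)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        hit = self.entries.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if now - stored_at > self.ttl:
            # Entry is too stale: evict it and treat the lookup as a miss.
            del self.entries[key]
            return None
        return value

cache = TTLCache(ttl_seconds=60)
cache.put("user:1", {"name": "alice"}, now=0.0)
print(cache.get("user:1", now=30.0))  # {'name': 'alice'} (still fresh)
print(cache.get("user:1", now=61.0))  # None (expired; caller must recompute)
```

Lowering `ttl_seconds` tightens the staleness bound but forces more recomputation of the materialized view.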
Imagine that we are asked to build a new service that provides a single point of read access to data from multiple sources within our company.
In the world of microservices, this is generally considered an anti-pattern.
The shared data store tightly couples the services together, preventing them from evolving independently.
The front-end desktop and mobile services can quickly fetch user information to display, the HTTP API can return it as JSON, and so on. First off, it's somewhat complex to assemble all of the data needed to fulfill these requests: we are performing multiple queries across multiple tables.
The User Information Service will provide a single HTTP resource. Let's assume that the data this service needs is stored in a relational database. (Twitter's actual data storage is probably not like this, but many existing systems that we're all familiar with do follow this standard model, so let's roll with it.) With this architecture we would likely end up with a handful of source tables. A classic implementation of this service would likely end up querying the DB directly using SQL. This is a fairly simple example; if our service had different requirements, these aggregations could be much more complex, with grouping and filtering clauses, as well as joins across multiple tables.
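As a sketch of what that classic implementation might look like, here is a per-request SQL aggregation over a hypothetical schema (the `users`, `tweets`, and `follows` tables are assumptions modeled on the Twitter-style example; the article does not spell out the exact tables):

```python
import sqlite3

# Hypothetical schema standing in for the service's source tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tweets  (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE follows (follower_id INTEGER, followee_id INTEGER);
INSERT INTO users   VALUES (1, 'alice'), (2, 'bob');
INSERT INTO tweets  VALUES (10, 1), (11, 1), (12, 2);
INSERT INTO follows VALUES (2, 1);
""")

# A classic implementation re-runs this aggregation on every request:
row = conn.execute("""
    SELECT u.name,
           (SELECT COUNT(*) FROM tweets  t WHERE t.user_id     = u.id) AS tweet_count,
           (SELECT COUNT(*) FROM follows f WHERE f.followee_id = u.id) AS follower_count
    FROM users u
    WHERE u.id = ?
""", (1,)).fetchone()

print(row)  # ('alice', 2, 1)
```

Every call repeats the subquery counts against the source tables, which is exactly the per-request work the materialized-view cache is meant to avoid.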
As web developers, we often need to build services that query data from multiple sources in complex ways.
This puts load on the database, increases the response latency of our service, and these aggregations are repeated on every request for the same data. To improve performance, these services often pre-compute materialized views and store them in caches. This materialized view cache belongs only to our new service, decoupling it a bit from other services. However, introducing this cache into our service presents some new problems. We will send all data changes from each source table into its own Kafka topic. This will split updating the materialized views and querying the cache into separate, decoupled services.
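The updater side of that split can be sketched as follows. A real deployment would consume change events from Kafka topics (one per source table); this in-memory simulation, with made-up topic names and event shapes, only shows the idea: the updater applies each change event to the materialized view, and the query side reads the view without ever touching the source tables.

```python
from collections import defaultdict

# In-memory stand-ins for Kafka topics: one stream of change events per
# source table. A real service would use a Kafka consumer here.
topics = {
    "users":  [{"op": "upsert", "id": 1, "name": "alice"}],
    "tweets": [{"op": "insert", "id": 10, "user_id": 1},
               {"op": "insert", "id": 11, "user_id": 1}],
}

# The materialized view cache: user id -> pre-computed summary.
view = defaultdict(lambda: {"name": None, "tweet_count": 0})

def apply_user_event(event):
    view[event["id"]]["name"] = event["name"]

def apply_tweet_event(event):
    view[event["user_id"]]["tweet_count"] += 1

handlers = {"users": apply_user_event, "tweets": apply_tweet_event}

# The updater service replays each topic to keep the view current;
# the query path only ever reads `view`.
for topic, events in topics.items():
    for event in events:
        handlers[topic](event)

print(view[1])  # {'name': 'alice', 'tweet_count': 2}
```

Because the view is updated whenever a change event arrives, reads stay fresh without a TTL and without repeating the aggregation per request.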