  1. Cache Operations Strategy
    1. Write-Through
      1. Does not improve write performance
        1. Operation is completed after written to the Data Store
      2. No support for Two-Phase Commit
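A minimal sketch of the write-through idea above (class and field names are mine; plain HashMaps stand in for the cache and the Data Store): the put() returns only after the store write completes, which is why it does not improve write performance.

```java
import java.util.HashMap;
import java.util.Map;

// Write-through sketch: every put() is written synchronously to the
// backing "data store" before the call returns, so each write pays the
// full store latency.
public class WriteThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> dataStore; // stands in for the database

    public WriteThroughCache(Map<String, String> dataStore) {
        this.dataStore = dataStore;
    }

    public void put(String key, String value) {
        dataStore.put(key, value); // synchronous Data Store write first
        cache.put(key, value);     // operation completes only after the store write
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```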
    2. Read-Through
      1. On a cache miss, the value is loaded from the Data Store and then cached
    3. Refresh Ahead
      1. Can reload a cache before expiration
        1. Expiration time is configurable; the refresh window is Expiration Time × Refresh-Ahead Factor
        2. Objects accessed after expiration need to be retrieved synchronously from the Data Store
        3. If the data is accessed within the refresh window, before expiration, a refresh-ahead is scheduled (async process)
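The two knobs above map onto the Coherence cache configuration roughly as below (values are illustrative and the CacheStore class name is a placeholder of mine). With `expiry-delay` of 60s and `refresh-ahead-factor` of 0.5, an entry touched in the last 30s of its lifetime is refreshed asynchronously:

```xml
<distributed-scheme>
  <scheme-name>refresh-ahead-example</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme>
          <expiry-delay>60s</expiry-delay>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <!-- hypothetical CacheStore implementation -->
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <refresh-ahead-factor>0.5</refresh-ahead-factor>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>
```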
    4. Write-Behind
      1. Written to Data Source based on configured delay
        1. After the configured delay, the CacheStore is called to write to the Data Store; the delay can be 10 seconds, 20 minutes, 1 week, or longer
      2. Better application performance
      3. Multiple Writes coalesced to one physical write ("write-coalesce")
      4. Offers some protection against database failures, since queued writes can be retried
      5. If other external applications share the same data, updates need to be handled carefully; conflicts with external updates cannot be avoided
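The write-behind and write-coalesce points above can be sketched as follows (names are mine; a keyed queue stands in for Coherence's write-behind queue, and the flush is triggered manually instead of by the configured write-delay). Because the queue is keyed, three updates to the same key become one physical write:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Write-behind sketch: puts hit the cache immediately; dirty entries are
// queued and flushed to the data store later in one batch. The keyed
// queue coalesces repeated updates to one key into a single physical
// write ("write-coalesce").
public class WriteBehindCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> writeQueue = new LinkedHashMap<>(); // coalesces by key
    private final Map<String, String> dataStore;
    private int physicalWrites = 0;

    public WriteBehindCache(Map<String, String> dataStore) {
        this.dataStore = dataStore;
    }

    public void put(String key, String value) {
        cache.put(key, value);      // returns immediately: no store latency
        writeQueue.put(key, value); // a later value replaces the queued one
    }

    // In Coherence this runs after the configured write-delay; here it is
    // called explicitly to keep the sketch deterministic.
    public void flush() {
        for (Map.Entry<String, String> e : writeQueue.entrySet()) {
            dataStore.put(e.getKey(), e.getValue());
            physicalWrites++;
        }
        writeQueue.clear();
    }

    public int physicalWrites() { return physicalWrites; }
}
```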
  2. Caches
    1. Clustered Cache
    2. Near Cache
      1. Best of both worlds
      2. Fast read access to MRU (Most Recently Used) and MFU (Most Frequently Used) data
      3. Wraps 2 caches
        1. Front-Cache
          1. Provides Local Cache access
        2. Back-Cache
          1. Centralized and multi-tiered cache
          2. Load-on-demand when local cache misses
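The front/back pairing above corresponds to a Coherence `near-scheme` roughly as below (sizes and scheme names are illustrative; the HYBRID eviction policy is what keeps MRU/MFU entries in the front cache):

```xml
<near-scheme>
  <scheme-name>near-example</scheme-name>
  <front-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy> <!-- combines LRU and LFU -->
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <!-- hypothetical name of the clustered back-cache scheme -->
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <invalidation-strategy>auto</invalidation-strategy>
</near-scheme>
```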
    3. Local Cache
  3. Data Distribution
    1. Dynamic Partitioning
      1. Each Server manages its share of information
      2. Data is logically co-located on the same server to avoid extra network hops (Affinity)
      3. Each server knows the backup location to route the access
      4. Predictable Scalability
        1. More servers, better performance
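The routing idea behind dynamic partitioning can be sketched as below (class and method names are mine; Coherence's actual partition assignment is more sophisticated): a key hashes to a fixed partition, and an ownership table maps each partition to its server, so any member reaches the owner in one hop.

```java
// Partition-routing sketch: key -> partition -> owning server.
public class PartitionRouter {
    private final int partitionCount;
    private final int[] owners; // owners[p] = index of the server owning partition p

    public PartitionRouter(int partitionCount, int serverCount) {
        this.partitionCount = partitionCount;
        this.owners = new int[partitionCount];
        for (int p = 0; p < partitionCount; p++) {
            owners[p] = p % serverCount; // each server manages an even share
        }
    }

    public int partitionFor(String key) {
        // floorMod keeps the result non-negative for negative hash codes
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    public int ownerFor(String key) {
        return owners[partitionFor(key)];
    }
}
```

Adding servers only redistributes the ownership table, which is why scalability is predictable: each server keeps managing its own share.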
    2. Primary/Backup
      1. Get() - First go to the Primary
      2. Put() - First go to the Primary
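A minimal sketch of the primary/backup routing above (names are mine; two HashMaps stand in for the primary and backup copies): both operations go to the primary first, and the primary keeps the backup in sync so a failover can promote a consistent copy.

```java
import java.util.HashMap;
import java.util.Map;

// Primary/backup sketch: get() and put() are both routed to the primary;
// put() then updates the backup so it can take over on failure.
public class PrimaryBackupPair {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> backup = new HashMap<>();

    public void put(String key, String value) {
        primary.put(key, value); // primary first
        backup.put(key, value);  // then replicated to the backup
    }

    public String get(String key) {
        return primary.get(key); // reads are served by the primary
    }

    public String failoverGet(String key) {
        return backup.get(key); // after primary loss, the backup is promoted
    }
}
```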
    3. Replicated Cache
      1. Replicates to all nodes
      2. Get() - Happens on the local node
      3. Put() - Happens on the local node and replicates to all nodes
      4. Manages the locks
      5. I am a bit skeptical here, need to test
        1. Memory requirement per node increases with the total data set
        2. Too much data to be managed by each cluster node
        3. Scalability is not linear if there are many updates
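The scalability concern above can be made concrete with a sketch (names are mine; one HashMap per "node"): a get() is a purely local read, but every put() must touch all nodes, so update cost grows with cluster size.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Replicated-cache sketch: every node holds a full copy. Reads are local,
// but each put() fans out to all nodes, which is why write-heavy
// workloads do not scale linearly.
public class ReplicatedCache {
    private final List<Map<String, String>> nodes = new ArrayList<>();
    private int messagesSent = 0;

    public ReplicatedCache(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    public void put(String key, String value) {
        for (Map<String, String> node : nodes) { // fan-out to every node
            node.put(key, value);
            messagesSent++;
        }
    }

    public String get(int nodeIndex, String key) {
        return nodes.get(nodeIndex).get(key); // purely local read
    }

    public int messagesSent() { return messagesSent; }
}
```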
  4. TCMP Protocol
    1. TCMP is a combination of
      1. UDP Multicast
        1. Cluster discovery
        2. Heartbeat
        3. Message Delivery
          1. Used when a message needs to be delivered to multiple nodes
        4. Usually disabled in WAN environments
      2. UDP Unicast
        1. member-to-member communication
        2. Sometimes used for one-to-many communication to reduce CPU usage in large clusters
      3. TCP
        1. Sophisticated "Death Detection"
        2. NOT USED for data transfer due to the overhead
    2. Reliability
      1. UDP does not provide reliable or ordered message delivery
      2. TCMP uses a queue mechanism to solve UDP limitations
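The queue mechanism above can be sketched as a sequence-numbered reordering buffer (names are mine; TCMP's actual implementation also handles acknowledgements and retransmission): the receiver buffers out-of-order datagrams and releases messages to the application only in sequence order.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Reordering-queue sketch: UDP may deliver datagrams out of order, so
// each message carries a sequence number; the receiver buffers gaps and
// delivers only the contiguous in-order prefix.
public class OrderedReceiver {
    private final Map<Long, String> pending = new HashMap<>();
    private final List<String> delivered = new ArrayList<>();
    private long nextSeq = 0;

    public void receive(long seq, String payload) {
        pending.put(seq, payload);             // buffer whatever arrives
        while (pending.containsKey(nextSeq)) { // release the in-order prefix
            delivered.add(pending.remove(nextSeq));
            nextSeq++;
        }
    }

    public List<String> delivered() { return delivered; }
}
```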
  5. CacheStore
    1. Consumes more Cache Service threads
      1. A common symptom of an insufficient thread pool is increased cache access latency
      2. Be careful when a CacheStore calls another cache instance, to avoid overloading that cache service's thread pool
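The thread-pool sizing above is set on the cache service in the cache configuration, roughly as below (values and names are illustrative; older Coherence releases use `thread-count`, while newer ones replace it with `thread-count-min`/`thread-count-max`):

```xml
<distributed-scheme>
  <scheme-name>partitioned-with-store</scheme-name>
  <service-name>PartitionedCache</service-name>
  <thread-count>10</thread-count> <!-- worker threads available for CacheStore calls -->
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <!-- hypothetical CacheStore implementation -->
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```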