
Posts

Showing posts from October, 2023

Ways of Sending Notification To External Application

Sending notifications from your Spring Boot application to an external application can be achieved in various ways, depending on your specific requirements and the capabilities of the external application. Here are a few options:

- HTTP Webhooks: If the external application provides an HTTP endpoint (webhook) to receive notifications, you can send an HTTP POST request from your Spring Boot application to that endpoint whenever you want to send a notification. This is a simple and widely used method for integrating with external systems.
- Message Queues: Use a message queue system like Apache Kafka, RabbitMQ, or Apache ActiveMQ to send messages to the external application. Your Spring Boot application produces messages to a topic/queue, and the external application consumes messages from that topic/queue. This approach decouples the sender and receiver, ensuring reliable delivery and scalability.
- RESTful API Calls: If the external application exposes RESTful APIs for receiving not...
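The webhook option can be sketched with the JDK's built-in HTTP client (in a Spring application you might instead use `RestTemplate` or `WebClient`). The webhook URL and JSON payload shape below are illustrative assumptions, not a real endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebhookNotifier {
    private final HttpClient client = HttpClient.newHttpClient();

    // Builds the POST request; URL and payload format are assumptions for the sketch.
    public HttpRequest buildRequest(String webhookUrl, String jsonPayload) {
        return HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                .build();
    }

    // Sends the notification and reports the external application's response code.
    public void send(String webhookUrl, String jsonPayload) throws Exception {
        HttpResponse<String> response = client.send(
                buildRequest(webhookUrl, jsonPayload),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("Webhook responded with status " + response.statusCode());
    }
}
```

In practice you would also add retry/backoff around `send`, since the external application may be temporarily unreachable.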

Cache Management Policy

Cache management policies dictate how a cache interacts with main memory when reading and writing data. These policies determine how data is stored in the cache, how it is retrieved, and when it is updated in both the cache and main memory. Here's how they affect read operations:

Read-Through Cache: When the CPU requests data and the cache misses (the data is not in the cache), the cache forwards the request to main memory. Main memory retrieves the requested data and sends it back to the cache, which then delivers it to the CPU. The cache updates its contents with the new data from main memory, ensuring that the cache remains consistent.

Advantages:
- Data Consistency: Ensures that the cache always contains up-to-date data from main memory. This is crucial for applications where data consistency is paramount, such as databases or transaction processing systems.
- Simplicity:...
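The read-through behavior described above can be sketched in a few lines: on a miss, the cache itself fetches from the backing store and keeps a copy. The class and method names are illustrative; the `Function` stands in for main memory (or a database):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal read-through cache sketch: on a miss, the cache loads the value
// from the backing store, stores a copy, and returns it to the caller.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> backingStore; // stands in for main memory

    public ReadThroughCache(Function<K, V> backingStore) {
        this.backingStore = backingStore;
    }

    public V get(K key) {
        // Hit: served directly from the cache.
        // Miss: forwarded to the backing store, cached, then returned.
        return cache.computeIfAbsent(key, backingStore);
    }

    public int size() {
        return cache.size();
    }
}
```

A second `get` for the same key never touches the backing store again, which is exactly the consistency-plus-speed trade the excerpt describes.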

CPU Cache

A CPU cache is a small, high-speed, volatile storage component within the central processing unit (CPU) of a computer. Its primary purpose is to temporarily store frequently accessed data and instructions, thereby reducing the time it takes for the CPU to access and retrieve this information. Caches are crucial for improving a computer's overall performance, as they help to mitigate the speed gap between the fast CPU and the slower main memory (RAM). Here are some key points to help you understand CPU caches:

- Levels of Cache: Modern CPUs typically have multiple levels of cache, such as L1, L2, and L3, often referred to as the first-level (L1), second-level (L2), and third-level (L3) cache. L1 cache is the smallest and closest to the CPU cores, while L3 cache is the largest but farther away in terms of access latency.
- Cache Hierarchy: The caches operate in a hierarchical manner, with L1 being the smallest but fastest and L3 being the largest but slower. ...
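One way to observe the cache at work from ordinary code is to compare sequential (row-major) and strided (column-major) traversal of the same 2D array: both compute the same sum, but the sequential walk touches memory in cache-line order while the strided walk causes far more cache misses. This is a demonstration sketch; actual timing differences depend on the machine, so the code only computes the sums:

```java
public class CacheLocalityDemo {
    public static final int N = 1024;

    // Row-major traversal touches memory sequentially: cache-friendly.
    public static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    // Column-major traversal jumps a whole row's worth of memory per step,
    // defeating spatial locality and causing many more cache misses.
    public static long sumColMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }
}
```

Wrapping each method in a timing loop on a large enough `N` typically shows the row-major version running noticeably faster, purely due to cache behavior.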

Cache Policy

Cache policies determine how data is stored and retrieved from a cache, which is a small and fast storage area that holds frequently accessed data to reduce the latency of accessing that data from a slower, larger, and more distant storage location, such as main memory or disk. Different cache policies are designed to optimize various aspects of cache performance, including hit rate, latency, and consistency. Here are some common types of cache policies:

- Least Recently Used (LRU): LRU is a commonly used cache replacement policy. It evicts the least recently accessed item when the cache is full, keeping track of the order in which items were accessed and removing the item that has not been accessed for the longest time.
- First-In-First-Out (FIFO): FIFO is a simple cache replacement policy. It removes the oldest item from the cache when new data needs to be stored, regardless of how frequently the items have been accessed.
- Most Recently Used (MRU): MRU removes the most recently accessed ...

Kubernetes

Kubernetes is a powerful container orchestration platform used for deploying, managing, and scaling containerized applications. It can help you automate many aspects of managing containers, making it easier to run and scale applications in a cloud-native environment. Here's a brief introduction to Kubernetes:

Key Concepts in Kubernetes:
- Container: Kubernetes is designed to work with containers, typically Docker containers. Containers are lightweight, portable, and can run consistently across various environments.
- Node: A node is a physical or virtual machine that runs your containers. Kubernetes manages the containers on these nodes.
- Cluster: A Kubernetes cluster consists of a set of nodes that work together. It includes a control plane and one or more worker nodes. The control plane manages the cluster and makes decisions about the desired state of the application.
- Pod: The smallest deployable unit in Kubernetes is a pod. A pod can contain one or more containers that share t...
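The pod concept is easiest to see in a manifest. A minimal sketch of a single-container Pod (the pod name and image are illustrative; any container image works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25     # any container image works here
      ports:
        - containerPort: 80
```

Saving this as `pod.yaml` and running `kubectl apply -f pod.yaml` asks the control plane to schedule the pod onto a worker node, which then runs the container.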

Native Query

What is a Native Query and How Does It Work? Native queries in Spring Data JPA do not require recompilation on every call. The SQL query specified in a native query is typically compiled once, when your application starts up or when the relevant repository interface is first used. The generated SQL is cached, and subsequent calls to the native query reuse the same compiled query. This caching behavior is part of the optimization provided by the JPA provider, such as Hibernate. Here's how it generally works:

- Compilation on First Use: When you first use a native query in your application, or when your application starts up, the JPA provider (e.g., Hibernate) compiles the SQL query into an executable form.
- Caching: The compiled query is cached by the JPA provider for reuse. This means the JPA provider will not recompile the query on every call, improving query execution performance.
- Reuse: Subsequent calls to the native query reuse the cached, precompiled query, which helps reduc...
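In Spring Data JPA a native query is declared on a repository method with `@Query(value = "...", nativeQuery = true)`. The compile-once-then-reuse behavior described above can be illustrated with a toy plan cache. This is not Hibernate's actual implementation, just a sketch of the pattern: the expensive "compilation" runs only on first use of a given SQL string, and every later call reuses the cached result:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of compile-once-and-cache. The "plan" string stands in
// for a compiled execution plan; `compilations` counts the expensive step.
public class QueryPlanCache {
    private final Map<String, String> planCache = new ConcurrentHashMap<>();
    public int compilations = 0;

    public String getPlan(String sql) {
        return planCache.computeIfAbsent(sql, s -> {
            compilations++;               // expensive step: once per query string
            return "PLAN[" + s + "]";     // stand-in for the compiled form
        });
    }
}
```

Calling `getPlan` repeatedly with the same SQL compiles once and serves the cached plan thereafter, mirroring the First Use / Caching / Reuse steps above.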