
What distinguishes a collection from a stream? 

A collection and a stream are both ways to work with sequences of elements in Java, but they differ in several ways.

A collection is an in-memory data structure that holds a finite number of elements. Collections can be indexed and support random access, and they typically provide methods for adding, removing, and querying elements.
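To make that concrete, here is a minimal sketch (the class and method names are made up for illustration) showing adding, indexed access, and removal on an `ArrayList`:

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionBasics {
    // Demonstrates add, indexed (random) access, update, and removal
    static List<String> buildList() {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
        list.set(1, "B");   // random access by index
        list.remove("a");   // remove by value
        return list;        // [B]
    }

    public static void main(String[] args) {
        System.out.println(buildList()); // [B]
    }
}
```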

A stream, on the other hand, is a sequence of elements that can be processed in a functional way, without the need to store them in memory. Streams are designed to support parallel processing of large data sets, and they can be created from various sources, such as collections, arrays, files, or other data sources.
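As a sketch of that flexibility, a stream can be created from a collection, from an array, or directly from values (the class and method names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class StreamSources {
    // Stream from a collection
    static long countFromList() {
        List<String> names = List.of("Alice", "Bob", "Carol");
        return names.stream().count();
    }

    // Stream from an array
    static int sumFromArray() {
        int[] numbers = {1, 2, 3};
        return Arrays.stream(numbers).sum();
    }

    // Stream built directly from values
    static long countFromValues() {
        return Stream.of("x", "y", "z").count();
    }

    public static void main(String[] args) {
        System.out.println(countFromList());   // 3
        System.out.println(sumFromArray());    // 6
        System.out.println(countFromValues()); // 3
    }
}
```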

Here are some key differences between collections and streams:

Storage: Collections store all their elements in memory, whereas streams do not necessarily store all their elements in memory. Streams can work with data from various sources, and they can process data on the fly, as it becomes available.

Mutability: Collections can be mutable or immutable, depending on the implementation. Stream operations, however, never modify their source: intermediate operations such as map and filter produce new streams, leaving the underlying data untouched.
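For example, mapping over a list builds a new result and leaves the source list exactly as it was. A small sketch (the helper name `upperCased` is made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamImmutability {
    // map() builds a new list; the input list is never modified
    static List<String> upperCased(List<String> input) {
        return input.stream()
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = List.of("foo", "bar");
        List<String> upper = upperCased(words);
        System.out.println(words); // [foo, bar] -- source unchanged
        System.out.println(upper); // [FOO, BAR]
    }
}
```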

Iteration: Collections can be iterated over multiple times, and elements can be accessed in any order. A stream can be consumed only once; invoking a second terminal operation on the same stream throws an IllegalStateException.
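The single-use rule is easy to demonstrate: running a second terminal operation on an already-consumed stream throws an `IllegalStateException`. A minimal sketch (the method name `reuseThrows` is illustrative):

```java
import java.util.stream.Stream;

public class StreamReuse {
    // Returns true if reusing a consumed stream throws, as the spec requires
    static boolean reuseThrows() {
        Stream<Integer> s = Stream.of(1, 2, 3);
        s.count();          // terminal operation consumes the stream
        try {
            s.count();      // second terminal operation is illegal
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(reuseThrows()); // true
    }
}
```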

Lazy evaluation: Streams are evaluated lazily, meaning that intermediate operations are only performed when a terminal operation is invoked. This allows for more efficient processing of large data sets, as only the necessary data is processed.
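A small sketch makes the laziness visible: `peek` counts how many elements the pipeline actually visits, nothing runs until the terminal `findFirst()` is invoked, and evaluation stops as soon as a match is found (the names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreams {
    // Returns how many elements were visited to find the first even number
    static int visitsForFirstEven() {
        AtomicInteger visits = new AtomicInteger();
        Stream<Integer> pipeline = Stream.of(1, 2, 3, 4, 5)
                .peek(n -> visits.incrementAndGet()) // counts each visited element
                .filter(n -> n % 2 == 0);
        // At this point visits is still 0: intermediate operations are lazy
        pipeline.findFirst(); // terminal operation triggers evaluation
        return visits.get();  // 2: evaluation stopped at the first even number
    }

    public static void main(String[] args) {
        System.out.println(visitsForFirstEven()); // prints 2
    }
}
```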

Parallel processing: Streams support parallel processing: calling parallelStream() on a collection, or parallel() on an existing stream, lets large data sets be split across multiple threads or CPU cores.
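As a sketch, switching a pipeline to parallel execution is a single call; `parallel()` splits the work across the common fork-join pool (the class and method names are illustrative):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Sums 1..n, splitting the range across worker threads
    static long sumTo(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel() // enable parallel execution
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumTo(1_000_000)); // 500000500000
    }
}
```

Parallel streams pay off only for large, cheaply splittable workloads; for small data sets the coordination overhead usually outweighs the gain.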

In summary, collections are designed for storing and manipulating data in memory, while streams are designed for processing data in a functional, parallel, and lazy way.
