Thread-safe classes are classes whose objects (resources) can be shared and concurrently modified by multiple threads without race conditions or data inconsistencies.
By following the principles below, you can design classes that behave correctly and safely in multi-threaded environments.
Immutable objects are inherently thread-safe because their state cannot be modified after construction. You can design your class to be immutable by declaring the class final, making all fields private and final, and providing no setters:
public final class ImmutableCounter {
private final int count;
public ImmutableCounter(int count) {
this.count = count;
}
public int getCount() {
return count;
}
}
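To "modify" an immutable object, a common pattern is to return a new instance with the updated state instead of mutating the existing one. Here is a minimal, self-contained sketch of that idea (the class and method names are illustrative, not from the original example):

```java
// Hypothetical extension of the immutable-counter idea: "modifying" an
// immutable object means returning a new instance with the updated state.
public final class ImmutableCounterDemo {
    public static final class Counter {
        private final int count;
        public Counter(int count) { this.count = count; }
        public int getCount() { return count; }
        // Returns a new Counter instead of mutating this one.
        public Counter increment() { return new Counter(count + 1); }
    }

    public static void main(String[] args) {
        Counter a = new Counter(0);
        Counter b = a.increment();
        System.out.println(a.getCount() + " " + b.getCount()); // prints "0 1"
    }
}
```

Because no instance ever changes after construction, threads can share `Counter` objects freely without any locking.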
Synchronization ensures that only one thread can execute a block of code at a time. This prevents race conditions when multiple threads access shared resources.
Use synchronized methods or blocks to protect the critical sections of a class.
public class SynchronizedCounter {
private int count = 0;
public synchronized void increment() {
count++;
}
public synchronized int getCount() {
return count;
}
}
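To see the synchronized counter in action, here is a minimal, self-contained sketch (the counter class is repeated inline so the example compiles on its own) in which two threads increment the same counter without losing updates:

```java
// Sketch: two threads incrementing a shared synchronized counter.
// Without the synchronized keyword, the final count could be less than 20000.
public class SynchronizedCounterDemo {
    static class Counter {
        private int count = 0;
        public synchronized void increment() { count++; }
        public synchronized int getCount() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.getCount()); // prints 20000
    }
}
```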
Java's java.util.concurrent.locks package offers more advanced locking mechanisms, such as ReentrantLock and ReadWriteLock, compared to synchronized methods.
These locks offer more flexibility than synchronized methods or blocks.
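As a minimal sketch of what ReentrantLock usage looks like (the class name here is illustrative), the same counter can be protected explicitly, with the release placed in a finally block so the lock is freed even if an exception is thrown:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a counter protected by an explicit ReentrantLock
// instead of the synchronized keyword.
public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();          // blocks until the lock is available
        try {
            count++;
        } finally {
            lock.unlock();    // always release in finally
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```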
Note: A ReadWriteLock allows EITHER the read lock to be held by multiple threads at a time OR the write lock to be held by a single thread, but never both simultaneously.
Say we have a ticket booking system.
Multiple users (threads) will try to view the chart and book a ticket.
A plain ReentrantLock would cause slowness here, because viewing the chart can safely be allowed for multiple users at a time without causing data inconsistencies.
ReadWrite Locking would work better in this case.
Refer to this for a working code example.
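As a minimal sketch of the ticket-booking scenario (the SeatChart class and its methods are hypothetical, invented for illustration), viewing takes the shared read lock while booking takes the exclusive write lock:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical seat chart: many threads may view concurrently (read lock),
// but booking takes the exclusive write lock.
public class SeatChart {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Set<Integer> bookedSeats = new HashSet<>();

    public boolean isBooked(int seat) {
        lock.readLock().lock();           // shared: many viewers at once
        try {
            return bookedSeats.contains(seat);
        } finally {
            lock.readLock().unlock();
        }
    }

    public boolean book(int seat) {
        lock.writeLock().lock();          // exclusive: one booker at a time
        try {
            return bookedSeats.add(seat); // false if already booked
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Readers never block each other; only a booking blocks other bookings and in-flight views.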

These locks also provide more flexible acquisition methods:
tryLock() (an attempt to acquire the lock without blocking)
lockInterruptibly() (an attempt to acquire the lock that can be interrupted)
tryLock(long, TimeUnit) (an attempt to acquire the lock that can time out)
For simple atomic operations like increments and decrements, use classes from the java.util.concurrent.atomic package.
Classes like AtomicInteger, AtomicLong, etc., provide lock-free thread safety for single variables.
import java.util.concurrent.atomic.AtomicInteger;
public class AtomicCounter {
private final AtomicInteger count = new AtomicInteger(0);
public void increment() {
count.incrementAndGet();
}
public int getCount() {
return count.get();
}
}
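To illustrate the lock-free guarantee, here is a minimal sketch in which several threads increment a shared AtomicInteger concurrently and no updates are lost:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: several threads hammering an AtomicInteger; no locks, no lost updates.
public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) count.incrementAndGet();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(count.get()); // prints 40000
    }
}
```

Internally, incrementAndGet uses a compare-and-swap loop rather than a lock, which is why it scales well for single-variable updates.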
Use thread-safe versions of collections, such as ConcurrentHashMap, CopyOnWriteArrayList, and BlockingQueue, provided in the java.util.concurrent package.
These collections handle synchronization internally and are designed for high-concurrency scenarios.
import java.util.concurrent.ConcurrentHashMap;
public class ThreadSafeCache {
private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
public void put(String key, String value) {
cache.put(key, value);
}
public String get(String key) {
return cache.get(key);
}
}
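Beyond plain put and get, ConcurrentHashMap offers atomic compound operations. As a sketch, computeIfAbsent computes and stores a missing value atomically, so two threads asking for the same absent key won't both run the computation:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: computeIfAbsent computes and stores a value atomically,
// so concurrent callers for the same missing key see one consistent result.
public class CacheDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        String v1 = cache.computeIfAbsent("key", k -> k.toUpperCase());
        String v2 = cache.computeIfAbsent("key", k -> "other"); // not recomputed
        System.out.println(v1 + " " + v2); // prints "KEY KEY"
    }
}
```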
Keep the scope of synchronized blocks as small as possible to reduce contention and improve performance. Only synchronize the critical section of the code that modifies shared state.
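As a minimal sketch of this principle (the class and its logic are illustrative), the expensive per-thread work runs outside the lock, and only the touch of shared state is synchronized:

```java
// Sketch: keep the slow work outside the synchronized block and
// lock only while touching shared state.
public class NarrowLocking {
    private final StringBuilder log = new StringBuilder(); // shared state

    public void record(String raw) {
        String formatted = raw.trim().toLowerCase(); // no lock needed here
        synchronized (log) {                         // lock only the mutation
            log.append(formatted).append('\n');
        }
    }

    public String dump() {
        synchronized (log) {
            return log.toString();
        }
    }
}
```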
Be mindful of potential deadlocks, where two or more threads are waiting on each other to release locks. Strategies to avoid deadlocks include acquiring locks in a consistent global order, using tryLock with a timeout instead of blocking indefinitely, and keeping the scope of each lock as small as possible.
Refer to the code for a working example of a deadlock.
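As a minimal sketch of the lock-ordering strategy (the account-transfer scenario here is hypothetical), both transfer directions acquire locks in the same global order, so two opposing transfers can never deadlock on each other:

```java
// Sketch: a deadlock-avoidance strategy — lock-ordering. All threads acquire
// the two locks in one globally agreed order, whichever operation runs first.
public class OrderedTransfer {
    static class Account {
        final int id;        // used to define the global lock order
        int balance;
        Account(int id, int balance) { this.id = id; this.balance = balance; }
    }

    public static void transfer(Account from, Account to, int amount) {
        // Always lock the account with the smaller id first.
        Account first = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```

Without the ordering, transfer(a, b) and transfer(b, a) running concurrently could each hold one lock while waiting for the other.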
When each thread needs its own copy of a variable, use ThreadLocal.
This ensures that each thread has its own independent instance of the variable, avoiding shared state issues.
public class ThreadLocalExample {
private static final ThreadLocal<Integer> threadLocalValue = ThreadLocal.withInitial(() -> 0);
public void setValue(int value) {
threadLocalValue.set(value);
}
public int getValue() {
return threadLocalValue.get();
}
}
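To make the per-thread isolation concrete, here is a minimal sketch in which a second thread sees the initial value rather than the value set by the main thread:

```java
// Sketch: each thread sees its own independent copy of the ThreadLocal value.
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> value = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        value.set(100); // main thread's copy
        Thread other = new Thread(() -> {
            // This thread starts from the initial value, not main's 100.
            System.out.println("other thread sees " + value.get()); // prints 0
            value.set(42); // only changes this thread's copy
        });
        other.start();
        other.join();
        System.out.println("main thread still sees " + value.get()); // prints 100
    }
}
```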
Java provides higher-level concurrency utilities like ExecutorService, ForkJoinPool, CountDownLatch, CyclicBarrier, etc.
These abstractions help manage concurrency without needing to deal with low-level thread management directly.
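As a minimal sketch of these utilities working together (the task bodies are placeholders), an ExecutorService runs tasks on a managed pool while a CountDownLatch lets the caller wait for all of them to finish:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: an ExecutorService runs tasks on a managed thread pool, and a
// CountDownLatch lets the caller block until every task has completed.
public class ExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        int tasks = 3;
        CountDownLatch latch = new CountDownLatch(tasks);
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < tasks; i++) {
            final int id = i;
            pool.submit(() -> {
                System.out.println("task " + id + " done"); // placeholder work
                latch.countDown();
            });
        }
        latch.await();          // block until all tasks have counted down
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

No thread is created or joined by hand; the pool and the latch manage the coordination.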
Focus on ensuring correctness first when designing thread-safe classes. Once correctness is ensured, you can optimize performance by reducing the scope of synchronization or using more advanced concurrency techniques.