Java 5 Concurrency

1.1 Locks

Before Java 5, concurrency was achieved using synchronized blocks and the
wait/notify idiom. Synchronization is a locking mechanism in which a block of
code or a method is protected by an intrinsic lock. Any thread that wants to
execute this block of code must first acquire the lock. The lock is released
once the thread exits the synchronized block or method. Acquiring and
releasing the lock is handled by the compiler and the JVM, relieving the
programmer of lock book-keeping. However, synchronization has some drawbacks,
as described below.

The wait/notify idiom allows a thread to wait for a signal from another
thread. A wait can be timed or interrupted (using Thread.interrupt()), and it
is signaled using notify or notifyAll, as in the sketch below.
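A minimal sketch of this pre-Java 5 idiom, assuming a simple one-slot
"mailbox" shared between two threads (the Mailbox class and its payload are
illustrative only, not part of the original text):

public class Mailbox {

    private Object message;   // shared state guarded by the intrinsic lock

    public synchronized void put(Object msg) throws InterruptedException {
        while (message != null) {
            wait();            // wait until the previous message is taken
        }
        message = msg;
        notifyAll();           // signal any thread waiting in take()
    }

    public synchronized Object take() throws InterruptedException {
        while (message == null) {
            wait();            // wait until a message is available
        }
        Object msg = message;
        message = null;
        notifyAll();           // signal any thread waiting in put()
        return msg;
    }
}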


1.1.1       Drawbacks of Synchronization

     No Back-off: Once a thread enters a synchronized block or method,
     it must wait until the lock becomes available. It cannot back off
     to execute other instructions if the lock is unavailable or is
     taking a very long time to acquire.

     No Read-Only Access: Multiple threads cannot share the lock even
     when only read-only access is required.

     Compile Time: Synchronization is a compile-time decision. It
     cannot be turned off based on run-time conditions; enabling that
     would require a lot of code duplication.

     No Metadata: Lock metadata, such as the number of threads waiting
     for the lock or the average time taken to acquire it, is not
     available to a Java program.

1.1.2       Lock Interface

As of Java 5, Lock interface implementations can be used instead of
synchronization.

When a thread acquires a lock, its working memory is synchronized with main
memory. This behavior is the same as entering a synchronized block or method.

The Lock interface has methods to lock unconditionally, to lock
interruptibly, and to try the lock without blocking.

lockInterruptibly: This method acquires the lock unless the calling thread is
interrupted by another thread. On calling this method, if the lock is
available it is acquired. If the lock is not available, the thread becomes
dormant and waits for the lock to become available. If some other thread
interrupts this thread, an InterruptedException is thrown.

tryLock: This method acquires the lock and returns true only if the lock is
available at the time of the call; otherwise it returns false immediately
without blocking. A timed overload, tryLock(long, TimeUnit), waits up to the
given time for the lock to become available.
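The no-back-off drawback listed above can be addressed with tryLock. The
sketch below is illustrative only; the Account class, its balance field, and
the 50 ms timeout are assumptions, not part of the original text.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Account {

    private final Lock lock = new ReentrantLock();
    private long balance;

    // Returns false instead of blocking forever when the lock cannot be
    // obtained within the timeout; the caller can back off and retry later.
    public boolean tryWithdraw(long amount) throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {   // timed back-off
            try {
                if (balance >= amount) {
                    balance -= amount;
                    return true;
                }
                return false;   // insufficient funds
            } finally {
                lock.unlock();
            }
        }
        return false;   // lock not available: back off instead of waiting
    }
}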


1.1.3       Lock Implementation

1.1.3.1 ReentrantLock
ReentrantLock is an implementation of the Lock interface. It allows a thread
to re-enter code that is protected by a lock object it already holds.

It has additional methods that return the state of the lock and other meta
information.

A ReentrantLock can be created with a fairness parameter; the lock is then
granted to waiting threads in arrival order.
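A small sketch of both points, assuming a throwaway demo class (FairLockDemo
is not from the original text):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock(true);   // fair: waiters acquire in arrival order

        lock.lock();
        try {
            // monitoring methods expose the lock metadata mentioned above
            System.out.println("held by current thread: " + lock.isHeldByCurrentThread());
            System.out.println("hold count: " + lock.getHoldCount());
            System.out.println("queued threads: " + lock.getQueueLength());
        } finally {
            lock.unlock();
        }
    }
}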




1.1.3.2 ReentrantReadWriteLock
This is an implementation of ReadWriteLock. It holds a pair of associated
locks, one for read-only operations and one for write operations, obtained
via readLock() and writeLock(). The read lock can be shared by multiple
readers. The write lock is exclusive, i.e., it can be granted to only one
writer thread, and only when no reader thread holds the read lock.

A reader thread is one that performs read operations; a writer thread
performs write operations.

When the fairness flag is true, the locks are granted based on the arrival
order of threads.


Lock Downgrading

Lock downgrading is allowed, i.e., if a thread holds the write lock, it can
acquire the read lock and then release the write lock.

ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.writeLock().lock();
l.readLock().lock();    // acquire the read lock while still holding the write lock
l.writeLock().unlock(); // downgrade: only the read lock is now held
Lock Upgrading

Lock upgrading is not allowed, i.e., if a thread holds the read lock, it
cannot acquire the write lock without first releasing the read lock.

ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.readLock().lock();
//process..
l.readLock().unlock();  // first release the read lock, then acquire the write lock
l.writeLock().lock();


Concurrency Improvement

When there are a large number of reader threads and a small number of writer
threads, a ReadWriteLock improves concurrency compared to an exclusive lock.
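A minimal sketch of this read-mostly pattern, assuming a map guarded by a
ReentrantReadWriteLock (the ReadMostlyMap class is illustrative only):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyMap {

    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Map<String, String> map = new HashMap<String, String>();

    public String get(String key) {
        rwl.readLock().lock();      // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwl.writeLock().lock();     // exclusive: blocks readers and other writers
        try {
            map.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}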




1.1.4       Typical Lock Usage

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockedMap {

    private final Lock l = new ReentrantLock();
    private final Map<Object, Object> myMap = new HashMap<Object, Object>();

    public Object get(Object key) {
        l.lock();
        try {
            return myMap.get(key);
        } finally {
            l.unlock();   // always release the lock in finally
        }
    }

    public void put(Object key, Object val) {
        l.lock();
        try {
            myMap.put(key, val);
        } finally {
            l.unlock();
        }
    }
}
1.2 Condition

The Condition interface factors out the Object monitor methods wait, notify,
and notifyAll into distinct objects. Condition objects are intrinsically
bound to a lock and are obtained by calling newCondition() on the lock
instance.

Where Lock replaces synchronized methods and blocks, Condition replaces the
Object monitor methods.

Conditions are also called condition variables or condition queues.

Multiple condition variables can be created on the same lock object, and
different sets of threads can wait on different condition variables. A
classic usage is with the producers and consumers of a bounded buffer.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {

    final Object[] buffer = new Object[10];
    final Lock l = new ReentrantLock();
    final Condition producer = l.newCondition();  // producers wait here when the buffer is full
    final Condition consumer = l.newCondition();  // consumers wait here when the buffer is empty
    int bufferCount, putIdx, getIdx;

    public void put(Object x) throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == buffer.length)
                producer.await();        // buffer full: wait for a consumer

            buffer[putIdx++] = x;
            if (putIdx == buffer.length)
                putIdx = 0;
            ++bufferCount;
            consumer.signal();           // wake up one waiting consumer
        } finally {
            l.unlock();
        }
    }

    public Object get() throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == 0)
                consumer.await();        // buffer empty: wait for a producer

            Object x = buffer[getIdx++];
            if (getIdx == buffer.length)
                getIdx = 0;
            --bufferCount;
            producer.signal();           // wake up one waiting producer
            return x;
        } finally {
            l.unlock();
        }
    }
}


awaitUninterruptibly

This method on a condition variable causes the thread to wait until it is
signaled on that variable; unlike await(), the wait cannot be interrupted.


IllegalMonitorStateException

A thread calling methods on a condition variable must hold the corresponding
lock. If it does not, an IllegalMonitorStateException is thrown.




1.3       Atomic Variables
Atomic variables are used for lock-free, thread-safe programming on single
variables. As is the case with volatile variables, atomic variables are never
cached locally; they are always synced with main memory.


CompareAndSet

Atomic variables use the compare-and-swap (CAS) primitive of processors. CAS
has three operands: a memory location (V), the expected old value of the
memory location (A), and the new value (B). If the current value of the
memory location matches the expected old value (A), the new value (B) is
written to the memory location (V) and true is returned. If the current value
differs from the expected old value, memory is not updated and false is
returned.

Code logic can retry the operation if false is returned; a sketch of this
retry idiom follows the class below.

The code below shows the CAS algorithm. The actual implementation is in
hardware for processors that support CAS. For processors that do not support
CAS, locking as shown below is used to simulate it.

public class SimulatedCAS {

    private int value;

    public synchronized int getValue() {
        return value;
    }

    public synchronized boolean compareAndSet(int expectedValue, int newValue) {
        boolean set = false;
        if (value == expectedValue) {
            value = newValue;   // update only when the current value matches the expectation
            set = true;
        }
        return set;
    }
}
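For comparison, the retry idiom with a real atomic class. This is a sketch
only; the AtomicCounter class is illustrative, and increment() simply spells
out what AtomicInteger.incrementAndGet() already does.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {  // retry if another thread changed the value
                return next;
            }
        }
    }
}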




1.4 Data Structures

1.4.1           Blocking Queue

A blocking queue is a queue data structure with additional behavior:
consumers of the queue wait/block when the queue is empty and producers
wait/block when the queue is full.

Queue implementations can guarantee fairness, wherein the longest-waiting
consumer/producer gets the first chance to access the queue.

The code below shows a producer and a consumer using a blocking queue.

import java.util.concurrent.BlockingQueue;

public class Producer implements Runnable {

    private final BlockingQueue<Object> q;

    public Producer(BlockingQueue<Object> q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                q.put(produce());   // blocks while the queue is full
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // restore interrupt status and exit
        }
    }

    private Object produce() {
        return new Object();        // placeholder for real work
    }
}



import java.util.concurrent.BlockingQueue;

public class Consumer implements Runnable {

    private final BlockingQueue<Object> q;

    public Consumer(BlockingQueue<Object> q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                consume(q.take());  // blocks while the queue is empty
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // restore interrupt status and exit
        }
    }

    private void consume(Object item) {
        // placeholder for real work
    }
}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Setup {

    public static void main(String[] args) {
        // bounded queue of capacity 10 with fair ordering
        BlockingQueue<Object> q = new ArrayBlockingQueue<Object>(10, true);
        new Thread(new Producer(q)).start();
        new Thread(new Consumer(q)).start();
        new Thread(new Consumer(q)).start();
    }
}




1.4.2        ConcurrentHashMap

ConcurrentHashMap is a thread-safe hash map, but it does not block all get
and put operations the way a synchronized hash map does. It allows full
concurrency of gets and an adjustable expected concurrency for puts.

ConcurrentHashMap internally divides its storage into segments (bins); the
entries in a bin are connected by a linked list. Null keys and values are not
allowed.

A get operation generally does not entail locking. However, if the algorithm
reads a null value, the segment (bin) is first locked and the value is
fetched again; a value can appear null because of reordering of instructions.

put operations are performed by locking only the particular segment (bin)
that holds the key.
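A short sketch of typical usage. The WordCount class is illustrative only;
putIfAbsent and replace are used to update the map atomically, avoiding the
check-then-act race a plain HashMap would have.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WordCount {

    private final ConcurrentMap<String, Integer> counts =
            new ConcurrentHashMap<String, Integer>();

    public void count(String word) {
        Integer prev = counts.putIfAbsent(word, 1);   // atomic: only one thread wins the insert
        while (prev != null) {
            // replace() succeeds only if no other thread changed the value in between
            if (counts.replace(word, prev, prev + 1)) {
                return;
            }
            prev = counts.get(word);
        }
    }

    public int get(String word) {
        Integer c = counts.get(word);
        return c == null ? 0 : c;
    }
}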




1.5 Synchronizers

Synchronizers control the flow of execution in one or more threads.


1.5.1        Semaphore

A counting semaphore is used to restrict the number of threads that can
access a physical or logical resource. A semaphore maintains a set of
permits. Each call to acquire() consumes a permit, possibly blocking until a
permit is available. Each call to release() returns a permit and signals a
waiting acquirer.

Usage:
A library has N seats and thus allows only N members to use it at one time.
If all seats are occupied, arriving members wait for a seat to become vacant.
Design a model for the library.

package com.concur.semaphore;

import java.util.concurrent.Semaphore;

public class Library {

    // N = 50 seats; 'true' requests fair (arrival-order) granting of permits
    private final Semaphore s = new Semaphore(50, true);

    public void enter() throws InterruptedException {
        s.acquire();   // blocks until a seat is free
    }

    public void exit() {
        s.release();   // frees a seat and signals a waiting member
    }

    public void borrowBooks(int id) {
        //implementation
    }

    public void returnBook(int id) {
        //implementation
    }

    public static void main(String[] args) {
        Library l = new Library();
        try {
            l.enter();
            l.borrowBooks(1234);
            l.exit();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}




1.5.2        Mutex

A mutex is a counting semaphore with only one permit. Mutexes have a lot in
common with locks; the difference is that, with a semaphore-based mutex, a
thread other than the one holding the permit can call release, as sketched
below.
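A minimal sketch, assuming a throwaway MutexDemo class. With a ReentrantLock,
unlock() from a non-owner thread would throw IllegalMonitorStateException; a
binary semaphore has no such owner.

import java.util.concurrent.Semaphore;

public class MutexDemo {

    // one permit: at most one thread inside the critical section at a time
    private static final Semaphore mutex = new Semaphore(1);

    public static void criticalSection() throws InterruptedException {
        mutex.acquire();
        try {
            // access the shared resource
        } finally {
            mutex.release();   // may legally be called by another thread, e.g. a watchdog
        }
    }
}
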
1.5.3        Cyclic Barrier

With a cyclic barrier, threads arrive at the barrier and wait there until all
threads have reached it. Once all threads reach the barrier, they are
released for further processing. Optionally, a barrier action can be run
before the threads are released.



package com.concur.cyclic;

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class Barrier {

    final int num_threads;
    final CyclicBarrier cb;
    volatile boolean complete = false;   // set by the barrier action when all work is done

    public Barrier(int n) {
        num_threads = n;
        cb = new CyclicBarrier(num_threads, new Runnable() {
            @Override
            public void run() {
                // barrier action: runs once per generation, before the threads are released
                System.out.println("All threads reached barrier");
                //check if processing is finished and set complete = true
            }
        });
    }

    public void process() throws InterruptedException, BrokenBarrierException {
        while (!complete) {
            //process
            cb.await();   // wait for the other threads; the barrier resets after release
            //exits if processing completed, else loops
        }
    }
}




1.5.4        Countdown Latch

A countdown latch is similar to a cyclic barrier but differs in the way the
threads are released. With a cyclic barrier, threads are released
automatically when all threads reach the barrier. With a countdown latch
initialized to N, threads are released when countDown() has been called N
times. Any call to await() blocks the thread while the count is non-zero.
A countdown latch cannot be reused: once the count reaches 0, all calls to
await() return immediately.

package com.concur.countdown;

import java.util.concurrent.CountDownLatch;

public class Latch {

    private class Worker implements Runnable {

        final CountDownLatch start, done;

        public Worker(CountDownLatch start, CountDownLatch done) {
            this.start = start;
            this.done = done;
        }

        @Override
        public void run() {
            try {
                start.await();      // wait for the start signal
                // do process
                done.countDown();   // report completion
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Latch l = new Latch();
        CountDownLatch start = new CountDownLatch(1);   // released once by the main thread
        CountDownLatch done = new CountDownLatch(10);   // counted down once by each worker

        for (int i = 0; i < 10; i++) {
            new Thread(l.new Worker(start, done)).start();
        }

        try {
            //do something
            start.countDown();   // release all workers at once

            //do something

            done.await();        // wait until all workers have finished
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}


1.6 Executor Framework

The Executor framework provides an API to create thread pools and submit
tasks to be executed by them.

The Executor interface has only one method, execute, that takes a Runnable
object.

Executor thread pools can be created by calling factory methods on Executors:

Executors.newCachedThreadPool(): If an idle thread is available, it will be
reused, else a new thread will be created. Threads not used for 60 seconds
are removed from the cache.

Executors.newFixedThreadPool(n): n threads are created and added to the pool.
Tasks are stored in an unbounded queue and pool threads pick up tasks from
the queue. If a thread terminates due to a failure, a new thread is created
and added to the pool.

Executors.newSingleThreadExecutor(): A pool with a single thread.
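A short sketch creating each kind of pool. The PoolFactories class and the
printed message are placeholders; ExecutorService (a sub-interface of
Executor) is used here only so that shutdown() lets the JVM exit.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolFactories {

    public static void main(String[] args) {
        Runnable task = new Runnable() {
            @Override
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        };

        ExecutorService cached = Executors.newCachedThreadPool();     // grows on demand, reuses idle threads
        ExecutorService fixed = Executors.newFixedThreadPool(4);      // exactly 4 worker threads
        ExecutorService single = Executors.newSingleThreadExecutor(); // one worker, tasks run sequentially

        cached.execute(task);
        fixed.execute(task);
        single.execute(task);

        cached.shutdown();   // allow the JVM to exit once queued tasks finish
        fixed.shutdown();
        single.shutdown();
    }
}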




Executor Usage:


package com.concur.executor;

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public class WebServer {

    Executor pool = Executors.newFixedThreadPool(50);

    public static void main(String[] args) throws IOException {
        WebServer ws = new WebServer();
        ServerSocket ssocket = new ServerSocket(80);
        while (true) {
            final Socket soc = ssocket.accept();   // final so the anonymous Runnable can use it
            Runnable r = new Runnable() {
                @Override
                public void run() {
                    handle(soc);
                }
            };
            ws.pool.execute(r);   // a pool thread handles the connection
        }
    }

    private static void handle(Socket soc) {
        // read the request and write the response
    }
}

1.7       Future

A Future represents a task and serves as a wrapper for it. The task may not
have started execution, may be currently executing, or may have completed.
The result of the task is obtained by calling future.get(), which returns
immediately if the task has completed and otherwise blocks until the task
completes.

FutureTask is an implementation of the Future interface. It also implements
the Runnable interface, which allows the task to be submitted via
executor.execute(Runnable r), as sketched below.
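A minimal sketch of that flow; the FutureTaskDemo class and the placeholder
computation are illustrative only.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        FutureTask<Integer> task = new FutureTask<Integer>(new Callable<Integer>() {
            @Override
            public Integer call() {
                return 6 * 7;            // placeholder for a long computation
            }
        });

        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(task);              // FutureTask is a Runnable, so execute() accepts it

        System.out.println(task.get());  // blocks until the result (42) is ready
        pool.shutdown();
    }
}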

The code snippet below shows how the FutureTask class can be used to
implement a thread-safe cache.

package com.concur.cache;

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class SimpleCache<K, V> {

    private final ConcurrentMap<K, FutureTask<V>> cache =
            new ConcurrentHashMap<K, FutureTask<V>>();
    final ExecutorService pool = Executors.newFixedThreadPool(10);

    public V get(final K key) throws InterruptedException, ExecutionException {
        FutureTask<V> val = cache.get(key);
        if (val == null) {
            Callable<V> c = new Callable<V>() {
                @Override
                public V call() {
                    System.out.println("Cache Miss");
                    // placeholder computation (unchecked cast for demo purposes);
                    // a real cache would compute the value for the key here
                    return (V) new Integer(key.hashCode());
                }
            };
            val = new FutureTask<V>(c);
            FutureTask<V> oldVal = cache.putIfAbsent(key, val);
            if (oldVal == null) {
                // this thread won the race: execute the future task to compute the cached value
                pool.execute(val);
            } else {
                // another thread won the race to store its future task; use that one instead
                val = oldVal;
            }
        } else {
            System.out.println("Cache Hit");
        }
        return val.get();   // blocks until the value has been computed
    }

    public static void main(String[] args) {
        SimpleCache<String, Integer> sc = new SimpleCache<String, Integer>();
        try {
            System.out.println(sc.get("Hello"));
            System.out.println(sc.get("Hello"));
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        } finally {
            sc.pool.shutdown();   // let the JVM exit
        }
    }
}
