The Thread-Per-Message design is the simplest concurrency technique, wherein a thread is created for each incoming request. The benefits of creating a new thread to handle each request should outweigh the corresponding thread creation overhead. This design is generally recommended over sequential execution for time-consuming, I/O-bound, session-based, or isolated tasks.

On the other hand, this design can have several disadvantages, such as thread creation overhead in the case of frequent or recurring requests, significant request-processing overhead, resource exhaustion from too many threads (leading to {{OutOfMemoryError}}), and thread scheduling and context switching overhead \[[Lea 00|AA. Java References#Lea 00]\].

Thread Pools overcome these disadvantages because the maximum number of worker threads that can be initiated and executed simultaneously can be suitably controlled. Every worker accepts a {{Runnable}} object from a request and stores it in a temporary Channel, such as a buffer or a queue, until resources become available. Because threads are reused and tasks can be efficiently added to the Channel, most of the thread creation overhead is eliminated.

...

Code Block
bgColor#FFCCCC
class Helper {
  public void handle(String request) {
    // ...
  }
}

class GetRequest {
  protected final Helper h = new Helper();
  String request;

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data; block until it is available
    return data;
  }

  public void request() {
    while(true) {
      request = accept();
      new Thread(new Runnable() {
        public void run() {
          h.handle(request);
        }
      }).start();
    }
  }
}

...

According to the Java API \[[API 06|AA. Java References#API 06]\] documentation for the \[[Executor|http://java.sun.com/javase/6/docs/api/java/util/concurrent/Executor.html]\] interface:

\[The Interface {{Executor}} is\] An object that executes submitted {{Runnable}} tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An {{Executor}} is normally used instead of explicitly creating threads.

Code Block
bgColor#ccccff
class GetRequest {
  protected final Helper h = new Helper();
  String request;

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data; block until it is available
    return data;
  }

  public void request() {
    final int numberOfThreads = 200;
    Executor exec = Executors.newFixedThreadPool(numberOfThreads);
    while(true) {
      request = accept();
      exec.execute(new Runnable() {
        public void run() {
          h.handle(request);
        }
      });
    }
  }
}

...

In reality, there are some problems associated with the use of the {{Executor}} interface. For one, tasks that depend on other tasks should not execute in the same Thread Pool. A task that submits another task to a single-threaded {{Executor}} remains blocked until the result is received, while the second task waits for the first one to conclude. This constitutes a deadlock.
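
The following is a minimal illustration of that deadlock; it is not part of this guideline's examples, and the class and task names are hypothetical. The outer task occupies the only pool thread while blocking on the result of an inner task that can never be scheduled.

Code Block
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class StarvationDeadlock {
  private static final ExecutorService exec = Executors.newSingleThreadExecutor();

  static class OuterTask implements Callable<String> {
    public String call() throws Exception {
      // The only pool thread is running this task, so the subtask stays queued
      Future<String> inner = exec.submit(new InnerTask());
      return inner.get(); // Blocks forever waiting for a result that is never produced
    }
  }

  static class InnerTask implements Callable<String> {
    public String call() {
      return "done";
    }
  }

  public static void main(String[] args) {
    exec.submit(new OuterTask()); // The pool hangs; shutdown is never reached
  }
}

A larger pool would mask this particular hang, but the dependency between the two tasks remains and can still exhaust the pool under load.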

...

Code Block
bgColor#FFCCCC
class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }
 
  public void run() {
    try { 
      // Interdependent tasks
      pool.submit(new SanitizeInput(password));  // password is defined in class InitialHandshake
      pool.submit(new CustomHandshake(password));  // E.g., client puzzles
      pool.execute(new Handle(serverSocket.accept()));  // Handle the connection
    } catch (IOException ex) { 
      pool.shutdown();
    }	 
  }
}

Compliant Solution

Always try to submit independent tasks to the Executor. Choosing a large pool size can also help reduce thread starvation problems. Note that any operation that has further constraints, such as the total number of database connections or the total number of {{ResultSet}}s open at a particular time, imposes an upper bound on the Thread Pool size, because each thread continues to block until the resource becomes available. The other rules of fair concurrency, such as not running response-sensitive tasks, also apply.
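
As a sketch only (the numeric limits and class name below are illustrative and not taken from this guideline), such an external constraint can be applied directly when sizing the pool:

Code Block
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedPoolSizing {
  // Hypothetical limits; the real bound comes from the deployment's resources
  private static final int MAX_DB_CONNECTIONS = 25;
  private static final int DESIRED_THREADS = 200;

  static ExecutorService newWorkerPool() {
    // A worker beyond the connection limit can only block waiting for a connection,
    // so the resource constraint caps the useful Thread Pool size
    int poolSize = Math.min(DESIRED_THREADS, MAX_DB_CONNECTIONS);
    return Executors.newFixedThreadPool(poolSize);
  }
}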

Sometimes, a {{private static}} {{ThreadLocal}} variable is used to maintain per-thread local state. With Thread Pools, such variables should be employed only if their lifetime is shorter than that of the corresponding task \[[Goetz 06|AA. Java References#Goetz 06]\]. Moreover, they should not be used as a communication mechanism between tasks. Finally, {{newFixedThreadPool}}, which queues pending tasks in an unbounded work queue, may not always be the best choice. Refer to the API documentation when choosing among {{newFixedThreadPool}}, {{newCachedThreadPool}}, {{newSingleThreadExecutor}}, and {{newScheduledThreadPool}} to suit the design requirements.

This compliant solution executes the interdependent tasks as a single combined task within the {{Executor}}. In other cases, where the subtasks do not require concurrency safeguards, the subtasks can be moved outside the threaded region that the {{Executor}} executes.

Code Block
bgColor#ccccff
class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }
 
  public void run() {
    try {
      // Interdependent subtasks are executed as a single combined task
      pool.execute(new Handle(serverSocket.accept())); // Handle the connection
    } catch (IOException ex) { 
      pool.shutdown();
    }	 
  }
}
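
As a sketch of what the combined task might look like (this guideline does not define {{Handle}}; here {{SanitizeInput}} and {{CustomHandshake}} are assumed to be {{Runnable}}, and {{password}} is assumed to be inherited from {{InitialHandshake}} as in the noncompliant example):

Code Block
import java.net.Socket;

class Handle extends InitialHandshake implements Runnable {
  private final Socket socket;

  public Handle(Socket socket) {
    this.socket = socket;
  }

  public void run() {
    // The formerly interdependent subtasks run sequentially on one pool thread,
    // so neither of them ever waits for a pool slot held by the other
    new SanitizeInput(password).run();
    new CustomHandshake(password).run();
    // ... proceed to service the connection on socket ...
  }
}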

Risk Assessment

Using simplistic concurrency primitives (and often using them incorrectly) may lead to severe performance degradation, deadlocks, starvation, or the exhaustion of system resources, resulting in a denial-of-service condition.

...