Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the Thread-Per-Message design pattern, wherein a new thread is created for each request \[[Lea 2000|AA. Bibliography#Lea 00]\]. This pattern is generally preferred over sequential execution of time-consuming, I/O-bound, session-based, or isolated tasks.
However, this pattern introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching \[[Lea 2000|AA. Bibliography#Lea 00]\]. Furthermore, an attacker can cause a denial of service by overwhelming the system with too many requests at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.
Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number it can comfortably serve, rather than terminating all service when presented with a deluge of requests. They overcome these issues by controlling the maximum number of worker threads that can execute concurrently. Each object that supports thread pools accepts a {{Runnable}} or {{Callable<T>}} task and stores it in a temporary queue until resources become available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.
Programs that use multiple threads to serve requests should — and security-sensitive programs must — use thread pools to enable graceful degradation of service during traffic bursts.
Noncompliant Code Example
...
The Thread-Per-Message strategy fails to provide graceful degradation of service. As more threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors, even though additional threads can be created to serve requests. When the scarce resource is memory, the system may fail abruptly, resulting in a denial of service.
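The noncompliant code itself is elided above; the following is only a minimal sketch of the Thread-Per-Message approach described here, with the class and method names ({{RequestHandler}}, {{handleRequest()}}) chosen for illustration rather than taken from the rule's actual example.

{code:java}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch of Thread-Per-Message request handling (noncompliant approach)
final class RequestHandler {
  private final ServerSocket server;

  RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public void handleRequest() {
    // A new thread is created for every request; under a traffic burst,
    // thread creation continues until some scarce resource is exhausted.
    new Thread(new Runnable() {
      @Override public void run() {
        try {
          Socket socket = server.accept();
          // Process the request on its own thread ...
          socket.close();
        } catch (IOException e) {
          // Forward to exception reporter
        }
      }
    }).start();
  }
}
{code}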
...
This compliant solution uses a fixed thread pool that places an upper bound on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients \[[Tutorials 2008|AA. Bibliography#Tutorials 08]\].
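The compliant code itself is elided above; as a rough sketch of the idea (again with illustrative names such as {{RequestHandler}} and a pool size of 10 chosen arbitrarily), the same server might delegate requests to a fixed-size pool as follows.

{code:java}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: requests are serviced by a bounded pool of worker threads
final class RequestHandler {
  private static final int POOL_SIZE = 10; // arbitrary upper bound for this sketch
  private final ExecutorService pool = Executors.newFixedThreadPool(POOL_SIZE);
  private final ServerSocket server;

  RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public void handleRequest() {
    // Excess tasks wait in the pool's internal queue instead of spawning new threads
    pool.submit(new Runnable() {
      @Override public void run() {
        try {
          Socket socket = server.accept();
          // Process the request ...
          socket.close();
        } catch (IOException e) {
          // Forward to exception reporter
        }
      }
    });
  }

  public void shutdown() {
    pool.shutdown(); // stop accepting new tasks; let queued tasks finish
  }
}
{code}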
...
The {{ExecutorService}} interface used in this compliant solution derives from the {{java.util.concurrent.Executor}} interface. The {{ExecutorService.submit()}} method allows callers to obtain a {{Future<V>}} object, which both encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
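As an illustration (not part of the rule's own example), a caller might use the {{Future<V>}} returned by {{submit()}} along the following lines; the {{Callable}} body and the one-second timeout are arbitrary.

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      // submit() returns a Future representing the pending result of the task
      Future<Integer> result = pool.submit(new Callable<Integer>() {
        @Override public Integer call() {
          return 6 * 7; // placeholder computation
        }
      });

      try {
        // Block for up to one second waiting for the asynchronous result
        System.out.println(result.get(1, TimeUnit.SECONDS));
      } catch (TimeoutException e) {
        result.cancel(true); // the Future also supports cancellation
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}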
The choice of the unbounded {{newFixedThreadPool}} may not always be optimal. Refer to the Java API documentation for guidance on choosing between the following to meet specific design requirements \[[API 2006|AA. Bibliography#API 06]\]:
...
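For orientation only (the authoritative descriptions are in the API documentation cited above), the {{Executors}} factory methods differ roughly as noted in the following sketch.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class PoolChoices {
  public static void main(String[] args) {
    // Fixed number of worker threads; excess tasks wait in an unbounded queue
    ExecutorService fixed = Executors.newFixedThreadPool(4);

    // Creates threads on demand and reuses idle ones; suited to many short-lived tasks
    ExecutorService cached = Executors.newCachedThreadPool();

    // A single worker thread; submitted tasks execute sequentially
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Supports delayed and periodic task execution
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}
{code}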
Rule | Severity | Likelihood | Remediation Cost | Priority | Level |
---|---|---|---|---|---|
TPS00-J | Low | Probable | High | P2 | L3 |
...
TODO
Related Vulnerabilities
...
[[API 2006|AA. Bibliography#API 06]] | [Interface Executor|http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executor.html] |
[[Lea 2000|AA. Bibliography#Lea 00]] | Sections 4.1.3, Thread-Per-Message, and 4.1.4, Worker Threads |
[[Tutorials 2008|AA. Bibliography#Tutorials 08]] | [Thread Pools|http://java.sun.com/docs/books/tutorial/essential/concurrency/pools.html] |
[[Goetz 2006|AA. Bibliography#Goetz 06]] | Chapter 8, Applying Thread Pools |
...