Many programs must address the problem of handling a series of incoming requests. The Thread-Per-Message design pattern is the simplest concurrency strategy wherein a new thread is created for each request \[[Lea 00|AA. Java References#Lea 00]\]. This design pattern is generally preferred over sequential execution of time-consuming, I/O-bound, session-based, or isolated tasks.
However, this design pattern also has several pitfalls, including the overhead of thread creation and scheduling, task processing, resource allocation and deallocation, and frequent context switching \[[Lea 00|AA. Java References#Lea 00]\]. Furthermore, an attacker can cause a denial of service by overwhelming the system with too many requests at once; instead of degrading gracefully, the system becomes unresponsive. From a safety perspective, one component can exhaust all resources because of an intermittent error, starving all other components.
Thread pools allow a system to service as many requests as it can comfortably sustain rather than terminating all services when presented with a deluge of requests. They overcome these issues by controlling the maximum number of worker threads that can be initialized and executed concurrently. Every object that supports thread pools accepts a {{Runnable}} or {{Callable<T>}} task and stores it in a temporary queue until resources become available. Because the threads in a thread pool can be reused and efficiently added to or removed from the pool, thread life-cycle management overhead is minimized.
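As an illustrative aside (not part of the original rule), the following minimal sketch shows a bounded pool accepting {{Callable}} tasks, queuing them until a worker thread is free, and exposing each pending result through a {{Future}}. The class name and the squaring task are hypothetical placeholders.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class PoolSketch {
  public static void main(String[] args) throws Exception {
    // Bounded pool: at most 4 tasks run concurrently; the rest wait in the queue
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<Integer>> results = new ArrayList<Future<Integer>>();
    for (int i = 0; i < 10; i++) {
      final int request = i;
      results.add(pool.submit(new Callable<Integer>() {
        public Integer call() {
          return request * request; // stand-in for real request handling
        }
      }));
    }
    for (Future<Integer> f : results) {
      System.out.println(f.get()); // blocks until the corresponding task completes
    }
    pool.shutdown(); // no new tasks accepted; queued tasks run to completion
  }
}
{code}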
Noncompliant Code Example
This noncompliant code example demonstrates the Thread-Per-Message design pattern. The class {{RequestHandler}} provides a public static factory method so that callers can obtain its instance. The {{handleRequest()}} method is subsequently invoked to handle each request in its own thread.
{code}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class Helper {
  public void handle(Socket socket) {
    // ...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }

  // ... other methods such as shutting down the thread pool and task cancellation ...
}
{code}
The Thread-Per-Message strategy fails to provide graceful degradation of service. As more threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors even though several more threads can be created to service requests. When the scarce resource is memory, the system may fail abruptly, resulting in a denial of service.
Compliant Solution
This compliant solution uses a _fixed thread pool_ that places an upper bound on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when trying to respond to all incoming requests and allows it to degrade gracefully by serving a fixed number of clients at a particular time \[[Tutorials 08|AA. Java References#Tutorials 08]\].
{code:bgColor=#ccccff}
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize) throws IOException {
    return new RequestHandler(0, poolSize); // Selects next available port
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }
}
{code}
According to the Java API documentation for the {{Executor}} interface \[[API 06|AA. Java References#API 06]\]:

\[The interface {{Executor}} is\] An object that executes submitted {{Runnable}} tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An {{Executor}} is normally used instead of explicitly creating threads.
...
The ExecutorService
interface used in this compliant solution derives from the java.util.concurrent.Executor
...
interface.
...
The
...
ExecutorService.submit()
...
method
...
allows
...
callers
...
to
...
obtain
...
a
...
Future<V>
...
object.
...
This
...
object
...
encapuslates
...
the
...
as-yet-unknown
...
result
...
of
...
an
...
asynchronous
...
computation,
...
and
...
enables
...
callers
...
to
...
perform
...
additional
...
functions
...
such
...
as
...
task
...
cancellation.
...
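As a brief illustration (not from the original rule; the sleeping task and the one-second timeout below are arbitrary placeholders), a caller might use the returned {{Future}} to wait for a result with a timeout and cancel the task if it takes too long:

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class FutureCancellationSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newFixedThreadPool(2);
    Future<String> future = exec.submit(new Callable<String>() {
      public String call() throws InterruptedException {
        Thread.sleep(5000); // simulates a slow request
        return "done";
      }
    });
    try {
      System.out.println(future.get(1, TimeUnit.SECONDS)); // wait at most one second
    } catch (TimeoutException e) {
      future.cancel(true); // interrupt the task if it is still running
    } finally {
      exec.shutdown();
    }
  }
}
{code}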
The choice of {{newFixedThreadPool}}, which bounds the number of worker threads but queues excess tasks in an unbounded queue, is not always optimal. Refer to the API documentation for choosing between {{newFixedThreadPool()}}, {{newCachedThreadPool()}}, {{newSingleThreadExecutor()}}, and {{newScheduledThreadPool()}} to meet specific design requirements \[[API 06|AA. Java References#API 06]\].
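As a rough, hedged sketch of those alternatives (the pool sizes and the scheduling delay below are arbitrary example values, not recommendations):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class ExecutorChoices {
  public static void main(String[] args) {
    // Fixed number of threads; excess tasks wait in an unbounded queue
    ExecutorService fixed = Executors.newFixedThreadPool(4);

    // Creates threads on demand and reuses idle ones; no upper bound on thread count
    ExecutorService cached = Executors.newCachedThreadPool();

    // Single worker thread; tasks execute sequentially in submission order
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Supports delayed and periodic execution
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);
    scheduled.schedule(new Runnable() {
      public void run() {
        System.out.println("delayed task");
      }
    }, 1, TimeUnit.SECONDS);

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown(); // previously scheduled delayed tasks still run by default
  }
}
{code}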
Risk Assessment
Using simplistic concurrency primitives to process an unbounded number of requests may result in severe performance degradation, deadlock, exhaustion of system resources, and denial of service.
Rule | Severity | Likelihood | Remediation Cost | Priority | Level |
---|---|---|---|---|---|
CON21-J | low | probable | high | P2 | L3 |
Automated Detection
TODO
Related Vulnerabilities
References
\[[API 06|AA. Java References#API 06]\] [Interface Executor|http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executor.html]
\[[Lea 00|AA. Java References#Lea 00]\] Section 4.1.3 Thread-Per-Message and 4.1.4 Worker Threads
\[[Tutorials 08|AA. Java References#Tutorials 08]\] [Thread Pools|http://java.sun.com/docs/books/tutorial/essential/concurrency/pools.html]
\[[Goetz 06|AA. Java References#Goetz 06]\] Chapter 8, Applying Thread Pools
\[[MITRE 09|AA. Java References#MITRE 09]\] [CWE ID 405|http://cwe.mitre.org/data/definitions/405.html] "Asymmetric Resource Consumption (Amplification)", [CWE ID 410|http://cwe.mitre.org/data/definitions/410.html] "Insufficient Resource Pool"