
Software vulnerability reports and reports of software exploitations continue to grow at an alarming rate, and a significant number of these reports result in technical security alerts. To address this growing threat to the government, corporations, educational institutions, and individuals, systems must be developed that are free of software vulnerabilities.

Coding errors cause the majority of software vulnerabilities. For example, 64 percent of the nearly 2,500 vulnerabilities in the National Vulnerability Database in 2004 were caused by programming errors [[Heffley 2004]].

Java is a relatively secure language: there is no explicit pointer manipulation; array and string
bounds are automatically checked; attempts at referencing a null pointer are trapped; the
arithmetic operations are well defined and platform independent, as are the type conversions.
The built-in bytecode verifier ensures that these checks are always in place.

Moreover, there are comprehensive, fine-grained security mechanisms available in Java that
can control access to individual files, sockets, and other sensitive resources. To take
advantage of the security mechanisms, the Java Virtual Machine (JVM) must have a
security manager in place. This is an ordinary Java object of class java.lang.SecurityManager (or a subclass) that can be put in place programmatically but is more usually specified via a command line parameter.

There are, however, ways in which Java program safety can be compromised. The remainder of this chapter describes misuse cases under which Java programs might be exploited, along with examples of guidelines that mitigate these attacks. Not all of the rules apply to all Java programs; their applicability frequently depends on how the software is deployed and on your assumptions concerning trust.

The Myth of Trust

Software programs often contain multiple components that act as subsystems, where each component operates in one or more trusted domains. For example, one component may have access to the file system but lack access to the network, while another component has access to the network but lacks access to the file system. Distrustful decomposition and privilege separation [[Dougherty 2009]] are examples of secure design patterns that recommend reducing the amount of code that runs with special privileges by designing the system using mutually untrusting components.

When components with differing degrees of trust share data, the data are said to flow across a trust boundary. Because Java allows components under different trusted domains to communicate with each other, data can be transmitted across a trust boundary. Furthermore, a Java program can contain both internally developed and third-party code. Data that are transmitted to or accepted from third-party code also flow across a trust boundary.

While software components can obey policies that allow them to transmit data across trust boundaries, they cannot specify the level of trust given to any component. The deployer of the application must define the trust boundaries with the help of a system-wide security policy. A security auditor can use that definition to determine whether the software adequately supports the security objectives of the application.

Third-party code should operate in its own trusted domain; any code potentially exported to a third-party — such as libraries — should be deployable in well-defined trusted domains. The public API of the potentially-exported code can be considered to be a trust boundary. Data flowing across a trust boundary should be validated when the publisher lacks guarantees of validation. A subscriber or client may omit validation when the data flowing into its trust boundary is appropriate for use as is. In all other cases, inbound data must be validated.

Injection Attacks

Data received by a component from a source outside the component's trust boundary may be malicious. Consequently, the program must take steps to ensure that the data are both genuine and appropriate.

These steps can include the following:

Validation: Validation is the process of ensuring that input data fall within the expected domain of valid program input. For example, method arguments must conform not only to the type and numeric range requirements of a method or subsystem but also to the required input invariants for that method.
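
For example, a method that expects a calendar month can validate both the numeric type and the range of its argument. The class name and range below are illustrative:

```java
public final class MonthValidator {
    private MonthValidator() {}

    // Validate that input falls within the expected domain:
    // an integer month in the range 1-12.
    public static int parseMonth(String input) {
        // Integer.parseInt rejects non-numeric input with NumberFormatException
        int month = Integer.parseInt(input.trim());
        if (month < 1 || month > 12) {
            throw new IllegalArgumentException("month out of range: " + month);
        }
        return month;
    }
}
```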

Sanitization: In many cases, data is passed directly to a component in a different trusted domain. Data sanitization is the process of ensuring that data conform to the requirements of the subsystem to which they are passed. Sanitization also involves ensuring that data conform to security-related requirements regarding the leaking or exposure of sensitive data when output across a trust boundary. Sanitization may include the elimination of unwanted characters from the input by means of removal, replacement, encoding, or escaping. Sanitization may occur following input (input sanitization) or before the data is passed across a trust boundary (output sanitization). Data sanitization and input validation may coexist and complement each other. Refer to the related guideline IDS01-J. Sanitize data passed across a trust boundary for more details on data sanitization.
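
A minimal whitelist-based sanitizer might replace every character outside an approved set before the data crosses a trust boundary. The class name and the allowed character set below are illustrative:

```java
public final class Sanitizer {
    private Sanitizer() {}

    // Whitelist sanitization: any character outside the alphanumeric set
    // (plus underscore) is replaced before the string is passed on.
    public static String sanitize(String input) {
        return input.replaceAll("[^A-Za-z0-9_]", "_");
    }
}
```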

Canonicalization and Normalization: Canonicalization is the process of losslessly reducing the input to its equivalent simplest known form. Normalization is the process of lossy conversion of input data to the simplest known (and anticipated) form. Canonicalization and normalization must occur before validation to prevent attackers from exploiting the validation routine to strip away illegal characters and thus construct a forbidden (and potentially malicious) character sequence. Refer to the guideline IDS02-J. Normalize strings before validating them for more details. In addition, ensure that normalization is performed only on fully assembled user input. Never normalize partial input or combine normalized input with non-normalized input.
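
As a sketch of why ordering matters, the hypothetical validator below normalizes with NFKC before checking for dangerous characters. Without the normalization step, a compatibility character such as U+FE64 (SMALL LESS-THAN SIGN) would pass a check for '<' and could later be normalized into a genuine angle bracket:

```java
import java.text.Normalizer;

public final class NormalizeFirst {
    private NormalizeFirst() {}

    // Normalize the fully assembled input (NFKC) before validating it,
    // so compatibility variants of '<' and '>' cannot evade the check.
    public static boolean isValid(String input) {
        String normalized = Normalizer.normalize(input, Normalizer.Form.NFKC);
        return !normalized.matches(".*[<>].*");
    }
}
```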

For example, POSIX file systems provide a syntax for expressing file names on the system using paths. A path is a string that indicates how to find any file by starting at a particular directory (usually the current working directory) and traversing down directories until the file is found. Canonical paths lack both symbolic links and special entries such as '.' or '..', which are handled specially on POSIX systems. Each file accessible from a directory has exactly one canonical path, along with many non-canonical paths.
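
In Java, File.getCanonicalPath() performs this reduction. A small illustrative wrapper:

```java
import java.io.File;
import java.io.IOException;

public final class CanonicalDemo {
    private CanonicalDemo() {}

    // getCanonicalPath collapses "." and ".." entries and resolves
    // symbolic links, yielding the file's single canonical path.
    public static String canonicalize(String path) throws IOException {
        return new File(path).getCanonicalPath();
    }
}
```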

In particular, complex subsystems are often components that accept string data that specifies commands or instructions to the component. String data passed to these components may contain special characters that can trigger commands or actions, resulting in a software vulnerability.

Examples of components that can interpret commands or instructions include command interpreters (shells), SQL databases, XML parsers, and web browsers.

Many rules address proper filtering of untrusted input, especially when such input is passed to a component that can interpret commands or instructions.

When data must be sent to a component in a different trusted domain, the sender must ensure that the data is suitable for the receiver's trust boundary by properly encoding and escaping any data flowing across the trust boundary. For example, if a system is infiltrated by malicious code or data, many attacks are rendered ineffective if the system's output is appropriately escaped and encoded.
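
For example, a minimal HTML escaper (the class and method names are illustrative) encodes the metacharacters so that untrusted data emitted into an HTML context is rendered as text rather than interpreted as markup:

```java
public final class HtmlEscaper {
    private HtmlEscaper() {}

    // Replace each HTML metacharacter with its character-entity
    // reference so the receiver treats the data as plain text.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```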

Capabilities

A capability is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights. A user program on a capability-based operating system must use a capability to access an object [[Wikipedia 2011]].

The term capability was introduced by Dennis and Van Horn [[Dennis 1966]]. The basic idea is that for a program to access an object it must have a special token. This token designates an object and gives the program the authority to perform a specific set of actions (such as reading or writing) on that object. Such a token is known as a capability.

In an object-capability language, all program state is contained in objects that cannot be read or written without a reference, which serves as an unforgeable capability. All external resources are also represented as objects. Objects encapsulate their internal state, providing reference holders access only through prescribed interfaces [[Mettler 2010A]].

Because of Java’s == operator, which tests pointer equality, every object has an unforgeable identity in addition to its contents. Identity tests mean that any object can be used as a token, serving as an unforgeable proof of authorization to perform some action [[Mettler 2010B]].
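
A sketch of this idea, with illustrative names: a Vault object grants access only to callers that present the identical token object, checked with ==. Because no attacker can construct an object with the same identity, the reference itself is the capability:

```java
public final class Vault {
    private final Object capability;
    private final String contents;

    public Vault(Object capability, String contents) {
        this.capability = capability;
        this.contents = contents;
    }

    // Only a holder of the exact token reference passes the identity
    // (==) test; an equal-looking but distinct object is rejected.
    public String open(Object presented) {
        if (presented != capability) {
            throw new SecurityException("invalid capability");
        }
        return contents;
    }
}
```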

Authority is embodied by object references, which serve as capabilities. Authority refers to any effects that running code can have other than to perform side-effect-free computations. Authority includes not only effects on external resources such as files or network sockets, but also on mutable data structures that are shared with other parts of the program [[Mettler 2010B]].

Rules that involve capabilities include:

Leaking Sensitive Data

A system's security policy determines which information is sensitive. Sensitive data may include user information such as social security or credit card numbers, passwords, or private keys.

Java software components provide many opportunities to output sensitive information. Rules that address the mitigation of sensitive information disclosure include:

Resource Exhaustion

Denial of service can occur when resource usage is disproportionately large in comparison to the input data that causes the resource usage.

This guideline is of greater concern for persistent, server-type systems than for desktop applications. Checking inputs for excessive resource consumption may be unjustified for client software that expects the user to handle resource-related problems. Even client software, however, should check for inputs that could cause persistent denial of service, such as filling up the file system.

The Secure Coding Guidelines for the Java Programming Language SCG 2009 lists some examples of possible attacks:

  • Requesting a large image size for vector graphics, for instance, SVG and font files.
  • "Zip bombs" whereby a short file is very highly compressed, for instance, ZIPs, GIFs and gzip encoded HTTP content.
  • "Billion laughs attack" whereby XML entity expansion causes an XML document to grow dramatically during parsing. Set the XMLConstants.FEATURE_SECURE_PROCESSING feature to enforce reasonable limits.
  • Using excessive disc space.
  • Inserting many keys with the same hash code into a hash table, consequently triggering worst-case performance (O(n²)) rather than typical-case performance (O(n)).
  • Initiating many connections where the server allocates significant resources for each, for instance, the traditional "SYN flood" attack.
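
The hash-collision attack in the list above can be illustrated with strings: "Aa" and "BB" have identical String.hashCode values, so all concatenations of these two blocks collide as well. The class below is an illustrative sketch:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class HashCollisions {
    private HashCollisions() {}

    // "Aa" and "BB" hash identically under String.hashCode, so the 2^n
    // strings built from n such blocks all land in the same bucket,
    // driving a HashMap toward its worst-case behavior.
    public static List<String> colliding(int n) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < n; i++) {
            List<String> next = new ArrayList<>();
            for (String s : keys) {
                next.add(s + "Aa");
                next.add(s + "BB");
            }
            keys = next;
        }
        return keys;
    }

    // Count how many distinct hash codes the generated keys occupy.
    public static int distinctHashes(List<String> keys) {
        Set<Integer> hashes = new HashSet<>();
        for (String k : keys) {
            hashes.add(k.hashCode());
        }
        return hashes.size();
    }
}
```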

Rules for preventing denial of service attacks resulting from resource exhaustion include:

Type Safety

Java is believed to be a type-safe language LSOD 02, Sec. 5.1. For that reason, it should not be
possible to compromise a Java program by misusing the type system. To see why type safety
is so important, consider the following types:

public class TowerOfLondon {
  private Treasure theCrownJewels;
  ...
}

public class GarageSale {
  public Treasure myCostumeJewelry;
  ...
}

If these two types could be confused, it would be possible to access the private field
theCrownJewels as if it were the public field myCostumeJewelry. More generally, a “type
confusion attack” could allow Java security to be compromised by making the internals of the
security manager open to abuse. A team of researchers at Princeton University showed that
any type confusion in Java could be used to completely overcome Java’s security
mechanisms (see Securing Java Ch. 5, Sec. 7 McGraw 99).

Java’s type safety means that fields that are declared private or protected or that have
default (package) protection should not be globally accessible. However, there are a number
of vulnerabilities “built in” to Java that enable this protection to be overcome. These should
come as no surprise to the Java expert, as they are well documented, but they may trap the
unwary.

Public Fields

A field that is declared public may be directly accessed by any part of a Java program and
may be modified from anywhere in a Java program (unless the field is declared final).
Clearly, sensitive information must not be stored in a public field, as it could be
compromised by anyone who could access the JVM running the program.

Inner Classes

Inner classes have access to all the fields of their surrounding class. There is no bytecode
support for inner classes, so they are compiled into ordinary classes with names like
OuterClass$InnerClass. So that the inner class can access the private fields of the
outer class, the private access is changed to package access in the bytecode. For that reason, handcrafted bytecode can access these private fields (see “Security Aspects in Java Bytecode
Engineering” Schönefeld 02 for an example).
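
The compiled naming convention can be demonstrated directly. Note that the exact bytecode strategy varies by compiler version: older versions of javac emitted synthetic package-access accessors as described above, while Java 11 and later use nest-based access control instead. The class below is an illustrative sketch:

```java
public class Outer {
    private int secret = 42;

    public class Inner {
        // Reads the private field of the enclosing class; the compiler
        // arranges the access (via a synthetic accessor or, on Java 11+,
        // nest-based access control).
        public int peek() {
            return secret;
        }
    }

    // Inner classes compile to ordinary classes named Outer$Inner.
    public static String compiledInnerName() {
        return Inner.class.getName();
    }
}
```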

Serialization

Serialization enables the state of a Java program to be captured and written out to a byte
stream Sun 04b. This allows for the state to be preserved so that it can be reinstated (by
deserialization). Serialization also allows for Java method calls to be transmitted over a
network for Remote Method Invocation (RMI). An object (called someObject below) can
be serialized as follows:

ObjectOutputStream oos = new ObjectOutputStream(
    new FileOutputStream("SerialOutput"));
oos.writeObject(someObject);
oos.flush();

The object can be deserialized as follows:

ObjectInputStream ois = new ObjectInputStream(
    new FileInputStream("SerialOutput"));
someObject = (SomeClass) ois.readObject();

Serialization captures all the fields of a class, provided the class implements the
Serializable interface, including the non-public fields that are not normally accessible
(unless the field is declared transient). If the byte stream to which the serialized values
are written is readable, then the values of the normally inaccessible fields may be read.
Moreover, it may be possible to modify or forge the preserved values so that when the class
is deserialized, the values become corrupted.
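
The leak can be demonstrated by serializing to a byte array and scanning the raw stream for the private field's value. The class and field names below are illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class LeakyCredentials implements Serializable {
    private static final long serialVersionUID = 1L;

    private String password = "tops3cret"; // private, but serialized anyway

    // Serialize an instance and return the raw stream as a string,
    // showing that the private field's value is readable by anyone
    // with access to the byte stream.
    public static String serializedBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new LeakyCredentials());
        }
        return new String(bos.toByteArray(), StandardCharsets.ISO_8859_1);
    }
}
```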

Introducing a security manager does not prevent the normally inaccessible fields from being
serialized and deserialized (although permission must be granted to write to and read from
the file or network if the byte stream is being stored or transmitted). Network traffic
(including RMI) can be protected, however, by using SSL.

Reflection

Reflection enables a Java program to analyze and modify itself. In particular, a program can
find out the values of field variables and change them Forman 05, Sun 02. The Java
reflection API includes a method call that enables fields that are not normally accessible to be
accessed under reflection. The following code prints out the names and values of all fields of
an object someObject of class SomeClass:

Field[] fields = SomeClass.class.getDeclaredFields();
for (Field fieldsI : fields) {
  if (!Modifier.isPublic(fieldsI.getModifiers())) {
    fieldsI.setAccessible(true);
  }
  System.out.print("Field: " + fieldsI.getName());
  System.out.println(", value: " + fieldsI.get(someObject));
}

A field could be set to a new value as follows:

String newValue = reader.readLine();
fieldsI.set(someObject,
    returnValue(newValue, fieldsI.getType()));

Introducing the default security manager does prevent the fields that would not normally be
accessible from being accessed under reflection. The default security manager throws
java.security.AccessControlException in these circumstances. However, it is
possible to grant a permission to override this default behavior:
java.lang.reflect.ReflectPermission can be granted with action suppressAccessChecks.
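
A policy file entry granting this permission might look like the following illustrative grant (scope such grants as narrowly as possible):

```
grant {
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
};
```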

The JVM Tool Interface

Java 5 introduced the JVM Tool Interface (JVMTI) Sun 04d, replacing both the JVM
Profiler Interface (JVMPI) and the JVM Debug Interface (JVMDI), which are now
deprecated.

The JVMTI contains extensive facilities to find out about the internals of a running JVM,
including facilities to monitor and modify a running Java program. These facilities are rather
low level and require the use of the Java Native Interface (JNI) and C Language
programming. However, they provide the opportunity to access fields that would not
normally be accessible. Also, there are facilities that can change the behavior of a running
Java program (for example, threads can be suspended or stopped).

The JVMTI works by using agents that communicate with the running JVM. These agents
must be loaded at JVM startup and are usually specified via one of the command line options
-agentlib: or -agentpath:. However, agents can be specified in environment
variables, although this feature can be disabled where security is a concern. The JVMTI is
always enabled, and JVMTI agents may run under the default security manager without
requiring any permissions to be granted. More work needs to be done to determine under
exactly what circumstances the JVMTI can be misused.

Debugging

The Java Platform Debugger Architecture (JPDA) builds on the JVMTI and provides high-level
facilities for debugging running Java systems Sun 04c. These include facilities similar
to the reflection facilities described above for inspecting and modifying field values. In
particular, there are methods to get and set field and array values. Access control is not
enforced so, for example, even the values of private fields can be set.

Introducing the default security manager means that various permissions must be granted in
order for debugging to take place. The following policy file was used to run the JPDA Trace
demonstration under the default security manager:

grant {
  permission java.io.FilePermission "traceoutput.txt", "read,write";
  permission java.io.FilePermission "C:/Program Files/Java/jdk1.5.0_04/lib/tools.jar", "read";
  permission java.io.FilePermission "C:/Program", "read,execute";
  permission java.lang.RuntimePermission "modifyThread";
  permission java.lang.RuntimePermission "modifyThreadGroup";
  permission java.lang.RuntimePermission "accessClassInPackage.sun.misc";
  permission java.lang.RuntimePermission "loadLibrary.dt_shmem";
  permission java.util.PropertyPermission "java.home", "read";
  permission java.net.SocketPermission "<localhost>", "resolve";
  permission com.sun.jdi.JDIPermission "virtualMachineManager";
};

Monitoring and Management

Java contains extensive facilities for monitoring and managing a JVM Sun 04e. In
particular, the Java Management Extension (JMX) API enables the monitoring and control of
class loading, thread state and stack traces, deadlock detection, memory usage, garbage
collection, operating system information, and other operations Sun 04a. There are also
facilities for monitoring and managing logging. A running JVM may be monitored and
managed remotely.
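
Locally, the same data is reachable through the platform MXBeans; the small probe below (with illustrative names) reads the thread and heap information that a remote JMX client could also obtain:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public final class JmxProbe {
    private JmxProbe() {}

    // Number of live threads in this JVM, as reported by the
    // platform ThreadMXBean.
    public static int liveThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    // Current heap usage in bytes, as reported by the MemoryMXBean.
    public static long heapUsedBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }
}
```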

For a JVM to be monitored and managed remotely, it must be started with various system
properties set (either on the command line or in a configuration file). Also, there are
provisions for the monitoring and management to be done securely (by passing the
information using SSL, for example) and to require proper authentication of the remote
server. However, users may start a JVM with remote monitoring and management enabled
with no security for their own purposes, and this would leave the JVM open to compromise
from outsiders. Although a user is unlikely to enable remote monitoring and management
by accident, users may not realize that starting a JVM with these features enabled, but
without the security options, leaves that JVM exposed to outside abuse.
