Legacy software frequently assumes that every character in a string occupies 8 bits (a Java `byte`). The Java language assumes that every character in a string occupies 16 bits (a Java `char`). Unfortunately, neither the Java `byte` nor the Java `char` data type can represent all possible Unicode characters. Many strings are stored or communicated using encodings such as UTF-8 that support characters of varying sizes.

While Java strings are stored as an array of characters and can be represented as an array of bytes, a single character in the string might be represented by two or more consecutive elements of type `byte` or of type `char`. Splitting a `char` or `byte` array risks splitting a multibyte character.

Ignoring the possibility of supplementary characters, multibyte characters, or combining characters (characters that modify other characters) may allow an attacker to bypass input validation checks. Consequently, characters must not be split between two data structures.
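As a minimal illustration of this size mismatch (the class name and sample string here are invented for this sketch, not part of the original rule), the following code prints how many `char` values, code points, and UTF-8 bytes a two-character string actually occupies:

import java.nio.charset.StandardCharsets;

public class CharacterSizes {
  public static void main(String[] args) {
    // "a" followed by U+1D11E (MUSICAL SYMBOL G CLEF), a supplementary character
    String s = "a\uD834\uDD1E";
    System.out.println(s.length());                                // 3 char values
    System.out.println(s.codePointCount(0, s.length()));           // 2 code points
    System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 5 UTF-8 bytes
  }
}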
Multibyte Characters
Multibyte encodings are used for character sets that require more than one byte to uniquely identify each constituent character. For example, the Japanese encoding Shift-JIS (shown below) supports multibyte encoding where the maximum character length is two bytes (one leading and one trailing byte).
Byte Type | Range
---|---
single-byte | 0x00–0x7F, 0xA1–0xDF
lead-byte | 0x81–0x9F, 0xE0–0xFC
trailing-byte | 0x40–0x7E, 0x80–0xFC
The trailing-byte ranges overlap the ranges of both the single-byte and lead-byte characters. When a multibyte character is separated across a buffer boundary, it can be interpreted differently than if it were not split, because its constituent bytes are ambiguous in isolation [[Phillips 2005]].
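The following sketch is an illustrative addition: it assumes the `Shift_JIS` charset is available in the runtime and that the byte pair 0x82 0xA0 encodes a single two-byte hiragana character. It shows how decoding the same two bytes together versus across a split produces different results:

import java.nio.charset.Charset;

public class ShiftJisSplit {
  public static void main(String[] args) {
    Charset sjis = Charset.forName("Shift_JIS"); // assumes this charset is installed
    byte[] whole = { (byte) 0x82, (byte) 0xA0 }; // one two-byte character
    // Decoded as a single buffer: one character
    System.out.println(new String(whole, sjis));
    // Decoded as two buffers split between the lead and trailing bytes:
    // each fragment is malformed on its own and is replaced, so the
    // original character is lost
    String split = new String(new byte[] { (byte) 0x82 }, sjis)
                 + new String(new byte[] { (byte) 0xA0 }, sjis);
    System.out.println(split.equals(new String(whole, sjis))); // false
  }
}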
Supplementary Characters
According to the Java API [[API 2006]] class `Character` documentation (Unicode Character Representations):

The `char` data type (and consequently the value that a `Character` object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now \u0000 to \u10FFFF, known as Unicode scalar value.

The Java 2 platform uses the UTF-16 representation in `char` arrays and in the `String` and `StringBuffer` classes. In this representation, supplementary characters are represented as a pair of `char` values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

An `int` value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of `int` are used to represent Unicode code points, and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

- The methods that only accept a `char` value cannot support supplementary characters. They treat `char` values from the surrogate ranges as undefined characters. For example, `Character.isLetter('\uD840')` returns `false`, even though this specific value if followed by any low-surrogate value in a string would represent a letter.
- The methods that accept an `int` value support all Unicode characters, including supplementary characters. For example, `Character.isLetter(0x2F81A)` returns `true` because the code point value represents a letter (a CJK ideograph).
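A small sketch (invented for illustration; the class name is arbitrary) that reproduces the two calls mentioned in the quoted documentation and shows how a surrogate pair in a `String` maps to a single supplementary code point:

public class SupplementaryDemo {
  public static void main(String[] args) {
    // char-based test: a lone high surrogate is not a letter
    System.out.println(Character.isLetter('\uD840'));  // false
    // int-based test: the full code point is recognized
    System.out.println(Character.isLetter(0x2F81A));   // true (a CJK ideograph)
    // In a String, U+2F81A is stored as the surrogate pair \uD87E\uDC1A
    String s = "\uD87E\uDC1A";
    System.out.println(s.length());                    // 2 char values
    System.out.println(s.codePointAt(0) == 0x2F81A);   // true
  }
}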
Noncompliant Code Example (Read)
This noncompliant code example tries to read up to 1024 bytes from a socket and build a `String` from the data. It reads the bytes in a while loop, as recommended by rule FIO10-J. Ensure the array is filled when using read() to fill an array. If it ever receives more than 1024 bytes, it throws an exception, which prevents untrusted input from exhausting the program's memory.
public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  byte[] data = new byte[MAX_SIZE + 1];
  int offset = 0;
  int bytesRead = 0;
  String str = new String();
  while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
    // Decode only the bytes read in this iteration
    str += new String(data, offset, bytesRead, "UTF-8");
    offset += bytesRead;
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  in.close();
  return str;
}
This code fails to account for the interaction between characters represented by a multibyte encoding and the boundaries between loop iterations. If the last byte read from the data stream in one `read()` operation is the leading byte of a multibyte character, the trailing bytes are not encountered until the next iteration of the `while` loop. However, the multibyte encoding is resolved during construction of the new `String` within the loop. Consequently, the multibyte character can be interpreted incorrectly.
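A minimal sketch of this failure mode (the specific character and split point are invented for illustration): decoding the two UTF-8 bytes of é in separate `String` constructions, as happens when a `read()` boundary falls inside the character, does not reproduce the original character.

import java.nio.charset.StandardCharsets;

public class Utf8Split {
  public static void main(String[] args) {
    byte[] encoded = "\u00E9".getBytes(StandardCharsets.UTF_8); // é is 0xC3 0xA9 in UTF-8
    // Decoded in one step: the character survives
    String whole = new String(encoded, StandardCharsets.UTF_8);
    // Decoded one byte per step, mimicking a read() boundary that falls
    // between the lead and trailing bytes: each fragment is malformed
    // and gets replaced
    String pieced = new String(encoded, 0, 1, StandardCharsets.UTF_8)
                  + new String(encoded, 1, 1, StandardCharsets.UTF_8);
    System.out.println(whole.equals("\u00E9"));  // true
    System.out.println(pieced.equals("\u00E9")); // false
  }
}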
Compliant Solution (Read)
This compliant solution defers creation of the string until all the data is available.
public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  byte[] data = new byte[MAX_SIZE + 1];
  int offset = 0;
  int bytesRead = 0;
  while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
    offset += bytesRead;
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  // Decode only the bytes actually read, after all data is available
  String str = new String(data, 0, offset, "UTF-8");
  in.close();
  return str;
}
This code avoids splitting multibyte characters across read boundaries by deferring construction of the result string until all the data has been read.
Compliant Solution (`Reader`)
This compliant solution uses a `Reader` rather than an `InputStream`. The `Reader` class converts bytes into characters on the fly, so it avoids the hazard of splitting multibyte characters. This routine aborts if the socket provides more than 1024 characters rather than 1024 bytes.
public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  Reader r = new InputStreamReader(in, "UTF-8");
  char[] data = new char[MAX_SIZE + 1];
  int offset = 0;
  int charsRead = 0;
  String str = "";
  while ((charsRead = r.read(data, offset, data.length - offset)) != -1) {
    offset += charsRead;
    // Rebuild the string from all characters read so far
    str = new String(data, 0, offset);
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  in.close();
  return str;
}
Noncompliant Code Example (Substring)
This noncompliant code example attempts to trim leading letters from the given string. It fails to accomplish this task because `Character.isLetter()` lacks support for supplementary and combining characters [[Hornig 2007]].
// Fails for supplementary or combining characters
public static String trim_bad1(String string) {
  char ch;
  int i;
  for (i = 0; i < string.length(); i += 1) {
    ch = string.charAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
Noncompliant Code Example (Substring)
This noncompliant code example attempts to correct the problem by using the `String.codePointAt()` method, which returns an `int` code point that the `int`-accepting `Character.isLetter()` overload can handle. This approach works for supplementary characters but fails for combining characters [[Hornig 2007]].
// Fails for combining characters
public static String trim_bad2(String string) {
  int ch;
  int i;
  for (i = 0; i < string.length(); i += Character.charCount(ch)) {
    ch = string.codePointAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
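A short sketch of why this fails (the sample text is invented for illustration): a combining mark such as U+0301 is not classified as a letter, so a code point by code point trim separates a base character from its accent.

public class CombiningDemo {
  public static void main(String[] args) {
    // "e" followed by U+0301 (COMBINING ACUTE ACCENT) renders as one
    // user-perceived character, but the accent alone is not a letter:
    System.out.println(Character.isLetter('e'));     // true
    System.out.println(Character.isLetter(0x0301));  // false
    // A code-point-based trim of "e\u0301xyz 1" therefore stops at the
    // accent and returns "\u0301xyz 1", an orphaned combining mark
    // followed by the rest of the string.
  }
}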
Compliant Solution (Substring)
This compliant solution works both for supplementary and for combining characters [[Hornig 2007]]. According to the Java API [[API 2006]] class `java.text.BreakIterator` documentation:

The `BreakIterator` class implements methods for finding the location of boundaries in text. Instances of `BreakIterator` maintain a current position and scan over text returning the index of characters where boundaries occur.
The boundaries returned may be those of supplementary characters, combining character sequences, or ligature clusters. For example, an accented character might be stored as a base character and a diacritical mark.
public static String trim_good(String string) {
  BreakIterator iter = BreakIterator.getCharacterInstance();
  iter.setText(string);
  int i;
  // Stop before the final boundary (string.length()) so codePointAt() stays in range
  for (i = iter.first(); i != BreakIterator.DONE && i < string.length(); i = iter.next()) {
    int ch = string.codePointAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  if (i == BreakIterator.DONE || i >= string.length()) {
    return ""; // The input was either empty or contained only (leading) letters
  } else {
    return string.substring(i);
  }
}
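A brief usage sketch (the sample inputs are invented; the `main` method is assumed to be declared in the same class as `trim_good`): the boundary-based trim handles both a combining sequence and a supplementary letter at the start of the string.

// Assumed to live in the same class as trim_good above
public static void main(String[] args) {
  // Leading base letter + combining accent is trimmed as one unit
  System.out.println(trim_good("e\u0301xyz 1"));      // " 1"
  // Leading supplementary letter (U+2F81A as a surrogate pair) is trimmed
  System.out.println(trim_good("\uD87E\uDC1Abc 9"));  // " 9"
}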
To perform locale-sensitive `String` comparisons for searching and sorting, use the `java.text.Collator` class.
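A hedged sketch of such a comparison (the locale and sample strings are arbitrary choices for illustration): with canonical decomposition enabled, a `Collator` treats a precomposed character and its base-letter-plus-combining-mark form as equal, and orders accented letters more sensibly than a raw `char` comparison.

import java.text.Collator;
import java.util.Locale;

public class CollatorDemo {
  public static void main(String[] args) {
    Collator collator = Collator.getInstance(Locale.FRENCH); // illustrative locale choice
    // Canonical decomposition normalizes composed and decomposed forms
    collator.setDecomposition(Collator.CANONICAL_DECOMPOSITION);
    // Precomposed é versus "e" + U+0301 (COMBINING ACUTE ACCENT)
    System.out.println(collator.compare("\u00E9", "e\u0301")); // 0: considered equal
    // Locale-aware ordering places é before f, unlike raw char values
    System.out.println(collator.compare("\u00E9", "f") < 0);   // true
    System.out.println("\u00E9".compareTo("f") < 0);           // false
  }
}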
Risk Assessment
Failure to correctly account for supplementary and combining characters can lead to unexpected behavior.
Rule | Severity | Likelihood | Remediation Cost | Priority | Level
---|---|---|---|---|---
IDS10-J | low | unlikely | medium | P2 | L3
Bibliography
[[API 2006]] | Classes Character and java.text.BreakIterator
[[Hornig 2007]] | Problem Areas: Characters