Legacy software frequently assumes that every character in a string occupies 8 bits (a Java byte). The Java language assumes that every character in a string occupies 16 bits (a Java char). Unfortunately, neither the Java byte nor the Java char data type can represent all possible Unicode characters. Many strings are stored or communicated using an encoding, such as UTF-8, that allows characters to have varying sizes.

Although Java strings are stored as arrays of char and can be represented as arrays of byte, a single character in the string might be encoded as two or more consecutive elements of type byte or of type char. Splitting a char or byte array consequently risks splitting a multibyte character.

Ignoring the possibility of supplementary characters, multibyte characters, or combining characters (characters that modify other characters) may allow an attacker to bypass input validation checks. Consequently, programs must not split characters between two data structures.
Multibyte Characters
Multibyte encodings such as UTF-8 are used for character sets that require more than one byte to uniquely identify each constituent character. For example, the Japanese encoding Shift-JIS (shown below) supports multibyte encoding in which the maximum character length is two bytes (one lead byte and one trailing byte).
Byte Type | Range |
---|---|
single-byte | 0x00–0x7F, 0xA1–0xDF |
lead-byte | 0x81–0x9F, 0xE0–0xFC |
trailing-byte | 0x40–0x7E, 0x80–0xFC |
The trailing-byte ranges overlap the ranges of both the single-byte and the lead-byte characters. When a multibyte character is split across a buffer boundary, it can be interpreted differently than if it were not split; this difference arises from the ambiguity of its constituent bytes [Phillips 2005].
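The effect is easy to reproduce with any variable-width encoding. The following minimal sketch (not part of the original rule; it uses UTF-8 rather than Shift-JIS purely for convenience) decodes the two halves of a single three-byte character separately, as buffer-by-buffer code effectively does, and loses the character:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SplitCharacterDemo {
  public static void main(String[] args) {
    byte[] whole = "\u3042".getBytes(StandardCharsets.UTF_8); // Japanese HIRAGANA A: 3 bytes in UTF-8
    byte[] head = Arrays.copyOfRange(whole, 0, 1);            // lead byte only
    byte[] tail = Arrays.copyOfRange(whole, 1, whole.length); // trailing bytes only
    // Decoding each piece on its own yields replacement characters (U+FFFD),
    // not the original character
    String broken = new String(head, StandardCharsets.UTF_8)
                  + new String(tail, StandardCharsets.UTF_8);
    System.out.println(broken.equals("\u3042")); // false
  }
}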
Supplementary Characters
According to the Java API [[API 2006]], class Character documentation (Unicode Character Representations):

The char data type (and consequently the value that a Character object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value.

The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

- The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value if followed by any low-surrogate value in a string would represent a letter.
- The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).
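A short, self-contained sketch (the class name and println calls are illustrative, not part of the rule) demonstrates the difference between the char-based and code-point-based overloads:

public class SupplementaryCharDemo {
  public static void main(String[] args) {
    // char overload: a lone high surrogate is treated as an undefined character
    System.out.println(Character.isLetter('\uD840'));     // false
    // int (code point) overload: U+2F81A is a CJK ideograph, hence a letter
    System.out.println(Character.isLetter(0x2F81A));      // true
    // A supplementary character occupies two char values but is one code point
    String s = new StringBuilder().appendCodePoint(0x2F81A).toString();
    System.out.println(s.length());                       // 2
    System.out.println(s.codePointCount(0, s.length()));  // 1
  }
}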
Noncompliant Code Example (byte)
This noncompliant code example reads bytes from a socket's input stream into a 1024-byte buffer and concatenates each decoded chunk onto a String.
public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  String str = "";
  byte[] data = new byte[1024];
  while (in.read(data) > -1) {
    str += new String(data, "UTF-8");
  }
  in.close();
  return str;
}
This code fails to account for the interaction between characters represented with a multibyte encoding and the boundaries between loop iterations. If the last byte read in one read() operation (for example, the 1024th) is the lead byte of a multibyte character, its trailing bytes are not encountered until the next iteration of the while loop. However, the multibyte encoding is resolved during construction of the new String inside the loop, so the split character is decoded incorrectly.
Compliant Solution (byte)
This compliant solution does not create a string until all the data is available.
public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  int offset = 0;
  int bytesRead = 0;
  byte[] data = new byte[4096];
  while (true) {
    bytesRead = in.read(data, offset, data.length - offset);
    if (bytesRead == -1) {
      break; // end of stream
    }
    offset += bytesRead;
    if (offset >= data.length) {
      break; // buffer full
    }
  }
  in.close();
  // Decode only the bytes actually read so that unused buffer space
  // does not become spurious NUL characters in the result
  String str = new String(data, 0, offset, "UTF-8");
  return str;
}
This code avoids splitting multibyte encoded characters across buffers by deferring construction of the result string until the data has been read in full. It does assume that the 4096th byte in the stream does not fall in the middle of a multibyte character.

The size of the data byte buffer depends on the maximum number of bytes required to encode a character and the maximum number of characters expected. For example, UTF-8 requires four bytes to encode any character above U+FFFF. Because Java uses the UTF-16 character encoding to represent char data, such characters are split into two separate char values of two bytes each. Consequently, the byte buffer should be four times the maximum number of characters.
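As a quick sizing sketch (the 1024-character limit here is a hypothetical application choice, not something specified by the rule):

int maxChars = 1024;                                 // hypothetical limit on the number of characters
int maxBytesPerChar = 4;                             // UTF-8 worst case (code points above U+FFFF)
byte[] data = new byte[maxChars * maxBytesPerChar];  // 4096 bytes, as in the example above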
Compliant Solution (byte, readFully())
The one-argument and three-argument readFully() methods of the DataInputStream class read all of the requested data or throw an exception. These methods throw EOFException if they detect the end of input before the required number of bytes has been read; they throw IOException if some other input/output error occurs. This compliant solution assumes an input size of 4096 bytes; readFully() throws EOFException if the stream ends before the buffer has been filled.
public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  byte[] data = new byte[4096];
  DataInputStream din = new DataInputStream(in);
  din.readFully(data);
  in.close();
  String str = new String(data, "UTF-8");
  return str;
}
Noncompliant Code Example (char)
This noncompliant code example attempts to trim leading letters from the string. It fails to accomplish this task because Character.isLetter() lacks support for supplementary and combining characters [[Hornig 2007]].
// Fails for supplementary or combining characters
public static String trim_bad1(String string) {
  char ch;
  int i;
  for (i = 0; i < string.length(); i += 1) {
    ch = string.charAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
Noncompliant Code Example (char)
This noncompliant code example attempts to fix the problem by using the String.codePointAt() method, which returns the full Unicode code point at the given index as an int. This approach works for supplementary characters but fails for combining characters [[Hornig 2007]].
// Fails for combining characters
public static String trim_bad2(String string) {
  int ch;
  int i;
  for (i = 0; i < string.length(); i += Character.charCount(ch)) {
    ch = string.codePointAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
Compliant Solution (char)
This compliant solution works for both supplementary and combining characters [[Hornig 2007]]. According to the Java API [[API 2006]], class java.text.BreakIterator documentation:

The BreakIterator class implements methods for finding the location of boundaries in text. Instances of BreakIterator maintain a current position and scan over text returning the index of characters where boundaries occur.

The boundaries returned may be those of supplementary characters, combining character sequences, or ligature clusters. For example, an accented character might be stored as a base character and a diacritical mark.
public static String trim_good(String string) {
  BreakIterator iter = BreakIterator.getCharacterInstance();
  iter.setText(string);
  int i;
  // Stop at the end-of-text boundary so that codePointAt() is never
  // called with an index equal to string.length()
  for (i = iter.first();
       i != BreakIterator.DONE && i < string.length();
       i = iter.next()) {
    int ch = string.codePointAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  if (i == BreakIterator.DONE || i >= string.length()) {
    return ""; // The input was empty or contained only letters
  } else {
    return string.substring(i);
  }
}
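As an illustrative usage (not part of the original rule), consider a leading combining sequence: 'e' followed by U+0301 (COMBINING ACUTE ACCENT) forms a single user-perceived character, and the two trim variants treat it differently:

String s = "e\u0301-rest";        // 'e' + combining acute accent, then "-rest"
System.out.println(trim_bad2(s)); // "\u0301-rest": the combining mark is split from its base
System.out.println(trim_good(s)); // "-rest": the entire accented letter is trimmed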
To perform locale-sensitive String comparisons for searching and sorting, use the java.text.Collator class.
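A minimal sketch of such a comparison (the French locale and the PRIMARY strength are arbitrary illustrative choices, not part of the rule):

import java.text.Collator;
import java.util.Locale;

public class CollatorDemo {
  public static void main(String[] args) {
    Collator collator = Collator.getInstance(Locale.FRENCH);
    collator.setStrength(Collator.PRIMARY);  // ignore accent and case differences
    // "côté" and "cote" differ only in accents, so they compare equal at PRIMARY strength
    System.out.println(collator.compare("c\u00F4t\u00E9", "cote") == 0);  // true
  }
}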
Risk Assessment
Failure to correctly account for supplementary and combining characters can lead to unexpected behavior.
Rule | Severity | Likelihood | Remediation Cost | Priority | Level |
---|---|---|---|---|---|
IDS10-J | Low | Unlikely | Medium | P2 | L3 |
Bibliography
[API 2006] Classes Character and BreakIterator
[Hornig 2007] Problem areas: Characters