Null

The all-bits-off (zero) value has a long history in character encoding. In early telegraph systems and on paper tape, there was no way to distinguish between all-bits-off as a character and all-bits-off as the absence of a signal, so it was treated as the "null" character--that is, the receiving equipment does nothing in response to this signal (or, perhaps more accurately, absence of signal).

On Teletypes and later printing terminals, since returning the carriage to the beginning of the line took time, it was common practice to follow a carriage-return signal with a sequence of null characters to give the receiving equipment time to return the carriage. Failing to do this would result in characters being dropped at the receiving end, or printed in whatever position the printhead happened to be while it was still moving.

The practice of all-bits-off being a "null" character has persisted in most character encodings, including Unicode, to this day. (In early punched-card codes, all bits off signified a space instead, but this has been changed in modern encodings, such as EBCDIC, that are derived from old punched-card codes.)

In most situations today, the null character is still treated as a no-op and ignored, but it has been taken over for other uses as well. The C programming language, and various languages derived from it, use the null value to mark the end of a character string in memory. The C strlen function simply scans forward from a specified memory address until it locates a zero byte, counting the number of nonzero bytes it passes on the way. This approach means every string takes one byte more than is necessary to store the characters, but it allows strings of arbitrary length, whereas schemes that precede a string with a length value are limited by the size of that length field.

The use of the 0x00 byte value to mark the end of a string can be a problem if the byte buffer contains characters that occupy more than one byte and one of a character's bytes happens to be 0x00. UTF-16 and various East Asian encodings have this problem. There are various transforms used to avoid spurious nulls in East Asian encodings; in Unicode-based environments, UTF-8 is usually used in byte-based buffers. (In a true UTF-16 buffer, based on 16-bit words, the 16-bit value 0x0000--that is, U+0000 in Unicode--is usually used to mark the end of the string instead.)

The 0x00 value used to mark the end of a string isn't, strictly speaking, a "character," but the convention does prevent 0x00 from being used to represent an actual character within a string.
U+0000 <control>