Unicode "no-op" Character?

Ken Whistler via Unicode unicode at unicode.org
Wed Jul 3 13:22:55 CDT 2019


On 7/3/2019 10:47 AM, Sławomir Osipiuk via Unicode wrote:
>
> Is my idea impossible, useless, or contradictory? Not at all.
>
What you are proposing is in the realm of higher-level protocols (HLPs).

You could develop such a protocol, and then write processes that honored 
it, or try to convince others to write processes that honor it. You could 
use PUA characters, noncharacters, or existing control codes -- the 
practical implications of each choice would differ slightly, but in every 
case the result would be an HLP, not part of the standard itself.

But your idea is not a feasible part of the Unicode Standard. There are 
no "discardable" characters in Unicode -- *by definition*. The 
discussion of "ignorable" characters in the standard is nuanced and 
complicated, because there are some characters that are carefully 
designed to be transparent to certain well-specified processes, but not 
to others. No character in the standard is (or can be) ignorable by 
*all* processes, nor can a "discardable" character ever be defined as 
part of the standard.
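To make that distinction concrete, here is a small sketch (my illustration, not from the standard) using U+200B ZERO WIDTH SPACE, which carries the Default_Ignorable_Code_Point property: a renderer with no glyph for it typically displays nothing, yet comparison, length, and storage processes all still see it:

```python
plain = "noop"
marked = "no\u200bop"   # same visible text, with ZERO WIDTH SPACE inserted

# Ignorable for display fallback, but NOT for comparison or counting:
print(plain == marked)            # False -- equality does not ignore it
print(len(plain), len(marked))    # 4 5 -- the code point is still stored
```

So U+200B is "ignorable" only in a narrow, process-specific sense; no process is entitled to silently delete it from the text.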

The fact that there are a myriad of processes implemented (and 
distributed who knows where) that convert 7-bit ASCII (or 8-bit 8859-1) 
to/from UTF-16 by simple integral type conversion is an existence proof 
that U+000F is never, ever, ever, ever going to be defined as 
"discardable" in the Unicode Standard.
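A minimal sketch of that kind of conversion process (the function names here are mine, chosen for illustration): because Latin-1 bytes 0x00..0xFF map 1:1 to U+0000..U+00FF, the conversion is a blind integral widening that inspects nothing, so a byte like 0x0F (U+000F) passes through untouched -- and declaring it "discardable" would retroactively break every such converter:

```python
def latin1_to_utf16_code_units(data: bytes) -> list[int]:
    """Widen each ISO 8859-1 byte to a UTF-16 code unit.

    Each byte value IS the code unit value; nothing is inspected
    or dropped, so U+000F is copied through like any other byte."""
    return list(data)

def utf16_code_units_to_latin1(units: list[int]) -> bytes:
    """Narrow back; valid only when every unit is <= 0xFF."""
    return bytes(units)

src = b"A\x0f\xe9"  # 'A', U+000F SHIFT IN, U+00E9 'é'
units = latin1_to_utf16_code_units(src)
assert units == [0x41, 0x0F, 0xE9]
assert utf16_code_units_to_latin1(units) == src  # lossless round trip
```

A converter that dropped U+000F on the way through would fail this round trip, which is exactly why the standard cannot redefine an existing code point as discardable.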

--Ken

