Italics get used to express important semantic meaning, so Unicode should support them

Markus Scherer
Tue Dec 15 11:31:36 CST 2020

On Mon, Dec 14, 2020 at 10:54 PM Rebecca Bettencourt via Unicode wrote:

> Many character sets from 8-bit microcomputers had “inverse” or “reverse
> video” characters that were treated as distinct from their “normal video”
> counterparts. When we proposed encoding these, as atomic characters or
> using variation sequences or by any other means, the UTC shot down the idea
> completely.

Early computing systems conflated layers of processing that modern ones
separate. For example, a quarter of ASCII, and of EBCDIC, was devoted to
control codes. We inherited those codes, but they are now mostly unused
because lower-level mechanisms carry text purely as payload.
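As a small illustration of that "quarter" (a sketch using Python's standard unicodedata module, not anything from the original mail): the C0 controls occupy U+0000..U+001F, and DEL at U+007F is a control as well, so 33 of the 128 ASCII code points carry no printable text at all.

```python
import unicodedata

# Collect the ASCII code points whose Unicode general category is "Cc"
# (control). These are U+0000..U+001F plus DEL at U+007F.
controls = [i for i in range(128) if unicodedata.category(chr(i)) == "Cc"]

print(len(controls))                        # 33 of 128, roughly a quarter
print(hex(controls[0]), hex(controls[-1]))  # 0x0 0x7f
```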

I think the plain-text / rich-text distinction has been quite successful.
Personally, I don't like the math-styled characters, because they seem
specific to a particular math tradition. When I was in high school, the
vector-math teacher gave us a choice between the old style of using
Fraktur/Sütterlin for vector variables and the new style of regular letters
with an arrow on top. "Vector" markup with different style choices seems
better for this kind of thing.
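The two notations mentioned above map to two very different encodings, which is worth seeing concretely. A minimal sketch (mine, not from the original mail, using only the standard unicodedata module): the Fraktur style exists as dedicated math-styled code points such as U+1D504 MATHEMATICAL FRAKTUR CAPITAL A, atomic characters unrelated to plain "A", while the arrow style is just a plain letter followed by U+20D7 COMBINING RIGHT ARROW ABOVE.

```python
import unicodedata

# Math-styled atomic character: a separate code point, not 'A' plus styling.
fraktur_a = "\U0001D504"
print(unicodedata.name(fraktur_a))  # MATHEMATICAL FRAKTUR CAPITAL A

# Plain letter plus a combining mark: the base letter stays a plain 'a'.
vector_a = "a\u20D7"
print([unicodedata.name(c) for c in vector_a])

# Consequence for plain-text processing: searching for the plain letter
# finds it in the combining-mark form but not in the math-styled form.
print("a" in vector_a)    # True
print("A" in fraktur_a)   # False
```

This is one reason markup-style solutions degrade more gracefully: strip the combining arrow and an ordinary searchable letter remains.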

