Richard Wordingham via Unicode
unicode at unicode.org
Sat Feb 9 04:58:05 CST 2019
On Fri, 8 Feb 2019 18:08:34 -0800
Asmus Freytag via Unicode <unicode at unicode.org> wrote:
> On 2/8/2019 5:42 PM, James Kass via Unicode wrote:
> You are still making the assumption that selecting a different glyph
> for the base character would automatically lead to the selection of a
> different glyph for the combining mark that follows. That's an iffy
> assumption because "italics" can be realized by choosing a separate
> font (typographically, italics is realized as a separate typeface).
The usual practice is to look for a font that supports both the base
character and the mark.
> Under the implicit assumptions bandied about here, the VS approach
> thus reveals itself as a true rich-text solution (font switching)
> albeit realized with pseudo coding rather than markup, markdown or
> escape sequences.
Isn't that already the case if one uses variation sequences to choose
between Chinese and Japanese glyphs?
>> Of course, the user might insert VS14s without application
>> assistance. In which case hopefully the user knows the rules. The
>> worst case scenario is where the user might insert a VS14 after a
>> non-base character, in which case it should simply be ignored by any
>> application. It should never “break” the display or the processing;
>> it simply makes the text for that document non-conformant. (Of
>> course putting a VS14 after “ê” should not result in an italicized […]
Is there any obligation on applications to ignore it? In plain text,
the Unicode rules allow the application to choose to render every third
'ê' as italic. Possibly it comes down to the mens rea of the
application (or of its coder or specifier), but without mentalism an
application could opt to treat <ê, VS14> as <e, VS14, U+0302>.
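That reading is not even blocked at the normalization level: under
canonical decomposition, <ê, VS14> becomes <e, U+0302, VS14>, which is a
distinct sequence from <e, VS14, U+0302> (the VS14, with combining class
0, blocks recomposition when it intervenes). A quick check with Python's
standard unicodedata module:

```python
import unicodedata

# VS14 is U+FE0D; "ê" is U+00EA, which canonically decomposes to e + U+0302.
after_precomposed = "\u00ea\ufe0d"   # <ê, VS14>
between = "e\ufe0d\u0302"            # <e, VS14, U+0302>

# NFD of <ê, VS14> leaves the variation selector AFTER the combining mark,
# so the two orderings are not canonically equivalent.
print(repr(unicodedata.normalize("NFD", after_precomposed)))  # 'e\u0302\ufe0d'
print(repr(unicodedata.normalize("NFD", between)))            # 'e\ufe0d\u0302'
```

So an application that wants to treat the two alike has to do so by its
own convention, not by appeal to canonical equivalence.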
A relevant concern would be the word 'voracious' with its first 'o'
italicised by VS14. How would current typeface selection logic handle it?
I can envisage <o, VS14> only being in the cmap of an italic font.
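To make that concrete, here is a minimal sketch of such selection logic.
It uses a toy in-memory model of OpenType format-14 (variation sequence)
cmap data; the font names, tables, and glyph names are all invented for
illustration, not drawn from any real font stack:

```python
# Toy model: each font has a default cmap (character -> glyph) and a
# UVS table ((base, selector) -> glyph), as in a format-14 cmap subtable.
# All data here is hypothetical.
VS14 = "\ufe0d"

FONTS = [
    ("Roman",  {"o": "o", "e": "e"},               {}),
    ("Italic", {"o": "o.italic", "e": "e.italic"}, {("o", VS14): "o.italic"}),
]

def select(base, selector=None):
    """Return (font, glyph): prefer a font whose UVS table covers the whole
    sequence; otherwise fall back to the first font mapping the base alone."""
    if selector is not None:
        for name, cmap, uvs in FONTS:
            if (base, selector) in uvs:
                return name, uvs[(base, selector)]
    for name, cmap, uvs in FONTS:
        if base in cmap:
            return name, cmap[base]
    return None, ".notdef"

# <o, VS14> resolves to the italic font, while a bare 'o' stays roman --
# so 'voracious' with VS14 on its first 'o' would switch fonts mid-word.
```

Under this model the fallback behaviour the thread worries about follows
directly: whether the rest of the word changes face depends entirely on
how the selection logic scopes its font choice, which Unicode does not
specify.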