<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 12/14/2025 3:22 PM, Mark E. Shoulson
via Unicode wrote:<br>
</div>
<blockquote type="cite"
cite="mid:b9ff3f9c-09a8-4d0d-b1fb-832644f4c1eb@kli.org">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p>On 12/14/25 5:44 PM, Asmus Freytag via Unicode wrote:</p>
<blockquote type="cite"
cite="mid:e16081af-bea3-4e4f-ba96-316d3ce6a1ef@ix.netcom.com">
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8">
<div class="moz-cite-prefix">On 12/14/2025 10:47 AM, Phil Smith
III via Unicode wrote:<br>
</div>
<blockquote type="cite"
cite="mid:012d01dc6d2a$105656a0$310303e0$@akphs.com">
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8">
<meta name="Generator"
content="Microsoft Word 15 (filtered medium)">
<style>@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}@font-face
{font-family:Aptos;}p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:12.0pt;
font-family:"Aptos",sans-serif;}a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:#0A2F41;}.MsoChpDefault
{mso-style-type:export-only;}div.WordSection1
{page:WordSection1;}</style>
<div class="WordSection1">
<p class="MsoNormal"><span
style="font-family:"Calibri",sans-serif;color:#0A2F41">Well,
I’m sorta “asking for a friend” – a coworker who is deep
in the weeds of working with something Unicode-related.
I’m blaming him for having told me that :)<o:p></o:p></span></p>
<p class="MsoNormal"><span
style="font-family:"Calibri",sans-serif;color:#0A2F41"><o:p> </o:p></span></p>
<br>
</div>
</blockquote>
<p>This actually deserves a deeper answer, or a more
"bird's-eye" one, if you want. Read to the end.</p>
<p>The way you asked the question seems to hint that in your
minds you and your friend conflate the concept of "combining"
mark and "diacritic". That would not be surprising if you are
mainly familiar with European scripts and languages, because
in that case, this equivalence kind of applies.</p>
</blockquote>
<p>Yes. This is crucial. You (Phil) are writing like "sheez, so
there's e and there's e-with-an-acute, we might as well just
treat them like separate letters." And that maybe makes sense
for languages where "combining characters" are maybe two or
three diacritics that can live on five or six letters. Maybe it
does make sense to consider those combinations as distinct
letters (indeed, some of the languages in question do just
that). But some combining characters are more rightly perceived
as things separate from the letters which are written in the
same space (and have historically always been considered so).
The most obvious examples would be Hebrew and Arabic
vowel-points. Does it really make sense to consider בְ and בֶ
and בְּ and all the other combinatorics as separate distinct
things, when they clearly contain separate units, each of which
has its own consistent character? Throw in the Hebrew "accents"
(cantillation marks) and you're talking an enormous
combinatorial explosion at the *cost* of simplicity and
consistency, not improving it. Ditto Indic vowel-marks and a
jillion other abjads and abugidas. </p>
</blockquote>
Nice examples to back up what I wrote.
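<p>For anyone who wants to see that distinction concretely, here is a minimal Python sketch using the standard-library unicodedata module (the example strings are mine, not from the thread): the European accented letter round-trips between precomposed and decomposed forms, while the pointed Hebrew letter has no precomposed code point to collapse into.</p>

```python
import unicodedata

# European case: precomposed "é" and base letter + combining acute are
# canonically equivalent; normalization converts between the two forms.
assert unicodedata.normalize("NFC", "e\u0301") == "\u00E9"   # compose
assert unicodedata.normalize("NFD", "\u00E9") == "e\u0301"   # decompose

# Hebrew case: bet + dagesh + sheva stays three separate units even
# under NFC -- the precomposed presentation forms (U+FB31 etc.) are
# composition exclusions, so no single code point absorbs the marks.
bet_pointed = "\u05D1\u05BC\u05B0"
assert len(unicodedata.normalize("NFC", bet_pointed)) == 3
```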
<blockquote type="cite"
cite="mid:b9ff3f9c-09a8-4d0d-b1fb-832644f4c1eb@kli.org">
<p> If anything, there's a better case to be made that the
precomposed letters were maybe a wrong move.</p>
<br>
</blockquote>
<p>That "might" have been the case, had Unicode been created in a
vacuum.</p>
<p>Instead, Unicode needed to offer the easiest migration path from
the installed base of pre-existing character encodings, or risk
failing to gain ground at all.</p>
<p>All the early systems mainly started out with legacy applications
and legacy data that needed to be supported as transparently as
possible. Given how pervasively string indexing and length
calculations are embedded in those legacy applications, trying to
support them with a different encoding model (not just a different
encoding) would have been a non-starter.</p>
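<p>A small Python sketch (my own example) makes the indexing-and-length point concrete: the same text yields a different count depending on which notion of code unit a legacy application was written against.</p>

```python
# The same string, measured in three different units. Legacy code that
# hard-codes one of these notions of "length" breaks under a different
# encoding model.
s = "héllo"
assert len(s) == 5                            # code points
assert len(s.encode("utf-16-le")) // 2 == 5   # UTF-16 code units
assert len(s.encode("utf-8")) == 6            # UTF-8 code units (bytes)

# Outside the BMP the counts diverge further (surrogate pair in UTF-16):
emoji = "h\U0001F600"                         # 'h' + GRINNING FACE
assert len(emoji) == 2
assert len(emoji.encode("utf-16-le")) // 2 == 3
assert len(emoji.encode("utf-8")) == 5
```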
<p>As we've seen since, the final key in that puzzle was the IETF
standardizing an ASCII-compatible, variable-length encoding form that
violated one of Unicode's early design goals (a fixed number of code
units per character). However, allowing direct parsing of data streams
for ASCII-based syntax characters turned out to be more of a
compatibility requirement than it had appeared at first.</p>
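<p>That ASCII-transparency property is easy to demonstrate. A short Python sketch (my own illustration): because every byte of a UTF-8 multi-byte sequence has its high bit set, a byte-oriented parser that looks for ASCII syntax characters keeps working on UTF-8 data unchanged.</p>

```python
# Splitting a UTF-8 byte stream on an ASCII delimiter is safe: the
# byte 0x2F ('/') can only ever be the character '/', never part of
# a multi-byte sequence.
data = "path/to/fichié".encode("utf-8")
assert data.split(b"/") == [b"path", b"to", "fichié".encode("utf-8")]

# Every byte of a multi-byte sequence falls outside the ASCII range:
for ch in "é日":
    assert all(byte & 0x80 for byte in ch.encode("utf-8"))
```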
<p>The reason this was not built directly into the earliest Unicode
versions is that it is something (transport) protocol designers run
up against more than people worried about representing text in
documents.</p>
<p>Looking at Unicode from the perspective of "what if I could design
something from scratch?" can be intellectually interesting but is of
little practical value. Any design that people from the different
legacy environments could not have coalesced around would simply have
died out.</p>
<p>If it amuses you, you could think of some features of Unicode as
being akin to the "vestigial" organs that evolution sometimes leaves
behind. They may not strictly be required for the way the organism
functions today, but without their use in the historical transition,
the current form of the organism would not exist, because the species
would be extinct.</p>
<p>A./</p>
<br>
</body>
</html>