Why incomplete subscript/superscript alphabet?

Michael Everson everson at evertype.com
Mon Oct 10 15:38:56 CDT 2016

On 10 Oct 2016, at 21:24, Julian Bradfield <jcb+unicode at inf.ed.ac.uk> wrote:
>> We need reliable plain-text notation systems. Otherwise distinctions we wish to encode may be lost. 
> We have no need to make such distinctions in "plain text".

You mightn’t. 

> It's convenient to have major distinctions easily accessible without
> font hacking,

Yes, indeed. 

> but there's no need to have every notation one might dream up forcibly incorporated into "plain text".


> In particular, for super/subscripts, which is where we came in, even
> the benighted souls using Word still typically recognize and can use
> LaTeX notation.
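For reference, the notation being contrasted here is LaTeX's `^`/`_` markup, which exists only at the markup layer, versus dedicated Unicode super/subscript characters, which survive in plain text. A minimal illustration (the specific formulas are just examples, not from the thread):

```latex
% LaTeX markup: the super/subscripting lives in the markup, not the characters.
H$_2$O          % subscript 2
x$^2$ + y$^2$   % superscript 2
% Plain-text Unicode equivalents use dedicated characters: H₂O, x² + y²
```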

I can’t use LaTeX notation. I don’t use that proprietary system. And don’t you dare tell me that I am benighted, or using Word. Neither applies.

On 10 Oct 2016, at 21:31, Julian Bradfield <jcb+unicode at inf.ed.ac.uk> wrote:

> On 2016-10-10, Hans Åberg <haberg-1 at telia.com> wrote:
>> It is possible to write math just using ASCII and TeX, which was the original idea of TeX. Is that what you want for linguistics?
> I don't see the need to do everything in plain text.

Of course not. You’re a programmer. 

(Mathematical typesetting is not my concern.)

> Because phonetics has a much smaller set of symbols, I do kwəɪt ləɪk
> biːɪŋ eɪbl tʊ duː ðɪs, and because they're also used in non-specialist
> writing, it's useful to have the symbols hacked into Unicode instead
> of hacked into specialist fonts.
> But subscripts? No need.

And yet we use such things. 

I have an edition of the Bible I’m setting. Big book. Verse numbers. I like these to be superscript so they’re unobtrusive. Damn right I use the superscript characters for these. I can process the text, export it for concordance processing, whatever, and those out-of-text notations DON’T get converted to regular digits, which is exactly the behaviour I need.
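The point about processing can be sketched concretely. This is a minimal illustration, not Everson’s actual workflow: verse numbers rendered with the dedicated Unicode superscript digits (U+2070, U+00B9, U+00B2, U+00B3, U+2074–U+2079) are distinct characters, so a digit-oriented pass over the text simply does not see them.

```python
import re

# Map ASCII digits to the dedicated Unicode superscript digit characters.
SUPERSCRIPTS = str.maketrans(
    "0123456789",
    "\u2070\u00b9\u00b2\u00b3\u2074\u2075\u2076\u2077\u2078\u2079",
)

def superscript_verse_number(n: int) -> str:
    """Render a verse number using Unicode superscript digit characters."""
    return str(n).translate(SUPERSCRIPTS)

verse = superscript_verse_number(23) + "The LORD is my shepherd"

# A pass that extracts or rewrites ordinary digits leaves the verse
# marker untouched, because ²³ are not ASCII digits:
print(re.findall(r"[0-9]+", verse))  # → []
```

Note that in Python even `\d` would not match these characters: the superscript digits are in Unicode category No, not Nd, so they are invisible to decimal-digit matching as well.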

