Math input methods

Philippe Verdy verdy_p at wanadoo.fr
Tue Jun 10 07:29:49 CDT 2014


ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ are without doubt more useful and more common in
double-struck style than in Fraktur style.

But there are cases where they are deliberately replaced by bold letters
(notably when working with homomorphic/dual sets correlated bijectively
with them but having distinct projections/coordinates in a numeral set,
provided that there is a defined pair of operations for composing these
coordinates with elementary base elements in the dual/homomorphic set).

As soon as you start using derived styles for such notation of duals,
you'll immediately want to use the same styles for deriving elements
(numbers) of these numeral sets, so you'll get double-struck or bold or
Fraktur variable names and digits (notably for zero, one, and the i, j, k
used by some other extended sets that drop a property such as
commutativity or distributivity of the basic operations).
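
For illustration, here is a minimal Python sketch (not normative, just
one way to do it) of how such a derived style maps onto Unicode code
points; note that a few double-struck capitals (ℂ, ℍ, ℕ, ℙ, ℚ, ℝ, ℤ)
were encoded earlier in the Letterlike Symbols block and must be
special-cased:

    # Map plain letters/digits to their double-struck counterparts in
    # the Mathematical Alphanumeric Symbols block (U+1D400..U+1D7FF).
    DOUBLE_STRUCK_HOLES = {
        'C': '\u2102', 'H': '\u210D', 'N': '\u2115', 'P': '\u2119',
        'Q': '\u211A', 'R': '\u211D', 'Z': '\u2124',
    }

    def double_struck(ch):
        if ch in DOUBLE_STRUCK_HOLES:      # earlier Letterlike Symbols
            return DOUBLE_STRUCK_HOLES[ch]
        if 'A' <= ch <= 'Z':
            return chr(0x1D538 + ord(ch) - ord('A'))   # 𝔸, 𝔹, ...
        if 'a' <= ch <= 'z':
            return chr(0x1D552 + ord(ch) - ord('a'))   # 𝕒, 𝕓, ...
        if '0' <= ch <= '9':
            return chr(0x1D7D8 + ord(ch) - ord('0'))   # 𝟘, 𝟙, ...
        return ch

    print(double_struck('1'), double_struck('N'))      # 𝟙 ℕ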

These styles then become functionally equivalent to diacritics, to a
prepended operator, or to subscripts/superscripts denoting the set into
which they are projected or their numeral system, except that they are
represented as a compact composite which is not easily decomposable.
But it is still possible:

ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ could just as well be written:

\SetN \subset \SetZ \subset \SetQ \subset \SetR \subset \SetC

(TeX is commonly used for such notation when composing documents)
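
(The \SetN-style macros above stand for user-defined commands; in
standard LaTeX with the amssymb package the same chain would typically
be written as:

    \documentclass{article}
    \usepackage{amssymb}
    \begin{document}
    $ \mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q}
                 \subset \mathbb{R} \subset \mathbb{C} $
    \end{document}

and \mathbb is exactly the kind of style-as-operator notation described
above.)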

The notation is functionally equivalent, but it obscures notations that
are already complex, so mathematicians invent various shortcuts or
compact representations in their texts. But you can't simply treat these
notations as a preferred visual style: the styles carry important strict
definitions that disambiguate the meaning. These formulas also have
strict layout restrictions, much more than usual plain text. We are in a
corner case where it is just safer to consider that math notations are
not text but binary objects that do not work very well with the Unicode
character model, and that are also far from the weak definition of
symbols.
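
Unicode's own compatibility normalization illustrates how poorly these
notations fit the plain-text character model; a minimal Python sketch:

    import unicodedata

    # NFKC folds the styled letters back to plain Latin letters,
    # erasing the mathematical distinction they were carrying.
    for styled in ('ℕ', '𝐍', '𝔑'):    # double-struck, bold, Fraktur N
        print(styled, '->', unicodedata.normalize('NFKC', styled))
    # all three print '... -> N'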

Even a basic variable name 'x' is not a letter x of the Latin script:
its letter case cannot be changed, it cannot be freely transliterated,
and side-by-side letters do not form "words". Its grammar belongs to a
very specific language and is highly contextual (and frequently altered
by document-specific prior definitions and conventions).
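
This is visible in the encoded characters themselves; a minimal Python
sketch (assuming a Unicode-aware runtime):

    import unicodedata

    x_math = '\U0001D465'              # MATHEMATICAL ITALIC SMALL X
    print(unicodedata.name(x_math))
    # Unlike ASCII 'x', the mathematical italic x has no uppercase
    # mapping, so case conversion leaves it unchanged:
    print('x'.upper())                 # X
    print(x_math.upper() == x_math)    # True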

No plain-text algorithm will work correctly with math notations,
notably at advanced levels (beyond the level taught in schools to
children learning arithmetic for daily social life). In fact I doubt
that Unicode should make much effort to encode them as more than an
informal collection of independent symbols living in their own world,
with their own script and their own "language(s)" (and there are many
languages, as many as there are authors in fact, and frequently more,
since authors invent specific languages for specific documents).

Most people in the world cannot grasp the level of abstraction meant by
these notations. It is already hard for them to accept the concept of
"negative" numbers, to understand that almost all real numbers are not
even representable, or to see what a complex number means; even the
multiplication of numbers is difficult to understand unless you bind it
to a 2D Cartesian space, and then they immediately wonder what happens
in their visible 3D world. Let's not even speak about zeroes, or
infinities, or curved spaces, or fractal dimensions, or infinitesimal
quantities that are not absolutely comparable in our commonly perceived
Cartesian space, of which we have a limited vision...

Their vision is more pragmatic: as long as they have a solution (or a
tool to compute it) and it gives satisfaction in most of the cases they
encounter in their life, they will not need to go further. Their vision
is bound to their "experience" (and experience is not bad in itself; it
is a strong base for science, the propagation of knowledge, and
utility). Most people are not probabilists: they favor statistics for
remembering their experience, guiding their immediate choices of
action, and explaining their intents to others.



2014-06-05 10:57 GMT+02:00 Hans Aberg <haberg-1 at telia.com>:

> On 5 Jun 2014, at 04:50, David Starner <prosfilaes at gmail.com> wrote:
>
> > On Wed, Jun 4, 2014 at 6:00 AM, Jukka K. Korpela
> > <jkorpela at cs.tut.fi> wrote:
> >> The change is logical in the sense that bold face is a more original
> >> notation and double-struck letters as characters imitate the imitation
> >> of boldface letters when writing by hand (with a pen or piece of
> >> chalk).
> >
> > On the other hand, bold face is a minor variation on normal types.
> > Double-struck letters are more clearly distinct, which is probably why
> > they moved from the chalkboard to printing in the first place. I don't
> > see much advantage of 𝐍𝐂𝐑𝐙𝐐 over ℕℂℝℤℚ, especially when
> > confusability with NCRZQ comes into play.
>
> The double-struck letters are useful in math, because they free other
> letter styles for other uses. At first only a few were used, for the
> natural, rational, real and complex numbers, but they became popular,
> so that all letters, uppercase and lowercase, are now available in
> Unicode.

