Unicode String

Adds functions supporting some string algorithms in the Unicode standard, including case folding, case mapping and text segmentation.

Installation

The package can be installed by adding :unicode_string to your list of dependencies in mix.exs:

def deps do
  [
    {:unicode_string, "~> 1.0"},
    ...
  ]
end

Then run mix deps.get.

Word Break Dictionary Download

If you plan to perform word break segmentation on Chinese, Japanese, Lao, Burmese, Thai or Khmer languages you will need to download the word break dictionaries by running mix unicode.string.download.dictionaries.

Casing

Case Folding

The Unicode Case Folding algorithm defines how to perform case folding, which allows strings to be compared in a case-insensitive fashion. It does not provide a means for comparison that ignores diacritical marks (accents). Some examples follow; for details, see the Unicode Case Folding specification.

Note

Although the folding algorithm commonly downcases characters, folding is not a general-purpose downcasing process. It exists only to facilitate case-insensitive string comparison.

iex> Unicode.String.equals_ignoring_case? "ABC", "abc"
true

iex> Unicode.String.equals_ignoring_case? "beißen", "beissen"
true

iex> Unicode.String.equals_ignoring_case? "grüßen", "grussen"
false

Case Mapping

The Unicode Case Mapping algorithm defines the process and data to transform text into upper case, lower case or title case. Since most scripts are not bicameral, characters that have no case mapping remain unchanged.

Three case mapping functions are provided: Unicode.String.upcase/2, Unicode.String.downcase/2 and Unicode.String.titlecase/2.

Each function operates in a locale-aware manner, implementing some basic capabilities illustrated below: the dotted-I in Turkish and Azeri, removal of diacritics when upper casing Greek, the final sigma when lower casing Greek, and title casing of the leading "ij" in Dutch.

There are other casing rules that are not currently implemented.

# Basic case transformation
iex> Unicode.String.upcase("the quick brown fox")
"THE QUICK BROWN FOX"

# Dotted-I in Turkish and Azeri
iex> Unicode.String.upcase("Diyarbakır", locale: :tr)
"DİYARBAKIR"

# Upper case in Greek removes diacritics
iex> Unicode.String.upcase("Πατάτα, Αέρας, Μυστήριο", locale: :el)
"ΠΑΤΑΤΑ, ΑΕΡΑΣ, ΜΥΣΤΗΡΙΟ"

# Lower case Greek with a final sigma
iex> Unicode.String.downcase("ὈΔΥΣΣΕΎΣ", locale: :el)
"ὀδυσσεύς"

# Title case Dutch with leading diphthong
iex> Unicode.String.titlecase("ijsselmeer", locale: :nl)
"IJsselmeer"

Segmentation

The Unicode Segmentation annex (UAX #29) details the algorithms to be applied when segmenting text (Elixir strings) into graphemes, words and sentences; line breaking is covered by UAX #14. Some examples follow; for details, see those annexes.

# Split text at a word boundary.
iex> Unicode.String.split "This is a sentence. And another.", break: :word
["This", " ", "is", " ", "a", " ", "sentence", ".", " ", "And", " ", "another", "."]

# Split text at a word boundary but omit any whitespace
iex> Unicode.String.split "This is a sentence. And another.", break: :word, trim: true
["This", "is", "a", "sentence", ".", "And", "another", "."]

# Split text at a sentence boundary.
iex> Unicode.String.split "This is a sentence. And another.", break: :sentence
["This is a sentence. ", "And another."]

# By default, common abbreviations are suppressed (i.e.
# they do not cause a break)
iex> Unicode.String.split "No, I don't have a Ph.D. but I don't think it matters.", break: :word, trim: true
["No", ",", "I", "don't", "have", "a", "Ph.D", ".", "but", "I", "don't",
 "think", "it", "matters", "."]

iex> Unicode.String.split "No, I don't have a Ph.D. but I don't think it matters.", break: :sentence, trim: true
["No, I don't have a Ph.D. but I don't think it matters."]

# Sentence Break suppressions are locale sensitive.
iex> Unicode.String.Segment.known_locales
["de", "el", "en", "en-US", "en-US-POSIX", "es", "fi", "fr", "it", "ja", "pt",
 "root", "ru", "sv", "zh", "zh-Hant"]

iex> Unicode.String.split "Non, c'est M. Dubois.", break: :sentence, trim: true, locale: "fr"
["Non, c'est M. Dubois."]

# Note that break: :line does NOT mean split the string
# at newlines. It splits the string where a line break would be
# acceptable. This is very useful for calculating where
# to perform word-wrap on some text.
iex> Unicode.String.split "This is a sentence. And another.", break: :line
["This ", "is ", "a ", "sentence. ", "And ", "another."]

Dictionary-based word segmentation

Some languages, commonly East and Southeast Asian languages, don't typically use whitespace to separate words, so word-break segmentation requires a dictionary lookup.

This implementation supports dictionary-based word breaking for Chinese, Japanese, Thai, Lao, Khmer and Burmese.

The dictionaries are those used in CLDR since they are under an open source license and are consistent with ICU.

These dictionaries must be downloaded with mix unicode.string.download.dictionaries prior to use. Each dictionary is parsed and loaded into :persistent_term on demand, and each has a sizable memory footprint as measured by :persistent_term.info/0:

Dictionary   Memory (MB)
Chinese            104.8
Thai                 9.6
Lao                 11.4
Khmer               38.8
Burmese             23.1
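
Once downloaded, dictionary-based word breaking goes through the same split function shown earlier. A usage sketch, assuming the Thai dictionary is present; the locale option mirrors the sentence-break example above, and the output is illustrative rather than authoritative:

# Requires the dictionaries from mix unicode.string.download.dictionaries.
Unicode.String.split("ฉันอยากกินข้าว", break: :word, locale: "th", trim: true)
# => illustrative segmentation: ["ฉัน", "อยาก", "กิน", "ข้าว"]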

How dictionary break works

For Thai, Lao, Khmer, and Burmese the dictionary break algorithm is implemented in Unicode.String.DictionaryBreak. It uses the same approach as ICU's DictionaryBreakEngine: a cost-based lookahead that considers multiple word candidates at each position to find the best segmentation.

The algorithm proceeds through the text as follows; a simplified sketch in Elixir appears after the list:

  1. Candidate gathering. At each position, all dictionary words that start at that position are found (shortest to longest match) using prefix search against the trie-structured dictionary.

  2. Single candidate. If exactly one candidate matches, it is accepted immediately.

  3. Multiple candidates with 3-word lookahead. When multiple candidates exist, each is tested by looking ahead up to two more words. The candidate that leads to the longest chain of consecutive dictionary words wins. Candidates are tried longest-first, and the first candidate confirmed by a 3-word chain is accepted.

  4. Non-dictionary resync. When no dictionary word is found (or only a very short one), the algorithm scans forward through non-dictionary characters until reaching a position where dictionary words resume. The non-dictionary stretch is combined with the preceding word.

  5. Combining mark absorption. After each word boundary, any following Unicode combining marks (General Category M — vowel signs, tone marks, virama/coeng characters) are absorbed into the preceding word so that diacritics remain attached to their base.

  6. Thai suffix handling. For Thai, the suffix characters PAIYANNOI (U+0E2F) and MAIYAMOK (U+0E46) are absorbed into the preceding word when no dictionary word follows.
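
What follows is a minimal, self-contained Elixir sketch of steps 1 to 3, with a single-grapheme fallback as a crude stand-in for step 4's resync. It uses a plain MapSet in place of the library's trie, and the module and function names are illustrative only, not the library's API:

defmodule LookaheadSketch do
  @max_chain 3

  # Step 1: gather every dictionary word starting at the head of `text`,
  # ordered longest first (a MapSet scan stands in for the trie
  # prefix search).
  def candidates(text, dict) do
    for len <- String.length(text)..1//-1,
        word = String.slice(text, 0, len),
        MapSet.member?(dict, word),
        do: word
  end

  def segment("", _dict), do: []

  def segment(text, dict) do
    case candidates(text, dict) do
      # No dictionary word: emit one grapheme and retry (a crude
      # stand-in for step 4's non-dictionary resync).
      [] ->
        {grapheme, rest} = String.next_grapheme(text)
        [grapheme | segment(rest, dict)]

      # Step 2: a single candidate is accepted immediately.
      [only] ->
        [only | segment(chop(text, only), dict)]

      # Step 3: prefer the candidate whose continuation yields the
      # longest chain of consecutive dictionary words. Candidates are
      # ordered longest-first, so ties favour the longer word.
      many ->
        best = Enum.max_by(many, &chain_length(chop(text, &1), dict, 1))
        [best | segment(chop(text, best), dict)]
    end
  end

  # Look ahead at most two words beyond the current candidate.
  defp chain_length(_rest, _dict, depth) when depth >= @max_chain, do: @max_chain

  defp chain_length(rest, dict, depth) do
    case candidates(rest, dict) do
      [] -> depth
      next -> next |> Enum.map(&chain_length(chop(rest, &1), dict, depth + 1)) |> Enum.max()
    end
  end

  defp chop(text, word), do: String.slice(text, String.length(word)..-1//1)
end

Steps 5 and 6 would then run as a post-pass, absorbing combining marks and the Thai suffix characters into the word that precedes them.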

For Chinese and Japanese, the standard UAX #29 word-break rules are used with dictionary lookups for ideographic character sequences. The dictionary determines word boundaries within runs of CJK ideographs.
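
A hedged illustration of the CJK case follows; the exact boundaries depend on the CLDR dictionary contents, so the output shown is indicative only:

# Word breaking a run of Chinese ideographs (requires the downloaded
# dictionaries; output is indicative only).
Unicode.String.split("今天天氣很好", break: :word, trim: true)
# => e.g. ["今天", "天氣", "很好"]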

Mixed-script text

When text contains a mix of dictionary-script characters and other scripts (e.g., a Khmer sentence with embedded Latin words), the split_with_fallback/3 function partitions the text into same-script runs. Dictionary breaking is applied to the target-script ranges, and a fallback function (typically the standard UAX #29 word breaker) handles the rest. The results are concatenated to produce a single segmentation covering the full string.
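
The run-partitioning idea can be sketched independently of the library's internals. Everything below is illustrative, not the library's API: chunk the string into same-script runs, send target-script runs to the dictionary breaker and everything else to the fallback, then concatenate the segments:

defmodule RunPartitionSketch do
  # `target?` tests a single grapheme for membership in the dictionary
  # script, e.g. fn g -> g =~ ~r/\p{Khmer}/u end. `dict_break` and
  # `fallback` each take a string and return a list of segments.
  def split(text, target?, dict_break, fallback) do
    text
    |> String.graphemes()
    |> Enum.chunk_by(target?)
    |> Enum.flat_map(fn [first | _] = run ->
      string = Enum.join(run)
      if target?.(first), do: dict_break.(string), else: fallback.(string)
    end)
  end
end

Note that Enum.chunk_by/2 groups consecutive graphemes by the predicate's boolean, so adjacent non-target scripts (Latin, digits, punctuation) end up in one run and are all handled by the fallback.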

See conformance.md for details on conformance with the UAX #29 break algorithm and differences between this implementation and ICU.

Segment Streaming

Segmentation can also be streamed using Unicode.String.stream/2. For large strings this may improve memory usage since the intermediate segments will be garbage collected when they fall out of scope.

iex> Enum.to_list Unicode.String.stream("this is a list of words", trim: true)
["this", "is", "a", "list", "of", "words"]

iex> Enum.map Unicode.String.stream("this is a list of words", trim: true),
...>   fn word -> %{word: word, length: String.length(word)} end
[
  %{length: 4, word: "this"},
  %{length: 2, word: "is"},
  %{length: 1, word: "a"},
  %{length: 3, word: "list"},
  %{length: 2, word: "of"},
  %{length: 5, word: "words"}
]
