Ubuntu Trusty Tahr is going to be released on April 17th 2014.
The font is already available in Debian. In both Ubuntu and Debian you can install the font with:
sudo apt-get install fonts-meera-taml
Thanks to Vasudev for packaging it for Debian.
One of the integral building blocks for providing multilingual support for digital content is fonts. Today, OpenType fonts are the preferred choice. With the increasing need for supporting languages beyond the Latin script, the TrueType font specification was extended to include elements for the more elaborate writing systems that exist. This effort was jointly undertaken in the 1990s by Microsoft and Adobe, and its outcome was the OpenType Specification – a successor to the TrueType font specification.
Fonts for Indic languages had traditionally been created for the printing industry. The TrueType specification provided the baseline for the digital fonts that were largely used in desktop publishing. These fonts, however, suffered from inconsistencies arising from technical shortcomings like non-uniform character codes. These shortcomings made the fonts highly unreliable for digital content and for use across platforms. The problems with character codes were largely alleviated by the gradual standardization that came with the adoption of Unicode character codes. The OpenType Specification additionally extended the styling and behaviour of the typography.
The availability of the specification eased the process of creating Indic language fonts with consistent typographic behaviour as per each script's requirements. However, disconnects between the styling and the technical implementation hampered the font creation process. Several well-stylized fonts were upgraded to the new specification through complicated adjustments, which at times compromised their aesthetic quality. On the other hand, the technical adoption of the specification details was comparatively new know-how for font designers. To strike a balance, an initiative was undertaken by a group of font developers and designers to document the knowledge acquired from their hands-on experience, for the benefit of upcoming developers and designers in this field.
The outcome of the project will be an elaborate, illustrated guideline for font designers. A chapter will be dedicated to each of the Indic scripts – Bengali, Devanagari, Gujarati, Kannada, Malayalam, Odia, Punjabi, Tamil and Telugu. The guidelines will outline the technical representation of the canonical aspects of these complex scripts. This is especially important when designing for complex scripts where the shape or positioning of a character depends on its relation to other characters.
This project is open for participation and contributors can commit directly on the project repository.
This is a follow-up to a four-year-old blog post about hyphenation. Hyphenation allows the controlled splitting of words to improve the layout of paragraphs, typically splitting words at syllabic or morphemic boundaries and visually indicating the split (usually with a hyphen).
More recently browsers such as Firefox, Safari and Chrome have begun to support the CSS3 hyphens property, with hyphenation dictionaries for a range of languages, to support automatic hyphenation.
For hyphenation to work correctly, the text must be marked up with language information, using the language tags described earlier. This is because hyphenation rules vary by language, not by script. The description of the hyphens property in CSS says “Correct automatic hyphenation requires a hyphenation resource appropriate to the language of the text being broken. The user agent is therefore only required to automatically hyphenate text for which the author has declared a language (e.g. via HTML lang or XML xml:lang) and for which it has an appropriate hyphenation resource.”
-webkit-hyphens: auto;
-moz-hyphens: auto;
-ms-hyphens: auto;
-o-hyphens: auto;
hyphens: auto;
CSS Text Level 3 does not define the exact rules for hyphenation; however, user agents are strongly encouraged to optimize their line-breaking implementations to choose good break points and appropriate hyphenation points.
Firefox has hyphenation rules for about 40 languages. A complete list of the languages supported in Firefox and Internet Explorer is available on the Mozilla wiki.
You can see that none of the Indian languages are listed there. Hyphenation rules can be reused from the TeX hyphenation rules. Jonathan Kew was importing the hyphenation rules from TeX, and I had requested importing the rules for Indian languages too. But that was more than a year ago, and there has not been much progress since. Apparently there was a licensing issue with the derived work, but it looks like it has already been resolved.
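The TeX rules mentioned above are Liang-style hyphenation patterns: digits inside a pattern assign weights to the positions between letters, and an odd final weight marks an allowed break. Here is a toy sketch in Python of just the matching step; the pattern 'y3p' is invented purely for illustration, and real pattern files also carry boundary markers and left/right minimums that this ignores:

```python
def hyphenation_points(word, patterns):
    """Find permitted hyphenation points using Knuth-Liang style
    patterns (the scheme behind TeX hyphenation files).

    Digits inside a pattern assign weights to inter-letter positions;
    an odd final weight marks an allowed break."""
    w = '.' + word.lower() + '.'      # '.' marks the word boundaries
    weights = [0] * (len(w) + 1)      # weight before each position in w
    for start in range(len(w)):
        for pat in patterns:
            # Strip digits to get the letters the pattern must match.
            letters = ''.join(c for c in pat if not c.isdigit())
            if not w.startswith(letters, start):
                continue
            pos = start
            for c in pat:
                if c.isdigit():
                    weights[pos] = max(weights[pos], int(c))
                else:
                    pos += 1
    # A break before word[i] corresponds to position i + 1 in w;
    # breaks at the very edges of the word are never allowed.
    return [i for i in range(1, len(word)) if weights[i + 1] % 2]

# 'y3p' is a made-up pattern meaning "prefer a break between y and p".
print(hyphenation_points('hyphen', ['y3p']))  # -> [2], i.e. "hy-phen"
```

Real TeX pattern files contain thousands of such patterns per language, which is exactly what makes importing them into browsers attractive.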
While this is all well and good, it doesn't provide the fine-grained control you may require to get professional results. For this, CSS Text Level 4 introduces more features.
PS: Sometimes hyphenation can be very challenging. For example, hyphenating the 746-letter-long name of Wolfe+585, Senior.
The Malayalam Wikisource community today released the first offline version of Malayalam Wikisource, during the 4th annual wiki meetup of Malayalam wikimedians. To the best of our knowledge, this is the first time a Wikisource project has released an offline version. The Malayalam wiki community had released the first offline version of Malayalam Wikipedia a year ago.
Releasing the offline version of a Wikisource is a challenging project. The technical aspects of the project were designed and implemented by myself, so let me share the details.
As you know, a Wikisource contains a lot of books, and each book varies in size and is divided into chapters or sections. There is no common pattern for books; each has its own structure. A novel's presentation is different from that of a collection of poems. Wikisource also has religious books like the Bible, the Quran, the Bhagavad Gita, the Ramayana etc. Since books are meant for continuous reading over a long time, readability and how we present lengthy chapters on screen also matter. Offline Wikipedia tools such as Kiwix do not do any layout modification of the content and present it as it is shown in Wikipedia/Wikisource. The tool we wrote last year for the Malayalam Wikipedia offline version also presents scrollable vertical content on the screen. Neither is configurable to give different presentation styles depending on the nature of the book.
What we wanted was a book-reader kind of application interface. Readers should be able to easily navigate to books and chapters. Chapter content can be very lengthy, and for long reading sessions a lengthy vertically scrolled text is not a good idea. We also needed to take care of the width of the lines: if each line spans 80-90% of the screen, especially on a wide-screen monitor, it is a strain on the neck and eyes.
The selection of books for the offline version was done by the active wikimedians at Wikisource. Some of the selected books were proofread by many volunteers within the last two weeks.
The tools used for extracting the HTML were ad hoc and adapted to meet the good presentation of each book, so there is not much to reuse here. Extracting the HTML, taking the content part alone using pyquery, and removing some unwanted sections – basically this is what our scripts did. The content is added to predefined HTML templates with proper CSS for the UI. The CSS3 multi-column feature was used for the book-like interface. Since IE did not implement this standard even in IE9, the book-like interface was not provided for that browser. Chrome versions below 12 could not support it either, because of these bugs: http://code.google.com/p/chromium/issues/detail?id=45840 and http://code.google.com/p/chromium/issues/detail?id=78155. For easy navigation, mouse-wheel support and page navigation buttons were provided. To solve the non-availability of required fonts, webfonts were integrated, with a selection box to choose a favorite font. Readers can also select the font size to make reading comfortable.
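Our scripts used pyquery for the content-extraction step. The sketch below is a stand-in, not the actual script: it uses only the standard library's html.parser and assumes (as in MediaWiki's default output) that the article body sits in a div with id "mw-content-text":

```python
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    """Collect the text inside the <div> with a given id, dropping
    everything outside it (navigation, site notices, footers)."""
    def __init__(self, content_id):
        super().__init__()
        self.content_id = content_id
        self.depth = 0          # div-nesting depth inside the target
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == 'div':
                self.depth += 1
        elif tag == 'div' and dict(attrs).get('id') == self.content_id:
            self.depth = 1      # entered the content div
    def handle_endtag(self, tag):
        if self.depth and tag == 'div':
            self.depth -= 1
    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

def extract(html, content_id='mw-content-text'):
    parser = ContentExtractor(content_id)
    parser.feed(html)
    return ''.join(parser.chunks).strip()

page = ('<body><div id="siteNotice">donate!</div>'
        '<div id="mw-content-text"><p>Chapter 1</p></div></body>')
print(extract(page))  # -> Chapter 1
```

The real scripts did more than this (per-book cleanup, template stuffing), but the shape is the same: select the content node, discard the rest.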
Why static HTML? The variety of platforms and versions we needed to support, the necessity of webfonts and complex script rendering, the effort to develop and customize the UI, the relatively small size of the data, and avoiding any software installation on users' systems made us choose static HTML + jQuery + CSS as the technology. The downside is that we could not provide full-text search.
Apart from the Wikisource content, we also included a collection of copyleft images from Wikimedia Commons. Thanks to Nishan Naseer for preparing a gallery application using jQuery. We selected 4 categories from Commons which are related to Kerala. We hope everybody will like the pictures, and that it will give a small introduction to Wikimedia Commons.
Even though the Python scripts are not ready for reuse in other projects, if anybody wants to have a look at them, please mail me. I am not putting them in public since the scripts do not make sense outside the context of each book and its existing presentation in Malayalam Wikisource.
Thanks to Shiju Alex for coordinating this project, and thanks to all Malayalam Wikisource volunteers for making this happen. We have included poems, folk songs, devotional songs, a novel, a grammar book, tales, and books on Hinduism, Islam, Christianity, Communism and Philosophy. With this release, it becomes the biggest offline digital archive of Malayalam books.
I am just back from the MediaWiki Berlin Hackathon. From May 13 to 15, MediaWiki developers attended the hackathon, squashed many bugs and discussed many features. Members of the language committee had their first real-life meeting in parallel with the hackathon. It was a nice event; I learned a lot and talked to many awesome hackers and linguists.
Sourashtra is a language spoken by the Sourashtra people living in southern Tamil Nadu and in Gujarat, India. With a script that originated from Brahmi and then Grantha, this language is the mother tongue of half a million people. But most of them are not familiar with the script of this language; very few people can read and write the Sourashtra script. Sourashtra has an ISO 639-3 language code, saz, and the Unicode range U+A880 – U+A8DF.
Recently a Sourashtra Wikipedia project was started in the Wikimedia Incubator: http://incubator.wikimedia.org/wiki/Wp/saz and MediaWiki localization started in translatewiki. Since the language did not have any proper fonts or input tools, this was not going well.
When we add support for a new language in MediaWiki or start a new language Wikipedia, we need to develop the language technology ecosystem to support its growth. This ecosystem comprises Unicode code points for the script, proper fonts, rendering support, input tools, and the availability of these fonts and input tools in operating systems, or alternate ways to get them working.
The Sourashtra language had a Unicode font developed by Prabu M Rengachari, named ‘Sourashtra’ itself. The font had problems with browsers and operating systems; we fixed it to make it work properly. The font was also not licensed properly. Prabu agreed to release it under the GNU GPLv3 license with the font exception, and also agreed to rename the font so it no longer uses the script name itself.
Once we had a font with a proper license, we wanted it to be available in operating systems. I filed a packaging request in Debian. Vasudev Kamath of the Debian India team packaged it, and it is now available in Debian unstable (sid). Parag Nemade of Fedora India packaged the font for Fedora, and it will be available in the upcoming Fedora 15.
To add support for a new language in an operating system, we need a locale definition; in GNU/Linux this is the glibc locale definition. With the help of Prabu, I prepared the saz_IN locale file for glibc and filed a bug report to add it to glibc. I hope it will soon be part of glibc.
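For reference, a glibc locale definition is a plain-text file of categorized sections. A heavily abbreviated sketch of its general shape (the field values here are illustrative, not the actual saz_IN submission):

```
comment_char %
escape_char /

LC_IDENTIFICATION
title      "Sourashtra locale for India"
language   "Sourashtra"
territory  "India"
END LC_IDENTIFICATION

LC_COLLATE
% Ordering for the Sourashtra block (U+A880..U+A8DF) would go here;
% most locales start from the common ISO 14651 table.
copy "iso14651_t1"
END LC_COLLATE
```

Each LC_* category (LC_CTYPE, LC_TIME, LC_NUMERIC and so on) gets its own section, and `copy` pulls in definitions from another locale or common table.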
Well, all of this was possible because it is GNU/Linux and free software. Things are a bit difficult on the other side, in the proprietary operating system world; there is nothing we can do with those operating systems. Since there is no ‘market’ for these minority languages, adding support for them does not become a priority for those companies. Users will see squares or question marks when they visit the Sourashtra Wikipedia.
We are working on a solution for this – not only for Sourashtra, but a common solution for all languages. We are developing a WebFonts extension for MediaWiki to embed fonts in wiki pages, avoiding the necessity of having fonts installed on users' computers. The extension is in development and can be previewed on my test wiki. For Sourashtra, we added webfonts support (preview).
Input tools need to be developed and packaged. For MediaWiki, with the help of the Narayam extension, we can easily add this support.
A demo of cross language approximate search in Indic text:
The Malayalam word സാമ്പാര് is compared against a paragraph from http://ml.wikipedia.org/wiki/Sambar.
In the bottom half, words marked in yellow color are search results.
You can see that a Kannada word ಸಾಂಬಾರ್ is matched for Malayalam word. And that is why this is called cross-language.
The inflections of the word സാമ്പാര് – സാമ്പാറും, സാമ്പാറു etc. – are also found as results.
This is the kind of search we need for Indic languages, not just the letter-by-letter comparison we do for English.
You can try it here: http://silpa.org.in/ApproxSearch
By mixing “written like” (edit distance) and “sounds like” (soundex) measures, we achieve efficient approximate string searching. This application is capable of cross-language string search too: you can search for Hindi words in Malayalam text. If there is any Malayalam word which is an approximate transliteration of the Hindi word, or sounds like it, it will be returned as an approximate match. The “written like” algorithm used here is a bigram average algorithm: the ratio of the number of bigrams common to the two strings to the average number of bigrams gives a factor greater than zero and less than or equal to one. Similarly, the soundex algorithm also gives a weight. By selecting words whose comparison weight is above the threshold weight (which is 0.6), we get the search results.
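The “written like” part can be sketched in a few lines of Python. This is a simplified illustration of the bigram-average idea described above, not the actual Silpa code:

```python
def bigrams(s):
    """Set of adjacent character pairs in s."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def written_like(a, b):
    """Bigram-average similarity: the number of common bigrams
    divided by the average bigram count of the two strings,
    giving a score between 0.0 and 1.0."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    common = len(ba & bb)
    average = (len(ba) + len(bb)) / 2.0
    return common / average

THRESHOLD = 0.6  # words scoring above this count as matches

# An inflected form shares most bigrams with the base word,
# so it still scores well above the threshold.
print(written_like('sambar', 'sambaru'))
```

Since bigrams survive small suffix changes, inflections like സാമ്പാറും still score above the threshold against സാമ്പാര്, which is exactly why this beats letter-by-letter comparison.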
A few months back, we started fixing the collation rules for Indian languages in the GNU C library. Pravin Satpute prepared patches for many languages, and I prepared patches for Malayalam and Tamil. Later Pravin enhanced the Tamil patch.
You can read the rules used for Malayalam collation here [PDF document]. The Tamil patch was applied upstream, but the bug is still open since there is some confusion about the results.
Before reading the discussion below, please read the discussion in the bug report: [ta_IN] Tamil collation rules are not working in other locales.
Since many Tamil friends can give valuable comments on this, I am giving an explanation of my patch here. K Sethu gave some interesting comments on the patch, and I would like to hear from others as well. Since collation is a very important component of Tamil support, I feel that an open discussion and consensus should happen among the language's speakers, outside bug trackers.
This is the logic currently used in Tamil, and the Malayalam collation rules also follow the same logic.
Apart from these, equal weights are assigned to ோ (0BCB), ௌ (0BCC) and their canonical equivalent forms.
If anybody is interested in testing the patch, get the ta_IN and iso14651_t1_common files from here, back up the corresponding files in /usr/share/i18n/locales, and place these two files there. Reconfigure your locale using “sudo dpkg-reconfigure locales”, then sort some random file using “LANG=ta_IN sort yourfile”. If your distro is not Debian based, follow the instructions from here.
There is an easier way to test this: the Silpa project provides an online application for Indic language collation, which you can try from here. It is a Unicode Collation Algorithm implementation. The Unicode collation definition has many mistakes, but we have a patched version, and you can compare the results between the original and the patched version.
Feel free to point anybody interested in Tamil computing to this discussion. I would be happy to help with the implementation if we reach a consensus.
Recently, while preparing a critique of the IDN policy for the Malayalam language prepared by CDAC, I noticed that ICANN does not allow control characters in domain names. Some time back I noticed that Python 3 also does not allow control characters in identifiers. This blog post attempts to analyze the issue by looking at what the Unicode and ICANN specifications say about these special characters.
Apart from the existing characters of Indic languages, Zero Width Joiner (ZWJ) and Zero Width Non-Joiner (ZWNJ) are widely used in Indic languages to control how ligatures are formed. For some samples of how they are used, refer to the Wikipedia links. Being invisible control characters, they are often removed during normalization, particularly before doing a string comparison or collation (sorting).
Identifiers – strings that uniquely represent some data – often have a policy on what kind of characters they can contain. For example, an email address, which unambiguously identifies somebody's mailbox, does not allow space characters in it. Some examples of this kind of identifier are email ids, web domain addresses, and variables in programming languages.
Gone are the days when identifiers could be represented only using English characters. Python 3.0+ allows you to define a variable in a program using any word that can be represented in Unicode. For more details on this Python feature, read PEP 3131 – Supporting Non-ASCII Identifiers. Some samples: a program written in Malayalam, in Tamil, and in Hindi.
The same is the case with web addresses. With the advent of Internationalized Domain Names (IDN), which allow you to register web addresses in your own language, the English-only web address scene is changing.
But this change brings some issues to the definition of ‘identifiers’: just as for English, which characters are allowed in a domain name or a programming-language identifier? Standards and specifications are being drafted on this for each language. For internationalized domain names in Indian languages, CDAC is drafting the policy; for Python, PEP 3131 has the specification.
As a general rule, the Unicode standard and the standards based on it do not allow the use of Unicode control characters such as zwj and zwnj in identifiers. Based on that, The Internet Corporation for Assigned Names and Numbers (ICANN), through RFC 3454, prohibits a list of control characters. RFC 3454 is used as a specification for converting a Unicode-encoded domain name to its Punycode version for validation. For example, Thottingal in Malayalam – തോട്ടിങ്ങല് (0D24 0D4B 0D1F 0D4D 0D1F 0D3F 0D19 0D4D 0D19 0D32 0D4D 200D) – when converted to Punycode becomes xn--fwcaqax2g2d7dtadc. This conversion excludes the zwj at the end of the word. If I do a reverse conversion from xn--fwcaqax2g2d7dtadc to Unicode, what I get is തോട്ടിങ്ങല് (0D24 0D4B 0D1F 0D4D 0D1F 0D3F 0D19 0D4D 0D19 0D32 0D4D). Note that the codepoint 200D – ZWJ – is removed. That means I cannot register my domain thottingal.in in Malayalam properly. You can verify this using the ICU online converter. Now another example: Tamilnadu in Malayalam – തമിഴ്നാട് (0D24 0D2E 0D3F 0D34 0D4D 200C 0D28 0D3E 0D1F 0D4D) – becomes xn--lwcjmx4a2de7id. When I do a reverse conversion, I get തമിഴ്നാട് (0D24 0D2E 0D3F 0D34 0D4D 0D28 0D3E 0D1F 0D4D); now the ZWNJ (200C) is missing. Try it yourself using the converter. This means one cannot register a website with Tamilnadu written in Malayalam properly. The IDN policies for Indic languages are based on these exclusion rules for zwj and zwnj.
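The ZWJ loss described above can be reproduced with Python's built-in idna codec, which implements the RFC 3454 nameprep processing discussed here (nameprep maps U+200C and U+200D to nothing, so the round trip loses them). A small sketch, building the word from its code points:

```python
# Thottingal in Malayalam, with a trailing ZWJ (U+200D), built from
# the code points quoted in the post.
word = ('\u0d24\u0d4b\u0d1f\u0d4d\u0d1f\u0d3f'
        '\u0d19\u0d4d\u0d19\u0d32\u0d4d\u200d')

# Python's 'idna' codec applies RFC 3454 nameprep before the
# Punycode conversion used for domain registration.
ascii_name = word.encode('idna')          # the xn--... form
round_trip = ascii_name.decode('idna')    # back to Unicode

print('\u200d' in word)        # True: we asked for the joiner
print('\u200d' in round_trip)  # False: nameprep silently dropped it
```

Note this is the older IDNA2003 behaviour that Python's built-in codec implements; the point of the rest of this post is that a TR31-aware policy would preserve the joiner in its valid contexts.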
For Python 3.0+, you cannot have a programming-language identifier with zwj, zwnj or any other control character in it. See this bug report for more details: Issue 5358.
All of the above issues stem from the assumption that zwj and zwnj are prohibited in identifiers in all cases. But that is not true. Look at Unicode Standard Annex 31 – “Unicode Identifier and Pattern Syntax” (TR31). TR31 is based on Public Review 96 – “Allowing Special Characters in Identifiers”.
This annex describes specifications for recommended defaults for the use of Unicode in the definitions of identifiers and in pattern-based syntax. It also supplies guidelines for use of normalization with identifiers. […]
default-ignorable characters are normally excluded from Unicode identifiers. However, visible distinctions created by certain format characters (particularly the Join_Control characters) are necessary in certain languages. A blanket exclusion of these characters makes it impossible to create identifiers with the correct visual appearance for common words or phrases in those languages. Identifier systems that attempt to provide more natural representations of terms in modern, customary use should allow these characters in input and display, but limit them to contexts in which they are necessary. […]
But since the characters are invisible, to meet security considerations it should be clearly defined where they can be used. What if a domain is registered with 5 consecutive zwnjs in it? It would look the same as a string with 4 zwnjs. So TR31 defines 3 valid cases where zwnj and zwj can be used in an identifier.
These 3 cases cover all the zwj/zwnj usage patterns in our languages.
So now it is clear that the Unicode standard allows them in identifiers. In that case, there should not be a conflict between the Unicode identifier policy and the ICANN policy, or any other identifier policy such as PEP 3131; blanket exclusion of these characters is not justified. RFC 3454 should therefore be made compatible with TR31, and the IDN policy for Indic languages should be based on that newer specification rather than on the existing RFC 3454. Since CDAC is responsible for the Indic domain policy, they should take responsibility for bringing about this change.
Having said that, is it desirable to have two domains, one with a valid zwj/zwnj usage and another without? They will, of course, be visually different, avoiding any possibility of spoofing. But the question is whether those two strings represent two different words in the language.
As far as Malayalam is concerned there are three cases here:
So the question is whether a domain that differs from an existing registered domain only by a valid zwj/zwnj should be allowed. I would suggest using the existing policy for domain comparison here: if the collation weights of the existing domain and the to-be-registered domain are the same, don't register the new one. ZWJ and ZWNJ are characters with zero collation weight, and they are ignored in collation and string comparison.
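A minimal sketch of that comparison rule in Python. A real registry would compute full UCA collation keys; here only the joiners (the zero-collation-weight characters at issue) are stripped before comparing, which is the part that matters for this policy:

```python
import unicodedata

IGNORED = {'\u200c', '\u200d'}   # ZWNJ, ZWJ: zero collation weight

def registration_key(domain):
    """Comparison key for a domain label: NFC-normalize, then drop
    the zero-weight joiner characters, mirroring how collation
    ignores them."""
    normalized = unicodedata.normalize('NFC', domain)
    return ''.join(c for c in normalized if c not in IGNORED)

def conflicts(existing, candidate):
    """True if the candidate differs from an existing domain only by
    ZWJ/ZWNJ usage, so registration should be refused."""
    return registration_key(existing) == registration_key(candidate)

# The same word with and without a trailing ZWJ collides under this
# rule, so only one of the pair can ever be registered.
print(conflicts('\u0d28\u0d3e\u0d1f\u0d4d\u200d',
                '\u0d28\u0d3e\u0d1f\u0d4d'))  # True
```

Under this rule the visually distinct zwj/zwnj variant can still be displayed, but it can never be registered alongside its twin, which closes the spoofing question.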
Recently we released two Jabber buddy bots for dictionary lookup. By adding email@example.com as a chat contact, one can ask for the meaning of an English word in Malayalam by just sending a chat message. Similarly, for an English-Hindi or Hindi-English dictionary, we have another bot, firstname.lastname@example.org. Both of these dictionaries use Dict databases based on the DICT protocol.
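The DICT protocol (RFC 2229) that these databases speak is a simple line-based text protocol: the client sends a DEFINE command, the server answers with numbered status lines, and each definition body ends with a line containing only “.”. A hedged sketch of the response-parsing side (this is not the actual bot code, and the sample transcript is invented):

```python
def parse_define_response(lines):
    """Parse the server lines that follow a DICT 'DEFINE' command.

    Each definition starts with a '151' status line and its body
    runs until a lone '.' line.  Returns a list of definition
    bodies.  Simplified: real responses also carry database names
    and quoting that are ignored here."""
    definitions = []
    body = None
    for line in lines:
        if line.startswith('151'):
            body = []                       # a new definition begins
        elif line == '.' and body is not None:
            definitions.append('\n'.join(body))
            body = None
        elif body is not None:
            body.append(line)
    return definitions

# Invented sample transcript of a reply to: DEFINE eng-hin hello
reply = [
    '150 1 definitions retrieved',
    '151 "hello" eng-hin "English-Hindi dictionary"',
    'hello',
    '  namaste',
    '.',
    '250 ok',
]
print(parse_define_response(reply))
```

The bot's job is then mostly glue: take the incoming chat message, issue DEFINE against the Dict database, and send the parsed body back as the chat reply.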
Both of these bots were well received by users; we have 8000+ users for the English-Malayalam dictionary, and online blogs/media also gave good publicity. Thanks a lot!
SMC developers Rajeesh Nambiar, Ershad, Ragsagar and Sarath Lakshman helped in improving the program. You can get the source code from here; it is a small program written using a Python XMPP library.
We had written these programs a year back, in December 2009 itself, but could not launch them for the public since we did not have a server to host them. Usually web hosting providers won't allow running programs like this on their servers. Recently netdotnet.com provided a VPS server for SMC and we could launch them from that server.
The English-Hindi dictionary is reasonably big, but the English-Malayalam one is very small, with only ~10k words. So we just added a Malayalam Wiktionary backend to the bot.
Here is a video, prepared by Varun Verma, on how to use the English-Hindi bot.
We can start this kind of bot for other languages too, if we have dictionaries with Free Software compatible licenses. If interested, please contact me.