First - it is very encouraging to see a paper on a fascinating, and so far not too well described, language like Tamil, and the study is to be commended for its pioneering contribution in this field. BUT - and I have to say this is a big BUT - why was this paper submitted to the ACL? What is its relevance to finite state methods and dependency parsing? There is nothing of computational interest in the paper, though there is plenty of linguistic interest, and the application is of considerable relevance for speech technology applications to local languages. These comments do not affect the overall quality of the paper - relative to a speech context, which is acceptable. But there are plenty of conference outlets for speech technology work, and ACL is not one of them.

What I would have liked to see is a formalisation of the rule system in an appropriate algebra or logic, with discussion of its computational properties, i.e. complexity, processing regimes, data structures. A clearer statement of the evaluation methods and results achieved would also be required. Moreover, formulations like "will help to solve this problem" and "should be able to capture information" imply that the properties of the approach have not been sufficiently thought through or actually applied. Consequently I cannot recommend acceptance for the ACL conference, but would encourage re-writing, taking the points mentioned above into account, and re-submitting later to a speech or phonetics conference such as EUROSPEECH (INTERSPEECH), ICSLP or ICASSP.

This paper makes interesting reading, and offers - a rare treat in the computational field - a formal comparison of Indian languages in terms of their phonotactics, focussing on the historical fate of schwa. The paper is very uneven, however.
First, too much space is devoted to a naive functionalist variety of "natural" phonetic explanation: my little girl would like a statement like "Gradually the number of people speaking the schwa-deleted form of the word outnumbered those who spoke the standard form and finally the former replaced the latter" - but not me. Strangely, no reference is made to the foremost (and vastly more sophisticated) proponent of this type of explanation, Bjorn Lindblom.

Second, the "basic definitions" are REALLY basic set definitions, and it is not at all clear why a lexicon should be a "union" of words. It is simply a relation, i.e. the set of (word, pronunciation) pairs - in other words, a pronouncing dictionary in speech jargon. There is a fair amount of name-dropping (Vennemann, Kager, MacNeilage & Davis) without any evidence that these approaches are relevant.

Third, the syllable ranking function is simply a stipulation of a total ordering over coarsely defined syllable types. Genuine FS approaches (e.g. Carson-Berndsen, Belz) would provide a much more finely grained phonotactic basis for defining appropriate orderings.

Fourth, the two-pass algorithm could have been formulated more clearly in terms of finite state transducers, in which case the string-length-counting property (and non-determinism) of the algorithm would have been instantly obvious, casting doubt on the claim of linear complexity (this should at the very least have been demonstrated).

Having said all this, I find both the languages treated and the constraint-based idea very interesting. If the computational aspects are presented more fully and clearly, and the irrelevant and speculative functional phonetic bits thrown away, the paper could be an acceptable ACL paper. On the other hand, it could be argued that the paper would be equally relevant to a speech technology conference.

This paper provides interesting quantitative data about the phonotactics of European Portuguese.
But I am quite amazed that the authors even thought of submitting it to the ACL, because neither the topic nor the methodology of the paper is remotely suitable for an ACL conference. The paper has nothing at all to say about computation, let alone finite state methods or dependency parsing. The topic is more suitable for a speech technology or phonetics conference, but even so the methodology is very elementary - frequencies and percentages of phones in various contexts, with no attempt to draw interesting statistical, theoretical or practical conclusions.

The underlying idea of this paper is very interesting, and very appropriate for a highly underspecified representation system such as Arabic orthography, and the authors have clearly researched the problem reasonably thoroughly. What makes me hesitate are the following points.

First, the presentation of the problem leaves something to be desired.

Second, it is astonishing that in a submission to an ACL conference, in particular to the finite state methods track, the pioneering model of Arabic using multi-tape FS transducers should not even be mentioned.

Third, it is almost as astonishing that, in view of the underspecification problem, the treatment of Arabic morphophonology in an inheritance system by Reinhard et al. (1991) should also be ignored.

Fourth, I had hoped that computational linguistics had grown out of presenting raw Prolog code chunks with no discussion of their algorithmic complexity or the data structures they represent - there are enough appropriate formalisms for the underspecification problem without using programming code (for instance, a clear formulation in terms of AVMs and appropriate operations would have been preferable).

Fifth, the paper loses track of the main computational point in its disjointed discussion of clitic pronouns, semantics, and phonetic transcription.
Sixth, the paper concludes with a VERY naive discussion of prosody and speech synthesis - including what "MBROLA allows you to impose" - but without an actual conclusion. So the promise of the title is unfortunately only imperfectly realised. I would hope, though, that the authors will revise the paper bearing this critique in mind, and resubmit on another occasion.