Overcoming the customization bottleneck using example-based MT

Stephen D. Richardson, William B. Dolan, Arul Menezes, Monica Corston-Oliver

Microsoft Research                              Butler Hill Group
One Microsoft Way                               4610 Wallingford Ave. N.
Redmond, WA 98052                               Seattle, WA 98103
{steveri, billdol, arulm}@microsoft.com         moco@butlerhill.com

Abstract

We describe MSR-MT, a large-scale hybrid machine translation system under development for several language pairs. This system's ability to acquire its primary translation knowledge automatically, by parsing a bilingual corpus of hundreds of thousands of sentence pairs and aligning the resulting logical forms, demonstrates true promise for overcoming the so-called MT customization bottleneck. For a system trained on English and Spanish technical prose, a blind evaluation shows that MSR-MT's integration of rule-based parsers, example-based processing, and statistical techniques produces translations whose quality exceeds that of uncustomized commercial MT systems in this domain.

1 Introduction

Commercially available machine translation (MT) systems have long been limited in their cost effectiveness and overall utility by the need for domain customization. Such customization typically includes identifying relevant terminology (especially multi-word collocations), entering this terminology into system lexicons, and making additional tweaks to handle formatting and even some syntactic idiosyncrasies. One of the goals of data-driven MT research has been to overcome this customization bottleneck through automated or semi-automated extraction of translation knowledge from bilingual corpora.

To address this bottleneck, a variety of example-based machine translation (EBMT) systems have been created and described in the literature. Some of these employ parsers to produce dependency structures for the sentence pairs in aligned bilingual corpora, which are then aligned to obtain transfer rules or examples (Meyers et al. 2000; Watanabe et al. 2000). Other systems extract and use examples that are represented as linear patterns of varying complexity (Brown 1999; Watanabe and Takeda 1998; Turcato et al. 1999). For some EBMT systems, substantial collections of examples are also manually crafted or at least reviewed for correctness after being identified automatically (Watanabe et al. 2000; Brown 1999; Franz et al. 2000). The efforts that report accuracy results for fully automatic example extraction (Meyers et al. 2000; Watanabe et al. 2000) do so for very modest amounts of training data (a few thousand sentence pairs). Previous work in this area thus raises the possibility that manual review or crafting is required to obtain example bases of sufficient coverage and accuracy to be truly useful. Other variations of EBMT systems are hybrids that integrate an EBMT component as one of multiple sources of transfer knowledge (in addition to other transfer-rule or knowledge-based components) used during translation (Frederking et al. 1994; Takeda et al. 1992).

To our knowledge, commercial-quality MT has so far been achieved only through years of effort in creating hand-coded transfer rules.
Systems whose primary source of translation knowledge comes from an automatically created example base have not been shown capable of matching or exceeding the quality of commercial systems.

This paper reports on MSR-MT, an MT system that attempts to break the customization bottleneck by exploiting example-based (and some statistical) techniques to acquire its primary translation knowledge automatically from a bilingual corpus of several million words. The system leverages the linguistic generality of existing rule-based parsers to enable broad coverage and to overcome some of the limitations on locality of context characteristic of data-driven approaches. The ability of MSR-MT to adapt automatically to a particular domain, and to produce reasonable translations for that domain, is validated through a blind assessment by human evaluators. The quality of MSR-MT's output in this one domain is shown to exceed the output quality of two highly rated (though not domain-customized) commercially available MT systems. We believe that this demonstration is the first in the literature to show that automatic training methods can produce a commercially viable level of translation quality.

2 MSR-MT

MSR-MT is a data-driven hybrid MT system, combining rule-based analysis and generation components with example-based transfer. The automatic alignment procedure used to create the example base relies on the same parser employed during analysis and also makes use of its own small set of rules for determining permissible alignments. Moderately sized bilingual dictionaries, containing only word pairs and their parts of speech, provide translation candidates for the alignment procedure and are also used as a backup source of translations during transfer. Statistical techniques supply additional translation pair candidates for alignment and identify certain multi-word terms for parsing and transfer.

The robust, broad-coverage parsers used by MSR-MT were created originally for monolingual applications and have been used in commercial grammar checkers. These parsers produce a logical form (LF) representation that is compatible across multiple languages (see section 3 below). Parsers now exist for seven languages (English, French, German, Spanish, Chinese, Japanese, and Korean), and active development continues to improve their accuracy and coverage. Generation components are currently being developed for English, Spanish, Chinese, and Japanese.

Given the automated learning techniques used to create MSR-MT's transfer components, it should in principle be possible, given appropriate aligned bilingual corpora, to create MT systems for any language pair for which we have the necessary parsing and generation components. In practice, we have thus far created systems that translate into English from all other languages and that translate from English to Spanish, Chinese, and Japanese. We have experimented only preliminarily with Korean and with Chinese to Japanese. Results from our Spanish-English and English-Spanish systems are reported at the end of this paper. The bilingual corpus used to produce these systems comes from Microsoft manuals and help text. The sentence alignment of this corpus is the result of using a commercial translation memory (TM) tool during the translation process.

The architecture of MSR-MT is presented in Figure 1. During the training phase, source and target sentences from the aligned bilingual corpus are parsed to produce corresponding LFs.
The normalized word forms resulting from parsing are also fed to a statistical word association learner (described in section 4.1), which outputs learned single-word translation pairs as well as a special class of multi-word pairs. The LFs are then aligned with the aid of translations from a bilingual dictionary and the learned single-word pairs (section 4.2). Transfer mappings that result from LF alignment, in the form of linked source and target LF segments, are stored in a special repository known as MindNet (section 4.3). Additionally, the learned multi-word pairs are added to the bilingual dictionary for possible backup use during translation and to the main parsing lexicon to improve parse quality in certain cases.

At runtime, MSR-MT's analysis parses source sentences with the same parser used for source text during the training phase (section 5.1). The resulting LFs then undergo a process known as MindMeld, which matches them against the LF transfer mappings stored in MindNet (section 5.2). MindMeld also links segments of source LFs with corresponding target LF segments stored in MindNet. These target LF segments are stitched together into a single target LF during transfer, and any translations for words or phrases not found during MindMeld are searched for in the updated bilingual dictionary and inserted in the target LF (section 5.3). Generation receives the target LF as input, from which it produces a target sentence (section 5.4).

3 Logical form

MSR-MT's broad-coverage parsers produce conventional phrase structure analyses augmented with grammatical relations (Heidorn et al. 2000). Syntactic analyses undergo further processing in order to derive logical forms (LFs), which are graph structures that describe labeled dependencies among content words in the original input. LFs normalize certain syntactic alternations (e.g. active/passive) and resolve both intrasentential anaphora and long-distance dependencies. MT has proven to be an excellent application for driving the development of our LF representation.

The code that builds LFs from syntactic analyses is shared across all seven of the languages under development. This shared architecture greatly simplifies the task of aligning LF segments (section 4.2) from different languages, since superficially distinct constructions in two languages frequently collapse onto similar or identical LF representations. Even when two aligned sentences produce divergent LFs, the alignment and generation components can count on a consistent interpretation of the representational machinery used to build the two. Thus the meaning of the relation Topic, for instance, is consistent across all seven languages, although its surface realizations in the various languages vary dramatically.

4 Training MSR-MT

This section describes the two primary mechanisms used by MSR-MT to automatically extract translation mappings from parallel corpora and the repository in which they are stored.

4.1 Statistical learning of single-word and multi-word associations

The software domain that has been our primary research focus contains many words and phrases that are not included in our general-domain lexicons. Identifying translation correspondences between these unknown words and phrases across an aligned dataset can provide crucial lexical anchors for the alignment algorithm described in section 4.2. In order to identify these associations, source and target text are first parsed, and normalized word forms (lemmas) are extracted.
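To make the logical form representation of section 3 concrete before continuing with the association learner and alignment, the following is a minimal sketch of how an LF might be represented as a graph of lemma nodes with labeled dependencies. The field and relation names (Tsub, Tobj) are illustrative assumptions for this sketch, not the actual MSR-MT data structures.

```python
# A minimal sketch of a logical form (LF) node: a normalized lemma plus
# labeled dependencies to other content-word nodes. Relation and feature
# names are assumed for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LFNode:
    lemma: str                                    # normalized word form, e.g. "click"
    pos: str                                      # part of speech, e.g. "Verb"
    feats: List[str] = field(default_factory=list)
    deps: Dict[str, "LFNode"] = field(default_factory=dict)  # labeled dependencies

    def add_dep(self, relation: str, child: "LFNode") -> "LFNode":
        self.deps[relation] = child
        return child


# "The user clicked the button" and "The button was clicked by the user"
# would normalize to the same dependency structure, since LFs abstract away
# from alternations such as active/passive:
lf = LFNode("click", "Verb", feats=["Past"])
lf.add_dep("Tsub", LFNode("user", "Noun"))
lf.add_dep("Tobj", LFNode("button", "Noun"))
```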
In the multi-word case, English captoid processing is exploited to identify sequences of related, capitalized words. Both single-word and multi-word associations are iteratively hypothesized and scored by the algorithm under certain constraints until a reliable set of each is obtained. Over the English/Spanish bilingual corpus used for the present work, 9,563 single-word and 4,884 multi-word associations not already known to our system were identified using this method. Moore (2001) describes this technique in detail, while Pinkham and Corston-Oliver (2001) describe its integration with MSR-MT and investigate its effect on translation quality.

4.2 Logical form alignment

As described in section 2, MSR-MT acquires transfer mappings by aligning pairs of LFs obtained from parsing sentence pairs in a bilingual corpus. The LF alignment algorithm first establishes tentative lexical correspondences between nodes in the source and target LFs using translation pairs from a bilingual lexicon. Our English/Spanish lexicon presently contains 88,500 translation pairs, which are augmented with single-word translations acquired using the statistical method described in section 4.1. After establishing possible correspondences, the algorithm uses a small set of alignment grammar rules to align LF nodes according to both lexical and structural considerations and to create LF transfer mappings. The final step is to filter the mappings based on the frequency of their source and target sides. Menezes and Richardson (2001) provide further details and an evaluation of the LF alignment algorithm.

The English/Spanish bilingual training corpus, consisting largely of Microsoft manuals and help text, averaged 14.1 words per English sentence. A 2.5 million word sample of the English data contained almost 40K unique word forms. The data was arbitrarily split in two for use in our Spanish-English and English-Spanish systems. The first sub-corpus contains over 208,000 sentence pairs and the second over 183,000 sentence pairs. Only pairs for which both the Spanish and English parsers produce complete, spanning parses and LFs are currently used for alignment. Table 1 provides the number of pairs used and the number of transfer mappings extracted and used in each case.

                                  Spanish-English   English-Spanish
Total sentence pairs              208,730           183,110
Sentence pairs used               161,606           138,280
Transfer mappings extracted       1,208,828         1,001,078
Unique, filtered mappings used    58,314            47,136

Table 1. English/Spanish transfer mappings from LF alignment

4.3 MindNet

The repository in which transfer mappings from LF alignment are stored is known as MindNet. Richardson et al. (1998) describe how MindNet began as a lexical knowledge base containing LF-like structures that were produced automatically from the definitions and example sentences in machine-readable dictionaries. Later, MindNet was generalized, becoming an architecture for a class of repositories that can store and access LFs produced for a variety of expository texts, including but not limited to dictionaries, encyclopedias, and technical manuals. For MSR-MT, MindNet serves as the optimal example base, specifically designed to store and retrieve the linked source and target LF segments comprising the transfer mappings extracted during LF alignment.
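Table 1 shows that only a small fraction of the extracted transfer mappings survive filtering. The description in section 4.2 states only that mappings are filtered based on the frequency of their source and target sides; the sketch below illustrates the general idea with a simple count threshold over serialized mapping pairs. The data format and the min_count threshold are assumptions made for this sketch, not the actual MSR-MT filtering criteria.

```python
# Illustrative sketch of frequency-based filtering of extracted transfer
# mappings, represented here as (source segment, target segment) string pairs.
from collections import Counter
from typing import Iterable, List, Tuple

Mapping = Tuple[str, str]


def filter_mappings(extracted: Iterable[Mapping], min_count: int = 2) -> List[Mapping]:
    counts = Counter(extracted)
    # Keep one copy of each unique mapping extracted often enough, discarding
    # low-frequency (and often noisy) alignments.
    return [mapping for mapping, count in counts.items() if count >= min_count]


extracted = [
    ("hacer clic en X", "click X"),
    ("hacer clic en X", "click X"),
    ("botón de opción", "option button"),
    ("botón de opción", "option button"),
    ("botón de opción", "options button"),  # low-frequency, likely a misalignment
]
print(filter_mappings(extracted))
# [('hacer clic en X', 'click X'), ('botón de opción', 'option button')]
```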
As part of daily regression testing for MSR-MT, all the sentence pairs in the combined English/Spanish corpus are parsed, the resulting spanning LFs are aligned, and a separate MindNet for each of the two directed language pairs is built from the LF transfer mappings obtained. These MindNets are about 7MB each in size and take roughly 6.5 hours each to create on a 550 MHz PC.

5 Running MSR-MT

MSR-MT translates sentences in four processing steps, which were illustrated in Figure 1 and outlined in section 2 above. These steps are detailed using a simple example in the following sections.

5.1 Analysis

The input source sentence is parsed with the same parser used on source text during MSR-MT's training. The parser produces an LF for the sentence, as described in section 3. For the example LF in Figure 2, the Spanish input sentence is "Haga clic en el botón de opción". In English, this is literally "Make click in the button of option". In fluent, translated English, it is "Click the option button".

[Figure 2. LF produced for "Haga clic en el botón de opción".]

5.2 MindMeld

The source LF produced by analysis is next matched by the MindMeld process to the source LF segments that are part of the transfer mappings stored in MindNet. Multiple transfer mappings may match portions of the source LF. MindMeld attempts to find the best set of matching transfer mappings by first searching for LF segments in MindNet that have matching lemmas, parts of speech, and other feature information. Larger (more specific) mappings are preferred to smaller (more general) mappings. In other words, transfers with context will be matched preferentially, but the system will fall back to the smaller transfers when no matching context is found. Among mappings of equal size, MindMeld prefers higher-frequency mappings. Mappings are also allowed to match overlapping portions of the source LF so long as they do not conflict in any way.

After an optimal set of matching transfer mappings is found, MindMeld creates Links on nodes in the source LF to copies of the corresponding target LF segments retrieved from the mappings. Figure 3 shows the source LF for the example sentence with additional Links to target LF segments. Note that Links for multi-word mappings are represented by linking the root nodes (e.g., hacer and click) of the corresponding segments, then linking an asterisk (*) to the other source nodes participating in the multi-word mapping (e.g., usted and clic). Sublinks between corresponding individual source and target nodes of such a mapping (not shown in the figure) are also created for use during transfer.

[Figure 3. Linked LF for "Haga clic en el botón de opción".]

5.3 Transfer

The responsibility of transfer is to take a linked LF from MindMeld and create a target LF that will be the basis for the target translation. This is accomplished through a top-down traversal of the linked LF in which the target LF segments pointed to by Links on the source LF nodes are stitched together. When stitching together LF segments from possibly complex multi-word mappings, the sublinks set by MindMeld between individual nodes are used to determine correct attachment points for modifiers, etc. Default attachment points are used if needed. Also, a very small set of simple, general, hand-coded transfer rules (currently four for English to/from Spanish) may apply to fill current (and, we hope, temporary) gaps in learned transfer mappings.
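The mapping-selection preference described in section 5.2 (larger mappings over smaller ones, then higher-frequency mappings, with overlap allowed only when mappings do not conflict) can be pictured as a greedy selection over candidate matches. The sketch below is only an illustration of that ordering, under the assumption that a "conflict" means two mappings proposing different target material for the same source node; it is not the actual MindMeld algorithm, and the frequencies in the example are hypothetical.

```python
# Illustrative greedy selection of transfer mappings, ordered by size and then
# frequency, skipping candidates that conflict with material already chosen.
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class CandidateMapping:
    nodes: Set[str]          # source LF nodes covered by this mapping
    target: Dict[str, str]   # simplified per-node target material
    frequency: int           # training-corpus frequency (hypothetical values below)


def select_mappings(candidates: List[CandidateMapping]) -> List[CandidateMapping]:
    ranked = sorted(candidates, key=lambda m: (len(m.nodes), m.frequency), reverse=True)
    chosen: List[CandidateMapping] = []
    assigned: Dict[str, str] = {}
    for cand in ranked:
        # Overlap is allowed, but not disagreement about an already-covered node.
        conflict = any(node in assigned and assigned[node] != cand.target[node]
                       for node in cand.nodes)
        if not conflict:
            chosen.append(cand)
            assigned.update(cand.target)
    return chosen


candidates = [
    CandidateMapping({"hacer", "clic", "en"}, {"hacer": "click", "clic": "*", "en": "*"}, 211),
    CandidateMapping({"hacer"}, {"hacer": "make"}, 930),
    CandidateMapping({"botón", "opción"}, {"botón": "button", "opción": "option"}, 174),
]
# The contextual "hacer clic en" mapping wins over the more frequent but
# smaller "hacer -> make" mapping, which conflicts with it and is dropped.
print([c.target for c in select_mappings(candidates)])
```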
In cases where no applicable transfer mapping was found during MindMeld, the nodes in the source LF and their relations are simply copied into the target LF. Default (i.e., most commonly occurring) single-word translations may still be found in the MindNet for these nodes and inserted in the target LF; if not, translations are obtained, where possible, from the same bilingual dictionary used during LF alignment. Figure 4 shows the target LF created by transfer from the linked LF shown in Figure 3.

[Figure 4. Target LF for "Click the option button".]

5.4 Generation

A rule-based generation component maps from the target LF to the target string (Aikawa et al. 2001). The generation components for the target languages currently handled by MSR-MT are application-independent, having been designed to apply to a range of tasks, including question answering, grammar checking, and translation. In its application to translation, generation has no information about the source language for a given input LF, working exclusively with the information passed to it by the transfer component. It uses this information, in conjunction with a monolingual (target-language) dictionary, to produce its output. One generic generation component is thus sufficient for each language.

In some cases, transfer produces an unmistakably non-native target LF. In order to correct some of the worst of these anomalies, a small set of source-language-independent rules is applied prior to generation. The need for such rules reflects deficiencies in our current data-driven learning techniques during transfer.

6 Evaluating MSR-MT

In evaluating progress, we have found no effective alternative to the most obvious solution: periodic, blind human evaluations focused on translations of single sentences. The human raters used for these evaluations work for an independent agency and played no role in developing the systems they test. Each language pair under active development is periodically subjected to the evaluation process described in this section.

6.1 Evaluation Methodology

For each evaluation, five to seven evaluators are asked to evaluate the same set of 200 to 250 blind test sentences. For each sentence, raters are presented with a reference sentence in the target language, which is a human translation of the corresponding source sentence. In order to maintain consistency among raters who may have different levels of fluency in the source language, raters are not shown the source sentence. Instead, they are presented with two machine-generated target translations in random order: one translation by the system to be evaluated (the experimental system), and another translation by a comparison system (the control system). The order of presentation of sentences is also randomized for each rater in order to eliminate any ordering effect.

Raters are asked to make a three-way choice. For each sentence, raters may choose one of the two automatically translated sentences as the better translation of the (unseen) source sentence, assuming that the reference sentence represents a perfect translation, or they may indicate that neither of the two is better. Raters are instructed to use their best judgment about the relative importance of fluency/style and accuracy/content preservation.
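As a concrete picture of the presentation scheme just described (reference translation shown, source sentence hidden, the two system outputs in random order, and per-rater randomization of sentence order), the sketch below assembles a blinded rating task. The field names and overall structure are assumptions made for this sketch, not the evaluation agency's actual tooling.

```python
# Minimal sketch of building one rater's blinded, randomized rating task.
import random
from typing import Dict, List


def build_rating_task(references: List[str],
                      experimental: List[str],
                      control: List[str],
                      seed: int) -> List[Dict[str, object]]:
    rng = random.Random(seed)                # a different seed per rater
    items = []
    for i, (ref, exp, ctl) in enumerate(zip(references, experimental, control)):
        pair = [("experimental", exp), ("control", ctl)]
        rng.shuffle(pair)                    # hide which system produced which output
        items.append({
            "id": i,
            "reference": ref,                # the source sentence is never shown
            "candidate_1": pair[0][1],
            "candidate_2": pair[1][1],
            "hidden_labels": (pair[0][0], pair[1][0]),  # retained only for scoring
        })
    rng.shuffle(items)                       # randomize presentation order per rater
    return items
```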
We chose to use this simple three-way scale in order to avoid making any a priori judgments about the relative importance of these parameters for subjective judgments of quality. The three-way scale also allows sentences to be rated on the same scale, regardless of whether the differences between output from system 1 and system 2 are substantial or negligible. The scoring system is similarly simple: each judgment by a rater is represented as 1 (sentence from experimental system judged better), 0 (neither sentence judged better), or -1 (sentence from control system judged better). For each sentence, the score is the mean of all raters' judgments; for each comparison, the score is the mean of the scores of all sentences.

6.2 Evaluation results

Although work on MSR-MT encompasses a number of language pairs, we focus here on the evaluation of just two, Spanish-English and English-Spanish. Training data was held constant for each of these evaluations.

Spanish-English over time

Spanish-English systems         Mean preference score (6-7 raters)   Sample size
MSR-MT 9/00 vs. MSR-MT 12/00    0.30 ± 0.10 (at 0.99)                200 sentences
MSR-MT 12/00 vs. MSR-MT 4/01    0.28 ± 0.07 (at 0.99)                250 sentences

This table summarizes two evaluations tracking progress in MSR-MT's Spanish-English (SE) translation quality over a seven-month development period. The first evaluation, with seven raters, compared a September 2000 version of the system to a December 2000 version. The second evaluation, carried out by six raters, examined progress between December 2000 and April 2001. A score of -1 would mean that raters uniformly preferred the control system, while a score of 1 would indicate that all raters preferred the experimental system for all sentences. In each of these evaluations, the newer version served as the experimental system, and raters significantly preferred it, as reflected in the mean preference scores of 0.30 and 0.28, both of which were significantly greater than 0 at the 0.99 level. These numbers confirm that the system made considerable progress over a relatively short time span.

Spanish-English vs. alternative system

Spanish-English systems       Mean preference score (6-7 raters)   Sample size
MSR-MT 9/00 vs. Babelfish     -0.23 ± 0.12 (at 0.99)               200 sentences
MSR-MT 12/00 vs. Babelfish    0.11 ± 0.10 (at 0.95)                200 sentences
MSR-MT 4/01 vs. Babelfish     0.32 ± 0.11 (at 0.99)                250 sentences

This table summarizes our comparison of MSR-MT's Spanish-English (SE) output to the output of Babelfish (http://world.altavista.com/). Three separate evaluations were performed, in order to track MSR-MT's progress over seven months. The first two evaluations involved seven raters, while the third involved six. The shift in the mean preference score from -0.23 to 0.32 shows clear progress against Babelfish. By the second evaluation, raters slightly preferred MSR-MT in this domain. By April, all six raters strongly preferred MSR-MT.

English-Spanish vs. alternative system

English-Spanish systems    Mean preference score (5-6 raters)       Sample size
MSR-MT 2/01 vs. L&H        0.078 ± 0.13 (not significant at 0.95)   250 sentences
MSR-MT 4/01 vs. L&H        0.19 ± 0.14 (at 0.99)                    250 sentences

The evaluations summarized in this table compared February and April 2001 versions of MSR-MT's English-Spanish (ES) output to the output of the Lernout & Hauspie (L&H) ES system (http://officeupdate.lhsl.com/) for 250 source sentences. Five raters participated in the first evaluation, and six in the second.
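The mean preference scores in the tables above follow the scoring scheme of section 6.1: each rater judgment is +1, 0, or -1; a sentence's score is the mean over raters; and the comparison score is the mean over sentences. The following is a minimal sketch of that computation with hypothetical judgment values; the confidence intervals reported above would require an additional significance test whose exact form is not specified in the paper, so the sketch stops at the mean.

```python
# Sketch of the preference-score computation described in section 6.1.
from statistics import mean
from typing import List


def preference_score(judgments: List[List[int]]) -> float:
    """judgments[s][r] is rater r's judgment (+1, 0, or -1) for sentence s."""
    sentence_scores = [mean(raters) for raters in judgments]
    return mean(sentence_scores)


# Three sentences judged by three raters (hypothetical values):
print(preference_score([[1, 1, 0], [0, -1, 1], [1, 0, 0]]))  # ~0.33
```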
The mean preference scores show that by April, MSR-MT was strongly preferred over L&H. Interestingly, though, one rater who participated in both evaluations maintained a slight but systematic preference for L&H's translations. Determining which aspects of the translations might have caused this rater to behave differently from the others is a topic for future investigation.

6.3 Discussion

These results document steady progress in the quality of MSR-MT's output over a relatively short time. By April 2001, both the SE and ES versions of the system had surpassed Babelfish and L&H, respectively, in translation quality for this domain. While these versions of MSR-MT are the most fully developed, the other language pairs under development are also progressing rapidly.

In interpreting our results, it is important to keep in mind that MSR-MT has been customized to the test domain, while the Babelfish and L&H systems have not. This certainly affects our results, and means that our comparisons have a certain asymmetry. As our work progresses, we hope to evaluate MSR-MT against a quality bar that is perhaps more meaningful: the output of a commercial system that has been hand-customized for a specific domain.

The asymmetrical nature of our comparison cuts both ways, however. Customization produces better translations, and a system that can be automatically customized has an inherent advantage over one that requires laborious manual customization. Comparing an automatically customized version of MSR-MT to a commercial system that has undergone years of hand-customization will represent a comparison that is at least as asymmetrical as those we have presented here. We have another, more concrete, purpose in regularly evaluating our system relative to the output of systems like Babelfish and L&H: these commercial systems serve as (nearly) static benchmarks that allow us to track our own progress without reference to absolute quality.

7 Conclusions and Future Work

This paper has described MSR-MT, an EBMT system that produces MT output whose quality in a specific domain exceeds that of commercial MT systems, thus attacking the customization bottleneck head-on. This work demonstrates that automatic data-driven methods can provide commercial-quality MT. In future work we hope to demonstrate that MSR-MT can be rapidly adapted to very different semantic domains, and that it can compete in translation quality even with commercial systems that have been hand-customized to a particular domain.

Acknowledgements

We would like to acknowledge the efforts of the MSR NLP group in carrying out this work, as well as the contributions of the Butler Hill Group in performing the independent evaluations described in section 6.

References

Aikawa, T., M. Melero, L. Schwartz, and A. Wu. 2001. Multilingual natural language generation. Proceedings of the 8th European Workshop on Natural Language Generation, ACL 2001.

Brown, R. 1999. Adding linguistic knowledge to a lexical example-based translation system. Proceedings of TMI 99.

Franz, A., K. Horiguchi, L. Duan, D. Ecker, E. Koontz, and K. Uchida. 2000. An integrated architecture for example-based machine translation. Proceedings of COLING 2000.

Frederking, R., S. Nirenburg, D. Farwell, S. Helmreich, E. Hovy, K. Knight, S. Beale, C. Domashnev, D. Attardo, D. Grannes, and R. Brown. 1994. Integrating translations from multiple sources within the Pangloss Mark III machine translation system. Proceedings of AMTA 94.

Heidorn, G., K. Jensen, S. Richardson, and A. Viesse. 2000. Intelligent writing assistance. In R. Dale, H. Moisl and H. Somers (eds.), Handbook of Natural Language Processing. Marcel Dekker, Inc., New York.
Meyers, A., M. Kosaka, and R. Grishman. 2000. Chart-based transfer rule application in machine translation. Proceedings of COLING 2000.

Menezes, A. and S. Richardson. 2001. A best-first alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. Proceedings of the Workshop on Data-Driven Machine Translation, ACL 2001.

Moore, R. 2001. Towards a simple and accurate statistical approach to learning translation relationships among words. Proceedings of the Workshop on Data-Driven Machine Translation, ACL 2001.

Pinkham, J. and M. Corston-Oliver. 2001. Adding domain specificity to an MT system. Proceedings of the Workshop on Data-Driven Machine Translation, ACL 2001.

Richardson, S. D., W. Dolan, and L. Vanderwende. 1998. MindNet: acquiring and structuring semantic information from text. Proceedings of COLING-ACL 98.

Takeda, K., N. Uramoto, T. Nasukawa, and T. Tsutsumi. 1992. Shalt2: a symmetric machine translation system with conceptual transfer. Proceedings of COLING 92.

Turcato, D., P. McFetridge, F. Popowich, and J. Toole. 1999. A unified example-based and lexicalist approach to machine translation. Proceedings of TMI 99.

Watanabe, H., S. Kurohashi, and E. Aramaki. 2000. Finding structural correspondences from bilingual parsed corpus for corpus-based translation. Proceedings of COLING 2000.

Watanabe, H. and K. Takeda. 1998. A pattern-based machine translation system extended by example-based processing. Proceedings of COLING-ACL 98.

Footnotes

1. Parsers for English, Spanish, French, and German provide linguistic analyses for the grammar checker in Microsoft Word.

2. Babelfish was chosen for these comparisons only after we experimentally compared its output to that of the related Systran system augmented with its computer domain dictionary. Surprisingly, the generic SE Babelfish engine produced slightly better translations of our technical data.

[Figure 1. MSR-MT architecture. Training: source and target sentences are parsed into source and target LFs; the statistical word association learner produces single-word and multi-word pairs, which a dictionary merge adds to the updated bilingual dictionary; LF alignment produces LF transfer mappings, from which the MindNet build creates the transfer mapping MindNet. Runtime: a source sentence passes through Analysis (source LF), MindMeld (linked LF), Transfer (target LF), and Generation (target sentence).]