User Benefits of Non-Linear Time Compression

Liwei He & Anoop Gupta

September 21, 2000

Technical Report
MSR-TR-2000-96

Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
+1 (425) 703-6259
{lhe, anoop}@microsoft.com

ABSTRACT

In comparison to text, audio-video content is much more challenging to browse. Time compression has been suggested as a key technology that can support browsing: it speeds up the playback of audio-video content without causing the pitch to change. Simple forms of time compression are starting to appear in commercial streaming-media products from Microsoft and Real Networks. In this paper we explore the potential benefits of more recent and advanced forms of time compression, called non-linear time compression. The most advanced of these algorithms exploit the fine-grain structure of human speech (e.g., phonemes) to differentially speed up segments of speech, so that the overall speed-up can be higher. We explore the actual gains end-users achieve from these advanced algorithms, and whether the gains are worth the additional systems complexity. Our results indicate that the gains today are quite small and may not be worth the additional complexity.
Keywords: Time compression, digital library, multimedia browsing, user evaluation

1. INTRODUCTION

Digital multimedia information on the Internet is growing at an increasing rate: corporations are posting their training materials and talks online [13], universities are putting their videotaped courses online [23], and news organizations are making newscasts available online. While network bandwidth is somewhat of a bottleneck today, this is rapidly being addressed with the new broadband infrastructure being put in place. The eventual bottleneck is limited human time. With so much content available, it is highly desirable to have technologies that let people browse audio-video quickly. The impact of even a 10% increase in browsing speed can be large, if one considers the vast number of people who will end up saving time.

Just as a person may read text at different rates depending on the situation (e.g., when reading a deep technical article vs. skimming a magazine), or different people may have different reading rates, we would like to provide people the ability to speed up or slow down audio-video content based on their preferences. In this paper we focus on technologies that allow such speed-up and slow-down of speech content. While the video portion of audio-video content is also important, it is easier to handle than speech and is considered elsewhere [24]. Also, we focus on informational content with speech (e.g., talks, lectures, and news) rather than entertainment content (e.g., music videos, soap operas), as previous work has shown that people are less likely to speed up the latter [16]. The core technology that supports such speed-up or slow-down of speech is called time compression [6, 9, 12, 14, 20].
Simple forms of time compression have been used before in hardware device contexts [1] and telephone voicemail systems [17]. Within the last few months, we have also seen basic support for time compression in major streaming-media products from Microsoft and Real Networks [5, 18]. Most systems today use linear time compression, where the speech content is uniformly time-compressed, e.g., every 100ms chunk of speech is shortened to 75ms. Using linear time compression, previous user studies [11, 16, 19] show that participants achieve steady-state speed-up factors of ~1.4. At that speed-up, users can save more than 15 minutes on a one-hour lecture.

In this paper, we explore how much additional benefit can be achieved from non-linear time-compression techniques. We consider two such algorithms. The first, simpler algorithm combines pause removal with linear time compression. It first detects pauses (silence intervals) in the speech, then shortens or removes the pauses. Such a procedure can remove 10-25% from normal speech [8]. It then performs linear time compression on the remaining speech. The second non-linear algorithm we consider is much more sophisticated. It is based on the recently proposed Mach1 algorithm [3], the best such algorithm known to us. It tries to mimic the compression strategies that people use when they talk fast in natural settings, adapting the compression rate at a fine granularity based on low-level features (e.g., phonemes) of human speech. As we will elaborate later, the non-linear algorithms, while offering the potential for higher speed-ups, require significantly more compute (CPU) cycles, cause increased complexity in client-server systems for streaming media, and may result in a jerky video portion.
The core questions we address in this paper are the following:

What are the additional benefits of the two non-linear algorithms over the simple linear time-compression algorithm implemented in products today? While the inventors of Mach1 present some user-study data about benefits, their results correspond to very high speed-up factors (2.6 to 4.2 fold), where only a subset of the speech is understood. Most people will not listen to speech at such fast rates. We are interested in understanding people's preferences at more comfortable and sustainable speed-up rates. Only if the difference at sustainable speeds is large will it be worthwhile to implement these algorithms in products.

How much better is the more sophisticated algorithm than the simpler non-linear algorithm? The magnitude of the differences will again guide our implementation strategy in products.

At a high level, our results show that for the speed-up factors most likely to be used by people, the benefits of the more sophisticated non-linear time-compression algorithms are quite small. Consequently, given the substantial complexity associated with these algorithms, we may not see them adopted in the near future.

The paper is organized as follows: Section 2 reviews the time-compression algorithms evaluated in this paper and the associated systems implications. Section 3 presents our user-study goals, Section 4 the experimental method, and Section 5 the results of the study. We discuss results and present related work in Section 6 and conclude in Section 7.

2. TIME-COMPRESSION ALGORITHMS USED AND SYSTEMS IMPLICATIONS

In this section, we briefly discuss the three classes of algorithms we consider in this paper, and the systems implications of incorporating them in client-server delivery systems.
2.1 Linear Time Compression (Linear)

In this class of algorithms, time compression is applied uniformly across the entire audio stream at a given speed-up rate, without regard to the audio information contained therein. The most basic technique for achieving time-compressed speech involves taking short fixed-length speech segments (e.g., 100ms), discarding portions of these segments (e.g., dropping 33ms from each segment to get 1.5x compression), and abutting the retained segments [6]. Discarding segments and abutting the remnants, however, produces discontinuities at the interval boundaries, causing audible clicks and other forms of signal distortion. To improve the quality of the output signal, a windowing function or smoothing filter, such as a cross-fade, can be applied at the junctions of the abutted segments [20]. A technique called Overlap Add (OLA) yields good quality (Figure 1). Further improvements to OLA are made in Synchronized OLA (SOLA) [21] and Pitch-Synchronized OLA [10].

Figure 1: An illustration of the Overlap Add algorithm.

The technique used in this study is SOLA, first described by Roucos and Wilgus [21]. It consists of shifting the beginning of a new speech segment over the end of the preceding segment to find the point of highest waveform similarity, usually by a cross-correlation computation. Once this point is found, the frames are overlapped and averaged together, as in OLA. SOLA provides a locally optimal match between successive frames and mitigates the reverberations sometimes introduced by OLA. The SOLA algorithm is labeled Linear in our user studies.
2.2 Pause Removal plus Linear Time Compression (PR-Lin)

Non-linear time compression is an improvement on linear compression: the content of the audio stream is analyzed, and compression rates may vary from one point in time to another. Typically, non-linear time compression involves compressing redundancies, such as pauses or elongated vowels, more aggressively. The PR-Lin algorithm we use here first detects pauses using the algorithm described below. It leaves pauses below 150ms untouched and shortens longer pauses to 150ms. It then applies linear time compression as described in the previous subsection.

Pause detection algorithms have been published extensively. A variety of measures can be used for detecting pauses even under noisy conditions [2]. Our algorithm uses energy and zero-crossing rate (ZCR) features. To adjust to changes in the background noise level, a dynamic energy threshold is used. We use a fixed ZCR threshold of 0.4 in this study. If the energy of a frame is below the dynamic threshold and its ZCR is under the fixed threshold, the frame is categorized as a potential-pause frame; otherwise it is labeled as a speech frame. Contiguous potential-pause frames are marked as real-pause frames when they exceed 150ms. Pause removal typically shortens the speech by 10-25% before linear time compression is applied.

2.3 Adaptive Time Compression (Adapt)

A variety of more sophisticated algorithms have been proposed for non-linear adaptive time compression. For example, Lee and Kim [15] try to preserve the phoneme transitions in the compressed audio to improve understandability. The audio spectrum is first computed for 10ms audio frames. If the magnitude of the spectrum difference between two successive frames is above a threshold, they are considered a phoneme transition and are not compressed.
Mach1 [3] makes further improvements and tries to mimic the compression that takes place when people talk fast in natural settings. These strategies come from linguistic studies of natural speech [25, 26] and are as follows:

- Pauses and silences are compressed the most.
- Stressed vowels are compressed the least.
- Schwas and other unstressed vowels are compressed by an intermediate amount.
- Consonants are compressed based on the stress level of the neighboring vowels.
- On average, consonants are compressed more than vowels.

Mach1 estimates continuous-valued measures of local emphasis and relative speaking rate. Together, these two sequences estimate the audio tension: the degree to which the local speech segments resist changes in rate. High-tension regions are compressed less and low-tension regions are compressed more aggressively. Based on the audio tension, local target compression rates are computed and used to drive a standard time-scale modification algorithm, such as SOLA.

Since Mach1 is, to our knowledge, the best adaptive time-compression technique, the algorithm used in our adaptive time-compression condition is based on it. The Mach1 executable was not available to us, so we could not use it directly. Furthermore, the original Mach1 algorithm cannot guarantee a specified speed-up rate (it is an open-loop algorithm). In order to compare audio clips compressed using different algorithms, we required the precise speed-up specified by the user. We made modifications to the algorithm so that the achieved speed-up rate is always the same as specified. We wanted to ensure that our revised algorithm (Adapt) was comparable in quality to the Mach1 algorithm, so a preference study was run to compare our adaptive algorithm against the original Mach1 algorithm.
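Such a closed-loop guarantee can be sketched as follows. This is a simplified stand-in for our actual rule (an assumption, not Mach1's open-loop formulation): per-segment compressed durations are allocated in proportion to each segment's tension and then normalized so that the total output duration is exactly the input duration divided by the target rate.

```python
def local_rates(durations, tensions, target_rate):
    """Allocate per-segment compressed durations proportional to tension
    (high tension -> compressed less), normalized so the overall achieved
    speed-up is exactly target_rate (the closed-loop guarantee)."""
    total_in = sum(durations)
    total_out = total_in / target_rate      # exact required output duration
    weighted = sum(d * t for d, t in zip(durations, tensions))
    compressed = [d * t * total_out / weighted for d, t in zip(durations, tensions)]
    rates = [d / c for d, c in zip(durations, compressed)]
    return rates, compressed
```

For example, with segment durations [1.0, 1.0, 2.0] seconds, tensions [0.2, 1.0, 0.5], and a 2x target, the output durations sum to exactly 2.0 seconds and the high-tension middle segment gets the lowest local rate; a segment whose tension is far above average can even end up with a local rate below 1, i.e., locally stretched.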
Without indication of the sources, 12 colleagues were asked to compare 3 time-compressed speech files published on Mach1's web site with the same source files compressed using our implementation. Out of the 36 total comparisons, our algorithm was preferred 9 times, Mach1 was preferred 12 times, and they were found to be equal 15 times. A one-sample chi-square test was conducted to assess whether the participants preferred the results from our algorithm, the published Mach1 results, or had no preference. The result was non-significant, Chi2(2, N=36) = 1.5, p = .472, indicating that our technique is comparable to the Mach1 algorithm.

2.4 Systems Implications of the Algorithms

In deciding among these three algorithms for inclusion in products, there are two considerations: 1) what are the relative benefits (e.g., speed-up rates) achievable, and 2) what are the costs (e.g., implementation challenges)? We explore the former in the user study; here we briefly discuss the latter.

The first issue is computational complexity, or CPU requirements. The first two algorithms, Linear and PR-Lin, are easily executed in real-time on any Pentium-class machine using only a small fraction of the CPU. The Adapt algorithm, in contrast, has 10+ times higher CPU requirements, although it can still be executed in real-time on modern desktop CPUs.

The second issue is the complexity of client-server implementations. We assume people would like the time-compression feature to be available in streaming-media clients, where they can just turn a virtual knob to adjust the speed-up. While there are numerous issues [19], a key one has to do with buffer management and flow control between the client and server. The Linear algorithm has the simplest requirements: the server simply needs to speed up its delivery at the same rate at which time compression is requested by the client.
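The flow-control contrast can be made concrete with a toy model (the function, segment layout, and the simplification that a pause's data is consumed the moment the pause starts are all our assumptions): under PR-Lin, the media time the client has consumed advances unevenly in wall-clock time.

```python
KEPT_PAUSE_MS = 150.0  # pauses longer than this are cut to 150 ms (Section 2.2)

def media_consumed(t_play_ms, rate, segments):
    """Media-time position (ms) the client has consumed after t_play_ms of
    wall-clock playback under PR-Lin. segments is a list of
    (kind, duration_ms) pairs with kind in {'speech', 'pause'}."""
    media = 0.0
    remaining = t_play_ms
    for kind, dur in segments:
        kept = dur if kind == 'speech' else min(dur, KEPT_PAUSE_MS)
        play_time = kept / rate             # wall-clock time this segment occupies
        if remaining >= play_time:
            remaining -= play_time
            media += dur
        else:
            # partway through: speech data is consumed proportionally,
            # pause data all at once at the start of the pause
            media += dur * (remaining / play_time) if kind == 'speech' else dur
            return media
    return media
```

At 2x, with a 1-second speech segment followed by a 2-second pause, the client has already consumed 3000ms of media after only 500ms of playback (the speech plus the entire shortened pause), so the server must deliver well ahead of a uniform 2x schedule.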
The non-linear algorithms (both PR-Lin and Adapt) have much more complex requirements due to the uneven rate of data consumption at the client: e.g., if a 2-second pause is removed, the associated data is instantaneously consumed by the client and the server has to compensate.

The third issue is audio-video synchronization quality. (This issue is obviously not present for speech-only content.) With the Linear algorithm, the rendering of video frames is speeded up at the same rate as the speed-up for speech. While everything happens at higher speed, the video remains smooth and perfect lip synchronization between audio and video can be maintained. This task is much more difficult with the non-linear algorithms (PR-Lin and Adapt). As an example, consider the removal of a 2-second pause from the audio track. Option 1 is to also remove the video frames corresponding to those 2 seconds. In this case the video will appear jerky to the end-user, although we retain lip synchronization between audio and video for subsequent speech. Option 2 is to make the video transition smoother by keeping some of the video frames from that 2-second interval and removing some later ones, but now we lose the lip synchronization for subsequent speech. There is no perfect solution.

The bottom line is that the non-linear algorithms add significant complexity to the implementer's task. We would like to know if there are significant user benefits.

3. USER STUDY GOALS

There are multiple dimensions along which we would like to understand users' reactions to the three algorithms presented above. We used the following four metrics:

Highest intelligible speed. What is the highest speed-up factor at which the user still understands the majority of the content? This metric tells us which algorithms perform best when the end-user is pushing the limits of time-compression technology for short segments of speech.

Comprehension.
Given the same fixed speed-up factor for all algorithms, what is a user's relative comprehension? This metric is indicative of the relative quality of speech produced by the algorithms. When observed for multiple speed-up factors, it also indicates when we are driving users beyond sustainable speeds.

Subjective preference. When given the same audio clip compressed using two different techniques at the same speed-up factor, which one does a user prefer? This metric is directly indicative of the relative quality of speech produced by the algorithms. Since people are very sensitive to subtle distortions that are not computationally understood, this is the only way to understand quality issues.

Sustainable speed. What is the speed-up factor that end-users settle on when listening to long pieces of content (e.g., a lecture), still assuming some time pressure? We believe this metric is the most indicative of the benefits that will accrue to users in natural settings.

4. EXPERIMENTAL METHOD

24 people participated in our study in exchange for a gratuity. They came from a variety of backgrounds, from professionals in local firms to retirees to homemakers. All of them had some computer experience, and some used computers on a daily basis. The subjects were invited to our usability lab to take the test. The listener study was Web-based; all the instructions were presented to the subjects via web pages. The study consisted of four tasks corresponding to the four goals outlined in the previous section (see Table 1).

Highest Intelligible Speed Task: The subjects were given 3 clips time-compressed by Linear, PR-Lin, and Adapt and were asked to find the fastest speed at which the audio was still intelligible. For each algorithm, short segments of a clip were presented to the subjects in sequence. The subjects used five speed-control buttons (much-faster, faster, same, slower, much-slower) to control the speed at which the next segment was played.
The speed-control buttons increased or decreased the speed by a discrete step of either 0.1 or 0.3. The subjects clicked the Done button when they found their highest intelligible speed for the clip. We asked the subjects to choose their own definition of what "intelligible" meant (e.g., understanding 90-95% of the words in the audio), as long as they were consistent with their definition throughout the task. The audio clips used in this task were from 3 talks. The natural speech speed, as measured in words per minute (WPM), had a fairly wide range among the chosen clips (see Table 1); the WPM of the fastest speaker is 71% greater than that of the slowest speaker. However, the experiments were all counterbalanced among subjects, as we discuss later.

Comprehension Task: We gave each subject 6 clips of conversations time-compressed by the three algorithms at 1.5x and at 2.5x. The subjects listened to each conversation once (repeats were not allowed) and then answered four multiple-choice questions about the conversation. The conversation clips were taken from the audio CDs of Kaplan's TOEFL (Test of English as a Foreign Language) study program [22]. The subjects were encouraged to guess if they were not sure of an answer. We note that the two chosen speed-up factors, 1.5x and 2.5x, represent points on each side of the sustained speed-up factor for users.

Subjective Preference Task: The subjects were instructed to compare 6 pairs of clips time-compressed by the three algorithms at 1.5x and at 2.5x and indicate their preference on a three-point scale: prefer clip 1, no preference, prefer clip 2. The audio clips in this task were captured live from an ACM97 talk given by Brenda Laurel.

Sustainable Speed Task: We gave the subjects 3 clips time-compressed by the three algorithms and asked them to imagine that they were in a hurry, but still wanted to listen to the clips.
They adjusted the speed-control buttons to find a maximum speed for each clip that was sustainable for the duration of the clip (about 8 minutes uncompressed). They were required to write 4-5 sentences summarizing what they had just heard upon the completion of each clip; the textual summaries were used only to motivate the subjects to pay attention, not as part of the actual measurement. The audio clips in this task were taken from the audio CD book Don't Know Much About Geography [4].

Within each task, we used a repeated-measures design in a 3x3 Latin Square configuration to counterbalance against ordering effects, i.e., the order in which users experienced the time-compression methods. The task list for a typical subject is listed in Table 2.

Table 2: The task list for a typical subject.

Task                          | Condition             | TC factor
1. Highest intelligible speed | Linear, PR-Lin, Adapt | (user adjusted)
2. Comprehension              | Linear, PR-Lin, Adapt | 1.5
                              | Linear, PR-Lin, Adapt | 2.5
3. Preference                 | Linear vs. Adapt, PR-Lin vs. Linear, Adapt vs. PR-Lin | 1.5
                              | Linear vs. Adapt, PR-Lin vs. Linear, Adapt vs. PR-Lin | 2.5
4. Sustainable speed          | Linear, PR-Lin, Adapt | (user adjusted)

5. LISTENER STUDY RESULTS

As stated in the Introduction, for each of the metrics we would first like to understand the benefits of the non-linear algorithms (PR-Lin and Adapt together) over the simpler Linear algorithm. Second, if the non-linear algorithms are indeed better, we would like to differentiate between the simpler PR-Lin and the more complex Adapt algorithm.

5.1 Highest Intelligible Speed

The first task measures the highest speed at which the clips are still intelligible. As one would expect, the non-linear algorithms do significantly better than Linear: a combined average speed-up of 2.05 vs. 1.76 (see Tables 3 and 4). This is also true when listening speed is measured in words per minute (WPM).
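The 3x3 Latin Square counterbalancing used in these tasks (Table 2) can be generated cyclically; a minimal sketch, in which the assignment of subjects to rows is our assumption:

```python
def latin_square(conditions):
    """Cyclic Latin square: row g is the condition list rotated by g, so each
    condition appears exactly once in every serial position across rows."""
    n = len(conditions)
    return [[conditions[(g + i) % n] for i in range(n)] for g in range(n)]

ORDERS = latin_square(['Linear', 'PR-Lin', 'Adapt'])
# subject s experiences the conditions in order ORDERS[s % 3]
```

Across the three rows, each algorithm occupies the first, second, and third position exactly once, counterbalancing simple ordering effects (a cyclic square does not balance every carryover pair, but with three conditions this is the standard arrangement).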
Comparing the two non-linear algorithms, we find that PR-Lin does significantly better than Adapt when using the speed-up factor as the metric, but not when using WPM (see Table 4). This result is somewhat contrary to our expectations, as we would have expected the more sophisticated Adapt algorithm to do better. One possible explanation is as follows: PR-Lin is more aggressive, as it totally eliminates pauses, while Adapt is gentler when shortening pauses, and as a result Adapt has to compress the audible speech more to reach the same speed-up as PR-Lin.

Table 3: Highest intelligible speed task. WPM numbers converted from raw speed are also listed. Standard deviations are in parentheses.

Condition | Speed (StDev) | WPM (StDev)
Linear    | 1.76 (0.29)   | 246 (64)
PR-Lin    | 2.15 (0.45)   | 296 (67)
Adapt     | 1.94 (0.36)   | 271 (75)
Average   | 1.95 (0.40)   | 271 (71)

Table 4: Results of the one-way within-subject ANOVA contrast tests for the highest intelligible speed task.

Contrast test                        | F      | P
Linear vs. (Adapt & PR-Lin) in speed | 44.910 | .000
Adapt vs. PR-Lin in speed            | 8.137  | .009
Linear vs. (Adapt & PR-Lin) in WPM   | 5.362  | .030
Adapt vs. PR-Lin in WPM              | 1.885  | .183

5.2 Comprehension Task

In this task, listener comprehension was tested under the different algorithms at speed-up factors of 1.5x and 2.5x. We expected Adapt to do best, followed by PR-Lin and Linear, and the comprehension differences to increase at the higher speed-up factor. Note that 1.5x and 2.5x represent points on the two sides of the highest intelligible speed-up factor for users. The quiz scores from the comprehension task are listed in Table 5. At 1.5x speed-up, the average score of Linear actually came out on top, although there is no significant difference between Linear and the other two conditions (see Table 6). In essence, the data simply say that at 1.5x the content is well understood across all conditions.
At 2.5x speed-up, we see that the two non-linear algorithms do significantly better than Linear (see Table 6, row 3). This is not very surprising, since the non-linear algorithms need to compress the audible portions of speech much less than the Linear algorithm (the pauses being compressed well beyond the target rate by PR-Lin and Adapt). Comparing PR-Lin and Adapt at 2.5x, there is no significant difference at the p < .05 level, though there does seem to be a trend in favor of Adapt, given that p = .083 (Table 6, row 4). We reflect on this trend in the discussion section.

Table 5: Quiz score results from the comprehension task.

Condition | 1.5x (%) | 2.5x (%) | Overall (%)
Linear    | 84       | 49       | 67
PR-Lin    | 78       | 61       | 70
Adapt     | 82       | 74       | 78
Average   | 82       | 61       | 72

Table 6: Results of the one-way within-subject ANOVA contrast tests for the comprehension task.

Contrast test                       | F     | P
Linear vs. (Adapt & PR-Lin) at 1.5x | .754  | .394
Adapt vs. PR-Lin at 1.5x            | .324  | .575
Linear vs. (Adapt & PR-Lin) at 2.5x | 8.507 | .008
Adapt vs. PR-Lin at 2.5x            | 3.286 | .083

5.3 Preference Task

In this task, subjective preference was tested under the different algorithms at speed-up factors of 1.5x and 2.5x. The motivation was that minor artifacts caused by time compression, which might not affect comprehension, may still change a listener's preference. At 1.5x (see Tables 7 and 8), we see that people's preference is essentially the same for Linear and Adapt, although there is a slight preference for PR-Lin (p = .093).

Table 7: The preference counts from the preference task.

Condition         | Preference | 1.5x | 2.5x | Overall
Linear vs. PR-Lin | Linear     | 6    | 2    | 8
                  | None       | 5    | 8    | 13
                  | PR-Lin     | 13   | 14   | 27
PR-Lin vs. Adapt  | PR-Lin     | 13   | 4    | 17
                  | None       | 5    | 9    | 14
                  | Adapt      | 6    | 11   | 17
Adapt vs. Linear  | Adapt      | 8    | 21   | 29
                  | None       | 8    | 3    | 11
                  | Linear     | 8    | 0    | 8

Table 8: Chi-square test results on the preference task.

Condition                 | Chi2   | P
Linear vs. PR-Lin at 1.5x | 4.750  | .093
PR-Lin vs. Adapt at 1.5x  | 4.750  | .093
Adapt vs. Linear at 1.5x  | .000   | 1.000
Linear vs. PR-Lin at 2.5x | 9.000  | .011
PR-Lin vs. Adapt at 2.5x  | 3.250  | .197
Adapt vs. Linear at 2.5x  | 13.500 | .000

At 2.5x, as may be expected, both PR-Lin and Adapt do significantly better than Linear (p = .011 and p = .000, respectively). Comparing the two non-linear algorithms, there is a slight but non-significant preference for Adapt over PR-Lin (p = .197), with 11 subjects preferring Adapt, 8 having no preference, and 4 preferring PR-Lin.

5.4 Sustainable Speed

This task tries to measure the highest speed at which a subject can listen to the audio for a sustained period of time. The average speed-up factors at which the listeners eventually settled are summarized in Table 9. The highest speed-up factor is with Adapt (8% better than Linear), followed by PR-Lin (4% better than Linear). Again, a one-way within-subject ANOVA was conducted. The contrast between PR-Lin and Adapt as a group vs. Linear is significant (see Table 10). There is no significant difference between Adapt and PR-Lin, though there is a trend in favor of Adapt. We comment on this trend in the discussion section.

Table 9: Sustainable speed by condition. WPM numbers converted from raw speed are also listed. Standard deviations are in parentheses.

Condition | Speed (StDev) | WPM (StDev)
Linear    | 1.62 (0.28)   | 273 (46)
PR-Lin    | 1.69 (0.38)   | 286 (63)
Adapt     | 1.76 (0.40)   | 298 (68)
Average   | 1.69 (0.36)   | 286 (60)

Table 10: Results of the one-way within-subject ANOVA contrast tests for the sustainable speed task. The results for raw speed and WPM are the same because all three clips were from the same speaker and have almost identical WPM.

Contrast test                        | F     | P
Linear vs. (Adapt & PR-Lin) in speed | 9.414 | .005
Adapt vs. PR-Lin in speed            | 2.181 | .153
Linear vs. (Adapt & PR-Lin) in WPM   | 9.414 | .005
Adapt vs. PR-Lin in WPM              | 2.181 | .153

6. DISCUSSION AND RELATED WORK

Before discussing the results from this paper, we briefly summarize the results from the Mach1 paper [3].
The user study reported in the Mach1 paper included listener comprehension and preference tasks comparing Mach1 and a linear time-compression algorithm. Clips of 2 to 15 sentences in length were compressed at speed-up factors of 2.6 to 4.2. These are very high speeds, as the resulting word rates range from 390 to an astonishing 673 WPM. Listener comprehension for Mach1-compressed speech was found to improve on average 17% over that for linearly time-compressed speech. In the preference test, Mach1-compressed speech was chosen 95% of the time. The difference between Mach1 and linear time compression was found to increase with the speed-up factor.

In attempting to benefit from Mach1's results, and our own results reported earlier, it is useful to segment the observations into two sets: a) low-to-medium speed-up factors (e.g., 1.5x), and b) high speed-up factors (e.g., 2.5x).

For low-to-medium speed-up factors, we have no data from the Mach1 paper. Our own data for 1.5x, looking at the comprehension and preference metrics, show that there is no significant difference between Linear, PR-Lin, and Adapt. There is a slight trend in favor of PR-Lin (p = .093) in the preference task. Our speculation is that this is because, with the removal of pauses (~15-20% time savings upfront), PR-Lin has to compress the audible speech much less than the other two algorithms, and the data seem to indicate that people do not care as much about pauses when listening to short speeded-up speech segments.

At high speed-up factors (e.g., 2.5x), our own data show a significant preference for the non-linear algorithms (PR-Lin and Adapt) over Linear (p = .008 for the comprehension task; p = .011 and p = .000 for the preference task). These results are consistent with the Mach1 results, which were obtained at even higher speed-up factors (2.6 to 4.2). Comparing PR-Lin and Adapt, while we see no significant differences at p < .05, we see a slight trend in favor of Adapt.
Our intuition is that as we go to much higher speed-up factors, beyond 2.5, Adapt will likely be significantly better.

So what do the above results imply for a designer? The first question to ask is what will be the sweet-spot speed-up factor where users will spend most of their time. Our data on sustainable speed indicate around 1.6-1.7 when in a hurry. Past results from Harrigan [11], Omoigui et al. [19], and Li et al. [16] indicate a comfortable speed-up factor of ~1.4, and results from Foulke and Sticht [7] indicate a speed-up of ~1.25, corresponding to a word rate of 212 WPM. These data indicate that low-to-medium speed-up factors will likely dominate users' viewing patterns. Consequently, for most purposes the Linear algorithm should suffice: as discussed in Section 2.4, it is computationally efficient, it is simpler for client-server systems, and there is no jerky video. More aggressive implementations can go to PR-Lin, while still having the benefit of being computationally simple. Algorithms like Adapt/Mach1 may only be suitable for very high speed-up factors, for example, when one is in fast-forward mode.

As we were wrapping up these studies and thinking about the results, which showed no substantial benefits from sophisticated algorithms like Mach1/Adapt at sustainable speeds, we were left wondering whether these state-of-the-art algorithms are still not good enough or whether we are hitting some more inherent human limits. With even the best algorithms, participants reached a sustainable speed of only 1.76x. Is that limit due to the technology or to a human limitation on the parsing end? Assuming humans are most adept at parsing natural human speech, this can be tested by comparing naturally sped-up speech with artificially compressed speech. We ran two such comparisons in a quick user study.
A colleague of ours with a significant background in public speaking was asked to read 2 articles (each around 700 words) and 3 short sentences at two speeds. His fast speed was approximately 1.4 times his regular speed. Both the slow readings (SR) and the fast readings (FR) were digitized and time compressed using our Adapt algorithm. Fifteen colleagues participated in the web-based experiment.

In the first comparison, subjects compared the slow readings speeded-up by Adapt at 1.4x against the fast readings (which were naturally 1.4 times faster than the slow readings). Out of 45 total comparisons (15 participants, each judging the 3 short clips), 19 preferred FR, 18 preferred the speeded-up SR, and 8 expressed no preference.

Our second comparison was a sustainable-speed test in which subjects speeded-up both SRs and FRs until they reached a comfortable limit. If naturally sped-up speech were qualitatively different from that generated by Adapt, we would expect the benefits of each to be somewhat additive: using Adapt, participants should be able to speed up the FR clips to a rate faster than that achievable with the SR clips. This was not the case. When normalized to the speech rate of the slow readings, the sustainable speed-up was 1.63 for SR and 1.68 for FR. There were no statistically significant differences, suggesting that the algorithm is a reasonable substitute for natural human speech compression.

The results from both tasks support the hypothesis that, at the low-to-medium speed-up factors that end-users feel comfortable with, end-users cannot distinguish between computer algorithms speeding up speech and a human speaking faster. The results also indicate that the current crop of algorithms is indeed very good, effectively substitutable for natural speech speed-up. The limits may lie on the human-listening side rather than in how we generate time-compressed speech.

CONCLUDING REMARKS

We are faced with an information glut, both of textual information and, increasingly, audio-visual information.
The most precious commodity today is human attention and time. Time-compression is, in some sense, a magical technology: it generates extra time by allowing us to watch audio-visual content speeded-up. Simple forms of time-compression technology are already appearing in commercial streaming-media products from Microsoft and Real Networks. The question explored in this paper is whether the new advanced algorithms for time-compression have the potential to significantly enhance user benefits (time savings), and what the associated implementation costs are.

Our results show that, for the speed-up factors most likely to be used by people, the more sophisticated non-linear time compression algorithms do not offer a significant advantage. Consequently, given the substantial implementation complexity these algorithms add to client-server streaming-media systems, we may not see them adopted in the near future. Based on a preliminary study, we speculate that the benefits are small not because the sophisticated algorithms are poor: in fact, end-users cannot distinguish between these algorithms speeding up speech and a human speaking faster. Thus, delivering significantly larger time-compression benefits to end-users remains an open challenge for researchers.

ACKNOWLEDGMENTS

The authors would like to thank Scott LeeTiernan for his help with the statistical analysis. Thanks also go to JJ Cadiz for his initial implementation of the experiment code and his voice for the fast and slow reading experiment, and to Marc Smith for his valuable comments on the paper.

REFERENCES

1. Arons, B. Techniques, Perception, and Applications of Time-Compressed Speech. In Proceedings of the 1992 Conference, American Voice I/O Society, Sep. 1992, 169-177.
2. Atal, B.S. & Rabiner, L.R. A Pattern Recognition Approach to Voiced-Unvoiced-Silence Classification with Applications to Speech Recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-24, 3 (June 1976), 201-212.
3. Covell, M., Withgott, M., & Slaney, M. Mach1: Nonuniform Time-Scale Modification of Speech. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Seattle, WA, May 12-15, 1998.
4. Davis, K.C. Don't Know Much About Geography. Bantam Doubleday Dell Audio Publishing, New York, 1992.
5. Enounce. 2xAV Plug-in for RealPlayer. http://www.enounce.com/products/real/2xav/index.htm
6. Fairbanks, G., Everitt, W.L., & Jaeger, R.P. Method for Time or Frequency Compression-Expansion of Speech. Transactions of the Institute of Radio Engineers, Professional Group on Audio, AU-2 (1954), 7-12. Reprinted in G. Fairbanks, Experimental Phonetics: Selected Articles, University of Illinois Press, 1966.
7. Foulke, W. & Sticht, T.G. Review of research on the intelligibility and comprehension of accelerated speech. Psychological Bulletin, 72 (1969), 50-62.
8. Gan, C.K. & Donaldson, R.W. Adaptive Silence Deletion for Speech Storage and Voice Mail Applications. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36, 6 (June 1988), 924-927.
9. Gerber, S.E. Limits of speech time compression. In S. Duker (Ed.), Time-Compressed Speech, 456-465. Scarecrow, 1974.
10. Griffin, D.W. & Lim, J.S. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-32, 2 (1984), 236-243.
11. Harrigan, K. The SPECIAL System: Self-Paced Education with Compressed Interactive Audio Learning. Journal of Research on Computing in Education, 27, 3 (Spring 1995).
12. Harrigan, K.A. Just Noticeable Difference and Effects of Searching of User-Controlled Time-Compressed Digital-Video. Ph.D. Thesis, University of Toronto, 1996.
13. He, L., Grudin, J. & Gupta, A. Designing Presentations for On-demand Viewing. In Proc. CSCW'00, ACM, 2000.
14. Heiman, G.W., Leo, R.J., Leighbody, G., & Bowler, K. Word Intelligibility Decrements and the Comprehension of Time-Compressed Speech. Perception and Psychophysics, 40, 6 (1986), 407-411.
15. Lee, S. & Kim, H. Variable Time-Scale Modification of Speech Using Transient Information. IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 2, 1319-1322, Munich, 1997.
16. Li, F.C., Gupta, A., Sanocki, E., He, L. & Rui, Y. Browsing digital video. In Proc. CHI'00, ACM, 169-176.
17. Maxemchuk, N. An Experimental Speech Storage and Editing Facility. Bell System Technical Journal, 59, 8 (1980), 1383-1395.
18. Microsoft Corporation. Windows Media Encoder 7.0. http://approjects.co.za/?big=windows/windowsmedia/en/wm7/Encoder.asp
19. Omoigui, N., He, L., Gupta, A., Grudin, J. & Sanocki, E. Time-compression: System Concerns, Usage, and Benefits. In Proceedings of the ACM Conference on Computer Human Interaction, 1999.
20. Quereshi, S.U.H. Speech compression by computer. In S. Duker (Ed.), Time-Compressed Speech, 618-623. Scarecrow, 1974.
21. Roucos, S. & Wilgus, A. High Quality Time-Scale Modification for Speech. IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 2, 493-496, Tampa, FL, 1985.
22. Rymniak, M., Kurlandski, G., et al. The Essential Review: TOEFL (Test of English as a Foreign Language). Kaplan Educational Centers and Simon & Schuster, New York.
23. Stanford Online: Masters in Electrical Engineering, 1998. http://scpd.stanford.edu/cee/telecom/onlinedegree.html
24. Tavanapong, W., Hua, K.A. & Wang, J.Z. A Framework for Supporting Previewing and VCR Operations in a Low Bandwidth Environment. In Proc. Multimedia'97, ACM, 303-312.
25. van Santen, J. Assignment of Segmental Duration in Text-to-Speech Synthesis. Computer Speech and Language, 8, 2 (1994), 95-128.
26. Withgott, M. & Chen, F. Computational Models of American Speech. CSLI Lecture Notes #32, Center for the Study of Language and Information, Stanford, CA.

Table 1: Information about tasks and test materials.

Task | Audio source | WPM | Approx. length
1. Highest intelligible speed | 3 technical talks | 99-169 | in 10-sec segments
2. Comprehension | 6 conversations from Kaplan's TOEFL program | 185-204 | 28-50 sec
3. Preference | 3 clips from an ACM97 talk by Brenda Laurel | 178 | 30 sec
4. Sustainable speed | 3 clips from "Don't Know Much About Geography" | 169 | 8 min