Sequential File Programming Patterns and Performance with .NET

Peter Kukol
Jim Gray

December 2004

Technical Report
MSR-TR-2004-136

Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052

Table of Contents
1. Introduction
2. Buffered File I/O
3. Sequentially Reading a Binary File
4. Creating and sequentially writing a binary file
5. Reading and writing typed binary data
6. Reading and writing text data
7. Summary of Simple Sequential File Access Programs
8. Performance measurements
9. Un-buffered file performance measurements
10. The cost of file fragmentation
11. Summary
References
Appendix

Abstract: Programming patterns for sequential file access in the .NET Framework are described and their performance is measured. The default behavior provides excellent performance on a single disk: about 50 MBps for both reading and writing. Using large request sizes and pre-allocating files when possible have quantifiable benefits. When one considers disk arrays, .NET un-buffered IO delivers 800 MBps on a 16-disk array, but buffered IO delivers only about 12% of that performance. Consequently, high-performance file and database utilities are still forced to use un-buffered IO for maximum sequential performance. The report is accompanied by downloadable source code that demonstrates the concepts and the code that was used to obtain these measurements.

Sequential File Programming Patterns and Performance with .NET
Peter Kukol, Jim Gray
Microsoft Research
{PeterKu, Gray} @Microsoft.com
December 2004

Introduction

Sequential file access is very common. Sequential file performance is critical for gigabyte-scale and terabyte-scale files; it can mean the difference between a task running in minutes or in days. This is the third in a series of articles that explores high-performance sequential file access on Windows file systems. The original paper, written in 1997 [Riedel97], studied Windows NT4 on a 200 MHz Pentium processor accessing "high-performance" 4 GB SCSI disks that delivered 7 MBps and cost more than $1,000 each. The year 2000 study [Chung00] looked at Windows 2000 operating on dual 750 MHz processors accessing 27 GB ATA disks that delivered 19 MBps and cost $400. This article examines Windows XP and Windows Server 2003 on dual 2.8 GHz processors accessing 250 GB SATA disks delivering 50 MBps and costing $130 each. Previous articles explained how to use low-level programming to trick the operating system into giving you good performance. The theme of this article is that the default behavior gives great performance, in large part because the hardware and software have evolved considerably over the years.
So this article is really about how to write simple sequential file access programs on Windows systems using the .NET framework. It covers sequential text and binary access as well as more advanced topics such as un-buffered access. It measures the speed and the overhead impact of block size, fragmentation, and other parameters. The concepts and techniques are illustrated using simplified C# code snippets available for download as a companion to this article [download].

Buffered File I/O

Sequential file access is very predictable: one can pre-fetch the next read and one can stream the sequence of writes. Randomly reading a disk, 8 KB at a time, retrieves about one megabyte of data per second. Sequential access delivers 50 times more data per second. This sequential:random performance ratio is growing as technology improves disk densities and as disks spin faster. Applications are increasingly learning to buffer the hot data in main memory, to sequentially pre-fetch data from disk, and to post-write data to disk. Like most runtimes, the .NET framework and Windows do this buffering for you when they detect a sequential file access pattern. As Figure 1 shows, the lower layers of the IO stack perform additional buffering. You might look at Figure 1 and say: "All those layers mean bad performance." Certainly, that is what our intuition tells us. But, surprisingly, most of the layers get out of the way in the common path, so the actual cost-per-byte is very low for sequential IO; yet the layers provide excellent default behavior.

The main effect of buffering is to combine small logical read and write requests into fewer, larger physical disk I/O requests. This avoids reading the disk when the data is already in memory, thus improving performance. As an extreme example, consider a file being written one byte at a time. Without buffering, every write request would read a block from the disk, modify a byte, and then write the block back to the disk. Buffering combines thousands of such reads and writes into a single write that just replaces the block values on disk, without ever having to read the old values of the blocks. The .NET runtime stream classes and the Windows file system provide this buffering by default (a minimal sketch of the one-byte-at-a-time case appears at the end of this section).

Buffering uses extra memory space, extra memory bandwidth, and extra CPU cycles. Seven years ago, this overhead was an important issue for most applications [Riedel97]. But, as explained in the first paragraph, processor speeds have improved 28-fold while disk speeds have improved a mere seven-fold. Measured in relative terms, disks have become four times slower than processors over the last decade, so sacrificing some processor and memory performance in exchange for better disk performance is a good bargain [Patterson]. It is rare that a modern system is CPU-bound. Our measurements and experience suggest that the cost of buffering is relatively minor and that the benefits almost always outweigh the costs. Thus, the default buffering should be used unless measurements conclusively prove that its performance is significantly worse, which is a rare event. If your program is waiting, it is likely waiting for network or disk activity rather than waiting for the CPU. There are scenarios, notably in server-oriented transaction processing systems, where disabling buffering is appropriate. Sections 8 and 9 quantify buffering costs so that you can evaluate this tradeoff for your application.
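To make the one-byte-at-a-time example above concrete, here is a minimal sketch that writes a file through a default FileStream; the file name and byte count are illustrative, not from the companion code. Each WriteByte() call lands in the stream's in-memory buffer, and the runtime and file system combine the small logical writes into a few large physical ones.

    using System.IO;

    class ByteAtATimeWrite {
        static void Main() {
            // default FileStream: the runtime buffers these one-byte writes
            using (FileStream fs = new FileStream(@"C:\temp\bytes.dat", FileMode.Create)) {
                for (int i = 0; i < 1000000; i++)    // one million one-byte logical writes
                    fs.WriteByte((byte)(i & 0xFF));  // no per-call physical disk IO
            }                                        // closing the stream flushes the last buffer
        }
    }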
Sequentially Reading a Binary File

Opening a binary file and creating a stream to read its contents can be done in one step by creating a new instance of the FileStream class. The FileStream() constructor has many flavors (overloaded versions); let's start with the simplest one, which takes only two arguments: the file name and the open mode.

The file name is a string holding the full path to the file, or a path that is interpreted relative to the current directory. A file-name string constant is typically preceded by @ so that the back-slashes in the name do not have to be doubled (@"C:\temp\test.dat" rather than "C:\\temp\\test.dat"); the @ verbatim-string notation is unique to C#. The file name is usually a path on a local disk, but it may be on a network share (e.g. @"\\server\share\test.dat"). In Windows, file names are not case sensitive.

The second parameter is a FileMode enumeration value. The most common file modes are:

Open: The file must already exist. Used to access existing files.
Create: If the file already exists, truncate it; otherwise create it. (It is like CreateNew or Truncate.)
CreateNew: A new file will be created; an exception is thrown if the file already exists. Avoids over-writing existing files.
OpenOrCreate: Open an existing file; if it does not exist, create an empty file. (It is like Open or CreateNew.)
Append: If the file exists, it is opened and data will be appended at its end. If the file doesn't already exist, a new one is created. (It is like OpenOrCreate, but writes at the end.)
Truncate: The file must already exist. Open it and truncate its current contents.

Opening a file may fail for several reasons: the file may not exist, the path may not be valid, you may not be authorized, or... Thus, the code should be wrapped in an exception handler. Ideally the handler would deal with each specific exception, but a simple handler that catches all exceptions and displays the exception string before exiting is the minimum requirement (the listings near the end of this report show such a handler).

Once the FileStream is open, the basic choices are to read byte-at-a-time, line-at-a-time (if it is text), or byte-array-at-a-time. The easiest approach is to read one byte at a time with ReadByte(). There is substantial overhead associated with reading each byte individually (see Figure 3 in section 8). If this overhead is an issue, an alternative is to read line-at-a-time with a StreamReader, or to read an entire byte[] buffer each time with FileStream.Read() and process the bytes with an inner loop; both styles appear in the listings near the end of this report.

The FileStream() constructor has several overloaded versions that let you control how the file is accessed. For example, to sequentially read a large file we can create the stream with a large buffer and the SequentialScan hint; a sketch appears below. This explicitly sets the IO transfers to be 256 KB. A rule of thumb is that larger transfer sizes are better (within reason), and powers of 2 work best. The measurements in Section 8 are a guide to selecting a good transfer size. The SequentialScan flag hints that access to the file will be sequential, recommending to the file system that it pre-fetch and post-write the data in large transfers.

Creating and sequentially writing a binary file

Sequential file writing is very similar to reading, except that proper buffering is extremely important. Without buffering, when a program changes just one byte, the file system must fetch the disk block that contains the byte, modify the block, and then rewrite it. Buffering avoids this read-modify-write IO behavior. Windows and .NET provide write buffering by default, so that whole blocks are written and so that there are no extra reads if the block is being replaced.
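Here, for reference, is a minimal sketch that puts the pieces of the reading discussion together: it opens a large file with a 256 KB buffer and the SequentialScan hint, then reads it a buffer at a time. The file name is a placeholder, and FileOptions.SequentialScan is the .NET 2.0 spelling of the hint; the companion code may phrase it differently.

    using System;
    using System.IO;

    class SequentialReadSketch {
        static void Main() {
            const int bufferSize = 256 * 1024;                  // 256 KB transfers
            using (FileStream fs = new FileStream(@"C:\temp\test.dat",
                                                  FileMode.Open,
                                                  FileAccess.Read,
                                                  FileShare.Read,
                                                  bufferSize,                     // stream buffer size
                                                  FileOptions.SequentialScan)) {  // sequential-access hint
                byte[] buffer = new byte[bufferSize];
                long total = 0;
                int count;
                while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
                    total += count;                             // process buffer[0..count-1] here
                Console.WriteLine("Read {0} bytes", total);
            }
        }
    }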
Assuming the FileStream fs has been opened for writing, the byte-at-a-time write code is just a loop that calls fs.WriteByte(); it is often better to accumulate an entire buffer of data and write the whole buffer out at once with fs.Write() (both write loops appear in the listings near the end of this report).

New FileStream data isn't immediately reflected on the physical disk media. The new data is buffered in main memory. The .NET framework will flush the buffer under various conditions. Modified data are written out to the physical media by lazy background threads in the framework and in the file system, a two-stage buffering process. It may be many seconds before modified data is written to disk; indeed, a temporary file may never be written if it is immediately deleted. The Flush() stream method returns when all of that stream's .NET buffers have been written to the file system. To force the file system to write to the disk controller, you must call the FlushFileBuffers() Windows API (see the IOspeed.fileExtend() program for an example of this [download]). It is likely that Flush() will be overloaded to have a full-flush option in the future. Forcing the disk controller to write to disk is problematic: some disks and controllers observe the force-unit-access option in SCSI and SATA, but some do not. The NTFS flush generates this command, but it is often ignored by the hardware.

Reading and writing typed binary data

When integers, floats, or other values are written to or read from files, one option is to convert the values to text strings and use FileStreams. The problem with doing that is that you have to serialize the data yourself, converting between each value and its representation as a series of bytes in the byte[] buffer. This can be tedious and error-prone. Fortunately the framework provides a convenient pair of classes, BinaryReader() and BinaryWriter(), that read and write binary data. You simply wrap an instance of the binary class around your FileStream and then you can directly read and write any built-in type, including integers, floats, decimals, DateTimes, and strings. The BinaryWriter.Write() method can be called with an argument of any base type; the properly overloaded version of the method outputs the binary value. Reading binary values from a stream is the mirror image of this: call the appropriate reader function, such as ReadInt16() or ReadString(), on a BinaryReader, as shown in the following examples.

Writing typed data to a file:

    // Open a file for writing
    FileStream fs = new FileStream("...", FileMode.CreateNew, FileAccess.Write);
    // Create a binary writer on the file stream
    BinaryWriter bw = new BinaryWriter(fs);
    // Write an integer value to the stream
    uint integer_val = ...;
    bw.Write(integer_val);
    // Write a string value to the stream
    string string_val = ...;
    bw.Write(string_val);
    // ... etc ...
    bw.Close();            // close the stream

Reading the data from the file:

    // Open a file for reading
    FileStream fs = new FileStream("...", FileMode.Open, FileAccess.Read);
    // Create a binary reader on the file stream
    BinaryReader br = new BinaryReader(fs);
    // Read an integer value from the stream
    uint integer_val = br.ReadUInt32();
    // Read a string value from the stream
    string string_val = br.ReadString();
    // ... etc ...
    br.Close();            // close the binary reader

Reading and writing text data

Text files are sequences of lines: strings terminated by a carriage-return/line-feed character pair (UNIX lines are terminated by just a new-line character). Text files can be read through a FileStream as described above, but the StreamReader and StreamWriter classes also implement simple text-oriented file I/O and handle things such as file encoding (ASCII vs. Unicode and so on).
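As a small illustration of the StreamReader class just described, here is a minimal sketch that counts the lines in a text file; the file name is a placeholder.

    using System;
    using System.IO;

    class LineCount {
        static void Main() {
            long lines = 0;
            using (StreamReader sr = new StreamReader(@"C:\temp\test.txt")) {
                while (sr.ReadLine() != null)   // read until end of file
                    lines++;                    // count each line
            }
            Console.WriteLine("{0} lines", lines);
        }
    }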
Once you've opened a file for input or output and have a FileStream instance for it, you can wrap it in a StreamReader or StreamWriter instance and then easily read or write text lines, as in the line-counting sketch above.

Summary of Simple Sequential File Access Programs

The previous sections showed the rudiments of creating, writing, and reading files in .NET. The next section presents measurements showing that this simple approach delivers impressive performance. It is rare that an application needs more than this direct approach, but section 9 presents advanced topics that may be useful to very data-intensive applications. To summarize, a simple program that sequentially creates and then reads a file appears with the listings near the end of this report; its error handling has been removed to simplify the presentation.

Performance measurements

The performance of the simple file access programs was measured using Beta 1 of the .NET Framework Version 2.0 under Windows XP SP2 on the following hardware:

Tyan S2882 motherboard
Dual AMD Opteron 246 CPUs
2 GB PC3200 DDR RAM
SuperMicro MV-SATA disk controller
Maxtor 250 GB 7200 RPM SATA disk

The write benchmarks were run on a freshly formatted 250 GB volume. The 30 GB test file was recreated every time unless otherwise noted. The file sizes are large enough to overflow any caches. Section 10 discusses the effects of disk fragmentation. The performance results used the programs at the download site, and the detailed spreadsheet of measurements is also on that site [download].

Figure 2 shows the sequential FileStream speed in MB/sec vs. buffer size; the buffer size varied from 1 byte through 4 MB. Note that the first buffer size entry (labeled 1B) corresponds to the ReadByte() or WriteByte() case. All the other entries correspond to reading or writing a byte[] buffer of the indicated size. The graph shows that, except for very small and very large buffers, the speed is nearly constant at about 50 MB/sec. The slightly higher write speed is due to Windows and disk controller caching: when a write is posted, control usually returns immediately to the program, which can then continue; whereas, when a synchronous read is issued, the program has to wait until all of the data has arrived.

Figure 3 shows the average cost, in CPU cycles per byte, of FileStream sequential reads and writes. It shows that using larger buffers reduces the per-byte overhead significantly. Specifically, the overhead stabilizes at around 5 cycles per byte for writes and 10 cycles per byte for reads at request sizes of 64 KB. This suggests that a dual 2 GHz CPU like the one benchmarked here could sustain over 1 GBps of file activity; other measurements have shown this to be the case [Kukol04]. Most applications run at 1% or 10% of that speed, so it seems FileStream access is adequate for almost any application. As shown in Figures 6 and 7, a file striped across 16 disk drives delivers 800 MBps and uses about 30% of a processor; when those experiments are done with buffered IO the speed is dramatically less, about 100 MBps vs. 800 MBps. So, for now, the .NET runtime is fine for single disks, but un-buffered IO is needed to drive disk arrays at speed.

To summarize: for simple sequential file access it is a good idea to use the FileStream class with a request size of 1 KB or larger. Request sizes of 64 KB or larger have minimal CPU overhead.

Un-buffered file performance measurements

FileStream buffering can be disabled for a stream when it is opened.
This bypasses the file cache and avoids the consequent overhead of moving data between the NTFS cache, the .NET cache, and the application's data space. In un-buffered IO, the disk data moves directly between the application's address space and the device (the device adapter in Figure 1) without any intermediate copying.

To repeat the conclusion of the last section: the FileStream class does a fine job, and most applications do not need or want un-buffered IO. But some applications, like database systems and file copy utilities, want the performance and control that un-buffered IO offers. There is no simple way to disable FileStream buffering in the V2 .NET framework. One must invoke the Windows file system directly to obtain an un-buffered file handle and then wrap the result in a FileStream (the CreateFile() declaration and call appear in the listings near the end of this report). Calling CreateFile() with the FILE_FLAG_NO_BUFFERING flag tells the file system to bypass all software memory caching for the file. The true value passed as the third argument to the FileStream constructor indicates that the stream should take ownership of the file handle, meaning that the file handle will automatically be closed when the stream is closed. After this hocus-pocus, the un-buffered file stream is read and written in the same way as any other.

Un-buffered I/O goes almost directly to the hardware, so the request buffers must be aligned to a sector boundary (both in memory and within the file), and the request size must be a multiple of the volume's sector size. VirtualAlloc() returns storage aligned to a page boundary, so unmanaged request buffers can be allocated from such page-aligned storage (a sketch of such an allocation appears below). Today sectors on most disks are 512 bytes, but in the future they may well be much larger, so a 64 KB alignment is recommended. IOspeed.DriveSectSize() determines the device sector size of a file [download].

Figure 4 shows the per-byte overhead of reading and writing files with buffering disabled. The overhead of un-buffered IO using large requests is approximately 0.2 to 0.3 cycles per byte. Not surprisingly, Figure 5 shows that a 64 KB transfer size is necessary to achieve the highest speed with un-buffered I/O. A rule of thumb is that the minimum recommended transfer size is 64 KB and that bigger transfers are generally better.

Un-buffered file streams can drive very fast disk configurations. Using the Windows disk manager, we created a striped NTFS volume (often called software RAID 0) that spanned 16 physical drives spread across two 8-port SuperMicro SATA controllers on two PCI-X busses, and we measured file stream read and write speeds. Figures 6 and 7 show that the file can be read at about 800 MB/sec and written at about 400 MB/sec using un-buffered I/O; the limited write speed is an artifact of the disk drives we used (8 of them were Hitachi 400 GB drives that write at 25 MBps and so slow the entire array to that write speed). When large requests are used, the CPU overhead is quite low, about 0.7 clocks per byte. Striped NTFS volumes require the operating system to do more work to distribute the I/O requests across the physical media, and this is reflected in a slightly higher overhead when Figure 7 is compared to Figure 4. These experiments indicate that a simple C# file stream program is capable of reading a large disk array at speeds on the order of 800 MB/sec with low processor overhead. They also show that the controller and bus bottlenecks observed 5 years ago by Chung [Chung00] have been addressed: most controllers can support 8 drives, and most 64-bit PCI-X busses can support 750 MBps transfers.
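For completeness, here is a sketch of allocating a page-aligned, unmanaged request buffer with VirtualAlloc(), as suggested above for un-buffered requests. The P/Invoke declarations and constants are standard Win32 values, but the helper names are illustrative and not part of the paper's companion code, and passing the buffer to the device (for example via a ReadFile() P/Invoke, or by copying through the Marshal class) is left out of the sketch.

    using System;
    using System.Runtime.InteropServices;

    class AlignedBuffer {
        const uint MEM_COMMIT     = 0x1000;    // commit the pages
        const uint MEM_RESERVE    = 0x2000;    // reserve the address range
        const uint MEM_RELEASE    = 0x8000;    // free the whole allocation
        const uint PAGE_READWRITE = 0x04;      // read/write page protection

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                          uint flAllocationType, uint flProtect);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool VirtualFree(IntPtr lpAddress, UIntPtr dwSize, uint dwFreeType);

        // returns a buffer aligned to (at least) a page boundary
        public static IntPtr Allocate(int bytes) {
            IntPtr p = VirtualAlloc(IntPtr.Zero, (UIntPtr)(uint)bytes,
                                    MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (p == IntPtr.Zero)
                throw new OutOfMemoryException("VirtualAlloc failed");
            return p;
        }

        public static void Free(IntPtr p) {
            VirtualFree(p, UIntPtr.Zero, MEM_RELEASE);   // size must be zero with MEM_RELEASE
        }
    }

VirtualAlloc() reservations are also aligned to the system allocation granularity (64 KB on Windows), which matches the 64 KB alignment recommendation above.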
The configuration of Figure 6 requires two such PCI-X busses, but PCI-Express will easily support the bandwidth of 8 future disks.

The cost of file fragmentation

When a large file is created by incrementally appending data, new file extents are allocated as the file grows. These extents are not necessarily contiguous; they are allocated using a best-fit algorithm. If a file has many non-contiguous extents, we say the file is fragmented. A sequential scan of a fragmented file can become a random scan of the disk as the scan reads each fragment in turn. Fragmentation can cause a significant performance loss both when the file is initially created and when it is later accessed. Simple techniques can reduce fragmentation when creating or extending large files, and utilities can reduce fragmentation of existing files by reorganizing the disk.

When the approximate size of a file is known in advance, it is best to tell the file system the estimated size as soon as possible (ideally, immediately after the file is opened). This lets the file system efficiently pre-allocate the largest possible chunks (the fewest fragments) of physical media to hold all of the file contents and thereby reduce fragmentation. The simplest way to do this is to extend the file to the known final size. With a FileStream, this is done by invoking the SetLength() method. The following code creates and allocates a 128 megabyte file:

    FileStream fs = new FileStream(fileName, FileMode.OpenOrCreate);
    fs.SetLength(128 * 1024 * 1024);

Figure 8 shows how file creation speed improves when the final file size is set at creation time. The tests were run on a freshly formatted volume (a clean disk) as well as on a volume that was fragmented with a tool we built [download]. Since fragmentation in the real world is a stochastic process, the tool repeatedly creates and deletes files randomly. We then ran the tests, taking the median of all the measurements. As can be seen from the graphs, one should make at least 1 KB write requests to get decent performance. Beyond that, if the disk is fragmented, it pays to tell the file system the final file size. Declaring the file size at creation time reduces the disk-fragmentation slow-down from about 25% (39 MBps) to about 15% (46 MBps). The 15% pre-allocated-file slowdown is caused by disk zoning: the innermost zone runs at 34 MBps while the outermost zone runs at 57 MBps, and the average speed of 46 MBps is almost exactly the speed measured for pre-allocated files (the red line) in Figure 8. On a clean disk, files are allocated in the outer band of the disk; on a fragmented disk, files are typically allocated in the middle and inner disk zones, which have lower byte transfer rates. We conclude from this that fragmentation reduces sequential performance by about 13%.

Summary

Seven years ago, one had to be a guru to read or write disks at 7 MBps [Riedel97]. That programming style is still possible (see the Appendix for an example). Now the out-of-the-box default performance is 7x that, and disk arrays deliver over 1 GBps. In part this is because processors, disks, and controllers have improved enormously; but in large part it reflects the fact that the software stack described in Figure 1 has been streamlined and redesigned so that it usually does the right thing. This evolution is not complete (e.g. the flush option does not flush the NTFS cache), but those details are being repaired as we write this article. In 1997 and 2000, 4 disks could saturate a disk controller and one controller could saturate a bus.
Modern controllers can handle the bandwidth of 8 disks, and modern busses can handle the bandwidth of two controllers. To summarize our findings:

For single disks, use the defaults of the .NET framework; they deliver excellent performance for sequential file access.
Pre-allocate large sequential files (using the SetLength() method) when the file is created. This typically improves speed by about 13% when compared to a fragmented file.
At least for now, disk arrays require un-buffered IO to achieve the highest performance. Buffered IO can be 8x slower than un-buffered IO. We expect this problem will be addressed in later releases of the .NET framework.
If you do your own buffering, use large request sizes (64 KB is a good place to start).
Using the .NET framework, a single processor can read and write a disk array at over 800 MBps using un-buffered IO.

One can characterize the three articles in this series as follows:

Riedel 1997: Windows can drive SCSI disks and disk arrays as fast as they can go, if you are a guru. Controllers and busses are bottlenecks.
Chung 2000: IDE disks are slower than SCSI but they have great price-performance. Controllers and busses are bottlenecks.
Kukol 2004: The .NET runtime delivers excellent performance by default, and it is easy to configure balanced IO systems with commodity SATA disks and controllers.

Certainly, there will be an opportunity to redo these experiments in 2010, but we are unclear what the future holds. We expect the .NET framework to fix the buffered disk-array performance issues and to make it easier to disable all buffering. Since disk capacities will likely be in the 3 terabyte range by then, we assume issues of file placement, archiving, and snapshotting will be the dominant concerns.

References

[Chung00] L. Chung, J. Gray, B. Worthington, R. Horst, "Windows 2000 Disk IO Performance," MSR-TR-2000-55, September 2000.
[Download] http://research.microsoft.com/research/downloads/ and http://research.microsoft.com/research/downloads/download.aspx?FUID={A6F86A1E-0278-4C72-9300-F747251C3BF0}
[Kukol04] P. Kukol, J. Gray, "Sequential Disk IO Tests for GBps Land Speed Record," MSR-TR-2004-62, March 2004. http://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=766
[Patterson] D. Patterson, "Latency Lags Bandwidth," ROC retreat, January 2004. http://roc.cs.berkeley.edu/retreats/winter_04/posters/pattrsn_BWv1b.doc
[Riedel97] E. Riedel, C. van Ingen, J. Gray, "A Performance Study of Sequential IO on Windows NT 4.0," MSR-TR-97-34, September 1997.

Acknowledgments

Leonard Chung, Catharine van Ingen, and Bruce Worthington were very helpful in designing these experiments and made several constructive suggestions for improving the presentation.

Appendix

The following code shows what is needed to do asynchronous and un-buffered IO. It gives the example of a program that copies one file to another. The reader launches N reads and then, as they complete, writes each buffer to the target file. The asynchronous write-completion callback issues the next read on that buffer. This code has no error handling; it is literally the shortest program we could write (error handling is included in the download version of this program [download]).
It is included here to persuade you that you do not want to do this unless you really are in pain. Buffering allows you to overlap reads and writes and get most of the benefits of this asynchronous code.

    using System;
    using System.IO;
    using System.Threading;

    namespace AsyncIO {
      class FileCopy {
        // globals
        const int BUFFERS = 4;                      // number of outstanding requests
        const int BUFFER_SIZE = 1 << 20;            // request size, one megabyte
        public static FileStream source;            // source file stream
        public static FileStream target;            // target file stream
        public static long totalBytes = 0;          // total bytes to process
        public static long bytesRead = 0;           // bytes read so far
        public static long bytesWritten = 0;        // bytes written so far
        public static Object WriteCountMutex = new Object[0];   // mutex to protect counts

        // Array of buffers and async results.
        public static AsyncRequestState[] request = new AsyncRequestState[BUFFERS];

        // structure to hold IO request buffer and result
        public class AsyncRequestState {            // data that tracks each async request
          public byte[] Buffer;                     // IO buffer to hold read/write data
          public AutoResetEvent ReadLaunched;       // event signals start of read
          public long bufferOffset;                 // buffer strides thru file by BUFFERS*BUFFER_SIZE
          public IAsyncResult ReadAsyncResult;      // handle for read requests to EndRead() on
          public AsyncRequestState(int i) {         // constructor
            bufferOffset = i * BUFFER_SIZE;         // offset in file where buffer reads/writes
            ReadLaunched = new AutoResetEvent(false); // semaphore says reading (not writing)
            Buffer = new byte[BUFFER_SIZE];         // allocate the buffer
          }
        } // end AsyncRequestState declaration

        // Asynchronous callback completes writes and issues next read
        public static void WriteCompleteCallback(IAsyncResult ar) {
          lock (WriteCountMutex) {                  // protect the shared variables
            int i = Convert.ToInt32(ar.AsyncState); // get request index
            target.EndWrite(ar);                    // mark the write complete
            bytesWritten += BUFFER_SIZE;            // advance bytes written
            request[i].bufferOffset += BUFFERS * BUFFER_SIZE;  // stride to next slot
            if (request[i].bufferOffset < totalBytes) {        // if not all read, issue next read
              source.Position = request[i].bufferOffset;       // issue read at that offset
              request[i].ReadAsyncResult = source.BeginRead(request[i].Buffer, 0, BUFFER_SIZE, null, i);
              request[i].ReadLaunched.Set();
            }
          }
        }

        // main routine implements an asynchronous File.Copy(@"C:\temp\source.dat", @"C:\temp\target.dat");
        static void Main(string[] args) {
          source = new FileStream(@"C:\source.dat",  // open source file
                        FileMode.Open,               // for read
                        FileAccess.Read,             //
                        FileShare.Read,              // allow other readers
                        BUFFER_SIZE,                 // buffer size
                        true);                       // use async
          target = new FileStream(@"C:\target.dat",  // create target file
                        FileMode.CreateNew,          // fault if it exists
                        FileAccess.Write,            // will write the file
                        FileShare.None,              // exclusive access
                        BUFFER_SIZE,                 // buffer size
                        true);                       // use async
          totalBytes = source.Length;                // size of source file
          AsyncCallback writeCompleteCallback = new AsyncCallback(WriteCompleteCallback);

          for (int i = 0; i < BUFFERS; i++)
            request[i] = new AsyncRequestState(i);

          // launch initial async reads
          for (int i = 0; i < BUFFERS; i++) {        // no callback on reads
            request[i].ReadAsyncResult = source.BeginRead(request[i].Buffer, 0, BUFFER_SIZE, null, i);
            request[i].ReadLaunched.Set();           // say that read is launched
          }

          // wait for the reads to complete in order, process each buffer and then write it
          for (int i = 0; bytesRead < totalBytes; i = (i + 1) % BUFFERS) {
            request[i].ReadLaunched.WaitOne();       // wait for flag that says buffer is reading
            int bytes = source.EndRead(request[i].ReadAsyncResult);   // wait for read complete
            bytesRead += bytes;
            // process the buffer here, then write it
            target.BeginWrite(request[i].Buffer, 0, bytes, writeCompleteCallback, i);
          } // end of reader loop

          while (bytesWritten < totalBytes)          // wait for all the writes to complete
            Thread.Sleep(10);
          source.Close(); target.Close();            // close the files
        } // end of async copy
      }
    }

IOspeed.exe tests read and write bandwidth to a file.

Usage: IOspeed [options] filePath

Options:
-r[fileSize]  read (the default), optionally giving the file size to be created if needed (default=1G)
-w[fileSize]  write (default is read), optionally giving the file size to be written (default=1G)
-t<seconds>   test duration (default=30 seconds)
-b<size>      I/O block size (default=64K)
-a[count]     use asynch (overlapped) I/O (default is sync; default async depth is 4)
-d            disable .NET and NTFS disk caching (memory buffering)
-s[seekDistancePercentage]  random I/O (default is sequential), optionally the average seek distance as a percentage of file size (default=100)
-x<fileSize>  create/extend/fill a file to the given size
-p<fileSize>  like -x, but preallocate the file before writing (reduces fragmentation)
-c            touch every byte in the source file (more memory and cpu load)
-q            quiet mode (as opposed to verbose); write data as a comma-separated list

Sizes suffixed with K, M, or G are interpreted as kilobytes, megabytes, or gigabytes.

Examples:
IOspeed -t30 -b64K -r1G -s0 a.dat   // the default settings, same as the next line
IOspeed a.dat                       // sequential read of a.dat, 64KB requests, buffering, for 30 seconds
IOspeed -t60 -p100M a.dat           // preallocates a 100MB file a.dat, then reads it for 60 seconds
IOspeed -t30 -w100M -p a.dat        // preallocates a 100MB file a.dat, then writes it for 30 seconds
IOspeed -a2 -b256K a.dat            // 2-deep async read of a.dat in 256KB requests

Description: IOspeed tests read and write bandwidth to a file. The -r and -w options measure read and write speed respectively. They can be modified by the -t option to request a test duration different from the 30-second default, by the -a option to request asynchronous IO with the specified count of outstanding IOs, by the -d option to disable .NET and NTFS file caching, by the -b option to specify an IO request size other than the 64KB default, and by the -s option to request random seeks between IOs; -s0 is the sequential default, and -s100 causes the seek to go to a random place on the disk (the percentage is a distance control, not a fraction of sequential vs. random IOs). The -c option asks the CPU to touch every byte rather than just discarding the bytes from main memory, making the CPU and memory bandwidth load more realistic. The default file size is 1 GB; the fileSize option can override this value. If the file does not exist, it is created with this size. If the file already exists and is smaller than the requested size, it is extended to have this size. These file-extension and write tests overwrite the file. The -x and -p options test the speed of file extension with and without pre-allocation. If there is a preexisting file with that name, it is deleted and a new empty file is created. The -x option incrementally grows the file by appending to the end (using synchronous writes of the given block size). The -p option first sets the file length, thereby allocating fewer fragments, and then acts just like the -x option.
FragDisk.exe fragments a disk by creating a large number of files, nearly filling the disk, and then deleting some of the files at random to create holes.

Usage: FragDisk [options] directoryPath

Options:
-m   set max. number of files to create (default=100000)
-Fm  set min. file size (in MB) (default=1)
-FM  set max. file size (in MB) (default=256)
-c   set number of files per cycle (default=1000)
-d   set max. number of files per directory (default=100)
-s   set max. number of sub-directories per directory (default=10)
-n   set max. create/delete cycles, 0 = no limit (default=0)
-k   set percentage (1-99) of files to keep (default=5)
-f   set desired percentage (1-99) to fill the volume (default=70)
-r   set random seed (default=137)

Example: FragDisk -f95 -k10 c:\temp

Description: This code creates many directories and files (of various sizes) and then deletes a random subset of them, thereby creating a fragmented disk. The file sizes are randomly chosen between 1MB and 256MB. FragDisk recursively builds a directory tree with enough sub-directories in it so that the leaves can accommodate enough files to fill the disk to the desired fullness. It then creates enough files of sizes chosen randomly between 1MB and 256MB (or sizes specified via the -F command-line options) to fill the disk to the -f percentage. It then deletes a random subset of these files so that the disk shrinks to -k percent full. It repeats this process for -n cycles.

IOexamples.exe contains the source code examples from the paper.

Usage: IOexamples.exe [fileName [recordCount]]

Options: the default file name is C:\IO_Examples_temp.txt; the default record count is 1,000,000.

Example: IOexamples C:\temp\test.dat 10000000   // creates and tests a file of 10,000,000 100-byte records

Description: This code demonstrates different styles, and the performance, of sequential file IO on Windows NTFS file systems using the .NET IO classes. These are the examples for the MSR Technical Report by Kukol and Gray titled "Sequential File Programming Patterns and Performance with .NET", and they are typical IO programming patterns. The program times the following cases: read or write, byte-at-a-time, line-at-a-time, or 64KB block-at-a-time. The file-write-speed test builds the sort benchmark file specified at http://research.microsoft.com/barc/SortBenchmark/

AsyncCopy.exe is the source code for the async copy shown in the Appendix.

Usage: AsyncCopy

Options: none

Example: AsyncCopy   // creates and copies a 1GB file

Description: This code is the full project (with error handling) of the async copy program in the appendix of the Microsoft Technical Report by Kukol and Gray titled "Sequential File Programming Patterns and Performance with .NET". The code is downloadable from http://research.microsoft.com/research/downloads/. The program creates the 1GB file C:\temp\source.dat and copies it to C:\temp\target.dat using double buffering (four outstanding IOs). In the end it deletes the source and target files.

Figure 1: Hardware and software layers and caching in the disk IO path.
Figure 2: Buffered FileStream bandwidth vs. request size. Beyond 8-byte requests, the system runs at the disk speed of around 50 MBps.
Figure 3: CPU consumption per byte of IO vs. request size. At 4KB and larger buffers the overhead stabilizes at about 5 or 10 clocks per byte.
Figure 4: Un-buffered IO cpu cost is 10x less than buffered IO for large requests.
Figure 5: Un-buffered IO has good throughput at 64KB request sizes.
Figure 8: File creation speed vs. write-request size, disk fragmentation, and pre-allocation of file length. The dotted lines show the standard deviation of each measurement.

Code listings referenced in the text:

Opening a file inside an exception handler (section 3):

    try {
        FileStream fs = new FileStream(fileName, FileMode.Open);
    }
    catch (Exception e) {
        Console.WriteLine("Error opening file {0}. \n {1}", fileName, e);
        throw new FileNotFoundException("Error opening file: " + fileName, e);
    }

Reading one byte at a time (section 3):

    int nextByte = 0;                          // holds next byte in the stream (or -1)
    while (0 <= (nextByte = fs.ReadByte())) {  // ReadByte() returns -1 at end of file
        /* ... process 'nextByte' ... */
    }

Reading a line at a time with a StreamReader (section 3):

    using (StreamReader sr = new StreamReader(fileName)) {
        string line;
        while ((line = sr.ReadLine()) != null) {
            /* ... process bytes line[0] ... line[line.Length-1] ... */
        }
    }

Reading a byte array at a time (section 3):

    int fileIOblockSize = 64 * 1024;               // read up to 64KB each time
    byte[] IObuff = new byte[fileIOblockSize];     // buffer to hold the bytes
    while (true) {
        int readCount = fs.Read(IObuff, 0, IObuff.Length);
        if (readCount <= 0) break;                 // Read() returns 0 at end of file
        /* ... process 'IObuff[i]' for i = 0 ... readCount-1 ... */
    }

A simple program that creates and then reads a file (section 7):

    using System;
    using System.IO;

    class Examples {
        static void Main(string[] args) {
            string filename = @"C:\TEMP\TEST.DAT";           // a file name
            FileStream fs = new FileStream(filename,          // name of file
                                           FileMode.Create);  // mode (create a new file)
            for (int i = 0; i < 100; i++)                     // write 100 bytes in the file
                fs.WriteByte((byte)'a');                      // each byte is an "a"
            fs.Position = 0;                                  // reposition at start of file
            while (fs.Position < fs.Length)                   // read the bytes back
                fs.ReadByte();                                // one byte at a time
            fs.Close();                                       // close the file
        }
    }
Writing a whole buffer at once (section 4):

    Random rand = new Random();                    // random data to write
    for (int k = 0; k < IObuff.Length; k++)        // fill the write buffer
        IObuff[k] = (byte)(rand.Next() % 256);     //   with random bytes
    fs.Write(IObuff, 0, IObuff.Length);            // write out the entire buffer

Writing one byte at a time (section 4):

    Random rand = new Random();                    // seed a random number generator
    for (int j = 0; j < 100; j++) {                // write 100 random bytes
        byte nextByte = (byte)(rand.Next() % 256); // generate a random byte
        fs.WriteByte(nextByte);                    // write it
    }

Opening an un-buffered file stream (section 9):

    // requires: using System.Runtime.InteropServices; using Microsoft.Win32.SafeHandles;
    const uint FILE_FLAG_NO_BUFFERING = 0x20000000;    // Win32 flag value

    [DllImport("kernel32", SetLastError = true)]
    static extern unsafe SafeFileHandle CreateFile(
        string FileName,            // file name
        uint DesiredAccess,         // access mode
        uint ShareMode,             // share mode
        IntPtr SecurityAttributes,  // security attributes
        uint CreationDisposition,   // how to create
        uint FlagsAndAttributes,    // file attributes
        SafeFileHandle hTemplate    // template file
    );

    SafeFileHandle handle = CreateFile(FileName,
                                       (uint)FileAccess.Read,   // read access
                                       (uint)FileShare.None,    // exclusive access
                                       IntPtr.Zero,             // no security attributes
                                       (uint)FileMode.Open,     // the file must exist
                                       FILE_FLAG_NO_BUFFERING,  // bypass software caching
                                       null);                   // no template file
    FileStream stream = new FileStream(handle, FileAccess.Read, true, 4096);
!00x6 !00909 !00M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M;0M; !00uE0uE0uE0uE0uE0uE0uE0uE0uE0uE0uE0uE0uE0uE0uE !0 0[T0[T0[T0[T0[T0[T0[T0[T 0 0^0^0^0^ 0^ 0^ 0^ 0^ 0^0^0^0^0^0^0^00h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0@0@0@0@0@0@0@0@0@0@0 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000y0y JV_`op_W(D # 6 U c p Uyq"   s t { 0!1!:!!!###$%%%T'')*Y..*1+1222222323^33333474W4m4u444415\55555"6G6O6j6v6w6x66989J;M;f;<<9<N<r<<<=>!>??@@DDqErEsEuEE3H4HIIJJMMMMNNnRpR[T{TVVrYsYYYY^^aahbbbcddTeeffPabsΠ>-jӢ֢٢<Ljtȣɣ8y*_jm=gQ\J~()Vت7Z4}00000000000%0%00$000000000000000000000000000000&0%0%0%0%0 !00d  !000000 !00v0v0v0v 0v 0v 0v0v0v0v 0v 0v 0v0v0v0v0v0v0v4 0v0v0v4 0v00v0v0v0v0v0v0v !00u'0u'0u' !00z.0z.0z.0z.0z.0z.0z. 0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z.0z. !006 !00&90&9 !00n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n;0n; !00E0E0E0E0E0E0E0E0E0E0E0E0E0E0E !0 06U06U06U06U06U06U06U06U 0 0^0^0^0^ 0^ 0^ 0^ 0^ 0^0^0^0^0^0^00g0g@0g0g0g0g 0  @ 0@ 0y00y00y00y00y00y00y00 I041y00y00/y00/y00/y00/y00/y00y00y00 y00y00y00y00y00y00y00y00y00y00y00 y00 I0;1y00 y00 $y00 y00y00y00y00y0$1 y0$1 y0$1y00I0E1 F@*I0E1 I0E1 I0E1y00y00 !y00 y00y00y00I00y00y00y00y01>4#y01=y01;y00y00 y00 y00 y01;y00y00 y00 y00 y00 y00y00y00y019y00y00y019y017y00y00y00y00y00y00`y00y00@0Dy00y0 0y0!1Cy0 0y0 0y00y00y00y00y0&1C@0@0@0@0@0@0@0y0-1<@0Xy051/y051/y051/y051/y051/y051.y051,@0y00%@0@0y01<& @0@0@0@0@0@0y00y00y0?1y0?1@0@0@0@0@0@0@0@0@0@0y0R1S,my0(1y0(1y0(1@0@0@0@0@0@0@0@0@0@0@0@0@0@0y000|X @0y00@0y00@0y00@0y00@0y0 0y0 0@0y00y00x @0@0@0@0@0@0{0 0 @0@0@0@0K00:@0@0{00/i@0@0@0@0{00+8i@0@0@0{00R U@0@0@0@0{0!0""Pi@0@0@0@0@0@0@0@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B@0B{0607i@0?p@0?@0?@0?@0?{0<0{0<0=Di@0@0@0{0C0Di@0@0@0@0@00{0J0KDf@0@0@0@0{0P0{0P0Qf@0+@0+@0+@0+@0+@0+{0W0X`e@0oP8@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP@0oP{0k0{0k0lz@0+@0+@0+@0+{0P0Qf|X  $$$$$$$$$'- X ( .AT#%(t*S+T,G./012G4'6758^93:<<>r@&B CDDHfI{JKL(PQvS>UNVXY[\?_adEfgk|pwvHy~}kRQyDf ޛ2ɥIzݫHά8[_`abcdefhijklnopsxyz{|}c'(+U,,E-.X.&4u? 
AICE IqR[cUo[quTCݛҡP,֬FX\^gmqrtuvw~].?Z\]_~9m7RTUWv#%&(Gu$?ABDcociiiiVjjk-lulؖ  X%X%X%X%X%X%X%X%X%X%X%X%X%̕XXXXX  '!!8@x(  V  # "`  V  # "`  V   #" ` V  # "` H  #  B    H  #  \   # #" ` B        H   #    N   3    H   #   B    H  #   B    V    #" `  B S  ?p y###$%'*9J;=4HINY ?*a#T  'TZ'mT< ( T<'.T Y'WD ]`'T|' Tv'T ZR(T 'T?N)T|(TTo&DS'RTL _Toc90620744 _Toc91923995 _Toc91923996 _Toc79900609 _Toc79900612 _Toc91923997 _Toc91923998 _Toc91923999 _Toc77151448 _Toc78191259 _Toc78332452 _Toc78332495 _Toc78332651 _Toc78332672 _Toc91924000 _Toc77151450 _Toc78191261 _Toc78332454 _Toc78332497 _Toc78332653 _Toc78332674 _Toc91924001 _Toc87364765 _Toc91924002 _Toc91924003 _Toc91924004 _Toc80436356 _Toc80437734 _Toc77148281 _Toc77149144 _Toc77151453 _Toc78191264 _Toc78332457 _Toc78332500 _Toc78332656 _Toc78332677 _Toc77148282 _Toc77149145 _Toc77151454 _Toc78191265 _Toc78332458 _Toc78332501 _Toc78332657 _Toc78332678 _Toc77148284 _Toc77149147 _Toc77151456 _Toc78191267 _Toc78332460 _Toc78332503 _Toc78332659 _Toc78332680 _Toc77148285 _Toc77149148 _Toc77151457 _Toc78191268 _Toc78332461 _Toc78332504 _Toc78332660 _Toc78332681 _Toc77148286 _Toc77149149 _Toc77151458 _Toc78191269 _Toc78332462 _Toc78332505 _Toc78332661 _Toc78332682 _Toc80436357 _Toc80437735 _Toc85342985 _Toc85342986 _Toc91924005 _Toc91924006 _Toc91924007 _Toc91924008c UT'Y.Y.Y.Y.Y.Y.Y.x6x6x6x6x6x6x699M;uEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE[T[T[T^hm  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJK o w'Y.Y.Y.Y.Y.Y.~.x6x6x6x6x6x66979e;EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE[T[TyT^hm$ϡyzV*urn:schemas-microsoft-com:office:smarttagsplacehttp://www.5iantlavalamp.com/ x :<;<<<N<O<a<b<r<c~f~h~k~l~~ݙݙߙߙS[ á5=@Klt!#<ADFS^npХާާ]fs&/RXYky}ѩ )7AK˪֪:<M<O<q<jjc~~ݙݙߙߙާާ?]m7U&u$Bo6 p Uy" Q { 1!:!x66M;f;uEE[T{T^^hbbefrffݙݙߙߙ=cCJBPԢעާާ\rʪ˪R?]m7U&u$BoJK..<<<<<<==*B+B/B/BPNQNRNRNkNlNNN2T=T*aGaaabbfbgbeeefffqfffff;givllllmFݙݙߙߙqqƛƛާާ|4}gk~R34d $HJ(j S4.:\L [ KXeS %m4%LiR%KXep$U6t^`.^`.88^8`.^`. ^`OJQJo( ^`OJQJo( 88^8`OJQJo( ^`OJQJo(hh^h`. hh^h`OJQJo(^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH. ^`o(hH. ^`hH. L^`LhH.   ^ `hH. jj^j`hH. :L:^:`LhH.   ^ `hH. ^`hH. L^`LhH. ^`o(hH. ^`hH. L^`LhH.   ^ `hH. jj^j`hH. :L:^:`LhH.   ^ `hH. ^`hH. L^`LhH.h hh^h`o(hH.h 88^8`hH.h L^`LhH.h   ^ `hH.h   ^ `hH.h xLx^x`LhH.h HH^H`hH.h ^`hH.h L^`LhH. ^`o(hH. ^`hH. L^`LhH.   ^ `hH. jj^j`hH. :L:^:`LhH.   ^ `hH. ^`hH. L^`LhH.88^8`o(() ^`hH.  L ^ `LhH.   ^ `hH. xx^x`hH. HLH^H`LhH. ^`hH. ^`hH. 
L^`LhH.L%m4%~}|S [ iR%p$U         V        jN`        L        z<Wz<\[{.!=*^C_E77TezNBJ{R c 8 B a \ L # P  / 7 W> M{  9-<j5l !5{SpoJ>H+{A[_]g%)~uBlf~h^ /QI(&gj):D_wm5 q!K!*"v"E#V*#,#0%Q%;R% %&s&t({(B*Z*Fu*k+ +v1+]+v,p-)."q.1S/#0T02{3A3h4z*4W4>y4) 545bG6]6w6R7%8:9H9 : :+p:W;-;Kw;<< E<SQ<&T=f=y=)>z> ??a?p?@2@lA]CNuC:D&@E*CEMEZE1F2FdFGiGt@H|MH@PHxHH II/IfJS]JKKQ3KNKLhL;Mo-NHNmNqfPiPQTQ^R=S TvTU1UPV//*>A g1Zy8c vf- t" -`Vh[t/1Gh'TatvE`sd8;~H@R<AMz(~2Xe _KO_02]FpRc{$-9<lD${.my+"Rf!$warv7?[zxx 0PLkGe& /P|fCM|Mx|A UUd03lio(&!= sp(HImaK E~aMN\z N.gT_c3rt*]]|)8[#- 8c=@f 2Va^Y:.CQVZ&\>lAsU-|WP "_Nw^a*!:5]U(z@tMfx> ,;v>e1f@Ir4+2b/EAy{E)2O{|0l-?dt 6?^az?[C"XA=N $Z2X4nrw!!]KQ]6=A] +x[v*G>3 %Z,3HU_afz$`~C$SjNs;/A9rQyJst$}Y#<[fF 0uj5&V~V#^4~ 7[l c/oaGX1sgWmn{v-?mY _Toc9192400648 _Toc9192400542 _Toc919240044, _Toc919240034& _Toc919240024  _Toc919240014 _Toc91924000: _Toc91923999: _Toc91923998: _Toc91923997: _Toc91923996  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~Root Entry FP&(Data x1Table+WordDocumentuSummaryInformation(DocumentSummaryInformation8CompObjq  FMicrosoft Office Word Document MSWordDocWord.Document.89q