Scalable Byzantine-Fault-Quantifying Clock Synchronization

John Douceur and Jon Howell
Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052

October 15, 2003
Technical Report MSR-TR-2003-67

Abstract

We present a scalable protocol for establishing bounds on clock synchronization in the presence of Byzantine faults. The worst a faulty participant can do to a correct host is cause the correct host to establish (arbitrarily) weak but correct bounds; because the correct host knows what those bounds are, we refer to the protocol as Byzantine-fault quantifying. Correct hosts can use the quantified bounds to inform path selection, enabling them to route around misbehaving hosts. We describe how to employ the protocol in a practical environment that makes use of Byzantine-fault-tolerant replicated state machines.

Introduction

This paper describes a scalable clock-synchronization protocol that withstands malicious failures. We developed this protocol in the context of Farsite, a distributed serverless filesystem. Farsite uses replication and encryption to produce reliable, private storage from resources supplied by incompletely trusted hosts. The hosts are desktop workstations, on the scale of 100,000 machines. By incompletely trusted, we mean that we expect only a small fraction of the population of machines to be Byzantine-faulty.

Farsite uses simple replication to ensure the availability of file data. To ensure the availability and consistency of directory metadata, upon which the consistency of the entire filesystem service depends, Farsite uses Byzantine-fault-tolerant replicated state machines. We call the set of hosts replicating a BFT state machine a server group, because it acts like a virtual trustworthy server.

A server group enforces consistency by doling out leases to client machines. A lease is a lock that expires: the lock is a promise that the host holding the lease can access the data it protects consistently; the expiration provides robustness against failure [4]. If a lease-holder vanishes (because of network failure, software crash, or malicious failure) and fails to return the lease, the lease eventually expires so that it can be given out to another host.

Achieving consistency in the presence of expiration requires that the lessor and lessee agree on the expiration time, at least conservatively: the lessee cannot believe it holds the lease once the lessor believes the lease has expired. Clock synchronization is necessary to accomplish this agreement. Note that when we say we synchronize clocks, we do not mean that we adjust the value of a (scalar) system clock, which then becomes input to future rounds of the algorithm, as NTP does [8]. Instead, we compute and maintain bounds (a pair) on the relationship between a local system clock and a global reference clock. The bounds let lessees behave conservatively: if a lease is set to expire at 3:00 PM, and based on its local clock the client can bound the global clock within (2:57, 3:02), then the client must assume that the lease might have expired.
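A minimal sketch of this conservative behavior, assuming the host has already computed bounds (g_min, g_max) on the current global time; the function names are ours, for illustration only:

    def lessee_may_use_lease(g_min, g_max, lease_expiry):
        # The lessee must be pessimistic: if the global clock could
        # already have reached the expiration time, the lease might have
        # expired, so the lessee must stop relying on it.
        return g_max < lease_expiry

    def lessor_may_reissue_lease(g_min, g_max, lease_expiry):
        # The lessor is pessimistic in the opposite direction: it hands
        # the lease to another host only once even the earliest possible
        # global time is past the expiration.
        return g_min >= lease_expiry

With bounds (2:57, 3:02) and a 3:00 PM expiration, lessee_may_use_lease returns false: the client assumes the lease may have expired. The two rules never conflict: there is an interval during which the lessee has stopped using the lease but the lessor has not yet reissued it.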
The purpose of the protocol presented in this paper is to enable each participant in Farsite to compute bounds relating its local clock to some global reference clock. In Section 3, we describe the protocol, treating the global reference clock and each participant as a separate host. In Section 4, we discuss how we implement a trustworthy reference clock in Farsite, and in Section 5 how time bounds are communicated from individual hosts to server groups.

The Scalable Protocol

We present the scalable protocol in three steps. First, we assume that the participating hosts are arranged in a communication tree, and we describe how to make a single measurement that relates the local oscillator of each participating machine to the reference clock; the key idea is to make the measurement robust against adversarial participants. Next, we discuss how to schedule measurements to preserve scalability. Finally, we address path selection, the problem of establishing a communication tree.

Measurement

The first key idea is that we can enable every participant in the system to make a measurement of the global reference clock without trusting other participants, while performing only constant work on any host, including the reference host. The output of a measurement for a host h is a triple (h_lo, h_hi, g_t). Values h_lo and h_hi are in units of the local oscillator on h, and value g_t is in units of the global oscillator. The measurement triple indicates that the global oscillator read g_t at the same moment that the local oscillator read some value h*, where h_lo ≤ h* ≤ h_hi.

Measurement algebra

For the measurement to be useful in the future, we need some constraint on the relationship between the local and reference oscillators, or else we know nothing after the measurement is complete. We assume that oscillators g and h have rates differing by at most some bound ρ:

    1 - ρ ≤ dh/dg ≤ 1 + ρ

Or, solving for dg/dh:

    1/(1 + ρ) ≤ dg/dh ≤ 1/(1 - ρ)

Given our measurement, we can bound g in terms of h: the global clock read g_t at some local reading in [h_lo, h_hi], and thereafter it advances between 1/(1 + ρ) and 1/(1 - ρ) global units per local unit. Into the future, at any local time h ≥ h_hi, we can bound the value of g by substituting the minimum and maximum values of dg/dh:

    g_t + (h - h_hi)/(1 + ρ) ≤ g ≤ g_t + (h - h_lo)/(1 - ρ)
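A minimal sketch of this bookkeeping, assuming the measurement triple and rate bound defined above (the names are ours):

    def global_time_bounds(measurement, rho, h_now):
        """Project bounds on the global clock at local-oscillator reading
        h_now, from a measurement (h_lo, h_hi, g_t) and a rate-deviation
        bound rho."""
        h_lo, h_hi, g_t = measurement
        assert h_now >= h_hi, "we only project forward from the measurement"
        # Elapsed local time since the measurement lies between
        # (h_now - h_hi) and (h_now - h_lo); convert each extreme to
        # global time using the extreme relative rates.
        g_min = g_t + (h_now - h_hi) / (1 + rho)
        g_max = g_t + (h_now - h_lo) / (1 - rho)
        return g_min, g_max

The width of the returned interval grows by roughly 2ρ per unit of elapsed time, which is the deterioration discussed under "Historical rate information" below.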
Hierarchical synchronization

One way to make a scalable measurement of a reference clock, adopted by the NTP protocol, is hierarchical synchronization. Hosts in the topmost layer of the communication tree (stratum 1) directly measure the reference clock. Each host in stratum 2 measures a host in stratum 1, and computes information about the reference clock from that measurement and statistical information provided by the stratum 1 host. Hierarchical synchronization seems to require a participant to trust each ancestor on the path to the root to provide correct information about its synchronization measurements; we cannot afford that trust.

Hierarchical communication

To achieve scalability, however, hierarchy seems like an indispensable tool. To exploit hierarchy in the presence of adversaries, we remove from the protocol any reliance on the purported computations of intermediate nodes. Instead, we use intermediate nodes only to relay communication to the global reference host; we can use cryptographic techniques to immediately verify whether an intermediate node has faithfully performed its communication duties. A simple manifestation of this idea is to use signed communication between each participant and the global reference host. Concatenating signed messages on the way up the tree and demultiplexing messages on the way down clearly enables us to limit the number of packets seen by each host to a constant (the tree fanout). However, it leaves the problem that the number of bytes seen by hosts high in the tree grows in proportion to the total number of participants n; furthermore, the root host must verify n signatures and sign n outbound messages.

The problem is solved by observing that host h does not require a personal message from g, nor does it require that g even be aware of h's existence. To achieve the measurement property, h only cares that, for some message from g timestamped g_t, h can verify that g_t occurred after some local time h_lo. Such a proof supports the left inequality; the right inequality is supported by the causality of the universe and because g is trusted never to sign a timestamp that its oscillator has not yet produced.

The measurement protocol, then, works like this. Host h produces a nonce at local time h_lo, and makes a note of that relationship. (We refer to the nonce itself by the local time h_lo at which it was created.) Host h forwards the nonce to its parent in the communication tree. That parent host p waits for its other children to submit their nonces, then produces its own nonce. Host p computes a cryptographic hash over the collected nonces to create a derivative hash, and forwards that hash to its parent. This process recurs up the tree, until the global reference host g collects a set of hashes from the nodes in the top layer. The global reference host reads its oscillator as value g_t, and signs a message containing g_t and the top-level hashes. It relays that message to the top-level nodes. The intermediate nodes relay the signed message from g down the tree, along with the Merkle-tree sibling data [7]: each node passes to its children the list of hashes that composed the input to its derivative hash. When these data arrive at host h, h can verify that the message is indeed from g because of its signature. Furthermore, h can verify that g_t must have occurred after h_lo, for if it had not, then either an adversary managed to guess the value of the nonce, or an adversary managed to invert the cryptographic hash function to show how a top-level hash in the signed message can be computed as a function of the nonce.

As described, the protocol resists adversaries tampering with communication. But if an adversary merely mutes or delays communication, the protocol has two shortcomings. First, because intermediate nodes wait for all of their children to participate, any single failure would cause the protocol to halt. We address this problem in Section 3.2. Second, while a delay or drop in communication cannot cause a participant to compute incorrect bounds on the relationship between its clock and the global clock, it certainly prevents the protocol from tightening those bounds (or measuring initial bounds). We discuss how to circumvent this denial of service in Section 3.3.
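A minimal sketch of the aggregation and verification steps, assuming a standard cryptographic hash; the helper names are ours, and a real implementation would also fix message encodings and carry an actual signature scheme:

    import hashlib, os

    def make_nonce():
        # Host h: create an unguessable nonce and note the local time
        # h_lo at which it was created.
        return os.urandom(16)

    def aggregate(child_submissions, own_nonce):
        # Intermediate host p: derive a single hash from its children's
        # submissions plus its own nonce, and pass it up the tree. The
        # input list is retained as Merkle sibling data for the way down.
        inputs = child_submissions + [own_nonce]
        return hashlib.sha256(b"".join(inputs)).digest(), inputs

    def verify_inclusion(nonce, sibling_lists, signed_top_hash):
        # Host h: replay the hash chain using the sibling data relayed
        # down the tree. Success proves that g's signed timestamp g_t
        # was produced after the nonce existed, hence after local h_lo.
        current = nonce
        for siblings in sibling_lists:
            assert current in siblings, "parent omitted our submission"
            current = hashlib.sha256(b"".join(siblings)).digest()
        return current == signed_top_hash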
Historical rate information

With only a bound on the rate deviation of the clocks, our knowledge of the synchronization of the clocks deteriorates (the bounds grow) as time passes from the moment of measurement. We bound rather than estimate the time, so simply estimating rate, which can improve time estimates, does not improve the resulting bounds. There are two ways we could consider using historical rate information to improve our algorithm.

Second-derivative assumption

We could consider an assumption bounding the deviation between the second derivatives of the local and reference clocks. Such an assumption effectively enables us to use historical rate information to reduce deterioration in the bounds, since an observed rate would then be known not to swing suddenly. But carrying a laptop from a cooled office to a park bench in the sunshine might suddenly change the rate of the oscillator in the laptop, so this assumption might be justified only if the thermal mass of each oscillator in the system were insulated from temperature changes in its environment. Furthermore, because the new assumption affects the squared term, its effectiveness at limiting bounds drift is quickly subsumed by the basic rate-deviation assumption.

Mostly-constant rate assumption

Suppose that the rate deviation of any oscillator from a nominal rate is determined by the sum of two terms: one due to process limitations, which remains constant once the oscillator is put into service, and a second which varies due to thermal fluctuations. If the magnitude of the first term substantially dominates the second, then we can measure the long-term rate, add conservative values reflecting the maximum deviation due to thermal effects, and obtain a dynamic bound on rate deviation that is much more precise than the static bound. Typical "AT cut" crystal oscillators used in microprocessor circuits deviate in rate by a factor less than 5 x 10^-6 across a temperature range of 0-70 C, while the total variation, accounting for both process deviation and temperature deviation, is typically listed as more than an order of magnitude larger. Thus, by measuring the constant term due to process variation, we stand to improve the deterioration rate of our bounds by more than an order of magnitude.

Maintaining this long-term rate information without loss requires maintaining a pair of convex hulls and the tangents between them; the amortized cost per measurement is small. With adversarial input, however, the hulls can consume linear space; that is, we may need to store every measurement ever made to ensure optimal results. A simple greedy algorithm (discarding minimum-angle points on the convex hulls) saves space with minimal impact on accuracy. In practice, since an adversary controls the bounds we measure, we would apply the greedy pruning algorithm to maintain a constant number of points on the convex hulls.

Scheduling

The description of the aggregated measurement protocol in the previous section had internal nodes in the communication tree blocking until all of their children report in. That description is clearly inadequate, because a single faulty host could stall the entire protocol. Instead, let a host accept a hash from a child at any time, with each new hash displacing the last one from that child. According to some schedule, the host aggregates its children's hashes and its own nonce, and submits a new hash to its parent. This protocol ensures that any host submitting its hash on time will see its nonce included in the measurement, while hosts that abstain harm only themselves and their children in the tree.

The challenge, then, is to identify a suitable schedule for internal nodes to aggregate hashes, and a schedule for the root node to sign timestamps. First, the schedule should ensure that any host with a path to the root composed of non-faulty hosts makes measurements often enough, and with small enough delay, that it converges on good bounds. Second, a good schedule is parsimonious, to save network traffic and work by the aggregating hosts.
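A minimal sketch of the displace-and-aggregate behavior just described, with the scheduling policy left abstract (the class and method names are ours):

    import hashlib, os

    class Aggregator:
        """One internal tree node: children may submit at any time, each
        new submission displacing the previous one from that child; on
        the node's own schedule, the latest submissions and a fresh
        nonce are folded into one hash and sent to the parent."""

        def __init__(self, send_to_parent):
            self.latest = {}              # child id -> most recent hash
            self.send_to_parent = send_to_parent

        def on_child_submission(self, child_id, child_hash):
            self.latest[child_id] = child_hash    # displace; never block

        def on_schedule(self):
            nonce = os.urandom(16)                # our own measurement nonce
            inputs = [h for _, h in sorted(self.latest.items())] + [nonce]
            self.send_to_parent(hashlib.sha256(b"".join(inputs)).digest())
            return nonce, inputs   # kept as Merkle sibling data for the reply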
Unsynchronized aggregation

A simple schedule is for each host to aggregate periodically, but without any particular synchronization. The justification for this approach is that relying on synchronization is awkward, since synchronization is what the entire protocol is trying to achieve. The tightness of the bounds computed by a participant is limited by the total delay between the construction of the nonce at local time h_lo and the receipt of the timestamp at local time h_hi. If each host on the path from h to g aggregates periodically and without phase synchronization, then that delay will include waiting time proportional to the product of the aggregation period and the depth of h in the communication tree.

Suppose that hosts make measurements with period P. The clock-rate deviation means that over one inter-measurement period, a host's bounds must grow by roughly 2ρP, so there is no point in performing a measurement with precision much better than that. So, we can relate measurement accuracy ε, measurement period P, aggregation period A, network delay d, and tree depth D with the following two conditions (up to constant factors):

    ε ≥ ρP
    ε ≥ D·A + d

It is also clear that ρP ≈ D·A + d, for if one is much bigger than the other, then one source of error dominates the other, and we waste either measurements or aggregations. The cost of a measurement is that of sampling the reference clock (which is expensive in our environment) and performing a digital signature (which is typically expensive). The cost of an aggregation is the computation of a hash and the sending and receipt of a (hash-sized) packet, once per participant in the system. For our application, plugging in our expected values of ρ, d, and D, we imagine a modest ε will be quite sufficient; those choices lead to a measurement period P of a couple of minutes and an aggregation period A of about a tenth of a second. The cost of that P is a digital signature and a BFT-state-machine operation every couple of minutes. The cost of that A is that each host aggregates and sends ten packets per second, and hence receives some 10f packets per second, where f is the tree fanout.

We are reasonably satisfied with these values from an engineering perspective: for our system, we can tune the parameters and discover that the resulting costs are acceptable. Ideally, however, it would be nice if the system could self-tune, milking out ideal bounds (to within a constant factor). The present approach also wastes work in proportion to 1/ρ: better clocks allow better synchronization, exploiting which requires more aggregations per measurement. It would be nice if the algorithm used resources constant in the quality of the clocks.
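A small solver for these relations, illustrative only since it bakes in the conditions as reconstructed above (the function name and the choice to take ε as the target are ours):

    def schedule_parameters(rho, D, d, epsilon):
        """Given rate bound rho, tree depth D, network delay d, and a
        target accuracy epsilon, pick the largest (cheapest) periods
        that still satisfy the two conditions, up to constant factors."""
        assert epsilon > d, "accuracy cannot beat raw network delay"
        P = epsilon / rho         # measurement period: rho * P <= epsilon
        A = (epsilon - d) / D     # aggregation period: D * A + d <= epsilon
        return P, A

For instance, a sub-second target ε in a tree of any appreciable depth forces A well below a second, which is why each host above ends up sending several packets per second.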
Self-synchronized aggregation

It would be nice to improve the aggregation schedule so that it does not scale poorly with improved synchronization quality. Indeed, it seems that the aggregation tree should enable us to limit the number of packets received and the work performed by each host, including the reference host, to be proportional to the fanout f. The basic idea is to give each host a budget of k hash submissions per measurement interval. The host tries to use its budget to minimize the time its hash spends waiting at the next level in the tree. For example, if the host believes that the next measurement occurs at 4:00, and its local clock bounds (from prior measurements) confine the deadline to a window of width ε, then the host can send a packet every ε/k seconds across that window. If at least one arrives on time, then its measurement delay will be limited only by its parent's delay (unavoidable), network delay to and from its parent (unavoidable), and the maximum ε/k wait time. Since the latter quantity is smaller than the original ε, we can expect the host's bounds to converge toward the sum of the unavoidable delay terms.

There are a couple of problems with this basic approach. The first is that the host cannot know when the parent's deadline is, since the parent must send its hash before the global measurement deadline to ensure that it arrives on time. The second, related problem is that the host must anticipate the network delay, for if the global 4:00 occurs at the early end of its bounds, then a packet sent at exactly 4:00 will arrive too late to provide synchronization. But measuring network delay will not help, since an adversarial network can offer fast delivery during delay measurements and then poor performance when timeliness matters.

We have tried several approaches to mitigate these problems, but each has its limitations. One is to send a series of submissions before the deadline with geometrically dropping delays between them. This approach discovers the network delay to within a constant factor (the base of the exponential progression) at each layer in the tree. Unfortunately, it means that hosts at depth l in the tree can hope to converge only on ε-bounds exponential in l. A second strategy sends submissions throughout an ε-wide window before the first time the host believes the deadline could occur. The idea is that if ε is bigger than the maximum network delay over an interval, then the window will accommodate the network delay, providing a measurement; if ε is smaller, then the host already has good synchronization and does not mind missing a measurement opportunity. Unfortunately, hosts below the well-synchronized host also miss out on a measurement, even if they need one. With our best variation on this scheme, we could ensure that all hosts in the tree converge, but only to ε-bounds quadratic in the depth.
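A minimal sketch of the budgeted submission schedule, under the assumptions above (the names are ours):

    def submission_times(window_start, epsilon, k):
        """The host's bounds confine the global measurement deadline to
        a window of width epsilon beginning at local time window_start.
        Spread the budget of k submissions evenly across the window, so
        that some submission lands within epsilon/k before the true
        deadline."""
        step = epsilon / k
        return [window_start + i * step for i in range(k)]

Starting at the early edge of the window guards against the case that worries us above, in which the true deadline falls at the early end of the host's bounds.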
Bad ideas for aggregation scheduling

Suppose we use an unsynchronized schedule with a variable aggregation period: smaller periods occur near the beginning of the measurement period P as measured on the global reference timeline, and larger ones fill out the later times. Poorly synchronized hosts will find their hashes waiting a long time during the infrequent phase, but that wait is still short enough that the host improves its synchronization, so that its future submissions are better aligned with the high-accuracy (low-delay) phase. This optimization offers both small aggregation-induced delays and reduced total message traffic versus the periodic approach. However, the reduced traffic is arranged in a (deliberately) clumpy fashion that reduces only long-term average resource usage, not peak load, making the optimization of questionable value.

Suppose we execute an auxiliary protocol in which each parent informs its children when (in local time) it plans to aggregate, and the children use that local information to deliver hashes in a timely fashion. This approach requires coming up with another (local) synchronization scheme, and then reasoning about why it works reliably. Indeed, the problem is harder than one might guess: as described in Section 3.2.2, an adversary makes measurements of network behavior difficult to use. We suspect that this approach is doomed.

Path selection

In the prior sections, we established a measurement protocol and a scheduling algorithm that keep the protocol scalable with a small constant. We still have to address the problem that the path from a host to the global reference host may involve some faulty host. The good news is that a faulty host cannot cause any host to compute incorrect bounds; it can only prevent a host from computing very tight bounds, perhaps leaving them no tighter than they were before the measurement. Because the victim host knows that its bounds are poor, however, it can attempt the protocol along an alternate path to the root. If its bounds improve, it can abandon the path that (because of a faulty participant) produced lousy bounds.

Obviously, the protocol does not scale if every host attempts to use every other host as a parent on each measurement to maximize the likelihood of discovering a non-faulty path. A simple fix: let each correct host try two or three paths at any time, periodically discarding the worst-performing parent to test some other potential parent. Faulty hosts will observe no such restraint, of course; we expand on this point shortly.

Even a set of polite, restrained hosts can get into trouble using the present path-selection algorithm, however. Because each host is greedy, each will eventually discover that the shortest path to the root is to contact the root directly, forming a degenerate tree with only one layer. To preserve scalability, the protocol must enforce a limit on fanout. Perhaps we could enforce a tree by limiting the number of children each host accepts; such behavior could cause our tree of correct hosts to self-organize into a tree with acceptable fanout. Unfortunately, it also presents a golden opportunity to faulty hosts: if the set of faulty hosts can clog all of the fanout slots at the root, they can impede access to the global reference clock by any of the correct hosts.

We have not yet discovered a simple approach that can impose a well-shaped tree on a fully-connected graph in the presence of faulty hosts. Instead, we rely on the application of the algorithm to provide an adversary-resistant scalable subgraph from which the greedy algorithm samples edges to construct correct paths. In Farsite, the subgraph follows from the natural structure of the file service. First, there is a hierarchy of BFT replicated-state-machine groups, and the topmost group is the global reference host (see Section 4). Second, if a group A delegates work to a group B, then every host in B is entitled to contact every host in group A; after all, the correct members of group A must trust that most members of group B are correct. Finally, if a host h is a client accessing resources managed by a group B, then h is allowed to contact members of B: if B is willing to provide file services to h, it should at least be willing to give h the time of day.

Why is this subgraph sufficient to ensure effective clock synchronization in Farsite? If host h needs to synchronize with respect to B, then either it has some correct path to the root (if by no other means, then through a host in B), or else every member of some BFT group on the path from B to the root group is corrupt. In the latter case, B is effectively faulty anyway; and if B is faulty, then it is not important that h synchronize with it.
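A minimal sketch of the restrained parent rotation (the policy details and names are ours, following the description above):

    import random

    class ParentSet:
        """Maintain a few candidate parents drawn from the allowed
        subgraph; periodically drop the one yielding the worst bounds
        and try a fresh candidate."""

        def __init__(self, allowed_parents, size=3):
            self.allowed = list(allowed_parents)
            self.current = random.sample(self.allowed, size)

        def rotate(self, bound_width):
            # bound_width: dict mapping each current parent to the width
            # of the bounds obtained through it. Wide bounds may indicate
            # a faulty host somewhere on that path, so route around it.
            worst = max(self.current, key=lambda p: bound_width[p])
            self.current.remove(worst)
            unused = [p for p in self.allowed if p not in self.current]
            self.current.append(random.choice(unused))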
The Global Reference Clock

The Byzantine-fault-quantifying protocol can withstand faulty hosts in the system, but it requires a trustworthy global reference clock. In our application and others, the reason for withstanding faulty hosts is that no particular host in the system is trustworthy. Hence we produce a trustworthy global clock from the set of untrusted hosts composing the root server group. To do so, we arrange for the BFT state machine to serve in the role of the reference host. First, the replicas composing the machine execute a Byzantine-fault-tolerant clock-synchronization algorithm [6, 9, 2] to synchronize the correct hosts on a scalar time value consistent with real time. Then the state machine collects hashes, timestamps them, and signs its timestamp message so that it can be propagated back down the tree.

Each individual host in the root server group belongs to the top layer of the communication tree. When such a host performs an aggregation, it submits its hash into the BFT state machine by acting as a client to the same server group in which it is a server replica; the machine's state thus collects the submitted hashes.

To timestamp the hashes, the server group must execute an operation with an agreed-upon time value. Our BFT replication algorithm [1] timestamps operations as follows: when the primary replica proposes a request as the next operation to be performed by the state machine, it also proposes a timestamp value. Each correct replica vets the proposal, and a replica vets the proposed timestamp by ensuring that it agrees to within some established bound δ of the replica's own notion of time. Such a bound must account for both the precision limitations of the Byzantine clock-synchronization algorithm and the message delay in delivering the proposal. If the proposal does not pass muster, correct replicas clamor for a view change, to elect a new primary replica. Hence if δ is tighter than the actual synchronization of the group, then the progress of the state machine will stall until the group's synchronization converges.

Now that we have a mechanism for timestamping BFT state-machine requests, we can declare what the state machine does with the submitted hashes: it simply adds a (hash, timestamp) record to its replicated state, and returns no value. As each replica decides that the timestamp deadline has passed, it gathers the set of (hash, timestamp) records stored in its view of the replicated state machine, signs them, and sends the signature out to each top-layer host in the communication tree. Each top-layer host collects a quorum of replica signatures on its hash to form a group-signed timestamp: effectively, a document signed by the BFT state machine. That signed timestamp is the message propagated back down the Merkle tree in the reply phase of the measurement protocol.

The meaning of a group-signed (hash, timestamp) record must be interpreted carefully. Because the time value was only known to within the bound δ, the value g_t from Section 3.1.1 is actually a time bound: (g_t - δ, g_t + δ).
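A minimal sketch of the vetting rule and of the resulting interpretation of a group-signed timestamp (the view-change machinery is elided; the names are ours):

    def vet_proposed_timestamp(proposed_t, my_time, delta):
        # A correct replica endorses the primary's proposed timestamp
        # only if it agrees with the replica's own notion of time to
        # within delta, a bound that must cover both group skew and the
        # delay in delivering the proposal; otherwise the replica would
        # clamor for a view change.
        return abs(proposed_t - my_time) <= delta

    def timestamp_as_bound(g_t, delta):
        # A group-signed timestamp is really a bound on global time.
        return (g_t - delta, g_t + delta)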
Injecting bounds into server groups

The previous section detailed how we use (conventional, unscalable) BFT clock synchronization and BFT state machines to produce a trustworthy global reference clock. In so doing, the root server group acquires a notion of time: it is in fact the reference for all time in the system. In this section, we address how to make time-bound measurements available to server groups other than the root group. The task is to take the measurements computed by the possibly-faulty replicas that compose a server group, and aggregate them to produce a correct time bound for each operation performed by the group. Once we have achieved this task, operations that need to evaluate the validity of leases granted by or to the group have bounds on the current time available as an input.

The solution parallels the proposal and acceptance of scalar time in the root group. A correct primary replica with time bounds (L, U) proposes bounds widened by a slack Δ, that is, (L - Δ, U + Δ). A correct replica with bounds (L', U') accepts proposed bounds (L_p, U_p) only if they contain its own:

    L_p ≤ L' and U' ≤ U_p

Because a correct replica's bounds contain the true global time, any proposal vetted by a correct replica is itself a correct (if looser) bound. To ensure liveness, we must ensure that replicas can synchronize to within Δ. In the scalar proposal mechanism from the previous section, the choice of δ accounts for the network delay and the relative synchronization of the replicas, which in that scenario were linked directly to one another by their participation in a BFT clock-synchronization protocol. Here, Δ must accommodate bounds that vary due to delays in the BFQ clock-synchronization tree, the convergence limits described in Section 3.2. Thus, to remain truly scalable, Δ must vary with the depth of the members in the BFQ communication tree (which happens to correspond to the BFT server-group tree in our application), or be otherwise automatically tuned. For our application, where we assume a total system size of around 100,000 hosts, asymptotic scalability is not essential, and we can afford to select a very conservative Δ.
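A minimal sketch of this propose-and-vet rule for interval bounds (the containment test encodes our reading of the condition above; the names are ours):

    def propose_bounds(my_lo, my_hi, Delta):
        # The primary widens its own bounds by Delta so that, if correct
        # replicas are synchronized to within Delta, every correct
        # replica's bounds fall inside the proposal and it is accepted.
        return my_lo - Delta, my_hi + Delta

    def accept_bounds(proposed, mine):
        # A correct replica accepts only a proposal that contains its
        # own bounds; since its own bounds contain the true global time,
        # any accepted proposal is a correct, if looser, bound.
        (p_lo, p_hi), (m_lo, m_hi) = proposed, mine
        return p_lo <= m_lo and m_hi <= p_hi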
Related Work

Haber and Stornetta's time-stamping schemes exploit hashes to produce non-forgeable timestamps that scale to many clients [3]. The measurement step in our protocol acquires a timestamp, although the requirements on it are different: it is not important, for example, that a client be able to prove to a third party when its nonce was stamped.

Lamport and Mann propose a scalable protocol for distributing time from a trusted time source using hierarchical synchronization. Their protocol propagates path information, and hosts use path information and computed time intervals to discard invalid data from faulty or corrupt intermediaries [5].

Summary

We have presented a scalable protocol and algorithms for synchronizing clocks in a distributed system that produce as output conservative bounds on the accuracy of the synchronization. The protocol is Byzantine-fault quantifying in that the results of malicious behavior are apparent in the bounds output by the protocol, and those bounds may be used to prevent safety violations. The three central components of the protocol are the measurement protocol, the schedule that exploits aggregation, and the path-selection algorithm that helps correct hosts route around malicious behavior. We explored how various assumptions on oscillator behavior can improve the quality of the bounds.

We presented a technique for building a virtual global reference clock from untrusted hosts using conventional, non-scalable Byzantine-fault-tolerant clock synchronization, and a technique for gathering the clock bounds computed by member replicas of a BFT state machine to produce a deterministic bounds pair useful inside the state machine.

We discussed how this protocol fits into the context and engineering assumptions of the Farsite scalable filesystem. Farsite's control structure provides an implicit graph for path selection. Non-critical timing in Farsite tolerates the modest-quality bounds produced by a simple aggregation schedule. Farsite's distrust of all hosts necessitates a virtual global reference clock and a mechanism for injecting bounds into a server group.

Acknowledgements

Our thanks to the Farsite team, of whose project this protocol is only a sliver. Thanks to Mike Sinclair and Tom Blank for sharing their experiences with real-world oscillators (Section 3.1.4.2). Thanks to John Dunagan for discussion about maintaining long-term rate information (Section 3.1.4.2). Thanks to Dan Simon for discussions about aggregation scheduling (Section 3.2).

References

[1] M. Castro and B. Liskov. Practical Byzantine Fault Tolerance. In Proc. 3rd OSDI, USENIX, February 1999.
[2] D. Dolev, J. Y. Halpern, B. Simons, and R. Strong. Dynamic Fault-Tolerant Clock Synchronization. J. ACM 42(1), pp. 143-185, January 1995.
[3] S. Haber and W. S. Stornetta. How to Time-Stamp a Digital Document. Journal of Cryptology 3(2), 1991.
[4] J. Howard, M. Kazar, S. Menees, D. Nichols, M. Satyanarayanan, R. Sidebotham, and M. West. Scale and Performance in a Distributed File System. ACM Transactions on Computer Systems 6(1), pp. 51-81, February 1988.
[5] L. Lamport and T. Mann. Marching to Many Distant Drummers. Unpublished manuscript, available at http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-drummers, 1997.
[6] L. Lamport and P. M. Melliar-Smith. Byzantine Clock Synchronization. In Proc. PODC '84, 1984.
[7] R. Merkle. Protocols for Public Key Cryptosystems. In Proc. IEEE Symposium on Security and Privacy, 1980.
[8] D. L. Mills. Internet Time Synchronization: the Network Time Protocol. In Zhonghua Yang and T. Anthony Marsland (eds.), Global States and Time in Distributed Systems, IEEE Computer Society Press, 1991.
[9] F. B. Schneider. Understanding Protocols for Byzantine Clock Synchronization. Cornell University Department of Computer Science TR 87-859, 1987.
"(""##*#+#?#@#S#T##########$$$&$'$N$O$\$]$_$`$q$r$$$%%%%%j7hM<h`EHUj5hM<h`EHUj3hM<h`EHUj<1hM<h`EHU h /6j(/h /h`EHUj-h /h`EHUj+h /h`EHUhvhgZh~2h / hmT6hmT2e!T#b%n&I'k(Y)*--//022468 9 :;==gd=gdgd gdO9+gdO9+gds*gd$gd$gdmT%%&&0&1&N&n&&&&&&&&&&&&''' '!'5'9''''''' (!(4(:((((((((رحرؕؑ؄wرؕرskjhwaUhwajFhM<h`EHUjDhM<h`EHUhe h(+86jAhehoEHU heEHh%|hgZ h6hj?hM<h`EHUj=hM<h`EHUh(+8 h /6j;hM<h`EHUh /j9hM<h`EHU*(((((((")().)5)6)7)S)W)x)y){)|)))))))))))**X*Y*******6,?,@,X,ԳԦԙԌ{wowjh[=Uh[=hk)jWQhM<h`EHUjLOhM<h`EHUjBMhM<h`EHUj8KhM<h`EHUj-IhM<h`EHU hy/hy/hy/hy/6 hy/6hy/h#h(+8h|#tmHnHujhwaUjHhcFUhwa*X,Y,Z,],^,_,*-w----------z.{../0122/22292Y2]2d222233444F5G5Z5[5\5]55555ͼͼͼͼͼ͸ͼ͸jbVhB8hB8EHUjVm%C hB8CJUVaJhB8jhB8U hj/Ehj/Eho9hwahO9+hs*jWTh.h$EHUh$jSh[=Uh~Chk)h[=h|#tjh[=UjaSh[=U055Q6S6l6o6x6667#7$7%797D78 9=:A:l:p:::::; ;L;P;~;;;<1<7<l<=====!>>>>0?1?V?W?w?x?׾׾׾׾׺׶Ӷӣ h6j_hh`EHUj]hM<h`EHUhZY h#h#h=hhEeh#h heh jZho9hoEHUhwaho9hj/EhB8hh" hj/Ehj/EjXhj/EhoEHU2=>G@HAAABC0EVGtGH2LNPSS W9YHYKZ[]^gd)igd gd#gd6gdNtgdYgd=$a$gdEegdgd#x?|?}??*@+@G@x@y@@@@@EAFAHAKAOAYArAsAAAAAAAAAAAAAAA`B覆vk^jguhgNh`EHUjrhw.h'UjphgNh`EHUjohgNh`EHUjmhgNh`EHUjkhgNh`EHUj'ihw.h`EHUhgNjghw.h`EHUjehw.h`EHUjchZYh`EHUjahZYh`EHUhw.h h6h#`BeBBBCCCCCCCC0D1D6D7DFDGDDDDDEEEE0EWE]E.F4F;FJQJRJSJTJ|J}JJJJJhKiK|K̿㰣㛗{l_j'hYhYEHUj86$C hYCJUVaJjhQhQEHUj6$C hQCJUVaJhQjhQUjhYhYEHUj5$C hYCJUVaJj hYhYEHUj5$C hYCJUVaJjhYUhYh=jߎhehoEHUhSqhe$|K}K~KKKKKKKKL0L1L2LmMpMNN)P*P=P>P?P@PePfPyPzP{P|PPPPPׯ׫ׯקט|o`jz8$C hQCJUVaJjUhQhQEHUjd8$C hQCJUVaJjbhQhQEHUjY8$C hQCJUVaJhNthYh$johQhQEHUj6$C hQCJUVaJjhQUhQjhYUjAhYhQEHUj6$C hQCJUVaJ!PPPPPPPPPPPPP QQ\Q]QpQqQrQsQQRRRRRRRRXSYSlSmSnSoSSSSSӴӧ朔xijz8$C hCJUVaJjhQhEHUjd8$C hCJUVaJjhUh hQhQj!hQhQEHUj.hQhQEHU hQ6j;hQhQEHUj8$C hQCJUVaJh$hQjhQUjHhQhQEHU'SSSSSSSSSSSS T/T4TbTcTTTTTTTUV W W*WWWX X X XXXGXqX~XXXXXXXXXXY9YHYJZKZZZ|\\>]ɼɱթͥjuhNth`EHUh)ihNth h|#tjhcFUjh$U h#6h$h'h#hHh6h65B*phhjhUjhQhEHU9>]u]w]]]^^^^^^_O_``aaaaaaabbbbbbbbbbbbcc;ciMiTiUiVi\i^igihimiwixiiiYj]jjjkkkkսѲսծݗᓗᓏh hi7~h!j/h]sGw,m|<^lmAK:ܳXnΒ'o Dd Tb  c $A? ?3"`?2TۣCJb10`!(ۣCJb1 RXJxcdd``> @c112BYL%bpu @c112BYL%bpu;vv0o8L+KRsA<.E.b Ya0Cbd`a6L Dd ,Tb   c $A ? ?3"`?2Ue9~BҚ)`Rf1 `!)e9~BҚ)`Rf XJxcdd``> @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd ,Tb ! c $A? ?3"`?2U*,3M;_1# `!)*,3M;_ XJxcdd``> @c112BYL%bpu +L=0adbR ,.I g!, K Dd Tb " c $A? ?3"`?2UmG߾r T1. `!)mG߾r T XJxcdd``> @c112BYL%bpu;vv0o8L+KRsA<.E.b Ya0Cbd`KKDd thb # c $A? ?3"`?2kAxu6FN[q9`!ikAxu6FN[: @ |7xcdd``ed``baV d,FYzP1n:&&v! KA?H1Z ㆪaM,,He`HI? @201d++&1l?z+8U`T T0ֲW+A|#8.oX >Fc60`Q aB[1@+ss\| nlg𹠡 ppBX;F&&\A DL,İrt?3_YiDd b $ c $A? ?3"`?2=!ĚƛL@,`!!ĚƛL@,:`!Hxcdd`` @c112BYL%bpuek:Dd b % c $A? ?3"`?2oH׭=w`!oH׭=b`Ghxcdd``6d 2 ĜL0##0KQ* W'd3H1)fYˀX(T UXRYP7`7LYI9 @>ۏ/? N300u9<&VJbʵy`+@FhV >0abԘe7Dd lb ' c $A? ?3"`? 2v݌?6Ξh0`!v݌?6Ξh xڝJAggI%8-HZ<@!TB_@l  >^H=fov D#/DՐG#h۶Ʉcy~q|Tk$L-AZK>pDyrZv*"*B<.Yrg0/qQ\YB́kFUL}a{7{\ޝ sYV9啐b(^b"ŒSAbCs_/Ń}?SN%LV-_~nDoyoRC hhsaݏ6 @c112BYL%bpu;vv0o8L+KRsA<.E.b Ya0Cbd`kL^bDd $b  c $A? ?3"`? 2!Po/`!!Po/̢HD hNxڝR=KA\BH(l4őX8K'XE "8 Wl]sGw,m|<^lmAK:ܳXnΒ'oDd W lb  c $A? ?3"`? 2 SP 1';[?>VfvR`!SP 1';[?>VfvxڝJ@ggk?DAC"Qj= *VBͳ'|"(O#~;҈5fgf;eL` #C5dž_Ze{%xF`0{º8"'^^q[UI}GAOLYj"'V16sl*fQm7Nlݽ㉑`&/N۔ZtC^4q╞A}Һ|%׫뀵^I/xjQr-C\Ӽ7Q^.$][ncyΪ&o=@uu?y7 ˾LjΦ5%{c$*6[aț# (&$$]ɻ [nSm0'2&_-w/E즰1Dd lhb  c $A? ?3"`?2{^\X|O`pZ4 ?W"`!O^\X|O`pZ4 ?@|xcdd`` @c112BYL%bpu 1 @c112BYL%bpu;vv0o8L+KRsA<.E.b P#İr!v120eTkKDd Hlb   c $A ? ?3"`?2LwOZ+ b].I(B(h`! wOZ+ b].I@X-xcdd``d 2 ĜL0##0KQ* WAVRcgbR x әzjx|K2B* Rj8 :@u!f0109Y@:3L& !) 
㖄}7S?-cabM-VK-WMc~x V8pF;d@:3*a|m1?**_Ƈ*?U~&+|23D{@ Ma(wd2^&0nR ss*a=BX.TP`@sG/1p  #}+E廙W /fnfoNTLT!|,$!}`o P{夐'P_grwr03B('u ?F0o WA习 \eE.3+KRsA<.E.bB ;/GSf~WDd @b ( c $A? ?3"`?2^W"7ԋ ا;:D+`!2W"7ԋ ا;`: xcdd`` @c112BYL%bpuFc60`Q aB[1@+ss\| nlg𹠡 ppBX;F&&\A DL,İrt?3_Yi Dd ,Tb , c $A ? ?3"`?2Ue9~BҚ)`Rf13`!)e9~BҚ)`Rf XJxcdd``> @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd ,Tb - c $A ? ?3"`?2Ue9~BҚ)`Rf15`!)e9~BҚ)`Rf XJxcdd``> @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd Tb . c $A? ?3"`?2TۣCJb107`!(ۣCJb1 RXJxcdd``> @c112BYL%bpu @c112BYL%bpu @c112BYL%bpu0I2L dBv%'P Qp()J:pF=`TrAC ;LLJ% "CX1Ya~bM_ Dd Tb 2 c $A ? ?3"`?2U'(p816@`!)'(p8 HXJxcdd``> @c112BYL%bpu 1 @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd ,Tb 5 c $A ? ?3"`?2Ue9~BҚ)`Rf1F`!)e9~BҚ)`Rf XJxcdd``> @c112BYL%bpu +L=0adbR ,.I `g!, K~wDyK  merkle1980 Dd ,Tb  c $A ? ?3"`?2Ue9~BҚ)`Rf1qI`!)e9~BҚ)`Rf XJxcdd``> @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd Tb  c $A? ?3"`?2TۣCJb10|K`!(ۣCJb1 RXJxcdd``> @c112BYL%bpu @c112BYL%bpu @c112BYL%bpu +L=0adbR ,.I `g!, K~ Dd Tb  c $A? ?3"`?!2TۣCJb10Q`!(ۣCJb1 RXJxcdd``> @c112BYL%bpu @c112BYL%bpu?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefhijklmnopqrstuvwxyz{|}~0Root EntryA FpފGData glWordDocument@_ObjectPoolC ŠGpފG_1126526294;F ŠG ŠGOle CompObjfObjInfo !&+05:>?@ABCDFGHIJKLMNOQ FMicrosoft Equation 3.0 DS Equation Equation.39qX* 510 "6 FMicrosoft Equation 3.0 DS EqEquation Native F_1126446516 F ŠG ŠGOle CompObj fuation Equation.39qX  f FMicrosoft Equation 3.0 DS Equation Equation.39qX 0< kObjInfo Equation Native  )_1126446556 F ŠG ŠGOle  CompObj fObjInfoEquation Native )_1126446788,F ŠG ŠGOle CompObjfObjInfoEquation Native 5 FMicrosoft Equation 3.0 DS Equation Equation.39qXxM =5s FMicrosoft Equation 3.0 DS Equation Equation.39q_1126446648F ŠG ŠGOle CompObjfObjInfoX )5k FMicrosoft Equation 3.0 DS Equation Equation.39qXv<] )kEquation Native 6_1126446752F ŠG ŠGOle CompObj fObjInfo!Equation Native 6_1126446846$F ŠG ŠGOle CompObj#% fObjInfo&"Equation Native #)_1126447193")F ŠG ŠG FMicrosoft Equation 3.0 DS Equation Equation.39qX tw  FMicrosoft Equation 3.0 DS Equation Equation.39qOle $CompObj(*%fObjInfo+'Equation Native ()X P d FMicrosoft Equation 3.0 DS Equation Equation.39qX Щ$\  FMicrosoft Equation 3.0 DS Eq_1126447204'6.F ŠG ŠGOle )CompObj-/*fObjInfo0,Equation Native -)_11264472263F ŠG ŠGOle .CompObj24/fuation Equation.39qX P d FMicrosoft Equation 3.0 DS Equation Equation.39qX Щ$\ ObjInfo51Equation Native 2)_112644725418F ŠG ŠGOle 3CompObj794fObjInfo:6Equation Native 7)_1126521121=F ŠG ŠGOle 8CompObj<>9fObjInfo?;Equation Native <) FMicrosoft Equation 3.0 DS Equation Equation.39qX  Oh+'00 DP l x  U4AHq}a`di5< ?2jh145n`H0W&0;^ qr7FۙAF`q?\ou??8pAC C`;LLJ% "CD1|,İrt?0eTqk'Dd @b 7 c $A#? ?3"`?#2q+RsMX`!E+Rsxt xcdd`` @c112BYL%bpu @c112BYL%bpu @c112BYL%bpu;vv0o8L+KRsA<.E.b Ya0Cbd`jL^Dd b ; c $A&? ?3"`?'2?do@46M/?d^a`!do@46M/?d:Rxcdd`` @c112BYL%bpu @c112BYL%bpu c $A)? ?3"`?*2\&r'c8Yg`!0&r'cΒ@|Hxcdd``> @c112BYL%bpuz {Sm`!dE1>z {:@xcdd`` @c112BYL%bpue:5Dd b C c $A.? ?3"`?/2=v"l;* =q`!v"l;* :`!Hxcdd`` @c112BYL%bpu;FiYčܯI.tKdYf.%-IJ.5${EAbl!q6OL]5"Dd b 9 c $A)? ?3"`?82lOc#оUH|h`!@Oc#оU@Hxcdd`` @c112BYL%bpu#RpeqIj.ŠV "|byx9-`1Dd $@b < c $A,? ?3"`?;2{tTfR'FkUWh`!OtTfR'FkUHD xڕPJA}w%#iaV&`",(xFr`?,Lr}ABev(7@X]s\!"R!EUU TK6 t4 ,dܗ\Xqŕn2)|B>tk%_GJ7>zHY c $A-? ?3"`?=2?oX2ph`!oX2p:@xcdd`` @c112BYL%bpu2Np>].eO[*h`!"p>].eO[xt xcdd``> @c112BYL%bpu @c112BYL%bpu~7ۏ/? N30@deh[N%y`d++&1`7 L@2Wda%ԧ\ u;~LLJ% "CX1P#İr!v1c +LfDd @b E c $A4? ?3"`?B2?ǒߧ?Ph`!ǒߧ?:R xcdd`` @c112BYL%bpu 1eM :EDd b K c $A:? ?3"`?J2=BU%:j'k=Eh`!BU%:j'k=E:`!Hxcdd`` @c112BYL%bpuDd b M c $A;? 
?3"`?L2=H#Ymrh`!H#Ym:`!Hxcdd`` @c112BYL%bpuDd b N c $A;? ?3"`?M2=H#Ymeh`!H#Ym:`!Hxcdd`` @c112BYL%bpuDd b O c $A9? ?3"`?N2=YCFrI`Xh`!YCFrI`:`!Hxcdd`` @c112BYL%bpueM :EDd b P c $A:? ?3"`?O2=BU%:j'k=EKh`!BU%:j'k=E:`!Hxcdd`` @c112BYL%bpu @c112BYL%bpu +L=0adbR ,.I @΃an K{DyK  _Ref52701544nDd Tb  c $A? ?3"`?92RϦ}5I4<`!RϦ}5I4<" e XJZxcdd`` @c112BYL%bpu=8MXAFh W]la'$37X/\!(?71XLH۳f:\Nq`ciINLLJ% "CD1byx96TWDd Tb  c $A? ?3"`?:2,{pXs6}`!u,{pXs6` PXJCxcdd``ed``baV d,FYzP1n:&! KA?H1Z , ǀqC0&dT201 @2@penR~CPHhXr @@ڈјUEa7L@2w;3M 2 M-VK-WMcNPs0r5my\ s)rȄJ>e.pJ; `padbR ,.IA!z9Dd dTb  c $A? ?3"`?;2L*$V5Sa/|4R`!L*$V5Sa/|4 ^ XJrxڥSJ@=wgZ.WE{E~-[H֮] "Z+ƹi)}@ńI9 ! mVWPDHG$|Wet.6 K#!qKFOr]DwI5qEΔ0z_= y' n\9y8-և# YY-ɑ3ouӝdzwGx)[A&}g(f^eufOߣS uRqVXUnXcR IAȹ|`Z?Й짫iwjf(}l5d+zf+b6v5ܩb~4xy8=j/*WDd hb  c $A? ?3"`?<2ߵ_FXmIR}`!uߵ_FXmIR@|Cxcdd``ed``baV d,FYzP1n:&! KA?H1Z, πqC0&dT20$1 @2@penR~CP^kS  @Ct9W1HF%1楈s`7L@223M 2 M-VK-WMcNPs05Cm y\ s)rcȄJ>e.pJ; `padbR ,.IA!`懺Dd Xb   c $A? ?3"`?=2:ܠv/`!:ܠv  7xڕK@]үkAD ҢMl7 ZҹAAwA* ;xwZ{{/)0L[1S80BBB}.иKK n^:GLό!K٣ ݭ4N}M+gy/%ˊu ْcU5i_lnû<5NҳHnc5p]BT>u_%/#~#uXyYCqG|眗u).zTÏllml֭.5Ǡ$ϓ,D 5xY;Doh/J>K! *iT.@]tPSW9'8[B5%4-FunƵ?ۡ2Dd b   c $A? ?3"`?>2<4%/)wN3~$`!4%/)wN3~$:@`!xcdd`` @c112BYL%bpuef4UDd b  c $A? ?3"`??2='BǑg`!'BǑg:@`!xcdd`` @c112BYL%bpu@@&^@5CJ\]aJT@!T [= Heading 3 & F >@&^ CJ\aJT@1T s* Heading 4 & F `@&^` CJ\aJP@P [= Heading 5 & F<@&56CJ\]aJF@F [= Heading 6 & F<@& 5\aJ@@@ [= Heading 7 & F<@&CJF@F [= Heading 8 & F<@& 6CJ]L @L [= Heading 9 & F<@&OJQJ^JaJDA@D Default Paragraph FontRiR  Table Normal4 l4a (k(No List*O* )iTODO 5phB'@B k}2Comment ReferenceCJaJ<< k}2 Comment TextCJaJ@j@ k}2Comment Subject5\H2H k}2 Balloon TextCJOJQJ^JaJ>OB> ' big equationdEH@OR@ ! referencesP^`P@L@b@ r0Date$da$CJaJJOJ r0Style1 d@& CJOJQJ\^JaJR>@R r0Title$<@&a$5CJ KHOJQJ\^JaJ 6U@6 B Hyperlink >*B*ph .           V.0;<ITUfgx0CTUV_AkQ ~ $9;NPeTbnIk Y!"%%''(**,.0 1 23556G8H999:;0=V?t?@2DFHKK O9QHQKRSUVXY{\<^W^^_&`acfgiZj~jlmqnsnrrsttSvwxzz}||}}P~Ds !"#$%&'()*+,/00000000000000000000x00 00V0V 0000000 00 000( 000000000( 00( 0000000000( 00%8 0%%0'0'8 0%%0*0*0* 0000000( 000050505050505050505( 0000V?0V?0V?0V?0V?( 0000K0K 009Q09Q09Q09Q09Q09Q09Q09Q 00<^0<^0<^0<^0<^0<^0<^0<^ 00Zj0Zj0Zj0Zj0Zj 00r0r 00t0t0t0t00z00{|0{|0{|0{|0{|0{|0{|0{|0{|0000000000000}<0 ;<TUfgx0T_kQ ~ $9;NPeTbnIk Y!"%%''(**,.0 1 23556G8H999:;0=V?t?@2DFHK O~jlmqnsnrxzz}||}}P~Ds/|00|00|00|00|00|00|00|00|00|00|00|00|00}|00}|00@0@0  00l  00W0W0W0W0W0W  00  0 00* 00 0 0 0 0 0 0 0 * 00* 0000000000* 00#: 0##0h%0h%: 0##0y(0y(0y( 0 0`/0`/0`/* 0`/`/030303030303030303* 0`/`/0=0=0=0=* 0`/`/<00<0C0`8T<0C0<0C0<0C0<0C0<0C0<0I0`T<0C0<0C0<0E0}<00D4D4D4|00|00<00|00xd<00<00`6%(X,5x?`BYF|KPS>]&fk4r!w|u.FJKLMOPQRTUVWXYZ\]^_`bcd =^|$.GINS[ae-Hp) = ? b | ~ i ?$Y$]$%%%F-Z-\-@@@@ A A=BQBSB|BBBhC|C~CCCC)H=H?HeHyH{HHHHHHH\IpIrIIJJXKlKnKKKKPPPZZZ`````````cccj5j;jnnn|ppprrrttth{{{{| |Z|t|x||||}/}1}}}}Q~l~n~E`b,zu.::::::::::::::::     X    l,2$Zkw) @0(  B S  ?. 
_Ref52171306 _Ref52679996 _Ref52701544 _Ref52701403 _Ref52175542 _Ref52175720 _Ref52778337 _Ref52247795 _Ref52171322 _Ref52680052 _Ref52706998 castro1999 dolev1995 haber1991 howard1988 lamport1997 lamport1984 merkle1980 mills1991 schneider1987V*0 1V?9Q<^<^Zj|}}Q~Eu/ ^#* 1s?GQGQG^V^}j|2}}o~c/ OLE Fields 79 OLE Fields 78 OLE Fields 77 OLE Fields 76 OLE Fields 74 OLE Fields 73 OLE Fields 72 OLE Fields 71 OLE Fields 60 OLE Fields 59 OLE Fields 58 OLE Fields 57 OLE Fields 56OLE Objects 25 OLE Fields 55OLE Objects 26 OLE Fields 54 OLE Objects 1 OLE Fields 24OLE Objects 27 OLE Fields 53OLE Objects 28 OLE Fields 52OLE Objects 29 OLE Fields 51OLE Objects 30 OLE Fields 50OLE Objects 31 OLE Fields 49OLE Objects 32 OLE Fields 48OLE Objects 33 OLE Fields 47OLE Objects 34 OLE Fields 46OLE Objects 35 OLE Fields 45OLE Objects 36 OLE Fields 44 OLE Fields 70 OLE Fields 69 OLE Fields 68 OLE Fields 67 OLE Fields 66 OLE Fields 65 OLE Objects 4 OLE Fields 21 OLE Objects 5 OLE Fields 20OLE Objects 37 OLE Fields 43OLE Objects 39 OLE Fields 41OLE Objects 40 OLE Fields 40OLE Objects 41 OLE Fields 39OLE Objects 42 OLE Fields 38OLE Objects 43 OLE Fields 37OLE Objects 44 OLE Fields 36OLE Objects 45 OLE Fields 35OLE Objects 46 OLE Fields 34OLE Objects 47 OLE Fields 33OLE Objects 48 OLE Fields 32OLE Objects 49 OLE Fields 31OLE Objects 50 OLE Fields 30OLE Objects 51 OLE Fields 29 OLE Objects 6 OLE Fields 19OLE Objects 55 OLE Fields 25OLE Objects 56OLE Objects 57OLE Objects 58OLE Objects 59OLE Objects 60OLE Objects 61OLE Objects 65OLE Objects 66OLE Objects 67OLE Objects 68OLE Objects 69OLE Objects 70OLE Objects 71OLE Objects 72OLE Objects 73OLE Objects 76 OLE Objects 7 OLE Fields 18 OLE Objects 8 OLE Fields 17 OLE Objects 9 OLE Fields 16OLE Objects 10 OLE Fields 15OLE Objects 11 OLE Fields 14OLE Objects 12 OLE Fields 13OLE Objects 13 OLE Fields 12OLE Objects 14 OLE Fields 11OLE Objects 15 OLE Fields 10OLE Objects 16 OLE Fields 9OLE Objects 18 OLE Fields 7OLE Objects 19 OLE Fields 6OLE Objects 20 OLE Fields 5OLE Objects 21 OLE Fields 4OLE Objects 22 OLE Fields 3OLE Objects 24 OLE Fields 12N` V  l l ?!?!v!v!!!""""$$$$H%H%%% &D&U&&3.3.l/l/11n:n:::;;<<<<<<?=?=j=j======= > >U>U>??MMN6N\NNyOOQ$RR SSTeTTUX^e^effgghhmimiujujjjkkkkoo!p!p\p\pppqquuvv/  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~2N` V  l l ?!?!v!v!!!""""$$$$H%H%%% &D&U&&3.3.l/l/11n:n:::;;<<<<<<?=?=j=j======= > >U>U>??MMN6N\NNyOOQ$RR SSTeTTUX^e^effgghhmimiujujjjkkkkoo!p!p\p\pppqquuvv/KMOSV^p) A C J b  i ?$^$`$f$%%%%%%F-]-^-d-e-f-@@@@@@@ A AAAA=BTBVBZB[B^B|BBBBBBhCCCCCCCCCCCC)H@HAHCHDHGHeHHHHHHPPPPPPZZZZZZ``````ccccccjX>>>?@@8AA1DGHKKNN[PQRRmZZZZ&`6aecLdiYj$n+nsnnooopqqrr ttz||}s}t}}} ~ ~1~2~O~~Dˀ̀U ,/333333333333333333333333333333333333333333333UgCS_~ B W $;Nk X!$_$`$a%%%%''**0 155V?t??@@7AABBCC1DGHKK[PQ9QHQ<^W^&`5aecKdiYjZj~jrrr ttttzzz{{||{|||st / Jon Howell *DhH lj   - 6u 7lj Ib< K T`W'"vZ \>3$ hh^h`hH. P^`PhH.. ^`hH... xp^`xhH....  ^`hH .....  X ^ `XhH ......  x^ `hH.......  8^`8hH........  `^``hH.........h P^`PhHh xx^x`hH.h 0^`0hH..h ^`hH... h ((^(`hH .... h ^`hH ..... h H H ^H `hH ...... h  ` ^ ``hH....... h h h ^h `hH........ hh^h`hH) ^`hH) 88^8`hH) ^`hH() ^`hH() pp^p`hH()   ^ `hH. @ @ ^@ `hH.   ^ `hH. hh^h`hH) ^`hH) 88^8`hH) ^`hH() ^`hH() pp^p`hH()   ^ `hH. @ @ ^@ `hH.   ^ `hH.h P^`PhHh xx^x`hH.h 0^`0hH..h ^`hH... h ((^(`hH .... h ^`hH ..... h H H ^H `hH ...... h  ` ^ ``hH....... h h h ^h `hH........h ^`hH.h ^`hH.h pLp^p`LhH.h @ @ ^@ `hH.h ^`hH.h L^`LhH.h ^`hH.h ^`hH.h PLP^P`LhH. 
hh^h`hH) ^`hH) 88^8`hH) ^`hH() ^`hH() pp^p`hH()   ^ `hH. @ @ ^@ `hH.   ^ `hH.^`o(. ^`hH. pLp^p`LhH. @ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH. hh^h`hH) ^`hH) 88^8`hH) ^`hH() ^`hH() pp^p`hH()   ^ `hH. @ @ ^@ `hH.   ^ `hH.h ^`hH.h P^`PhH..h 0^0`hH...h (x ^(`xhH.... h  @ ^ `hH ..... h  X^ `XhH ...... h ^`hH....... h 8H^`8hH........ h H`^H``hH......... \H -*D"vZb< K6u 7T`W I $                                   \Ps0g0g;dcp;dcprq]y1DW z{ &?B@9 $c$hN*s*O9+G.w.y/r0A1k}2~2{T7(+8B87E9T$ v)i/cek):"B{[(l(!H.M^ q#oYt}/Kz0@TTTT.P@UnknownGz Times New Roman5Symbol3& z Arial71 Courier5& zaTahoma"1hzzfzzfdyFoCoCA44d݂݂ 3QX ?M^:Scalable Byzantine-Fault-Quantifying Clock Synchronization Jon Howell Jon HowellFZZZZScalable Byzantine TR.doc###4           FMicrosoft Word Document MSWordDocWord.Document.89qRoot EntryA F(IData glWordDocument@_ObjectPoolC ŠGpފG      !"#$%&'()*+,-./7856 !&+05:>?@ABCDFGHIJKLMNORQSTUV1TablesSummaryInformation(B=DocumentSummaryInformation8ECompObjPj;Scalable Byzantine-Fault-Quantifying Clock Synchronizationcal Jon Howellzon on  Normal.dotz Jon Howellz2n Microsoft Word 10.0@F#@f@FfhG@FfhGo՜.+,D՜.+,|8 hp  Microsoft CorporationsC݂{ ;Scalable Byzantine-Fault-Quantifying Clock Synchronization TitleT@4 _PID_HLINKS_AdHocReviewCycleID_EmailSubject _AuthorEmail_AuthorEmailDisplayNameATf~<http://research.microsoft.com/ users  FMicrosoft Word Document MSWordDocWord.Document.89q/lamport/pubs/pubs.htmllamport-drummersY'tech report submission, MSR-TR-2003-67howell@microsoft.com Jon Howell