As database sizes grow to terabytes, generation often takes longer than evaluation. Large database load or generation operations last for more than a week. The goal here is to quickly generate a large database by using parallel algorithms and execution. To make the problem concrete, the goal is to generate a billion-record accounts table for the TPC-A benchmark [TPC]. Generating and loading this table using sequential algorithms would take several days. The goal here is to invent algorithms and techniques that generate this billion-record table and its indices in an hour.

In outline, the paper first postulates a model of parallel computer hardware and software, so that we can quantify the performance of each algorithm. Then, the paper shows how to convert a sequential load to a parallel load by partitioning the job and forking a process-per-partition. Next the tasks of synthetic data generation are investigated. Parallel algorithms are given for generating dense-unique-pseudo-random sequences, and for generating indices on these sequences. After that, the paper investigates generating non-dense non-uniform distributions, with special attention paid to Zipfian and self-similar distributions. First consider the parallel computer architecture and the associated performance and cost model.

2. The Computation Model

We assume a shared-nothing computer architecture typified by the Tandem, Teradata, or Gamma machines, by workstation clusters from DEC, IBM, HP, Novell, and Sun, and by processor arrays like the Intel Hypercube [Stonebraker, Horst, Teradata, DeWitt 1, DeWitt 3]. In these systems, each processor has a private memory and one or more discs. The processors are connected via a high-speed network and processes communicate via messages. Parallelism and minimal process interaction are major design goals in these systems. Shared-disc systems like IBM's Sysplex and Digital's VMSclusters are gravitating toward this shared-nothing architecture as the number of processors grows.

The ideas presented here apply to a spectrum of execution environments: a many-small environment of hundreds of processors, typified by Teradata and Gamma, and a few-big environment of tens of processors, typified by Tandem and VMScluster. In Table 1, the two systems each have 3 bips (billion instructions per second) of processing power, 10 gb of ram, and 1 tb of disc storage (1000 discs). The prices are estimates: 100 $/MIPS, 30 $/mb ram, and 1 $/mb disc. These approximate 1994 prices.

Table 1: Two large database machines of 1994, each costing 1.6 M$.

                Many-Small Processors (100)                    Few-Big Processors (10)
                cpus + memory             discs                cpus + memory            discs
  number:       100 x (30 mips + 100 mb)  1000 x (1 gb/disc)   10 x (300 mips + 1 gb)   1000 x (1 gb/disc)
  sum:          3 bips + 10 gb            1 tb                 3 bips + 10 gb           1 tb
  price:        100 x (3 k$ + 3 k$)       1000 x 1 k$          10 x (30 k$ + 30 k$)     1000 x 1 k$
  sum price:    600 k$                    1 M$                 600 k$                   1 M$
  total price:  1.6 M$                                         1.6 M$

The ideas here apply to both architectures, but to simplify the discussion, the many-small-processors design is assumed here. In this presentation, cpus are named by the numbers 0, 1, ..., 99.
Discs are attached to particular cpus, and are correspondingly named D$ddd where ddd is the disc number [0...999]. For example, disc 223 is named D$223 and is the third disc of processor 22.

Figure 2. A diagram of a many-small computer cluster consisting of 100 "cards", each containing a moderately powerful processor equivalent to the PC or workstation of 1993. The "card" includes the processor, its communication hardware, its memory, and a small array of discs. The cards are inter-connected by a high-speed interconnect shown as the "base-plane". It gives 1 GB/s point-to-point connectivity between any pair of processors.

For simplicity, assume that the data of a table or index is range-partitioned among all the 1000 discs in partitions of equal size. Each system has a slightly different syntax for table partitioning. To be concrete, we use Digital's Rdb syntax [Hobbs]. Partitions are called storage areas in Rdb and are often named by the disc on which they reside. Using Rdb syntax, the accounts table of the TPC-A Benchmark [TPC] on the many-small system would be defined by:

    create table ACCOUNTS (                                          (1)
        ID        unsigned integer not null,
        BALANCE   decimal(12,2)    not null,
        CUSTOMER  unsigned integer not null,
        FILLER    character(92)    not null,
        primary key (ID) );
    create storage map PARTITION for ACCOUNTS
        store using (ID)
        in D$000 with limit of   0999999,
        in D$001 with limit of   1999999,
        in D$002 with limit of   2999999,
        ...
        in D$997 with limit of 997999999,
        in D$998 with limit of 998999999,
        otherwise in D$999;

When an application selects, inserts, updates, or deletes a record of the accounts table, the SQL system locates the record in the correct partition. The application is unaware of the partitioning. Most SQL systems provide a variant of this transparent partitioning.

The computation model is fairly simple. A sequential record read or insert costs 5,000 instructions and a fraction of an IO. Algorithms here avoid anything but sequential record reads and writes because sequential operations can run at disk device speed (5 MB/second or 50,000 records per second), while random disk operations run a thousand times slower (50 IO/sec = 50 records/second) due to seek and rotational latencies. One-way process-to-process messages have a fixed cost of 3K instructions, and a marginal cost of one instruction per byte [Uren, Kronenberg, Thekkath]. Some systems cost ten times as much for such services. In any case, the algorithms here minimize sending messages. If necessary, they send a few large messages rather than many small ones. These computational costs are summarized in Table 3.

Table 3: Cost of basic operations (100-byte records).

    sequential SQL record read or insert     5,000 instructions + 0.01 IO
    random SQL record read or insert        25,000 instructions + 1.00 IO
    disk sequential read/write rate          5 MB/sec = 50,000 records/sec
    disk random read/write rate              50 IO/sec = 50 records/sec
    cost of a one-way M-byte message         3 k + M instructions (speed-of-light latency is minimal)
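As a sanity check on these constants, the short program below (our own back-of-the-envelope arithmetic, not part of any benchmark code) computes the sequential insert rate of one 30 mips processor and the resulting load times for a billion-record table. It reproduces the two-day and roughly-half-hour figures used in the sections that follow.

    #include <stdio.h>
    /* Load-time estimates implied by the cost model of Tables 1 and 3. */
    /* The constants come from the tables; the arithmetic is ours.      */
    #define RECORDS 1000000000.0  /* billion-record accounts table          */
    #define INSERT_COST    5000.0 /* instructions per sequential SQL insert */
    #define MIPS       30000000.0 /* one many-small cpu: 30 mips            */
    #define CPUS            100.0 /* processors in the many-small cluster   */
    int main(void)
    {   double rate = MIPS / INSERT_COST;          /* inserts/sec on one cpu */
        printf("one cpu: %.0f inserts/sec, load takes %.1f hours\n",
               rate, RECORDS / rate / 3600.0);
        printf("%.0f cpus: load takes %.1f minutes\n",
               CPUS, RECORDS / (rate * CPUS) / 60.0);
        return 0;
    }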
3. Sequential Database Generation

The discussion begins by showing how to sequentially generate and populate a table. This algorithm is then generalized to one that generates each partition in parallel. To make the discussion concrete, the following sections take generating the TPC-A accounts table as a running example [TPC]. The table will have one billion hundred-byte records (.1 tb) partitioned among the 1000 discs as described in the data definition Program (1) above. Each of the one thousand discs will store 100 mb of data as a B-tree [Knuth]. Since B-trees are only 69% full at equilibrium, each disc will use 150 mb of storage to hold its B-tree. This is well below the 1 gb capacity of small discs. The remainder of the disk space stores data from other tables. The accounts table above can be sequentially populated by the following simple SQL + C program:

    /* sequentially load records with key in [base, base+count) into tablename */
    void sequential_load(
        char *tablename,     /* name of table to be loaded */
        long base,           /* start key of load          */
        long count)          /* number of keys to load     */
    {
        exec sql begin declare section;
        long key;                                                    (2)
        exec sql end declare section;
        for (key = base; key < base + count; key++)
            exec sql insert into :tablename values (:key, 0, 0, "");
    }

In fact the entire database for a 10,000 tps-A database [TPC] can be loaded by the following program:

    sequential_load("accounts", 0, 1000000000);                      (3)
    sequential_load("tellers",  0,     100000);
    sequential_load("branches", 0,      10000);

Program (3) shows how to sequentially generate several tables with related statistics: the idea is that the branch:teller:account cardinalities should be in the ratios 1:10:100,000. The entity-relationship diagram (4) shows this relationship. Branch B has the ten tellers: 10B, 10B+1, ...; and has the hundred thousand accounts: 100000B, 100000B+1, .... The partitioning of these tables would be defined to give each of the thousand discs ten branch records, one hundred teller records, and one million account records. With 1,000 discs, disc i would get branch numbers [i*10^1 .. (i+1)*10^1 - 1], teller numbers [i*10^2 .. (i+1)*10^2 - 1] and account numbers [i*10^6 .. (i+1)*10^6 - 1].

[Diagram (4): Branch 1:10 Teller, Branch 1:100,000 Account.]                 (4)

4. Parallel Database Generation

The ideas in Programs (2) and (3) are fine for loading small tables (less than a million records), but they use a single processor and so run one hundred times slower than an algorithm that divides the task into a hundred smaller ones, each running in parallel on a separate processor. Program (3) runs at 6,000 records/second given the performance assumptions of Table 3 (5,000 instructions per insert at 30 mips implies 6,000 inserts per second). At that rate, the billion-record load would take almost two days. The same load could run in twenty minutes if done in parallel on the hundred-processor cluster described by Figure 2 and Table 3.

Parallel algorithms require a way to create processes on specific cpus. Assume that a process can be created in a cpu by calling:

    int fork(int cpu)    /* creates a process in the cpu and returns its id */

The fork procedure is much like the UNIX fork() [Tanenbaum] but has a parameter specifying the cpu in which the process is to be created. As with UNIX, the fork returns zero to the child process, but returns the process identifier to the parent (forking) process. The child is a clone of the parent, executing the same program with the same current state, but a completely separate process environment.

Program (5) below shows how to convert the sequential load of Program (2) above to a parallel loader. A parallel loader proceeds by spawning a load process in each processor. The spawning time is minimized by forking a binary tree of processes, forking 2^n processes at depth n of the tree. Each node follows the logic: (1) If I am not a leaf, fork my left child and right child. (2) Load my partition. Figure 4 illustrates the idea.
The logic is designed for a cluster of 2^N nodes, but it works on smaller clusters of M nodes. If each fork takes about one second (a high estimate), the entire startup of one hundred processes will complete within eight seconds. Assuming the forking logic copies code and data from the forker, there will be no bottlenecks at startup.

Figure 4. The process forking logic for a cluster of 15 processors. cpu 7 forks loaders into cpus 3 and 11, then cpu 7 proceeds to load partition 7.

The pseudocode for parallel load is:

    #define CPUS 100                                                 (5)
    /* parallel load records into tablename:                             */
    /* first fork right and left child processes if right != left,       */
    /* then sequentially load the partition of this cpu                  */
    void parallel_load(
        char *tablename,   /* name of table to be loaded */
        long records,      /* number of records to load  */
        long left,         /* cpu of left fork subtree   */
        long right,        /* cpu of right fork subtree  */
        long depth)        /* recursion depth in forking */
    {
        long my_cpu = floor((left + right) / 2);
        long part_size = floor(records / CPUS);
        long him;                          /* id of forked process       */
        depth++;                           /* increase recursion depth   */
        if (depth == 1)                    /* at the root, fork into the */
            { if (fork(my_cpu)) return; }  /* center cpu; parent returns */
        /* spawn left subtree of processes */
        if (left < my_cpu)
        {   him = fork(floor((left + my_cpu - 1) / 2));
            if (him == 0)                  /* child code */
            {   parallel_load(tablename, records, left, my_cpu - 1, depth);
                return;
            }
        }
        /* spawn right subtree of processes */
        if (my_cpu < right)
        {   him = fork(floor((my_cpu + 1 + right) / 2));
            if (him == 0)                  /* child code */
            {   parallel_load(tablename, records, my_cpu + 1, right, depth);
                return;
            }
        }
        /* fill the partition of this process */
        sequential_load(tablename, my_cpu * part_size, part_size);
    }   /* end of parallel_load() */

    /* invoke parallel loader for a billion account records on 100 cpus */
    parallel_load("accounts", 1000000000, 0, CPUS - 1, 0);

This code is easily modified to fork a process-per-disc rather than a process-per-processor if the discs are the bottleneck. Notice that each generator process uses the same table name to generate the data (transparency). The table partitioning criterion causes the records all to go to each loading process's local disc.

Parallelism often suffers from problems of startup, interference, and skew [Gerber, Smith]. Program (5) minimizes startup problems by parallel forking. Once the load process begins, the underlying system acquires locks on partitions rather than on whole tables -- at least that is the way many SQL systems work. So, each partition loader can proceed in parallel and in isolation. The load operation is typically not covered by transaction protection, so the recovery log is not a bottleneck -- rather it uses the old-master-new-master recovery technique of dumping a copy of the table when the load completes. Given these arguments, startup, interference, and skew should not be a problem for the parallel load. Using the assumptions of Figure 2 and Table 3, the algorithm should generate 600,000 records per second (60 MB/s) and generate 1 B records in less than 30 minutes.

5. Dense Unique Random Data Generation

The parallel data generator of the previous section correctly generated the sequential account numbers, but it did not generate the customer numbers -- it just left them as zeros. Customer number is an example of a general problem.
Generating synthetic databases often requires a sequence of numbers (i.e. field values) with the following properties:

    Dense:  All the integers in [0..n] appear in the sequence.
    Unique: Each integer appears exactly once.
    Random: The sequence appears to be "random" (is pseudo-random).

These properties are often needed to make the cardinalities of selection expressions and join expressions predictable -- for example, each customer should have exactly one account. Some applications need synthetic data that is not uniformly distributed. Section 8 gives some ways to transform uniform distributions into Gaussian, exponential, Zipfian, and other distributions.

We know of several ways to generate dense-unique-random numbers in the range [1..n]. The original generator for the Wisconsin Benchmark [Bitton 1] kept an initially zero bitmap of length n and used the system random number generator to pick the next free element for the series. This algorithm was replaced in the original ASAP generator [Bitton 3] by a shuffle that built an array of pairs <(i, random()) | i = 1, ..., n>. The array was then sorted on the second element to produce a shuffle of the first element. The bit filter algorithm uses order n space and order n^2 time (about n tries are required to set the last bit in the bitmap). The shuffle algorithm takes n log n time and linear space, so is clearly superior to the bit filter. If space is not an issue, shuffle is a good way to generate a dense-unique random series. (A linear-time, in-place variant is sketched below.)
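For tables that do fit in memory, the same shuffle can be produced in linear time and in place, without the sort, by the classic exchange shuffle of Knuth [Knuth]. The sketch below is ours, not the ASAP generator's; it assumes a random() function returning uniform non-negative longs, and the modulus introduces a small bias that is harmless for benchmark data.

    #include <stdlib.h>
    /* Fill a[0..n-1] with a dense-unique-random permutation of 1..n   */
    /* in O(n) time and O(n) space (Knuth's exchange shuffle).         */
    void shuffle(long a[], long n)
    {   long i, j, tmp;
        for (i = 0; i < n; i++)        /* start with the dense sequence */
            a[i] = i + 1;
        for (i = n - 1; i > 0; i--)    /* swap each element with a      */
        {   j = random() % (i + 1);    /* randomly chosen earlier one   */
            tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    }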
But for large databases, or if several independent series are to be generated, a more space-efficient algorithm may be needed. The obvious choice is to generalize the shuffle to a sort:

Sort: Create a SQL table of two columns containing the dense sequence 1..N and a random sequence based on a popular random number generator. Then the dense sequence is ordered by the random sequence. Program (7) demonstrates this.

    exec sql create table T (sequence integer, rand integer);  /* create temp table  */
    for (i = 1; i <= N; i++)                                   /* fill it with pairs */   (7)
        exec sql insert into T values (:i, :random());
    /* the sequence values are dense and unique;                   */
    /* ordered by the rand field, they will be random              */
    exec sql declare answer cursor for          /* define a SQL cursor to        */
        select sequence from T order by rand;   /* get data in the random order  */
    exec sql open answer;                       /* read the table via that cursor*/
    for (i = 0; i < N; i++)
    {   exec sql fetch answer into :next_value; /* set next_value from next record */
        /* process next value */
    }
    exec sql close answer;

This algorithm takes N log N time, ~sqrt(N) main-memory space (sorting), and ~N disc space (storage of the table). For a billion records, the many-small configuration can do the sort in about thirty minutes and the job in less than an hour. Such schemes are somewhat inconvenient because they construct a set of files to drive data generation. It would be more convenient to have a simple subroutine that could generate the next element of the desired sequence in constant time and space (say 25 instructions and 100 bytes of storage).

The idea for such an algorithm is to generate the numbers using a generator of the cyclic group of integers under multiplication. In essence, a random number generator is constructed for elements in the desired range. The algorithm is: Pick a prime p larger than n and a generator g for the multiplicative group modulo p. Then the series is:

    < g^i mod p | i = 1, 2, ..., p-1 >

with elements larger than n simply discarded. The program to generate the series is:

    #define P xxx                /* see Table 5 for good values     */   (8)
    #define G xxx                /* see Table 5 for good values     */
    static long seed = G;        /* start the seed at G             */
    long next_value(long N)      /* function to compute next value  */
    {
        seed = (G * seed) % P;   /* seed = next in series mod prime */
        while (seed > N)         /* discard all > N                 */
            seed = (G * seed) % P;
        return seed;             /* return new value                */
    }                            /* end of generator                */

This scheme, due to Gray and Englert, was successfully used to generate very large "Wisconsin Benchmark" databases on the Intel Hypercube and is now the standard way to generate Wisconsin databases [DeWitt 2]. To understand how it works, consider the numbers between 1 and 10. The powers of 8 mod 11 form a dense unique sequence of these numbers: 8, 9, 6, 4, 10, 3, 2, 5, 7, 1. In general, if n is a prime, the multiplicative group consisting of [1..n-1] has many generators: elements whose powers enumerate the group without repetition until they generate 1.

Clearly, the generator scheme is preferable -- it takes constant time and constant space. But not just any generator will give a good pseudo-random sequence. There are many tests for "randomness". We used the spectral test recommended by Knuth [Knuth]. In his terminology, all the random number generators in this paper pass the spectral test "with flying colors" in dimensions 2 through 6. We applied the test to primes just larger than powers of ten and recommend the generators of Table 5. Section 7 describes how to use powers of 2 instead of primes.

Table 5. A list of recommended primes and generators for each decade from 10 to one billion.

    Decade            Prime            Generator
    10                11               2
    100               101              7
    1,000             1,009            26
    10,000            10,007           59
    100,000           100,003          242
    1,000,000         1,000,003        568
    10,000,000        10,000,019       1,792
    100,000,000       100,000,007      5,649
    1,000,000,000     2,147,483,647    16,807

When the prime and generator are large compared to the machine's arithmetic, one needs to use a technique shown in Program (9), due to Schrage [Schrage], to keep the results from overflowing. P and G must be chosen so that B is less than A. This will always be true if g < sqrt(p). Machines with 64-bit registers and arithmetic make this technique unnecessary.

    #define P xxx                    /* see Table 5 for good values      */   (9)
    #define G xxx                    /* of prime and generator           */
    #define A (P / G)                /* A = prime / generator            */
    #define B (P % G)                /* B = prime mod generator          */
    static long seed = G;            /* start the seed at G              */
    long next_value(long N)          /* function to compute next value   */
    {
        long seed_over_A, seed_mod_A;
        do                           /* loop if next is bigger than N    */
        {   seed_over_A = seed / A;  /* compute the components of seed   */
            seed_mod_A  = seed % A;  /* compared to A = (P/G)            */
            /* use Schrage's function to compute G*seed mod P            */
            seed = (G * seed_mod_A) - (B * seed_over_A);
            if (seed < 0) seed = seed + P;   /* without overflow         */
        } while (seed >= N);         /* discard all >= N                 */
        return seed;                 /* return new value                 */
    }                                /* end of next_value()              */
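To gain confidence in a particular prime and generator, it is cheap to enumerate the full period once and check the dense-unique property directly. The harness below is our own test scaffolding, not part of the generator; it checks the p = 11, g = 8 example above, and the same loop works for any (prime, generator) pair of Table 5 if the array is allocated on the heap.

    #include <stdio.h>
    #define P 11                   /* small prime from the example above */
    #define G  8                   /* generator of the group mod 11      */
    /* Enumerate seed = G^i mod P for i = 1..P-1 and verify that each    */
    /* value in [1..P-1] appears exactly once (dense and unique).        */
    int main(void)
    {   int seen[P] = {0};
        long seed = G, i;
        for (i = 1; i < P; i++)
        {   seen[seed]++;              /* count each value generated     */
            seed = (G * seed) % P;
        }
        for (i = 1; i < P; i++)
            if (seen[i] != 1)
            {   printf("not a generator: %ld appears %d times\n", i, seen[i]);
                return 1;
            }
        printf("dense and unique: 1..%d each appear exactly once\n", P - 1);
        return 0;
    }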
6. Generating Random Data

The ideas of the previous section can now be applied to generate a complete table. Program (3) above gave an example of generating several tables with related statistics: the idea there was that the branch:teller:account cardinalities should be in the ratios 1:10:100,000, as in the schema (4). This is a general phenomenon, but the requirements are often more complex. One requirement that was skipped in Program (2) was that the customer id field be unique and be uncorrelated with the account id -- rather, it was just filled with zeros. The requirement is that each customer have a unique bank account, as shown in (9).

[Diagram (9): Branch 1:10 Teller, Branch 1:100,000 Account, Customer 1:1 Account.]   (9)

Using the ideas of the previous section, Program (2) can be refined to generate each partition of the account table in parallel, including the random-unique-dense customer number, as follows:

    /* sequentially load records into the i'th partition of tablename */
    void sequential_load(
        char *tablename,     /* name of table to be loaded   */
        long records,        /* number of records in table   */
        long count,          /* # records in each partition  */
        long part_no)        /* number of partition to load  */
    {
        long i, j;                       /* loop control variables       */
        long base = part_no * count;     /* start key for this partition */
        long parts = records / count;    /* number of partitions in file */
        exec sql begin declare section;  /* SQL variables                */
        long key, customer;              /* account id, customer id      */   (10)
        exec sql end declare section;
        for (i = 0; i < part_no; i++)    /* the i'th processor skips     */
            customer = next_value(records);  /* ahead to s(i) in the series */
        for (key = base; key < base + count; key++)  /* generate the partition */
        {
            customer = next_value(records) - 1;      /* get next customer number */
            exec sql insert into :tablename (id, balance, customer, filler)
                values (:key, 0, :customer, "");
            for (j = 0; j < CPUS - 1; j++)     /* skip over the values used  */
                customer = next_value(records);/* by the other partitions    */
        }
    }   /* end of sequential_load() */

The generator of the i'th cpu is invoked as:

    sequential_load("accounts", records, records/CPUS, i);

Each of the one hundred processors will compute the same random series based on the next_value(records) procedure. But each generator will only use one hundredth of the series. If the series is s_0, s_1, s_2, ..., s_999999999, the i'th cpu will use the subsequence s_i, s_(cpus+i), s_(2*cpus+i), s_(3*cpus+i), s_(4*cpus+i), ... Each of these subsequences is random and unique, and the union of them is dense. The i'th generator begins by calling next_value() i times to skip over the values of the other generators. Then, after inserting each record, it calls next_value() another CPUS-1 times to skip over the values belonging to the other partitions.

There is a lot of wasted work in this design: the series is computed one hundred times and each generator only uses 1% of the values it generates. The premise is that calls to next_value() are cheap (~25 instructions), so that 100 calls (2,500 instructions) is small compared to the insert cost (~5,000 instructions). The generator should produce 1 B records in less than 1 hour on the many-small configuration.

Scaling this algorithm to thousands of generators requires a variation that has less wasted work. One might partition the series into 1000 segments and precompute the starting point p_i for each partition. These values could be stored in a global array. Then each partition generator would start at s_x, for some x, and would use the sequence s_x, s_(x+1), s_(x+2), ..., and no calls to next_value() would be wasted. A different approach avoids this pre-computation and minimizes wasted computation. Table 5 shows that for large N, P can be chosen just slightly larger than N (within 1% of it). Suppose we generate a database of P-1 elements rather than N elements. In that case, no members of the series would be discarded. In turn this means that each partition can compute its next element by multiplying the previous element by G^cpus mod P, as sketched below.
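The skip-ahead constant G^cpus mod P, and each partition's starting element, can be computed in O(log cpus) modular multiplications by square-and-multiply. The sketch below is ours; it assumes 64-bit (long long) arithmetic so the intermediate products cannot overflow for primes near a billion (on 32-bit machines, Schrage's decomposition of Program (9) would be applied to each multiplication instead).

    /* Compute (g^e) mod p by repeated squaring in O(log e) multiplies. */
    long power_mod(long g, long e, long p)
    {   long long result = 1, base = g % p;
        while (e > 0)
        {   if (e & 1)                        /* low bit set: multiply in */
                result = (result * base) % p;
            base = (base * base) % p;         /* square for the next bit  */
            e = e >> 1;
        }
        return (long)result;
    }
    /* partition j then starts at a = power_mod(G, j, P) and steps       */
    /* through its series via seed = (seed * b) mod P,                   */
    /* where b = power_mod(G, CPUS, P).                                  */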
Let n = N/cpus and assign the following series to the j'th partition, for j = 0, ..., cpus-1:

    < (G^j * (G^cpus)^i) mod P | i = 0, ..., n-1 >

This series is very easy for partition j to compute. First it computes the first element, a = G^j mod P, and then b = G^cpus mod P. These two numbers can be computed in about log(cpus) multiplies and divides. Then the j'th partition uses the series:

    < (a * b^i) mod P | i = 0, ..., n-1 >

In this approach, there is no need to precompute a partition table and there are no wasted calls to next_value(). If we accept this relaxed definition of partitions (the last partition may be slightly larger than the others and some elements may be a little larger than N), then it will turn out that computing indices is much easier.

These techniques can easily generate tables with 1:N relationships. Suppose, as in (9), the branches table is to have 1,000 records and the tellers table is to have 10,000 records. If P is chosen as 1,009 and G is chosen as 26 (as Table 5 recommends), then branches can be generated as in Program (10). By using the same prime and generator for the branch field of the 10,000-record tellers table, the tellers table will have exactly 10 tellers for each branch identifier. More generally, each record in one table will match the value of m records in the second table if the second table has m times as many records, and the "join-fields" of the two records use the small-table generator.

7. Generating Indices on Random Data

All the programs so far have carefully (but implicitly) generated data in primary key order: the next generated record is placed right after the previous record in the B-tree or other clustering mechanism. This means that the programs have generated the data in sequential order, so disc IO time has not been an issue. Modern discs can absorb data at 5 MB/s, with the possibility of much higher data rates if striping is used [Kim]. Assuming 100-byte records, this is 50,000 records per second. If each generated record went to a random disk page, data rates would drop by a factor of 1000 to 50 records per second, since each record would cause a seek, rotate, a read transfer, and then a rotate and a write transfer. On 1993 discs, each random disk IO consumes about 20 ms of disc time, and the rate is at most 50 IO/s. So, it is essential that records be generated in sequential order unless the entire table can fit in main memory.

Program (10) generates the account table in account.id order, but it generates the account.customer field in random order. Applications often need an index on such random fields. For example, an account.customer index would allow quick lookup of a customer's account given the customer number. Such an index is defined in SQL by:

    create unique index account_customer on account (customer);

The index is actually a table with the schema:

    create table account_customer (
        customer  unsigned integer not null,
        id        unsigned integer not null,                         (12)
        primary key (customer),
        foreign key (id) references account(id) );

If (i, j) is a record in the account_customer index, then i is the j'th value returned by next_value(records); or, using G and P as defined in (8): i = G^j mod P. More formally, j is the discrete logarithm of i [Coppersmith]. How can this index be generated in parallel with linear speedup and scaleup? One could compute the discrete logarithm, but [Coppersmith] indicates that each computation would be millions of instructions.
Three schemes can be used:

(1) Scan-and-sort: Read the base table in parallel, projecting out the two desired fields, and parallel sort the result into the target index. In SQL this would be expressed as:

    insert into account_customer
        select customer, id from account order by customer;

(2) Generate-and-sort: In each processor, generate the index data to be stored by that processor's disks, sort it, and then insert it into the local index partitions.

(3) Compute: Compute the discrete logarithm quickly and generate the index in the same way one generates the base table.

The index is one billion records of eight bytes each, 8 gb in all. So, each of the 100 processors must deal with an 80 mb partition of the index. This will just fit in each processor's 100 mb memory.

In the scan-and-sort approach (scheme 1 above), the processors can generate the index data in parallel with the data generation, sending index records to the appropriate partitions (cpus) as the base table is generated locally. The receiving processors can sort the indices locally in their memory as the data arrives or is generated. This is a credible and scaleable technique, needing about 10 mb/s network bandwidth to move the 8 gb from source to destination for a one-hour job. But the technique is memory intensive, just barely fitting in the processor memories. If the table keys were larger or if there were more indices, then the scan-and-sort technique would require a disk-based sort.

Scheme 2 is a more cpu-intensive sorting scheme, but uses no network messages. Each processor generates the entire base-table sequence and extracts the index subsequence that applies to the local processor. In particular, if each partition has R index records and if the whole sequence is s_1, s_2, s_3, ..., then the i'th processor uses the subsequence:

    < (s_j, j) | i*R <= s_j < (i+1)*R >

These are the index entries for the i'th partition. They are then sorted on the s_j attribute and inserted into the local index partition in sequential order. The following program shows this cpu-intensive enumerate-and-sort algorithm.

    /* global variables: seed, generator, CPUS, my_cpu */
    /* R is the number of index records per partition  */
    void index_load(long records)
    {
        long R = records / CPUS;            /* R records per partition  */
        long my_first_record = my_cpu * R;  /* base of index partition  */
        long i, j = 0, customer;            /* working variables        */
        exec sql begin declare section;
        struct {
            long account;    /* array holding the in-memory index, to be */
            long customer;   /* sorted on the "random" customer id       */
        } sorted[R];                                                     (13)
        exec sql end declare section;
        /* fill in the array with the unsorted values */
        for (i = 0; i < records; i++)           /* for each account number  */
        {
            customer = next_value(records);     /* get next customer number */
            if ((my_first_record <= customer) &&     /* if customer # is in */
                (customer < my_first_record + R))    /* this partition      */
            {   sorted[j].account = i;
                sorted[j].customer = customer;
                j++;
            }
        }   /* assert: sorted[] now holds this partition's R index entries */
        /* sort the array on the second attribute (customer) */
        sort(sorted) on sorted.customer;    /* (pseudocode for a memory sort) */
        for (j = 0; j < R; j++)             /* copy the array to the index   */
            exec sql insert into account_customer
                values (:sorted[j].customer, :sorted[j].account);
    }

The generation step should take 30 instructions per iteration, and there are 1 billion iterations, so it will take 1000 seconds on a 30 mips computer. The sort deals with ten million records.
At 30 instructions per compare/exchange, an n log(n) sort will need about 300 seconds. Once that is complete, the write of the data in bulk to the index (at 1,000 instructions/insert) should take 300 seconds. This adds up to about thirty minutes. In summary, indices for large tables can be built in parallel while the base tables are being built. The generator described here can run in parallel with the base table generation if sufficient processors and memory are available.

The third index technique involves quickly computing discrete logarithms. That is, given alternate-key value k, quickly compute primary-key value i such that

    k = G^i mod P.                                                   (14)

Solving this problem for arbitrary k when p is a large prime is believed to be quite difficult. Indeed, this is what makes some cryptographic protocols seem secure. Even for smaller primes, around a billion, each discrete logarithm calculation takes about sqrt(p) time and space. The sorting algorithm above would be faster. Picking p as a power of 2, say 2^n, allows computing discrete logs in n steps (log P time) and constant space. The following equation is the key to finding the discrete log of k when p is a power of 2:

    g^(2^i) = 1 + 2^(i+2)   (mod 2^(i+3)).                           (15)

The problem is that the values of the series G, G^2, G^4, ..., G^(2^i), ... are all congruent to 1 mod 4 (all their binary representations end in "01"). The solution is to divide the numbers by 4 to get a dense series. The following code computes the discrete log of k (the 2-adic log):

    #define POWER 32                 /* P will be 2^32                   */
    #define G 37117                  /* generator for P, from Table 6    */   (16)
    static unsigned long P_MASK = 0xFFFFFFFF;  /* mask avoids mod-P division */

    /* do mod P multiplication in the *4 + 1 space.  The equation is:    */
    /* (a * 4 + 1) * (b * 4 + 1) = (a + b + 4 * a * b) * 4 + 1           */
    unsigned long mul(unsigned long a, unsigned long b)
    {   return ((a + b + 4 * a * b) & P_MASK);   }

    /* discrete_log(j) returns the discrete log of k = 1 + 4*j           */
    /* with respect to G.  At entry G^x = 1 + 4*j.                       */
    /* The invariant of the loop is: G^(x + ans) = 1 + 4*up,             */
    /* where the low-order i bits of up are zero.                        */
    unsigned long discrete_log(unsigned long j)
    {
        unsigned long up = j;        /* up is j with powers of G removed */
        unsigned long i;             /* index on radix bits of up        */
        unsigned long ans = 0;       /* the target                       */
        unsigned long radix = 1;     /* radix = 2^(i-1) in the loop below*/
        unsigned long Gpow = G;      /* Gpow = G^(2^(i-1)), encoded      */
        for (i = 1; i < POWER; i++)  /* for each bit of up               */
        {   if (up & radix)          /* if that bit of up is set         */
            {   ans = ans + radix;   /* equation (15) says add 2^(i-1)   */
                up = mul(up, Gpow);  /* preserve the loop invariant      */
            }
            radix = radix * 2;       /* advance radix                    */
            Gpow = mul(Gpow, Gpow);  /* advance Gpow = G^(2^(i-1))       */
        }
        /* now up = 0, so G^(x+ans) = 1 by the invariant; since the      */
        /* order of G is 2^(POWER-2), x = 2^(POWER-2) - ans              */
        return ((1UL << (POWER - 2)) - ans);  /* the discrete log of k   */
    }   /* end of discrete_log() */

Table 6 is a catalog of values for POWER and G which have passed the spectral test.
Table 6. Powers of 2 and generators to compute discrete logs.

    Field (max value)               POWER    G         G^-1
    1...2^8-1  (255)                10       29        565
    1...2^9-1  (511)                11       29        565
    1...2^10-1 (1,023)              12       53        541
    1...2^11-1 (2,047)              13       53        4,637
    1...2^12-1 (4,095)              14       117       4,061
    1...2^13-1 (8,191)              15       125       30,933
    1...2^14-1 (16,383)             16       229       48,365
    1...2^15-1 (32,767)             17       221       55,157
    1...2^16-1 (65,535)             18       469       77,693
    1...2^17-1 (131,071)            19       517       31,437
    1...2^18-1 (262,143)            20       589       329,349
    1...2^19-1 (524,287)            21       861       747,765
    1...2^20-1 (1,048,575)          22       1,189     2,638,637
    1...2^21-1 (2,097,151)          23       1,653     5,577,181
    1...2^22-1 (4,194,303)          24       2,333     12,124,469
    1...2^23-1 (8,388,607)          25       3,381     32,611,613
    1...2^24-1 (16,777,215)         26       4,629     51,785,021
    1...2^25-1 (33,554,431)         27       6,565     32,056,877
    1...2^26-1 (67,108,863)         28       9,293     260,289,669
    1...2^27-1 (134,217,727)        29       13,093    94,679,213
    1...2^28-1 (268,435,455)        30       18,509    561,787,013
    1...2^29-1 (536,870,911)        31       26,253    31,247,429
    1...2^30-1 (1,073,741,823)      32       37,117    3,730,050,133
    1...2^31-1 (2,147,483,647)      33       52,317    766,551,637
    1...2^32-1 (4,294,967,295)      34       74,101    6,731,589,341
    1...2^33-1 (8,589,934,591)      35       104,581   27,095,900,237
    1...2^34-1 (17,179,869,183)     36       147,973   5,486,951,117

The use of numbers near a billion strains the word size of "old" 32-bit computers. In particular, if p is bigger than 2^16 or so, multiplication modulo p cannot be done without some programming trick. Schrage's technique, as shown in Program (9), can be used to fit such arithmetic into small words [Schrage].

In summary, indices on synthetically generated data can be built in one of three ways: scan-and-sort, generate-and-sort, or compute. The computational method has some restrictions on the size of the table and on the generator, but is the most efficient approach for large tables. The computational approach is nicely suited to parallel algorithms.

8. Generating Data Having Non-Uniform Distributions

Having explained how to generate unique data, we now consider generating other data distributions. The examples above all required dense-unique values. Often, the database needs values obeying some common distribution. The sizes of cities, the lengths of words, and the frequencies of words are known to follow a Zipfian distribution. Measurement errors often obey a Gaussian distribution, and the inter-arrival intervals of events often follow a Poisson or negative exponential distribution. Such domains are easily generated by skewing a uniform distribution. This section catalogs the standard distributions, and adds a little to the generation of self-similar and Zipfian distributions.

Program (10) demonstrated the simplest case, repeating some value a constant number of times in another field. It generates ten accounts per branch, repeating each branch number ten times in the account.branch domain. Suppose we want the values of some field, or the number of child records, to follow some more complex distribution.
Then the following code might be appropriate:

    create table parent (
        master  integer not null,
        rest    char(96),
        primary key (master) );
    create table child (
        master  integer not null,
        detail  integer not null,
        rest    char(92),
        primary key (master, detail),
        foreign key (master) references parent );

    /* sequentially load parent records with key in [base, limit), and a */
    /* distribution-driven number of child records under each parent     */
    void sequential_load(
        char *parent,     /* name of parent table  */
        char *child,      /* name of child table   */
        long base,        /* start key of load     */
        long limit)       /* first key after load  */
    {
        exec sql begin declare section;
        long master, detail;            /* master and detail key values */   (25)
        long count;                     /* # of children of this master */
        exec sql end declare section;
        for (master = base; master < limit; master++)  /* for each master key */
        {
            exec sql insert into :parent values (:master, "");  /* add master rec */
            count = distribution();           /* create that many child recs */
            for (detail = 0; detail < count; detail++)
                exec sql insert into :child values (:master, :detail, "");
        }   /* end master loop */
    }   /* end of sequential_load() */

This code encapsulates the distribution of child cardinalities. It only remains to describe ways of generating the popular distributions. The following catalog presents each distribution's definition and the code to generate it. See [Ripley], [Jain], or [Press] for more details. The programs below assume randf() returns values distributed uniformly in [0..1], and random() returns values distributed uniformly over the non-negative integers, or some approximation thereof (e.g. [0..2^32]).

The uniform distribution
    equation: f(x) = 1, for x in [0,1]
    mean: m = 0.5    standard deviation: s = 1/sqrt(12) ~ .29
    inverse distribution: x = randf(), where randf() is in [0,1]
    code:
        double uniform() { return randf(); }

Negative exponential
    equation: f(x) = lambda * e^(-lambda*x), for x in [0,infinity)
    mean: 1/lambda    standard deviation: 1/lambda
    inverse distribution: x = -ln(randf())/lambda, where randf() is in (0,1]
    code:
        double neg_exp(double lambda)
        {   return - log(randf()) / lambda;   }

Gauss = Normal distribution
    equation: f(x) = (1/(sigma*sqrt(2*pi))) * e^(-((x-mu)^2)/(2*sigma^2)),
              for x in (-infinity, infinity)
    mean: mu    standard deviation: sigma
    approximation: mu + sigma * (sum over i = 1..12 of (randf() - 0.5));
              the deviation from a true normal is ~.5% (see [Ripley, pp. 54])
    code:
        double gauss(double mu, double sigma)
        {   int i;
            double ans = 0.0;
            for (i = 0; i < 12; i++) { ans = ans + randf() - 0.5; }
            return (mu + sigma * ans);
        }

Poisson
    equation: f(k) = e^(-lambda) * lambda^k / k!, for k = 0, 1, ...
    mean: lambda
    inverse distribution: see [Ripley, pp. 55]
    code:
        long poisson(double lambda)
        {   long n = 0;
            double c = exp(-lambda);
            double p = 1.0;
            while (p >= c) { p = p * randf(); n++; }
            return (n - 1);
        }

Self-similar (80-20 rule)
    Integers between 1...N. The first h*N integers get 1-h of the
    distribution. For example: if N = 25 and h = .20, then 80% of the
    weight goes to the first 5 integers, and 64% of the weight goes to
    the first integer.
    code:
        long selfsimilar(long n, double h)
        {   return (1 + (long)(n * pow(randf(), log(h) / log(1.0 - h))));   }

Zipf's "Law"
    Integers between 1...N.
    Integer k gets weight proportional to (1/k)^theta, where
    0 < theta < 1 is the skew.
    code:
        long zipf(long n, double theta)
        {   double alpha = 1 / (1 - theta);
            double zetan = zeta(n, theta);
            double eta = (1 - pow(2.0 / n, 1 - theta)) /
                         (1 - zeta(2, theta) / zetan);
            double u = randf();
            double uz = u * zetan;
            if (uz < 1) return 1;
            if (uz < 1 + pow(0.5, theta)) return 2;
            return 1 + (long)(n * pow(eta*u - eta + 1, alpha));
        }

The only "new" distribution here is the self-similar one, often used for situations following the 80/20 rule or some other highly skewed self-similar distribution. Self-similar distributions have the property that within any region of the distribution, the skew is the same as in any other region. So, for example, all subranges of the 80/20 (h = .20) self-similar distribution follow the 80/20 rule. A set of values of the form 1..k is called a "hot spot" because any of the values in this set has more weight (and hence is "hotter") than any value outside the set. Self-similar distributions are characterized by the property that hot spots have a distribution similar to the entire range of values. The Zipf distributions are characterized by the property that the frequency distribution is a straight line when plotted on a log-log graph.

If the hot spot is supposed to be randomly spread throughout a range of values, then it is necessary to permute the values randomly. Generating random permutations is discussed elsewhere in this paper. In principle, generating a random permutation and generating a distribution are independent problems. However, hot-spot distributions like the self-similar and Zipf distributions can be permuted more easily than general distributions. The trick is to assume that the "cold" values are uniformly distributed. This greatly simplifies the generation of the permutation, since it now just involves choosing the relatively small number of "hot" values. This is especially important when it is necessary to perform the computation in parallel. The "hot" values can be chosen at one node and broadcast as a table to the rest of the nodes. The algorithms above are used to compute an index into the table of "hot" values. If the index is too large, then a value is chosen uniformly at random from all possible values (ignoring whether it is already in the table of hot values). The index computations can be done independently at each node so long as the seeds for the pseudo-random number generator are chosen independently.

The number of hot values that require special treatment depends on how accurately one needs to represent the distribution in question. (One should bear in mind that the self-similar and Zipf distributions are themselves only approximations to what is observed in actual systems.) As an example, consider a self-similar distribution following the 95/5 rule. A table of just 313 out of a billion possible values would account for over 77% of the weight of this distribution.

The program presented here to generate a Zipf distribution uses constants alpha, zetan and eta derived from theta and n. The function zeta(n, theta) returns the sum (1/1)^theta + (1/2)^theta + ... + (1/n)^theta. The approximation uses the same technique as that in Knuth (volume 3, page 398), but corrects the weight assigned to the first two values to get a more accurate approximation.
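The zipf() code above calls a zeta() function that is not spelled out; a direct implementation of the sum just described might look like the following (our sketch; for billion-value domains one would compute it once and cache it, or use Knuth's closed-form approximation).

    #include <math.h>
    /* zeta(n, theta) = (1/1)^theta + (1/2)^theta + ... + (1/n)^theta */
    /* computed directly in O(n) time; called once to set up zipf()'s */
    /* constants zetan and eta.                                       */
    double zeta(long n, double theta)
    {   long i;
        double ans = 0.0;
        for (i = 1; i <= n; i++)
            ans = ans + pow(1.0 / (double)i, theta);
        return ans;
    }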
It is commonly thought that self-similarity and the Zipf distribution are the same, or at least close. This misconception apparently stems from a misleading approximation made in Knuth (volume 3, page 398). Knuth's approximation is adequate for the statistic being computed there, but should not be construed as asserting that the self-similar and Zipf distributions are close in the usual probabilistic sense. In particular, other statistics can yield very different results for the self-similar and Zipf distributions. However, Knuth's calculation can be modified to produce a reasonable approximation, as we noted above. A log-log plot of Zipf's distribution with parameter 0.5 shows the largest weight on 1, the second largest on 2, and so on.

9. Summary

This paper first showed how to convert a simple sequential load into a parallel load, turning a two-day task into a one-hour task. It then explored ways to generate synthetic data. At first it focused on generating the primary keys of records and values uncorrelated to these keys: dense-unique-pseudo-random sequences. Then, attention turned to building indices on these synthetic tables, either by sorting or by using discrete logarithms. By careful selection of generators, the discrete log problem is tractable and indices can be quickly generated within the 1-hour limit we set for the billion-record load. The paper then looked at skewed distributions. It presented the standard ways to generate uniform, exponential, normal, and Poisson distributions. It went into more detail on the new topic of self-similar and Zipfian distributions. Using these techniques, one can generate billion-record databases in an hour, and about two terabytes of database per day.

10. Acknowledgments

This paper has been in-progress since 1987. During that time, many people have contributed ideas to it. Some of the inspiration for the work came from Dina Bitton and Jeff Millman, who showed us the data generators they used in their DBstar product. Dave DeWitt was an early user of these algorithms and has been a source of ideas and encouragement. Betty Salzberg explained the concept of discrete logs to Jim Gray and put him in contact with that community.

11. References

[Bitton 1] Bitton, D., DeWitt, D., Turbyfill, C., Source code for the Wisconsin Database Generator, distributed on the "Wisconsin Benchmark Tape", Computer Science, U. Wisconsin, Madison, WI, 1984.
[Bitton 2] Bitton, D., et al., "Benchmarking Database Systems: A Systematic Approach", 9th VLDB, Nov. 1983.
[Coppersmith] Coppersmith, D., et al., "Discrete Logarithms in GF(p)", Algorithmica, 1(1), 1986, pp. 1-15.
[DeWitt 1] DeWitt, D., et al., "Gamma - A High Performance Dataflow Database Machine", Proc. 12th VLDB, Sept. 1986, pp. 228-236.
[DeWitt 2] DeWitt, D., et al., "The Gamma Database Machine Project", IEEE Trans. on Knowledge and Data Engineering, March 1990.
[DeWitt 3] DeWitt, D.J., Naughton, J.F., Schneider, D.A., "Parallel Sorting on a Shared-Nothing Architecture Using Probabilistic Splitting", Proc. First Int. Conf. on Parallel and Distributed Info. Systems, IEEE Press, Jan. 1992, pp. 280-291.
[Englert] Englert, S., et al., "A Benchmark of NonStop SQL Release 2 Demonstrating Near-Linear Speedup and Scaleup on Large Databases", Proc. 1990 ACM Sigmetrics Conference, May 1990.
[Gerber] Gerber, R.H., "Dataflow Query Processing Using Multiprocessor Hash-Partitioned Algorithms", Ph.D. Thesis, Comp. Sci. TR 672, U. Wisconsin, Madison, Oct. 1986.
[Hobbs] Hobbs, L., England, K., Rdb/VMS: A Comprehensive Guide, Digital Press, Bedford, MA, 1991.
[Horst] Horst, R., Chou, T., "The Hardware Architecture and Linear Expansion of Tandem NonStop Systems", Proc. 12th Int. Conf. on Computer Architecture, June 1985.
[Jain] Jain, R., The Art of Computer Systems Performance Analysis, John Wiley & Sons, New York, 1991.
[Kim] Kim, M.Y., "Synchronized Disk Interleaving", IEEE Trans. on Computers, C-35(11), Nov. 1986, pp. 978-988.
[Knuth] Knuth, D., The Art of Computer Programming, V. 2, Chapter 3, 2nd ed., Addison-Wesley, 1981.
[Kronenberg] Kronenberg, N., Levy, H., Strecker, W., Merewood, R., "The VAXcluster Concept: An Overview of a Distributed System", Digital Technical Journal, 1(3), Jan. 1987, pp. 7-21.
[Nyberg] Nyberg, C., Barclay, T., Gray, J., Lomet, D., "AlphaSort - A High-Speed Sort for RISC Machines", Digital San Francisco Systems Center Technical Report TR 93.2, Feb. 1993.
[Press] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., Numerical Recipes in C, 2nd ed., Cambridge Univ. Press, Cambridge, 1992.
[Ripley] Ripley, B.D., Stochastic Simulation, John Wiley, 1987.
[Schrage] Schrage, L.E., "A More Portable FORTRAN Random Number Generator", ACM TOMS, 5(2), May 1979, pp. 132-138.
[Smith] Smith, M., et al., "An Experiment on Response Time Scaleability", Proc. 6th Int. Workshop on Database Machines, June 1989.
[Stonebraker] Stonebraker, M., "The Case for Shared-Nothing", Database Engineering, 9(1), Jan. 1986.
[Tanenbaum] Tanenbaum, A.S., Operating Systems: Design and Implementation, Prentice Hall, 1986.
[Teradata] "The Genesis of a Database Computer: A Conversation with Jack Shemer and Phil Neches of Teradata Corporation", IEEE Computer, Nov. 1984; or DBC/1012 Database Computer System Manual, Release 1.3, C10-0001-01, Teradata Corp., Los Angeles, Feb. 1985.
[Thekkath] Thekkath, C.A., Levy, H.M., "Limits to Low-Latency Communication on High-Speed Networks", ACM TOCS, 11(2), May 1993, pp. 179-203.
[TPC] "Transaction Processing Performance Council Benchmark A", Chapter 3 of The Benchmark Handbook for Database and Transaction Processing Systems, Morgan Kaufmann, San Mateo, 1993.
[Uren] Uren, S., "Message System Performance Tests", Tandem Systems Review, V3.4, pp. 27-32, Dec. 1986.

Footnotes:

The correct processor speed term is SPECint, but that is approximately a MIPS (million instructions per second).
A later section will show how to fill in the customer number with a unique and uncorrelated customer ID. The customer number should be unique and uncorrelated with the account number; here it is set to zero in all records. Also, standard SQL does not allow table names to be host variables, but this and later programs assume it is allowed. One would have to use dynamic SQL to get this effect in a standard SQL system.

What is really needed is more akin to Unix exec() than to fork(), and has many more parameters.

As usual, we assume this table (index) is uniformly partitioned among the 1000 discs. The definitions of the partitions are omitted here for simplicity.
& -vtuu'--vtvuvu-vsyp'Times New Roman`w -!-3Symbolww`w -!s u_]^^'j_h]i[ia'^_\]][]a'S_Q]R[Ra'G_E]F[Fa'<_:];[;a'0_.]/[/a'%_#]$[$a'_][a'_ ] [ a'_][a'`^\b'a`_b'ca_e'eca g' fed g' g fe h'i gek'ji hk'lj hn'mlkn'pmj$s'$qpo)r'(s$q o,u'.t(s"r4u'3v.t)r8x'7w3v/u;x'=y7w1uC{'Bz=y8xG{'F|B{>zJ}'K~F|AzP'NK~H}Q'RNJ}V'VRNZ'YVS\']YUa'`]Zc'b`^d'db`f'gdaj'igek'jihk'ljhn'mlkn'nmlo'onmp'ponq'qpor'rqps'sqrr'srqt'trss'tsru'ustt'ustt'ustt'utsv'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'vtuu'-SR[Ymlvtvuvu-v_t]u[ua'v_s]y^p^'vnslympm'v~s|y}p}'vsyp'vsyp'vsyp'vsyp'!maTimes New Roman`w -!1Symbolww`w -!sTimes New Roman`w -!2Symbolww`w -!sTimes New Roman`w -!3Symbolww`w -!s][_Y'[Z\Y'ZX\ V'XVZ T' VUWT'U SWQ'SQ UO'QO SM'ONPM' NMO&L'$M LN(K'(L$J N,H'.J(I"K4H'3I.G)K8E'7G3F/H;E'=F7D1HCB'BD=C8EGB'FCBA>EJ?'KAF@ABP?'N@K>HBQ<'R>N<J@V:'V<R;N=Z:'Y;V9S=\7']9Y8U:a7'_8]5[;a2'b5_4\6e3'd4b2`6f0'g2d1a3j0'h1g/f3i-'j/h.f0l-'l.j,h0n*'m,l+k-n*'n+m)l-o''o)n(m*p''p(o&n*q$'q&p%o'r$'r%q#p's!'s#q"r$r!'s"r q$t't rs!s'trs!s'tsru'ustt'ustt'utsv'vtuu'vtuu'vtuu'vtuu'vtuu 'vtuu 'vt uu 'v t u u 'v t u u'v tu u'vtu u'vtuu'-^]0I.H9F7EBCABK@J?S=R<g2f1p)o(r&p%s#r"t rusvsvtvtvt v t vt-^\]]'vsyp'v!sy p 'v0s.y/p/'vAs?y@p@'vPsNyOpO'Times New Roman`w -! !-2#Symbolww`w -!s+Times New Roman`w -! 1!-1BSymbolww`w -!sJTimes New Roman`w -! QvtutuI' & 'ࡱ> qtrࡱ> qࡱ> q ""z"m"_"R"E"8"*""" """8"P"i"""#######################################################################################################################   +0  ( 1  +0  +)1)l  3B)2)l  KX)3)l  br)4)l  {)5)l  )6)lࡱ> Lv ࡱ> qࡱ> q  " "s "e "W "I "; "- " " "1 8v    .  & - ~' '{yzz 'nlmm '`^__ 'SQRR 'FDEE '9788 '+)** ' ' ' 't-'' '9788'QOPP'jhii'''  '  '    '  !'   %'%  *'*% /'.*&2'3.)8'73/;';73?'?;7C'B?<E'FB>J'IFCL'LIFO'OLIR'ROLU'TRPV'WTQZ 'Y WU[!'["Y W]$']#["Y!_$'_$]#["a%'a%_$]#c&'c'a%_#e)'e(c'a&g)'g)e(c'i*'h*g)f(i+'j+h*f)l,'k-j+i)l/'m.k-i,o/'n/m.l-o0'o0n/m.p1'p1o0n/q2'r3p1n/t5's4r3q2t5't5r4s3s6't6s5r4u7'u8t6s4v:'v9u8t7w:'w:v9u8x;'x;w:v9y<'y<x;w:z='z>x<y:y@'z?y>x={@'{@z?y>|A'|Az@{?{B'|C{Az?}E'}D|C{B~E'~E|D}C}F'~F|E}D}G'~H}F|DJ'I}H~G~J'J~I}HK'K~JIL'LK~JM'NLJP'ONMP'PONQ'QPOR'SQOU'TSRU'UTSV'VUTW'WVUX'YWU['ZYX['[ZY\'\[Z]']\[^'_][a'`_^a'a`_b'ba`c'db`f'edcf'fedg'gfeh'igek'jihk'kjil'lkjm'mlkn'omkq'ponq'qpor'rqps'trpv'utsv'vutw'wvux'ywu{'zyx{'{zy|'|{z}'}|{~'}{'~''''''''''''''''''''''Times New Roman`w -!0!1 !0!1Symbolww`w -!l Times New Roman`w -!24Symbolww`w -!l9Times New Roman`w -!3LSymbolww`w -!lQTimes New Roman`w -!4cSymbolww`w -!lhTimes New Roman`w -!5|Symbolww`w -!lTimes New Roman`w -!6Symbolww`w -!l & 'ࡱ> qtr   + 1 " " " "4"I"^"r"  + 0   4I)(  ]p))  (1 ""   (0ס  "|נ١٠""!"3"E"W"i"{٠נࡱ> qS    .  & -   w ' 't r ss'f d ee'X V WW'J H II'< : ;;'. , --'  '  ''--Times New Roman`w -!1 - '   '!  '5344'JHII'_]^^'sqrr''Symbolww`w -!0 Times New Roman`w -! 5! ^!1'  'Symbolww`w -!0 & ')!1';3+C'ME=U'_WOg'qiay'{s' &  & 'ࡱ> qLS@ ࡱ> qࡱ> qx?d WORD? ? 46a8  +> Branch40 <48 (9Teller41d=8 +UAccount t%.$.%))$."6t! *)* !#%$%)* "8 ($ 1:10t'p0y0y*p+u'u0y"`t"m+u+u%m&r"r+u"` (#v 1:100,000 41=8 +NCustomer t2858525t2852585"5t2858525t2852585"5 (41:1ࡱY!   ? . --a6Times New Romanlw - !Branch ><40 !Teller9=1d !Account:f--$.%)$).-)6MC'-$ *#!$%)% *-%8$L<'!1:10$ -$y0p*u+u'y0-+u`KF'-$u+m%r&r"u+-&r`N=' ! 
[Figure: the Branch-Teller-Account hierarchy (1:10, 1:100,000).]

[Figure: one "card" of the many-small cluster (Card 0): a 30 MIPS processor with 100 MB of memory and 10 discs totaling 10 GB.]
Quickly Generating Billion-Record Synthetic Databases

Jim Gray, Prakash Sundaresan
Digital San Francisco Systems Center, 455 Market, San Francisco, CA 94105
{Gray, Prakash}@SFbay.enet.dec.com

Susanne Englert
Tandem Computers Inc., 19333 Vallco Parkway, Cupertino, CA 95014
Englert_Susanne@Tandem.com

Ken Baclawski
Computer Science, Northeastern University, 360 Huntington Av., Boston, MA 02115
kenb@ccs.neu.edu

Peter J. Weinberger
Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974
pjw@research.att.com

Abstract: Evaluating database system performance often requires generating synthetic databases: ones having certain statistical properties but filled with dummy information. When evaluating different database designs, it is often necessary to generate several databases and evaluate each design. As database sizes grow to terabytes, generation often takes longer than evaluation. This paper presents several database generation techniques. In particular it discusses:
(1) Parallelism to get generation speedup and scaleup.
(2) Congruential generators to get dense unique uniform distributions (a sketch of this idea follows the table of contents).
(3) Special-case discrete logarithms to generate indices concurrently with the base table.
(4) Modification of (2) to get exponential, normal, and self-similar distributions.
The discussion is in terms of generating billion-record SQL databases using C programs running on a shared-nothing computer system consisting of a hundred processors, with a thousand discs. The ideas apply to smaller databases, but large databases present the more difficult problems.

Table of Contents
1. Introduction
2. The Computational Model
3. Sequential Database Generation
4. Parallel Database Generation
5. Dense Unique Random Data Generation
6. Generating Random Data
7. Generating Indices on Random Data
8. Generating Data Having Non-Uniform Distributions
9. Summary
10. Acknowledgments
11. References
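Item (2) of the abstract is easiest to see with a toy example. A minimal sketch, under these assumptions: pick a prime P slightly larger than the table size N and a primitive root G of P; iterating x <- x*G mod P then visits every value in 1..P-1 exactly once per period, and discarding the few values above N leaves a dense, unique, pseudo-random key sequence. The constants N=10, P=11, G=2 are illustrative only; for a billion-row table one would pick a prime just above 10^9 and a corresponding primitive root, so the skip test discards only a handful of values.

    #include <stdio.h>

    #define N 10L    /* keys wanted: a permutation of 1..N          */
    #define P 11L    /* prime with P > N                            */
    #define G 2L     /* primitive root of P, so x*G mod P cycles    */
                     /* through all of 1..P-1 before repeating      */
    int main(void)
    {
        long x = G;                /* any start value in 1..P-1     */
        long emitted = 0;
        while (emitted < N) {
            if (x <= N) {          /* discard the values above N    */
                printf("%ld\n", x);
                emitted++;
            }
            x = (x * G) % P;
        }
        return 0;
    }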
11. References

[Press] Press, W.H., et al., Numerical Recipes in C, 2nd ed., Cambridge Univ. Press, Cambridge, 1992.
[Ripley] Ripley, B.D., Stochastic Simulation, John Wiley, 1987.
[Schrage] Schrage, L.E., "A More Portable FORTRAN Random Number Generator", ACM TOMS, V. 5(2), May 1979, pp. 132-138.
[Smith] Smith, M., et al., "An Experiment on Response Time Scaleability", Proc. 6th Int. Workshop on Database Machines, June 1989.
[Stonebraker] Stonebraker, M., "The Case for Shared-Nothing", Database Engineering, V. 9(1), Jan. 1986.
[Tanenbaum] Tanenbaum, A.S., Operating Systems: Design and Implementation, Prentice Hall, 1986.
[Teradata] "The Genesis of a Database Computer: A Conversation with Jack Shemer and Phil Neches of Teradata Corporation", IEEE Computer, Nov. 1984; or DBC/1012 Database Computer System Manual, Release 1.3, C10-0001-01, Teradata Corp., Los Angeles, Feb. 1985.
[TPC] "Transaction Processing Performance Council Benchmark A", Chapter 3 of Performance Handbook for Database and Transaction Processing Systems, Morgan Kaufmann, San Mateo, 1993.
[Uren] Uren, S., "Message System Performance Tests", Tandem Systems Review, V. 3(4), pp. 27-32, Dec. 1986.

The correct processor speed term is SPECint, but that is approximately a MIPS (million instructions per second).
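Technique (1) from the abstract, parallel generation, reduces to spawning one generator per partition and waiting for all of them to finish. Below is a minimal sketch using plain POSIX fork and wait; as one of the notes earlier observes, a production version would be closer to exec with many explicit parameters. RECORDS, CPUS, and generate_partition are illustrative names, not the paper's.

    #include <unistd.h>
    #include <sys/wait.h>

    #define RECORDS 1000000000L   /* one billion account records    */
    #define CPUS    100           /* one generator process per cpu  */

    /* Stand-in for the real loader: sequentially inserts the records
       with keys lo..hi-1 into this process's partition. */
    static void generate_partition(long lo, long hi)
    {
        (void)lo; (void)hi;
    }

    int main(void)
    {
        long span = RECORDS / CPUS;        /* 10 million keys each      */
        for (int h = 0; h < CPUS; h++) {
            if (fork() == 0) {             /* child h loads partition h */
                generate_partition(h * span, (h + 1) * span);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)             /* barrier: all loads done   */
            ;
        return 0;
    }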