Next: Classification of Join Algorithms Up: Parallel Joins Previous: Fragment-And-Replicate Technique

Symmetric Partitioning Technique

     

A more general, but also more delicate parallel joining technique is based on symmetric partitioning where all participating relations are partitioned. We have encountered this method already in the discussion of the Grace hash join (figure 3.14). Symmetric partitioning splits one `big' join into several smaller and independent ones:  
\begin{displaymath}
R \Join_{\scriptscriptstyle C} Q \;=\;
R_1 \Join_{\scriptscriptstyle C} Q_1 \,\cup\, \cdots \,\cup\, R_m \Join_{\scriptscriptstyle C} Q_m
\end{displaymath} (3.6)
where the $R_k$ and $Q_k$ are referred to as fragments  of the relations R and Q, respectively.

The partial joins  $R_k \Join_{\scriptscriptstyle C} Q_k$ are independent of each other and can therefore be processed concurrently, i.e. in parallel. This parallelisation technique can be applied to any of the join algorithms presented in section 3.4. The Grace hash join in figure 3.14, for example, becomes a parallel hash join if the `for k=1 to m do' loop is parallelised, i.e. if its body is executed concurrently.
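As a concrete illustration, the decomposition above can be sketched in a few lines of Python for an equi-join. The function names and the dict-based tuple representation are assumptions made for this sketch, not notation from the text; the partial joins are executed sequentially here, but each call could be handed to a separate processor.

```python
def hash_partition(rel, attr, m):
    """Split a relation (a list of dicts) into m fragments by hashing attr."""
    fragments = [[] for _ in range(m)]
    for t in rel:
        fragments[hash(t[attr]) % m].append(t)
    return fragments

def partial_join(Rk, Qk, a, b):
    """Nested-loop equi-join of the associated fragments R_k and Q_k."""
    return [(r, q) for r in Rk for q in Qk if r[a] == q[b]]

def parallel_equi_join(R, Q, a, b, m=4):
    """Equi-join R.a = Q.b via m independent partial joins."""
    Rs = hash_partition(R, a, m)
    Qs = hash_partition(Q, b, m)
    # Hashing on the join attribute guarantees that matching tuples land
    # in associated fragments, so the union of partial results equals the
    # full join.
    result = []
    for k in range(m):
        result.extend(partial_join(Rs[k], Qs[k], a, b))
    return result
```

Because both relations are hashed on the join attribute, a matching pair can only meet in associated fragments, which is exactly what makes the m partial joins independent.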

Figure 3.19 shows the structure of this family of parallel joins. It is divided into three stages:

1.
In a partitioning stage the two input relations R and Q are partitioned into fragments $R_1, \dots, R_m$ and $Q_1, \dots, Q_m$, respectively, such that tuples in any $R_k$ can only join with tuples in $Q_k$. This ensures the independence of the partial joins.  
2.
In a joining stage the partial joins $R_k \Join_{\scriptscriptstyle C} Q_k$ are executed in parallel (for $k=1,\dots,m$), creating m local, partial results.  
3.
Finally, in a merging stage the partial results are merged (i.e. collected) to form the global join result.  
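The three stages can be sketched with Python's standard library, with the joining stage dispatching the partial joins to a pool of workers. This is a minimal sketch under assumptions of its own: a single join attribute named key, hash partitioning, and nested-loop partial joins.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(rel, m):
    """Partitioning stage: hash each tuple into one of m fragments."""
    frags = [[] for _ in range(m)]
    for t in rel:
        frags[hash(t["key"]) % m].append(t)
    return frags

def join_fragments(args):
    """Joining stage: one partial join R_k |><| Q_k, done as a nested loop."""
    Rk, Qk = args
    return [(r, q) for r in Rk for q in Qk if r["key"] == q["key"]]

def symmetric_partition_join(R, Q, m=4):
    # Stage 1: partition both relations.
    Rs, Qs = partition(R, m), partition(Q, m)
    # Stage 2: the m partial joins are independent, so they can run
    # concurrently on the pool.
    with ThreadPoolExecutor(max_workers=m) as pool:
        partials = pool.map(join_fragments, zip(Rs, Qs))
    # Stage 3: merging stage - collect the m partial results.
    return [pair for partial in partials for pair in partial]
```

On a real parallel machine the fragments would be shipped to separate processors rather than threads, but the three-stage structure is the same.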
Let us look at the search strategy of this family of parallel joins, using the same scenario as for the algorithms presented in section 3.4. The partitioning stage has the same effect as the partitioning performed in the sequential hash joins . Therefore, figures 3.12 and 3.13 equally represent the search strategy of a parallel nested-loop join . Nevertheless, we add a further example in figure 3.20 because it makes certain issues more obvious: in this example, the join $R \Join_{\scriptscriptstyle C} Q$ is divided into four partial joins, each of which is processed by the nested-loop algorithm. This means that an exhaustive search is performed over the four partial Cartesian products  $R_k \times Q_k$ for k=1,2,3,4. In terms of our graphical representation, the four grey rectangular areas in figure 3.20 are processed.
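A small back-of-the-envelope calculation shows what the four grey rectangles save: an exhaustive search over $R \times Q$ touches $|R| \cdot |Q|$ tuple pairs, while the four partial searches touch only $\sum_k |R_k| \cdot |Q_k|$ pairs. The fragment sizes below are invented for illustration.

```python
# Hypothetical fragment sizes |R_1|..|R_4| and |Q_1|..|Q_4|.
R_sizes = [25, 25, 25, 25]
Q_sizes = [30, 20, 30, 20]

# Exhaustive search over the full Cartesian product R x Q.
full_search = sum(R_sizes) * sum(Q_sizes)                      # 100 * 100
# Exhaustive search restricted to the four grey rectangles R_k x Q_k.
partitioned_search = sum(r * q for r, q in zip(R_sizes, Q_sizes))

print(full_search, partitioned_search)  # 10000 2500
```

With four fragments of roughly equal size, the searched area shrinks to about a quarter of the full Cartesian product, which is the visual message of figure 3.20.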




  
Figure: The structure of a parallel join based on symmetric partitioning.
\begin{figure}
 \epsfxsize=0.9\textwidth
 \centerline{
 \epsffile{/home/tz/work/thesis/fig/stages.ps}}
 \centerline{
 } \end{figure}




  
Figure: Search strategy of the symmetric partitioning technique with the partial joins performed as nested-loops.
\begin{figure}
 \epsfxsize=0.9\textwidth
 \centerline{
 \epsffile{/home/tz/work/thesis/fig/j-parallel.ps}}
 \centerline{
 } \end{figure}

The parallelisation, motivated by (3.6), works very well for equi-joins  because it is easy to create disjoint fragments $R_1, \dots, R_m$ ($Q_1, \dots, Q_m$, respectively) that allow the partial joins to be independent. Unfortunately, many nonequi-joins  and - as we will see in chapter 4 - many types of temporal joins cannot be divided into disjoint fragments while maintaining the independence of the partial joins at the same time. One then has to decide whether to sacrifice the disjointness or the independence, with the former being the preferable option in most cases.
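One way to sacrifice disjointness while keeping the partial joins independent is to replicate tuples into every fragment they might match. The sketch below does this for a hypothetical band join with predicate |r.a - q.b| <= eps under range partitioning; all names, the attribute layout and the boundary scheme are assumptions for illustration only.

```python
def range_partition(R, bounds):
    """Disjoint range partitioning of R on attribute a.
    bounds[k] is the upper bound of fragment k's range (last is +inf)."""
    frags = [[] for _ in bounds]
    for r in R:
        for k, hi in enumerate(bounds):
            if r["a"] <= hi:
                frags[k].append(r)
                break
    return frags

def range_partition_with_replication(Q, bounds, eps):
    """Non-disjoint partitioning of Q: a tuple is replicated into every
    fragment whose range, widened by eps on both sides, contains it."""
    frags = [[] for _ in bounds]
    lo = float("-inf")
    for k, hi in enumerate(bounds):
        for q in Q:
            if lo - eps <= q["b"] <= hi + eps:
                frags[k].append(q)
        lo = hi
    return frags

def band_join_partitioned(R, Q, bounds, eps):
    """Band join |r.a - q.b| <= eps via independent partial joins."""
    Rs = range_partition(R, bounds)
    Qs = range_partition_with_replication(Q, bounds, eps)
    out = []
    for Rk, Qk in zip(Rs, Qs):   # each iteration is an independent partial join
        out.extend((r, q) for r in Rk for q in Qk
                   if abs(r["a"] - q["b"]) <= eps)
    return out
```

Since only Q is replicated and the R fragments remain disjoint, each result pair arises in exactly one partial join, so the merging stage needs no duplicate elimination; the price paid is the extra storage and comparisons for the replicated Q-tuples.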

The selectivities  of the partial joins are much higher than that of the original join because the partitioning concentrates tuples with similar values in associated fragments $R_k$ and $Q_k$. This is an important point because the selectivity strongly influences the choice of the most appropriate (sequential) join algorithm for computing the partial joins $R_k \Join_{\scriptscriptstyle C} Q_k$. In a set of experiments that we conducted on partitioned temporal joins, for example, we observed average selectivities of 80% for the partial joins of a join $R \Join_{\scriptscriptstyle C} Q$. Such high figures make the nested-loop algorithm  the favourable option for processing the partial joins: there will not be a large number of unnecessary tuple comparisons and, at the same time, it avoids any overhead caused by sorting or hashing the data.
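For reference, the selectivity figure quoted above is the fraction of tuple pairs in $R_k \times Q_k$ that actually join; the sizes below are invented for illustration, only the 80% average comes from the experiments mentioned in the text.

```python
def selectivity(result_size, rk_size, qk_size):
    """Fraction of pairs in R_k x Q_k that satisfy the join condition."""
    return result_size / (rk_size * qk_size)

# A hypothetical partial join with |R_k| = 40, |Q_k| = 25 and 800 result
# pairs has selectivity 0.8: 80% of all nested-loop comparisons produce
# output, so sorting or hashing could prune at most 20% of the pairs.
print(selectivity(800, 40, 25))  # 0.8
```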

There are other ways of parallelising a join operation that differ from those presented in this section; the ones presented are, however, the most common. One variation, for example, is to interleave the partitioning and joining stages, similarly to the case of sequential join algorithms. Further variations arise from characteristics of the underlying parallel hardware architecture. Some parallel machines, for example, have specifically optimised communication facilities, such as broadcasts. This leads to machine-specific cost models which might favour join strategies that are discarded when employing more general cost models. The interested reader might look at the papers that describe parallel join algorithms in detail, such as [Kitsuregawa et al., 1983], [DeWitt and Gerber, 1985], [Gerber, 1986], [Wang and Luk, 1988], [Schneider and DeWitt, 1989], [Kitsuregawa and Ogawa, 1990], [Wolf et al., 1990], [Keller and Roy, 1991] or [Wolf et al., 1993].



Thomas Zurek