Shellsort is one of the oldest sorting algorithms, named after its inventor D.L. Shell (1959) [She 59]. It is fast, easy to understand, and easy to implement. However, its complexity analysis is somewhat more involved.
The idea of Shellsort is the following: arrange the data sequence in a two-dimensional array and sort the columns of the array.
The effect is that the data sequence is partially sorted. The process above is repeated, but each time with a narrower array, i.e. with a smaller number of columns. In the last step, the array consists of only one column. In each step, the sortedness of the sequence is increased, until in the last step it is completely sorted. However, the number of sorting operations necessary in each step is limited, due to the presortedness of the sequence obtained in the preceding steps.
Example: Let 3 7 9 0 5 1 6 8 4 2 0 6 1 5 7 3 4 9 8 2 be the data sequence to be sorted. First, it is arranged in an array with 7 columns (left), then the columns are sorted (right):
3 7 9 0 5 1 6        3 3 2 0 5 1 5
8 4 2 0 6 1 5        7 4 4 0 6 1 6
7 3 4 9 8 2          8 7 9 9 8 2
Data elements 8 and 9 have now already come to the end of the sequence, but a small element (2) is also still there. In the next step, the sequence is arranged in 3 columns, which are again sorted:
3 3 2        0 0 1
0 5 1        1 2 2
5 7 4        3 3 4
4 0 6        4 5 6
1 6 8        5 6 8
7 9 9        7 7 9
8 2          8 9
Now the sequence is almost completely sorted. When arranging it in one column in the last step, it is only a 6, an 8 and a 9 that have to move a little bit to their correct position.
Actually, the data sequence is not arranged in a two-dimensional array, but held in a one-dimensional array that is indexed appropriately. For instance, data elements at positions 0, 5, 10, 15 etc. would form the first column of an array with 5 columns. The "columns" obtained by indexing in this way are sorted with insertion sort, since this method performs well on presorted sequences.
The following program sorts an array a from index position 0 through n−1. The number of columns used for arranging data in each step is given in the array cols. Thus, data are arranged in 1391376 columns in the first step and in one column in the last step. (Note that essentially nothing is done if the number of columns h is larger than the number of data elements n.) Each column is sorted by insertion sort. First, data of the second row (beginning at i = h) are sorted to the correct position in their column, then data of the third row (when i reaches the value 2h), etc.
Algorithm Shellsort
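Since the program itself is not reproduced here, the following Python sketch follows the description above. The h-sequence starts at 1391376 and ends with 21, 7, 3, 1 as stated in the text; the intermediate values are assumptions (one commonly used sequence of this shape).

```python
def shellsort(a):
    """Shellsort: for each h in the h-sequence, h-sort the array a
    by insertion-sorting each of its h "columns"."""
    # h-sequence as described in the text (first step: 1391376 columns,
    # last steps: ..., 21, 7, 3, 1); the intermediate values are assumed.
    cols = [1391376, 463792, 198768, 86961, 33936, 13776, 4592,
            1968, 861, 336, 112, 48, 21, 7, 3, 1]
    n = len(a)
    for h in cols:
        # if h >= n, the loop body is never entered: essentially
        # nothing is done when there are more columns than elements
        for i in range(h, n):
            t = a[i]
            j = i
            # sort a[i] to its correct position within column i % h
            while j >= h and a[j - h] > t:
                a[j] = a[j - h]
                j -= h
            a[j] = t
    return a
```

Applied to the example sequence above, the last step (h = 1) is an ordinary insertion sort on an almost sorted array.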
The correctness of the algorithm follows from the fact that in the last step (with h = 1) an ordinary insertion sort is performed on the whole array. But since data are presorted by the preceding steps (h = 3, 7, 21, ...), only a few insertion sort steps are needed. How many exactly is the subject of the following analysis. The above sequence of h's (denoted as the h-sequence in the following) is just one of several possible; actually, the performance of Shellsort depends on which h-sequence is used.
The postage for a letter is 16 F, for a postcard it is 11 F. But there are only stamps of 3 F and 7 F available. Is it possible to stamp the letter and the postcard exactly?
Obviously, the problem is to represent the numbers 16 and 11 as linear combinations k·3 + l·7 with non-negative integer coefficients k and l. Which natural numbers can be combined from multiples of 3 and 7, and which cannot?
Definition: Let g, h ∈ ℕ. We call a number f a combination of g and h if f can be represented as a linear combination f = kg + lh with coefficients k, l ∈ ℕ_{0}.
The letter can be stamped exactly, since 16 is a combination of 3 and 7, namely 16 = 3·3 + 1·7.
Definition: Let g, h ∈ ℕ be relatively prime. With N(g, h) we denote the (finite) set of all natural numbers that are not combinations of g and h and with γ(g, h) the largest of these numbers:
N(g, h) = { f ∈ ℕ | ¬∃ k, l ∈ ℕ_{0}: f = kg + lh }
γ(g, h) = max(N(g, h))
Example: Let g = 3, h = 7. Then
N(g, h) = {1, 2, 4, 5, 8, 11}, and γ(g, h) = 11
The postcard cannot be stamped exactly, since 11 is not a combination of 3 and 7.
Proposition: Let g, h ∈ ℕ be relatively prime. Then
γ(g, h) = (g−1)·(h−1) − 1
i.e. every number f with f ≥ (g−1)·(h−1) is a combination of g and h.
Proof: (Exercise)
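The proposition can be checked by brute force; the following Python sketch (function names are ours) enumerates N(g, h) directly from the definition.

```python
def N(g, h):
    """All natural numbers that are not combinations of g and h.
    Every non-combination lies below g*h (indeed below (g-1)*(h-1)),
    so searching up to g*h is enough."""
    bound = g * h
    reachable = {k * g + l * h
                 for k in range(bound // g + 1)
                 for l in range(bound // h + 1)}
    return sorted(f for f in range(1, bound) if f not in reachable)

def gamma(g, h):
    """The largest number that is not a combination of g and h."""
    return max(N(g, h))
```

For g = 3, h = 7 this reproduces N(g, h) = {1, 2, 4, 5, 8, 11} and γ(g, h) = 11 from the example.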
Definition: Let h ∈ ℕ_{0}. A sequence a_{1}, ..., a_{n} is called h-sorted, if for all i ∈ {1, ..., n−h} it holds that
a_{i} ≤ a_{i+h}
An h-sorted sequence is obtained by arranging the sequence in an array with h columns and then sorting the columns. A 1-sorted sequence is sorted.
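In Python, h-sortedness and h-sorting by columns can be sketched as follows (0-based indices; function names are ours):

```python
def is_h_sorted(a, h):
    """The definition of h-sorted with 0-based indices: a[i] <= a[i+h]."""
    return all(a[i] <= a[i + h] for i in range(len(a) - h))

def h_sort(a, h):
    """h-sort a sequence by sorting each 'column' col, col+h, col+2h, ...
    of the implicit h-column arrangement."""
    a = list(a)
    for col in range(h):
        a[col::h] = sorted(a[col::h])
    return a
```

Applied with h = 7 to the example sequence above, h_sort produces exactly the partially sorted sequence of the first step.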
Proposition: Let g, h ∈ ℕ. A g-sorted sequence remains g-sorted after h-sorting it.
Proof: see [Knu 73]
Definition: A sequence that is g-sorted and h-sorted is called g,h-sorted.
Proposition: A g,h-sorted sequence is (g+h)-sorted.
Proof: We have to show that for all i ∈ {1, ..., n−(g+h)} it holds that
a_{i} ≤ a_{i+g+h}
But this is the case because a_{i} ≤ a_{i+g}, since the sequence is g-sorted, and a_{i+g} ≤ a_{i+g+h}, since the sequence is h-sorted.
As an immediate consequence, we have the following
Proposition: If a sequence is g,h-sorted, then it is (kg+lh)-sorted for all k, l ∈ ℕ_{0}, i.e. the sequence is f-sorted for each f that is a combination of g and h.
Proposition: Let a be a g,h-sorted sequence, where g and h are relatively prime. Then for all elements a_{i} and a_{j} the following holds:
j − i ≥ (g−1)·(h−1) ⇒ a_{i} ≤ a_{j}
i.e. to the right of each a_{i} only the next (g−1)·(h−1) − 1 elements can be smaller.
Proof: Let j − i = f ≥ (g−1)·(h−1); then f is a combination of g and h. Therefore the sequence is f-sorted, which means that
a_{i} ≤ a_{i+f} = a_{j}
Proposition: Let a be a g,h-sorted sequence, where g and h are relatively prime, and let d ∈ ℕ. If both g and h are in O(d), then O(n·d) sorting steps are sufficient for d-sorting the sequence.
Proof: To the right of each element a_{i} at most (g−1)·(h−1) − 1 elements are smaller. d-sorting the sequence means arranging it as a two-dimensional array with d columns and sorting the columns.
In the column under a_{i} only every d-th of these smaller elements can occur. This means that a_{i} has to be exchanged with at most ((g−1)·(h−1) − 1) / d elements or, since g and h are in O(d), with O(d) elements.
Since this holds for all a_{i} (i = 1, ..., n), O(n·d) sorting steps are needed for d-sorting the sequence.
From this proposition upper bounds for the complexity of Shellsort can be derived.
Theorem: With the h-sequence 1, 3, 7, 15, 31, 63, 127, ..., 2^{k} − 1, ... Shellsort needs O(n·√n) steps for sorting a sequence of length n (Papernov/Stasevic [PS 65]).
Proof: Let h_{t} be the h closest to √n. We analyze the behavior of Shellsort separately for the elements h_{k} with k ≤ t and with k > t.
Let k ≤ t. Since h_{k} = 2^{k} − 1, we have the conditions mentioned above, namely that h_{k+1} and h_{k+2} are relatively prime and in O(h_{k}). Therefore, O(n·h_{k}) sorting steps suffice for h_{k}-sorting the data sequence. Since the h_{k} form a geometric series, the sum of all h_{k} with k = 1, ..., t is in O(h_{t}) = O(√n). Thus O(n·√n) sorting steps are needed for this part where k ≤ t.
Now let k > t. When the sequence is arranged as an array with h_{k} columns, there are n/h_{k} elements in each column. Thus, O((n/h_{k})^{2}) sorting steps are needed to sort each column, since insertion sort has quadratic complexity. There are h_{k} columns, therefore the number of sorting steps for h_{k}-sorting the entire data sequence is in O((n/h_{k})^{2}·h_{k}) = O(n·n/h_{k}). Again, the n/h_{k} form a geometric series whose sum is in O(n/h_{t}) = O(√n). Therefore, again O(n·√n) steps are needed for this part where k > t.
It can be shown that for this h-sequence the upper bound is tight. But there is another h-sequence that leads to a more efficient behavior of Shellsort.
Theorem: With the h-sequence 1, 2, 3, 4, 6, 8, 9, 12, 16, ..., 2^{p}·3^{q}, ... Shellsort needs O(n·log(n)^{2}) steps for sorting a sequence of length n (Pratt [Pra 79]).
Proof: If g = 2 and h = 3, then γ(g, h) = (g−1)·(h−1) − 1 = 1, i.e. in a 2,3-sorted sequence to the right of each element only the next element can be smaller. Therefore, O(n) sorting steps suffice to sort the sequence with insertion sort. Considering elements with odd and with even index separately, it becomes clear that again O(n) sorting steps suffice to make a 4,6-sorted sequence 2-sorted. Similarly, O(n) sorting steps suffice to make a 6,9-sorted sequence 3-sorted, and so on.
The above h-sequence has the property that for each h_{k} also 2h_{k} and 3h_{k} occur, so O(n) sorting steps suffice for each h_{k}. Altogether there are O(log(n)^{2}) elements in the h-sequence; thus the complexity of Shellsort with this h-sequence is in O(n·log(n)^{2}).
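Pratt's h-sequence can be generated directly; a small Python sketch (the function name is ours):

```python
def pratt_sequence(n):
    """All numbers of the form 2^p * 3^q below n, in ascending order."""
    hs = set()
    p2 = 1                 # runs through the powers of two 2^p
    while p2 < n:
        v = p2
        while v < n:       # v = 2^p * 3^q
            hs.add(v)
            v *= 3
        p2 *= 2
    return sorted(hs)
```

For n = 17 this yields 1, 2, 3, 4, 6, 8, 9, 12, 16, matching the beginning of the sequence above.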
The h-sequence of Pratt performs best asymptotically, but it consists of O(log(n)^{2}) elements. Particularly if the data sequence is presorted, an h-sequence with fewer elements is better, since the data sequence has to be scanned (by the for-i loop in the program) for each h_{k}, even if only a few sorting steps are performed.
By combining the arguments of these two theorems, h-sequences with O(log(n)) elements can be derived that lead to a very good performance in practice, as for instance the h-sequence of the program (Sedgewick [Sed 96]). But unfortunately, there seems to be no h-sequence that gives Shellsort a worst-case performance of O(n·log(n)) (see [Sed 96]). It is an open question whether possibly the average complexity is in O(n·log(n)).
Shellsort can be implemented as a sorting network if the data-dependent insertion sort is replaced with bubble sort. With the h-sequence 2^{p}·3^{q} the sorting network consists of O(n·log(n)^{2}) comparators. This is the same number of comparators as in bitonic sort.
The following figure shows the corresponding sorting network for n = 8.
Figure 1: Shellsort network for n=8
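A comparator schedule of this kind can be sketched in Python. One simplification is ours, not the text's: with Pratt's h-sequence, the bubble sort per column collapses to a single compare-exchange pass, because before the h-stage the sequence is already 2h- and 3h-sorted, so by the γ(2, 3) = 1 argument above only adjacent column elements can be out of order.

```python
def shellsort_network(n):
    """Comparators (i, j), i < j, of a Shellsort network for n inputs,
    using Pratt's h-sequence in decreasing order. For each h, one
    compare-exchange pass over all pairs (i, i+h) sorts every column."""
    hs = set()
    p2 = 1
    while p2 < n:          # collect all 2^p * 3^q below n
        v = p2
        while v < n:
            hs.add(v)
            v *= 3
        p2 *= 2
    comparators = []
    for h in sorted(hs, reverse=True):
        for i in range(n - h):
            comparators.append((i, i + h))
    return comparators

def apply_network(comparators, a):
    """Run the (data-independent) network on a list."""
    a = list(a)
    for i, j in comparators:
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a
```

For n = 8 the schedule uses h = 6, 4, 3, 2, 1 and consists of 24 comparators.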
Shellsort was originally published by D.L. Shell [She 59].
[Knu 73] D.E. Knuth: The Art of Computer Programming, Vol. 3: Sorting and Searching. Addison-Wesley (1973)
[Pra 79] V. Pratt: Shellsort and Sorting Networks. Garland, New York (1979)
[PS 65] A. Papernov, G. Stasevic: A Method of Information Sorting in Computer Memories. Problems of Information Transmission 1, 63-75 (1965)
[Sed 88] R. Sedgewick: Algorithms. 2nd edition, Addison-Wesley (1988)
[Sed 96] R. Sedgewick: Analysis of Shellsort and Related Algorithms. In: Josep Díaz, Maria Serna (Eds.): Algorithms - ESA '96, Fourth Annual European Symposium, Barcelona, Lecture Notes in Computer Science, Vol. 1136, Springer, 1-11 (1996)
[She 59] D.L. Shell: A High-Speed Sorting Procedure. Communications of the ACM, 2, 7, 30-32 (1959)