It took a few months of work before the enumeration program appeared to handle this case. The run took a couple of weeks and produced an output file of 147 GB. That file contains a lot of duplication, since the generation program does only minimal filtering.
An offline program filters the output, cutting it down by removing overlaps and the like. I split the work by bit count: first all the 6-bit numbers are processed, then the 7-bit, and so on through the 32-bit numbers.
By removing overlapping entries I got each bucket down to the smallest possible size: no entry can be removed without some numbers being missed. The results have been checked against a large range of data.
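The pruning pass can be sketched as follows. This is a hypothetical reconstruction, not the actual offline program: the `entries`/`coverage` names and data layout are mine, and it only assumes that each entry is known to cover some set of numbers.

```python
def prune_bucket(entries, coverage):
    """Drop redundant entries from a bucket.

    entries  -- list of entry identifiers (processed in the given order)
    coverage -- dict mapping each entry to the set of numbers it covers

    An entry is redundant when every number it covers is also covered by
    some other kept entry; removing all such entries leaves a minimal
    bucket in which deleting any entry would leave a number uncovered.
    """
    kept = set(entries)

    # Count how many kept entries cover each number.
    counts = {}
    for e in kept:
        for n in coverage[e]:
            counts[n] = counts.get(n, 0) + 1

    # Greedily drop entries whose numbers are all covered at least twice.
    for e in entries:
        if e in kept and all(counts[n] > 1 for n in coverage[e]):
            kept.discard(e)
            for n in coverage[e]:
                counts[n] -= 1

    return kept
```

Note that this greedy pass guarantees minimality (no single entry can be removed without missing a number), which is what the filtering step needs; it does not search for the globally smallest cover.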
Any 6-bit number can be generated with at most 5 small steps via the binary method, so that case is trivial:
```
// 24745
1 6 6 @(a)+@(b)+@(c)+@(d)+@(e)+@(f)
```
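To illustrate the binary-method bound, here is a small sketch (my own code, not the enumeration program). It assumes the standard addition-chain terminology, where a "small step" is a step that does not increase the bit length; the binary method then spends one small step per set bit after the leading one, which is at most 5 for a 6-bit number:

```python
def binary_chain(n):
    """Addition chain for n via the binary method: scan the bits of n
    after the leading 1, doubling for every bit and adding 1 for every
    set bit."""
    chain = [1]
    for bit in bin(n)[3:]:               # bits after the leading 1
        chain.append(chain[-1] * 2)      # doubling (big step)
        if bit == '1':
            chain.append(chain[-1] + 1)  # +1 (small step)
    return chain

def small_steps(chain):
    """Count small steps: steps where the bit length does not grow."""
    return sum(
        1 for a, b in zip(chain, chain[1:])
        if b.bit_length() == a.bit_length()
    )
```

For example, `binary_chain(63)` gives the chain 1, 2, 3, 6, 7, 14, 15, 30, 31, 62, 63, which uses exactly 5 small steps; no 6-bit input uses more.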
The remaining bit counts, 7 through 32, are split into separate files:
5 Small 7-bit 5 Small 8-bit 5 Small 9-bit 5 Small 10-bit
5 Small 11-bit 5 Small 12-bit 5 Small 13-bit 5 Small 14-bit
5 Small 15-bit 5 Small 16-bit 5 Small 17-bit 5 Small 18-bit
5 Small 19-bit 5 Small 20-bit 5 Small 21-bit 5 Small 22-bit
5 Small 23-bit 5 Small 24-bit 5 Small 25-bit 5 Small 26-bit
5 Small 27-bit 5 Small 28-bit 5 Small 29-bit 5 Small 30-bit
5 Small 31-bit 5 Small 32-bit
The following table shows the distribution of convex sets for each bit count:
Bits | Primitive | Bits | Primitive |
---|---|---|---|
6 | 1 | 20 | 2751504 |
7 | 3157 | 21 | 2342736 |
8 | 65004 | 22 | 2233569 |
9 | 344861 | 23 | 1552926 |
10 | 1065864 | 24 | 1024999 |
11 | 2154582 | 25 | 603499 |
12 | 3379181 | 26 | 685084 |
13 | 4286622 | 27 | 691285 |
14 | 4825321 | 28 | 784262 |
15 | 4802830 | 29 | 739552 |
16 | 4701640 | 30 | 641837 |
17 | 4177702 | 31 | 361384 |
18 | 3831503 | 32 | 150416 |
19 | 3048287 | | |