
Let's suppose we have a group $G$ of finite order $n$. We want to compute all of its subgroups algorithmically, and there are a few ways to do that.

First one:

  1. Compute all subsets.
  2. Check, for each of them, whether it's closed under the group operation (for a nonempty finite subset, closure already implies being a subgroup). Then we're done.

It's not exactly an efficient algorithm: for $n=24$ we already have $2^{24}$, about 16.7 million, subsets to check.
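For concreteness, here is a minimal Python sketch of this first method, assuming the group is handed to us as a list of elements together with its operation; I use $\mathbb{Z}/6\mathbb{Z}$ under addition as a hypothetical stand-in for $G$:

```python
from itertools import combinations

# Hypothetical stand-in for G: Z/6Z under addition mod 6.
n = 6
elements = list(range(n))

def mul(a, b):
    """The group operation."""
    return (a + b) % n

def is_subgroup(subset):
    # A nonempty subset of a finite group that is closed
    # under the operation is automatically a subgroup.
    s = set(subset)
    return all(mul(a, b) in s for a in s for b in s)

subgroups = [set(c)
             for k in range(1, n + 1)
             for c in combinations(elements, k)
             if is_subgroup(c)]
print(subgroups)  # [{0}, {0, 3}, {0, 2, 4}, {0, 1, 2, 3, 4, 5}]
```

Even in this tiny example all $2^n-1$ nonempty subsets get visited, which is exactly what makes the method blow up for $n=24$.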

A second one would be to compute all the generated subgroups:

  1. Get a prime decomposition $n=p_1^{e_1}\cdots p_{r}^{e_r}$.
  2. Compute $$b=\sum_{i=1}^r (e_i+1).$$
  3. Compute all subsets with at most $b$ elements.
  4. Compute the subgroup generated by each such subset. Then we're done.

In this case we need an upper bound on the number of generators a subgroup can have. I chose $b$ because, while it's a slightly worse bound than the one found here, it doesn't depend on the structure of the group or its subgroups (information we're not given).
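A sketch of this second method under the same assumptions (the toy $\mathbb{Z}/6\mathbb{Z}$ again; the trial-division factoring helper and the name `generated_subgroup` are mine, not from any library):

```python
from itertools import combinations

# Hypothetical stand-in for G: Z/6Z under addition mod 6.
n = 6
elements = list(range(n))
identity = 0

def mul(a, b):
    """The group operation."""
    return (a + b) % n

def factorize(m):
    """Prime factorization by trial division: returns {p: e}."""
    factors, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def generated_subgroup(gens):
    """Close a generating set under the operation (enough in a finite group)."""
    closure = {identity} | set(gens)
    frontier = list(closure)
    while frontier:
        g = frontier.pop()
        for h in list(closure):
            for prod in (mul(g, h), mul(h, g)):
                if prod not in closure:
                    closure.add(prod)
                    frontier.append(prod)
    return frozenset(closure)

# Steps 1-2: the bound b = sum of (e_i + 1).
b = sum(e + 1 for e in factorize(n).values())

# Steps 3-4: generated subgroups of all subsets of size <= b.
subgroups = {generated_subgroup(c)
             for k in range(1, b + 1)
             for c in combinations(elements, k)}
print(sorted(map(sorted, subgroups)))
# [[0], [0, 1, 2, 3, 4, 5], [0, 2, 4], [0, 3]]
```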

I'd like to know: is there a better way? And is there a better bound for the second method?

I mean, by hand we can exploit a lot of the group's structure, so most of the time computing all the subgroups is much easier; but if we want a computer to do it, that kind of case-by-case reasoning seems hard to implement.

Edit: There are some ideas in the comments.

  • Using Sylow's theorems (how?)
  • Computing only the subsets whose size divides the order of the group. The number of subsets you then need to check is $$\sum_{d\mid n} \binom{n}{d},$$ while the one in my second method is $$\sum_{i=1}^b \binom{n}{i}.$$ At least in the case of $n=24$ that criterion is worse, in the sense that you have to check 3587174 subsets instead of the 190050 of my method.
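Both counts are easy to double-check (plain arithmetic, nothing group-theoretic):

```python
from math import comb

n = 24
# Subsets whose size divides n:
print(sum(comb(n, d) for d in range(1, n + 1) if n % d == 0))  # 3587174
# Subsets of size at most b = (3 + 1) + (1 + 1) = 6, since 24 = 2^3 * 3:
b = 6
print(sum(comb(n, i) for i in range(1, b + 1)))  # 190050
```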

Edit 2: Because the approach to the problem is different, I don't think this question and this one contain the answer to mine. My questions are: 1. whether there is a better method (this one has been answered), and 2. how to improve the bound for my method (this one has not).

Edit 3: There is a way to slightly improve the bound while still not depending on the structure of each group. If $\{g_1,\dots,g_m\}$ is a minimal generating set of a subgroup $H$, then $\langle g_1\rangle < \langle g_1,g_2\rangle < \cdots < \langle g_1,\dots,g_m\rangle = H$ is a strictly increasing chain of subgroups, and by Lagrange's theorem each step multiplies the order by an integer $\geq 2$, hence by at least one prime factor of $n$. Since $n$ has $\sum_{i=1}^r e_i$ prime factors counted with multiplicity, we get $m\leq\sum_{i=1}^r e_i$, so we can use $$b=\sum_{i=1}^r e_i$$ as a better bound. This is a huge improvement for the case of $n=24$: here $b=4$ and we only need to check 12950 subsets.
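Checking the arithmetic for the improved bound:

```python
from math import comb

n, b = 24, 3 + 1  # b = e_1 + e_2 for 24 = 2^3 * 3^1
print(sum(comb(n, i) for i in range(1, b + 1)))  # 24 + 276 + 2024 + 10626 = 12950
```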

  • You can use the fact that the order of a subgroup must divide the order of the group to vastly decrease the number of subsets you need to check. – Joseph Camacho May 19 '22 at 23:50
  • Do you know Sylow's theorems? – Shaun May 19 '22 at 23:52
  • @JosephCamacho Indeed, that would be an approach. The issue is, for example with $n=24$: if you take all the subsets with 1, 2, 3, 4, 6, 8, 12 and 24 elements, those are still 3587174 subsets, which is more than in my second method. It would be nice if there were a way to combine both. – LeviathanTheEsper May 20 '22 at 00:50
  • @Shaun Yes, of course. The issue is: suppose we're given a group and we haven't computed any of its subgroups yet. Then we cannot, a priori, use the Sylow subgroups to compute the remaining subgroups; we'd need to compute all the Sylow subgroups first. Is there a good way to do that a priori? – LeviathanTheEsper May 20 '22 at 00:51
  • @HallaSurvivor Not completely. It does answer that there is a better way; it does not answer whether there is a better bound for the second method. – LeviathanTheEsper May 20 '22 at 00:53
  • You can cut down on subsets a lot. Once you add an element, you must add all its powers, for example. – Arturo Magidin May 20 '22 at 04:19
  • Why don't you start by generating first all cyclic groups, and then iterate over all already generated subgroups and see what they generate together with each of the cyclic subgroups? – j.p. May 20 '22 at 06:21
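For what it's worth, here is a rough Python sketch of the strategy in that last comment: start from the cyclic subgroups and repeatedly join known subgroups with them until nothing new appears. Same hypothetical $\mathbb{Z}/6\mathbb{Z}$ setup and closure routine as in the sketches above; an illustration, not an optimized implementation.

```python
# Hypothetical stand-in for G: Z/6Z under addition mod 6.
n = 6
elements = list(range(n))
identity = 0

def mul(a, b):
    """The group operation."""
    return (a + b) % n

def generated_subgroup(gens):
    """Close a generating set under the operation."""
    closure = {identity} | set(gens)
    frontier = list(closure)
    while frontier:
        g = frontier.pop()
        for h in list(closure):
            for prod in (mul(g, h), mul(h, g)):
                if prod not in closure:
                    closure.add(prod)
                    frontier.append(prod)
    return frozenset(closure)

# Step 1: every cyclic subgroup <g>.
cyclic = {generated_subgroup([g]) for g in elements}

# Step 2: join already-found subgroups with cyclic subgroups until
# no new subgroup appears. Every subgroup arises this way, since it
# is generated by finitely many of its cyclic subgroups.
subgroups = set(cyclic)
changed = True
while changed:
    changed = False
    for h in list(subgroups):
        for c in cyclic:
            joined = generated_subgroup(h | c)
            if joined not in subgroups:
                subgroups.add(joined)
                changed = True
print(sorted(map(sorted, subgroups)))
# [[0], [0, 1, 2, 3, 4, 5], [0, 2, 4], [0, 3]]
```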

1 Answer


For small groups, there are techniques to squeeze out more information. However, in general, I would not expect good algorithms. Here are some reasons why.

  • Wilson exhibits a family of class $2$ $p$-groups of exponent $p$ that have identical subgroup profiles. Now the subgroup lattice of class $2$ $p$-groups of exponent $p$ is a complete isomorphism invariant for these groups. However, we cannot efficiently compute the lattice. Nonetheless, Wilson gives enumerative results for the multiplicity of each subgroup (https://theoryofcomputing.org/articles/v015a019/v015a019.pdf).

  • The number of isomorphism classes for groups of order $p^{n}$ ($p$ a prime) is $p^{O(n^{3})}$. In general, $p$-groups are quite complicated and are believed to be the hard case for isomorphism testing. Note that class $2$ $p$-groups of exponent $p$ are determined entirely by their cohomology. We don't understand group cohomology all that well, as it pertains to algorithmic group theory.

  • Even over finite simple groups, the subgroups are quite diverse (https://arxiv.org/abs/2007.10439).

  • When we take direct products of groups, the number of subgroups grows quite a bit. Think about the lattice of subspaces for $\mathbb{F}_{p}^{n}$. See also Goursat's Lemma. Neuen and Schweitzer subsequently extended Goursat's Lemma to the case of $3$ direct factors (https://arxiv.org/abs/1607.03444).

It's also worth taking a minute to consider what constitutes a "good algorithm." One natural criterion is that the algorithm runs efficiently in practice. On the other hand, when we consider computational complexity (making precise how efficiently the problem can be solved), we need to consider the runtime of the algorithm relative to the size of the input.

If you are working in succinct input models (e.g., permutation groups, polycyclic presentations, etc.), as is done in the computational group theory community, then polynomial time means $\text{poly}(\log |G|)$, since you will be given a generating sequence as input. If you are given the Cayley table, then polynomial time means $\text{poly}(|G|)$. Note that the Cayley table requires $O(|G|^{2} \log |G|)$ bits to write down, so this is not a practical input model; even writing down the Cayley table for $p$-groups is too expensive.

Enumerating the subgroup lattice when given a Cayley table is not practical, and it certainly won't be in more succinct input models.

ml0105