I am writing code to solve an optimization problem of the form
$$\begin{array}{cl} \text{maximize} & f(x) \\ \text{subject to} & x\in\mathcal{X}, \end{array}$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is concave in $x$, and $\mathcal{X}\subseteq\mathbb{R}^n$ is a simplex, e.g.,
$$\mathcal{X}=\left\{ x\in\mathbb{R}^n : x_i \ge 0,\ \sum_i x_i \le c\right\}.$$
To this end, I wrote code using the Frank-Wolfe method (a.k.a. the conditional gradient method); a rough sketch of what I mean is below.
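Concretely, my implementation follows a loop like this minimal sketch (Python; `grad_f` stands for an assumed gradient oracle for $f$, and the step-size rule and names are illustrative, not my exact code). The linear subproblem is cheap for this $\mathcal{X}$, since a linear function attains its maximum at a vertex, i.e. at $0$ or at $c\,e_i$:

```python
import numpy as np

def frank_wolfe(grad_f, x0, c, n_iters=1000):
    """Maximize a concave f over X = {x : x_i >= 0, sum_i x_i <= c}
    with the conditional gradient (Frank-Wolfe) method."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(n_iters):
        g = grad_f(x)
        # Linear maximization oracle: <g, s> over X is maximized at a
        # vertex, either the origin or c * e_i for the largest g_i.
        s = np.zeros_like(x)
        i = int(np.argmax(g))
        if g[i] > 0:
            s[i] = c
        # Classic diminishing step size gamma_t = 2 / (t + 2).
        gamma = 2.0 / (t + 2.0)
        x += gamma * (s - x)
    return x

# Toy check: maximize f(x) = -||x - b||^2, i.e. project b onto X.
b = np.array([0.3, 0.9, -0.2])
print(frank_wolfe(lambda x: -2.0 * (x - b), np.zeros(3), c=1.0))
```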
However, many papers dealing with convex problems say something like, "Since the above problem is convex, it can be solved by any convex programming tool, e.g., an interior-point method."

Why do so many authors mention the interior-point method rather than the conditional gradient method? As I understand it, both can solve constrained convex problems, and the main difference between them is whether the algorithm is based on gradient or Hessian information.
Is there a particular reason that so many authors mention only the interior-point method? If the interior-point method is better than Frank-Wolfe, I will rebuild my code around an interior-point method instead.