Let $X \subset R^n$ be a set of $m$ points. We can compute its convex hull, and with it the set of extreme points $E(X)$. Assume the points are in general position (in particular, no point is a scalar multiple of another).
Under a linear projection $f: R^n \to R^{n-1}$, we can find the extreme points of $Y := \{f(x) \mid x \in X\}$, denoted $E(Y)$. Because a linear map preserves convex combinations, each extreme point of $Y$ must have at least one preimage in $X$ that is an extreme point of $X$: if $f(x) \in E(Y)$ and $x$ were a proper convex combination $x = \sum_i \lambda_i x_i$ of other points of $X$, then $f(x) = \sum_i \lambda_i f(x_i)$ would be one too, unless all the $f(x_i)$ with $\lambda_i > 0$ coincide with $f(x)$.
So $PI(E(Y)) \cap E(X) \neq \emptyset$: the pre-image of the extreme points of $Y$ contains at least one extreme point of $X$ (indeed, each extreme point of $Y$ has a preimage in $E(X)$).
There are ${m \choose n}$ such linear projections defined by the data points: pick $n$ of the $m$ data points, take the $(n-1)$-dimensional affine subspace they span (well-defined by general position), and project orthogonally onto it.
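As a sketch of how one such projection could be computed (assuming numpy; `project_onto_affine_span` is a hypothetical helper name, not standard terminology): orthonormalize the $n-1$ direction vectors spanned by the chosen points, then read off each data point's coordinates in that basis.

```python
import numpy as np

def project_onto_affine_span(X, idx):
    """Orthogonal projection of all points of X onto the affine subspace
    spanned by the n points X[idx] (assumed affinely independent).

    X   : (m, n) array of points in R^n
    idx : sequence of n row indices selecting the spanning points
    Returns an (m, n-1) array: each point's coordinates within the subspace.
    """
    p0 = X[idx[0]]
    D = (X[idx[1:]] - p0).T        # n x (n-1): direction vectors of the subspace
    Q, _ = np.linalg.qr(D)         # orthonormal basis for those directions
    return (X - p0) @ Q            # coordinates in the (n-1)-dim subspace
```

In the 2-d case, `idx` picks two points, the subspace is the line through them, and the extreme points of the projection are simply the argmin and argmax of the single returned coordinate.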
Would taking the union over all ${m \choose n}$ such projections, and collecting the extreme points found in the lower dimension, be sufficient to recover all the extreme points of the original set? That is, does $E(X) \subseteq \bigcup_{i=1}^{\binom{m}{n}} PI(E(Y_i))$ hold?
For example, take 2-d space ($n = 2$) and 10 points ($m = 10$). Any two points define a line, so there are ${10 \choose 2}$ lines defined by the data. Projecting the 10 points onto any one of these lines gives a convex hull that is a line segment, so we detect 2 extreme points per projection. Projecting onto all the lines and collecting the extreme points yields $2 \cdot {10 \choose 2}$ points, containing duplicates of course. After deduplicating, are all the extreme points of the 2-d set captured by this procedure?
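The 2-d procedure can be run as a small experiment (a sketch, assuming numpy; the hull routine is Andrew's monotone chain, and the 5-point set below, a unit square plus its center, is an illustrative choice rather than the 10-point set from the question):

```python
import numpy as np
from itertools import combinations

def hull_extreme_points(P):
    """Indices of the extreme points of a 2-d point set (Andrew's monotone chain)."""
    order = sorted(range(len(P)), key=lambda i: (P[i][0], P[i][1]))

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(idx):
        chain = []
        for i in idx:
            # pop while the turn is clockwise or collinear, so only true vertices survive
            while len(chain) >= 2 and cross(P[chain[-2]], P[chain[-1]], P[i]) <= 0:
                chain.pop()
            chain.append(i)
        return chain[:-1]          # last point re-appears as the start of the other half

    return set(half(order) + half(order[::-1]))

# A small illustrative set (m = 5): the unit square plus its center.
# The center is the only non-extreme point.
X = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])

candidates = set()
for i, j in combinations(range(len(X)), 2):
    d = X[j] - X[i]
    t = X @ (d / np.linalg.norm(d))   # 1-d coordinate of each point along the line i-j
    # keep every point attaining the min or the max (ties included)
    candidates.update(int(k) for k in np.flatnonzero(np.isclose(t, t.min())))
    candidates.update(int(k) for k in np.flatnonzero(np.isclose(t, t.max())))

extremes = hull_extreme_points(X)
print(extremes <= candidates)     # True for this set: all extreme points are captured
```

For this particular set the deduplicated candidates coincide exactly with $E(X)$; whether that holds for every configuration is precisely the question being asked.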