Daniel J. Bernstein (among others) has expressed concern over how "verifiably random" curve parameters are generated. He points out that hashing a public seed doesn't prevent, say, the US government (NIST/NSA) from choosing a seed that produces parameters with a one-in-a-million weakness not yet known to the academic community.
If the seed were hashed iteratively instead of just once, would this help address these concerns? If my thinking is correct, iterating a hash function $2^{40}$ times on the seed when producing the standard multiplies the cost of trying each candidate seed by $2^{40}$: searching for a one-in-a-trillion ($\approx 2^{-40}$) weakness would then take on the order of $2^{40} \cdot 2^{40} = 2^{80}$ hash evaluations, which should make it mostly infeasible, and even a one-in-a-million ($\approx 2^{-20}$) weakness would cost around $2^{60}$ evaluations, making it much more difficult.
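To make the idea concrete, here is a minimal sketch of what I mean by "iterating the hash" (Python, using SHA-256 purely for illustration; the seed value and the small iteration count used in the demo are placeholders, not anything from an actual standard):

```python
import hashlib

def iterated_hash(seed: bytes, iterations: int) -> bytes:
    """Hash the seed repeatedly; the final digest would feed parameter generation."""
    digest = seed
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

# A real standard would publish the actual seed and use iterations = 2**40;
# a much smaller count is used here so the example runs quickly.
seed = bytes.fromhex("00" * 20)          # placeholder seed
candidate = iterated_hash(seed, 2**20)    # stand-in for the proposed 2**40
print(candidate.hex())
```

The point of the construction is simply that an adversary who wants to try many seeds until one yields weak parameters must pay the full iteration cost for every candidate seed, while honest verifiers only pay it once.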