This question might sound facetious, but it is a genuine question which I am very much interested in. I apologize in advance if it is too conceptual or philosophical, but I'm optimistic that I might gain some mathematical insight from an answer.
There has been a long-standing interest, ever since Gödel, in adding new and "true" axioms to set theory. I take it to be definitional that the point of such a program is to eliminate or reduce the "non-standard" models of set theory, where a model's non-standardness is judged either by its fit to our intuitive concept of "set" and/or "size", or by some other metaphysical or aesthetic standard. It seems that a rather trivial part of our conception of the set-theoretic universe is that no set is a model of all set-theoretic truth. That is, every model of set-theoretic truth (which, like everything, is a set) will be non-standard in all sorts of ways: it will be absolutely tiny, being a set rather than a proper class; it will lack some of the "real" cardinals; it will (sometimes) lack the "real" membership relation; and so on. So, my case rests on the following claim:
(1) Every model of set theory (which is a set) will be non-standard according to our conception of the entire set-theoretic universe.
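To make the sort of non-standardness in (1) concrete, here is the standard Löwenheim–Skolem/Mostowski picture (assuming, beyond mere consistency, that ZFC has a transitive model): there is then a countable transitive $M \models \mathrm{ZFC}$, and

$$\omega_1^M \neq \omega_1, \qquad \omega_1^M \text{ is a countable ordinal in } V,$$

so the object that $M$ takes to be "the first uncountable cardinal" is not the real one.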
However, once (1) is granted, doesn't it trivially follow that set-theoretic truth (where truth is determined by our conception rather than by the axioms) should be inconsistent? For by (1) every set-sized model fails to fit our conception, and so, on this reading, fails to satisfy some set-theoretic truth; hence no set models all of set-theoretic truth, and being inconsistent is equivalent to not having any models. If so, doesn't this have serious implications for math and/or philosophy? (I.e., if our very conception of set is inconsistent, wouldn't this undermine the "realist" program of finding axioms that capture this conception?)
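For reference, the equivalence invoked above is Gödel's completeness theorem (in its model-existence form), stated for a first-order theory $T$:

$$T \text{ is consistent} \iff T \text{ has a (set-sized) model.}$$

The parenthetical "set-sized" is the standard convention, and it is exactly what gives my question its bite: the theorem only offers sets as witnesses to consistency.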