To take a trivial example, you cannot find the largest number in a
set of $n$ numbers without examining them all, which requires linear
time; no elaborate proof is needed. But there are algorithms that do not always require
reading all the data. A nice example is finding all occurrences of a
pattern in a text, which may not require reading the whole text (Boyer-Moore algorithm).
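As a sketch of how the skipping works, here is a simplified variant of Boyer-Moore using only the bad-character heuristic (the function name and the single-step shift after a match are my simplifications; full Boyer-Moore also uses a good-suffix shift):

```python
def find_occurrences(pattern, text):
    """Find all occurrences of pattern in text, comparing right to left
    and skipping over text characters when a mismatch allows it
    (bad-character heuristic of Boyer-Moore, simplified)."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Rightmost position of each character in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    occurrences = []
    s = 0  # current alignment of the pattern against the text
    while s <= n - m:
        j = m - 1
        # Compare from the right end of the pattern.
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            occurrences.append(s)
            s += 1  # safe but naive shift; full Boyer-Moore does better
        else:
            # Align the mismatched text character with its rightmost
            # occurrence in the pattern, or shift past it entirely.
            s += max(1, j - last.get(text[s + j], -1))
    return occurrences
```

When the mismatched text character does not occur in the pattern at all, the pattern jumps forward by its full length, so many text characters are never read.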
But I should not repeat what is already answered, probably better than
I would.
However, there is a point in the question that calls for some further
remarks regarding lower bounds (and complexity bounds in general).
Actually, the choice of what counts as a single computational step is
irrelevant, as long as each computational step has a constant
upper bound (and lower bound) on its cost. The complexity result will be
the same, since complexity is defined only up to a constant factor. Taking 3 comparisons as
the unit operation, or only a single one, makes no difference.
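To make the constant-factor point concrete, here is a small sketch (the counting instrumentation is my addition) that tallies comparisons while finding the maximum:

```python
def max_with_count(xs):
    """Return (maximum, number of comparisons) for a non-empty list.

    Finding the maximum of n numbers takes exactly n - 1 comparisons.
    If we instead billed each loop iteration as, say, 3 unit operations,
    the total would be 3(n - 1): a different constant, same O(n) class.
    """
    comparisons = 0
    best = xs[0]
    for x in xs[1:]:
        comparisons += 1
        if x > best:
            best = x
    return best, comparisons
```

Whether we charge 1 or 3 units per iteration, the cost remains linear in $n$.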
The same is true of the unit of data size used as the reference for
evaluating the cost of the computation: taking one integer or
two integers as the unit of size makes no difference.
However, the two choices must be related.
You may treat a number $n$ as a single unit of data if you also treat
operations on integers, such as addition or comparison,
as unit operations. Alternatively, you may consider it to be $\log n$ units
of data, since it takes $O(\log n)$ digits to represent the number. Then,
of course, addition and comparison are no longer unit operations, and
their cost depends on the values of the operands.
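A sketch of addition in this digit-level (logarithmic-cost) view, with numbers represented as lists of decimal digits (the representation and function name are illustrative assumptions):

```python
def add_digits(a, b):
    """Add two non-negative numbers given as lists of decimal digits,
    least significant digit first.

    The loop runs once per digit, so the cost is proportional to the
    number of digits: O(log n) for a number of value n, rather than
    the single unit step of the uniform-cost model."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        s = da + db + carry
        result.append(s % 10)  # digit of the sum at position i
        carry = s // 10        # carry into the next position
    if carry:
        result.append(carry)
    return result
```

For example, adding $99$ and $1$ touches every digit and propagates a carry, which is exactly the work the uniform-cost model hides.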
Whether an operation can be considered having unit cost is tightly
related to what data can be considered as having unit size. And that
depends on what level of abstraction you choose for your model of
computation.