One way of thinking about outlier detection is that you're building a predictive model, then checking whether a point falls within the range of its predictions. From an information-theoretic point of view, you can measure how much each observation increases the entropy of your model.
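To make that concrete, here's a minimal sketch, assuming (purely for illustration) a normal model fitted to the remaining points; the surprise function and the choice of a normal distribution are my own, not part of the method used below. Each point is scored by its negative log-likelihood in nats, i.e. how surprising it looks to a model that never saw it.

import math
import statistics

def surprise(x, rest):
    # Illustrative assumption: fit a normal distribution to `rest`,
    # then score x by its negative log-likelihood (surprise, in nats).
    mu = statistics.mean(rest)
    sigma = statistics.pstdev(rest) or 1e-9  # guard against zero spread
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

data = [100, 50, 150, 200, 35, 60, 50, 20, 500]
for i, x in enumerate(data):
    rest = data[:i] + data[i + 1:]
    print("surprise of {0}: {1:.2f}".format(x, surprise(x, rest)))

The point with the highest surprise contributes the most unexpected information, which is one way to read "increasing the entropy of your model".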
But if you're treating this data as just a collection of numbers, with no proposed model for how they're generated, you might as well just look at the average. Since you can't assume the numbers are normally distributed, you can't make probabilistic statements about how far "off" a given number is from the average, but you can look at the deviation in absolute terms.
Applying this, you can take the average of all the numbers, then, for each number, take the average with that number excluded. Whichever leave-one-out average differs most from the global average marks the biggest outlier. Here's some Python (scores are rounded for display):
def avg(a):
    return sum(a) / len(a)

l = [100, 50, 150, 200, 35, 60, 50, 20, 500]
m = avg(l)  # global average
for idx in range(len(l)):
    # average of everything except l[idx]
    rest = [elem for i, elem in enumerate(l) if i != idx]
    # score: how far the average moves when this point is left out
    print("outlier score of {0}: {1:.0f}".format(l[idx], abs(m - avg(rest))))
>>
outlier score of 100: 4
outlier score of 50: 10
outlier score of 150: 3
outlier score of 200: 9
outlier score of 35: 12
outlier score of 60: 9
outlier score of 50: 10
outlier score of 20: 14
outlier score of 500: 46
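A side note on the design (my observation, not something from the code above): because the mean is linear, removing a value x shifts it by exactly (x - m) / (n - 1), so each score is just the absolute deviation from the global average divided by n - 1, and the ranking is identical to ranking by |x - m|. The scores can therefore be computed directly from the global average:

l = [100, 50, 150, 200, 35, 60, 50, 20, 500]
m = sum(l) / len(l)
for x in l:
    # closed form for the leave-one-out score above
    print("outlier score of {0}: {1:.0f}".format(x, abs(x - m) / (len(l) - 1)))

This prints the same scores as the loop above.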