A late answer, but I recently had cause to perform some entropy estimation and computed some chi-squared statistics along the way.
For context: uniformly distributed random bytes take 256 possible values, so the chi-squared statistic has 255 degrees of freedom, and since the expected value of a chi-squared statistic equals its degrees of freedom, the target statistic is ~255, leading to a p value of ~0.5. As one definition of randomness is incompressibility, it follows that an ideally compressed file cannot be differentiated from a truly random one. The caveat, though, is the level of compression actually achieved. A real compressed file requires control and format structures within it that differentiate it significantly from the perfectly random, and these structures skew the p values in a chi-squared test (a sketch of the computation follows the examples). Some examples of compressed data:
.zip p < 0.0001
.jpg p < 0.0001
.png p < 0.0001
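For concreteness, here is a minimal sketch of how such a p value can be computed, assuming NumPy and SciPy are available (`chisq_p` is just an illustrative helper name, not a standard tool):

```python
import os

import numpy as np
from scipy.stats import chisquare

def chisq_p(data: bytes) -> float:
    """p value of a chi-squared goodness-of-fit test of byte frequencies
    against the uniform distribution (256 bins, 255 degrees of freedom)."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return chisquare(counts).pvalue  # expected count defaults to len(data)/256

# Truly random input: the statistic should be ~255 and p anywhere in (0, 1).
print(chisq_p(os.urandom(1_000_000)))
```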
Remember that random data would have p ≈ 0.5 on average. More precisely, for truly random data the p values are themselves uniformly distributed over (0, 1), which a Kolmogorov–Smirnov test on a batch of such p values would confirm. So at this point my answer would be: yes, you can use a chi-squared test to distinguish compressed data from random data.
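As a quick illustration of that uniformity, one can compute p values for many independent random blocks and KS-test them against Uniform(0, 1); a sketch, with arbitrary block size and count:

```python
import os

import numpy as np
from scipy.stats import chisquare, kstest

# Chi-squared p value for each of many independent random blocks.
pvals = [
    chisquare(np.bincount(np.frombuffer(os.urandom(262_144), dtype=np.uint8),
                          minlength=256)).pvalue
    for _ in range(200)
]

# For truly random bytes the p values are ~Uniform(0, 1), so the KS test
# against the uniform CDF should itself report a large (non-significant) p.
print(kstest(pvals, "uniform"))
```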
But compression algorithms have improved, and I found fp8, a PAQ8 derivative. It's the most powerful compression program I could find that compiles easily. The same files, after compression with fp8, now give the following p values:
.zip.fp8 p = 0.93
.jpg.fp8 p = 0.14
.png.fp8 p = 0.38
On prima facie evidence, these compressed files produce chi-squared p values consistent with fully random data. So my final answer is: no, you cannot differentiate random data from (well) compressed data using a chi-squared test.
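For a reproducible contrast using only stock Python, one can compare random bytes against the output of a standard-library compressor; zlib here is a stand-in for the deflate-based .zip case above (fp8 itself would have to be run externally), so treat this as a sketch rather than a rigorous experiment:

```python
import os
import zlib

import numpy as np
from scipy.stats import chisquare

def chisq_p(data: bytes) -> float:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return chisquare(counts).pvalue

# Compressible but non-trivial input: a few MB of structured text.
text = b"".join(b"record %d: value %d\n" % (i, i * i) for i in range(200_000))
packed = zlib.compress(text, 9)

print("random bytes:", chisq_p(os.urandom(len(packed))))  # p anywhere in (0, 1)
print("zlib output :", chisq_p(packed))  # per the .zip figure above, expect a
                                         # tiny p: deflate's framing and Huffman
                                         # coding skew the byte frequencies
```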
Some further insight into the chi-squared statistic and p values might be had here.