In most cases this will make no sense: a test of whether your data is correct may end up comparing file A1 to a copy of the file itself, A2. Let's say you need a new version of A1 because the format of your data has evolved in version 2.0 of your software. You change it to A1_V2, and now your test tells you that A2 differs from A1_V2 - so you copy A1_V2 to A2_V2 (or edit it until both files are equal again). But if you made an error in the transition from A1 to A1_V2, you have now introduced the same error into A2_V2, and the test does not prevent that. The only form of quality assurance for the content of A1 that will work (especially if A1 is part of the requirements, as you wrote) is to have a second person proofread A1.
However, if A1 is in a data format which can be checked for consistency (for example, an XML file, maybe with a schema), then it makes sense to have a test that A1 is consistent with that format. Of course, you probably already have such a test, because if A1 is already part of an automated test of some production code, then that code reads A1. And if the reading routine is designed to be robust and fail-safe, as any reading code should be, then it already does this kind of consistency checking and will report an error if A1 is broken. So in most cases you don't need any extra tests specifically for the data. But beware: such a test does not guarantee that the contents of A1 are correct; it only gives you a formal check.
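For illustration, here is a minimal sketch of such a formal check in Python, assuming A1 is an XML file with an XSD schema (the file names `A1.xml` and `A1.xsd` are placeholders) and using the third-party lxml library:

```python
import lxml.etree as etree

def test_a1_conforms_to_schema():
    # Placeholder paths; point these at your real data and schema files.
    schema = etree.XMLSchema(etree.parse("A1.xsd"))
    doc = etree.parse("A1.xml")
    # assertValid raises DocumentInvalid if A1 violates the schema,
    # which makes the test fail.
    schema.assertValid(doc)
```

Again, this only verifies the form of A1, not that its contents are what the requirements demand.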
There is, however, no rule without exceptions. There can be special cases (which I expect to be rare) where your specific test data can be checked for certain content, maybe in a non-obvious form, to make sure a test will work as intended. For example, you may have a complex test database, and you want to make sure that later manual edits of that database don't destroy your set of ten well-designed, non-obvious test cases contained in it. Then it would make sense to write an automatic test to verify that.
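A sketch of what such a guard could look like, assuming the database is SQLite and the designed cases can be recognized by name in a `test_cases` table (table, column, and case names are all hypothetical):

```python
import sqlite3
import unittest

# Hypothetical names for the designed test cases; adjust to your schema.
EXPECTED_CASE_NAMES = {
    "empty_order", "max_quantity", "unicode_customer",
}

class TestFixtureDatabase(unittest.TestCase):
    def test_designed_cases_still_present(self):
        conn = sqlite3.connect("fixtures/test.db")
        try:
            rows = conn.execute("SELECT name FROM test_cases")
            names = {name for (name,) in rows}
        finally:
            conn.close()
        # A later manual edit must not have removed any designed case.
        self.assertTrue(EXPECTED_CASE_NAMES <= names)
```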
Also, if your data is itself some kind of "code" (for example, some scripts or functions written in some kind of DSL), then it is obvious that automatic tests make sense here.
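In that case you test the data the way you test any other code: known input, expected output. For example (all names hypothetical), if your DSL scripts are shipped as data files and your project exposes an interpreter function `run_script`, a test could pin down their observable behaviour:

```python
import unittest
from myproject.dsl import run_script  # hypothetical DSL interpreter

class TestDslDataFiles(unittest.TestCase):
    def test_discount_script_behaves_as_specified(self):
        # The data file is code, so exercise it like code.
        result = run_script("data/discount_rules.dsl", order_total=100)
        self.assertEqual(result, 90)
```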
I would like to mention that it makes no difference whether we are talking about test data or about data released to production as part of your product, since production data can always be used as test data.
To summarize: in most real-world cases I guess it won't make sense, but the more complex your data is and the more implicit redundancies or formal constraints it has, the more likely tests are a good idea. So think about your real case and apply some common sense - that will surely help ;-)