What is it used for?
Data ingestion into a distributed datastore such as HDFS. (See the attached image for a visual overview; I did not make it and include it only as a visual aid.) Other tools, such as Storm and Sqoop, which the image also mentions, can help with data ingestion as well.
At which stage of a BigData analysis is it used?
It is used for data ingestion into your distributed datastore (e.g. HDFS). For example, suppose a web server is writing logging information to /var/logs/webserver.log. Apache Flume can watch that file, grab what it needs out of it, and ship it to HDFS. Once the data lands in your datastore, you can use other tools to analyze it (e.g. Hive, Pig, MapReduce, etc.).
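As a rough illustration, a minimal Flume agent configuration for that scenario could look something like the sketch below. The agent name (a1), file paths, NameNode address, and capacity are placeholder values I chose for the example, not anything from your environment:

    # Name this agent's source, channel, and sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Source: tail the web server log file (exec source)
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/logs/webserver.log
    a1.sources.r1.channels = c1

    # Channel: buffer events in memory between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # Sink: write the events into HDFS as plain text
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://namenode:8020/logs/webserver/
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.channel = c1

The general idea is always the same: a source reads events, a channel buffers them, and a sink delivers them to the destination.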
And what are prerequisites for learning it?
An understanding of how to write scripts, edit configuration settings, and find your way around Linux would be the absolute minimum to get started. The instructions are somewhat dated, but a good starting point is the Hortonworks tutorial on Flume: http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/
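Once you have a configuration file along the lines of the sketch above, starting the agent is a single command (the agent name and paths here are whatever you used in your config; the console logger option is just handy while experimenting):

    flume-ng agent --name a1 --conf ./conf --conf-file ./conf/flume.conf -Dflume.root.logger=INFO,console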
If you would like me to elaborate more I would be glad to, but I wanted to meet your requirement of a simple, short explanation.
