> Should this necessarily be done at application level, or can the database (MySQL InnoDB) also do this reliably with INSERT INTO SELECT?
It is reliable, until it is not.
Honestly, this all depends on how complex your calculation logic is and how well it can be implemented in SQL. Your vague description

> save the totals of the booking amounts in another table depending on the customer, account no., product,

sounds like something which can be implemented perfectly well with a few simple GROUP BY SQL statements. "Several million" booking rows are not an order of magnitude which should pose a huge problem for most contemporary database servers, as long as the subtotals can be calculated by a linear scan. However, you may not have told us the full story here; maybe your real requirements are more complex - only you know.
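For illustration, here is a minimal sketch of such a statement. The table and column names (`bookings`, `booking_totals`, `customer_id`, `account_no`, `product_id`, `amount`) are assumptions, since your question does not show the actual schema:

```sql
-- Hypothetical schema: "bookings" holds the source rows,
-- "booking_totals" receives one sum per customer/account/product.
INSERT INTO booking_totals (customer_id, account_no, product_id, total_amount)
SELECT customer_id, account_no, product_id, SUM(amount)
FROM bookings
GROUP BY customer_id, account_no, product_id;
```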
If I were in your shoes, I would simply try this out with SQL, which is probably the most straightforward and simple solution. In case you need to join in other, related tables, you have to take care of proper indexing, of course. If this does not work well, runs for too long, etc., you can still switch to something more complicated.
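Even for the plain GROUP BY sketched above, a covering index can help: when the index contains the grouping columns plus the aggregated column, InnoDB can answer the query from the index alone. Again, the names are assumed:

```sql
-- Covering index for the aggregation: the SELECT above can be served
-- entirely from the index, without touching the table rows.
CREATE INDEX idx_bookings_grouping
    ON bookings (customer_id, account_no, product_id, amount);
```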
> I would have to mark (UPDATE) each data record that I have already processed in order to continue at the correct point if the program terminates.
Before going that route, I would first try whether you can implement this without any markers - this is simpler and may be fully sufficient. Start by creating the aggregated tables in a single transaction. If the server terminates unexpectedly during execution, the transaction should be rolled back, and nothing bad happens - you can simply retry the operation at a later point in time.

If you need more than one transaction for filling an aggregate table, implement this in an idempotent way, where you can simply repeat the whole process and always get the same result. In that case, it might be sufficient to clear an aggregate table before refilling it. Or you can define the transactions which fill your aggregates in such a way that you can determine from the content of the aggregate table which data was already processed successfully and which was not. Adding additional status columns or tables to the source data for tracking former processing steps is an optimization which has the potential to prove itself premature and unnecessary.
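A minimal sketch of the clear-and-refill variant, reusing the assumed names from above. Note the `DELETE` instead of `TRUNCATE`: in MySQL, `TRUNCATE TABLE` causes an implicit commit and would break the atomicity:

```sql
-- Idempotent rebuild: the whole refill is one InnoDB transaction, so a
-- crash leaves either the old or the new aggregate state, never a
-- half-filled table.
START TRANSACTION;

DELETE FROM booking_totals;

INSERT INTO booking_totals (customer_id, account_no, product_id, total_amount)
SELECT customer_id, account_no, product_id, SUM(amount)
FROM bookings
GROUP BY customer_id, account_no, product_id;

COMMIT;
```

If the process dies before the `COMMIT`, InnoDB rolls the transaction back during recovery, and you can simply run the same script again.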
When you really run into one of the potential issues sketched in Ewan's answer, there is still time enough to switch to something more complicated and to try out whether a solution implemented at "the application level" really behaves better. Don't forget that getting the data out of the database and the results back into it introduces some extra overhead - in terms of running time, resource usage, and programming effort - which needs to be balanced by some real benefit.
See also: If there are two ways of approaching a task, how should one choose between them?