Current LLMs have no persistent memory of their own, so they will not "accumulate skills" from in-context learning.
You can, nevertheless, include in the LLM prompt any information you obtain externally. This is common practice in several scenarios:
- The ChatGPT web interface sends the whole conversation to the model at each turn, so the model can use it as context and appear to "remember" the exchange. The model itself remembers nothing; it only sees whatever is re-supplied in the prompt (see the first sketch after this list).
- You can keep a database with whatever knowledge your LLM needs and use it to feed the prompt on demand. For question answering, it is common to query a knowledge database for texts relevant to the user's question and include them in the LLM prompt as context along with the question itself, a pattern known as retrieval-augmented generation (RAG); see the second sketch below.
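Here is a minimal sketch of the first pattern, assuming the OpenAI Python client (`openai >= 1.0`) with an API key in the environment; the model name is an illustrative assumption. The key point is that the only "memory" is a client-side list that is re-sent in full on every turn.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = []       # the only "memory" is this client-side list

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The ENTIRE history is sent on every turn; the model starts from
    # scratch each time and only "knows" what this list contains.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("My name is Ada."))
print(chat("What is my name?"))  # works only because history was re-sent
```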
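And a minimal sketch of the second pattern. A toy keyword-overlap retriever stands in for a real knowledge database (in practice you would typically use embeddings and a vector store); the documents and function names here are purely illustrative.

```python
# Toy "knowledge database": in a real system this would be a search
# index or vector store, not an in-memory list.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres high.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Place the retrieved texts in the prompt as context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# The resulting string is what you would send to the LLM as its prompt.
print(build_prompt("How tall is the Eiffel Tower?"))
```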