Current large language models (LLMs) generate text of significantly lower quality when prompted in languages other than English. This is because their training data consists mostly of English text, with only small portions in other languages. This talk explores techniques for adapting an already pre-trained LLM to the Czech language.