We present an approach to improving non-English prompts based on backtranslation invariance: the semantics of a prompt should not change after automatic translation to English and back. The approach improves prompts in non-English languages for a variety of Large Language Models (LLMs), including GPT-4o, Llama-3.1, and Mixtral-8x7B. We evaluate it for Russian and Finnish. On the benchmark of removing commas from a sentence, the proposed approach increased accuracy by 42% for Russian and 54% for Finnish compared to non-invariant prompts (Llama-3.1). On the benchmark of counting commas, it increased accuracy by 19% for Russian and 11% for Finnish (GPT-4o).
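The invariance criterion can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `translate` function is a hypothetical stand-in for any machine-translation system, and token overlap is one simple choice of semantic-similarity proxy.

```python
# Hedged sketch of a backtranslation-invariance check.
# A prompt passes if it survives a round trip lang -> en -> lang
# with high similarity to the original.

def translate(text: str, src: str, tgt: str) -> str:
    # Hypothetical placeholder: a real system would call an MT model here.
    # This toy lookup only covers one Russian example prompt.
    lookup = {
        ("ru", "en", "Удалите запятые из предложения."):
            "Remove the commas from the sentence.",
        ("en", "ru", "Remove the commas from the sentence."):
            "Удалите запятые из предложения.",
    }
    return lookup.get((src, tgt, text), text)

def token_overlap(a: str, b: str) -> float:
    # Jaccard overlap of lowercased tokens as a crude semantic proxy.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def is_backtranslation_invariant(prompt: str, lang: str,
                                 threshold: float = 0.8) -> bool:
    # Round-trip the prompt through English and compare with the original.
    english = translate(prompt, lang, "en")
    back = translate(english, "en", lang)
    return token_overlap(prompt, back) >= threshold

prompt = "Удалите запятые из предложения."
print(is_backtranslation_invariant(prompt, "ru"))  # True for this toy lookup
```

In practice the similarity proxy and the threshold would be tuned per language pair; prompts that fail the check are candidates for rewriting.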