Best Practices for Prompt Modifier Shaping: Navigating Model Behavior and Responsiveness

Some models resist the formatting, length, or tone modifiers we give them. But what if that resistance isn't a bug so much as a behavior we can design around?
Context and Purpose
In recent experiments, we explored how prompt modifiers influence the length, style, and structure of outputs from large language models (LLMs). While the overall results showed promise, one model consistently stood out by not standing out: GPT-3.5-Turbo.
This article explores:
Why some models ignore modifiers
How to shape prompts around that
Emerging best practices for dealing with modifier resistance
Why Modifiers Sometimes Fail
Language models are trained on massive corpora, but their behavior can vary based on how they've been optimized and aligned. Here are some reasons a model might resist your prompt instructions:
1. System prompt filtering
GPT-3.5-Turbo often places more weight on its initial "system" instruction than on later text, so any modifier embedded in the user prompt may be de-emphasized.
2. Overgeneralization
Smaller or cheaper models (like Turbo or some distilled variants) prioritize coherence and safety over stylistic fidelity. They default to a "safe" general style.
3. Implicit prompt normalization
Some models actively rewrite or normalize your prompt behind the scenes. That means your carefully crafted tone hint might get wiped out.
Engineering Around Resistance: A Prompt Shaping Strategy
When your model ignores modifiers, don’t fight it directly. Instead:
1. Inject in the most sensitive location
Place the instruction in a system prompt if available, or right at the top of the user message, before any question.
Example:
System: Respond in a bullet-point style using no more than 50 tokens.
User: What is anemia?
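In code, this amounts to putting the modifier into the system message rather than appending it to the question. Here is a minimal sketch using the OpenAI Python SDK (v1+); the `max_tokens` cap is an extra safeguard we add on top of the original example, not part of it:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The formatting/length modifier lives in the system message,
# the position this model tends to weight most heavily.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=50,  # hard cap backing up the soft "no more than 50 tokens" instruction
    messages=[
        {"role": "system", "content": "Respond in a bullet-point style using no more than 50 tokens."},
        {"role": "user", "content": "What is anemia?"},
    ],
)

print(response.choices[0].message.content)
```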
2. Create placeholder patterns
If the model prefers structure, wrap your request in a template:
Q: What is anemia?
A: [Brief, professional response in 50 words or less.]
This hints to the model that it should fill a role rather than simply answer.
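Applied programmatically, every question gets wrapped in the same Q/A scaffold before it is sent. The helper below is a hypothetical sketch, not part of any library:

```python
def scaffold(question: str,
             style_hint: str = "Brief, professional response in 50 words or less.") -> str:
    """Wrap a raw question in a Q/A template so the model completes a role.

    The bracketed hint describes what the answer slot should look like.
    """
    return f"Q: {question}\nA: [{style_hint}]"

print(scaffold("What is anemia?"))
# Q: What is anemia?
# A: [Brief, professional response in 50 words or less.]
```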
3. Combine constraints with tone
Mix hard constraints ("no more than 60 words") with soft nudges ("in a friendly tone"). Sometimes soft and hard constraints work better in tandem.
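For example, a combined instruction might read (the wording is illustrative):

System: Answer in no more than 60 words, in a friendly, conversational tone.
User: What is anemia?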
4. Use few-shot examples (if you can afford them)
Demonstrate the desired output pattern. Even GPT-3.5-Turbo is more likely to follow examples than plain instructions.
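A sketch of what a few-shot override can look like with the chat API; the demonstration answers below are made up purely to show the target shape (short bullets, well under 50 tokens):

```python
from openai import OpenAI

client = OpenAI()

# Two demonstrations of the style we want the model to imitate.
few_shot = [
    {"role": "user", "content": "What is hypertension?"},
    {"role": "assistant",
     "content": "- Persistently high blood pressure\n- Often symptomless\n- Raises heart disease and stroke risk"},
    {"role": "user", "content": "What is asthma?"},
    {"role": "assistant",
     "content": "- Chronic airway inflammation\n- Wheezing and shortness of breath\n- Managed with inhalers"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=50,
    messages=few_shot + [{"role": "user", "content": "What is anemia?"}],
)
print(response.choices[0].message.content)
```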
From Modifiers to Model Mapping
Different models are sensitive to different prompt positions and styles. In our broader experiments, we found:
| Modifier | Mistral | Claude 3 | GPT-3.5-Turbo |
| --- | --- | --- | --- |
| brief | clear | nuanced | often ignored |
| minimal + 30 tokens | follows | mostly follows | needs tricks |
| business tone | strong | strong | mixed |
| bullet format | strong | strong | usually ignored |
What's Next: A Modifier Responsiveness Catalog?
We believe this area deserves more structured research. That’s why we’re working toward:
A reproducible catalog of models and their modifier responsiveness
A taxonomy of modifier types: formatting, tone, structure, brevity, voice
A suite of injection patterns: template injection, role masking, few-shot override
Let us know if you're experimenting too — collaboration welcome.
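As a starting point, a responsiveness probe can be surprisingly small. The sketch below runs a few modifiers against a few models through OpenRouter's OpenAI-compatible endpoint; the model slugs and the crude word/bullet counts are illustrative assumptions, not a finished scoring method:

```python
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API, so one client can probe many models.
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")

MODELS = ["mistralai/mistral-7b-instruct", "anthropic/claude-3-haiku", "openai/gpt-3.5-turbo"]  # illustrative slugs
MODIFIERS = {
    "brief": "Answer briefly.",
    "bullet format": "Answer as a bulleted list.",
    "business tone": "Answer in a formal business tone.",
}
QUESTION = "What is anemia?"

for model in MODELS:
    for name, modifier in MODIFIERS.items():
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": modifier},
                      {"role": "user", "content": QUESTION}],
        ).choices[0].message.content
        # Crude responsiveness signals; a real catalog would score these more carefully.
        print(f"{model:35s} {name:15s} words={len(reply.split()):3d} bullets={reply.count('- ')}")
```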
Conclusion
Prompt modifiers can become a powerful technique — not only to tweak tone or length but also to control formatting and behavior for specific use cases.
These techniques can be combined with input normalization (removing formatting conflicts) or template scaffolding to ensure more consistent responses.
Key takeaways:
Modifier resistance is common — learn to work with it
Injection-point reasoning is an emerging best practice
Prompt engineering is not just about what, but where
Models vary — your prompt shaping needs to adapt
The road to great prompting isn’t always direct. Sometimes, it’s strategically shaped.
Further Reading
Prompt Shaping: Measuring the Impact of Prompt Modifiers on Output Size and Format
Prompt Compression Techniques for LLMs, Arora et al., 2023
Special thanks to OpenRouter.ai for enabling wide-scale experimentation with multiple model APIs.
Written by Alex Alexapolsky
Ukrainian Python dev in Montenegro. https://www.linkedin.com/in/alexey-a-181a614/