Using Low-Level Prompts for High-Accuracy AI Coding

The key to unlocking the full potential of LLMs in coding lies in crafting precise prompts. The main challenge is learning how to structure prompts so that they reliably guide the model toward accurate results. As evidence of what is possible, Aider already writes ~70% of its own code (as of 02/2025). When starting out, however, your results may fall short of efficiently generating large portions of your code with the assistant's help.

In this card, I explore a prompting technique I’ve found useful for making edits across multiple files. Along the way, I will share practical insights on how to apply this method consistently.

Prompt Structure

The following structure provides a clear, repeatable pattern for describing code modifications that the LLM can understand and apply accurately. By maintaining a structured format, we minimize ambiguity and ensure consistency across multiple prompts.

<CUD action> <location>:  
  <action>:  
    <detail 1>;  
    <detail 2>;

You can even chain multiple prompts for different locations using this pattern.

Rule of Thumb: Focus on What You Want to Achieve Rather Than How to Achieve It

LLMs excel at figuring out the details when the end goal is clearly described and sufficient context is provided.

Example

Create notification.resolver.ts:  
  Return the state$ property from the notification service:  
    Use an async resolver;  

Update overview.component.ts:  
  Replace injected notification service:  
    Use @Input() id: number with component input binding;  

Update notification.service.ts:  
  Define notification$: BehaviorSubject:  
    Store the last fetched notification;  
    Update notification$ when a new notification is fetched;  
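
To make the pattern concrete, here is roughly what a result for the first prompt could look like. This is only a sketch: the resolver name, the return type and the import paths are assumptions, not part of the prompt above.

// notification.resolver.ts (sketch of a plausible result)
import { inject } from '@angular/core';
import { ResolveFn } from '@angular/router';
import { firstValueFrom } from 'rxjs';
import { NotificationService } from './notification.service';

export const notificationResolver: ResolveFn<unknown> = async () => {
  // "Use an async resolver": resolve with the current value of state$
  return firstValueFrom(inject(NotificationService).state$);
};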

Using Efficient Communication

To improve clarity and accuracy, use precise, information-dense keywords in your prompts. LLMs excel at pattern recognition, and specific terminology reduces ambiguity. Beyond that, giving examples of the high-level API of your imagined class design, or using a test as a specification, can also be highly meaningful input that tells the LLM what to achieve.
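
For example, a short test sketched as a specification can be a very dense way to state the goal. The service API used here (fetchNotification and its behavior) is a hypothetical illustration, not an existing interface:

// notification.service.spec.ts (hypothetical test used as a specification)
import { TestBed } from '@angular/core/testing';
import { NotificationService } from './notification.service';

it('stores the last fetched notification in notification$', async () => {
  const service = TestBed.inject(NotificationService);

  await service.fetchNotification(42);

  expect(service.notification$.getValue()).toEqual(jasmine.objectContaining({ id: 42 }));
});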

Be Specific and Use Domain Language to Achieve Accurate Results

In general, the more colloquial your communication, the higher the chances of misinterpretation by the LLM.

Providing actual references to named classes, variables or methods also gives the coding assistant the key details it needs to find out where changes should be applied. The same applies to providing expected types, for example a function signature or a return value.
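
For instance, naming the target method and spelling out its signature leaves little room for guessing (the method name below is only illustrative):

Update notification.service.ts:
  Add fetchNotification(id: number): Promise<Notification>:
    Update notification$ with the fetched value;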

Information-Dense Keywords

  • Actions: create, update, delete, edit, mirror, move, replace, refactor, add, ...
  • Programming Concepts: method, function, string, number, object, class, module, interface, ...

By consistently using these keywords, you help the LLM understand intent and desired actions with minimal room for misinterpretation. For example, the move and replace keywords are spot on for communicating that specific parts of your code should be moved or replaced.
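
As an illustration (the file, method and constant names are made up):

Update overview.component.ts:
  Move the buildNotificationTitle method into notification.service.ts;
  Replace the hard-coded 'Unread' string with the UNREAD_LABEL constant;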

Use Mirror to Tap Into Reusable Patterns

The mirror keyword is powerful for replicating patterns from existing files or from given examples.
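
For example (again with hypothetical file names), you can point the assistant at an existing implementation whose structure it should copy:

Create comment.service.ts:
  Mirror the fetching and caching pattern from notification.service.ts:
    Use comment$ instead of notification$;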

The Importance of Context Management

Beyond using precise prompts, context management is equally important to ensure that the LLM has all necessary information without needing to infer missing details. If an LLM lacks context or is overwhelmed by it, it may misinterpret instructions or produce inconsistent results. Thus, to make this technique work, you have to ensure your LLM only sees the relevant files it needs for the current edits.
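
With Aider, for example, this boils down to adding only the files the prompt touches before sending it (the file names are placeholders; check the commands against your version):

/add notification.service.ts overview.component.ts
/drop unrelated-legacy.component.ts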

More importantly, this approach will help you understand the context of the code files the LLM needs to complete the task. As you develop a habit of planning necessary edits in advance, you’ll be able to craft more efficient prompts.

Beware of Your Current Chat Window

To prevent overloading the LLM, consider clearing the chat periodically. Keeping the chat history can still be beneficial when it provides useful context, for example for prompt chaining, planning, refining prompts, or providing resources.
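
In Aider, for instance, clearing looks like this; it drops the conversation history while keeping the added files in the context (verify against your tool and version):

/clear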

Why This Works

This structured approach enables LLMs to recognize patterns, significantly improving the accuracy of their output. By using precise language and managing context effectively, we minimize the likelihood of incorrect guesses.

Getting Started

To get started, I recommend using this pattern more frequently than necessary. With experience, it can fully replace manual edits and serve as a solid foundation for correctly specifying requirements and plans in more advanced AI-based coding workflows (like separating code reasoning and editing).

Choosing the Right Model

Choosing the right coding model is crucial, as not all excel at code editing. The aider polyglot leaderboard tests models across multiple languages on challenging problems, ensuring reliable evaluation. By focusing on real-world editing skills, it helps you pick models that improve efficiency and reduce errors.

For example (as of 02/2025), OpenAI’s o1 (high reasoning) offers top performance, while o3-mini (medium reasoning) provides strong results at a lower cost. However, due to its significantly higher cost—$186.5 compared to $18.16 for o3-mini—o1 is best suited for highly analytical and complex problems.

Limitations & Best Practices

This method is particularly useful for repetitive changes, such as:

  • Replacing complex string patterns
  • Refactoring code
  • Making consistent edits across files

While this technique greatly improves accuracy, occasional flaws in the generated edits are still possible—often due to missing details that force the LLM to guess. A good practice is to review the first output and assess its accuracy (aim for at least 80-90% correctness). If it falls short, refining the prompt is usually more efficient than manually fixing incorrect output, which sometimes introduces even more issues.

Push the Limits

You should actively strive to push the limits of the LLM's capabilities with the given principles to achieve the best possible results.

While assessing the overall initial accuracy, it is also good practice to reflect on which parts of your prompt were on point and where you may have missed crucial details for your LLM assistant. This will help you identify effective communication patterns for future prompts.

Reflect on Your Prompt in the First Review

You should build the habit of reviewing the output before accepting it.

Frankly, blindly accepting an unreviewed output is the biggest bottleneck in AI-assisted coding development. You may suddenly find yourself hunting for bugs without knowing what changed, or worse, mistakenly accepting changes that are unacceptable—or even dangerous—potentially introducing security flaws. This also applies to not fully understanding the output. You must keep up with the assistant and develop your own understanding alongside it.

Always Test the Waters Before Jumping In

Blindly accepting unreviewed output is one of the biggest bottlenecks in AI-assisted coding development.

Source code in this card is licensed under the MIT License.
Posted by Felix Eschey to makandra dev (2025-02-20 10:47)