The key to unlocking the full potential of LLMs in coding lies in crafting precise prompts. The main challenge is learning how to structure prompts effectively to guide the model toward accurate results.
In this card, I explore a prompting technique I’ve found useful for making edits across multiple files.
Prompt Structure
The following structure provides a clear, repeatable pattern that helps the LLM understand and accurately describe code modifications. By maintaining a structured format, we minimize ambiguity and ensure consistency across multiple prompts. Here, the CUD action is one of create, update, or delete.
<CUD action> <location>:
  <action>:
    <detail 1>;
    <detail 2>;
You can even chain multiple prompts for different locations using this pattern.
Example
Create notification.resolver.ts:
  It returns the state$ from the notification service:
    Use an async resolver;
    Return an observable;

Update overview.component.ts:
  Replace injected notification service:
    Use @Input() id: number with component input binding;

Update notification.service.ts:
  Define notification$: BehaviorSubject:
    Store the last fetched notification;
    Update it when a new notification is fetched;
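To make the intent concrete, here is a minimal sketch of what the third prompt might produce. Only the notification$ BehaviorSubject comes from the prompt itself; the Notification interface and the fetch logic are assumptions added for illustration:

```typescript
// notification.service.ts (sketch of the result of the third prompt)
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

// Hypothetical shape of a notification; not part of the original example.
export interface Notification {
  id: number;
  message: string;
}

@Injectable({ providedIn: 'root' })
export class NotificationService {
  // Stores the last fetched notification, as requested in the prompt.
  readonly notification$ = new BehaviorSubject<Notification | null>(null);

  async fetchNotification(id: number): Promise<void> {
    // A real implementation would call the backend here; the relevant part
    // is that every newly fetched notification is pushed into notification$.
    const fetched: Notification = { id, message: `Notification ${id}` };
    this.notification$.next(fetched);
  }
}
```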
Using Efficient Communication
To improve clarity and accuracy, use precise, information-dense keywords in your prompts. LLMs excel at pattern recognition, and specific terminology reduces ambiguity. Beyond that, giving examples of the high-level API of your imagined class design, or using a test as a specification, can also be highly meaningful input that tells the LLM what to achieve.
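For instance, a short unit test can act as a compact specification that you paste into the prompt. The following Jasmine-style sketch assumes the hypothetical NotificationService from the sketch above:

```typescript
// A test used as a specification in a prompt. The NotificationService API
// (async fetchNotification, notification$ BehaviorSubject) is an assumption.
import { NotificationService } from './notification.service';

it('exposes the last fetched notification on notification$', async () => {
  const service = new NotificationService();

  await service.fetchNotification(42);

  expect(service.notification$.value?.id).toBe(42);
});
```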
Information-Dense Keywords
- Actions: create, update, delete, edit, mirror, move, replace, refactor, add, ...
- Programming Concepts: method, function, string, number, object, class, module, interface, ...
By consistently using these keywords, you help the LLM understand intent and desired actions with minimal room for misinterpretation.
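For example, a keyword-dense instruction leaves far less room for guessing than a vague one (the file and method names below are made up for illustration):

  Vague: Please clean up the notification code a bit.
  Precise: Refactor notification.service.ts: extract a method mapToViewModel from fetchNotification;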
The Importance of Context Management
Beyond using precise prompts, context management is equally important to ensure that the LLM has all necessary information without needing to infer missing details. If an LLM lacks context or is overwhelmed by it, it may misinterpret instructions or produce inconsistent results. Thus, to make this technique work, you have to ensure that the LLM sees only the files relevant to the current edits.
More importantly, this approach helps you understand which code files the LLM needs as context to complete the task. As you develop a habit of planning necessary edits in advance, you’ll be able to craft more efficient prompts.
Why This Works
This structured approach enables LLMs to recognize patterns, significantly improving the accuracy of their output. By using precise language and managing context effectively, we minimize the likelihood of incorrect guesses.
Getting Started
To get started, I recommend using this pattern even more often than strictly necessary. With experience, it can fully replace manual edits and serve as a solid foundation for correctly specifying requirements and plans in more advanced AI-based coding workflows.
Limitations & Best Practices
This method is particularly useful for repetitive changes, such as:
- Replacing strings
- Refactoring code
- Making consistent edits across files
While this technique greatly improves accuracy, occasional flaws in the generated edits are still possible—often due to missing details that force the LLM to guess.
A good practice is to review the first output and assess its accuracy (aim for at least 80-90% correctness). If it falls short, refining the prompt is usually more efficient than manually fixing incorrect output, which sometimes introduces even more issues.