Prompt Engineering for the Endodontic Resident

By Dr. Tung Bui

How smarter prompts lead to smarter practice and learning

Introduction

Prompt engineering, the practice of crafting purposeful and structured inputs for large language models (LLMs), is fast becoming a key skill for the endodontic resident. AI tools like ChatGPT, Claude, and OpenEvidence can act as co-pilots, but to make the most of them, residents must learn how to frame prompts effectively. Like setting the stage for an actor, prompting AI is about defining the role, context, task, and tone to receive clinically relevant, safe, and useful responses.

Think Like a Director: Framing the Scene

Effective prompts help LLMs perform more like endodontic specialists, and one of the most practical strategies for guiding an AI response is the “role-task-tone” approach. For example, a user might say, “You are a board-certified endodontist,” to assign the role, followed by, “Evaluate this patient’s lingering pain post-retreatment,” as the task, and add, “Explain in a compassionate, patient-friendly way,” to set the tone. Output quality improves when both the role and task are explicitly stated: an LLM pre-framed as a subject-matter expert generates more relevant and structured responses than one given an open-ended query.
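For residents comfortable with light scripting, the role-task-tone structure maps directly onto the message format most LLM APIs expose. The sketch below uses OpenAI’s published Python client; the model name is a placeholder, and the prompt text simply restates the example above. As discussed later, no identifiable patient data should ever be sent to a publicly hosted service.

```python
# A minimal sketch of the role-task-tone pattern using OpenAI's Python
# client (pip install openai). The model name is a placeholder; use
# whichever model your program has approved. Requires OPENAI_API_KEY
# in the environment. Do not include identifiable patient data.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Role: pre-frame the model as a subject-matter expert.
        {"role": "system",
         "content": "You are a board-certified endodontist."},
        # Task and tone: the clinical question plus the desired register.
        {"role": "user",
         "content": ("Evaluate this patient's lingering pain after "
                     "nonsurgical retreatment. Explain in a compassionate, "
                     "patient-friendly way.")},
    ],
)

print(response.choices[0].message.content)
```

The system message carries the role, while the user message carries the task and tone; a chat interface applies the same division of labor behind the scenes.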

Clinical Use Cases: Efficiency Meets Precision

Endodontic residents can apply prompt engineering to enhance various aspects of clinical training. By crafting effective prompts, they can improve clinical decision-making by eliciting more precise differential diagnoses, treatment planning suggestions, and evidence-based recommendations from AI tools. Prompt engineering also streamlines documentation by generating well-structured clinical notes, patient education materials, and referral letters with greater speed and consistency. Additionally, it can support the analysis of complex cases by guiding AI to synthesize literature, compare treatment options, and simulate expert reasoning, ultimately serving as a valuable adjunct to both education and practice.

Diagnostic Support

Prompt: “You are an endodontist. List differential diagnoses for persistent pain following nonsurgical retreatment of a maxillary premolar. Include next diagnostic steps. Provide references from peer-reviewed studies.”

This prompt structure enables AI to simulate clinical reasoning, often mirroring diagnostic pathways that align with textbook or evidence-based protocols. While AI lacks clinical intuition, its ability to summarize diagnostic options aligns with recent findings that LLMs can support, but not replace, clinical judgment.

CBCT Justification

Prompt: “Provide evidence-based reasons for using CBCT in a suspected vertical root fracture of a previously treated molar.”

Using AI for evidence synthesis has already proven valuable. Tools like OpenEvidence can cite AAE guidelines and systematic reviews in seconds, reducing cognitive and time burden while supporting evidence-based dentistry.

Uncommon Clinical Conditions

Prompt: “Act as an oral medicine expert. Provide causes and treatment options for burning mouth syndrome. Include when to refer.”

This is particularly helpful for residents encountering orofacial pain that falls outside the pulp-periodontal spectrum. LLMs can highlight medical-dental overlaps but must be supervised to avoid misapplication of outdated or oversimplified data.

Academic Applications

Beyond the operatory, the same prompting skills support the academic side of residency: synthesizing literature, preparing for board examinations, and communicating with patients in plain language.

Literature Summarization

Prompt: “Summarize the main findings from recent articles in the Journal of Endodontics on regenerative endodontics. Include citations.”

AI can be a rapid literature assistant. However, clinicians should verify every citation, since LLMs are known to fabricate plausible-looking references to “please” the user.

Board Preparation

Prompt: “Create a board-style clinical case question about root perforation management. Include four answer choices and rationale.”

LLMs are helpful as supplementary tools for test preparation; they should be used to reinforce, not replace, structured board review.

Patient Communication

Prompt: “Explain to a patient why a root canal may need to be redone. Keep it under 100 words and written at a sixth-grade reading level.”

Prompt engineering makes it easy to generate plain-language explanations, a skill that is vital for improving patient understanding and satisfaction.

Tools of the Trade

  • ChatGPT: General-purpose LLM with high-quality prose and reasoning capabilities.
  • Claude: An LLM strong at long-context memory and nuanced language.
  • Perplexity AI: Web-augmented model with reliable citation links.
  • OpenEvidence: Medical-specific LLM offering literature-based, structured answers with references.

Using multiple tools broadens the evidence base and enables cross-validation of facts, which enhances the reliability and depth of clinical or academic conclusions. Different AI platforms and databases often draw from unique datasets, algorithms, and reasoning strategies. By consulting more than one tool, users can compare outputs, identify discrepancies, and converge on well-supported answers rather than relying on a single source that may have limitations or biases. This approach not only strengthens critical thinking but also mirrors the multidisciplinary process of peer review and collaborative diagnosis, fostering a more rigorous and nuanced understanding of complex issues.
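As one illustration of this cross-checking habit, the sketch below sends a single prompt to two providers and prints both replies for side-by-side review. It assumes the openai and anthropic Python packages with valid API keys; the model names are placeholders, and any citations in the replies still need verification against the primary literature.

```python
# A sketch of cross-checking one prompt across two providers, assuming
# the openai and anthropic packages (pip install openai anthropic) and
# API keys in OPENAI_API_KEY / ANTHROPIC_API_KEY. Model names are
# placeholders; verify all citations against the primary literature.
from openai import OpenAI
import anthropic

PROMPT = ("You are an endodontist. Give evidence-based reasons for using "
          "CBCT in a suspected vertical root fracture of a previously "
          "treated molar.")

gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# Print both replies side by side so discrepancies (differing claims,
# differing citations) stand out for the clinician to resolve.
for name, reply in (("ChatGPT", gpt_reply), ("Claude", claude_reply)):
    print(f"--- {name} ---\n{reply}\n")
```

The same comparison can of course be done by hand in two browser tabs; the point is the habit of reading divergent answers against each other rather than trusting either one alone.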

Ethics and Oversight

While AI output can be fast and impressive, residents must approach it with caution and responsibility. It’s essential to fact-check everything, as large language models (LLMs) are known to “hallucinate” sources or generate plausible-sounding but incorrect information. Protecting patient privacy is also critical; identifiable data should never be entered into publicly hosted tools. Clinicians should guard against automation bias, where a confident AI-generated answer may unduly influence decision-making; clinical judgment should always take precedence.

Conclusion

Prompt engineering is a powerful skill for the endodontic resident. It allows AI to be used not just as a novelty, but as a functional tool: supporting clinical decisions, enhancing writing, and accelerating learning. Like hand skills, good prompts improve with deliberate practice. By learning how to “direct the scene,” residents can harness AI’s potential while maintaining professional standards and clinical oversight.

Dr. Tung Bui is a board-certified endodontist with Specialized Dental Partners, practicing in Tucson, Arizona. He also serves as a clinical endodontic instructor and lecturer with the NYU Langone AEGD Tucson program and the Spartanburg Regional Healthcare System AEGD program. When not extending the life of teeth, he is sourcing and roasting exquisitely rare third-wave coffees and pursuing outdoor adventures. As a futurist investor, he devotes his time to exploring emerging and disruptive technologies. He currently chairs the AAE Connection Committee. Disclosure: The author has no financial interests, and the opinions expressed are solely his and not those of the AAE. AI tools were used for editing. You can contact Dr. Tung Bui at apexologist@gmail.com.