
Want Better Results from ChatGPT (or Any LLM)? Try Contextual Prompts

Most of us have had this experience: you ask ChatGPT (or any large language model) a question, and the answer comes back flat. You think to yourself, “Really? That’s it?”

Instead of moving forward with useful insights, you spend more time re-asking the question, tweaking your words, or trying again from a different angle. Frustrating, right?

When I first started experimenting with how I worded my prompts, the results improved. But I’ve since learned there’s an even better way: Contextual Prompts.


Why Context Matters

The results of contextual prompting are striking: sharper, more accurate, and highly usable answers. And it makes sense.

Think about onboarding a new hire. If you gave them a task with no background, would you expect a great result? Of course not. You’d provide context, guidance, and examples. AI works the same way.

Yes, it takes more effort upfront. But like any skill, you get faster with practice — and the quality of the output makes it worth it.


From Stanford: Contextual Engineering

This approach has been articulated well by Jeremy Utley, Adjunct Professor at Stanford and co-author of Ideaflow. He calls it Contextual Engineering: treating AI like a teammate, not just a tool. Instead of short, search-like prompts, Utley recommends giving detailed background (brand, objectives, audience, tone, constraints). In his words:

“I would recommend never prompting a model without at least 400 words of context.”

Utley’s research shows that when you treat AI like a new colleague (giving it direction, assigning roles, and asking it to explain its reasoning), the quality of the results improves dramatically.


Utley’s Key Principles

Provide Rich Context (≥400 words)

  • Most people use prompts like a Google search: 5–10 words.

  • Utley recommends giving detailed background (brand, objectives, audience, constraints); see the sketch after this list.

  • “I would recommend never prompting a model without at least 400 words of context.” – Capgemini
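
The same principle applies if your team works with an LLM through an API rather than the chat window. Below is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and the brief itself are placeholder assumptions, and a real brief should run 400+ words.

```python
# Minimal sketch: a search-style prompt vs. the same request preceded by
# a rich contextual brief. Assumes the OpenAI Python SDK; the model name
# and brief text are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice this brief should run 400+ words: brand, objectives,
# audience, tone, constraints, and examples of "good" output.
CONTEXT_BRIEF = """
Company: a growth-stage medtech firm preparing a payer-access strategy.
Objective: a one-page summary an executive can read in two minutes.
Audience: regional payer executives; tone: direct, evidence-led.
Constraints: no pricing specifics; state assumptions explicitly.
"""

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your team has approved
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Search-style prompt: typically produces a flat, generic answer.
short_answer = ask("Write a payer-access summary.")

# Contextual prompt: the same task, preceded by the full brief.
contextual_answer = ask(CONTEXT_BRIEF + "\n\nTask: Write the payer-access summary.")
```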


Treat AI as a Teammate, Not a Tool

  • Anthropomorphizing AI improves collaboration and output quality.

  • “If you want to work with an LLM, treat it like you would treat a new team member.” – Capgemini

  • “Most surprising discovery… LLMs respond to human-like treatment in remarkably human-like ways.” – Jeremy Utley


Use Chain of Thought & Clarifying Questions

  • Encourage step-by-step reasoning: “Walk me through your thought process.”

  • Instruct the AI to ask clarifying questions before answering — shifting from directive to collaborative dialogue.

Set Explicit Roles & Personas

  • Define the perspective: e.g., “You are a McKinsey partner in medtech commercialization.”

  • Personas anchor the model’s framing and style.
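
The persona and chain-of-thought principles can be combined in a single system message. Here is a minimal sketch, again assuming the OpenAI Python SDK; the persona wording, question, and model name are illustrative only.

```python
# Sketch: set an explicit persona, then ask the model to reason step by
# step and raise clarifying questions before committing to an answer.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a McKinsey-style partner specializing in medtech "
    "commercialization. Before answering, walk me through your thought "
    "process step by step, and ask any clarifying questions you need "
    "answered before making a recommendation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "How should we sequence our payer-access pilots next quarter?"},
    ],
)
print(response.choices[0].message.content)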

Feedback Loops

  • Share edits and instruct the AI to learn from them (a code sketch follows this list).

  • “Here’s how I modified your output. What can you learn from these changes?”
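
In code, a feedback loop is simply a follow-up turn that includes your edited version alongside the model’s original draft. A minimal sketch, assuming the OpenAI Python SDK; the draft and edit are placeholders you would fill in from your own work.

```python
# Sketch of a feedback loop: send the model's draft and your edited
# version back, and ask what it should carry forward next time.
from openai import OpenAI

client = OpenAI()

draft = "..."   # the model's original output (placeholder)
edited = "..."  # your revised version of that output (placeholder)

feedback_prompt = (
    "Here is your original draft:\n\n" + draft +
    "\n\nHere is how I modified your output:\n\n" + edited +
    "\n\nWhat can you learn from these changes? List the patterns you "
    "will apply to future drafts for this project."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": feedback_prompt}],
)
print(response.choices[0].message.content)
```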

Cross-Model Collaboration

  • Utley often dictates to one model (ChatGPT), then feeds it into another (Claude, Gemini, Perplexity) for critique and iteration.

  • This multi-model interplay improves rigor and reduces blind spots.
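
For teams working through APIs, the same interplay can be scripted: one model drafts, a second critiques. A minimal sketch assuming both the OpenAI and Anthropic Python SDKs with keys in the environment; the model names and brief are placeholders.

```python
# Sketch of cross-model collaboration: one model drafts, another critiques.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

brief = "..."  # the same 400+ word contextual brief described above (placeholder)

# Step 1: draft with one model.
draft = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": brief + "\n\nTask: draft the one-page summary."}],
).choices[0].message.content

# Step 2: have a second model critique the draft against the same brief.
critique = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": brief + "\n\nCritique this draft. Flag weak arguments, "
                   "missing context, and unsupported claims:\n\n" + draft,
    }],
).content[0].text

print(critique)
```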


Prompt Engineering vs. Contextual Engineering

Dimension | Prompt Engineering | Contextual Engineering (Utley)
Context Depth | Minimal (few words) | Rich (≥400 words: brand, objectives, nuance)
AI Framing | Tool-like | Teammate/persona-driven
Thinking Process | Single-turn, shallow | Chain of Thought + clarifying questions
Feedback Integration | Rare | Explicit feedback loops
Multi-model Collaboration | Seldom | Cross-model critique (ChatGPT ↔ Claude, etc.)

Executive Actions & Recommendations


Pilot a Contextual Engineering Sprint

  • Pick a high-stakes deliverable (e.g., payer-access slide deck).

  • Draft a 400–600-word brief (objectives, audience, constraints).

  • Have one model generate, another critique, and iterate.

Embed Persona-Driven Prompts in Workflows

  • Standardize prompts: “You are a regulatory strategist with 15 years in oncology.”

  • Store reusable prompt templates in a knowledge hub for your team.
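
One lightweight way to standardize this is a small, shared library of persona templates that team members fill in per task. A minimal sketch; the personas, fields, and wording below are illustrative assumptions, not a prescribed format.

```python
# Sketch of a reusable prompt-template store: shared personas with
# fill-in-the-blank fields for objective, audience, and constraints.
PROMPT_TEMPLATES = {
    "regulatory_strategist": (
        "You are a regulatory strategist with 15 years in oncology. "
        "Objective: {objective}. Audience: {audience}. Constraints: {constraints}. "
        "Ask clarifying questions before answering, and explain your reasoning."
    ),
    "payer_access_partner": (
        "You are a market-access lead advising a medtech company. "
        "Objective: {objective}. Audience: {audience}. Constraints: {constraints}."
    ),
}

# Fill a template for a specific task before sending it to the model.
prompt = PROMPT_TEMPLATES["regulatory_strategist"].format(
    objective="outline regulatory considerations for our oncology submission",
    audience="the executive team, non-specialist",
    constraints="two pages maximum; state assumptions explicitly",
)
print(prompt)
```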

Mandate Chain-of-Thought

  • Require models to explain their reasoning, not just give answers.

  • Always ask: “What assumptions are you making?”

Establish Feedback Loops

  • After revising AI output, feed changes back and ask: “How would you adapt next time?”

  • Build organizational memory with iterative refinement.

Train Teams

  • Run workshops showing side-by-side results of short prompts vs. contextual prompts.

  • Track how quality improves when context is added.

Measure Impact

  • KPIs: time saved, revision rates, peer-review quality scores, adoption across teams.

  • Use these as performance indicators for AI-enabled workflows.


The Payoff

By applying Contextual Engineering, you transform AI from a basic Q&A machine into a collaborative partner. The more context you provide, the more decision-ready the outputs become.


For business leaders, that means sharper decks, clearer strategy briefs, stronger regulatory submissions, and faster execution.

Utley’s framework reinforces what many of us have experienced firsthand: AI is at its best when we give it the kind of direction we’d give a junior colleague. The upfront effort pays back in speed, quality, and confidence in the results.


Final Thought

Treat AI like an eager assistant you’re coaching. Provide it with context, give it a role, ask for reasoning, and close the loop with feedback. The extra work up front turns into sharper, faster, more actionable insights.


Or, as Jeremy Utley puts it, think of this as moving beyond “prompt engineering” into contextual engineering, where AI is not just a tool, but a true teammate.

 
 
 
