Curated prompts for writing, analysis, coding, research, and agents. Copy what you need on the free tier, or unlock the full library with Learn Pro.
How to use these prompts
Fill in the placeholder fields with your own content. Copy the prompt and paste it into any AI chat tool. Tweak the wording as you go; these are starting points, not fixed scripts.
Clarify the tone of a draft
Get a read on whether your writing matches the tone you are going for.
tone · editing
Read the draft below and tell me what tone it currently strikes (e.g. formal, casual, urgent, detached). Then tell me which three words best describe the voice. If it does not match [the tone I am going for], suggest specific edits to three sentences that would shift it.
Draft:
Turn bullet notes into a first draft
Bridge the gap between raw thinking and a usable paragraph.
drafting · outlines
Turn these bullet points into a first draft of [format, e.g. a blog post]. Keep my voice, do not add claims I did not make, and flag anything where you had to invent a connection.
Bullets:
Rewrite for a shorter word count
Cut length without losing the point.
editing · concision
Rewrite the text below so it is under [word count] words. Preserve the main argument and any specific numbers, names, or examples. Cut hedging, repetition, and filler. Show me the cut version and list the three biggest things you removed.
Text:
Explain something to a non-expert
Make technical writing accessible without dumbing it down.
clarity · accessibility
Rewrite the passage below so a [target reader, e.g. smart non-expert] can understand it. Keep the accuracy, replace jargon with plain language on first use, and use one concrete analogy. Do not add information that is not already in the passage.
Passage:
Generate headline options
Get a range of headlines rather than one mediocre compromise.
headlines · ideation
Based on the draft below, give me 10 headline options across these styles: 3 descriptive (tells you what it is about), 3 curiosity-driven (makes you want to click), 2 provocative (takes a stance), 2 benefit-led (promises what you get). Keep each under 12 words.
Draft:
Outline a long-form article
Scaffold before you write — much easier than restructuring later.
outlines · structure
I want to write an article about [topic] for [audience]. The core argument is [core argument]. Give me a section-by-section outline with a working heading, the main point of each section, and one example or piece of evidence I should find. Flag any sections where the argument is weakest.
Edit for active voice
Tighten prose by converting passive constructions where it helps.
editing · grammar
Review the text below and identify every passive voice construction. For each one, decide whether active voice would be clearer (sometimes passive is correct — e.g. when the actor is unknown or unimportant). Show me a before/after for the ones worth changing, and leave the rest.
Text:
Adapt copy for a different channel
Same message, different format — without starting from scratch.
repurposing · channels
Take the content below, originally written for [original channel], and adapt it for [new channel]. Adjust length, tone, and structure to fit the channel conventions. Keep the core message and any specific details.
Content:
Summarise feedback into concrete edits
Turn vague feedback into a clear revision list.
revision · feedback
I got the following feedback on my draft. Help me turn it into a specific list of edits I can action. For each piece of feedback, tell me: (1) what change to make, (2) where in the draft, (3) whether I should push back or accept. Flag any feedback that contradicts other feedback.
Draft:
Feedback:
Draft a product update
Announce a change to users in a way that is clear and not hypey.
announcements · product
Write a short product update announcing [the change]. Audience is [audience]. Length: [length]. Tone: honest and specific. Cover what changed, why, and what (if anything) the user needs to do. No phrases like 'excited to announce' or 'game-changing'.
Craft a narrative arc for long-form content
Give a long piece shape so it reads as a story, not a list of points.
narrative · structure
I am writing a [format] on [topic]. The key points I want to cover are [key points]. Help me find a narrative arc: a hook, a tension or question driving the piece, the turning point, and the payoff. Suggest where each of my key points fits in that arc, and what I might cut if it does not serve the arc.
Structure a counterargument that stays collegial
Disagree in writing without burning bridges.
disagreement · communication
I want to push back on the following argument without being dismissive or escalating. Help me draft a response that (1) acknowledges what is valid in their position, (2) names the specific point I disagree with, (3) gives my reasoning with evidence, (4) leaves room for them to update without losing face. Keep it under [length].
Their argument:
My position:
Compare two options with pros and cons
Get past gut feel with a structured comparison.
decisions · comparison
I am choosing between [option A] and [option B] for [purpose]. Give me a pros and cons table for each, then identify the three criteria that matter most for my situation and score each option against them. End with your recommendation and what would change your mind.
Extract risks from a document
Surface what could go wrong before someone else does.
risk · review
Read the document below and identify every risk it mentions, implies, or fails to address. Categorise each risk as [category, e.g. financial / operational / reputational], rate its likelihood and impact as low/medium/high, and flag the top three I should address first.
Document:
Build a simple decision matrix
Make a decision defensible by scoring options against criteria.
decisions · frameworks
I need to decide between [options]. The criteria that matter are [criteria]. For each criterion, tell me (1) how to weight it from 1-5 based on my situation ([brief description of my situation]), (2) how each option scores on it (1-5 with reasoning). Produce a weighted total and the final ranking.
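The weighted total this prompt asks for is simple arithmetic, and it helps to know what shape of answer to expect. A minimal sketch (Python; the options, criteria, weights, and scores are all invented for illustration):

```python
# Weighted decision matrix: weights (1-5) per criterion, scores (1-5)
# per option. All values here are made up for illustration.
weights = {"cost": 5, "speed": 3, "support": 2}
scores = {
    "Option A": {"cost": 4, "speed": 2, "support": 5},
    "Option B": {"cost": 2, "speed": 5, "support": 3},
}

def weighted_total(option_scores, weights):
    """Sum of score x weight across all criteria."""
    return sum(option_scores[criterion] * weight
               for criterion, weight in weights.items())

# Rank options by weighted total, highest first.
ranking = sorted(scores, key=lambda o: weighted_total(scores[o], weights),
                 reverse=True)
for option in ranking:
    print(option, weighted_total(scores[option], weights))
```

A good response to the prompt follows the same logic: per-criterion weights with reasoning, per-option scores, and a total that makes the final ranking auditable rather than asserted.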
Summarise metrics for leadership
Turn raw numbers into a story executives will actually read.
reporting · metrics
Here are this period's metrics: [metrics]. Write a 5-bullet summary for a leadership audience. First bullet is the headline (what is the one thing they need to know). Next three are the most important movements and what drove them. Last is what this means for next period. No jargon, no hedging, no 'excited to report'.
Spot gaps in an argument
Pressure-test your reasoning before someone else does.
critique · reasoning
Here is an argument I am making: [argument]. Play the role of a thoughtful skeptic. What assumptions am I making that I have not justified? What counter-evidence might exist? What alternative explanations could fit the same facts? Rank the three most serious gaps and how I could address them.
Turn raw data into takeaways
Get from "here is the data" to "here is what matters".
data · insights
Below is raw data from [source]. Identify the three to five most important patterns or anomalies. For each, tell me (1) what the pattern is, (2) why it matters, (3) what action it suggests or what further investigation it calls for. Flag anything that looks surprising or counter to expectations.
Data:
Stress-test an assumption
Find out where a belief breaks before reality does.
assumptions · reasoning
I am operating on the assumption that [assumption]. Challenge it systematically: (1) what evidence supports it, (2) what evidence contradicts or complicates it, (3) under what conditions would it no longer hold, (4) what is the cost of being wrong, (5) what would I need to observe to update.
Draft questions before a review
Go into a meeting with sharper questions than "any thoughts?".
meetings · review
I am going into a [meeting type] for [purpose]. Here is the material being reviewed: [material]. Generate 10 specific questions to ask, grouped by: (1) clarifying questions to understand the work, (2) probing questions to stress-test the thinking, (3) forward-looking questions about implications. Skip generic questions.
Prioritise issues by impact
Cut through a long list and decide what to do first.
prioritisation · triage
Here is a list of issues / bugs / problems: [list]. For each, estimate impact (how many people affected, how severe) and effort (hours, complexity). Sort into four buckets: quick wins, big bets, low priority, avoid. Recommend the top three to tackle first and why.
Explain variance from plan
Investigate why actuals diverged from the plan.
variance · reporting
Planned: [plan]. Actual: [actuals]. Walk through the variance: which items are materially off plan, and for each, generate three plausible explanations ranked by likelihood. Note which would be easy vs hard to verify. Flag any variance that might mask a compounding issue.
Diagnose a team performance drop
Separate real signal from noise when metrics slip.
diagnostics · metrics
A team metric ([metric]) has dropped from [previous value] to [current value] over [time period]. Help me think through root causes. Generate a diagnostic tree: possible causes grouped by [grouping, e.g. people / process / tooling / external]. For each leaf, suggest one piece of evidence that would confirm or rule it out. Highlight the two cheapest to investigate.
Evaluate a proposal against strategy
Check whether a shiny idea actually fits the plan.
strategy · evaluation
Our strategy is [strategy]. Someone has proposed [proposal]. Evaluate: (1) does this serve our stated priorities, (2) what would it trade off, (3) what assumptions does it make about our resources or market, (4) is the timing right. Give a clear verdict and the one question whose answer would change it.
Explain a bug from a stack trace
Get a second opinion on what the error is actually telling you.
debugging · errors
Here is a stack trace and the relevant code. Walk me through: (1) what the error actually means, (2) the most likely cause given this code, (3) two less likely but plausible causes, (4) the smallest change that would verify the diagnosis. Do not suggest a fix yet.
Stack trace:
Code:
Suggest tests for a function
Make sure you cover the cases you would otherwise forget.
testing · coverage
Here is a function. Suggest a test suite covering: happy path, edge cases (empty input, boundary values, nulls), error paths (invalid input, downstream failures), and any concurrency or ordering concerns. For each test, give a descriptive name that reads like a specification and a one-line note on what it proves.
Function:
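To calibrate what a good answer to this prompt looks like, here is a sketch of such a suite (Python, plain assert-style tests; `slugify` is a hypothetical function invented for illustration, not part of any real codebase):

```python
def slugify(text: str) -> str:
    """Hypothetical function under test: lowercase, hyphen-separated."""
    if not isinstance(text, str):
        raise TypeError("slugify expects a string")
    return "-".join(text.lower().split())

# Test names read like a specification of the behaviour they prove.
def test_happy_path_converts_spaces_to_hyphens():
    # Proves: basic input produces a lowercase, hyphenated slug.
    assert slugify("Hello World") == "hello-world"

def test_edge_case_empty_string_returns_empty_slug():
    # Proves: the empty boundary value does not crash or emit junk.
    assert slugify("") == ""

def test_edge_case_collapses_repeated_whitespace():
    # Proves: runs of whitespace collapse to a single separator.
    assert slugify("a   b") == "a-b"

def test_error_path_non_string_input_raises_type_error():
    # Proves: invalid input fails loudly with the documented error.
    try:
        slugify(None)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError")
```

The structure, not the specific function, is the point: one named test per behaviour, covering happy path, boundaries, and error paths.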
Refactor for readability
Clean up code without changing what it does.
refactoring · readability
Refactor the code below for readability without changing its behaviour. Prioritise: clearer names, smaller functions with single responsibilities, removing duplication, and making control flow easier to follow. Show the refactored code and a short list of the specific changes you made and why.
Code:
Review a pull request diff
Get a rigorous first pass before your teammate sees it.
code review · quality
Review this diff as if you were a senior engineer on the team. Call out: correctness issues, potential bugs, security concerns, test gaps, style inconsistencies, and anything that would confuse a future reader. Distinguish blocking comments from nitpicks. End with a verdict: approve, request changes, or comment.
Diff:
Generate a commit message
Write commits that your future self and reviewers will thank you for.
git · commits
Here is the diff of changes I am about to commit. Write a conventional commit message: a type prefix (feat/fix/docs/refactor/test/chore), a concise summary line under 72 characters, and a body paragraph explaining the why (not the what — the diff shows the what). Flag anything that should probably be split into a separate commit.
Diff:
Break a feature into tasks
Turn a vague feature request into a plan you can actually execute.
planning · decomposition
I need to build [feature]. The stack is [stack]. Break this down into a sequence of small PRs, each shippable on its own. For each, give a title, the scope (what's in, what's out), approximate size (XS/S/M/L), and dependencies on previous PRs. Flag anywhere I should de-risk with a spike first.
Document an API endpoint
Produce reference docs a stranger could use without guessing.
documentation · api
Document the following endpoint in a format suitable for [destination, e.g. a README or API reference]. Include: method and path, purpose, auth requirements, request schema with field descriptions, response schema, all possible error responses with causes, and a realistic curl example. Flag anything ambiguous that needs clarification from the implementer.
Endpoint code:
Debug a failing CI step
Get unstuck when the pipeline is red.
ci · debugging
My CI pipeline is failing at [step]. Here is the log output and the config for that step. Walk me through the diagnosis: what the log is telling me, the three most likely causes, what to check first, and a minimal change to test each hypothesis. Call out anything that looks like a flaky test vs a real failure.
Log:
Config:
Propose a minimal fix
Fix the bug without rewriting the module.
bugs · fixes
Here is a bug and the surrounding code. Propose the minimal change that fixes the bug without introducing new risk. Show the exact diff. Then tell me: what is the larger refactor this code needs (if any), and why I should or should not do it in the same PR.
Bug:
Code:
Translate requirements into acceptance criteria
Write criteria so unambiguous that the test cases write themselves.
requirements · specs
Here is a feature requirement: [requirement]. Rewrite it as a set of acceptance criteria in Given/When/Then format. Cover the happy path, the main edge cases, and at least two error conditions. Flag anywhere the requirement is ambiguous and you had to make an assumption.
Assess the cost of a dependency
Before you npm install, decide if you actually should.
dependencies · architecture
I am considering adding [dependency] to solve [problem]. Evaluate: (1) what does it actually do, (2) what are the lighter-weight alternatives (including writing it yourself), (3) maintenance signals (last release, open issues, maintainer activity), (4) security and licence considerations, (5) bundle or build impact. Give a recommendation: add, avoid, or 'build a small version yourself'.
Design the interface before the implementation
Shape the API first so the code that uses it reads well.
api design · architecture
I need to build [component] that does [behaviour]. Before I write the implementation, design its public interface. Give me: function signatures with types, naming conventions, error handling contract, and an example of what calling code looks like. Explain tradeoffs you considered (e.g. one function vs many, sync vs async, options object vs positional args).
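As an illustration of the shape of answer this prompt is after, here is a sketch of an interface designed before its implementation (Python; `RateLimiter` and every name in it are assumptions for illustration, not a real library):

```python
from dataclasses import dataclass

class RateLimitExceeded(Exception):
    """Error-handling contract: callers catch one well-named exception."""

@dataclass
class RateLimiter:
    """Public interface, designed before any implementation exists.

    Tradeoff considered: keyword-only configuration via a dataclass,
    rather than positional args, so call sites stay self-describing.
    """
    max_calls: int
    per_seconds: float

    def acquire(self, cost: int = 1) -> None:
        """Consume `cost` units or raise RateLimitExceeded.

        Implementation deliberately deferred: the interface comes first.
        """
        raise NotImplementedError

# What calling code would look like, written before the implementation:
limiter = RateLimiter(max_calls=100, per_seconds=60.0)
```

Writing the call site first is the test: if `RateLimiter(max_calls=100, per_seconds=60.0)` reads awkwardly, the signature changes now, while it is still cheap.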
Create a research brief
Scope a piece of research before you start collecting.
planning · scoping
I want to research [topic] to answer [question]. Draft a research brief covering: scope (what is in, what is out), key sub-questions to investigate, types of sources to prioritise, what success looks like (when will I know I am done), and the main risks of getting this wrong. Call out anything I should clarify before starting.
Draft interview questions
Get interview questions that surface insight, not just confirmation.
interviews · user research
I am interviewing [person or role] about [topic]. My goal is to learn [goal]. Draft 10 questions, mixing: (1) open-ended context questions, (2) behavioural questions about specific past events, (3) questions that probe for contradictions, (4) one or two hypothesis-testing questions. Avoid leading questions and yes/no traps.
Summarise a source with citations
Produce a clean summary you can actually reference later.
summarisation · sources
Summarise the source below in [length]. Capture: the main argument, the evidence used, the author's stance, and any notable caveats. Use direct quotes only where the exact wording matters, and always with a reference back to the source (page or section). At the end, note any claims that seem underdefended or contradicted by other things you know.
Source:
Compare methodologies
Pick the right approach by understanding what each one gives up.
methodology · comparison
I need to study [subject]. Compare these methodologies: [methodologies]. For each: what it is good at, what it misses, cost and time to run, and the kind of evidence it produces. Recommend one primary and one complementary method for my question.
Extract open questions from notes
Find the gaps in what you know before someone else does.
gaps · synthesis
Read the notes below and extract every open question or unresolved issue, whether it is stated explicitly or implied. For each, note (1) the question, (2) why it matters, (3) who or what could answer it, (4) how urgent it is to resolve. Group related questions together.
Notes:
Build a literature-style outline
Organise what you have read into an argument, not a list.
literature review · synthesis
I have read [sources] on [topic]. Build a literature-style outline that organises these by theme or position, not chronologically. Show: the main debates or tensions, where sources agree, where they disagree, and what no one has addressed. Suggest where my own contribution could fit.
Evaluate source credibility
Decide how much to trust something before you quote it.
evaluation · sources
Evaluate the credibility of this source: [source]. Consider: who is the author and what are their qualifications and potential biases, what is the publication venue and its editorial standards, when was it published (is currency a concern), what evidence does it cite, and how do its claims compare to the consensus in the field. Give a credibility rating from 1-5 with justification.
Turn research notes into a memo
Convert a scattered knowledge dump into a one-pager someone will read.
memos · communication
Take my research notes and turn them into a one-page memo for [audience]. Structure: (1) headline finding in one sentence, (2) context (why we investigated this), (3) key findings (three to five, each with the evidence), (4) implications / recommendations, (5) what is still uncertain. Cut anything that does not serve this structure.
Notes:
Define scope for a research topic
Rein in a topic that wants to sprawl.
scoping · planning
I want to research [topic]. Help me narrow it. Propose three different scoping angles — a narrow version, a medium version, and a broad version — and for each, specify the research question, what would be in scope, what would be out, and roughly how much time it would take. Recommend which to pick if my goal is [goal].
List unknowns to validate next
Turn "we do not know enough" into a prioritised list.
unknowns · prioritisation
Based on the work below, produce a list of the most important things we do not yet know. For each unknown, note: (1) why it matters for the decision or next step, (2) the cheapest way to find out, (3) what we would do differently depending on the answer. Rank them so the biggest decision-changers are at the top.
Context:
Find analogous cases from other domains
Borrow insight from fields that have already solved a version of your problem.
analogies · inspiration
I am working on [problem]. Suggest five analogous situations from other domains (e.g. other industries, historical examples, scientific fields) where a similar structural problem shows up. For each, describe the analogy, what worked in that domain, and what would and would not transfer to mine.
Design a small validation study
Plan a lightweight study that can change your mind.
validation · experiments
I have a hypothesis: [hypothesis]. Design a small study that could validate or invalidate it in [timeframe] with [available resources]. Specify: the method, the participants or data sources, the signal I am looking for, the decision rule (what result would make me update), and the main threats to validity. Keep it scrappy and focused.
Define a safe task for an agent
Scope an agent task tightly so it cannot do something you did not intend.
scoping · safety
I want an agent to [task]. Help me define this as a safe, bounded task. Specify: (1) exactly what inputs the agent can access, (2) what outputs or actions it is allowed to produce, (3) explicit things it must never do, (4) the human checkpoint where I review before anything irreversible happens. Flag any part of the task that is too vague to delegate safely.
Write a handoff between agents
Pass context cleanly from one agent to the next.
multi-agent · handoff
Agent A has just finished [Agent A's task]. Agent B needs to pick up from there to do [Agent B's task]. Write the handoff payload: (1) the relevant context Agent B needs, (2) anything Agent A tried that did not work, (3) explicit assumptions carried forward, (4) the success criteria for Agent B's portion. Keep it short enough that context does not balloon.
Draft guardrails for a workflow
Turn "be careful" into specific rules an agent can follow.
guardrails · governance
I am setting up an agent workflow for [workflow]. Draft guardrails covering: (1) hard limits (actions that should never happen, e.g. spending over $X, touching production data), (2) soft limits (actions that require human approval), (3) escalation triggers (when the agent should stop and ask), (4) audit requirements (what must be logged). Be specific — avoid generic phrases like 'use good judgment'.
Specify success criteria for automation
Know whether the agent did what you wanted.
evaluation · success criteria
I am automating [task]. Define success criteria that I could use to evaluate whether the agent did a good job: (1) measurable outcomes (what should be true when it is done), (2) quality criteria (how to tell a good result from a bad one), (3) failure signatures (what a bad run looks like, not just "it failed"), (4) what I should spot-check manually even when it appears to succeed.
Plan a human review checkpoint
Insert the right review at the right moment — not every moment.
human-in-the-loop · review
For an agent doing [task], design human review checkpoints. For each checkpoint, specify: (1) where in the flow it sits, (2) what the human is actually reviewing (not 'everything'), (3) what decision they are being asked to make, (4) what the agent does while waiting. Avoid reviewing things a human cannot meaningfully evaluate faster than re-doing the work.
List permissions an agent needs
Scope permissions tightly — no more, no less.
permissions · security
I want an agent to [task]. List the minimum permissions it needs: (1) which tools or APIs, (2) scoped to what data or actions (e.g. 'read-only', 'write to this folder only'), (3) any time or rate limits. For each permission, justify why it is needed. Flag any permission that feels broader than strictly required and suggest a tighter alternative.
Write a failure-mode checklist
Think through what can go wrong before it does.
failure modes · reliability
For an agent doing [task], enumerate the failure modes. Cover: (1) input errors (malformed, missing, adversarial), (2) reasoning errors (wrong conclusions, hallucinations, loops), (3) tool errors (API failures, rate limits, permission denied), (4) downstream impact (who gets hurt when it fails silently). For each, note the earliest signal and how the agent should respond.
Summarise what an agent did
Produce a review-friendly summary from an agent run.
audit · summarisation
Here is the log of an agent run. Summarise it for a human reviewer: (1) what the agent set out to do, (2) the key actions it actually took, (3) anything it tried and abandoned, (4) the final outcome, (5) anything notable or unexpected that deserves a second look. Keep it under [length] and flag anything that warrants follow-up.
Log:
Request a rollback plan
Know how to undo before you let an agent do.
rollback · safety
I am about to let an agent perform [action]. Before I run it, write the rollback plan: (1) what state change the action will cause, (2) exactly how to reverse it step by step, (3) what is not reversible (flag these prominently), (4) what to do if rollback itself fails. If any part of this is 'you cannot roll it back', say so clearly.
Clarify boundaries for tool use
Set clear rules about when an agent reaches for a tool vs asks a human.
tool use · policy
An agent has access to the following tools: [tools]. For each tool, specify: (1) when the agent should use it, (2) when it should not (prefer asking or doing nothing), (3) parameters or inputs that should always be validated first, (4) how to handle the tool failing or returning unexpected output.
Design an approval flow for sensitive actions
Route risky actions through a human approval step that actually works.
approvals · governance
For an agent that may need to perform [sensitive action], design an approval flow. Specify: (1) what triggers an approval request (be specific — dollar amount, data type, action category), (2) what information the approver sees (enough to decide, not a wall of logs), (3) who approves and their timeout behaviour, (4) what the agent does while waiting, (5) how the approval is recorded for audit.
Audit an agent integration for security gaps
Review an existing agent setup the way a security engineer would.
security · audit
Here is my agent integration: [description or config]. Audit it the way a security engineer would. Identify: (1) over-broad permissions, (2) missing validation at trust boundaries, (3) actions that should require approval but currently don't, (4) logging or audit gaps, (5) ways the agent could be prompted or manipulated into unintended actions. Rank findings by severity.