ChatGPT’s Secret Codes: 30 Commands That Can Save You Hours
Author(s): Yelpin Sergey

Originally published on Towards AI.

Picture this: you ask ChatGPT to write copy for a landing page. Technically, the result looks fine. No obvious mistakes. The length is acceptable. The text is readable enough. But it still falls flat. There's no voice, no point of view, no real substance. You rewrite the prompt, then rewrite it again, and somehow the output keeps missing the target. Now you're staring at five different versions, and not one of them is usable. If that sounds familiar, you're not alone. I know that loop very well.

Now picture a different workflow. You type one simple line:

/ACT AS: UX copywriter

And suddenly the answer comes back in the right tone, with the right emphasis, with the right internal logic, often close enough to drop straight into a mockup without hours of cleanup.

Of course, you can always add more context, clarify the task, and refine the request. But that's not the point here. What matters is this: a short role instruction can dramatically reduce the amount of prompting you need, because ChatGPT understands immediately what kind of specialist it is supposed to be.

This is not magic. It's not some hidden hack. And it's definitely not a secret OpenAI mode. It's simply a compact instruction that shifts ChatGPT from "generic answer machine" into a narrower, more useful expert mode. Most people never use it that way.

In this article, I've collected more than 30 commands that actually hold up in practice. I tested them on real tasks, removed the fluff, and organized the rest into a clear system: what each command does, why it matters, and how to use it. You'll also find examples, practical scenarios, and a simple setup guide.

What slash commands are, and why they work

Slash commands are short text instructions you place at the beginning or end of a prompt. The slash itself does not activate anything inside ChatGPT.
It simply helps the model interpret your request faster and more precisely: the role, the output format, the tone, and the depth of the answer.

These aren't official OpenAI features, and they're not hidden somewhere in the interface. You won't find them listed as a native product function. They're better understood as prompt shorthand: patterns users discovered through repeated use. At some point, people noticed that abbreviations like ELI5, TLDR, or SWOT tend to produce fairly stable outputs. That makes sense. These labels are common across the internet. The model has seen them countless times during training and learned to associate them with specific response structures.

Do they work every time? No. A command can be ignored if your prompt is too long, if multiple instructions clash, or if the expected output format isn't explicit enough. A few simple ways to reduce that risk:

lock the format with commands like /FORMAT: TABLE or /FORMAT: MARKDOWN
avoid stacking more than 3–4 labels in a single prompt
for important tasks, ask the model to review itself with /EVAL-SELF

30+ commands worth keeping in your toolkit

I pulled these from current 2025–2026 materials, tested them against real use cases, and kept only the ones that behave consistently enough to be useful. Everything weak or redundant got cut. What follows is the practical version: a clean list grouped by category, so you can find the right mode quickly instead of digging through examples.

Simplifying and explaining

When you need to break down a complex idea, explain it clearly to a client, or remove unnecessary cognitive load, these commands are especially useful.
/ELI5 — explain it like I'm five; strips out jargon and leans on analogy
/ELI10 — similar, but slightly more advanced; still simple, just less childlike
/STEP-BY-STEP — turns the answer into numbered steps
/FIRST PRINCIPLES — explains a concept from the ground up, starting with fundamentals
/SIMPLIFY — rewrites or explains something in simpler language without losing meaning

Brevity and focus

Sometimes you have ten minutes before a call and a twenty-page report to get through. Or a client sends a giant brief that needs to be distilled fast. This is where compression commands earn their place.

/TLDR — condenses text into 2–3 sentences
/BRIEFLY — answers in 1–2 sentences
/EXEC SUMMARY — gives an executive summary with conclusions and action points
/SMARTBRIEF — summarizes with the emphasis on what matters most

Output format and structure

If you've ever pasted a ChatGPT answer into Google Sheets and then spent fifteen minutes manually cleaning up columns, you already know why formatting commands matter. Same goes for those moments when you need a table and get a wall of prose instead.

/FORMAT AS: [format] — forces a specific structure: TABLE, JSON, CSV, MARKDOWN
/CHECKLIST — turns the answer into a checklist
/SCHEMA — outputs structured data
/LISTIFY — turns any text into a clean list
/OUTLINE — builds the structure of an article or document
/WIREFRAME — describes a wireframe in text form

Role, tone, and audience

This is the strongest category by far. Ask ChatGPT to "write button copy," and the output will change dramatically depending on whether the model thinks it's a UX writer, a marketer, or a product designer. Role affects everything: vocabulary, priorities, framing, and depth.
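To make the stacking concrete, here is one illustrative prompt that combines a role, a tone, an audience, and a locked format in a single request. The task and the specific values are invented for the example; swap in your own:

/ACT AS: UX copywriter
/TONE: friendly
/AUDIENCE: first-time users
/FORMAT: TABLE
Write three variants of the button copy for a checkout confirmation screen, with a one-line rationale for each variant.

Note that the commands sit on their own lines before the task, and the format lock keeps the answer from drifting back into prose.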
/UXMODE — analyzes the task as a UX expert
/ACT AS: [role] — switches into a specialist mode, for example: /ACT AS: UX researcher
/TONE: [tone] — sets the tone of voice, for example: /TONE: friendly
/AUDIENCE: [who] — adapts the answer for a specific audience, for example: /AUDIENCE: beginner designers
/JARGON — uses professional terminology where relevant
/JARGONIZE — converts plain language into more professional documentation-style writing
/HUMANIZE or /HUMAN — makes the text sound more natural and less robotic
/STYLE=[style] — writes in a specific style or mimics a known format, for example: /STYLE=TED Talk
/ROLE: TASK: FORMAT: — combines three key parameters in one line
/3 LEVELS — explains the same idea at three different levels of complexity

Analysis […]