AI vibe coding landing page playbook — industries vs functions
I Analyzed 1 Million Prompts — And Here Are The Findings — Industry Version
What 90 days of data revealed about why some industries thrive with AI tools while others struggle — and how to position your project for success.

The question that started this research
Three months ago, I started digging into churn on a data dashboard. It showed about 6,500 paying users, roughly 70% of whom had already left, and it crystallized a question that had been nagging me for years: why do some people create magic with AI coding tools while others rage-quit after a few attempts?
After a decade in tech product design — and the last few years obsessively focused on AI — I thought I understood how users behave. With AI, I was wrong. Or rather, I was only seeing part of the picture; everyone approaches AI differently.
The data I analyzed told a story far more nuanced than “good prompts get good results.” It revealed industry-specific patterns, project-type challenges, and counterintuitive behaviors that challenge everything the AI community assumes about effective human-machine collaboration.
This isn’t another “10 tips for better prompts” article. This is a research-backed exploration of how real people — across dozens of industries and wildly different project types — actually interact with AI vibe coding tools. And more importantly, what separates those who build extraordinary things from those who walk away frustrated.
Let me share what we learned about the industry dimension.
The research foundation
Before diving into insights, let me establish what we were working with. Over 90 days, I analyzed 6,500 paying users across ten major markets. Of these, about 25% were still paying at the end of the period, while the remaining 75% had churned. My core analysis focused on 4,000 users from the top ten markets, examining more than a million prompts, with a median of 78 prompts per user and a mean of 180.
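A median of 78 against a mean of 180 implies a heavy-tailed usage distribution: a small group of power users sends far more prompts than the typical user. A minimal sketch with hypothetical per-user counts (the real dataset is not public) shows how the tail separates the two statistics:

```python
from statistics import mean, median

# Hypothetical per-user prompt counts; a long tail of heavy users
# is what pulls the mean well above the median.
prompt_counts = [12, 30, 45, 78, 78, 95, 140, 210, 400, 1200]

med = median(prompt_counts)
avg = mean(prompt_counts)
print(f"median={med}, mean={avg}")  # the tail drags the mean upward
```

With a skew like this, the median is the more honest summary of a "typical" user.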
We measured five dimensions: usage depth, content quality, emotional expression, behavioral patterns, and industry characteristics. What emerged wasn’t a simple formula but a complex web of signals that, once understood, becomes remarkably actionable — especially when viewed through the lens of what people are actually trying to build.
The industry divide — what you build matters more than how you build it
Perhaps our most striking finding was the dramatic variation in success rates across different project types. The gap between the highest- and lowest-performing categories was more than threefold — so large it suggests that industry context fundamentally shapes the AI coding experience.
At the top of the success hierarchy sat tools and productivity applications. Among 35 users building in this category, the retention rate reached 48.5%. These weren’t casual experimenters — they were people building functional applications with clear requirements and defined functionality. Their prompts read like technical specifications, describing features, user flows, and specific behaviors they needed the system to support.
Close behind came education and learning projects. More than 20 users in this category showed a 42.9% retention rate. These projects benefited from structured content patterns and established educational frameworks. When someone builds a course platform or learning management system, there are conventions to follow — lesson structures, progress tracking, assessment formats. This structure translated directly into clearer prompts and more satisfying outcomes.
Real estate applications claimed the third spot with 40% retention. These projects succeeded because they followed standard layouts with clear information hierarchy — property listings, search filters, contact forms, image galleries. The patterns are well-established, and AI tools excel at implementing established patterns.
Landing pages and marketing sites showed 36.4% retention — our largest high-performing sample. These projects had focused scope and measurable goals. Users knew what conversion action they wanted, what message they needed to communicate, and what aesthetic would resonate with their audience. That clarity translated into effective prompts.
Data and analytics dashboards rounded out the top tier at 36.2% retention. These were technical users with specific requirements, often coming from data science or business intelligence backgrounds. They understood their data structures, knew what visualizations they needed, and could articulate requirements precisely.
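Per-category retention figures like these reduce to a simple group-by over (category, retained) records. A minimal sketch with hypothetical records (the category labels and values are illustrative, not the study's data):

```python
from collections import defaultdict

# Hypothetical (category, retained) records standing in for real user data.
records = [
    ("tools", True), ("tools", False), ("tools", True),
    ("legal", False), ("legal", False), ("legal", True),
]

totals = defaultdict(int)    # users per category
retained = defaultdict(int)  # retained users per category
for category, kept in records:
    totals[category] += 1
    retained[category] += kept

rates = {c: retained[c] / totals[c] for c in totals}
print(rates)
```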
The struggling categories — where AI coding hits friction
At the opposite end of the spectrum, certain project types consistently struggled. Legal and law-related projects showed just 19.2% retention — the lowest of any category we measured. These projects faced complex requirements and compliance concerns that AI tools couldn’t easily navigate. A lawyer building a client intake form isn’t just designing a form — they’re ensuring it captures information in legally meaningful ways, complies with confidentiality requirements, and integrates with case management workflows. That complexity proved difficult to communicate effectively in prompts.
Fitness and yoga studios showed 21.7% retention. These projects struggled with visual expectations and booking system complexity. Studio owners had specific aesthetic visions — calming, energetic, welcoming — that proved difficult to articulate. They also needed booking systems, class schedules, instructor profiles, and payment processing, creating technical complexity that frustrated users without development backgrounds.
Bug fixing and troubleshooting represented a special case with 23.7% retention. These weren’t users starting new projects — they were people trying to fix existing problems. Their prompts showed negative word rates ranging from 45% to 81%, reflecting the frustration that drove them to seek AI assistance in the first place. The challenge here wasn’t just technical — it was emotional. People debugging code are already frustrated, and that frustration colored their communication in ways that made productive collaboration difficult.
Design and UI projects showed 23.5% retention. These projects struggled with subjective outcomes and iteration-heavy workflows. Unlike a data dashboard with clear functional requirements, a portfolio site or brand showcase involves aesthetic judgments that vary by individual taste. Users found themselves in endless iteration cycles, trying to articulate visual preferences that felt obvious to them but proved difficult to communicate precisely.
Blog and media projects showed 24.5% retention. These struggled with content strategy complexity. Building a blog isn’t just about layouts — it’s about information architecture, content categorization, reading experiences, and publication workflows. Users who thought they were asking for a simple blog template discovered they needed to make dozens of strategic decisions about how their content would be organized and presented.
The website building paradox
For website projects, retention rates across different countries ranged from 26.7% to 33.9%, suggesting that cultural factors influenced even this most universal of project types.
More revealing was the variation in first conversation length. Users in China averaged 260 characters in their initial prompts for website projects, while users in Brazil averaged 831 characters — more than three times longer. Yet both groups showed similar retention rates, suggesting that different communication styles could lead to similar outcomes when the underlying project type was well-defined.
Website projects succeeded when users came with clear requirements: the type of site (portfolio, business, e-commerce), the key pages needed, the primary user actions, and the aesthetic direction. They struggled when users approached with vague requests like
“I need a website for my business”
without articulating what the business did, who the audience was, or what the site needed to accomplish.
The bug fixing challenge — why non-coders get stuck
Bug fixing deserves special attention because it represents a fundamentally different use case from greenfield development. Users weren’t coming to AI tools with creative vision — they were coming with problems that had already consumed hours of their time and considerable frustration.
The data showed this clearly in the emotional language patterns. Bug fixing prompts contained negative words at rates between 45% and 81% — far higher than any other category. Users wrote things like “this error keeps appearing,” “I can’t figure out why this breaks,” “nothing I try works,” and “this is driving me crazy.”
This emotional state created a vicious cycle. Frustrated users wrote less clear prompts, focusing on their emotional experience rather than technical details. The AI, lacking crucial context about the codebase, environment, and what had already been tried, provided generic suggestions. This led to more frustration, more negative language, and eventually abandonment.
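A negative-word rate like the 45% to 81% figures above can be approximated as the share of prompts containing at least one word from a frustration lexicon. The word list and matching rule here are illustrative assumptions, not the study's actual method:

```python
# Assumed frustration lexicon; the study's real word list is not published.
NEGATIVE_WORDS = {"error", "broken", "can't", "wrong", "crazy", "nothing", "fails"}

def negative_word_rate(prompts: list[str]) -> float:
    """Fraction of prompts containing at least one negative word."""
    if not prompts:
        return 0.0
    flagged = sum(
        1 for p in prompts
        if any(w in p.lower() for w in NEGATIVE_WORDS)
    )
    return flagged / len(prompts)

prompts = [
    "this error keeps appearing",
    "add a date filter to the dashboard",
    "nothing I try works, this is driving me crazy",
]
print(negative_word_rate(prompts))  # 2 of 3 prompts flagged
```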
The users who succeeded with bug fixing approached it differently. They provided complete context: exact error messages, relevant code snippets, what they’d already attempted, expected versus actual behavior, and their environment details. They treated the AI as a debugging partner rather than a magic solution, asking questions like “what could cause this error message in this context?” rather than demanding “fix this bug.”
The E-commerce equation — complexity meets expectations
E-commerce projects revealed another dimension of industry-specific challenges. These weren’t just websites — they were complete business systems requiring product catalogs, shopping carts, payment processing, inventory management, shipping calculations, and customer accounts.
Users building e-commerce sites often underestimated this complexity. They’d start with prompts like “I need an online store for handmade jewelry” without recognizing they were actually requesting a system with dozens of interconnected features. As the conversation progressed and the full scope became apparent, many users became overwhelmed.
The successful e-commerce builders approached their projects in phases. They’d start with the product display and browsing experience, get that working, then add cart functionality, then payment processing, then customer accounts. This incremental approach kept each conversation focused and achievable, building confidence through a series of wins rather than attempting everything at once.
Geographic variation in e-commerce success proved particularly interesting. Brazilian users building e-commerce sites showed 38.9% retention, while American users showed just 24.1% retention. This likely reflected different competitive contexts — American users had access to mature e-commerce platforms like Shopify, making the DIY AI approach less compelling, while Brazilian users faced different market conditions that made custom solutions more attractive.
Service businesses — the success stories
Certain service business categories showed surprisingly strong performance. Restaurants and food service projects achieved 35.3% retention. Medical and healthcare clinics reached 34.6% retention. Hair salons and beauty services hit 33.3% retention. Real estate agencies, as mentioned earlier, achieved 40% retention.
These categories shared common characteristics that made them well-suited to AI vibe coding. They had established website patterns — service listings, booking systems, contact information, photo galleries, testimonials. Users in these industries often had clear mental models of what they needed because they’d seen competitors’ websites and knew what features mattered to their customers.
The prompts from successful service business users reflected this clarity. A restaurant owner might write: “I’m building a site for a farm-to-table restaurant. I need a home page highlighting our philosophy, a menu page that can be easily updated seasonally, an online reservation system, a gallery showing our dishes and atmosphere, and a contact page with location and hours. The aesthetic should feel warm and inviting, emphasizing natural materials and local sourcing.”
This level of specificity — describing not just what they wanted but why, and connecting features to business goals — characterized successful service business projects across categories.
The professional services struggle
In contrast, certain professional service categories consistently struggled. Legal services showed just 19.2% retention. Fitness and wellness services achieved only 21.7% retention. These weren’t failures of the AI tools — they were mismatches between user expectations and what AI vibe coding could realistically deliver.
Legal professionals needed more than attractive websites. They needed intake forms that captured information in legally meaningful ways, content that navigated advertising restrictions, confidentiality protections, and integration with case management systems. The gap between “I need a law firm website” and “I need a compliant legal practice management system with public-facing components” proved difficult to bridge.
Fitness professionals faced different challenges. They had strong aesthetic visions — often inspired by boutique studios they’d visited — that proved difficult to articulate. They’d say things like “I want it to feel energizing but also calming” or “the vibe should be welcoming but also serious about results.” These contradictory requirements, combined with complex booking and class management needs, created frustration.
The users who succeeded in these challenging categories typically had one of two advantages: either they brought technical backgrounds that helped them articulate requirements precisely, or they dramatically lowered their initial scope, focusing on a simple informational site first and planning to add complex features later through other means.
The repetition signal — industry-specific patterns
Our analysis of prompt repetition — users sending identical prompts multiple times — revealed fascinating industry patterns. Certain project types and user segments showed dramatically higher repetition rates, suggesting specific points of friction.
Cleaning and maintenance services in the United States showed 13.85% repetition rates. Mobile application projects in the United Kingdom showed 13.66% repetition. Medical clinic projects in the United States showed 13.03% repetition. Data analytics projects in Brazil showed 12.82% repetition. CRM and business management projects in South Korea showed 12.4% repetition, while similar projects in the United States showed 10.59% repetition. E-commerce projects in Brazil showed 10.28% repetition.
These high repetition rates signaled users encountering obstacles they didn’t know how to overcome. Rather than modifying their approach, they tried the same request again, hoping for different results. This behavior correlated strongly with eventual churn — users with repetition rates above 20% showed just 25.3% retention, compared to 35.6% retention for users with 1–5% repetition rates.
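The repetition metric can be sketched as the share of prompts that are exact repeats of an earlier prompt in the same user's history. This definition is an assumption; the study's exact formula is not published:

```python
from collections import Counter

def repetition_rate(prompts: list[str]) -> float:
    """Share of prompts that are exact repeats of an earlier prompt."""
    if not prompts:
        return 0.0
    counts = Counter(p.strip() for p in prompts)
    # Each prompt beyond the first occurrence of a given text is a repeat.
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(prompts)

prompts = ["make it blue", "add a booking form", "make it blue", "make it blue"]
print(repetition_rate(prompts))  # 2 repeated sends out of 4 prompts
```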
The industry pattern suggested that certain project types had common sticking points where users got stuck. Cleaning services needed booking systems but didn’t know how to describe their scheduling requirements. Mobile app projects needed responsive designs but struggled to articulate cross-device requirements. Medical clinics needed HIPAA-compliant forms but didn’t know how to specify compliance requirements. Data analytics projects needed specific visualization types but lacked the vocabulary to describe them precisely.
The content length signal — varied communication styles
First prompt length varied dramatically by industry and proved predictive of success. Users building bug fixes in India averaged 1,223 characters in their first prompts with 28.3% using structured formatting, achieving 50% retention. Users building bug fixes in Brazil averaged 2,167 characters with 25.1% structure, but achieved only 13.3% retention — suggesting that length alone wasn’t sufficient without other success factors.
Cleaning service projects in the United States averaged 1,313 characters with just 6.9% structure, yet achieved 38.5% retention. Medical clinic projects in the United States averaged 1,026 characters with 15% structure and 27.3% retention. These variations suggested that different industries had different optimal communication patterns — what worked for a technical debugging session differed from what worked for a service business website.
The consistent pattern across industries was that users who provided comprehensive context in their first prompt — whether through length, structure, or both — showed better outcomes than those who started with minimal information and tried to build up through conversation. The specific threshold varied by project type, but the principle held: front-loading context paid dividends.
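Both signals, first-prompt length and the presence of structured formatting, reduce to simple heuristics over the raw text. The detection rules below are illustrative assumptions, not the study's published methodology:

```python
def first_prompt_signals(prompt: str) -> dict:
    """Crude first-prompt quality signals: character count and whether
    the user applied structured formatting (bullets, numbering, headers)."""
    structured = any(
        line.lstrip().startswith(("-", "*", "1.", "#"))
        for line in prompt.splitlines()
    )
    return {"chars": len(prompt), "structured": structured}

prompt = (
    "I'm building a site for a farm-to-table restaurant.\n"
    "- home page highlighting our philosophy\n"
    "- seasonal menu page that is easy to update\n"
    "- online reservation system\n"
)
print(first_prompt_signals(prompt))
```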
Templates are great — but they only get you in the door
Across all industries, users who started from templates or examples showed better outcomes than those starting from scratch. But templates proved more valuable in some contexts than others.
For standardized project types — restaurants, salons, real estate, basic e-commerce — templates provided enormous value. Users could point to an example and say “like this, but with these specific changes,” giving the AI a clear reference point. The conversation became about modifications rather than creation from nothing.
For unique or complex projects — custom applications, specialized professional services, novel business models — templates proved less helpful. Users would start with a template that seemed close, then spend dozens of prompts trying to modify it into something fundamentally different. They would have been better served starting fresh with a comprehensive description of their unique requirements.
The most successful template users treated them as inspiration and starting points, not as solutions to be modified. They’d look at examples to understand what was possible, then write their own detailed prompts describing what they actually needed, occasionally referencing template elements they wanted to incorporate.
The scope definition challenge
One pattern appeared consistently across struggling projects in all industries: scope ambiguity. Users who couldn’t clearly define what they were building — what features it needed, what problems it solved, what success looked like — struggled regardless of industry.
This manifested differently across project types. For websites, it appeared as uncertainty about what pages were needed and what each page should accomplish. For applications, it appeared as vague feature descriptions without clear user flows. For e-commerce, it appeared as uncertainty about the customer journey from discovery to purchase.
The users who succeeded had done the hard work of scope definition before their first prompt. They’d thought through their user’s journey, identified the key features that journey required, and prioritized those features. They understood the difference between “must have” and “nice to have.” They could articulate not just what they wanted but why — what business goal or user need each feature served.
This preparation work proved far more valuable than prompt engineering tricks. A user with clear scope could write a mediocre prompt and still get useful results. A user with vague scope could write a perfect prompt and still end up frustrated because they didn’t actually know what they wanted.
Industry-specific success strategies
Based on our analysis, here are concrete strategies for different project types:
For high-success categories like tools, education, real estate, landing pages, and data analytics, lean into your natural advantages. These project types have clear requirements and established patterns. Start with comprehensive first prompts that describe your specific use case, reference similar examples you’ve seen, and clearly articulate how your project differs from the standard pattern. Use structured formatting with lists and sections. Provide complete context upfront rather than revealing requirements gradually.
For challenging categories like legal, fitness, bug fixing, design, and blogs, adjust your approach. Break large projects into smaller, more defined phases. For legal projects, start with a simple informational site and plan to add complex features through specialized tools. For fitness projects, separate aesthetic concerns from functional requirements — get the booking system working first, then refine the visual design. For bug fixing, provide complete technical context including error messages, code snippets, environment details, and what you’ve already tried. For design projects, use reference images and specific vocabulary rather than subjective descriptions. For blogs, make explicit decisions about content architecture before requesting implementation.
For e-commerce projects, recognize the inherent complexity and plan accordingly. Start with product display and browsing, then add cart functionality, then payment processing, then customer accounts. Each phase should be a separate, focused conversation. Don’t try to build the entire system at once. Consider whether existing platforms might better serve your needs — the data suggests that in markets with mature e-commerce tools, DIY AI approaches struggle to compete.
For service businesses like restaurants, salons, clinics, and real estate, leverage your domain expertise. You understand your customers and what they need from your website. Translate that understanding into specific features and user flows. Don’t just describe what you want — explain why, connecting each feature to a customer need or business goal. Use examples from competitors not to copy but to establish a shared vocabulary for discussing features.
For professional services facing compliance or complexity challenges, be realistic about what AI vibe coding can deliver. These tools excel at creating attractive, functional websites with standard features. They struggle with specialized compliance requirements, complex integrations, and industry-specific functionality. Use AI for what it does well — creating the public-facing site — and plan to address specialized requirements through other means.
Different professionals iterate differently
Successful iteration patterns varied by industry. Technical users building tools or analytics dashboards iterated with precise modifications: “change the chart type from bar to line,” “add a filter for date range,” “make the API response include user metadata.” Each iteration moved the project forward in a specific, measurable way.
Service business users iterated differently, often focusing on aesthetic refinements: “make the header image larger and more prominent,” “change the color scheme to feel warmer,” “add more white space between sections.” These iterations were more subjective but no less valid — they were refining the emotional impact and brand expression of the site.
E-commerce users iterated around user experience flows: “make the add to cart button more prominent,” “show related products on the product detail page,” “add a progress indicator to the checkout process.” Their iterations focused on reducing friction in the purchase journey.
The common thread among successful iterators across all industries was specificity. They didn’t say “make it better” or “that’s not quite right.” They identified exactly what needed to change and articulated that change clearly. This specificity came from understanding their own goals well enough to evaluate whether the current implementation served those goals.
Thoughts on the future
This research suggests that “one tool for everything” is probably the wrong goal. Different builders need different Lego pieces.
The dramatic variation in success rates across project types — from 48.5% for tools down to 19.2% for legal services — indicates that general-purpose AI vibe coding tools serve some use cases far better than others.
Expect to see increasing specialization. Industry-specific AI tools optimized for restaurant websites, real estate listings, fitness studios, or e-commerce stores will likely outperform general-purpose tools for those specific use cases. These specialized tools can encode industry knowledge, provide relevant templates, ask the right questions, and generate appropriate features without requiring users to articulate every requirement.
For users today, this means honestly assessing whether AI vibe coding is the right approach for your specific project. If you’re building something with clear requirements and established patterns — a restaurant site, a landing page, a portfolio — these tools offer tremendous value.
If you’re building something with complex compliance requirements, specialized functionality, or highly subjective creative goals, you may need to combine AI tools with other approaches or wait for more specialized solutions to emerge.
Final thoughts
Analyzing over a million prompts across various industries reveals a crucial insight: the nature of your project is as important as your approach to building it. Different projects, such as creating a restaurant website, debugging code, or developing a legal practice management system, will yield vastly different experiences, even for users with similar skills. This disparity reflects the reality that some problems have established solutions, while others are ambiguous and require specialized knowledge and compliance with regulations.
Successful users understand this context and choose projects that align with the strengths of AI-driven coding. They provide clear requirements and domain knowledge, enabling AI tools to perform optimally. The key question is not whether AI can assist in building your project, but whether the type of project aligns with what AI tools do best and if you are prepared to approach it accordingly.
For projects like tools, educational platforms, service business sites, landing pages, or data dashboards, clear communication and a willingness to iterate lead to success. Conversely, tackling complex bug fixes, professional services, or highly creative designs may require adjusting expectations and adopting a hybrid approach that combines AI tools with other solutions. Ultimately, understanding your project’s alignment with AI capabilities is essential for achieving your goals in the age of AI-assisted creation.
This analysis is based on anonymized, aggregated data from a vibe coding platform. Individual results vary, but the patterns described reflect statistically significant trends across large user populations.
AI vibe coding landing page playbook — industries vs functions was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.