Hola-Dermat: Personalized Skincare Agentic AI Assistant, Powered by Qdrant + Perplexity + CrewAI

“Finding the right skincare product is like finding a needle in a haystack… except the haystack is the entire internet, and you’re not even sure what a needle looks like.”

Illustration from MIRRAR + Author

If you’ve ever stood in a skincare aisle, staring at rows upon rows of products, each promising to transform your skin… you know the struggle. Do I need hyaluronic acid? What about retinol? Is this SPF 30 or 50? And most importantly, will this actually work for my skin type, in my climate, with my lifestyle?

Enter Hola-Dermat: a personalized skincare regimen assistant that doesn't just throw product recommendations at you, but actually understands your unique skin needs, environmental factors, and preferences to craft a tailored morning and night routine.


But here’s the kicker: this isn’t just another chatbot. Hola-Dermat leverages cutting-edge technologies like Qdrant’s ACORN algorithm, Perplexity web search, and CrewAI’s agentic workflows to solve a problem that’s plagued recommendation systems for years: the dreaded “zero results” problem.

Let's go!!

Let me take you through how we built this, why it matters, and how these technologies work together to create something truly powerful…

Problem: Why Skincare Recommendations Are Broken

Picture this: You’re a software engineer living in Hyderabad, India. Your skin is combination-dry, slightly acne-prone. You spend 18 hours a day in front of screens (yes, I see you nodding…), get minimal sun exposure, and you’re currently using a face wash that makes your skin feel like the Sahara desert.

Ohh My Skin…

Now, try finding products that:

  • Work for combination-dry, acne-prone skin
  • Are available in Hyderabad (or at least India)
  • Address blue light damage from excessive screen time
  • Provide hydration without being too heavy
  • Don’t break the bank

Traditional recommendation systems would either:

  1. Return zero results because your filters are “too specific”
  2. Give you generic products that don’t account for regional availability
  3. Ignore environmental factors like UV index, air quality, and humidity
  4. Forget what you’ve tried before and keep recommending the same things

Hero Enters the Game!

This is where Hola-Dermat steps in. It's not just a product search engine; it's an intelligent assistant that understands context, learns from your history, and adapts to your environment.

Solution: Conversational AI Meets Vector Search

How LLMs Transform the User Experience

The magic starts with a simple conversation. Instead of filling out a 50-field form (which, let’s be honest, nobody wants to do), users can just… talk.

User: "Hey nice to meet you. My skin is a combination one but slightly 
towards the dry and acne prone. I am from south india and I stay
in hyderabad Telangana. I am a software engineer, with a screen
time of 18h a day and very less sun exposure. I am using just a
normal face wash once a day but it dries up my skin like hell."

From this single message, Hola-Dermat extracts:

  • Skin type: Combination-dry, acne-prone
  • Region of origin: South India
  • Current location: Hyderabad, Telangana
  • Occupation: Software engineer
  • Screen time: 18 hours/day (high)
  • Sun exposure: Low
  • Current product issue: Face wash causing excessive dryness

This intelligent parsing happens through pattern matching and natural language understanding, allowing the system to build a comprehensive user profile without the friction of traditional forms.

The LLM (we’re using Claude Sonnet 4.5) then uses this profile to:

  1. Understand the user’s specific needs
  2. Ask clarifying questions only when necessary
  3. Generate natural, conversational responses
  4. Make recommendations that feel personalized, not templated
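Before the LLM even sees the message, a pattern-matching pass pulls out structured fields. Here is a hypothetical sketch of that side of profile extraction; the field names and regexes are illustrative, not the project's actual code:

```python
import re

# Illustrative patterns only; the real system combines regexes with LLM parsing.
PATTERNS = {
    "skin_type": re.compile(r"\b(combination|dry|oily|normal|sensitive)\b", re.I),
    "acne_prone": re.compile(r"\bacne[- ]?prone\b", re.I),
    "screen_time": re.compile(r"screen\s*time\s*(?:of\s*)?(\d+)\s*h", re.I),
    "location": re.compile(r"\bstay\s+in\s+([A-Za-z]+)", re.I),
}

def extract_profile(message: str) -> dict:
    """Pull structured profile fields out of a free-text message."""
    profile = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(message)
        if match:
            # Capture the matched value, or just flag the trait if no group
            profile[field] = match.group(1) if match.groups() else True
    return profile

message = ("My skin is a combination one but slightly towards the dry and "
           "acne prone. I stay in hyderabad and have a screen time of 18h a day.")
print(extract_profile(message))
```

The LLM then fills in whatever the patterns miss and asks for anything still unknown.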

Why Vector Databases? The Semantic Search Revolution

Here’s where things get interesting. Traditional databases are great for exact matches: “Find all products with SPF 50.” But skincare is nuanced. A user might say “I need something for dry, sensitive skin that helps with dark spots” and the system needs to understand that this translates to products containing ingredients like niacinamide, alpha arbutin, or vitamin C.

Okay that’s confusing…

Vector databases solve this by converting text into high-dimensional vectors (embeddings) that capture semantic meaning. Products with similar ingredients, benefits, or use cases end up close together in this vector space.
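A toy illustration of the idea: texts that share meaning end up with similar vectors, so a hydration query scores closer to a hydrating serum than to a clay mask. The real system uses 384-dimensional sentence-transformers embeddings; here a bag-of-words stand-in keeps the example self-contained:

```python
import math
from collections import Counter

# Toy bag-of-words "embedding"; stands in for a real sentence-transformers model.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = embed("hydrating serum for dry skin")
product_a = embed("intensive hydrating serum with hyaluronic acid for dry skin")
product_b = embed("mattifying clay mask for oily skin")
print(cosine(query, product_a), cosine(query, product_b))  # a scores higher
```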

In Hola-Dermat, we use Qdrant, an open-source vector database that has been gaining serious traction in the AI community. Here's how we set up our product collection:

from qdrant_client import QdrantClient
from qdrant_client.models import (
    VectorParams, Distance, OptimizersConfigDiff, HnswConfigDiff,
)

def create_products_collection(client: QdrantClient) -> None:
    """Create the products collection with ACORN configuration."""
    collection_name = settings.PRODUCTS_COLLECTION

    client.create_collection(
        collection_name=collection_name,
        vectors_config=VectorParams(
            size=EMBEDDING_DIM,  # 384 dimensions from sentence-transformers
            distance=Distance.COSINE,
        ),
        optimizers_config=OptimizersConfigDiff(
            indexing_threshold=10000,
        ),
        hnsw_config=HnswConfigDiff(
            m=16,
            ef_construct=100,
        ),
    )

Each product gets converted into an embedding that captures its essence:

def create_product_text(product: dict) -> str:
    """Create searchable text from product data."""
    text_parts = [
        product.get('name', ''),
        product.get('brand', ''),
        product.get('description', ''),
        product.get('product_type', ''),
        ', '.join(product.get('skin_type_compatibility', [])),
        ', '.join(product.get('key_benefits', [])),
        ', '.join(product.get('ingredients', [])),  # Ingredients as a list
    ]
    return ' '.join(text_parts)

# Generate embedding
embedding = get_embedding(product_text)  # Returns a 384-dimensional vector

When a user searches for “something hydrating for dry skin,” the system:

  1. Converts the query into a vector
  2. Finds products with similar vectors (semantic similarity)
  3. Ranks them by relevance
  4. Returns the most appropriate matches

But here’s the thing… semantic search alone isn’t enough. You also need filtering, and that’s where things get complicated.

That’s a hell of a filter

The Zero-Results Problem: When Filters Become Your Enemy 😈

Imagine you’re searching for products that:

  • Are compatible with combination-dry skin
  • Available in India
  • Contain hyaluronic acid
  • Can be used in the morning
  • Cost less than $50

In a traditional system, you’d apply these filters sequentially:

  1. Filter by skin type → 50 products
  2. Filter by region → 20 products
  3. Filter by ingredient → 5 products
  4. Filter by usage → 2 products
  5. Filter by price → 0 products

Game over. Zero results. The user is frustrated, and you’ve lost them.

I don’t wanna lose customers

This happens because traditional filtering uses AND logic: every condition must be met. But in the real world, products might be:

  • Perfect for your skin type but available in a different region (yet still shippable)
  • Contain similar ingredients (peptides instead of hyaluronic acid, but same benefit)
  • Slightly above budget but worth the extra cost
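The collapse described above is easy to reproduce. A toy run of sequential AND filtering, with made-up product data, shows the candidate pool shrinking to zero:

```python
# Made-up catalog for illustration only.
products = [
    {"skin_type": "combination-dry", "region": "India",
     "ingredient": "hyaluronic acid", "usage": "morning", "price": 55},
    {"skin_type": "combination-dry", "region": "India",
     "ingredient": "niacinamide", "usage": "morning", "price": 20},
    {"skin_type": "combination-dry", "region": "US",
     "ingredient": "hyaluronic acid", "usage": "morning", "price": 30},
]

# Each filter is a hard AND condition, applied one after another.
filters = [
    lambda p: p["skin_type"] == "combination-dry",
    lambda p: p["region"] == "India",
    lambda p: p["ingredient"] == "hyaluronic acid",
    lambda p: p["usage"] == "morning",
    lambda p: p["price"] < 50,
]

remaining = products
for f in filters:
    remaining = [p for p in remaining if f(p)]
    print(len(remaining))  # the pool shrinks at every step
```

The last filter knocks out the final candidate, and the user sees an empty page.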

Qdrant’s ACORN algorithm solves this exact problem.

ACORN: The Algorithm That Changed Everything

ACORN is Qdrant’s answer to the zero-results problem. Introduced in Qdrant v1.16, ACORN is a filter-aware search strategy that handles complex, highly selective filters intelligently instead of letting them starve the result set.

Here’s how it works in Hola-Dermat:

def search_products(
    query_text: str,
    skin_type: Optional[str] = None,
    regions: Optional[List[str]] = None,
    usage: Optional[str] = None,
    required_ingredients: Optional[List[str]] = None,
    limit: int = 10,
) -> List[Dict[str, Any]]:
    # Build ACORN filter conditions
    filter_conditions = []

    if skin_type:
        filter_conditions.append(
            FieldCondition(
                key="skin_type_compatibility",
                match=MatchValue(value=skin_type),
            )
        )

    if regions:
        # Product must be available in at least one region (OR logic)
        filter_conditions.append(
            Filter(
                should=[  # "should" means at least one must match
                    FieldCondition(
                        key="regions_available",
                        match=MatchValue(value=region),
                    )
                    for region in regions
                ]
            )
        )

    if required_ingredients:
        # At least one ingredient must match
        ingredient_conditions = [
            FieldCondition(
                key="ingredients",
                match=MatchValue(value=ingredient.lower().strip()),
            )
            for ingredient in required_ingredients
        ]
        filter_conditions.append(
            Filter(should=ingredient_conditions)
        )

    # ACORN automatically handles complex filtering
    query_filter = Filter(must=filter_conditions) if filter_conditions else None

    # Embed the query, then perform the filtered semantic search
    query_embedding = get_embedding(query_text)
    results = client.search(
        collection_name=collection_name,
        query_vector=query_embedding,
        query_filter=query_filter,  # ACORN handles this intelligently
        limit=limit,
        score_threshold=0.3,
    )
    return results

ACORN’s magic lies in its ability to:

  1. Understand filter relationships: It knows that “available in India OR Asia” is more flexible than requiring both
  2. Relax constraints intelligently: If no products match all filters, it can relax non-critical ones
  3. Maintain relevance: Even when relaxing filters, it ensures results are still semantically relevant
  4. Handle nested conditions: Complex filters with AND/OR logic don’t break the system

In production, this means:

  • Higher conversion rates: Users find products even with specific requirements
  • Better user experience: No more frustrating “no results” pages
  • Scalability: Works efficiently even with millions of products and complex queries

Hybrid Search: Semantic + Keyword = Best of Both Worlds

Wait Wait Wait!!!

But wait, there’s more! Hola-Dermat doesn’t just rely on semantic search. We use hybrid search, combining semantic similarity with keyword matching.

# Semantic search (understands meaning)
results = client.search(
    collection_name=collection_name,
    query_vector=query_embedding,  # Vector-based
    query_filter=query_filter,
    limit=limit,
)

# Keyword search (exact matches); scroll returns (points, next_offset)
keyword_results, _ = client.scroll(
    collection_name=collection_name,
    scroll_filter=Filter(
        must=[
            FieldCondition(
                key="text",
                match=MatchText(text=query_text),  # Text-based
            )
        ] + filter_conditions
    ),
    limit=limit,
)

# Combine and deduplicate
products = combine_results(results, keyword_results)

Why both? Because sometimes you need exact matches (like brand names or specific ingredient names), and sometimes you need semantic understanding (like “something for dry skin” matching products with “hydration” or “moisturizing”).
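One plausible shape for the `combine_results` helper used above: deduplicate by product id, let the semantic hit win when a product appears in both lists, and re-rank by score. The names and tie-breaking rule here are assumptions, not the project's actual implementation:

```python
# Hypothetical merge step for hybrid search results.
def combine_results(semantic_hits: list, keyword_hits: list) -> list:
    seen = {}
    for hit in semantic_hits:            # semantic hits first, so they win ties
        seen[hit["id"]] = hit
    for hit in keyword_hits:
        seen.setdefault(hit["id"], hit)  # add keyword-only hits
    # Rank the merged pool by whatever score each hit carries
    return sorted(seen.values(), key=lambda h: h.get("score", 0.0), reverse=True)

semantic = [{"id": "p1", "score": 0.91}, {"id": "p2", "score": 0.74}]
keyword = [{"id": "p2", "score": 0.50}, {"id": "p3", "score": 0.40}]
print([h["id"] for h in combine_results(semantic, keyword)])  # ['p1', 'p2', 'p3']
```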

Perplexity: Bringing Real-Time Web Intelligence

Here’s where Hola-Dermat gets really smart. While Qdrant stores our curated product database, Perplexity brings real-time web intelligence to the table.

Why Perplexity?

  1. Current Information: Weather data, UV index, and air quality, all updated in real time
  2. Regional Product Availability: What’s actually available in Hyderabad today might not be in our database
  3. Latest Reviews and Trends: New products, reformulations, discontinued items

Here’s how we integrate Perplexity:

def research_weather_atmosphere(
    region: str,
    include_forecast: bool = True,
    days_back: int = 3,
    days_forward: int = 5,
) -> Dict[str, Any]:
    """Research current and forecasted weather conditions."""
    query = f"""Provide detailed weather and atmospheric conditions for {region}:
- Current temperature and temperature trends over the past {days_back} days
- Current UV index and UV index forecast for the next {days_forward} days
- Current Air Quality Index (AQI) and air quality trends
- Humidity levels
- Any relevant environmental factors affecting skin health"""

    payload = {
        "model": "llama-3.1-sonar-large-128k-online",
        "messages": [
            {
                "role": "system",
                "content": "You are a weather and environmental data expert...",
            },
            {
                "role": "user",
                "content": query,
            },
        ],
        "temperature": 0.2,
        "max_tokens": 2000,
    }

    response = httpx.post(PERPLEXITY_API_URL, headers=headers, json=payload)
    return response.json()

This data helps Hola-Dermat understand:

  • UV Protection Needs: High UV index? Recommend stronger SPF
  • Hydration Requirements: Low humidity? Emphasize moisturizing products
  • Air Quality Impact: High AQI? Suggest barrier-repair products
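A hypothetical mapping from environmental readings to regimen adjustments, mirroring the bullet points above (thresholds are illustrative, not dermatological guidance from the project):

```python
# Illustrative thresholds only.
def environment_advice(uv_index: float, humidity_pct: float, aqi: int) -> list:
    """Turn weather readings into simple skincare adjustments."""
    advice = []
    if uv_index >= 8:
        advice.append("Use SPF 50+ and reapply during the day")
    elif uv_index >= 3:
        advice.append("Use at least SPF 30")
    if humidity_pct < 40:
        advice.append("Add a richer moisturizer or a humectant serum")
    if aqi > 150:
        advice.append("Include barrier-repair products (ceramides, niacinamide)")
    return advice

# A harsh day in a dry, polluted city triggers all three adjustments
print(environment_advice(uv_index=9, humidity_pct=30, aqi=180))
```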

Dynamic Product Discovery

But WAITTTT! If Qdrant doesn’t have products matching the user’s needs, Perplexity searches the web for regional products, which we then add to Qdrant automatically:

class QdrantAddProductTool(BaseTool):
    """Tool for adding new products to the Qdrant database."""

    def _run(self, product: Dict[str, Any]) -> str:
        """Execute product addition."""
        success = add_product_to_collection(product)
        if success:
            return f"Successfully added product '{product.get('name')}' to database"
        return f"Failed to add product '{product.get('name')}'"

This creates a self-improving system that gets smarter over time.
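A stubbed-out sketch of that self-improving loop, with the Qdrant search, Perplexity research, and add-product tool all replaced by in-memory stand-ins (the real system calls the tools described above):

```python
# Stand-in for the real fallback flow: vector search -> web research -> ingest.
def recommend(query: str, db: list) -> list:
    """Return matching products, discovering and storing new ones on a miss."""
    results = [p for p in db if query in p["text"]]     # stand-in for Qdrant search
    if not results:
        discovered = [{"text": query, "source": "web"}]  # stand-in for Perplexity
        db.extend(discovered)                            # stand-in for the add tool
        results = discovered
    return results

db = [{"text": "hydrating serum", "source": "catalog"}]
print(recommend("vitamin c serum", db))  # miss: falls back to the web, then stores it
print(len(db))                           # the database grew
```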

Multi-Collection Architecture: Products and History

Qdrant’s multi-collection support is perfect for Hola-Dermat. We maintain two separate collections:

1. Products Collection

Stores all skincare products with their embeddings, metadata, and searchable text.

payload = {
    'id': product['id'],
    'name': product['name'],
    'brand': product['brand'],
    'description': product['description'],
    'product_type': product['product_type'],
    'skin_type_compatibility': product['skin_type_compatibility'],
    'regions_available': product['regions_available'],
    'price_range': product['price_range'],
    'usage': product['usage'],
    'key_benefits': product['key_benefits'],
    'ingredients': product['ingredients'],  # List of ingredients
    'text': product_text,  # For hybrid search
}

2. History Collection

Tracks user interactions, product recommendations, feedback, and results.

def update_user_history(
    user_id: str,
    interaction_type: str,
    product_id: Optional[str] = None,
    product_name: Optional[str] = None,
    feedback: Optional[str] = None,
    results: Optional[str] = None,
    rating: Optional[int] = None,
) -> bool:
    """Store user interaction in history collection."""
    history_text = f"{interaction_type} {product_name} {feedback} {results}"
    embedding = get_embedding(history_text)
    point_id = str(uuid.uuid4())  # unique point id (requires `import uuid`)

    payload = {
        "user_id": user_id,
        "timestamp": datetime.now().isoformat(),
        "interaction_type": interaction_type,
        "product_id": product_id,
        "product_name": product_name,
        "feedback": feedback,
        "results": results,
        "rating": rating,
        "text": history_text,
    }

    client.upsert(
        collection_name=settings.HISTORY_COLLECTION,
        points=[PointStruct(id=point_id, vector=embedding, payload=payload)],
    )
    return True

Why Separate Collections?

  1. Different Use Cases: Products need product-focused search; history needs user-focused search
  2. Performance: Smaller, focused collections are faster to query
  3. Scalability: Can scale each collection independently
  4. Privacy: User history can have different retention policies

The Power of History

When a user returns, Hola-Dermat searches their history:

def search_user_history(
    user_id: str,
    query_text: Optional[str] = None,
    limit: int = 20,
) -> List[Dict[str, Any]]:
    """Retrieve user's historical interactions."""
    filter_conditions = [
        FieldCondition(
            key="user_id",
            match=MatchValue(value=user_id),
        )
    ]

    if query_text:
        # Semantic search within user's history
        query_embedding = get_embedding(query_text)
        results = client.search(
            collection_name=settings.HISTORY_COLLECTION,
            query_vector=query_embedding,
            query_filter=Filter(must=filter_conditions),
            limit=limit,
        )

This allows the system to:

  • Remember what products the user tried
  • Understand what worked and what didn’t
  • Avoid recommending products they’ve already rejected
  • Build a personalized preference profile over time
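For instance, a small post-filter can drop products the user has already rejected, based on interactions retrieved from the history collection. This helper is a hypothetical sketch; the field names mirror the history payload shown earlier:

```python
# Hypothetical history-aware post-filter.
def filter_by_history(candidates: list, history: list) -> list:
    """Remove candidates the user has previously rejected."""
    rejected = {h["product_id"] for h in history
                if h.get("interaction_type") == "rejected"}
    return [p for p in candidates if p["id"] not in rejected]

history = [
    {"product_id": "p1", "interaction_type": "rejected"},
    {"product_id": "p2", "interaction_type": "recommended"},
]
candidates = [{"id": "p1"}, {"id": "p2"}, {"id": "p3"}]
print([p["id"] for p in filter_by_history(candidates, history)])  # ['p2', 'p3']
```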

CrewAI: Orchestrating the Agentic Workflow

Now, here’s where it all comes together. CrewAI orchestrates the entire workflow, making intelligent decisions about when to use which tool.

What is CrewAI?

CrewAI is a framework for building agentic AI systems: AI agents that can:

  • Use multiple tools
  • Make decisions autonomously
  • Work together to solve complex problems
  • Learn from their actions

In Hola-Dermat, we have a single agent (the Skincare Consultant) with access to multiple tools:

def create_skincare_agent() -> Agent:
    """Create the main skincare expert agent."""
    return Agent(
        role='Personalized Skincare Consultant',
        goal='Understand user needs and provide personalized recommendations',
        backstory="""You are an expert dermatologist with deep knowledge of
        regional skincare practices, environmental factors affecting skin health,
        and product formulations...""",
        llm=llm,  # Claude Sonnet 4.5
        tools=[
            PerplexityWeatherTool(),          # Weather research
            PerplexityProductResearchTool(),  # Web product search
            PerplexitySkinAnalysisTool(),     # Environmental analysis
            QdrantProductSearchTool(),        # Product database search
            QdrantHistorySearchTool(),        # User history retrieval
            QdrantHistoryUpdateTool(),        # History updates
            QdrantAddProductTool(),           # Add new products
        ],
    )

The Agentic Workflow

When a user requests recommendations, the agent:

  1. Checks User History: “What has this user tried before?”
  2. Researches Weather: “What’s the UV index and air quality in Hyderabad?”
  3. Analyzes Environment: “How do these conditions affect skincare needs?”
  4. Searches Products: “Find products matching the user’s profile”
  5. If No Results: “Search Perplexity for regional products”
  6. Adds New Products: “Store discovered products in Qdrant”
  7. Generates Recommendations: “Create morning and night regimens”
  8. Updates History: “Track what was recommended”

All of this happens autonomously. The agent decides:

  • When to use Perplexity vs Qdrant
  • Which filters to apply
  • How to combine information from multiple sources
  • What questions to ask the user

Task Definition

Here’s how we define the agent’s task:

task_description = f"""Analyze the user's profile and create personalized skincare recommendations.
User Profile:
- Skin Type: {profile_dict.get('skin_type')}
- Current Region: {profile_dict.get('region_stay')}
- Occupation: {profile_dict.get('occupation')}
- Screen Time: {profile_dict.get('screen_time')}
- Sun Exposure: {profile_dict.get('sun_exposure')}
...
Your task:
1. Check the user's history to understand past product usage
2. Research weather conditions for {region}
3. Analyze how regional factors affect skin needs
4. Search the product database for matching products
5. If no products found, use Perplexity to research regional products
6. Add any new products to the database
7. Create comprehensive morning and night regimens
8. Explain why each product was chosen
9. Silently update user history (background task)
"""

The agent executes this task, making decisions along the way, and returns a comprehensive recommendation.

Data Ingestion: Building the Foundation

Before any of this magic happens, we need to populate Qdrant with products. Here’s our ingestion pipeline:

def ingest_products(products: list[dict], client) -> None:
    """Ingest products into Qdrant."""
    collection_name = settings.PRODUCTS_COLLECTION
    points = []

    for product in products:
        # Create searchable text
        product_text = create_product_text(product)

        # Generate embedding using sentence-transformers
        embedding = get_embedding(product_text)

        # Create point with metadata
        point = PointStruct(
            id=hash(product['id']) % (2**63),
            vector=embedding,
            payload={
                'id': product['id'],
                'name': product['name'],
                'brand': product['brand'],
                'description': product['description'],
                'product_type': product['product_type'],
                'skin_type_compatibility': product['skin_type_compatibility'],
                'regions_available': product['regions_available'],
                'price_range': product['price_range'],
                'usage': product['usage'],
                'key_benefits': product['key_benefits'],
                'ingredients': product['ingredients'],
                'text': product_text,  # For hybrid search
            },
        )
        points.append(point)

    # Batch upsert for efficiency
    client.upsert(
        collection_name=collection_name,
        points=points,
    )

Our product data structure looks like this:

NOTE: If you don’t have any products yet, don’t worry: the system will discover and add new ones after an exploratory web search.

{
  "id": "prod_001",
  "name": "Hyaluronic Acid Hydrating Serum",
  "brand": "GlowSkin",
  "description": "Intensive hydrating serum with 2% hyaluronic acid...",
  "ingredients": ["hyaluronic acid", "peptides", "ceramides"],
  "product_type": "serum",
  "skin_type_compatibility": ["dry", "normal", "combination", "sensitive"],
  "regions_available": ["US", "EU", "Asia", "Global"],
  "price_range": "$$",
  "usage": "both",
  "key_benefits": ["Hydration", "Plumping", "Moisture retention"]
}

Notice how ingredients are stored as a list rather than boolean flags. This makes the data more flexible and searchable.
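The payoff of the list representation: one field answers arbitrary ingredient queries without schema changes, where boolean flags would need a new column per ingredient. A minimal sketch (the `has_any` helper is hypothetical):

```python
# One list field covers every ingredient query; no per-ingredient flags needed.
product = {"name": "Hyaluronic Acid Hydrating Serum",
           "ingredients": ["hyaluronic acid", "peptides", "ceramides"]}

def has_any(product: dict, wanted: list) -> bool:
    """True if the product contains at least one of the wanted ingredients."""
    return any(i in product["ingredients"] for i in wanted)

print(has_any(product, ["hyaluronic acid", "retinol"]))  # True
print(has_any(product, ["retinol"]))                     # False
```

The same shape is what lets Qdrant's `MatchValue` on the `ingredients` key match any element of the list.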

The Streamlit Interface: Making It All Accessible

All of this powerful technology is wrapped in a clean, conversational Streamlit interface:

def main():
    st.title("✨ Hola-Dermat")
    st.markdown("### Your Personalized Skincare Assistant")

    # Initialize session state
    ChatManager.initialize_session_state()

    # Display chat history
    for message in ChatManager.get_messages():
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # User input
    user_input = st.chat_input("Type your message here...")

    if user_input:
        # Intelligent profile extraction
        ChatManager.extract_profile_info(user_input, profile)

        # If profile complete, trigger CrewAI workflow
        if ChatManager.is_profile_collected():
            with st.spinner("Analyzing your profile..."):
                recommendations = generate_recommendations(profile, user_id)
            st.markdown("### Your Personalized Skincare Regimen")
            st.markdown(recommendations['recommendations'])

The interface handles:

  • Intelligent parsing of user messages
  • Progressive profile building (only asks for missing info)
  • Background processing (hides agent actions from users)
  • Clean output (filters out verbose logs)

Production Impact

Let’s talk about what this means in production…

Scalability

  • Qdrant’s ACORN algorithm handles millions of products efficiently
  • Multi-collection architecture allows independent scaling
  • Hybrid search ensures fast results even with complex queries

User Experience

  • No zero results: ACORN ensures users always find something relevant
  • Personalization: History collection builds long-term user profiles
  • Real-time intelligence: Perplexity keeps recommendations current

Business Value

  • Higher conversion: Users find products that actually match their needs
  • Reduced churn: Personalized experience keeps users engaged
  • Data insights: History collection provides valuable user behavior data

Technical Advantages

  • Self-improving: New products automatically added to database
  • Flexible filtering: ACORN handles complex queries gracefully
  • Maintainable: Clear separation between products and user data

Conclusion: The Future of Agentic AI Systems

Hola-Dermat represents more than just a skincare assistant; it’s a blueprint for how agentic AI systems can solve complex, personalized problems across various domains.

Key Takeaways

  1. Agentic AI enables systems to make autonomous decisions, use multiple tools, and adapt to user needs
  2. Qdrant’s ACORN algorithm solves the perennial zero-results problem, making production systems more reliable
  3. Vector databases enable semantic understanding that traditional databases can’t match
  4. Hybrid search combines the best of semantic and keyword matching
  5. Multi-collection architecture allows for scalable, maintainable systems
  6. Real-time web intelligence (via Perplexity) keeps recommendations current and relevant
  7. User history creates long-term personalization that improves over time

The Bigger Picture

The technologies powering Hola-Dermat (Qdrant, CrewAI, Perplexity, and modern LLMs) aren’t just tools. They’re the building blocks of a new generation of AI systems that:

  • Understand context instead of just matching keywords
  • Learn from interactions instead of being static
  • Make intelligent decisions instead of following rigid rules
  • Scale gracefully instead of breaking under complexity

Whether you’re building a recommendation system, a search engine, or any application that needs to understand user intent and provide personalized results, these patterns apply.

Qdrant’s ACORN algorithm, in particular, addresses a problem that’s plagued production systems for years. By intelligently handling complex filters and avoiding zero-result scenarios, it makes vector databases truly production-ready.

And CrewAI? It’s the orchestrator that brings it all together, making agentic AI accessible, powerful, and practical.

So the next time you’re building something that needs to understand users, search intelligently, and adapt over time… remember Hola-Dermat. The future of AI isn’t just about bigger models; it’s about smarter systems that work together seamlessly.

Now, if you’ll excuse me… I need to go update my skincare routine. My screen time is showing… 😅

Want to build your own?

Check out the Hola-Dermat repository and start experimenting with Qdrant, CrewAI, and agentic AI systems.


Hola-Dermat: Personalized Skincare Agentic AI Assistant, Powered by Qdrant + Perplexity + CrewAI was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
