A Sovereign Conversational Assistant Powered by ALIA and Mistral for the AI Act Age: Architecture, Governance and Evaluation

Cultural-heritage destinations are adopting digital twins and Living Labs to improve conservation, safety, and visitor experience. Operationalising these initiatives requires trustworthy interfaces capable of answering questions grounded in authoritative sources under public-sector governance constraints. We present a Sovereign Conversational Assistant (SCA) built on a small-language-model (SLM) plus retrieval-augmented generation (RAG) platform designed for the next-generation Libelium Heritage Living Lab. The assistant is therefore agnostic to any specific LLM. Testing has focused on the Barcelona Supercomputing Center's newly released BSC-LT/ALIA-40b-instruct-2601 and on mistralai/Mistral-Small-3.2-24B-Instruct-2506, one of the state-of-the-art standard-bearers among mid-size SLMs. The platform integrates provenance logging, safety controls, and language enforcement. We evaluate the assistant on a benchmark of 19 tests across five categories: historical queries, client experience, data analysis, hallucination resistance, and safety/ethics. Our findings reveal that while both models adeptly retrieve factual historical and operational information, their reliability diverges under complex conditions. Mistral achieved a 100% pass rate across all tests, demonstrating strong analytical capabilities without hallucination while upholding the multilingual and safety guardrails. In contrast, ALIA exhibited numerical drift during data analysis and vulnerabilities in cross-language scenarios. The results show that a compact, sovereign RAG stack running on ALIA can meet core information needs in English and Spanish for Heritage Living Labs, while highlighting the necessity of refusal robustness and explicit multilingual control for public-facing deployment.
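To make the architecture concrete, the following is a minimal sketch of an SLM + RAG pipeline with provenance logging and language enforcement of the kind the abstract describes. All names, the toy corpus, and the keyword-overlap retrieval are illustrative assumptions, not the paper's implementation; in particular, a real deployment would use a vector store and an actual model call where the prompt is returned here.

```python
# Illustrative sketch only: corpus, scoring, and identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str   # provenance identifier for the authoritative source
    text: str

@dataclass
class ProvenanceLog:
    entries: list = field(default_factory=list)
    def record(self, query: str, doc_ids: list, language: str) -> None:
        # Every answer is traceable to the sources and language it used.
        self.entries.append({"query": query, "sources": doc_ids, "lang": language})

CORPUS = [
    Document("hist-001", "The fortress was built in the 12th century."),
    Document("ops-002", "Visitor sensors report humidity and occupancy hourly."),
]

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    # Naive keyword-overlap scoring stands in for embedding similarity.
    q_terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.text.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list, language: str = "en") -> str:
    # Language enforcement: the answer language is pinned in the prompt,
    # independent of the language of the user query or retrieved passages.
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (f"Answer ONLY in language '{language}', "
            f"using ONLY the sources below.\n{context}\nQuestion: {query}")

log = ProvenanceLog()
query = "When was the fortress built?"
docs = retrieve(query, CORPUS)
prompt = build_prompt(query, docs, language="en")
log.record(query, [d.doc_id for d in docs], "en")
print(docs[0].doc_id)  # the historical source wins the overlap score
```

The key design point is that retrieval, language pinning, and provenance recording happen outside the model, so the same governance layer works whether ALIA or Mistral sits behind it.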
