
AI-Powered Chatbots

Technical Lead · Tawuniya Insurance · 05/2024 — Present

Key Highlights

  • 3 AI chatbots deployed across departments
  • 85% reduction in HR query response time
  • 92% accuracy on common questions
  • RAG architecture for context-aware responses
Technologies: LLM, RAG, NLP, Python, TypeScript, React

Overview

Designed and deployed three AI-powered chatbots at Tawuniya Insurance, leveraging Large Language Models and Retrieval-Augmented Generation (RAG) to automate internal operations and improve employee experience.

The Challenge

Internal HR operations were heavily manual:

  • Employees waited hours or days for answers to common HR questions
  • HR team was overwhelmed with repetitive inquiries
  • No self-service option for policy lookups, leave balances, or benefits info
  • Knowledge was scattered across multiple documents and systems

Architecture

RAG Pipeline

We implemented a Retrieval-Augmented Generation architecture:

  • Document Ingestion — HR policies, employee handbooks, and FAQ documents were processed, chunked, and embedded
  • Vector Store — Document embeddings stored for fast similarity search
  • Query Processing — User questions are embedded and matched against the knowledge base
  • LLM Generation — Context-enriched prompts sent to the LLM for accurate, grounded responses
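
The four stages above can be sketched end to end. This is a minimal, self-contained illustration rather than the production system: `embed` is a toy bag-of-words stand-in for a real embedding model, the store is an in-memory list rather than a vector database, and the sample policy text is invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory vector store with cosine-similarity search."""
    def __init__(self):
        self.chunks: list[tuple[str, Counter]] = []

    def add(self, chunk: str) -> None:
        # Document ingestion: store the chunk alongside its embedding.
        self.chunks.append((chunk, embed(chunk)))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Query processing: embed the question, rank chunks by similarity.
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(question: str, store: VectorStore) -> str:
    """LLM generation step: build a context-enriched, grounded prompt."""
    context = "\n".join(store.search(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = VectorStore()
store.add("Annual leave policy: employees accrue 30 days of leave per year.")
store.add("Health benefits cover dependents up to age 25.")
prompt = build_prompt("How many days of annual leave do I get?", store)
```

The resulting prompt grounds the model in retrieved policy text, which is what keeps answers factual rather than improvised.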

Conversation Management

  • Multi-turn conversation support with context preservation
  • Intent classification for routing to appropriate knowledge domains
  • Fallback to human agents for complex queries
  • Conversation analytics for continuous improvement

Security & Privacy

  • All data processed within company infrastructure
  • Role-based access control for sensitive HR information
  • Audit logging for compliance requirements
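
One way to combine the last two controls, sketched with hypothetical role names: each knowledge chunk carries a minimum role, retrieval filters on it, and every access decision is appended to an audit trail.

```python
from datetime import datetime, timezone

# Hypothetical role hierarchy; a higher rank sees more.
ROLE_RANK = {"employee": 0, "manager": 1, "hr_admin": 2}

AUDIT_LOG: list[dict] = []  # in production this would be persistent storage

def visible_chunks(chunks: list[tuple[str, str]], user: str, role: str) -> list[str]:
    """Return only chunks the user's role is cleared for, and audit the access."""
    allowed = [text for text, min_role in chunks
               if ROLE_RANK[role] >= ROLE_RANK[min_role]]
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "released": len(allowed),
        "withheld": len(chunks) - len(allowed),
    })
    return allowed

kb = [
    ("Public holiday calendar for 2024.", "employee"),
    ("Salary bands per grade.", "hr_admin"),
]
```

Filtering before generation, rather than after, means restricted text never reaches the LLM prompt at all.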

Results

  • 85% reduction in HR query response time
  • 92% accuracy on common questions
  • 3 chatbots deployed across different departments
  • Positive feedback from 500+ employees

Lessons Learned

  • RAG significantly improves factual accuracy over pure LLM approaches
  • Chunking strategy matters — we tested overlap sizes from 50 to 200 tokens
  • Feedback loops are essential for continuous improvement
  • Guardrails prevent hallucinations on sensitive HR topics
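
The chunking point can be made concrete with a sliding-window chunker of the kind the overlap experiments refer to; the default sizes here are illustrative, not the tuned production values.

```python
def chunk_with_overlap(tokens: list[str], size: int = 400,
                       overlap: int = 100) -> list[list[str]]:
    """Split tokens into windows of `size` that share `overlap` tokens,
    so sentences straddling a boundary stay retrievable in one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]

doc = [f"t{i}" for i in range(1000)]
chunks = chunk_with_overlap(doc, size=400, overlap=100)
```

A larger overlap raises retrieval recall at the cost of more chunks to embed and store, which is why the sweet spot has to be found empirically.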