Faheem AI — adaptive learning for thousands of Egyptian students
I own the backend for a Ministry-of-Education-certified AI tutoring platform. 13+ microservices on Azure, multimodal chat with text/voice/image, Azure OpenAI + RAG, and a subscription/voucher system I designed from scratch.
[View live product]

- Microservices: 13+
- Environments: 4
- Students: Thousands
- Recognition: HolonIQ MENA EdTech 50
Tech stack
- .NET 8
- Python (Flask-RESTX, aiohttp)
- Angular
- Azure Container Apps
- Azure OpenAI
- Azure AI Search
- PostgreSQL
- Redis
- Docker
- GitHub Actions
- IdentityServer4
Situation
Appenza Studio was building Faheem AI — an adaptive learning app later published by the Egyptian Ministry of Education and awarded a spot on the HolonIQ MENA EdTech 50. The platform needed to let students ask curriculum questions through text, voice notes, or photos of their homework, and receive grounded, curriculum-accurate AI responses in real time.
Early prototypes were a Python monolith that couldn't scale past a few hundred students. The team needed production-grade infrastructure, a stateful AI chat loop, multimodal input, and a monetization layer, all without a huge headcount.
Task
As a core backend engineer from the early stages of development, I was responsible for:
- Designing a microservices architecture that could scale to tens of thousands of concurrent students
- Integrating Azure OpenAI + Azure AI Search for Retrieval-Augmented Generation over the official Egyptian curriculum
- Building the AI chat feature with conversation state, supporting text, voice, and image inputs
- Migrating core services from Python to .NET without downtime
- Building the subscription + voucher code system from scratch
- Owning Azure infrastructure across four environments with CI/CD
Action
Architecture. I split the monolith into 13+ independently deployable microservices on Azure Container Apps — separating concerns like identity, chat, curriculum retrieval, subscriptions, media processing, and voice. Shared auth flows through IdentityServer4 with OAuth 2.0 / OpenID Connect / JWT, so every service validates tokens consistently. Four environments (dev / QA / staging / prod) with promotion gates.
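To make "every service validates tokens consistently" concrete, here is a minimal sketch of the claim checks each service applies after signature verification. Signature validation itself is handled by the JWT middleware against IdentityServer4's signing keys; the issuer URL and audience name below are hypothetical placeholders, not the real configuration.

```python
import time

# Hypothetical values for illustration; the real services read these from config.
TRUSTED_ISSUER = "https://identity.faheem.example"
EXPECTED_AUDIENCE = "faheem-api"

def validate_claims(claims: dict, now=None) -> bool:
    """Return True only if issuer, audience, and expiry all check out.

    Runs after the JWT signature has already been verified upstream.
    """
    now = time.time() if now is None else now
    if claims.get("iss") != TRUSTED_ISSUER:
        return False
    # "aud" may be a single string or a list of audiences.
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if EXPECTED_AUDIENCE not in audiences:
        return False
    # Reject expired tokens.
    return claims.get("exp", 0) > now
```

Centralizing these checks in one shared library is what keeps 13+ services from drifting into 13 slightly different auth policies.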
AI chat pipeline. The chat feature uses conversation state stored in Redis and Postgres, with a retrieval step against Azure AI Search to ground responses in the official curriculum. I built the tool-calling / prompt orchestration in Python (Flask-RESTX + aiohttp) so the AI team could iterate on prompts independently, while the .NET services handled business logic and durable writes.
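The grounding step can be sketched as pure prompt assembly: retrieved curriculum chunks become the system message, recent conversation turns are replayed, and the new question goes last. In production the chunks come from Azure AI Search and the history from Redis; here both are passed in as plain Python values, and the function name is illustrative.

```python
def build_grounded_prompt(question, retrieved_chunks, history, max_turns=6):
    """Assemble a chat-completion message list grounded in retrieved excerpts.

    retrieved_chunks: curriculum passages returned by the retrieval step.
    history: prior [{"role": ..., "content": ...}] turns (trimmed to max_turns).
    """
    context = "\n\n".join(retrieved_chunks)
    system = (
        "Answer the student using ONLY the curriculum excerpts below. "
        "If the excerpts do not cover the question, say so.\n\n" + context
    )
    recent = history[-max_turns:]  # keep the context window bounded
    return [
        {"role": "system", "content": system},
        *recent,
        {"role": "user", "content": question},
    ]
```

Keeping this assembly logic in the Python layer is what let the AI team tweak grounding instructions without redeploying the .NET services.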
Multimodal input. Voice notes are transcribed via Azure Speech Service; images (e.g., a photo of a handwritten math problem) go through an OCR + vision pipeline before hitting the LLM. The same chat endpoint accepts all three modalities, returning streamed responses.
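Because all three modalities hit the same endpoint, the first step is normalizing each payload to text before retrieval and the LLM call. A minimal sketch, with the transcription and OCR steps injected as callables (in production those are Azure Speech Service and the OCR + vision pipeline; the payload shape here is an assumption for illustration):

```python
def normalize_input(payload: dict, transcribe, ocr) -> str:
    """Reduce a text/voice/image payload to a plain-text question.

    transcribe: callable mapping audio bytes -> text (Azure Speech in prod).
    ocr:        callable mapping image bytes -> text (OCR + vision in prod).
    """
    kind = payload.get("type")
    if kind == "text":
        return payload["content"]
    if kind == "voice":
        return transcribe(payload["audio"])
    if kind == "image":
        return ocr(payload["image"])
    raise ValueError(f"unsupported modality: {kind!r}")
```

Normalizing early means everything downstream (retrieval, prompting, moderation) only ever deals with text.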
Python → .NET migration. Core revenue-adjacent services (subscriptions, user management, authorization) were migrated from Python to .NET 8 Web API using Clean Architecture (Domain / Application / Infrastructure / API). Strangler-pattern rollout, feature flags, zero downtime.
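The strangler rollout reduces to a flag check at the routing layer: each endpoint moves to the new .NET service only once its flag is on, with the legacy Python service as the default. A sketch under assumed flag names (the real flags lived in configuration, not a dict):

```python
def route_request(service: str, flags: dict) -> str:
    """Strangler-pattern router: new implementation behind a per-service flag.

    Returns which backend should serve this request. Flipping the flag off
    instantly reverts traffic to the legacy service, which is the rollback path.
    """
    if flags.get(f"migrate_{service}", False):
        return "dotnet"        # new .NET 8 service
    return "python-legacy"     # original monolith endpoint
```

The per-service granularity is what made zero-downtime migration possible: subscriptions could be cut over and soak-tested while user management still ran on the old code.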
Subscription & vouchers. Full billing module from scratch: tiered plans, voucher codes (school distribution), trial periods, grace periods, webhooks. The voucher system alone unlocked school-by-school rollouts the business team had been asking for.
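The core of the voucher mechanism is unguessable single-use codes tied to a plan. A minimal sketch with an in-memory store (the real system persisted vouchers in PostgreSQL; prefix, code length, and plan names here are illustrative):

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def generate_voucher(prefix: str = "SCH", length: int = 8) -> str:
    """Generate a cryptographically random voucher code like 'SCH-7KQ2M9XA'."""
    body = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return f"{prefix}-{body}"

class VoucherStore:
    """Single-use redemption: a code unlocks its plan exactly once."""

    def __init__(self):
        self._vouchers = {}  # code -> {"plan": str, "redeemed_by": str | None}

    def issue(self, code: str, plan: str) -> None:
        self._vouchers[code] = {"plan": plan, "redeemed_by": None}

    def redeem(self, code: str, student_id: str):
        """Return the plan on first valid redemption, else None."""
        voucher = self._vouchers.get(code)
        if voucher is None or voucher["redeemed_by"] is not None:
            return None  # unknown code or already used
        voucher["redeemed_by"] = student_id
        return voucher["plan"]
```

Batch-issuing codes per school is what enabled the school-by-school rollouts: distribute a prefix batch, watch redemptions, expire the rest.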
DevOps. GitHub Actions pipelines for each service: build → test → container push to Azure Container Registry → deploy to Container Apps. Deployment reliability and release speed both went up materially.
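The per-service pipeline shape can be sketched as a single GitHub Actions workflow. This is an illustrative skeleton, not the production file: the registry, resource group, and service names are hypothetical, and the real pipelines also ran test gates and environment promotion.

```yaml
name: deploy-chat-service        # one workflow per service; names illustrative
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Test
        run: dotnet test ./src

      - name: Build and push container
        run: |
          az acr login --name faheemacr          # hypothetical registry
          docker build -t faheemacr.azurecr.io/chat:${{ github.sha }} .
          docker push faheemacr.azurecr.io/chat:${{ github.sha }}

      - name: Deploy to Azure Container Apps
        run: |
          az containerapp update \
            --name chat-service \
            --resource-group faheem-rg \
            --image faheemacr.azurecr.io/chat:${{ github.sha }}
```

Tagging images with the commit SHA keeps every deploy traceable back to the exact code that produced it, which simplifies rollbacks.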
Result
- Ministry of Education certification — the app is officially published on the Egyptian MoE platform, a trust bar no startup feature-set can fake.
- HolonIQ MENA EdTech 50 award for 2024.
- 13+ microservices across 4 environments running reliably, with independent deploys.
- Thousands of students using AI chat for real curriculum questions, daily.
- Reduced login API latency through targeted query tuning and token caching.
- Mentored 2 junior engineers — code reviews, Clean Architecture enforcement, and architectural decision records.
What I’d do differently
If starting today, I'd reach for a managed agent-orchestration framework (LangGraph-style) earlier instead of hand-rolling tool calls, and I'd add evaluation infrastructure (golden question sets plus automated grading) from day one so prompt changes could be regression-tested before reaching prod.