FutureScapes began as a question: could AI create genuinely personalised educational experiences - not just content delivery, but adaptive, responsive learning that meets each student where they are? Building the answer required rethinking how data, intelligence, and user interaction connect. The result is a production-grade platform architected from scratch.
Why This Architecture
Every architectural decision was driven by educational goals:
Graph-First Data Modelling
Because learning is relational - concepts connect to concepts, students connect to their progress, and understanding emerges from traversing those connections. A traditional relational database would fight this; a graph database embraces it.
Custom Multi-Agent Orchestration
Because educational support isn't a single prompt-response interaction. It's a coordinated process - researching context, analysing needs, crafting responses - that mirrors how expert educators think. Off-the-shelf frameworks couldn't deliver the control needed, so a purpose-built agent system with its own A2A (Agent-to-Agent) protocol was developed from scratch.
Retrieval-Augmented Generation
Because AI responses must be grounded in real curriculum content, not hallucinated. The RAG pipeline ensures every interaction draws from authoritative sources.
Microservices Isolation
Because different capabilities evolve at different speeds. The AI layer iterates rapidly; the authentication layer needs stability. Separating concerns enables both.
Architecture Overview
The platform comprises five independent backend services, each with its own responsibility, technology choices, and deployment lifecycle. Services communicate via REST APIs, unified through an NGINX reverse proxy handling SSL termination, routing, and security.
| Layer | Technologies |
|---|---|
| Frontend | React 18, D3.js, Framer Motion, Keycloak-js |
| API Gateway | NGINX with path-based routing, SSL termination |
| Backend Services | Flask 3.0, FastAPI (5 services) |
| AI/LLM | LiteLLM Proxy (Anthropic, Gemini, OpenAI), LlamaIndex |
| Agent Orchestration | Custom A2A protocol, purpose-built agent framework |
| Data | Neo4j (graph), ChromaDB (vector), Redis (cache/broker) |
| Identity | Keycloak (OAuth2/OIDC), JWT with refresh rotation |
| Infrastructure | Docker Compose (10+ containers), GitHub Actions CI/CD |
AI & Intelligence Architecture
This is the heart of the platform - where architectural decisions directly enable educational outcomes.
Multi-Provider LLM Integration
A LiteLLM Proxy provides a unified API layer across multiple LLM providers, enabling consistent model access, routing, and failover from a single integration point.
- Anthropic Claude, Google Gemini, and OpenAI models available through a single API
- Centralised model configuration via JSON - switch providers without code changes
- Optimise for cost, capability, or latency depending on the task
The platform isn't locked to any single AI vendor - providers can be swapped or added without touching application code.
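To make the integration point concrete, here is a minimal sketch of how application code can talk to the proxy through its OpenAI-compatible API. The service address, key, and model alias are illustrative placeholders rather than the platform's actual configuration - in practice the alias-to-provider mapping lives in the proxy's central config.

```python
# Minimal sketch: calling a LiteLLM Proxy through its OpenAI-compatible API.
# The base URL, key, and model alias below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://litellm:4000",  # hypothetical internal proxy address
    api_key="sk-proxy-key",          # virtual key issued by the proxy
)

response = client.chat.completions.create(
    # Alias resolved by the proxy config to Claude, Gemini, or an OpenAI model
    model="tutor-default",
    messages=[{"role": "user", "content": "Explain photosynthesis simply."}],
)
print(response.choices[0].message.content)
```

Because the application only ever sees this one endpoint, swapping the underlying provider is a configuration change on the proxy, not a code change in any service.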
Custom Agent Orchestration with A2A Protocol
Rather than relying on third-party agent frameworks, a purpose-built agent system was designed and developed from scratch:
- Custom base agent architecture providing a consistent foundation for all specialised agents
- A2A (Agent-to-Agent) communication protocol enabling structured inter-agent messaging and coordination
- Agent manager orchestrating multi-agent workflows with task delegation and result aggregation
- Purpose-built specialist agents - each designed for a specific analytical capability
- Long-running tasks processed asynchronously via Celery workers with dedicated task queues
- Real-time progress tracking via Redis pub/sub and WebSocket support
This approach provides full control over agent behaviour, communication patterns, and error handling - something off-the-shelf frameworks couldn't deliver at the level of precision needed for educational applications.
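The production framework isn't reproduced here, but the sketch below illustrates the general shape such a system can take: a structured message envelope, a common base agent, and a manager that registers agents and routes work between them. All class and field names are illustrative assumptions, not the platform's actual API.

```python
# Minimal sketch of a purpose-built agent system: message envelope, base agent,
# and a manager that routes messages. Names are illustrative only.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import uuid


@dataclass
class A2AMessage:
    """Structured envelope for agent-to-agent communication."""
    sender: str
    recipient: str
    task: str
    payload: dict
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class BaseAgent(ABC):
    """Consistent foundation every specialist agent builds on."""
    name: str = "base"

    @abstractmethod
    def handle(self, message: A2AMessage) -> A2AMessage:
        ...


class AgentManager:
    """Registers agents, routes messages, and collects results."""
    def __init__(self) -> None:
        self._agents: dict[str, BaseAgent] = {}

    def register(self, agent: BaseAgent) -> None:
        self._agents[agent.name] = agent

    def dispatch(self, message: A2AMessage) -> A2AMessage:
        # Delegation point: a workflow becomes a sequence of dispatches
        return self._agents[message.recipient].handle(message)
```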
RAG Pipeline
Grounded, accurate responses require grounded, accurate retrieval:
- LlamaIndex orchestrating the full pipeline
- ChromaDB for vector storage and semantic search
- Smart chunking with metadata preservation
- Redis caching reducing LLM costs by 40%+
Every AI response can cite its sources - essential for educational credibility.
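A minimal sketch of this kind of pipeline, assuming recent LlamaIndex and ChromaDB releases: the directory path, collection name, and query are placeholders, and an embedding model and LLM are assumed to be configured (for example through the LiteLLM proxy).

```python
# Minimal sketch: index curriculum documents into ChromaDB via LlamaIndex,
# then answer a query and surface the source chunks behind the answer.
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

chroma_client = chromadb.PersistentClient(path="./chroma_store")
collection = chroma_client.get_or_create_collection("curriculum")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Chunking and embedding happen here; each chunk keeps its source metadata.
documents = SimpleDirectoryReader("./curriculum_docs").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

query_engine = index.as_query_engine()
response = query_engine.query("What does the photosynthesis unit cover?")
print(response)
for node in response.source_nodes:  # the citations behind the answer
    print(node.metadata.get("file_name"), node.score)
```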
Expert System
AI behaviour is managed through a configurable expert system:
- YAML-based prompt templates with version control
- Dynamic variable substitution for context-aware prompts
- Non-technical editing of AI behaviour without code changes
This means educators can refine AI interactions without developer involvement.
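As a rough illustration of the pattern - the template, field names, and variables below are invented for the example, not the platform's real expert-system schema:

```python
# Minimal sketch of a YAML-driven prompt template with variable substitution.
import yaml

TEMPLATE_YAML = """
name: concept_explainer
version: 3
template: |
  You are a patient tutor. Explain {concept} to a {year_group} student,
  building on what they already know about {prior_topic}.
"""

def render_prompt(raw_yaml: str, **variables: str) -> str:
    """Load a prompt template and substitute context variables."""
    config = yaml.safe_load(raw_yaml)
    return config["template"].format(**variables)

print(render_prompt(TEMPLATE_YAML,
                    concept="osmosis",
                    year_group="Year 8",
                    prior_topic="diffusion"))
```

Because behaviour lives in versioned YAML rather than code, a prompt change is a content edit that can be reviewed and rolled back like any other configuration.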
Data Architecture
Graph-First Modelling
The platform uses Neo4j as its primary data store - a deliberate choice for modelling learning as a network of relationships:
- Intuitive relationship traversal via Cypher queries
- Pattern matching across deeply connected entities
- Flexible schema evolution as the domain model grows
- Connection pooling and query optimisation for performance
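A minimal sketch of what such a traversal can look like with the official Neo4j Python driver - the labels, properties, and connection details are illustrative rather than the platform's actual schema:

```python
# Minimal sketch: find concepts a student is ready for next, i.e. concepts
# whose prerequisites the student has already mastered.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://neo4j:7687", auth=("neo4j", "password"))

NEXT_CONCEPTS = """
MATCH (s:Student {id: $student_id})-[:MASTERED]->(:Concept)
      <-[:REQUIRES]-(next:Concept)
WHERE NOT (s)-[:MASTERED]->(next)
RETURN DISTINCT next.name AS concept
"""

with driver.session() as session:
    for record in session.run(NEXT_CONCEPTS, student_id="stu-42"):
        print(record["concept"])

driver.close()
```

The query is a single pattern match over relationships; the equivalent in a relational schema would be a chain of joins across link tables.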
Polyglot Persistence
Different data types need different storage strategies:
| Store | Purpose |
|---|---|
| Neo4j | Primary data - entities and relationships |
| ChromaDB | Vector embeddings for semantic search |
| Redis | Cache, message broker, pub/sub, session store |
| PostgreSQL | Keycloak identity persistence |
Infrastructure & DevOps
Containerised Architecture
- Docker Compose orchestrating 10+ containers
- Multi-stage builds for optimised images
- Network segmentation via bridge networks
- Volume mounts for hot reload in dev
CI/CD Pipeline
- GitHub Actions workflow
- Parallel service builds
- Automated health checks
- Mock auth for testing
Security
- OAuth2/OIDC via Keycloak
- JWT with refresh rotation
- Role-based access control
- SSL/TLS + HSTS + CSP headers
Engineering Patterns
Three-Tier Service Architecture
Every backend service follows a consistent internal pattern:
- Routes Layer - API endpoints and request handling
- Services Layer - Business logic and orchestration
- Repository Layer - Data access abstraction
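A stripped-down sketch of that layering in a Flask service - the names are illustrative, and each layer talks only to the one directly below it:

```python
# Minimal sketch of the routes -> services -> repository layering.
from flask import Flask, jsonify

app = Flask(__name__)


class ConceptRepository:
    """Data access layer: owns queries, hides the database behind it."""
    def get_by_id(self, concept_id: str) -> dict:
        # Stand-in for a Neo4j query behind the abstraction
        return {"id": concept_id, "name": "Photosynthesis"}


class ConceptService:
    """Business logic layer: orchestration, validation, cross-entity rules."""
    def __init__(self, repository: ConceptRepository) -> None:
        self.repository = repository

    def describe(self, concept_id: str) -> dict:
        concept = self.repository.get_by_id(concept_id)
        concept["summary"] = f"{concept['name']} overview"
        return concept


service = ConceptService(ConceptRepository())


@app.route("/concepts/<concept_id>")
def get_concept(concept_id: str):
    """Routes layer: HTTP concerns only, delegates to the service."""
    return jsonify(service.describe(concept_id))
```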
Async Task Processing
For computationally intensive AI operations:
1. Client submits request → receives task ID
2. Request queued to Redis broker
3. Celery worker processes in background
4. Client polls for progress
5. Results retrieved on completion
This pattern keeps the UI responsive while AI does heavy lifting.
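A minimal sketch of the pattern with Celery and Redis - the broker URLs, task name, and endpoints are illustrative assumptions rather than the platform's actual API:

```python
# Minimal sketch: submit a long-running AI task, return a task ID immediately,
# and let the client poll for state and results.
from celery import Celery
from flask import Flask, jsonify

celery_app = Celery("ai_tasks",
                    broker="redis://redis:6379/0",
                    backend="redis://redis:6379/1")
api = Flask(__name__)


@celery_app.task(name="analyse_learner_profile")
def analyse_learner_profile(student_id: str) -> dict:
    # Heavy agent/RAG work happens here, off the request path
    return {"student_id": student_id, "status": "analysed"}


@api.route("/analyses/<student_id>", methods=["POST"])
def submit(student_id: str):
    task = analyse_learner_profile.delay(student_id)  # queued to Redis
    return jsonify({"task_id": task.id}), 202


@api.route("/analyses/status/<task_id>")
def poll(task_id: str):
    result = celery_app.AsyncResult(task_id)  # client polls this endpoint
    payload = result.result if result.successful() else None
    return jsonify({"state": result.state, "result": payload})
```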
What I Built
This isn't a weekend project or a tutorial follow-along. FutureScapes represents:
- 18+ months of design, development, and iteration
- 5 microservices with distinct responsibilities and tech stacks
- 10+ containerised services orchestrated for local and production deployment
- Full authentication system with role-based access control
- Production-grade AI pipeline with multi-provider support
- Custom agent framework with A2A protocol - built from scratch, not off-the-shelf
- Graph database modelling for complex educational relationships
- RAG pipeline with document processing, embedding, and retrieval
All designed, architected, and built from scratch.
Skills Demonstrated
| Architecture | AI/ML | Frontend | Infrastructure |
|---|---|---|---|
| Microservices Design | LLM Integration | React 18 | Docker & Compose |
| API Design | Custom Agent Systems | D3.js Visualisation | CI/CD (GitHub Actions) |
| Graph Databases | A2A Protocol | OAuth2/OIDC | NGINX Configuration |
| Event-Driven Patterns | RAG Pipelines | State Management | Security Hardening |
| Polyglot Persistence | Vector Search | Responsive Design | Environment Management |
FutureScapes demonstrates what's possible when architectural thinking meets educational purpose - a platform built not just to work, but to enable genuinely new approaches to learning.
