Oscilar's AI Risk Decisioning Platform

A real-time AI-powered fraud and risk decisioning system for financial institutions

Team Size: 4
Role: Senior Full Stack/AI Engineer
Duration: Feb 2023 – Present

Tech Stack

Java
Spring Boot
Python
FastAPI
Apache Kafka
React
TypeScript
Next.js
PostgreSQL
Redis
PyTorch
XGBoost
Scikit-learn
Kubernetes
AWS
CI/CD
Jenkins

Project Gallery

[Two platform screenshots]

Introduction

Oscilar's AI Risk Decisioning Platform is a unified "Risk Operating System" that provides a 360-degree view of every customer and transaction. Built by Oscilar, an AI-native risk decisioning company, the platform consolidates fraud, credit, and compliance risk management into a single system, replacing disconnected point solutions with an integrated AI-first approach that delivers sub-100ms decision latency.

Key Features

  • Real-time fraud detection and risk scoring with sub-100ms decision latency across financial transactions
  • Event-driven decisioning workflows on Kafka, processing thousands of events per second for instant alerts and automated actions
  • ML-powered anomaly detection using PyTorch, XGBoost, and Scikit-learn for fraud, account takeover, and credit risk signals
  • Unified data foundation consolidating internal and external data sources with identity resolution and KYC/KYB integrations
  • Analyst dashboard (React + TypeScript + Next.js) for monitoring transactions, reviewing risk outcomes, and taking action
  • LLM-powered case summarization for investigation efficiency and automated risk analysis
  • Explainable AI with SHAP/LIME techniques providing transparent reasoning for every risk decision
  • REST API layer for seamless integration with financial systems and internal tools
  • Redis caching layer reducing data retrieval latency by ~40%
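To make the real-time decisioning flow concrete, here is a minimal sketch of a risk decision that returns an outcome, a score, and the reasons behind it, in the spirit of the explainable decisions described above. The thresholds, weights, and feature names are illustrative assumptions, not the platform's real rules, which live in configurable workflows and ML models.

```python
from dataclasses import dataclass, field
import time

# Hypothetical thresholds for illustration only.
VELOCITY_LIMIT = 5          # max transactions per minute per account
AMOUNT_LIMIT = 10_000.00    # flag unusually large single transactions

@dataclass
class Decision:
    outcome: str                 # "approve" | "review" | "decline"
    score: float                 # 0.0 (safe) to 1.0 (risky)
    reasons: list = field(default_factory=list)
    latency_ms: float = 0.0

def decide(amount: float, txn_count_last_minute: int, device_trusted: bool) -> Decision:
    start = time.perf_counter()
    score, reasons = 0.0, []
    if amount > AMOUNT_LIMIT:
        score += 0.5
        reasons.append("amount_over_limit")
    if txn_count_last_minute > VELOCITY_LIMIT:
        score += 0.4
        reasons.append("velocity_exceeded")
    if not device_trusted:
        score += 0.2
        reasons.append("untrusted_device")
    outcome = "approve" if score < 0.3 else ("review" if score < 0.7 else "decline")
    return Decision(outcome, min(score, 1.0), reasons, (time.perf_counter() - start) * 1000)

d = decide(amount=12_500.00, txn_count_last_minute=2, device_trusted=False)
```

In the real system the score would come from served ML models rather than hand-set rules, but the shape of the response (outcome, score, reason codes, latency) mirrors what an analyst-facing decision record contains.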

Technical Insights

  • Built scalable Java + Spring Boot microservices and Python (FastAPI) services to process high-volume transactional traffic with real-time ML model integration
  • Implemented Apache Kafka pipelines for low-latency streaming, data enrichment, and decision triggers across the platform
  • Designed and optimized PostgreSQL schemas for transaction and user data, ensuring consistency for high-volume financial workloads
  • Implemented Redis caching strategies reducing data retrieval latency by ~40% for real-time decisioning services
  • Integrated PyTorch, XGBoost, and Scikit-learn models improving fraud detection accuracy and reducing false positives by ~20%
  • Deployed microservices-based architecture on Kubernetes with AWS infrastructure for scalable, highly available systems
  • Built retrieval-based pipelines combining real-time and historical data for contextual accuracy and explainability of AI-generated insights
  • Automated delivery through CI/CD pipelines (Jenkins) reducing deployment time by ~35%
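The caching strategy above can be sketched as a short-TTL read-through cache in front of a slow lookup. A stdlib dict stands in for Redis here so the example is self-contained; the decorator name and TTL value are illustrative assumptions.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    """Read-through cache with a short TTL. In production this role is
    played by Redis; the in-process dict is a stdlib stand-in."""
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # cache hit: skip the slow lookup
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = {"count": 0}

@ttl_cache(ttl_seconds=30.0)
def lookup_account_profile(account_id: str) -> dict:
    calls["count"] += 1                # stands in for a slow PostgreSQL query
    return {"account_id": account_id, "risk_tier": "standard"}

lookup_account_profile("acct-1")
lookup_account_profile("acct-1")       # served from cache; no second query
```

The trade-off is staleness: a short TTL keeps decisioning data fresh while still absorbing the repeated reads that dominate real-time traffic.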

Challenges and Solutions

  • High-volume transaction load and low-latency requirements: Designed microservices for horizontal scaling on Kubernetes and used Kafka streaming to process events with sub-100ms latency.
  • Integrating ML models into production without impacting latency: Built dedicated Python/FastAPI services for model serving, with Redis caching to minimize repeated computations.
  • Keeping fraud detection strong without excessive false positives: Integrated PyTorch, XGBoost, and Scikit-learn models with explainability techniques (SHAP/LIME) for transparent decisioning.
  • Case investigation bottlenecks for analysts: Developed LLM-powered case summarization and retrieval-based pipelines to reduce manual review effort.
  • Database performance under financial workload pressure: Optimized PostgreSQL schemas and implemented Redis caching to handle high-volume queries efficiently.
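On the explainability point: for a linear model, per-feature attributions analogous to SHAP values can be computed exactly as weight × (feature − baseline mean), which is the idea behind reason codes attached to each decision. The weights, baselines, and feature names below are illustrative, not the platform's real model.

```python
# Illustrative linear-model weights and baseline feature means.
WEIGHTS = {"amount_zscore": 1.2, "new_device": 0.8, "txn_velocity": 0.5}
BASELINE = {"amount_zscore": 0.0, "new_device": 0.1, "txn_velocity": 1.0}

def explain(features: dict) -> list:
    """Return (feature, contribution) pairs sorted by absolute impact,
    so the top entries serve as reason codes for the decision."""
    contribs = [
        (name, WEIGHTS[name] * (features[name] - BASELINE[name]))
        for name in WEIGHTS
    ]
    return sorted(contribs, key=lambda kv: abs(kv[1]), reverse=True)

top = explain({"amount_zscore": 2.5, "new_device": 1.0, "txn_velocity": 1.0})
```

Tree ensembles like XGBoost need TreeSHAP rather than this closed form, but the output contract is the same: a ranked list of signed contributions an analyst can read.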

Outcome

  • Reduced average decisioning response time to under 100ms for transaction risk evaluation
  • Improved fraud detection accuracy and reduced false positives by ~20% through ML model integration
  • Reduced data retrieval latency by ~40% with Redis caching strategies
  • Cut deployment time by ~35% through automated CI/CD pipelines
  • Enabled real-time processing of thousands of events per second via Kafka event-driven architecture