An introduction to who I am, what I'm building, and why it matters. Recording in progress.
AI & Data · Strategy · Technology Leadership
I help enterprises build systems where good behavior is easy — and bad behavior is obvious.
I get called in when the problem is real and the answer isn't on anyone's roadmap yet. Sometimes that's a leadership team trying to figure out how to actually operationalize AI. Sometimes it's an engineering organization that's grown faster than the architecture beneath it. Sometimes it's an executive who knows something is wrong but can't see it clearly enough to fix it.
The work has crossed a lot of territory — commercial software products, global engineering organizations, managed services platforms, compliance frameworks, executive advisory practices. The through-line isn't a job title. It's the ability to move between building, shipping, and advising without losing fluency in any of them — and to know, from having done all three, exactly where the bodies are buried before the project starts.
The intersection that matters most is real-time data architecture and AI readiness — specifically, building the foundations that make AI outcomes predictable rather than aspirational. Most organizations are sitting on pipelines that weren't designed to support what they're being asked to do today. Getting that right, without tearing everything down, is the hard problem I find genuinely interesting.
The thread through all of it is trust — in the systems, in the data, in the people interpreting it. I've spent a long time learning how to build that, and how to spot where it's quietly missing.
Building the data foundations that make AI outcomes predictable — not aspirational. Real-time context, governed pipelines, and systems that don't lie to the models running on top of them.
Event-driven architecture as a first principle, not an afterthought. Systems that react, adapt, and deliver — without the batch-processing debt that quietly undermines everything downstream.
Turning raw pipeline output into something with ownership, a contract, and a lifecycle. The difference between data that exists and data that can be trusted.
From a team of one to global engineering organizations — building managed services, developer platforms, and automation frameworks that compound in value over time.
Sitting in the room where the strategy is set and the architecture is decided — and making sure those two conversations are actually about the same thing.
Governance, compliance, and security built in — not bolted on. FedRAMP, HIPAA, NIST, GDPR. The frameworks that determine whether an enterprise can actually move.
A short film on how I think, what I've built, and why the intersection of systems, trust, and AI is where I've chosen to spend my time.
Every role built on the last — from founding-team engineer to IPO architect to global team leader to executive advisor. The depth is real because the progression was earned, not skipped.
Trusted advisor to enterprise organizations on data streaming readiness, AI enablement, and event-driven platform strategy. Design and facilitate Domain-Driven Design engagements, streaming activation workshops, and Center of Excellence programs for CDOs, CIOs, and data platform leaders. Develop strategic frameworks linking platform capabilities to business outcomes — including real-time personalization, streaming governance, and Data Product Marketplace activation — across financial services, CPG, insurance, and hospitality sectors.
Progressive leadership across four roles spanning a decade. Built and productized managed cloud services — Airflow, Kafka, SAS Container Runtime — generating $16M+ ARR. Architected the MAS Automation Framework on AKS/EKS managing 1,000+ Kubernetes environments globally. Migrated SAS CI360 to Kubernetes-backed Kafka, reducing infrastructure costs by 69%. Led globally distributed engineering teams across the US, EMEA, and APAC. Integrated generative AI — including Snowflake Cortex, Azure OpenAI, and LLaMA — into delivery pipelines. Held compliance responsibility across FedRAMP, HIPAA, HITRUST, NIST, ISO 27001, MARS-E, GDPR, and Schrems II. Selected speaker at Confluent Current and Snowflake Summit.
Architected a distributed data warehousing solution for real-time clinical trial analytics on AWS and IBM Cloud. Built high-availability Hadoop infrastructure with Hive, Pig, YARN, and Spark. Designed secure ETL pipelines for life sciences data, implemented NIST, MARS-E, and ISO 27001-aligned security controls, and launched a managed Elasticsearch offering for distributed search and indexing.
Joined the founding engineering team for PureCloud — a cloud-native contact center platform that scaled post-acquisition into Genesys's core SaaS offering. Built event-streaming pipelines using Kafka, AWS EMR, and S3 feeding Redis and Cassandra for real-time analytics. Architected a partitioned data warehouse and Data Access Layer supporting VoIP, call automation, and customer engagement at scale.
Led design and implementation of a multi-terabyte analytics platform supporting ChannelAdvisor's IPO — enabling real-time e-commerce analytics across search, PPC, and inventory channels. Engineered fault-tolerant federated sharding across large datasets, directed BI and ETL pipeline delivery, and led migration from SQL Server to PostgreSQL. Managed a cross-functional engineering team through a period of significant scale and organizational growth.
Progressive early career across healthcare, fintech, and technology. Primary developer for a predictive OLAP platform at Visionary Systems — contributing to data architecture that positioned the company for acquisition by TransUnion. Early innovations in real-time reporting, replication, and hybrid data architectures across SQL Server, Oracle, and OLAP/ETL environments.
Interactive 3D presentation platform built for a large-scale executive AI strategy event at a Fortune 50 CPG company. Real-time presenter sync, animated slide components, and live data streaming lab infrastructure for simultaneous hands-on workshop attendees.
Event-driven risk intelligence platform for a major hospitality and gaming brand. Real-time pipelines for live odds monitoring, anomaly detection, and risk exposure reporting — built on Snowflake Iceberg with sub-second latency requirements.
Production-grade governed AI orchestration platform for a leading data trust and privacy organization. RBAC, Schema Registry, and Snowflake-backed AI pipelines — demonstrating that governance and velocity are not in conflict.
An independent proof-of-concept exploring AI-augmented creative production. Architected an end-to-end pipeline that transforms static comic panels into animated sequences — combining computer vision, generative video models, and FFmpeg composition. Built to explore the boundary between narrative structure and machine perception.
Dual-track real-time intelligence platform for a major financial institution — covering lending decisioning and payment fraud detection. Event-driven pipelines feeding inference models with sub-second latency, backed by Iceberg for analytics continuity.
Designed and delivered a structured streaming maturity and AI readiness framework across multiple business units of a leading mortgage platform — including workshop facilitation guides, executive briefings, and a prioritized transformation roadmap.
Built and scaled the managed services platform powering a major enterprise software company's cloud offerings — from zero to $16M+ ARR. Airflow, Kafka, and container runtime managed at scale across 1,000+ Kubernetes environments globally.
A governance-first AI system that themes this site's 3D visualization daily. A Cloud Run Job calls Anthropic Claude at midnight, validates the output against a strict schema, enforces safety rules server-side, and writes to GCS. The API key never reaches the browser. You are looking at it.
Additional case studies and open source work in progress.
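The validation step in the daily theming system above can be sketched in miniature. This is a hypothetical illustration of server-side schema and safety checking before anything touches storage — the field names, bounds, and payload shape here are assumptions for the sketch, not the production schema:

```python
import re

# Hypothetical strict schema for a generated theme: every field must be
# present, correctly typed, and within bounds, or the whole payload is rejected.
HEX_COLOR = re.compile(r"^#[0-9a-fA-F]{6}$")

def validate_theme(payload: dict) -> dict:
    """Return the payload if it passes schema and safety rules; raise otherwise."""
    allowed_keys = {"name", "primary", "accent", "intensity"}
    if set(payload) != allowed_keys:
        raise ValueError(f"unexpected or missing keys: {set(payload) ^ allowed_keys}")
    if not isinstance(payload["name"], str) or len(payload["name"]) > 40:
        raise ValueError("name must be a short string")
    for key in ("primary", "accent"):
        if not isinstance(payload[key], str) or not HEX_COLOR.match(payload[key]):
            raise ValueError(f"{key} must be a #RRGGBB color")
    if not isinstance(payload["intensity"], (int, float)) or not 0.0 <= payload["intensity"] <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return payload

# The model's output is treated as untrusted input: validate first, write to
# storage only on success — and the API key stays server-side throughout.
theme = validate_theme(
    {"name": "midnight", "primary": "#101820", "accent": "#f2aa4c", "intensity": 0.6}
)
```

The design choice is the point: rejection is the default, and only payloads that match the allowlist exactly ever reach the browser.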
Circle the nouns — that's the standard workshop advice. But nouns don't deliver outcomes. A data product is a bounded-context capability that owns its state and events, exposes intentional interfaces, and upholds explicit guarantees for quality, timeliness, security, and change. The noun makes it findable. The capability makes it valuable. The guarantee makes it safe.
Data doesn't rot. It drifts. The real enemy isn't corruption — it's answers that look right but aren't anymore. This piece introduces the Doorman, Ropes, and Clock framework: three controls that prevent trust from eroding slowly through systems that feel fine but are quietly running on yesterday's world.
On AI hallucination, prompt engineering, and data governance — written in a deliberately non-corporate voice.
Maturity models are useful until they become a checkbox. The streaming space has a taxonomy problem — here's what a better framework looks like.
Data streaming and platform engineering strategies for the enterprise.
Data products and governed AI architectures at scale.
Most AI initiatives fail not because the models are wrong, but because the data feeding them is. I build the infrastructure that makes AI outcomes repeatable — real-time context pipelines, governed ingestion layers, and schemas designed so the model isn't guessing about what it's receiving.
The goal isn't AI that works in a demo. It's AI that behaves the same way in production tomorrow as it does today.
Batch processing was an engineering compromise born from hardware limits. Most enterprises inherited the mindset along with the systems. I design architectures where the data model reflects reality as it happens — not a summary of what happened overnight.
The shift changes more than the latency. It changes what you can build on top of it.
The difference between a dataset and a data product is accountability. A data product has an owner, a schema contract, a versioning strategy, and consumers who can build on it without breaking when it changes.
I design the lifecycle, governance, and marketplace structures that make data trustworthy enough to build AI on top of — and defensible enough to put in front of a regulator.
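One way to make that concrete — a hypothetical sketch, not a prescribed implementation — is to treat the contract itself as a versioned, checkable artifact that consumers can test against before upgrading:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SchemaContract:
    """A minimal data-product contract: an owner, a versioned field set,
    and a compatibility check consumers can run before adopting a change."""
    owner: str
    version: tuple  # (major, minor): a major bump signals a breaking change
    fields: dict    # field name -> type name

    def is_backward_compatible(self, new: "SchemaContract") -> bool:
        # Additive changes (new fields) are fine; removing or retyping an
        # existing field breaks every consumer built on this contract.
        return all(new.fields.get(name) == typ for name, typ in self.fields.items())

# Illustrative contracts — the names are invented for this example.
v1 = SchemaContract(owner="payments-team", version=(1, 0),
                    fields={"txn_id": "str", "amount": "float"})
v2 = SchemaContract(owner="payments-team", version=(1, 1),
                    fields={"txn_id": "str", "amount": "float", "currency": "str"})
```

Here `v1.is_backward_compatible(v2)` holds because v2 only adds a field; a contract that dropped `amount` would fail the check — which is exactly the accountability a bare dataset never gives you.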
I've scaled engineering teams across four continents, built managed services platforms generating $16M+ ARR, and automated the operational overhead out of systems that would otherwise require armies to run.
The work isn't glamorous from the outside. But it's what makes everything else possible — and it compounds in ways that most organizations don't fully see until they don't have it.
The most dangerous gap in most technology organizations isn't a skills gap — it's the translation gap between what engineers build and what executives fund. I've spent years earning credibility on both sides of that table.
I can walk into a board-level discussion on AI strategy and a deep-dive with a platform engineering team in the same day, and say something useful in both rooms.
Compliance isn't a checkpoint at the end of a project. It's a constraint that needs to be in the design from the start. I've navigated FedRAMP, HIPAA, NIST, ISO 27001, MARS-E, GDPR, and Schrems II across cloud, data, and AI contexts.
The frameworks are different. The principle is the same: build systems where the safe path is also the easy path.
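That principle — safe by default, unsafe only by explicit and visible choice — can be illustrated with a hypothetical pipeline config; every name here is invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineConfig:
    """Hypothetical config where every security-relevant default is the
    safe choice; loosening anything requires an explicit flag."""
    encrypt_at_rest: bool = True
    redact_pii: bool = True
    allow_public_egress: bool = False

def audit(cfg: PipelineConfig) -> list:
    """List every deliberate deviation from the safe defaults,
    so the unsafe choices are the ones that stand out in review."""
    defaults = PipelineConfig()
    return [name for name in cfg.__dataclass_fields__
            if getattr(cfg, name) != getattr(defaults, name)]
```

Doing nothing yields a compliant pipeline and an empty audit; turning off `redact_pii` works, but it shows up by name — good behavior is easy, bad behavior is obvious.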