// Signal → Semantics → Query

Making physical
environments
queryable.

A platform that ingests any sensory data — 3D scans, documents, audio, RF signals — extracts structured semantic scene graphs, and makes them available for natural language reasoning. Built for robotics. Designed for every domain.

Core Thesis

The gap between
sensing and knowing.

Frontier language models are remarkably capable reasoners — but they operate on text and images that are already semantic. Point clouds, RF spectra, volumetric scans, and sensor telemetry are signal-first: they require substantial domain-specific processing before any meaning can emerge.

This gap — between raw physical signal and queryable semantic knowledge — is where robotics systems still struggle most. A robot that can navigate a known map is not the same as a robot that can answer "is there enough clearance for a wheelchair between the desk and the door?"

"The most defensible position is to be the signal-to-semantics translation layer that sits upstream of whatever foundation model the world prefers."

I'm building a universal semantic memory platform — a system that ingests any sensory or documentary data source, extracts meaning through domain-specific adapters, and exposes that meaning through a unified graph store and natural language query interface. The 3D spatial search product is the first instantiation. Every adapter compounds the value of the infrastructure already built.

Universal core.
Domain-specific adapters.

The critical architectural boundary: adapters handle raw signal processing and produce a normalized graph. The universal core handles everything else — storage, retrieval, reasoning, and user interaction.

Universal 01
User Profile Store
Persistent representation of goals, background, and domain vocabulary. Acts as a scoring lens on every query — results are ranked by relevance to this specific user's work, not general similarity.
Universal 02
Graph Store + Vector Index
Typed nodes with attributes, typed edges with weights, and a vector embedding per entity. pgvector for semantic retrieval. NetworkX for graph traversal. Schema-agnostic — any adapter's output lands here.
Universal 03
RAG Agent Layer
LangGraph orchestration. Tools call into the graph store, vector index, and adapter-specific query functions. The agent constructs answers by traversing the scene graph with spatial, semantic, and temporal constraints.
Adapter 04
Signal Processing
Domain-specific ingestion — Open3D for point clouds, PyMuPDF for documents, librosa for audio. Each adapter's only contract: produce typed nodes, typed edges, embeddings, and confidence scores.
Adapter 05
Semantic Labeling
CLIP / SigLIP for zero-shot visual classification. CLAP for audio. NER for documents. The goal: every extracted entity has a human-readable label before it reaches the graph store.
Adapter 06
Relationship Extraction
Spatial proximity, containment, and occlusion for 3D data. Co-authorship and affiliation for documents. Temporal co-occurrence for audio. Edges encode the structure that makes graph queries meaningful.
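The pipeline above can be sketched in a few lines of plain Python. This is a minimal, illustrative model of the normalized graph output and the profile-aware "scoring lens" — the names (`Node`, `Edge`, `rank`) and the blending weight are assumptions for illustration, not the platform's actual schema or API:

```python
from dataclasses import dataclass, field
import math

@dataclass
class Node:
    id: str
    type: str                 # e.g. "object", "person", "topic"
    label: str                # human-readable label from the labeling stage
    embedding: list[float]    # vector used for semantic retrieval
    confidence: float         # adapter's extraction confidence
    attrs: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str
    dst: str
    type: str                 # e.g. "near", "contains", "co_authored"
    weight: float

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(nodes: list[Node], query_vec: list[float],
         profile_vec: list[float], alpha: float = 0.7) -> list[Node]:
    """Blend query similarity with user-profile relevance (the 'scoring lens')."""
    def score(n: Node) -> float:
        return (alpha * cosine(n.embedding, query_vec)
                + (1 - alpha) * cosine(n.embedding, profile_vec))
    return sorted(nodes, key=score, reverse=True)
```

In the real system the vector math lives in pgvector and the traversal in NetworkX; the point here is only that every adapter's output reduces to these two record shapes plus a score.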

Every signal domain.
One query interface.

Each adapter translates a raw signal domain into the universal graph schema. Building a new adapter doesn't require touching the core — it only requires implementing the extraction contract.
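One way to picture that contract is as a single abstract method every adapter implements. The sketch below is hypothetical — the class names and the toy document adapter are stand-ins, not the real interface — but it shows the shape of the boundary: the core never sees raw signal, only the universal schema:

```python
from abc import ABC, abstractmethod

class Adapter(ABC):
    """The extraction contract: raw signal in, universal graph schema out."""

    @abstractmethod
    def extract(self, source: bytes) -> dict:
        """Return {"nodes": [...], "edges": [...]}.
        Each node carries a type, label, embedding, and confidence;
        each edge carries a type and weight."""

class DocumentAdapter(Adapter):
    # Toy stand-in: a real adapter would run NER and an embedding model.
    def extract(self, source: bytes) -> dict:
        text = source.decode("utf-8", errors="ignore")
        nodes = [{"id": w, "type": "token", "label": w,
                  "embedding": [float(len(w))], "confidence": 1.0}
                 for w in set(text.split())]
        return {"nodes": nodes, "edges": []}
```

Swapping in a point-cloud or audio adapter changes everything inside `extract` and nothing outside it.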

3D Spatial
PLY · OBJ · PCD · E57
Point cloud segmentation, RANSAC plane detection, CLIP-based zero-shot object labeling. Spatial relationships encoded as proximity, containment, and clearance edges.
In Development
Document
PDF · HTML · MD · DOCX
Named entity recognition for people, organizations, topics, and dates. Co-authorship and affiliation graph construction. User profile relevance scoring at extraction time.
In Development
CAD / BIM
IFC · STEP · DWG · RVT
Semantically rich building models with room types, material properties, and system relationships already encoded. Natural language querying of architectural intent vs. as-built reality.
Planned — Tier 1
Video / Live Feed
MP4 · RTSP · ROS bag
Temporal sequence of spatial scene graphs. Frame-by-frame entity tracking with identity persistence. The live-data version of the static point cloud pipeline — Phase 2 of the robotics integration.
Planned — Tier 1
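To make the wheelchair-clearance question from the thesis concrete: once the 3D adapter has encoded clearance edges, answering it is a graph lookup rather than a geometry computation at query time. A toy sketch, with all identifiers and the width threshold chosen purely for illustration:

```python
# Toy scene graph: clearance edges carry a measured width in meters.
edges = [
    {"src": "desk_1", "dst": "door_1", "type": "clearance", "width_m": 0.92},
    {"src": "desk_1", "dst": "wall_2", "type": "near", "weight": 0.8},
]

def clearance_between(edges: list[dict], a: str, b: str):
    """Find the measured clear width between two entities, if one was extracted."""
    for e in edges:
        if e["type"] == "clearance" and {e["src"], e["dst"]} == {a, b}:
            return e["width_m"]
    return None

WHEELCHAIR_MIN_M = 0.81  # illustrative minimum clear width, not a cited standard

def fits_wheelchair(edges: list[dict], a: str, b: str) -> bool:
    w = clearance_between(edges, a, b)
    return w is not None and w >= WHEELCHAIR_MIN_M
```

The agent layer's job is then translation: map the natural language question onto `clearance_between` with the right entity IDs and threshold.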

R.A.D. Lab AI
Spatial Search

This platform started from a simple question: what if you could upload a 3D scan of any space and ask it questions in plain English? The deeper we got, the more we realized the question generalizes — to documents, to audio, to any signal domain where meaning has to be extracted before it can be reasoned over.

We're at an early stage and actively looking for research collaborators — particularly in robotics, spatial computing, and signal processing — who see the same gap we do and want to work on closing it together.

Phase 1 Target: 3D Spatial Search SaaS
Adapters in Dev: 3D Spatial + Document
Looking For: Research Collaborators
Status: Active Development

// Open to Collaboration

Let's close
the gap together.

If you're working on problems where physical signal needs to become queryable knowledge — in robotics, spatial computing, medical imaging, or anywhere else — we'd like to hear from you.

Particularly interested in: robotics navigation, spatial AI, signal processing, and embodied AI research.