Capability Use Case
Real-Time Crime Center Design & Implementation
Purpose-built fusion centers that aggregate video, sensor, dispatch, and social feeds into a unified operational picture for incident command.
Executive Summary
We design and implement real-time crime centers (RTCCs) that consolidate video feeds, CAD dispatch events, gunshot detection alerts, LPR data, and field intelligence into a unified operating picture. Our RTCCs have enabled agencies to reduce average response times by 28% and improve case clearance rates by increasing the quality and speed of intelligence available to incident commanders. Every deployment is tailored to the agency's operational model, integrating with existing CAD, RMS, and VMS platforms rather than requiring wholesale system replacement.
The Challenge
Law enforcement agencies operate in fragmented information environments. Dispatch events arrive through CAD systems, video feeds live in VMS silos, LPR data sits in separate databases, gunshot detection alerts route through vendor-specific portals, and field intelligence exists in RMS systems or, worse, in individual officers' notebooks. When a critical incident unfolds, commanders must manually correlate data across 5-8 different systems, each with its own interface, authentication, and search paradigm. Minutes spent toggling between applications during an active-shooter event or pursuit directly cost lives.
Video wall infrastructure in traditional operations centers is often static—fixed camera layouts assigned to monitors with manual switching by operators. During a critical incident involving a mobile suspect or multi-site event, operators waste precious time finding the right cameras, pulling up relevant feeds, and managing wall layouts. The video wall should dynamically respond to the incident, automatically surfacing cameras near an event location and following suspect movement across jurisdictional coverage.
Data volume is a separate challenge. A mid-sized city RTCC may ingest 2,000+ camera feeds, 500+ daily CAD events, 50,000+ daily LPR reads, and dozens of sensor alerts. Without intelligent filtering and correlation, operators drown in information. The system must surface relevant signals while suppressing noise, escalating events that match threat patterns, and maintaining situational awareness without cognitive overload.
Our Approach
Our RTCC platform is built on a React + TypeScript single-page application that serves as the primary operator interface, backed by a real-time data fusion engine running on Node.js with WebSocket push for sub-second event delivery. The GIS layer uses Mapbox GL JS with custom tile sets that include building footprints, camera locations, sensor zones, patrol unit positions, and critical infrastructure overlays. Every data source—CAD, VMS, LPR, gunshot detection, field reports—feeds into a Kafka-based event bus that normalizes disparate formats into a common event schema before delivery to the operator dashboard.
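As an illustrative sketch of the normalization step, the event shape below shows how disparate sources could share one schema with temporal, spatial, and categorical dimensions. Field names (`sourceType`, `geo`, `payload`) and the `normalizeLprRead` helper are assumptions for this sketch, not the production schema:

```typescript
// Hypothetical normalized event shape shared by all source adapters.
// Field names are illustrative; the production schema differs.
type SourceType = "cad" | "vms" | "lpr" | "gunshot" | "sensor" | "social";

interface FusionEvent {
  id: string;                        // globally unique event id
  sourceType: SourceType;            // which adapter produced the event
  occurredAt: string;                // ISO-8601 timestamp from the source system
  geo: { lat: number; lon: number }; // WGS84 position
  category: string;                  // normalized taxonomy, e.g. "lpr_read"
  payload: Record<string, unknown>;  // source-specific detail, preserved verbatim
}

// Example: normalizing a (hypothetical) vendor LPR read into the common shape.
function normalizeLprRead(raw: {
  readId: string; plate: string; ts: string; lat: number; lon: number;
}): FusionEvent {
  return {
    id: `lpr-${raw.readId}`,
    sourceType: "lpr",
    occurredAt: raw.ts,
    geo: { lat: raw.lat, lon: raw.lon },
    category: "lpr_read",
    payload: { plate: raw.plate },
  };
}
```

Keeping the source-specific detail intact in `payload` lets downstream consumers correlate on the common dimensions without losing vendor fields.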
Video wall orchestration is driven by incident context. When a CAD dispatch event triggers, the system automatically identifies cameras within a configurable radius of the event location using spatial indexing, pulls live RTSP feeds from the VMS, and populates a dynamic wall layout optimized for the incident type. Pursuit events trigger corridor camera sequences along probable routes. Perimeter events display a ring of cameras around the affected zone. Operators can override any automatic selection, and the system learns from operator adjustments to improve future automatic layouts.
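A minimal sketch of the camera-selection step, with a brute-force haversine scan standing in for the spatial index described above (the `Camera` shape and function names are assumptions):

```typescript
// Simplified radius query: find cameras near an event, nearest first.
interface Camera { id: string; lat: number; lon: number; }

const EARTH_RADIUS_M = 6_371_000;

// Great-circle distance between two WGS84 points, in meters.
function haversineMeters(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Return cameras within `radiusM` of the event location, sorted by distance.
function camerasNearEvent(cameras: Camera[], lat: number, lon: number, radiusM: number): Camera[] {
  return cameras
    .map((c) => ({ c, d: haversineMeters(lat, lon, c.lat, c.lon) }))
    .filter((x) => x.d <= radiusM)
    .sort((a, b) => a.d - b.d)
    .map((x) => x.c);
}
```

In production the linear scan would be replaced by the spatial index, but the contract is the same: event location in, ranked camera list out.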
The correlation engine applies configurable rule sets that identify compound events: an LPR hit within 500 meters and 5 minutes of a gunshot detection alert, for example, automatically escalates to a priority incident with fused context. Historical pattern analysis layers on top, flagging when current events match known crime patterns (time of day, location clustering, suspect vehicle descriptions). All correlated data is packaged into shareable incident briefs that can be pushed to patrol MDTs, detective workstations, or partner agency systems via standard CAD-to-CAD interfaces.
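The LPR-plus-gunshot rule mentioned above can be sketched as a pairwise predicate. The thresholds (500 m, 5 min) come from the text; the `Evt` shape and function names are illustrative:

```typescript
// One compound-event rule: an LPR hit within 500 m and 5 min of a
// gunshot alert escalates to a priority incident.
interface Evt { type: "lpr_read" | "gunshot"; at: number; lat: number; lon: number; }

// Great-circle distance between two events, in meters.
function metersApart(a: Evt, b: Evt): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat), dLon = toRad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6_371_000 * Math.asin(Math.sqrt(h));
}

const RADIUS_M = 500;
const WINDOW_MS = 5 * 60 * 1000;

// True when the pair matches the gunshot + LPR proximity rule.
function isPriorityPair(a: Evt, b: Evt): boolean {
  const types = new Set([a.type, b.type]);
  return (
    types.has("gunshot") && types.has("lpr_read") &&
    Math.abs(a.at - b.at) <= WINDOW_MS &&
    metersApart(a, b) <= RADIUS_M
  );
}
```

In the deployed engine these thresholds are per-rule configuration rather than constants.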
Key Capabilities
Multi-Source Data Fusion
Normalizes and correlates events from CAD/RMS, VMS, LPR, gunshot detection, IoT sensors, and social media feeds into a unified event stream with automatic compound-event detection and escalation.
Dynamic Video Wall Orchestration
Incident-driven camera selection automatically surfaces relevant feeds on video walls based on event location, type, and progression, with operator override and layout learning for continuous improvement.
GIS-Centric Operational Picture
Interactive mapping with real-time positions of patrol units, camera coverage zones, sensor detection areas, and incident markers provides commanders with immediate spatial awareness of the operational environment.
Incident Brief Generation
Automated packaging of correlated intelligence—video clips, LPR reads, sensor alerts, officer reports—into structured incident briefs shareable with field units, detectives, and partner agencies within minutes of an event.
Technical Architecture
The data fusion engine consumes events from heterogeneous sources via protocol-specific adapters: CAD events via CAD-to-CAD XML over TCP (NIEM 5.0 schema), VMS alerts via ONVIF Event Service subscriptions (WS-BaseNotification), LPR reads via REST webhook callbacks, gunshot detection via vendor APIs (ShotSpotter/SoundThinking SST, Shooter Detection Systems), and patrol unit positions via AVL (Automatic Vehicle Location) feeds over APCO P25 LRRP. Each adapter normalizes incoming events to a common Protobuf schema that includes temporal, spatial, and categorical dimensions before publishing to the central Kafka cluster.
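The adapter layer described above can be sketched as a registry that routes each inbound payload to a source-specific normalizer. The payload shapes, `CommonEvent` fields, and function names here are assumptions for illustration:

```typescript
// Illustrative adapter registry: raw vendor payloads are dispatched to a
// protocol-specific normalizer keyed by source system.
interface CommonEvent { source: string; at: string; kind: string; detail: unknown; }

type Normalizer = (raw: Record<string, unknown>) => CommonEvent;

const adapters = new Map<string, Normalizer>([
  ["lpr", (raw) => ({
    source: "lpr", at: String(raw.ts), kind: "lpr_read",
    detail: { plate: raw.plate },
  })],
  ["gunshot", (raw) => ({
    source: "gunshot", at: String(raw.alertTime), kind: "shots_fired",
    detail: { rounds: raw.rounds },
  })],
]);

// Route an inbound payload through the adapter registered for its source.
function ingest(source: string, raw: Record<string, unknown>): CommonEvent {
  const normalize = adapters.get(source);
  if (!normalize) throw new Error(`no adapter registered for source: ${source}`);
  return normalize(raw);
}
```

Adding a new sensor vendor then means registering one more normalizer rather than touching the correlation engine.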
Spatial correlation uses PostGIS with GiST spatial indexes (R-tree-based) to perform sub-millisecond proximity queries. When a new event arrives, the correlation engine queries for temporally proximate events (configurable window, default 5 minutes) within a spatial radius (configurable, default 500 meters) and evaluates compound event rules expressed in a domain-specific rule language stored in PostgreSQL. Rule evaluation supports boolean logic, temporal ordering constraints, and entity-matching predicates (e.g., same vehicle color + plate partial match). Matched compound events generate escalation notifications pushed via WebSocket to operator dashboards and via SMTP/push to mobile command staff.
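The entity-matching predicate mentioned above (same vehicle color plus plate partial match) might look like the sketch below; the `VehicleSighting` shape, the 3-character minimum, and the function names are assumptions, not the deployed rule language:

```typescript
// Sketch of an entity-matching predicate: two vehicle sightings "match"
// when colors agree and one plate read is a partial of the other.
interface VehicleSighting { color: string; plate: string; }

// True when `partial` could plausibly be an incomplete read of `full`.
// Normalizes case and strips non-alphanumerics before comparing.
function platePartialMatch(full: string, partial: string): boolean {
  const f = full.toUpperCase().replace(/[^A-Z0-9]/g, "");
  const p = partial.toUpperCase().replace(/[^A-Z0-9]/g, "");
  return p.length >= 3 && f.includes(p);
}

function sightingsMatch(a: VehicleSighting, b: VehicleSighting): boolean {
  const sameColor = a.color.toLowerCase() === b.color.toLowerCase();
  return sameColor &&
    (platePartialMatch(a.plate, b.plate) || platePartialMatch(b.plate, a.plate));
}
```

The minimum-length guard keeps one- or two-character fragments from matching nearly every plate in the window.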
Video wall control uses the Userful or RGB Spectrum video wall processor API, abstracted behind a common interface that maps logical display zones to physical monitor regions. The RTCC application sends layout commands via REST API, specifying RTSP URIs for each zone, and the wall processor handles decoding and rendering. For departments using Genetec or Milestone VMS, the application leverages the VMS SDK to request transcoded streams optimized for wall display resolution, reducing bandwidth consumption for multi-megapixel cameras displayed on standard 1080p wall monitors.
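A sketch of the abstraction layer's layout command, assembled from a ranked camera list. The zone names, payload shape, and commented-out endpoint are hypothetical, not the Userful or RGB Spectrum API:

```typescript
// Hypothetical wall-layout command sent to the wall-processor abstraction.
interface ZoneAssignment { zone: string; rtspUri: string; }
interface WallLayoutCommand { wallId: string; layout: string; zones: ZoneAssignment[]; }

// Build a 2x2 layout command from an ordered camera list (nearest first).
function buildQuadLayout(wallId: string, rtspUris: string[]): WallLayoutCommand {
  const zones = rtspUris
    .slice(0, 4)
    .map((uri, i) => ({ zone: `quad-${i + 1}`, rtspUri: uri }));
  return { wallId, layout: "2x2", zones };
}

// The command would then be POSTed to the abstraction layer, e.g.:
// await fetch(`${wallApi}/walls/${cmd.wallId}/layout`, {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(cmd),
// });
```

Because the command addresses logical zones, swapping wall-processor vendors changes only the adapter behind the abstraction, not the RTCC application.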
Specifications & Standards
- Event Throughput: 10,000+ events/minute with < 200 ms correlation latency
- CAD Integration: NIEM 5.0, APCO CAD-to-CAD, NG911 i3
- Video Sources: 2,000+ simultaneous RTSP streams
- GIS Engine: Mapbox GL JS, PostGIS GiST (R-tree) spatial indexes
- Video Wall: Userful / RGB Spectrum API, up to 64 displays