Resources
Common questions about NEPI, what it does, who it's for, and how to get started.
What is NEPI
NEPI (Numurus Edge Platform Interface) is an open-source edge AI platform built on ROS and ROS 2. It handles hardware integration, AI model deployment, event-driven automation, data collection, and browser-based monitoring out of the box. Teams use it to build smart systems without writing the infrastructure layer from scratch. The full codebase is available at github.com/nepi-engine.
Before graphical operating systems like Windows, using a computer required specialist expertise. Windows changed that by giving everyone an interface, built-in applications, and hardware that just worked. NEPI does the same for smart systems: it gives engineers a ready-made foundation so they can focus on building their product instead of spending months writing the infrastructure underneath it.
NEPI handles roughly 90% of what most smart systems need: plug-and-play hardware drivers, AI model management and inference routing, event-driven automation, structured data collection, and a browser-based UI. Teams build the remaining 10%: their specific detection logic, automation rules, and application layer.
Yes. NEPI is open source. The full source code is publicly available at github.com/nepi-engine. Teams can inspect, modify, extend, and deploy NEPI without vendor lock-in. There is no proprietary black box.
Edge AI is the practice of running AI models directly on a local device rather than sending data to a remote server or cloud for processing. In field robotics and autonomous systems, edge AI is critical because many deployment environments have limited or no internet connectivity. The AI needs to run on the device itself, in real time, at the point of data collection. NEPI is purpose-built infrastructure for exactly this use case. It manages how AI models are loaded, connected to live sensor data, and used to trigger automated actions without any cloud dependency.
Sensor fusion is the process of combining data from multiple sensors, such as cameras, lidar, sonar, GPS, and IMUs, to build a more accurate and complete picture of the environment than any single sensor can provide alone. Autonomous systems depend on sensor fusion because no single sensor works reliably in all conditions. Cameras lose performance in darkness. GPS fails underground and underwater. Lidar struggles in rain and fog. By fusing multiple sensor streams, a system can maintain reliable situational awareness even when individual sensors degrade. NEPI provides the hardware abstraction and data management layer that makes sensor fusion across heterogeneous hardware practical without writing custom integration code for each sensor combination.
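The statistical core of sensor fusion can be shown in a few lines. This is a generic inverse-variance-weighting sketch, not NEPI code; the sensor values and variances are made up for the example.

```python
def fuse(estimates):
    """Combine independent estimates of one quantity by
    inverse-variance weighting: noisier sensors get less weight."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value, fused variance

# Depth from sonar (10.2 m, variance 0.5) and from a pressure
# sensor (10.6 m, variance 0.1): the result leans toward the
# pressure sensor and is more certain than either input alone.
depth, var = fuse([(10.2, 0.5), (10.6, 0.1)])
```

The same principle, generalized to covariance matrices in Kalman filters, is what production fusion pipelines build on.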
Many autonomous systems operate in environments where cloud connectivity is unavailable, unreliable, or too slow for real-time decisions. A subsea ROV operating at depth has no wireless connection. A drone in a GPS-denied environment cannot depend on a network link. A maritime vessel in open ocean may have intermittent or bandwidth-limited satellite connectivity. Even where connectivity exists, sending raw sensor data to the cloud and waiting for a response introduces latency that is unacceptable for time-critical actions like obstacle avoidance or automated threat detection. Edge AI solves this by running AI inference directly on the vehicle or device, with no network dependency. NEPI is designed for exactly these conditions.
A hardware abstraction layer is a software interface that sits between physical hardware and the applications that use it. It translates the specific, often proprietary communication protocols of each sensor or actuator into a standardized interface that application code can use without knowing anything about the underlying hardware. Without one, a team that swaps one camera for another must rewrite every piece of software that uses camera data. With one, only the driver changes. In robotics and autonomous systems, hardware abstraction layers are essential because teams regularly swap sensors, upgrade components, and port systems to new hardware. NEPI provides hardware abstraction across cameras, lidar, sonar, GPS, IMUs, pan-tilt systems, and robotic controllers through its driver framework built on ROS 2.
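In code, a hardware abstraction layer boils down to programming against an interface instead of a vendor SDK. The Python sketch below uses hypothetical class names (it is not NEPI's actual driver API) to show why swapping hardware only touches the driver:

```python
from abc import ABC, abstractmethod

class Camera(ABC):
    """The standardized interface application code depends on."""
    @abstractmethod
    def read_frame(self) -> bytes:
        ...

class VendorACamera(Camera):
    def read_frame(self) -> bytes:
        # A real driver would call Vendor A's proprietary SDK here.
        return b"frame-from-vendor-a"

class VendorBCamera(Camera):
    def read_frame(self) -> bytes:
        return b"frame-from-vendor-b"

def inspect(cam: Camera) -> bytes:
    # Application logic never names a specific brand, so swapping
    # VendorACamera for VendorBCamera changes nothing here.
    return cam.read_frame()
```

Only the two driver classes know anything about hardware; everything downstream of `inspect` is unaffected by a camera swap.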
Is it right for my project
Engineering teams building robotics, drone, subsea, and autonomous systems products. Researchers who need reliable field data collection. OEMs adding AI to existing hardware products. Educators and student teams in MATE ROV and FIRST Robotics programs. If your team needs to connect sensors, run AI at the edge, and automate actions without building the stack underneath, NEPI fits.
Building your own middleware means writing hardware drivers, ROS plumbing, AI deployment pipelines, automation logic, and a monitoring interface from scratch. That work typically takes months and requires ongoing maintenance as hardware and software dependencies change. NEPI is that infrastructure, production-ready and open source. Teams that adopt NEPI skip months of foundation work and reach field-ready prototypes in days. The tradeoff: NEPI covers what every smart system needs in common. You build what is unique to your product.
AWS Greengrass and Azure IoT Edge are designed for IT teams managing enterprise device fleets from the cloud. They require connectivity and are not built for field robotics, sensor fusion, or offline AI inference. NEPI runs fully at the edge with no cloud dependency. It is built on ROS 2 for native robotics compatibility and handles the hardware abstraction and AI orchestration layer those platforms do not address.
NEPI is a production-ready platform for deploying AI on NVIDIA Jetson hardware. It runs on Jetson Nano, Xavier NX, Orin NX, and Orin AGX. NEPI handles driver configuration, AI model loading and inference, and automation on top of the Jetson compute layer. Teams using Jetson get a complete edge AI environment without writing the middleware themselves.
Based on documented customer deployments, teams typically move from project start to field-ready system in weeks to months rather than the years a custom build requires. WESMAR delivered AI-enabled sonar products to customers within five months with a two-engineer team. Ocean Aero completed 360-degree automated maritime threat detection on an autonomous surface vehicle in six months.
Cloud AI sends sensor data from the device to a remote server for processing and returns results over a network connection. Edge AI runs the AI models directly on the local device, with no data leaving the system and no network dependency. For robotics and autonomous systems in field environments, edge AI is often the only viable approach. Subsea systems, remote drones, maritime vessels, and defense platforms regularly operate where network connectivity is unavailable or unreliable. NEPI is built specifically for edge AI deployments where the system must be self-contained.
Maritime and ocean technology, defense and autonomous systems, industrial inspection, drone and UAS, underwater ROV, university research, and STEM education. Documented deployments include autonomous surface vehicles (Ocean Aero), subsea inspection ROVs (VideoRay), AI-enabled sonar products (WESMAR), autonomous ferry research (UW Tacoma), and student drone programs (Lane Community College). NEPI is hardware-agnostic and industry-agnostic. The same platform runs across all of these without modification.
Based on real-world robotics projects, building a production-ready sensor integration layer, AI model management system, automation engine, and monitoring interface from scratch typically takes a team of two to four engineers six months to over a year. This estimate covers hardware drivers, ROS plumbing, AI pipeline code, data management, and a usable interface, but not the actual application the team was hired to build. Many teams underestimate this timeline because the scope of low-level infrastructure work only becomes clear once the project is underway. NEPI exists specifically to eliminate this phase. Teams that start with NEPI skip straight to building their application.
NEPI has been deployed on autonomous surface vehicles for maritime threat detection and domain awareness. Ocean Aero used NEPI on its TRITON autonomous underwater and surface vehicle as part of a Defense Innovation Unit program, automating 360-degree threat detection across five camera inputs within a six-month development cycle. For maritime autonomous systems, NEPI's ability to run fully offline at the edge is critical, as open-ocean operations frequently occur beyond reliable network coverage. NEPI handles the sensor integration, AI deployment, and automation layer so teams can focus on mission-specific logic rather than infrastructure.
NEPI has been used for automated AI-driven inspection on ROV platforms. VideoRay integrated NEPI to add inspection automation to its ROV control system, connecting cameras, sonar, and navigation sensors through a single platform with AI models running on the vehicle itself. For subsea applications, on-device processing is not optional. There is no cloud connection at depth. NEPI's architecture is designed for exactly this constraint: all AI inference, automation, and data collection happen locally on the ROV hardware with no network dependency.
Yes. This is one of the most common use cases for NEPI. WESMAR built and shipped AI-enabled sonar products to customers within five months using a two-engineer team. Without NEPI, that project would have required significantly more engineers and a much longer timeline just to build the underlying infrastructure. NEPI handles the 90% of software that is common across smart systems so that a small team can focus entirely on the 10% that is specific to their product. For early-stage companies and startups, this is the difference between shipping a product and spending your runway on infrastructure.
Technical fit
NEPI runs on NVIDIA Jetson platforms (Nano, Xavier NX, Orin NX, Orin AGX) and x86 Linux systems. The Docker container runs on any standard Linux laptop with no dedicated edge hardware required. Numurus also sells turnkey hardware with NEPI pre-installed for teams that want a ready-to-deploy system.
No. NEPI is built on top of ROS and ROS 2. It extends your existing ROS environment with standardized hardware drivers, AI model management, automation tools, and a browser-based UI that most teams would otherwise build themselves. Your existing ROS nodes, packages, and workflows stay intact.
Yes. NEPI is model-agnostic. Import your own trained models and run them at the edge. NEPI manages loading, inference, routing outputs to downstream processes, and triggering automation based on detection results. Supported frameworks include Darknet/YOLO and custom inference formats.
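Conceptually, model-agnostic management means any model that satisfies a common inference contract can be plugged in. This is an illustrative sketch of the pattern, not NEPI's API; the registry and the callable signature are assumptions for the example:

```python
from typing import Callable

# A model is anything that maps an input frame to a list of
# detection dicts; the framework behind it does not matter.
Model = Callable[[bytes], list[dict]]

class ModelRegistry:
    def __init__(self) -> None:
        self._models: dict[str, Model] = {}

    def load(self, name: str, model: Model) -> None:
        self._models[name] = model

    def infer(self, name: str, frame: bytes) -> list[dict]:
        # Dispatch to whichever model was registered under `name`.
        return self._models[name](frame)

registry = ModelRegistry()
registry.load("demo", lambda frame: [{"label": "boat", "confidence": 0.9}])
detections = registry.infer("demo", b"raw-frame")
```

A Darknet/YOLO model and a custom model look identical to the rest of the system once wrapped in the same contract.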
No. NEPI runs fully offline once installed. All AI inference, automation, and data collection happen on-device. An internet connection is only needed for pulling updates or configuring optional remote dashboard access. This is by design for subsea, field, and connectivity-denied deployments.
NEPI supports cameras (2D and 3D imaging), lidar, sonar, GPS and INS navigation systems, IMUs, pan-tilt actuators, lights and strobes, and robotic control systems. Hardware is connected through NEPI's driver abstraction layer, which provides a standardized interface so downstream applications work regardless of the specific hardware brand or model. The full driver support table is at nepi.com/documentation/.
Yes. The NEPI Docker container runs on any standard Linux laptop or desktop with no dedicated edge hardware required. Pull the container with docker pull numurusnepi/nepi:latest and follow the setup guide. This makes NEPI accessible for evaluation, prototyping, student programs, and development work before committing to field hardware.
Yes. NEPI is exactly that. Most teams building on ROS end up writing the same set of hardware drivers, AI management tools, automation logic, and user interfaces from scratch on every project. NEPI provides all of that as a production-ready, open-source platform on GitHub at github.com/nepi-engine. Teams adopt it as the foundation and build their specific application on top.
The fastest path is to use a platform that handles the infrastructure so your team only builds the application. NEPI provides plug-and-play hardware drivers for cameras, lidar, GPS, and IMUs; AI model management for loading and running detection models on-device; event-driven automation for triggering actions from AI outputs; and a browser-based interface for monitoring and configuration. On NVIDIA Jetson hardware, NEPI can be running with hardware connected within a single day. Teams using NEPI have gone from project start to field-ready drone AI systems in weeks rather than months.
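Event-driven automation reduces to a rule that watches detection outputs and fires an action when a condition is met. A minimal sketch, with the rule shape and field names assumed for illustration:

```python
def make_trigger(label, min_confidence, action):
    """Return a handler that fires `action` for every batch of
    detections containing `label` above `min_confidence`."""
    def handle(detections):
        hits = [d for d in detections
                if d["label"] == label and d["confidence"] >= min_confidence]
        if hits:
            action(hits)
        return hits
    return handle

alerts = []
on_person = make_trigger("person", 0.8, alerts.append)
on_person([{"label": "person", "confidence": 0.91},
           {"label": "car", "confidence": 0.95}])
```

In a real system the action would be a snapshot capture, an actuator command, or an operator alert rather than a list append.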
In GPS-denied and connectivity-denied environments, all AI processing must happen on the device itself. There is no option to offload computation to the cloud or rely on external positioning data. Systems operating in these conditions need on-device AI inference, local sensor fusion for positioning and awareness, and automation logic that can operate without human-in-the-loop oversight. NEPI is built for this. It runs fully offline, manages AI models locally, and supports navigation sensor integration including IMUs and alternative positioning systems. This is why NEPI has been adopted for subsea, maritime, and defense applications where network connectivity cannot be assumed.
Getting started
The NEPI Container is a Docker image that packages the full NEPI platform into a single portable environment. It includes NEPI core services, hardware drivers, AI framework integrations, and the browser-based interface. It runs on any Linux laptop (x86) or NVIDIA Jetson hardware. Pull it with: docker pull numurusnepi/nepi:latest
The fastest path is the Docker container. Run docker pull numurusnepi/nepi:latest on any Linux machine and follow the setup guide at nepi.com/container/get-started/. Most teams have NEPI installed and running with hardware connected within a single day. No account or sales call required.
No. NEPI is built to make edge AI accessible beyond specialist software teams. Plug-and-play drivers handle hardware recognition and configuration. AI models load and connect through a browser-based UI. Low-code automation tools handle common workflow tasks. Advanced teams can go deeper when needed, but ROS expertise is not required to get a working system running.
Documentation: nepi.com/documentation. Tutorials: nepi.com/tutorials. Videos: nepi.com/videos. Community forum: community.nepi.com. Discord: discord.gg/7WXgVUgXmX. Source code: github.com/nepi-engine.
Licensing and pricing
Yes, for education, evaluation, research, and prototyping. The open-source license is $0. Educational licenses are $100 one-time for non-commercial academic and student team use. Commercial licenses start at $1,000 per device for production deployments. Enterprise volume licensing starts at $20,000 per year. Full pricing breakdown at nepi.com/software/.
Yes. Pull the container, run it on your hardware, build a prototype. The open-source license covers full evaluation with no time limit. Commercial licensing applies when you productize and ship, not before. You own your development work throughout.
Nothing. There is no vendor lock-in. NEPI is open source and your application code belongs to you. If you build automation scripts, custom drivers, or application logic on top of NEPI, you own that code and can continue using, modifying, or migrating it regardless of your relationship with Numurus.
Yes. NEPI has been deployed in defense-adjacent programs including Ocean Aero's work with the Defense Innovation Unit on the Unmanned Systems for Maritime Domain Awareness program. DARPA is listed among organizations that have used NEPI-based systems. Numurus holds US government clearances. For defense-specific deployment requirements, licensing, or technical questions, contact the team at nepi.com/contact/.
Still have questions?
Talk to the team or browse the full documentation.