Output
In a similar manner to how the "Data" group allows managing sources, the endpoints in this group allow different outputs to be registered, listed and deleted.
Once an output has been registered, it can be attached to a single active process. Four types of output are available:
rtsp
: a processed video feed through the RTSP protocol. The feed can be consumed directly, as the API acts as a server.

rtmp
: a processed video feed through the RTMP protocol. The feed must be consumed by a server on the user's side.

detection_data
: a stream of detection metadata in JSON format through a WebSocket connection. This output has a simpler integration workflow, as it does not need to be pre-registered and becomes automatically available when a process starts. A sample server for reading this data stream is also provided below.

drone_data
: a stream of the telemetry data received from the drone through a WebSocket connection.

The type and the URL must be provided when registering an output. The URL's scheme must be in accordance with the communication protocol (RTMP/RTSP). Each output type has its own requirements, which are detailed below.
Output types
RTMP Output
When registering an RTMP output (POST /output), the URL must contain the IP address and port of the server to which you are streaming, as well as the path of the stream on that server, e.g. rtmp://1.2.3.4:1935/output_rtmp_stream. The body for such a request would be as follows:
{
"alias": "Correct RTMP",
"type": "rtmp",
"url": "rtmp://1.2.3.4:1935/output_rtmp_stream"
}
This output requires a server on the user's side. There are multiple open-source and paid solutions available for setting up such a server, a popular one being MediaMTX.
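As an illustration (this helper is not part of the API), the registration body can be validated client-side before sending it to POST /output; the base URL and authentication are omitted here:

```python
import json
from urllib.parse import urlparse

def build_rtmp_output(alias: str, url: str) -> dict:
    """Build a registration body for POST /output, checking that the
    URL has an rtmp:// scheme, a host, a port and a stream path."""
    parsed = urlparse(url)
    if parsed.scheme != "rtmp":
        raise ValueError("RTMP outputs require an rtmp:// URL")
    if not parsed.hostname or not parsed.port:
        raise ValueError("RTMP URLs must include the server IP and port")
    if not parsed.path or parsed.path == "/":
        raise ValueError("RTMP URLs must include the stream path")
    return {"alias": alias, "type": "rtmp", "url": url}

body = build_rtmp_output("Correct RTMP", "rtmp://1.2.3.4:1935/output_rtmp_stream")
payload = json.dumps(body)  # this JSON would be the body of POST /output
```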
RTSP Output
The rtsp output differs from the rest of the video outputs. When registering an RTSP output (POST /output), the URL must contain only the path of the stream; no IP address or port is given, e.g. "rtsp://output_rtsp_stream". This output does not require a server on the user's side: an ingestable RTSP stream is provided on our end.
Sample body for the RTSP Output:
{
"alias": "Correct RTSP",
"type": "rtsp",
"url": "rtsp://output_rtsp_stream"
}
The ingestable stream created when attaching an RTSP output to an active process (see Workflow) would be as follows for the sample body provided: rtsp://senseaeronautics.com:8554/{username}/output_rtsp_stream
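As a sketch of this mapping, using the host and port shown above and a hypothetical username, the ingest URL can be derived from a registered RTSP output URL:

```python
def rtsp_ingest_url(username: str, registered_url: str) -> str:
    """Derive the ingestable stream URL for a registered RTSP output.

    Registered RTSP URLs carry only the stream path (e.g.
    "rtsp://output_rtsp_stream"); the server exposes the stream at
    rtsp://senseaeronautics.com:8554/{username}/{path}.
    """
    prefix = "rtsp://"
    if not registered_url.startswith(prefix):
        raise ValueError("RTSP outputs require an rtsp:// URL")
    path = registered_url[len(prefix):]
    if "/" in path or ":" in path:
        raise ValueError("RTSP URLs must contain only the stream path")
    return f"rtsp://senseaeronautics.com:8554/{username}/{path}"

# "alice" is a placeholder username for illustration only.
url = rtsp_ingest_url("alice", "rtsp://output_rtsp_stream")
```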
Detection Data Output
This output provides a stream of detection metadata in JSON format. It is the most lightweight output, as it does not require a video stream, and it enables a more advanced integration with the API. It is available via WebSocket. This output is always available and does not have to be registered or set up by the user.
The WebSocket can be accessed at wss://{product}.senseaeronautics.com/process/{process_uuid}/detection_data, and only allows a single client. If a connection with a client is established, no new connections will be accepted until the current connection is closed.
This script illustrates how messages can be received and decoded on the client side.
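A minimal client along those lines could look as follows. This is a sketch: it assumes the third-party websockets package for the connection (any WebSocket client works), and the URL placeholders must be filled in; the decoding itself only needs the standard json module:

```python
import asyncio
import json

def decode_detection_message(raw: str) -> dict:
    """Decode one detection_data message and check the top-level fields."""
    message = json.loads(raw)
    for key in ("timestamp", "source_uri", "process_uuid", "active_tracks"):
        if key not in message:
            raise ValueError(f"missing field: {key}")
    return message

async def read_detection_data(url: str) -> None:
    # Requires the third-party "websockets" package; imported lazily so
    # the decoder above stays stdlib-only. Only one client may connect.
    import websockets
    async with websockets.connect(url) as ws:
        async for raw in ws:
            message = decode_detection_message(raw)
            print(len(message["active_tracks"]), "active tracks")

# Usage ({product} and {process_uuid} are placeholders):
# asyncio.run(read_detection_data(
#     "wss://{product}.senseaeronautics.com/process/{process_uuid}/detection_data"))
```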
Message format
The detection data message follows the format:
{
"timestamp": "2024-05-04T08:26:15.123Z",
"source_uri": "rtsp://input.stream:1234",
"process_uuid": "3f2b7a70-9c4b-4a6a-a28d-3b2a4a4f3c13",
"active_tracks":
[
{...track_1...},
{...track_2...},
...
]
}
Timestamp is given in Zulu time.
Each track follows the format:
{
"track_id": 0,
"class_id": 1,
"class_name": "vehicle",
"bbox": [263.0, 115.0, 14.0, 9.0],
"scaled_bbox": [0.20546875, 0.1597222222222222, 0.0109375, 0.0125],
"confidence": 24,
"latitude": 42.231233,
"longitude": -8.333333,
"altitude: 720,
"distance: 50,
}
Fields are the following:
timestamp
: Detection timestamp in Zulu time.

source_uri
: Input stream URI.

process_uuid
: UUID of this active process.

active_tracks
: List of active tracks on the stream. Each track has the following keys:

track_id
: The ID assigned to a tracked detection. Takes the value 0 when tracking is not enabled.

class_id
: The ID of the class of the detection.

class_name
: The name of the class of the detection.

bbox
: The bounding box of the detection in the original resolution of the input video. The format is [x, y, width, height].

scaled_bbox
: The bounding box of the detection given as a fraction of the input resolution on each axis. The format is [x, y, width, height].

confidence
: The confidence of the detection, between 0 and 100%.

latitude
: Geographic latitude coordinate of the detection.

longitude
: Geographic longitude coordinate of the detection.

altitude
: Altitude of the detection in meters (ASL).

distance
: Distance from the observer to the detected object in meters.
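To illustrate how bbox and scaled_bbox relate: multiplying each scaled component by the input width and height recovers the pixel box. For the sample track above this works out for a 1280×720 input (a resolution inferred from the sample values, used here only for illustration):

```python
def scaled_to_pixels(scaled_bbox, width, height):
    """Convert a scaled [x, y, w, h] box (fractions of the input
    resolution) back to pixel coordinates."""
    x, y, w, h = scaled_bbox
    return [x * width, y * height, w * width, h * height]

pixel_bbox = scaled_to_pixels(
    [0.20546875, 0.1597222222222222, 0.0109375, 0.0125], 1280, 720)
# approximately the sample bbox [263.0, 115.0, 14.0, 9.0]
```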
Drone data output
This output provides the telemetry obtained by our system in JSON format. It is available via WebSocket on wss://{product}.senseaeronautics.com/process/{process_uuid}/drone_data. The same script available in the previous section can be used to read the drone data on the client side. An example:
{
"timestamp": 1231234,
"latitude": 37.7749,
"longitude": -122.4194,
"speed": 15.2,
"drone_pitch": -5.3,
"drone_yaw": 120.0,
"drone_roll": 2.1,
"sensor_hfov": 90.0,
"sensor_vfov": 60.0,
"camera_pitch": -3.5,
"camera_yaw": 118.5,
"camera_roll": 0.0,
"altitude": 150.7,
"rel_altitude": 30.5
}
Where the fields are:
timestamp
: Detection time in Unix epoch time.

latitude
: Geographic latitude coordinate.

longitude
: Geographic longitude coordinate.

speed
: Speed of the drone.

drone_pitch
: Drone inclination (pitch) in degrees.

drone_yaw
: Drone orientation (yaw) in degrees.

drone_roll
: Drone lateral rotation (roll) in degrees.

sensor_hfov
: Sensor horizontal field of view in degrees.

sensor_vfov
: Sensor vertical field of view in degrees.

camera_pitch
: Camera inclination (pitch) in degrees.

camera_yaw
: Camera orientation (yaw) in degrees.

camera_roll
: Camera lateral rotation (roll) in degrees.

altitude
: Absolute altitude (ASL) in meters.

rel_altitude
: Relative altitude in meters.
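The same lightweight validation shown for detection data can be applied to telemetry messages. This sketch (a client-side helper, not part of the API) checks that all documented fields are present:

```python
import json

DRONE_FIELDS = (
    "timestamp", "latitude", "longitude", "speed",
    "drone_pitch", "drone_yaw", "drone_roll",
    "sensor_hfov", "sensor_vfov",
    "camera_pitch", "camera_yaw", "camera_roll",
    "altitude", "rel_altitude",
)

def decode_drone_message(raw: str) -> dict:
    """Decode one drone_data message, checking all documented fields."""
    message = json.loads(raw)
    missing = [f for f in DRONE_FIELDS if f not in message]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return message
```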
Output resolution
By default, outputs will have the same resolution as the associated input (outputs with this configuration will have their settings field set to "null"). A different resolution can be set by providing one of the available resolution presets in the settings field, as follows:
{
"alias": "Different resolution RTSP",
"type": "rtsp",
"url": "rtsp://output_rtsp_stream",
"settings": "720p"
}
Available resolutions are: 360p, 720p and 1080p.
This additional key can be added to any of the outputs, though it only makes sense on video outputs: detection_data, being a stream of JSON data, does not have a resolution. As explained before, bounding boxes are always given with respect to the original resolution of the input video. Thus, this key is simply ignored for detection_data outputs.
This allows configuring different resolutions for the output stream. For details on the workflow for enabling an output, see Workflow.
Workflow
The workflow to use an output was already outlined in the docs introduction, but it can be summed up in the following steps:
- Start your process, which will have a unique process_uuid.
- Post a new output, as explained in the previous section, or pick an existing one. This will give you an output_uuid.
- Attach the output to the running process with POST /process/{process_uuid}/{output_uuid}.

Remember this workflow does not apply to the WebSocket data streams, which are readily available as soon as the process starts.
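Assuming a process is already running, the two documented requests can be sketched as plain (method, path, body) tuples; both UUIDs below are placeholders for values the real API returns:

```python
def attach_output_requests(process_uuid: str, output_body: dict,
                           output_uuid: str) -> list:
    """Describe the documented request sequence: register the output,
    then attach it to an already running process."""
    return [
        ("POST", "/output", output_body),
        ("POST", f"/process/{process_uuid}/{output_uuid}", None),
    ]

requests_ = attach_output_requests(
    "3f2b7a70-9c4b-4a6a-a28d-3b2a4a4f3c13",  # process_uuid from the sample above
    {"alias": "Correct RTSP", "type": "rtsp", "url": "rtsp://output_rtsp_stream"},
    "11111111-2222-3333-4444-555555555555",  # hypothetical output_uuid
)
```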
A flux diagram of attaching an output to a process is shown in the following image:

Note that multiple outputs may be attached to the same process in order to visualize it on different resolutions, protocols and formats.
Extra: streaming response output
There is an additional endpoint in the output group: /feed, which provides a basic HTTP stream of the processed frames.
It is not optimized for performance or integration, but it is useful for quick testing, and thus is provided in the response when a process is started.