Faceplate allows you to work with video streams arriving in real time from external video cameras. The system converts video streams and transmits them to mimic diagrams. For each video stream entering the system, a motion detector can be configured and tied to a tag. To set up video streams, use the editor located on the “Video” tab:
[Figure: Video editor]
The left part of the editor shows the list of configured video cameras. When a camera is selected, the video stream from that camera is displayed in the right part of the editor (RUN-TIME mode only). Control panel:
|Refresh the list of cameras. If several developers are working on a project simultaneously, this button retrieves changes made by others.
|Add a new camera
|Edit camera settings
|Delete the selected camera
Configuring camera connection settings
When adding a new video camera, the following settings dialog opens:
|The name of the source in the system. It is recommended to use names reflecting the installation location of the camera. The name must be unique within the project.
|Allows you to disconnect/connect the source in runtime mode while preserving its settings and bindings.
|Specifies the resolution of the video frame; the default is 800 × 600 px. ATTENTION! This setting significantly affects the CPU load of the Faceplate server.
|The video frame rate; the default is 10 frames/sec.
|The source of the video can be:
|COMPUTER VISION (motion detection)
|The motion detection algorithm subtracts successive video frames and responds to changes in color or brightness in selected parts of the image. The settings control the algorithm's sensitivity to color changes and the size of the captured targets.
|Specifies the minimum detectable change in color. The brightness of each pixel in a frame is characterized by a number from 0 to 255. If this value changes between successive frames by more than the configured light threshold, the algorithm regards it as detected motion.
|Defines the minimum area (in pixels) of a target to be captured. If the change covers an area smaller than this threshold, the algorithm does not respond.
|The algorithm can track several simultaneously moving targets. If two targets are closer to each other than the configured proximity threshold, the algorithm regards them as a single target.
|Relevant only if the motion detector is linked to a tag (see Reaction setup for motion detection). Determines the minimum duration (sec) of the video clip stored in the database when motion is detected.
|Settings that define the reaction to motion detection (see Reaction setup for motion detection).
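As a rough sketch only (not Faceplate's actual implementation), the light threshold and area threshold described above can be illustrated with frame differencing in Python using NumPy; the function name and parameter names here are illustrative:

```python
import numpy as np

def detect_motion(prev_frame, frame, light_threshold=25, area_threshold=50):
    """Simplified frame-differencing motion detector (illustrative sketch).

    prev_frame, frame: 2-D uint8 arrays of per-pixel brightness (0-255).
    light_threshold:   minimum brightness change counted as motion.
    area_threshold:    minimum number of changed pixels (target area).
    """
    # Widen to int16 so the subtraction cannot wrap around at 0/255.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff >= light_threshold        # pixels that changed "enough"
    # A real detector would also cluster changed pixels into targets and
    # merge targets closer than the proximity threshold; omitted here.
    return int(changed.sum()) >= area_threshold
```

A change covering fewer pixels than `area_threshold`, or dimmer than `light_threshold`, is ignored, which mirrors how the two settings suppress noise and small targets.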
Video output to a mimic diagram
To output a video stream from a configured video camera to a mimic diagram, use the “Video” graphic control. It is enough to place the element on the mimic and set the name of the configured video source in its camera property.
Reaction setup for motion detection
To define the reaction to motion detection in a video stream, you need to configure a trigger. The trigger is a tag field (see the Tag Editor). When motion is detected, the system sets the specified tag field, and it remains set for as long as motion is observed. If a message is configured for this field (see Message system), a message will be generated when motion is detected, with the option of sending it by email and/or SMS. The recorded video fragment containing the motion is attached to the message and can be viewed by the operator from the message archive or the list of active messages using the “Video” button (see the Operator's Guide). Additionally, the tag field can be bound to a memory area of the controller station (see Connections) and/or act as a trigger for a server script (see the Script Editor).
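The latching behaviour described above (the tag field is set on motion, stays set while motion continues, and a clip of at least the configured minimum duration is recorded) can be sketched as a small state machine. This is a hypothetical illustration; the class and method names are not part of the Faceplate API:

```python
class MotionTrigger:
    """Illustrative sketch of the trigger behaviour described above.

    'active' stands in for the tag field being set; the real Faceplate
    interface for writing tag fields is not shown in this manual.
    """

    def __init__(self, min_clip_sec=5):
        self.min_clip_sec = min_clip_sec  # minimum clip duration, seconds
        self.active = False               # is the tag field currently set?
        self.started = None               # time the current clip started

    def on_frame(self, motion_detected, now):
        """Process one detector result; return the tag field state."""
        if motion_detected and not self.active:
            self.active = True            # set the tag field
            self.started = now            # start recording the clip
        elif not motion_detected and self.active:
            # Keep the field set until at least min_clip_sec has elapsed,
            # so the stored clip meets the configured minimum duration.
            if now - self.started >= self.min_clip_sec:
                self.active = False       # release the tag field
        return self.active
```

For example, with `min_clip_sec=5`, motion ending two seconds after it began would leave the field set until the five-second minimum has passed.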