input.visual
Description
Reads inputs from multiple visual sources
- image or video file on local storage
- folder of images or videos
- online cloud source
- CCTV or webcam live feed
- class Node(config=None, node_path='', pkd_base_dir=None, **kwargs)
Receives visual sources as inputs.
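For orientation, below is a minimal sketch of placing input.visual at the start of a pipeline via PeekingDuck's Python API. The import paths and the Runner(nodes=...) interface follow the v1.x tutorials and are assumptions that may differ across PeekingDuck versions; the equivalent pipeline_config.yml would simply list input.visual followed by output.screen under nodes.

```python
# Minimal sketch (assumed v1.x-style imports; not guaranteed for every version):
# input.visual feeds frames into output.screen, which displays them.
from peekingduck.pipeline.nodes.input import visual
from peekingduck.pipeline.nodes.output import screen
from peekingduck.runner import Runner

# Read the default online sample video; source=0 would read from a webcam instead.
visual_node = visual.Node(
    source="https://storage.googleapis.com/peekingduck/videos/wave.mp4"
)
screen_node = screen.Node()  # show each frame in a window

runner = Runner(nodes=[visual_node, screen_node])
runner.run()
```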
- Inputs
none: No inputs required.
- Outputs
img (numpy.ndarray): A NumPy array of shape (height, width, channels) containing the image data in BGR format.
filename (str): The filename of the video/image being read.
pipeline_end (bool): A boolean that evaluates to True when the pipeline is completed. Suitable for operations that require the entire inference pipeline to be completed before running.
saved_video_fps (float): FPS of the recorded video, upon filming.
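To make these output keys concrete, here is a hedged sketch of a downstream custom node that consumes img, filename and pipeline_end. The AbstractNode import path, the self.logger attribute, and the run() signature are assumptions based on the peekingduck create-node template and may vary by version; a custom node also needs a matching YAML config declaring its input/output keys, which is omitted here.

```python
# Sketch of a custom node consuming input.visual's outputs (illustrative only;
# import path and base-class details are assumptions that may vary by version).
from typing import Any, Dict

from peekingduck.pipeline.nodes.abstract_node import AbstractNode


class Node(AbstractNode):
    """Logs basic information about every frame produced by input.visual."""

    def __init__(self, config: Dict[str, Any] = None, **kwargs: Any) -> None:
        super().__init__(config, node_path=__name__, **kwargs)
        self.frame_count = 0

    def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        if inputs["pipeline_end"]:
            # All frames have been read: a suitable point for end-of-run work.
            self.logger.info("Processed %d frames in total", self.frame_count)
            return {}

        img = inputs["img"]  # numpy.ndarray, shape (height, width, 3), BGR order
        height, width, _ = img.shape
        self.frame_count += 1
        self.logger.info(
            "%s: frame %d is %dx%d", inputs["filename"], self.frame_count, width, height
        )
        return {}
```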
- Configs
filename (str) – default = “video.mp4”.
If source is a live stream/webcam, filename defines the name of the MP4 file if the media is exported.
If source is a local file or directory of files, then filename is the current file being processed, and the value specified here is overridden.
mirror_image (bool) – default = False.
Flag to set extracted image frame as mirror image of input stream.
resize (Dict[str, Any]) – default = { do_resizing: False, width: 1280, height: 720 }.
Dimension of extracted image frame.
source (Union[int, str]) – default = https://storage.googleapis.com/peekingduck/videos/wave.mp4.
Input source can be:
- filename : local image or video file
- directory name : all media files will be processed
- http URL for online cloud source : http[s]://…
- rtsp URL for CCTV : rtsp://…
- 0 for webcam live feed
Refer to the OpenCV documentation for more technical information.
frames_log_freq (int) – default = 100. [1]
Frequency, in number of frames, at which the count of frames processed is logged in the CLI.
saved_video_fps (int) – default = 10. [1]
This is used by output.media_writer to set the FPS of the output file, and its behavior is determined by the type of input source.
If source is an image file, this value is ignored as it is not applicable.
If source is a video file, this value will be overridden by the actual FPS of the video.
If source is a live stream/webcam, this value is used as the FPS of the output file. It is recommended to set this to the actual FPS obtained on the machine running PeekingDuck (using dabble.fps).
threading (bool) – default = False. [1]
Flag to enable threading when reading frames from camera / live stream. The FPS can increase up to 30%.
There is no need to enable threading if reading from a video file.
buffering (bool) – default = False. [1]
Boolean to indicate if the threaded class should buffer image frames. If reading from a video file and threading is True, then buffering should also be True to avoid “lost frames”, which happen when the video file is read faster than it is processed. One side effect of setting threading=True and buffering=True for a live stream/webcam is that the onscreen video could appear to be playing in slow-mo. See the configuration sketch after this list for an example that combines these options.
[1] advanced configuration
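As a consolidated example, the sketch below overrides several of the configs above when constructing the node directly in Python. The keyword-argument override behavior is implied by the Node(**kwargs) signature shown earlier, but the import path is an assumption; the same keys can equally be set under input.visual in pipeline_config.yml.

```python
# Sketch: overriding input.visual configs via keyword arguments
# (assumed v1.x-style import path).
from peekingduck.pipeline.nodes.input import visual

visual_node = visual.Node(
    source=0,                  # 0 = webcam live feed
    mirror_image=True,         # flip each frame horizontally
    resize={"do_resizing": True, "width": 1280, "height": 720},
    saved_video_fps=30,        # match the webcam's real FPS (measure with dabble.fps)
    threading=True,            # advanced: threaded reading for live streams
    buffering=False,           # keep only the latest frame to avoid "slow-mo"
)
```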
Technotes:
The following table summarizes the combinations of threading and buffering:
| Sources | Threading=False (Buffering=False/True) | Threading=True, Buffering=False | Threading=True, Buffering=True |
| --- | --- | --- | --- |
| Image file | Ok | Ok | Ok |
| Video file | Ok | ! | Ok |
| Webcam, http/rtsp stream | Ok | + | !! |
Table Legend:
Ok : normal behavior
+ : potentially faster FPS
! : lost frames if source is faster than PeekingDuck
!! : “slow-mo” video, potential out-of-memory error due to buffer overflow if source is faster than PeekingDuck
Note: If threading=False, then the secondary parameter buffering is ignored, regardless of whether it is set to True/False.
Here is a video to illustrate the differences between a normal video and a “slow-mo” video using a 30 FPS webcam: the video on the right appears to be playing in slow motion compared to the normal video on the left. This happens because both threading and buffering are set to True, and the threaded input.visual reads the webcam at almost 60 FPS. Since the hardware is physically limited to 30 FPS, every frame gets duplicated, resulting in each frame being processed and shown twice, thus “stretching out” the video. The sketch below illustrates this buffering behavior in general terms.
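To make the threading/buffering trade-off concrete, here is a generic sketch built on OpenCV and the Python standard library; it is explicitly not PeekingDuck's actual implementation. With buffering enabled, every frame is queued, so a consumer slower than the source falls behind (the “slow-mo” and buffer-overflow case); with buffering disabled, only the latest frame is kept, so frames from a fast source are simply dropped (the “lost frames” case).

```python
# Conceptual sketch only (not PeekingDuck's actual implementation): a threaded
# frame reader that either buffers every frame or keeps just the latest one.
import queue
import threading

import cv2


class ThreadedReader:
    """Reads frames on a background thread, with or without buffering."""

    def __init__(self, source, buffering: bool) -> None:
        self.cap = cv2.VideoCapture(source)
        self.buffering = buffering
        # Unbounded queue when buffering, otherwise hold at most one frame.
        self.frames = queue.Queue() if buffering else queue.Queue(maxsize=1)
        threading.Thread(target=self._read_loop, daemon=True).start()

    def _read_loop(self) -> None:
        while True:
            ok, frame = self.cap.read()
            if not ok:  # end of file or stream error
                break
            if self.buffering:
                self.frames.put(frame)  # keep every frame: the queue can grow
            else:
                # Drop the stale frame (if any) so only the latest is kept.
                if self.frames.full():
                    try:
                        self.frames.get_nowait()
                    except queue.Empty:
                        pass
                self.frames.put(frame)

    def read(self):
        """Consumer side: blocks until the next frame is available."""
        return self.frames.get()
```

This mirrors the table above: a video file can be read faster than real time, so dropping frames loses data (hence buffering is recommended there), whereas a live camera only produces frames at its native FPS, so keeping just the latest frame is usually the desired behavior.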