import torchvision
video_path = "path_to_a_test_video"
reader = torchvision.io.VideoReader(video_path, "video")
reader.seek(2.0)
frame = next(reader)
VideoReader implements the iterable API, which makes it well suited for use
in conjunction with itertools for more advanced reading.
As such, we can use a VideoReader instance inside for loops:
frames = []
reader.seek(2)
for frame in reader:
    frames.append(frame['data'])
# additionally, `seek` implements a fluent API, so we can do
for frame in reader.seek(2):
    frames.append(frame['data'])
With itertools, we can read all frames between 2 and 5 seconds with the
following code:
import itertools
for frame in itertools.takewhile(lambda x: x['pts'] <= 5, reader.seek(2)):
    frames.append(frame['data'])
and similarly, reading 10 frames after the 2s timestamp can be achieved
as follows:
for frame in itertools.islice(reader.seek(2), 10):
    frames.append(frame['data'])
Each stream descriptor consists of two parts: the stream type (e.g. 'video') and
a unique stream id (which is determined by the video encoding).
This way, if the video container contains multiple
streams of the same type, users can access the one they want.
If only the stream type is passed, the decoder auto-detects the first stream of that type.
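For illustration, a minimal sketch (it assumes the container behind video_path
carries a second video stream and an audio stream, which not every file does):
# hypothetical container with two video streams: "video:1" selects the
# second one, while plain "video" would auto-detect the first
reader = torchvision.io.VideoReader(video_path, "video:1")
# the first audio stream is selected the same way
audio_reader = torchvision.io.VideoReader(video_path, "audio:0")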
Parameters:
src (string, bytes object, or tensor) – The media source.
If string-type, it must be a file path supported by FFMPEG.
If bytes, it should be an in-memory representation of a file supported by FFMPEG.
If Tensor, it is interpreted internally as a byte buffer.
It must be one-dimensional, of type torch.uint8.
stream (string, optional) – descriptor of the required stream, followed by the stream id,
in the format {stream_type}:{stream_id}. Defaults to "video:0".
Currently available stream types include ['video', 'audio'].
num_threads (int, optional) – number of threads used by the codec to decode the video.
The default value (0) enables multithreading with a codec-dependent heuristic.
Performance will depend on the version of the FFMPEG codecs in use.
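A minimal sketch of the three accepted source types ("video.mp4" is a
placeholder path, not a file shipped with torchvision):
import torch
import torchvision

# 1. from a file path
reader = torchvision.io.VideoReader("video.mp4")

# 2. from an in-memory bytes object holding the encoded file
with open("video.mp4", "rb") as f:
    reader = torchvision.io.VideoReader(f.read())

# 3. from a one-dimensional uint8 tensor, decoded with 4 threads;
# bytearray gives torch.frombuffer a writable buffer
with open("video.mp4", "rb") as f:
    buf = torch.frombuffer(bytearray(f.read()), dtype=torch.uint8)
reader = torchvision.io.VideoReader(buf, "video:0", num_threads=4)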
Examples using VideoReader:
Optical Flow: Predicting movement with the RAFT model
Video API
get_metadata() → Dict[str, Any][source]
Returns video metadata
Returns:
dictionary containing duration and frame rate for every stream
Return type:
(dict)
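A short sketch of inspecting the metadata (the exact keys shown, e.g. 'fps',
reflect a typical video stream and may vary with the container):
reader = torchvision.io.VideoReader(video_path, "video")
md = reader.get_metadata()
# e.g. {'video': {'duration': [10.0], 'fps': [29.97]}, 'audio': {...}}
print(md['video']['duration'], md['video']['fps'])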
seek(time_s: float, keyframes_only: bool = False) → VideoReader[source]
Seek within current stream.
Parameters:
time_s (float) – seek time in seconds
keyframes_only (bool) – if True, seek only to keyframes
The current implementation is a so-called precise seek: following a
seek, the next call to next() returns the frame with the exact
timestamp, if it exists, or the first frame with a timestamp larger
than time_s.
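A brief sketch contrasting the two modes (in keyframe mode, the exact frame
returned depends on where the encoder placed keyframes):
# precise seek: next() returns the first frame with pts >= 2.0
frame = next(reader.seek(2.0))

# keyframe-only seek trades precision for speed: the decoder jumps to
# the nearest keyframe, so the returned frame may be earlier than 2.0s
frame = next(reader.seek(2.0, keyframes_only=True))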
set_current_stream(stream: str) → bool[source]
Set current stream.
Explicitly define the stream we are operating on.
Parameters:
stream (string) – descriptor of the required stream. Defaults to "video:0"
Currently available stream types include ['video', 'audio'].
Each descriptor consists of two parts: the stream type (e.g. 'video') and
a unique stream id (which is determined by the video encoding).
This way, if the video container contains multiple
streams of the same type, users can access the one they want.
If only the stream type is passed, the decoder auto-detects the first stream
of that type and returns it.
Returns:
True on success, False otherwise
Return type:
(bool)
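A minimal sketch, assuming video_path points to a container that also
carries an audio stream:
reader = torchvision.io.VideoReader(video_path, "video")
if reader.set_current_stream("audio"):  # auto-detects the first audio stream
    # iteration now yields audio chunks; 'data' holds waveform tensors
    audio_chunks = [chunk['data'] for chunk in reader]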