Video editing system
Sunday, 08 October 2006
Video editing systems are used to combine selected video scenes into a desired sequence. A video editor communicates with and synchronizes one or more video tape recorders (VTRs) and peripheral devices, allowing edits accurate to a single video field or frame. The user issues commands through a keyboard, and the editor reports information back on a monitor. In editing, the editor defines and shapes the video and/or audio until the intended message is conveyed. Video editors in the film and broadcasting industries apply their skill and experience to the great variety of video productions that reach the market.

Technology for editing digital video has progressed to the point where it can be readily processed and handled on computers. Video production systems allow digital video to be captured, edited, and displayed for various purposes, such as broadcast television and film and video post-production. In general, video is edited by dividing it into small continuous segments, termed "cuts," and rearranging the cuts as the editor intends. Editing is also required to erase unwanted pictures and retain only those that are needed after a set of pictures has been photographed.

Most video editing programs assemble clips in a storyboard that resembles a picture book, with the clips shown in sequence. Many programs also include a timeline, a linear representation of the clips that simplifies production-wide effects such as audio tracks and logo overlays. When special effects are to be introduced within the sequence of clips, an editor modifies an image either by removing a portion of it or by specifying where an effect is to be placed. The image can be modified in a number of known ways, such as pixel-by-pixel intraframe manipulation or other "painting" programs.
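The storyboard-and-timeline model described above can be sketched as a simple data structure: an ordered list of cuts whose total duration is what production-wide effects span. This is a minimal illustration, not any particular product's design; the class and field names (`Clip`, `Timeline`, `rearrange`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str          # label shown in the storyboard
    start_frame: int   # first frame used from the source file
    length: int        # number of frames in this cut

@dataclass
class Timeline:
    clips: list[Clip] = field(default_factory=list)

    def total_frames(self) -> int:
        # Production-wide effects (audio tracks, logo overlays)
        # span this whole duration.
        return sum(c.length for c in self.clips)

    def rearrange(self, order: list[int]) -> None:
        # Re-order cuts, as an editor would in the storyboard.
        self.clips = [self.clips[i] for i in order]

timeline = Timeline([Clip("intro", 0, 90), Clip("scene-2", 450, 120)])
timeline.rearrange([1, 0])
print([c.name for c in timeline.clips])  # ['scene-2', 'intro']
print(timeline.total_frames())           # 210
```

Rearranging the list leaves the source files untouched, which is exactly what distinguishes this style of editing from physically cutting and splicing.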
Several typical video editing techniques currently exist. One, referred to as "assemble editing," assembles the image scenes one after another. Another, referred to as "insert editing," incorporates a desired picture into a base image.
Generally, video editing may be divided into two categories: linear video editing and non-linear video editing. Linear video editing systems, such as those for videotape and photographic film, are relatively old; in linear editing, the images are handled in sequential order. By contrast, in non-linear editing, scenes may be captured in any order and later arranged into the desired sequence. Because such editors are freed from the linear constraints of tape-based systems, digital-storage-based systems are commonly referred to as non-linear. Whether a linear or non-linear approach is taken generally depends on the video system to be used.

Non-linear video editing systems allow a user to join, manipulate, and/or modify digital or digitized information from various video sources to create a finished work for rendering to an appropriate storage medium or output. These systems commonly process video stored in a subsampled component video format. There are a number of such formats, for example YCrCb 4:2:2 video data; data in this format may be received as RGB component video data that is converted into YCrCb data.

A non-linear video editing system stores video and audio data as files on storage media such as a hard disk drive. It permits an editor to define a video program, called a composition, as a series of segments of these files called clips. Each clip is labeled, and information related to it (e.g., clip length, number of frames, timecode) is also accessible. Non-linear editing systems typically receive analog or digital video from a video tape recorder, digitize and compress the video, and store the compressed digital video on local storage for random access during creation and assembly of a video program. The digital video data available to such systems is typically limited by the amount of local storage.
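The RGB-to-YCrCb conversion and 4:2:2 subsampling mentioned above can be sketched as follows. The coefficients are the common full-range BT.601-style values; this is one convention among several (broadcast pipelines often use different scaling and offsets), and the function names are illustrative only.

```python
def rgb_to_ycrcb(r: float, g: float, b: float) -> tuple[float, float, float]:
    # Full-range BT.601-style conversion (one common convention).
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cr, cb

def subsample_422(row_rgb):
    # 4:2:2 subsampling: luma for every pixel,
    # chroma for every second pixel horizontally.
    out = []
    for i, (r, g, b) in enumerate(row_rgb):
        y, cr, cb = rgb_to_ycrcb(r, g, b)
        if i % 2 == 0:
            out.append((y, cr, cb))      # pixel carrying chroma
        else:
            out.append((y, None, None))  # chroma dropped
    return out

row = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(subsample_422(row)[0][0])  # luma of a pure-red pixel: 76.245
```

Halving the chroma samples this way cuts the data rate by a third versus RGB while barely affecting perceived quality, which is why non-linear systems commonly store video in this form.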
While a non-linear video editing system may be more complicated, its advantage is that the video may be shot in any sequence and later, through careful observation and a thoughtful process, manipulated so that it communicates the editor's message with maximum impact.
Computer systems with motion video editing tools have been used to produce major motion picture films, television shows, and news broadcasts, and to edit motion video in corporate settings. Non-linear editing on computer-based systems involves digitizing analog media data recorded from a linear source, such as videotape or film, and storing the digitized media data on a storage device, such as a magnetic disk drive. Analog video signals are received by the computer and converted one image (frame) at a time at a fixed rate. Each frame is converted into a digital representation and stored in a file containing a sequence of such frames (a video sequence). The files containing the video sequences are identified by a particular name or by a graphic representation of the video sequence.

With the advancement of computer technology, further improvements have been made to video editing systems through digitization. The digital media may be a digitized version of a film or videotape, or media produced through live capture to disk by a graphics or animation software application. Once digitized, a non-linear editing system permits rapid access to the media data at any point in the linear sequence and subsequent rearrangement of portions of that data into any order. The digitization of video has had a profound impact on non-linear editing: in a digital video editing system, video is stored on a medium such as magnetic or laser discs, allowing it to be retrieved randomly and displayed on a device such as a monitor. This alleviates the burdensome technique of cutting and splicing. Developments in computer technology have resulted in a proliferation of devices capable of processing motion video on a computer.
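Because frames are captured at a fixed rate, any instant in the source maps directly to a frame number, which is what makes the random access described above possible. A minimal sketch, with a hypothetical file-naming scheme:

```python
def frame_index(t_seconds: float, fps: float) -> int:
    # At a fixed capture rate, each instant maps to a frame number.
    return int(t_seconds * fps)

def frame_filename(sequence_name: str, index: int) -> str:
    # Hypothetical naming scheme for stored frames of a sequence.
    return f"{sequence_name}_{index:06d}.raw"

print(frame_index(10.0, 30.0))           # 300
print(frame_filename("interview", 300))  # interview_000300.raw
```

Seeking to an arbitrary point then costs one arithmetic step and one file (or offset) lookup, rather than winding tape to the right position.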
Generally, video edit processing is performed at an editing device, composite processing at a video switcher, and special-effect processing at a special-effect device. Video editing software is available in two classes, professional products and consumer products; both classes provide a similar set of capturing, editing, and rendering features.
Digital video devices require the use of image compression. JPEG and MPEG are related compression standards that make storage of images and image sequences feasible, and formats such as DVD build on MPEG compression. JPEG (Joint Photographic Experts Group) is the internationally accepted standard for still-image data, designed for compressing full-color or gray-scale images. MPEG (Moving Picture Experts Group) is a standard promulgated by the International Organization for Standardization (ISO) that provides a syntax for compactly representing digital video and audio signals. The syntax requires that a minimum number of rules be followed when bit streams are encoded, so that a receiver of an encoded bit stream may unambiguously decode it.

In an MPEG environment, video sequences are represented by compressed bitstreams composed of group-of-pictures (GOP) units. A GOP is usually fixed at a certain number of frames, such as 15, and can contain intra (I), predicted (P), and bi-directional (B) frames. The MPEG standard defines four picture coding types: the I-frame, the P-frame, the B-frame, and the D-frame (from an early version of MPEG, but absent in later standards). An I frame can be encoded or decoded independently, using only information present in the frame itself. A P or B frame, however, must be encoded or decoded using information from a reference frame, which can be either an I or a P frame: a P frame depends on a past reference frame, while a B frame can depend on a past frame, a future frame, or both.

Digital video products such as DVD players and JPEG and MPEG encoders/decoders are data-stream processing devices: they operate on a sequential stream of data encoded with one or more levels of data compression.
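One practical consequence of the I/P/B dependencies is that frames cannot simply be stored in display order: a B frame's future reference (the next I or P frame) must appear earlier in the coded stream so it is decoded first. The reordering can be sketched as follows; this is an illustrative model of the standard behavior, not an excerpt from any codec.

```python
def display_to_decode_order(gop: list[str]) -> list[str]:
    # B frames reference both a past and a future I/P frame, so the
    # future reference is moved ahead of the B frames that depend on it.
    decode, pending_b = [], []
    for frame in gop:
        if frame == "B":
            pending_b.append(frame)   # hold until its future reference appears
        else:                         # I or P: a reference frame
            decode.append(frame)
            decode.extend(pending_b)
            pending_b.clear()
    decode.extend(pending_b)          # any trailing Bs at the GOP edge
    return decode

print(display_to_decode_order(["I", "B", "B", "P", "B", "B", "P"]))
# ['I', 'P', 'B', 'B', 'P', 'B', 'B']
```

This is also why random access in an editor lands on I frames: any other entry point would require decoding reference frames that precede it in the stream.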
Typically, the decoding process begins when an MPEG bit stream containing video, audio, and system information is demultiplexed by a system decoder, which produces separate encoded video and audio bit streams that may subsequently be decoded by a video decoder and an audio decoder.
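The system decoder's demultiplexing step can be sketched as routing each packet's payload to the elementary stream it belongs to. The string tags "video" and "audio" below are a simplification; real MPEG system streams identify elementary streams with numeric stream IDs and carry additional timing information.

```python
def demultiplex(packets):
    # Route each packet's payload to its elementary stream,
    # preserving arrival order within each stream.
    streams = {"video": b"", "audio": b""}
    for stream_id, payload in packets:
        streams[stream_id] += payload
    return streams

packets = [("video", b"\x00\x01"), ("audio", b"\xaa"), ("video", b"\x02")]
out = demultiplex(packets)
print(out["video"])  # b'\x00\x01\x02'
print(out["audio"])  # b'\xaa'
```

The reassembled video and audio bit streams are then handed to their respective decoders, as described above.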