This filter provides a smart, motion-based deinterlacing capability. In static picture areas, interlacing artifacts do not appear, so data from both fields is used to provide full detail. In moving areas, deinterlacing is performed. Also, some clips derived from telecined material can appear to be interlaced when in fact they only require field shift and/or swaps to recover progressive (noninterlaced) frames. The filter provides advanced processing options to deal with these clips.
The following options are available:
Frame-only differencing: When this option is checked, only inter-frame comparisons are made to detect motion. If a pixel differs from the corresponding pixel in the previous frame, the pixel is considered to be moving.
Field-only differencing: When this option is checked, only inter-field comparisons are made to detect motion. If a pixel differs from the corresponding pixels in the previous and following fields (the lines above and below the current line), the pixel is considered to be moving.
Frame-and-field differencing: When this option is checked, both inter-frame and inter-field comparisons are made to detect motion. If a pixel differs from the corresponding pixel in both the previous field and the previous frame, the pixel is considered to be moving.
The correct choice for differencing depends on the input video; each mode has problems with some kinds of clips. Field-only differencing alone may tend to overestimate motion. Frame-only and frame-and-field differencing may tend to underestimate it. Also to be considered is execution time; frame-only will be the fastest, followed by field-only, followed by frame-and-field.
In my experience, good results with a wide range of clips are obtained with frame-only differencing at a threshold of 15, and so these are the filter defaults. Motion map denoising is helpful with field-only differencing as it reduces the amount of picture detail that is erroneously detected as motion.
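To make the three differencing modes concrete, the following C++ sketch shows how a single pixel might be classified under each mode. The names and structure are illustrative only; they are not the filter's actual source code.

    #include <cstdlib>

    // Luma values for one pixel position. Names are hypothetical.
    struct PixelContext {
        int cur;        // current pixel
        int prevFrame;  // same position in the previous frame
        int prevField;  // corresponding pixel in the previous field (adjacent line)
        int nextField;  // corresponding pixel in the following field (adjacent line)
    };

    static bool exceeds(int a, int b, int threshold) {
        return std::abs(a - b) > threshold;
    }

    // Frame-only: moving if the pixel differs from the previous frame.
    bool movingFrameOnly(const PixelContext& p, int threshold) {
        return exceeds(p.cur, p.prevFrame, threshold);
    }

    // Field-only: moving if the pixel differs from both the previous and the
    // following fields (the lines above and below).
    bool movingFieldOnly(const PixelContext& p, int threshold) {
        return exceeds(p.cur, p.prevField, threshold) &&
               exceeds(p.cur, p.nextField, threshold);
    }

    // Frame-and-field: moving if the pixel differs from both the previous
    // field and the previous frame.
    bool movingFrameAndField(const PixelContext& p, int threshold) {
        return exceeds(p.cur, p.prevField, threshold) &&
               exceeds(p.cur, p.prevFrame, threshold);
    }

With the filter defaults described above, this would correspond to calling movingFrameOnly with a threshold of 15.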
Compare color channels instead of luma: When comparing pixels, if this box is unchecked (default), the luminance values are compared. When this box is checked, the individual color channels (red, green, and blue) are compared. Luma comparison is good for general video and especially where static alpha-blended logos appear (because video noise can cause subtle color changes that would be detected by color channel comparisons). Color channel comparison is good for cartoons and other clips with large solid color areas.
Show motion areas only: When selected, only the moving areas of the image are displayed; static areas are black. This option can be used to assess the suitability of the choice of option settings and threshold.
Blend instead of interpolate in motion areas: If this checkbox is not selected (default), then, in motion areas, the filter will discard one field's data and recreate it by interpolating new lines from the retained field's lines. In static areas, both fields' data is used. If this checkbox is checked, then, in motion areas, instead of interpolating, the filter blends each line with the lines above and below. This has the effect of blending the fields, which tends to blur out the interlacing artifacts. The choice of interpolation versus blending depends on the nature of the input video and your own esthetic preferences. The interpolate mode avoids the halos you can get with the blend mode, but it introduces some small amount of stairstepping, and may tend to emphasize any video noise that is present. Try both ways and see which you prefer for a given video clip.
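As a rough illustration of the blend mode, the following sketch averages a line with its neighbors. The 1:2:1 weighting is an assumption for illustration only and may not match the filter's actual coefficients.

    // Blend one output line from the line above, the current line, and the
    // line below. The 1:2:1 weighting is an assumption for illustration only.
    void blendLine(const unsigned char* above, const unsigned char* cur,
                   const unsigned char* below, unsigned char* out, int width) {
        for (int x = 0; x < width; ++x)
            out[x] = (unsigned char)((above[x] + 2 * cur[x] + below[x] + 2) / 4);
    }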
Use cubic for interpolation: When doing line interpolation (not blending), if this option is not selected, a linear interpolation using two lines is used. If this option is selected, a cubic interpolation using four lines is used. Cubic interpolation is better but slower.
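The two interpolation variants can be sketched as follows. The cubic weights shown are the common (-1, 9, 9, -1)/16 midpoint kernel, assumed here for illustration rather than taken from the filter's source.

    // Linear interpolation: average of the retained-field lines directly
    // above (b) and below (c) the missing line.
    unsigned char interpLinear(unsigned char b, unsigned char c) {
        return (unsigned char)((b + c + 1) / 2);
    }

    // Cubic interpolation: uses four retained-field lines a, b, c, d (top to
    // bottom). The (-1, 9, 9, -1)/16 midpoint kernel is assumed here.
    unsigned char interpCubic(unsigned char a, unsigned char b,
                              unsigned char c, unsigned char d) {
        int v = (-(int)a + 9 * (int)b + 9 * (int)c - (int)d + 8) / 16;
        if (v < 0) v = 0;
        if (v > 255) v = 255;
        return (unsigned char)v;
    }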
Motion map denoising: A dilemma for users of the filter is that to get the best deinterlacing, we like a low threshold, such as 10-15. A low threshold ensures that residual interlacing artifacts don't sneak through. But if we set the threshold too low, video noise gets detected as motion. This causes two undesirable results. First, the motion noise causes a random sprinkling of deinterlaced areas in what should be static areas. This often manifests as a kind of sparkling, which is objectionable. Second, any extra false motion that is detected reduces the picture area that is passed through from both fields, reducing the perceived resolution of the overall picture. So, what we want is a low threshold but without the effects of video noise. When this checkbox is checked, extra filtering is added in the motion detection pipeline (not in the main video pipeline, so the output video is not compromised) that does a good job of suppressing false motion noise. The downside is that the filter runs slower. Use the "Show motion areas only" option to help tweak the threshold fairly low without introducing false motion noise. This option is especially helpful with field-only differencing.
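One simple way such denoising can work is to discard "moving" pixels that have too few moving neighbors, as in the sketch below. This is an illustrative approach, not necessarily the exact method the filter uses.

    #include <vector>

    // Keep a pixel marked as moving only if enough of its 3x3 neighborhood
    // is also moving; isolated "motion" pixels are treated as noise.
    // Illustrative only; not necessarily the filter's exact method.
    void denoiseMotionMap(const std::vector<unsigned char>& in,
                          std::vector<unsigned char>& out,
                          int width, int height, int minNeighbors) {
        out = in;
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                if (!in[y * width + x]) continue;
                int neighbors = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        if ((dy || dx) && in[(y + dy) * width + (x + dx)])
                            ++neighbors;
                if (neighbors < minNeighbors)
                    out[y * width + x] = 0;
            }
        }
    }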
Motion Threshold: This value sets the amount by which a pixel must differ from its corresponding value in the previous field or frame for the pixel to be considered moving. A threshold that is too high will allow interlacing artifacts to slip through. A threshold that is too low will cause too much of the image to be treated as moving, reducing the perceived resolution, and will also tend to emphasize noise. Without motion map denoising (see above), a threshold of 15-25 is good. With denoising, 10-20 is good. You can view the effect of the threshold on motion detection by selecting the "Show motion areas only" checkbox.
Scene Change Threshold: Sometimes when a scene change occurs between the fields of a frame, the result is not satisfactory. This option permits you to set a threshold of change such that if the threshold is exceeded, the entire frame will be treated as moving, i.e., the entire frame will be interpolated or blended. The value to be specified is the percentage of moving area detected; for example, with the default value of 30, if 30 percent or more of the frame is detected to be moving, the entire frame will be treated as moving. Note that the percentage calculation is made prior to motion-map denoising (if enabled).
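The decision can be sketched as a simple count over the raw motion map. The function and parameter names below are illustrative.

    // Treat the whole frame as moving when the share of moving pixels reaches
    // the scene change threshold (the default is 30 percent). The percentage
    // is taken from the raw motion map, before denoising.
    bool isSceneChange(const unsigned char* motionMap, int width, int height,
                       int sceneChangePercent) {
        long moving = 0;
        const long total = (long)width * (long)height;
        for (long i = 0; i < total; ++i)
            if (motionMap[i]) ++moving;
        return moving * 100 >= (long)sceneChangePercent * total;
    }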
Before proceeding into the analysis, please note that, due to the way this filter buffers frames internally, it may appear not to work correctly when scrubbing the timeline or when single-stepping backward. Single-stepping forward will always work and, of course, saved processed video will always be correct. It is always advisable to hit the rewind button before starting processing.
First, it is necessary to understand how capture cards might vary. Let us assume that the source material is simply a stream of bottom and top fields, as follows:
b1t1b2t2b3t3b4t4...
The symbol 'b' indicates a bottom field and 't' indicates a top field. The number indicates the frame number of the original progressive frame. Thus, fields b1 and t1 are both from frame 1 and contain information from the same temporal moment.
The capture card will capture this stream in memory, varying in two ways. First, it might start capturing on a bottom field or a top field. Second, it might place either the bottom field or the top field first in memory. This leads to 4 ways in which the stream may be captured, as follows:
1) b1t1-b2t2-b3t3-b4t4...
2) t1b1-t2b2-t3b3-t4b4...
3) t1b2-t2b3-t3b4-t4b5...
4) b2t1-b3t2-b4t3-b5t4...
where the '-' character indicates a frame boundary. A given capture card will be characterised by which of these capture patterns it uses. Note that if capture pattern 1 is used, no filtering is needed to re-create the original progressive frames.
It might seem that all we need to do now is to define the operations that the filter must perform to change each of the above capture patterns to the desired deinterlaced end result b1t1-b2t2-b3t3-b4t4. That would only tell half the story. The problem is that there is an alternative form for the input stream! Consider these two ways in which the original material may be telecined:
1) b1t1b2t2b3t3b4t4... (as before)
2) b1t2b2t3b3t4b4t5... ('perverse' telecining)
Note that both will appear fine when displayed on an interlaced display, but a capture of telecine type 2 with capture pattern 1 will no longer give deinterlaced output on a progressive display without filtering! This means that we must now consider 8 cases, i.e., 4 capture patterns times 2 telecining types. For each case, there is a specific operation that must be done to re-create the original progressive frames. We will see below that by combining three building blocks we can cover all the possibilities. The building blocks are applied in order and each one is optional:
swap fields on input --> shift field phase by one --> swap fields on output
A phase shift by one means that a stream b1t1-b2t2-b3t3... becomes xxb1-t1b2-t2b3-t3b4...
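In code, the building blocks can be modeled as operations on a frame's two fields in memory order. This sketch is purely illustrative.

    #include <utility>

    // A frame is modeled as its two fields in memory order (first, second);
    // the ints stand for field identifiers such as b1 or t1.
    typedef std::pair<int, int> Frame;

    // "Swap fields on input" / "swap fields on output": exchange the fields.
    Frame swapFields(Frame f) {
        return Frame(f.second, f.first);
    }

    // "Shift field phase by one": the output frame takes the previous frame's
    // second field and the current frame's first field, so that
    // b1t1-b2t2-b3t3... becomes xxb1-t1b2-t2b3...
    Frame phaseShift(Frame previous, Frame current) {
        return Frame(previous.second, current.first);
    }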
Following are all 8 possibilities together with the processing required for each:
Case 1: Telecine 1, capture 1, gives after capture:
b1t1-b2t2-b3t3...
Required action: none.
Case 2: Telecine 1, capture 2, gives after capture:
t1b1-t2b2-t3b3...
Required action: swap on input.
Case 3: Telecine 1, capture 3, gives after capture:
t1b2-t2b3-t3b4...
Required action: phase shift.
Case 4: Telecine 1, capture 4, gives after capture:
b2t1-b3t2-b4t3...
Required action: swap on input, followed by phase shift.
Case 5: Telecine 2, capture 1, gives after capture:
b1t2-b2t3-b3t4...
Required action: phase shift, followed by swap on output.
Case 6: Telecine 2, capture 2, gives after capture:
t2b1-t3b2-t4b3...
Required action: swap on input, followed by phase shift, followed by swap on output.
Case 7: Telecine 2, capture 3, gives after capture:
t2b2-t3b3-t4b4...
Required action: swap on input.
Case 8: Telecine 2, capture 4, gives after capture:
b2t2-b3t3-b4t4...
Required action: none.
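The eight cases can be restated as a small lookup table. The structure below simply encodes the case list above, with illustrative names.

    // Which building blocks each case requires, indexed by zero-based
    // telecine type (0 or 1) and capture pattern (0 through 3).
    struct Correction {
        bool swapOnInput;
        bool phaseShift;
        bool swapOnOutput;
    };

    static const Correction kCorrections[2][4] = {
        // Telecine 1
        { { false, false, false },   // capture 1: none
          { true,  false, false },   // capture 2: swap on input
          { false, true,  false },   // capture 3: phase shift
          { true,  true,  false } }, // capture 4: swap on input, then phase shift
        // Telecine 2
        { { false, true,  true  },   // capture 1: phase shift, then swap on output
          { true,  true,  true  },   // capture 2: all three, in order
          { true,  false, false },   // capture 3: swap on input
          { false, false, false } }  // capture 4: none
    };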
Unfortunately, it is not a trivial matter to determine which correction to apply. If one is sure that the clip was telecined from progressive material, one can try all the possibilities. When the correct one is found, one then knows for the future what the capture pattern must be for the capture card used. Thereafter, only two corrections need be tried to allow for the two possible telecine patterns. I wish things were simpler, but, alas, they are not.
Note also that the telecining method can change in mid-clip. This filter does not adapt to such changes. A different filter, Telecide, is available that can adapt to such changes and output a continuous stream of progressive frames.
Field swap before phase shift: Swaps the fields of the frames before an optional phase shift.
Phase shift: Shifts the field phase by one, i.e., b1t1-b2t2-b3t3... becomes xxb1-t1b2-t2b3-t3b4...
Field swap after phase shift: Swaps the fields of the frames after an optional phase shift.
Disable motion processing: When checked, the filter will perform only the Advanced Processing and will not follow it with normal motion processing. This is useful for getting only a shift and/or swap when that is sufficient for processing telecined material. When this is unchecked, the Advanced Processing will be performed first and then full motion processing will be performed.
For additional information, version updates, and other filters, please go to the following web site:
Filters for VirtualDub
http://sauron.mordor.net/dgraft/index.html
Donald Graft
August 6, 2001
(C) Copyright 1999-2001, All Rights Reserved