DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Group II (claims 1-10) in the reply filed on 09/17 is acknowledged.
Claims 1-10 and 21-30 are now pending.
Information Disclosure Statement
The information disclosure statement(s) (IDS) submitted on 08/01/2023, 11/05/2024 and 08/01/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information referred to therein has been considered by the examiner.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-10 and 21-28 are rejected under 35 U.S.C. 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
Claim 1 recites, in its last line, “the plurality of regions”. It is not clear whether this refers to the regions before or after the rendering step.
Claim 21 recites “select a first frame interpolation technique between at least a second frame interpolation technique or a third frame interpolation technique” which is confusing. It is not clear what is meant by “between at least a second frame interpolation technique”. In other words, it is not clear what are the options when selecting the first frame interpolation technique. Please clarify.
Claim 22 recites “select a fourth frame interpolation technique between at least the second frame interpolation technique or the third frame interpolation technique”. It is not clear what is meant by “between at least the second frame interpolation technique”.
Claims not specifically mentioned above are rejected as depending from an indefinite base claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Yang, et al. (hereafter referred to as “Yang”, "Depth-assisted frame rate up-conversion for stereoscopic video." IEEE Signal Processing Letters 21.4 (2014): 423-427).
Regarding claim 1, Yang discloses a method of video frame interpolation, comprising:
determining motion in a frame of video (page 423-424, section “A. Motion Vector Computation”, the Motion Vector Field (MVF) is computed);
determining occluded motion in the frame of the video based on the motion in the frame; determining a plurality of regions of an interpolated frame based on the occluded motion (page 424, section “B. Block Classification”, based on MV, each image block is defined as a depth-continuous block or a depth-discontinuous block. A depth-discontinuous block contains occluded motion, see section “D. Depth-Discontinuous Block Interpolation”);
selecting a video frame interpolation technique for each region of the plurality of regions of the interpolated frame based on the occluded motion; rendering each region of the plurality of regions of the interpolated frame using the video frame interpolation technique selected for each region (section “C. Depth-Continuous Block Interpolation” and “D. Depth-Discontinuous Block Interpolation”. the depth-continuous blocks are interpolated according to (7), while depth-discontinuous blocks are interpolated according to (9)); and
compositing the interpolated frame from the plurality of regions (Figs. 2&3).
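The block-classification, per-region interpolation, and compositing flow attributed to Yang above can be illustrated with a minimal sketch. The function names, the block granularity, and the two placeholder interpolators are hypothetical illustrations, not the actual techniques of the reference:

```python
import numpy as np

def classify_blocks(depth_change, threshold=1.0):
    """Label each block depth-continuous (False) or depth-discontinuous (True)
    based on a per-block measure of local depth change."""
    return depth_change >= threshold

def interpolate_frame(prev, nxt, discontinuous, block=8):
    """Render each block with the technique selected for its class, then
    composite the rendered blocks into one interpolated frame."""
    out = np.empty_like(prev, dtype=float)
    h, w = prev.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            sl = (slice(by, by + block), slice(bx, bx + block))
            if discontinuous[by // block, bx // block]:
                # placeholder for an occlusion-aware interpolator
                out[sl] = nxt[sl].astype(float)
            else:
                # simple bidirectional average for depth-continuous blocks
                out[sl] = 0.5 * (prev[sl].astype(float) + nxt[sl])
    return out
```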
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 4-6, 21-23 and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Yang ("Depth-assisted frame rate up-conversion for stereoscopic video." IEEE Signal Processing Letters 21.4 (2014): 423-427), and in view of Djelouah et al. (hereafter referred to as “Djelouah”, US 2024/0163395).
Regarding claim 2, Yang discloses the method of claim 1, but fails to further disclose where selecting the video frame interpolation technique comprises selecting between: a machine-learning based video frame interpolation technique and a non-machine-learning based video frame interpolation technique.
In the same field of video frame interpolation, Djelouah discloses (pg. [0011]-[0012]) a machine learning model-based video frame interpolator that incorporates known regions of an intermediate frame to improve interpolation quality. As indicated in Djelouah (pg. [0012]), a partial rendering of the intermediate frame can “provide results meeting the desired quality using a fraction of the time compared to a full rendering of the intermediate frame”.
According to Yang, a depth-discontinuous block may indicate that the foreground object has moved, hence disocclusion or occlusion regions appear (see section D). It is desirable to generate good quality video frames especially in the foreground object regions.
The benefit of using machine learning based techniques is well known in the art. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Djelouah with those of Yang to yield the invention as described in claim 2. This combination (modification) could be made using known methods with no changes to the operating principles of either reference to produce the predictable results of improved image quality in regions of foreground objects (i.e., the depth-discontinuous blocks in Yang) by using a machine-learning-based method, as for example taught by Djelouah.
Regarding claims 4-6, please refer to analysis of claim 2, Yang in view of Djelouah discloses rendering a depth-discontinuous block (i.e., a first region) using machine-learning based video frame interpolation technique as taught in Djelouah; rendering depth-continuous blocks using the simple non-machine-learning based video frame interpolation technique of Yang (equation (1) or (7)), and compositing the interpolated frame like those illustrated in Figs. 2&3 of Yang.
Claims 21-23, 25 and 26 have been analyzed and are rejected for the same reasons as outlined above in the rejection of claims 4, 5, 5, 5, and 5, respectively. The machine-learning based video frame interpolation technique (the claimed “first frame interpolation technique”) is used to render depth-discontinuous blocks (i.e., the claimed first region), whereas the simple non-machine-learning based video frame interpolation technique of Yang (equation (1) or (7), the claimed “fourth frame interpolation technique”) is used to render depth-continuous blocks (i.e., the claimed second region). Both Yang’s and Djelouah’s systems are computer-based. Processor(s) and storage(s) are the main building blocks of a computer system.
Regarding claim 27, Yang in view of Djelouah discloses the image processing apparatus of claim 25, where the non-machine-learning-based technique comprises: generating a forward interpolated frame by performing a forward warp using the motion; generating a backward interpolated frame by performing a backward warp using the motion; and blending the forward interpolated frame and the backward interpolated frame (Yang, section “C. Depth-Continuous Block Interpolation”. See equations (1) and (7), the MVF is computed by bidirectional predictions, Bt-1(pj+vj) is a forward interpolated frame and Bt+1(pj-vj) is a backward interpolated frame).
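The forward-warp/backward-warp/blend pattern mapped to Yang's bidirectional prediction above can be sketched as follows. This is an illustrative simplification (nearest-neighbor sampling, hypothetical function names), not the actual computation of Yang's equations (1) and (7):

```python
import numpy as np

def warp(frame, flow_x, flow_y):
    """Sample frame at positions displaced by the given flow field
    (nearest-neighbor resampling for brevity)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow_x).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow_y).astype(int), 0, h - 1)
    return frame[sy, sx]

def blend_interpolate(prev, nxt, fx, fy, t=0.5):
    """Generate a forward interpolated frame from the previous frame and a
    backward interpolated frame from the next frame, then blend them."""
    fwd = warp(prev.astype(float), t * fx, t * fy)            # forward warp
    bwd = warp(nxt.astype(float), -(1 - t) * fx, -(1 - t) * fy)  # backward warp
    return (1 - t) * fwd + t * bwd
```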
Regarding claim 28, Yang in view of Djelouah discloses the image processing apparatus of claim 21, where: determining the occluded motion between the frames of the video data comprises determining contrasting motion in the motion between the frames of the video data, and selecting the first frame interpolation technique is further based on the contrasting motion (Yang, section “B. Block Classification”, A block is defined as a depth-discontinuous block if the measured local changes of the depth (σt-1(p+v) and σt+1(p-v), which represent contrasting motion) are not smaller than the predefined threshold Tσ).
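The threshold test attributed to Yang's block classification above reduces to a simple predicate. The sketch below is an illustration of that comparison only (the name and default threshold are hypothetical, not taken from the reference):

```python
import numpy as np

def is_depth_discontinuous(sigma_prev, sigma_next, t_sigma=2.0):
    """A block is flagged depth-discontinuous when the measured local depth
    variation in either neighboring frame is not smaller than the threshold.
    Works elementwise on numpy arrays as well as on scalars."""
    return (sigma_prev >= t_sigma) | (sigma_next >= t_sigma)
```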
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yang ("Depth-assisted frame rate up-conversion for stereoscopic video." IEEE Signal Processing Letters 21.4 (2014): 423-427).
Regarding claim 3, Yang discloses the method of claim 1, but fails to expressly disclose feathering areas between each of the plurality of regions of the frame.
However, given the fact that Yang uses different methods to interpolate different types of blocks (section C and D), blocking artifacts would most likely appear in the interpolated frame, reducing image quality. Feathering those block boundary regions to remove/reduce the grid-like distortion would have been desirable and necessary.
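The feathering referred to above amounts to cross-fading the pixels near a region boundary. A minimal sketch, assuming two horizontally adjacent regions that share a strip of overlapping columns (the function name and overlap convention are hypothetical):

```python
import numpy as np

def feather_overlap(region_a, region_b, overlap=4):
    """Blend two horizontally adjacent regions that share `overlap` columns,
    ramping linearly from region A to region B across the shared strip to
    suppress grid-like blocking artifacts at the seam."""
    a = region_a.astype(float)
    b = region_b.astype(float)
    alpha = np.linspace(0.0, 1.0, overlap)  # blend weight toward region B
    blended = (1 - alpha) * a[:, -overlap:] + alpha * b[:, :overlap]
    return np.concatenate([a[:, :-overlap], blended, b[:, overlap:]], axis=1)
```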
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 3 from the teachings of Yang.
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yang ("Depth-assisted frame rate up-conversion for stereoscopic video." IEEE Signal Processing Letters 21.4 (2014): 423-427), and in view of Kokaram, et al. (hereafter referred to as “Kokaram”, "Motion‐based frame interpolation for film and television effects." IET Computer Vision 14.6 (2020): 323-338).
Regarding claim 7, Yang discloses the method of claim 1, but fails to expressly disclose where compositing the interpolated frame comprises generating a mask.
In the same field of video frame interpolation, Kokaram discloses (section 3.3 on page 326) generating “the motion field as well as visibility maps w1, w2. These maps act as soft occlusion indicators. This updated data is used in (8) to generate the final interpolated frame”. The visibility maps that indicate occlusion define a mask.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Kokaram with that of Yang to yield the invention as described in claim 7. This combination (modification) could be made using known methods with no changes to the operating principles of either reference to produce nothing more than highly predictable results.
Regarding claim 8, Yang discloses the method of claim 1, but fails to expressly disclose where determining the occluded motion in the frame comprises generating an occlusion map indicating areas of contrasting motion.
In the same field of video frame interpolation, Kokaram discloses (section 3.3 on page 326) generating “the motion field as well as visibility maps w1, w2. These maps act as soft occlusion indicators. This updated data is used in (8) to generate the final interpolated frame”. Also see section 6.2 “similar to the use of visibility maps or soft occlusion maps by Jiang et al. [5]. We use the gradient of the motion field (choosing forward or backward direction depending on which is greater) as the measure of confidence in the interpolation.”
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Kokaram with that of Yang to yield the invention as described in claim 8. This combination (modification) could be made using known methods with no changes to the operating principles of either reference to produce nothing more than highly predictable results.
Regarding claim 9, Yang discloses the method of claim 1, where determining the motion in the frame of the video comprises computing motion vectors (section “A. Motion Vector Computation”). Yang does not expressly disclose performing optical flow between frames of the video.
However, as stated in Kokaram (page 324, left column), in “the motion picture effects, industry frame interpolation has become synonymous with optic flow estimation … it is essential that the interpolated frames do not disrupt the motion in the existing sequence”.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 9 from the combined teachings of Yang and Kokaram.
Regarding claim 10, Yang in view of Kokaram discloses the method of claim 9. Kokaram further discloses where determining the occluded motion in the frame of the video comprises performing edge detection on the optical flow to determine contrasting motion in the frame of the video (section 6.2 “similar to the use of visibility maps or soft occlusion maps by Jiang et al. [5]. We use the gradient of the motion field (choosing forward or backward direction depending on which is greater) as the measure of confidence in the interpolation”. See equations (23) and (24), edges are detected based on “the differentials of the horizontal and vertical components of motion in the horizontal h and vertical directions v”).
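The gradient-of-the-motion-field edge test mapped to Kokaram above can be sketched with a few lines of numpy. This is an illustrative simplification (names and the magnitude formula are hypothetical), not Kokaram's equations (23) and (24):

```python
import numpy as np

def motion_edges(flow_x, flow_y, threshold=1.0):
    """Detect motion boundaries (contrasting motion) from the spatial
    differentials of the horizontal and vertical flow components."""
    gx_row, gx_col = np.gradient(flow_x)
    gy_row, gy_col = np.gradient(flow_y)
    magnitude = np.sqrt(gx_row**2 + gx_col**2 + gy_row**2 + gy_col**2)
    return magnitude >= threshold
```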
It is essential that the interpolated frame preserves motion of an object which is defined by edges or contours. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 10 from the combined teachings of Yang and Kokaram.
Claims 24, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Yang ("Depth-assisted frame rate up-conversion for stereoscopic video." IEEE Signal Processing Letters 21.4 (2014): 423-427) in view of Djelouah (US 2024/0163395), and further in view of Kokaram, ("Motion‐based frame interpolation for film and television effects." IET Computer Vision 14.6 (2020): 323-338).
Regarding claim 24, Yang in view of Djelouah discloses the image processing apparatus of claim 21. Yang further discloses detecting an edge based on the occluded motion being above a threshold change in motion (Yang, section “B. Block Classification”, A block is defined as a depth-discontinuous block if the measured local changes of the depth (σt-1(p+v) and σt+1(p-v), which represent contrasting motion) are not smaller than the predefined threshold Tσ).
The Yang and Djelouah combination as applied to claim 21 fails to expressly disclose generating an edge map.
In the same field of video frame interpolation, Kokaram discloses generating an edge map (section 3.3 on page 326, generating “the motion field as well as visibility maps w1, w2. These maps act as soft occlusion indicators. This updated data is used in (8) to generate the final interpolated frame”. Also see section 6.2. In equations (23) and (24), edges are detected based on “the differentials of the horizontal and vertical components of motion in the horizontal h and vertical directions v”).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Kokaram with that of Yang in view of Djelouah to yield the invention as described in claim 24. This combination (modification) could be made using known methods with no changes to the operating principles of any of the references to produce nothing more than highly predictable results.
Regarding claims 29 and 30, please refer to analysis of claim 10 on performing optical flow and edge detection. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claims 29 and 30 from the teachings of Yang in view of Djelouah and Kokaram.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LI LIU whose telephone number is (571)270-5363. The examiner can normally be reached on Monday-Friday, 8:00AM-4:30PM, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached on (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LI LIU/Primary Examiner, Art Unit 2666