DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-14 and 16-20 are currently pending in the present application, with claims 1, 10, and 14 being independent. Claim 15 is cancelled.
Response to Amendments / Arguments
Applicant’s arguments, see Pg. 11, filed 10/28/2025, with respect to claims 1, 7, 11, and 13 have been fully considered and are persuasive. The objections to claims 1, 7, 11, and 13 have been withdrawn.
Applicant’s arguments, see Pg. 11-12, filed 10/28/2025, with respect to claims 1-20 have been fully considered and are persuasive. The 35 U.S.C. § 112 rejection of claims 1-20 has been withdrawn.
Applicant’s arguments, see Pg. 12-19, filed 10/28/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 102 and 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yang et al. (CN 106954076 A), hereinafter referred to as “Yang”.
Regarding claim 1, Yang discloses a method, comprising:
receiving a plurality of image frames as inputs (Par. 0060; one viewpoint video in a multi-view video is selected as a video that requires frames to be inserted),
calculating one or more internal motion vector (MV) fields between a past frame (PF) and a current frame (CF) of the plurality of image frames (Par. 0010; (1) Motion estimation: According to the previous frame image and the current frame image, motion vector estimation is performed using a unidirectional motion estimation algorithm to obtain a forward motion vector field and a backward motion vector field, respectively),
generating a foreground MV field and a background MV field of the one or more internal MV fields (Par. 0017; The current frame image is divided into equal-sized image blocks. The above frame is a reference frame. The unidirectional motion vector estimation algorithm calculates the motion vector of each image block in the current frame image to obtain the backward motion vector field. Par. 0030-0031; (3-3) According to the depth information, the sub-image block obtained in the step (3-1) is divided into two types: a foreground sub-image block and a background sub-image block… Using different block matching criteria, the motion vectors of the foreground sub-image block and the background sub-image block of the previous frame image, the foreground sub-image block of the current frame image and the background sub-image block are respectively calculated),
processing the one or more internal MV fields and the foreground and background MV fields to generate one or more motion vector with depth (MVD) fields, wherein processing the one or more internal MV fields comprises generating virtual depths of the one or more internal MV fields to generate the one or more MVD fields (Par. 0032-0037; Preferably, in the step (3-3), the specific method of distinguishing the foreground sub-image block and the background sub-image block is: (3- 3-1): Calculate the maximum depth value of the corresponding depth value of the sub-image block in the step (3-1). And an average depth value of the corresponding depth value of the initial master image block of the sub-image block; (3-3-2): Divide the sub-image block into a foreground sub-image block and a background sub-image block according to the relationship between the maximum depth value and the average depth value in the step (3-3-1). Preferably, in the step (4), the following steps are specifically included: (4-1) Dividing the insert frame into equal-sized image blocks, taking the motion vectors of the previous frame image and the current frame image obtained in the step (1) and the step (3) as reference, and inserting the image in the frame Blocks perform motion vector assignments), wherein each virtual depth is generated based on a current MV field, a regional foreground MV field, and a regional background MV field (FIG. 1-2 and Par. 0078; to divide the image block in the current frame ft+1 into two types of occlusion image blocks and non-occlusion image blocks, and further divide the occlusion image blocks in ft+1 into overlay image blocks and non-overlays…Par. 0084-0096; Assume that Q is a final quadtree-divided sub-image block. The maximum depth value of Q corresponds to the average depth of the initial parent image block…Formulas (6)-(9) where Q is either the foreground sub-image block or Q is the background sub-image block…then Q is the background sub-image block, otherwise, Q is the foreground sub-image block),
and outputting the one or more MVD fields for image processing (Par. 0059; (4) Motion vector allocation and frame insertion: The motion vectors are assigned to each image block in the insert frame, and the image block reconstruction in the insert frame is performed using bi-directional motion compensation to realize frame insertion).
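For illustration only, the following minimal Python sketch pictures the foreground/background classification rule quoted above from Yang (Par. 0032-0037): a sub-image block is labeled foreground or background by comparing its maximum depth value against the average depth of its initial parent block. The function names, the toy data, and the direction of the comparison (smaller depth value = nearer = foreground) are assumptions for illustration; Yang's formulas (6)-(9) are not reproduced.
```python
import numpy as np

def classify_subblock(sub_depth: np.ndarray, parent_depth: np.ndarray) -> str:
    """Label a sub-image block as foreground or background.

    Follows the rule paraphrased from Yang Par. 0032-0037: compare the
    sub-block's maximum depth value to the average depth of its initial
    parent block.  The comparison direction is an assumption here.
    """
    max_depth = sub_depth.max()
    avg_parent_depth = parent_depth.mean()
    # Assumed convention: depth values grow with distance from the camera,
    # so a sub-block whose maximum depth stays below the parent average
    # sits in front of the scene average and is treated as foreground.
    return "foreground" if max_depth < avg_parent_depth else "background"

# Toy example: a 4x4 parent block with one near (small-depth) corner.
parent = np.array([[10, 10, 80, 80],
                   [10, 10, 80, 80],
                   [80, 80, 80, 80],
                   [80, 80, 80, 80]], dtype=float)
sub = parent[:2, :2]                     # near corner of the parent
print(classify_subblock(sub, parent))    # -> "foreground"
```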
Regarding claim 2, Yang discloses the method of claim 1, and further discloses wherein the one or more internal MV fields between the PF and the CF comprise an internal phase 0 MV field of the PF and an internal phase 1 MV field of the CF (Par. 0056; unidirectional motion estimation algorithm to obtain a forward motion vector field and a backward motion vector field. Par. 0069; motion estimation is performed on each image block in ft-1 to obtain a forward motion vector MVFf. Par. 0073; motion estimation is performed on each image block in ft+1 to obtain a backward motion vector MVFb).
Regarding claim 3, Yang discloses the method of claim 1, and further discloses wherein the one or more internal MV fields are calculated at a block level wherein each block comprises a group of pixels (Par. 0065-0073; The first step is to divide ft-1 into equal-sized image blocks of size N×N (N is an integer multiple of 4). Assuming that Bt-1 is one of the image blocks, and ft+1 is used as the reference frame, the following formula is used to calculate Bt-1 forward motion vector vf: (1), (2) …Where p1 represents the pixel coordinates…The second step: divide ft+1 into equal-size image blocks of size N×N (N is an integer multiple of 4). Assuming that Bt+1 is one of the image blocks, ft-1 is used as the reference frame, and the following formula is used to calculate Bt+1 forward motion vector vb: (3), (4) …Where p2 represents the pixel coordinates).
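As a concrete picture of block-level unidirectional motion estimation of the kind Yang describes (formulas (1)-(4) are not reproduced), the sketch below computes a forward MV for each N×N block of the previous frame by exhaustive SAD search in the current frame; swapping the two frames yields the backward field. The frame sizes, search range, and SAD criterion are assumptions for illustration.
```python
import numpy as np

def block_mv_field(src: np.ndarray, ref: np.ndarray, n: int = 4,
                   search: int = 2) -> np.ndarray:
    """Exhaustive-search SAD motion estimation at block level.

    For each n x n block of `src`, find the displacement (dy, dx) within
    +/- `search` pixels that minimizes the sum of absolute differences
    against `ref`.  Calling with (prev, curr) gives a forward MV field;
    calling with (curr, prev) gives the backward field.
    """
    h, w = src.shape
    mvs = np.zeros((h // n, w // n, 2), dtype=int)
    for by in range(h // n):
        for bx in range(w // n):
            y, x = by * n, bx * n
            block = src[y:y + n, x:x + n]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - n and 0 <= xx <= w - n:
                        sad = np.abs(block - ref[yy:yy + n, xx:xx + n]).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

# Toy frames: a bright square moving one pixel to the right.
prev = np.zeros((8, 8)); prev[2:5, 2:5] = 255
curr = np.zeros((8, 8)); curr[2:5, 3:6] = 255
forward = block_mv_field(prev, curr)     # forward MV field (prev -> curr)
backward = block_mv_field(curr, prev)    # backward MV field (curr -> prev)
print(forward[0, 0], forward.shape)
```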
Regarding claim 4, Yang discloses the method of claim 3, and further discloses wherein processing the one or more internal MV fields comprises decomposing the one or more internal MV fields from block level to pixel level (Par. 0082-0083; For block-type image blocks (including overlay-type image blocks and non-overlay-type image blocks), quadtree partitioning is divided into four equal-size sub-image blocks; the sub-image block is then further quad-tree partitioned…if pixels of multiple reference images are projected to pixels of the same virtual view image when a virtual view is generated, we assign the pixel value of the reference pixel with the smallest depth value to the virtual view. Pixels, and the conventional virtual image synthesis method based on depth image rendering assigns the pixel value of the reference pixel with the largest depth value to the virtual view pixel. Par. 0092-0096; p3 represents the pixel coordinates).
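The block-to-pixel decomposition can be pictured with the short sketch below, in which every pixel simply inherits the MV of the block that contains it. This nearest-neighbor expansion is a simplifying assumption for illustration; Yang's quadtree refinement of occlusion blocks is more elaborate.
```python
import numpy as np

def decompose_to_pixels(block_mvs: np.ndarray, n: int) -> np.ndarray:
    """Expand a block-level MV field to pixel level.

    Each pixel inherits the motion vector of its enclosing n x n block
    (nearest-neighbor expansion).  Quadtree-refined schemes such as
    Yang's would instead subdivide occlusion blocks before expanding.
    """
    # Repeat each block entry n times along both spatial axes.
    return np.repeat(np.repeat(block_mvs, n, axis=0), n, axis=1)

block_mvs = np.array([[[0, 1], [0, 0]],
                      [[1, 0], [0, 0]]])   # 2x2 blocks of (dy, dx)
pixel_mvs = decompose_to_pixels(block_mvs, n=4)
print(pixel_mvs.shape)                      # (8, 8, 2)
```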
Regarding claim 6, Yang discloses the method of claim 1, and further discloses wherein the plurality of image frames are modified according to pixel values, wherein the pixel values indicate whether a given pixel belongs to object, special effect, or other (Par. 0043-0046; foreground object…if pixels of multiple reference images are projected to pixels of the same virtual viewpoint image when a virtual viewpoint is generated. In this case, we assign the pixel value of the reference pixel with the smallest depth value to the virtual view pixel, which breaks the conventional pixel value assignment of the reference pixel with the largest depth value in the virtual view synthesis method based on depth image rendering to the virtual view point).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (CN 106954076 A), hereinafter referred to as “Yang”, in view of Cheng et al. (US 10142651 B1), hereinafter referred to as “Cheng”.
Regarding claim 7, Yang discloses the method of claim 1, and further discloses wherein the one or more MV fields are internally generated (Par. 0010; (1) Motion estimation: According to the previous frame image and the current frame image, motion vector estimation is performed using a unidirectional motion estimation algorithm to obtain a forward motion vector field and a backward motion vector field, respectively…).
Yang does not disclose and are combined with one or more externally obtained MV fields to generate the one or more MVD fields.
In the same art of motion vectors, Cheng discloses and are combined with one or more externally obtained MV fields (partial MV) to generate the one or more MVD fields (Column 2, lines 46-57; A motion vector calculator (MVC) 118 may read the partial MV data from the memory 114 (e.g., using MV decoder 116) and overwrite the internally generated MV along with any global/regional information from the server side).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine externally obtained MV fields as taught by Cheng with the internally generated MV fields as taught by Yang. The motivation lies in the advantage of improving the accuracy of the generated MV fields. Internally generated MV fields alone may lack global or regional scene context, which can be obtained from externally obtained MV fields. Therefore, combining the two sources of motion data would provide additional corrections and reduce errors in depth generation (Cheng Column 2, lines 46-57; to improve its three-dimensional (3D) recursive performance), and is a well-known design choice in motion vector processing pipelines where multiple MV sources are available.
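A minimal sketch of the combination relied on here, assuming the external (e.g., server-side) data arrives as a partial MV field with a validity mask: where external data exists it overwrites the internally estimated MV, in the spirit of Cheng's MVC overwriting internal MVs with decoded partial MV data. The mask-based representation is an assumption, not Cheng's actual data format.
```python
import numpy as np

def merge_mv_fields(internal: np.ndarray, external: np.ndarray,
                    valid: np.ndarray) -> np.ndarray:
    """Overwrite internally estimated MVs with externally supplied ones.

    `internal` and `external` are (H, W, 2) MV fields; `valid` is an
    (H, W) boolean mask marking where external (partial) MV data exists,
    e.g. data decoded from a server-side stream as in Cheng.
    """
    merged = internal.copy()
    merged[valid] = external[valid]     # external data wins where present
    return merged

internal = np.zeros((4, 4, 2))
external = np.ones((4, 4, 2))
valid = np.zeros((4, 4), dtype=bool)
valid[:2, :] = True                     # external data covers the top half
print(merge_mv_fields(internal, external, valid)[:, 0, 0])  # [1. 1. 0. 0.]
```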
Regarding claim 8, Yang discloses the method of claim 1, but does not disclose wherein generating the foreground MV field and the background MV field comprises: determining potential background MVs and foreground MVs via MV projection, determining reliability of each of the potential background and foreground MVs, and accumulating reliable background MVs and reliable foreground MVs into the background MV field and the foreground MV field, respectively.
In the same art of motion vectors, Cheng discloses wherein generating the foreground MV field and the background MV field comprises: determining potential background MVs and foreground MVs via MV projection (Column 4, lines 30-36; the true MV generator 524 usually generates an MV for each 8×8 block. Besides the MV field, additional information may be generated such as global MV, global motion model, regional foreground/background MV, and regional brightness compensation value),
determining reliability of each of the potential background and foreground MVs (Column 2, lines 46-57; A motion vector calculator (MVC) 118 may read the partial MV data from the memory 114 (e.g., using MV decoder 116) and overwrite the internally generated MV along with any global/regional information from the server side),
and accumulating reliable background MVs and reliable foreground MVs into the background MV field and the foreground MV field, respectively (Column 3, lines 23-29; SAD.sub.np (i.e., to denote reliability of the MV)).
Yang and Cheng are combined for the reason set forth above with respect to claim 7.
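The three steps recited in claim 8 can be pictured with the hedged sketch below: candidate FG/BG MVs are gathered (standing in for MV projection), each carries a SAD-style cost in the spirit of Cheng's SAD.sub.np reliability measure, and only candidates below a threshold are accumulated into the respective fields. The threshold value, the scoring, and the data layout are all assumptions for illustration.
```python
import numpy as np

def accumulate_reliable(candidates, sad_scores, labels, sad_thresh=50.0):
    """Accumulate reliable foreground/background MVs into two fields.

    candidates : list of (dy, dx) MVs, e.g. obtained by MV projection
    sad_scores : per-candidate SAD-style cost (lower = more reliable),
                 in the spirit of Cheng's SAD.sub.np reliability measure
    labels     : 'fg' or 'bg' per candidate
    """
    fg_field, bg_field = [], []
    for mv, sad, label in zip(candidates, sad_scores, labels):
        if sad >= sad_thresh:           # unreliable candidate: discard
            continue
        (fg_field if label == "fg" else bg_field).append(mv)
    return np.array(fg_field), np.array(bg_field)

mvs = [(0, 4), (0, 5), (0, 0), (9, 9)]
sads = [10.0, 12.0, 8.0, 400.0]          # the last candidate is unreliable
labels = ["fg", "fg", "bg", "bg"]
fg, bg = accumulate_reliable(mvs, sads, labels)
print(fg.mean(axis=0), bg)               # regional FG MV estimate, BG field
```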
Claims 5, 10-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (CN 106954076 A), hereinafter referred to as “Yang”, in view of Ninan (WO 2023150488 A1).
Regarding claim 5, Yang discloses the method of claim 1, but does not disclose wherein the plurality of image frames are luma image frames.
In the same art of motion vectors, Ninan discloses wherein the plurality of image frames (texture images) are luma image frames (Par. 0015 and Par. 0019; texture images. A texture image may be represented in a color space of multiple color channels such as an RGB color space, a YCbCr color space, and so forth…motion vectors may be generated or estimated from texture image content as indicated by texture pixel values in texture image coding operations. For example, intensity, luma, chroma or color values of texture pixel values of pixels in two time adjacent texture images).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate luma image frames as taught by Ninan into the MV estimation technique of Yang. The motivation lies in the advantage of reducing computational complexity and memory usage while still preserving the motion and texture information necessary for accurate motion vector calculation. Luma frames are commonly used in video compression, which primarily focuses on the luma component rather than color. This combination yields predictable results in simplifying calculations without a loss of motion estimation accuracy.
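For concreteness, extracting a luma frame from an RGB texture image is a one-line operation; the BT.601 weights below are the common choice, though Ninan does not mandate particular coefficients.
```python
import numpy as np

def rgb_to_luma(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB frame to a single-channel luma frame.

    Uses the familiar BT.601 luma weights; motion estimation can then
    run on this one channel instead of three, cutting computation and
    memory roughly threefold.
    """
    return rgb @ np.array([0.299, 0.587, 0.114])

frame = np.random.randint(0, 256, (8, 8, 3)).astype(float)
luma = rgb_to_luma(frame)
print(luma.shape)    # (8, 8)
```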
Regarding claim 10, Yang discloses generate input image frames including a past frame (PF) and a current frame (CF) each comprising a plurality of pixels divided into a plurality of blocks (Par. 0060-0073; one viewpoint video in a multi-view video is selected as a video that requires frames to be inserted. This viewpoint video is referred to as a current viewpoint video…ft-1 and ft+1 are the previous frame image and the current frame image in the current view video…divide ft-1 into equal-sized image blocks of size N×N…p1 represents the pixel coordinates…divide ft+1 into equal-size image blocks of size N×N…p2 represents the pixel coordinates),
calculate block level motion vector (MV) fields between the PF and the CF (Par. 0064-0073; motion estimation is performed on each image block in ft-1 to obtain a forward motion vector MVFf…motion estimation is performed on each image block in ft+1 to obtain a backward motion vector MVFb),
generate a block level virtual depth for each of the block level MV fields, wherein the block level virtual depth is generated based on a current block level MV field (Par. 0078; to divide the image block in the current frame ft+1 into two types of occlusion image blocks and non-occlusion image blocks, and further divide the occlusion image blocks in ft+1 into overlay image blocks and non-overlays), a regional foreground MV field, and a regional background MV field of the block level MV fields (Par. 0084-0096; Assume that Q is a final quadtree-divided sub-image block. The maximum depth value of Q corresponds to the average depth of the initial parent image block…Formulas (6)-(9) where Q is either the foreground sub-image block or Q is the background sub-image block…then Q is the background sub-image block, otherwise, Q is the foreground sub-image block),
decompose the block level MV fields and block level virtual depths into pixel level MV fields and pixel level virtual depths (Par. 0082-0083; For block-type image blocks (including overlay-type image blocks and non-overlay-type image blocks), quadtree partitioning is divided into four equal-size sub-image blocks; the sub-image block is then further quad-tree partitioned…if pixels of multiple reference images are projected to pixels of the same virtual view image when a virtual view is generated, we assign the pixel value of the reference pixel with the smallest depth value to the virtual view. Pixels, and the conventional virtual image synthesis method based on depth image rendering assigns the pixel value of the reference pixel with the largest depth value to the virtual view pixel. Par. 0092-0096; p3 represents the pixel coordinates),
and output the pixel level MV fields and pixel level virtual depths for image processing (Par. 0097-0101; (4) Motion vector allocation and frame insertion…Using the above method, bi-directional motion compensation is performed for each image block or image sub-block inserted in the frame ft to realize frame insertion).
Yang does not disclose one or more processors and non-transitory memory communicably coupled to a game engine, wherein the memory stores instructions executable by the one or more processors that, when executed, cause the processors to.
In the same art of motion vectors, Ninan discloses one or more processors and non-transitory memory communicably coupled to a game engine (Par. 0028 mechanisms as described herein form a part of a media processing system including…video game device…game machine), wherein the memory stores instructions executable by the one or more processors that, when executed, cause the processors to (Par. 0018; a system, an apparatus, or one or more other computing devices performs any or a part of the foregoing methods as described. In an embodiment, a non-transitory computer readable storage medium stores software instructions, which when executed by one or more processors cause performance of a method).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate processors and memory coupled to a game engine as taught by Ninan to execute the MV estimation technique of Yang. The motivation lies in the advantage of implementing motion estimation and virtual depths in a gaming environment. By coupling processors and memory to a game engine, the system can generate MVs for applications like frame rate up-conversion, interpolation, and object segmentation during gameplay, where real-time performance is critical. The combination is a well-known design choice in gaming and video rendering systems to improve visual fidelity and reduce latency.
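The frame-insertion step on which the claim 10 mapping ends (bi-directional motion compensation, Yang step (4)) can be sketched as follows: each block of the inserted frame averages a half-vector-shifted patch from the previous frame and from the current frame along its assigned MV. Halving the MV for the mid-point phase and the simple border clamping are assumptions for illustration, not Yang's exact assignment scheme.
```python
import numpy as np

def interpolate_frame(prev: np.ndarray, curr: np.ndarray,
                      mvs: np.ndarray, n: int = 4) -> np.ndarray:
    """Bidirectional motion-compensated interpolation of a mid frame.

    `mvs` holds one (dy, dx) vector per n x n block of the inserted
    frame, assumed to describe motion from `prev` to `curr`; each block
    is reconstructed as the average of the half-vector-shifted patches
    from both neighbors.
    """
    h, w = prev.shape
    out = np.zeros_like(prev)
    for by in range(h // n):
        for bx in range(w // n):
            y, x = by * n, bx * n
            dy, dx = mvs[by, bx] // 2               # mid-point phase
            # Clamp patch origins so both reads stay inside the frame.
            py = int(np.clip(y - dy, 0, h - n)); px = int(np.clip(x - dx, 0, w - n))
            cy = int(np.clip(y + dy, 0, h - n)); cx = int(np.clip(x + dx, 0, w - n))
            out[y:y + n, x:x + n] = (prev[py:py + n, px:px + n]
                                     + curr[cy:cy + n, cx:cx + n]) / 2
    return out

prev = np.zeros((8, 8)); prev[:, 0:4] = 255         # bright bar at left
curr = np.zeros((8, 8)); curr[:, 4:8] = 255         # bar moved 4 px right
mvs = np.full((2, 2, 2), (0, 4))                    # uniform (dy=0, dx=4)
mid = interpolate_frame(prev, curr, mvs)
print(mid[0])                                       # bar roughly centered
```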
Regarding claim 11, Yang in view of Ninan discloses the system of claim 10, and further discloses wherein calculating the block level MV fields between the PF and the CF comprises detecting foreground MVs and background MVs between the PF and the CF, wherein detecting foreground MVs and background MVs comprises using MVs related to a past of past frame (PPF), the PF, and the CF (Yang Par. 0084-0096; Assume that Q is a final quadtree-divided sub-image block. The maximum depth value of Q corresponds to the average depth of the initial parent image block…The Q motion vector v can be calculated using the following equation: (5)..Formulas (6)-(9)…then Q is the background sub-image block, otherwise, Q is the foreground sub-image block).
Yang and Ninan are combined for the reason set forth above with respect to claim 10.
Regarding claim 13, Yang in view of Ninan discloses the system of claim 10, and further discloses wherein the block level MV fields between the PF and the CF comprise one or more block level phase 0 MV fields of the PF and one or more block level phase 1 MV fields of the CF (Yang Par. 0056; unidirectional motion estimation algorithm to obtain a forward motion vector field and a backward motion vector field. Par. 0069; motion estimation is performed on each image block in ft-1 to obtain a forward motion vector MVFf. Par. 0073; motion estimation is performed on each image block in ft+1 to obtain a backward motion vector MVFb).
Yang and Ninan are combined for the reason set forth above with respect to claim 10.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (CN 106954076 A), hereinafter referred to as “Yang”, in view of Ninan (WO 2023150488 A1), and further in view of Zhang et al. (US 20230115057 A1), hereinafter referred to as “Zhang”.
Regarding claim 12, Yang in view of Ninan discloses the system of claim 10, but does not disclose wherein the image processing comprises one or more of frame interpolation, extrapolation, and reprojection.
In the same art of motion vectors, Zhang discloses wherein the image processing comprises one or more of frame interpolation, extrapolation, and reprojection (Par. 0009; FIG. 2 illustrates an example overview of the frame extrapolation and reprojection using application-generated motion vectors and depth information).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate extrapolation and reprojection as taught by Zhang into the combined system of Yang and Ninan. The motivation lies in the advantage of providing additional image processing techniques to enhance spatial and temporal rendering, allowing the system to shift viewpoints in real time for a better user experience. Extrapolation and reprojection reduce computational load by predicting frames without re-rendering while still generating accurate, high-quality visual outputs. As described in Zhang, extrapolation and reprojection provide a comfortable visual experience at high frame rates (Par. 0004). This combination yields predictable results in improving visualizations beneficial in AR/VR and high-frame-rate gaming applications.
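A hedged sketch of the kind of depth-aware reprojection Zhang describes at a high level: pixels of the last rendered frame are forward-warped along their motion vectors into an extrapolated frame, with a depth test so that where two source pixels collide, the nearer (smaller-depth) one wins. The per-pixel splatting scheme and the smaller-depth-wins convention are assumptions for illustration.
```python
import numpy as np

def extrapolate_frame(frame, depth, mvs):
    """Forward-warp a frame along per-pixel MVs with a depth test.

    frame : (H, W) intensities of the last rendered frame
    depth : (H, W) per-pixel depth (smaller = nearer the camera)
    mvs   : (H, W, 2) per-pixel (dy, dx) application-generated MVs
    Colliding targets are resolved in favor of the smaller depth.
    """
    h, w = frame.shape
    out = np.zeros_like(frame)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            ty, tx = y + mvs[y, x, 0], x + mvs[y, x, 1]
            if 0 <= ty < h and 0 <= tx < w and depth[y, x] < zbuf[ty, tx]:
                zbuf[ty, tx] = depth[y, x]      # nearer pixel wins
                out[ty, tx] = frame[y, x]
    return out

frame = np.zeros((6, 6)); frame[2, 2] = 255     # one bright pixel
depth = np.ones((6, 6))
mvs = np.zeros((6, 6, 2), dtype=int); mvs[2, 2] = (0, 2)
print(extrapolate_frame(frame, depth, mvs)[2])  # pixel lands at column 4
```
Note that the moved-from location is left as a hole in this toy example; real systems fill such disocclusions with additional logic.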
Allowable Subject Matter
Claims 14 and 16-20 are allowed.
The following is a statement of reasons for the indication of allowable subject matter:
Prior art fails to fairly suggest a method, comprising: calculating a motion vector (MV) of a first block between a past frame (PF) and a current frame (CF) of an input image, wherein the input image comprises a plurality of pixels partitioned into a plurality of blocks, determining potential foreground MVs and potential background MVs for the first block,
determining reliable foreground MVs and reliable background MVs of the potential foreground and background MVs for a corresponding region, wherein determining the reliable foreground MVs and reliable background MVs of the potential foreground and background MVs is based on previous global foreground MVs, previous regional foreground MVs, and previous regional background MVs,
generating a virtual depth of the first block based at least in part on the reliable foreground and background MVs of the corresponding region, wherein the virtual depth is generated based on a first difference in MVs between a current MV field and the foreground MVs of the corresponding region and a second difference in MVs between the current MV field and the background MVs of the corresponding region, as claimed in method claim 14.
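Purely to fix ideas (this is an illustrative reading, not the applicant's actual formulation), a virtual depth driven by the two recited differences could look like the sketch below: the depth of a block grows with its MV's distance from the regional foreground MV and shrinks with its distance from the regional background MV. The normalized-ratio form and all names are assumptions.
```python
import numpy as np

def virtual_depth(mv, fg_mv, bg_mv, eps=1e-6):
    """Illustrative virtual depth from two MV differences.

    mv     : current MV of the block
    fg_mv  : reliable regional foreground MV
    bg_mv  : reliable regional background MV
    Returns a value in [0, 1]: near 0 when the block moves like the
    foreground, near 1 when it moves like the background.  The ratio
    form is an assumption; the claim language only requires that the
    depth be based on both differences.
    """
    d_fg = np.linalg.norm(np.subtract(mv, fg_mv))   # first difference
    d_bg = np.linalg.norm(np.subtract(mv, bg_mv))   # second difference
    return d_fg / (d_fg + d_bg + eps)

print(virtual_depth((0, 5), fg_mv=(0, 6), bg_mv=(0, 0)))  # ~0.17, foreground-like
```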
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571)272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNY N TRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615