DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-18, 20-27 and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US Pub. No. 2020/0084427).
With respect to claim 1, Sun discloses a computer-implemented method (see figure 3), comprising:
generating a first [augmented] frame by combining a first image and a first frame of a first frame pair, (see paragraph 0045, wherein …for processing by the optical flow decoder 110, the top level (l=L), the initial optical flow estimate is initialized to 0 and provided to the warping layer(s) 125 and optical flow estimator layer(s) 140 by the upsampler 152. Beginning at the top level of the feature pyramids, the features of the second image at the current level are warped using the initial optical flow estimate. For subsequent levels of the feature pyramids, the features of the second image at the current level are warped using the refined optical flow estimate, w.sup.1 computed by the optical flow decoder 110 for the next higher (coarser) pyramid level that is upsampled by the upsampler 152);
generating, via an optical flow estimation model, a first flow estimation based on a second frame of the first frame pair and the first [augmented] frame, (see paragraph 0004; wherein …optical flow …using dedicated neural networks “optical flow estimation model”…; and paragraph 0050, wherein …For the first iteration to estimate the optical flow, the top (l=L) level of the feature pyramid for the second image is warped toward the top level of the feature pyramid for the first image using an initial optical flow estimate. Importantly, the feature pyramid structures and warping enable a reduction in the search range (in pixels) used to compute the partial cost volume…The optical flow estimate is then computed using the top level of the first feature pyramid, …and the initial optical flow estimate…); and
updating one or both of parameters or weights of the optical flow estimation model based on a first loss between the first flow estimation and a training target, (see paragraph 0070, wherein …loss function unit 225 outputs updated parameters to the scene flow estimation system 150…), as claimed.
However, Sun fails to explicitly disclose generating a first augmented frame by combining a first image and a first frame of a first frame pair; and generating, via an optical flow estimation model, a first flow estimation based on a second frame of the first frame pair and the first augmented frame, (emphasis added), as claimed.
But, the warped image is read as the claimed augmented frame, as described in paragraph 0032, wherein …The optical flow decoder 110 uses an upsampled optical flow computed for the previous (l−1) level of the pyramid structures to warp the features of the second image for the lth level…”, as claimed.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Sun to generate the warped image, i.e., the augmented frame, for use in generating the optical flow as suggested, as this yields predictable results, as claimed.
With respect to claim 2, Sun further discloses generating a second augmented frame by combining a second image and the second frame (see paragraph 0050, wherein … The optical flow estimate is then computed using the top level of the first feature pyramid, …and the initial optical flow estimate. The computed optical flow estimate is then upsampled and the process is repeated (starting at the warping “generating a second augmented frame”) for the (1=l−1) level of the feature pyramids…), wherein:
the first image and the second image correspond to different frames of a second frame pair; and the second frame pair is different than the first frame pair, (see figure 2A, the left and right images and the frames are i and i+1 input to the scene flow estimation system “different frames”), as claimed.
With respect to claim 3, Sun further discloses generating, via the optical flow estimation model, a second flow estimation based on the first frame and the second frame, (see figure 2A, the input as the frames i and i+1; and see paragraph 0050, wherein … The optical flow estimate is then computed using the top level of the first feature pyramid, …and the initial optical flow estimate. The computed optical flow estimate is then upsampled and the process is repeated (starting at the warping “generating a second augmented frame”) for the (1=l−1) level of the feature pyramids…), as claimed.
With respect to claims 4 and 5, Sun further discloses updating one or both of the parameters or the weights of the optical flow estimation model to minimize a second loss between the second flow estimation and the training target; and wherein the training target is a ground truth visual flow between the first frame and the second frame, (see paragraph 0070, wherein …updated parameters to the scene flow estimation system 150. The parameters are updated to reduce differences between the ground-truth annotations and the optical flow, disparity, and occlusion estimations…), as claimed.
With respect to claim 6, Sun discloses all the elements as claimed and as rejected in claim 4 above. However, Sun fails to explicitly disclose wherein the first loss is based, at least in part, on a mixing ratio indicating a ratio of the first image combined with the first frame, as claimed.
But, it is well-known (“Official Notice”) in the art to base a loss function for a model/neural network on a mixing ratio (see US Patent No. 11,537,139, col. 6, lines 40-47) to establish the parameters of the model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the conventional knowledge (“Official Notice”) of basing the loss function for the model/neural network on a mixing ratio, as this yields predictable results.
With respect to claim 7, Sun further discloses wherein the training target is the second flow estimation (see paragraph 0070, wherein … The parameters are updated to reduce differences between the ground-truth “training target” annotations and the optical flow, disparity, and occlusion estimations. In an embodiment, backward propagation through the layers of the scene flow estimation system 150 is used to update the parameters), as claimed.
With respect to claim 10, Sun discloses all the elements as claimed and as rejected in claim 1 above. However, Sun fails to explicitly disclose wherein the first image and the first frame are combined by superimposing the first image onto the first frame.
But, it is well-known (“Official Notice”) in the art to superimpose two frames (see US Pub. No. 2021/0192750, paragraph 0186) to attain a final image from a camera mounted on a vehicle.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the conventional knowledge (“Official Notice”) of superimposing two frames, as this yields predictable results.
With respect to claim 11, Sun further discloses wherein the first frame pair is a pair of frames from a sequence of frames, (see Abstract wherein …in a video sequence…), as claimed.
Claims 12-18 and 20 are rejected for the same reasons as set forth in the rejections of claims 1-7 and 10, because claims 12-18 and 20 recite subject matter of similar scope to that of claims 1-7 and 10.
Claims 21-27 and 29 are rejected for the same reasons as set forth in the rejections of claims 1-7 and 10, because claims 21-27 and 29 recite subject matter of similar scope to that of claims 1-7 and 10.
Claim 30 is rejected for the same reasons as set forth in the rejection of claim 1, because claim 30 recites subject matter of similar scope to that of claim 1. Furthermore, Sun discloses one or more processors; and one or more memories coupled with the one or more processors and storing instructions operable, when executed by the one or more processors, to cause the apparatus to: receive a first frame and a second frame (see figure 2A, and for training see paragraph 0068), as claimed.
Allowable Subject Matter
Claims 8, 9, 19 and 28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI whose telephone number is (571) 272-7415. The examiner can normally be reached Monday-Friday, 7:00 AM-3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKKRAM BALI/Primary Examiner, Art Unit 2663