DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 3-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044).
To claim 1, Lam teach a method (Fig. 1, paragraph 0015) comprising:
obtaining a plurality of images, wherein each respective image of the plurality of images includes a corresponding representation of a scene captured by a camera (610 of Fig. 6, paragraph 0050);
obtaining corresponding motion data generated by a motion sensor, wherein the corresponding motion data represents an amount of motion present when the respective image was captured (paragraphs 0029, 0034, obvious in motion sensor equipped with the camera platform);
determining, for each respective image of the plurality of images, a corresponding sharpness metric indicative of a sharpness with which the respective image represents the scene (620 of Fig. 6; paragraph 0050);
determining, for each respective image of the plurality of images, a corresponding weight based on the corresponding sharpness metric of the respective image (paragraph 0052); and
determining an output image by combining the plurality of images according to the corresponding weight of each respective image (Figs. 4, 6-8; paragraphs 0053-0059).
But, Lam do not expressly disclose the camera being on a vehicle, the motion sensor being on the vehicle, or determining, for each respective image of the plurality of images and based on the corresponding motion data, a corresponding sharpness metric indicative of a sharpness with which the respective image represents the scene.
Williams teach an integrated system (Fig. 9) utilizing motion data that corresponds to an image frame capture time to analyze scene stability (paragraph 0024), wherein evaluation of image sharpness may be determined using multiple types of data, e.g., image data, auto-focus and auto-exposure data, motion sensor data, etc. (paragraph 0028).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Williams into the method of Lam, in order to dynamically adjust camera settings in accordance with vehicle application preference.
To claim 19, Lam and Williams teach a system (as explained in response to claim 1 above).
To claim 20, Lam and Williams teach a non-transitory computer-readable medium having stored thereon instructions that, when executed by a computing device, cause the computing device to perform operations (as explained in response to claim 1 above).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Zivkovic (US2019/0045142).
To claim 3, Lam and Williams teach claim 1.
Though it is obvious that stability affects imaging quality, Lam and Williams do not expressly disclose wherein the corresponding sharpness metric is inversely proportional to the amount of motion present when the respective image was captured.
Zivkovic teaches a sharpness metric that is inversely proportional to the amount of motion present when the respective image was captured (paragraph 0042), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for further metric implementation.
Claim(s) 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Miller et al. (US12284440).
To claim 4, Lam and Williams teach claim 1.
Lam and Williams teach wherein the corresponding motion data represents one or more of a displacement, a speed, or an acceleration of the vehicle when the respective image was captured, and wherein the motion sensor comprises one or more of: (i) an inertial measurement unit, (ii) a speedometer, (iii) a lidar device, or (iv) a radar device (Williams, paragraph 0038, accelerometers, which render a speedometer an obvious application).
Miller teach a tracking system comprising a camera and a motion sensor (Fig. 1) applied on a vehicle (column 10, lines 2-5), wherein a sharpness metric is determined for an image captured during a particular motion (column 3, lines 12-29; column 6, lines 1-29), and wherein said motion sensor measures speed (column 10, lines 55-60), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams, in order to further the sensor implementation.
To claim 5, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the corresponding sharpness metric comprises: determining the corresponding sharpness metric based on processing the respective image using a machine learning model.
Miller teach determining the corresponding sharpness metric comprises: determining the corresponding sharpness metric based on processing the respective image using a machine learning model (column 3 lines 12-29, column 6 lines 1-29), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam and Williams, in order to further machine learning implementation.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Celik et al. (US2019/0373162).
To claim 6, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the corresponding sharpness metric comprises: determining a gradient value representing a gradient of the respective image; and determining the corresponding sharpness metric based on the gradient value.
Celik teach determining a gradient value representing a gradient of the respective image; and determining the corresponding sharpness metric based on the gradient value (paragraphs 0026-0027), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam and Williams, in order to further image analysis.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Kitajima (US2004/0239795).
To claim 7, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the corresponding sharpness metric comprises: determining an exposure value representing an exposure time used in connection with generating the respective image; and determining the corresponding sharpness metric based on the exposure value.
Kitajima teaches a digital camera, wherein exposure time affects sharpness (Figs. 6A-B, paragraphs 0098-0130), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for sharpness analysis by design preference.
Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Chen (US2025/0272800).
To claim 8, Lam and Williams teach claim 1.
Lam and Williams teach wherein determining the corresponding sharpness metric comprises: determining, for each respective pixel of a plurality of pixels of the respective image, a corresponding optical flow value representing an apparent motion of the respective pixel between the respective image and at least one other image of the plurality of images; and
determining the corresponding sharpness metric based on the corresponding optical flow value of each respective pixel of the plurality of pixels (obvious in Lam, paragraphs 0006-0012).
In furthering said obviousness, Chen teaches that an image sharpness degree may be determined based on a weight coefficient derived from a global motion amplitude caused by lens movement (paragraph 0078), wherein the global motion amplitude may be obtained based on an optical flow method (paragraph 0075), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for sharpness analysis by design preference.
To claim 9, Lam and Williams teach claim 1.
Lam and Williams teach wherein: determining the corresponding sharpness metric comprises determining, for each respective pixel of a plurality of pixels of the respective image, a pixel sharpness metric indicative of a sharpness with which the respective pixel represents a corresponding portion of the scene, determining the corresponding weight comprises determining, for each respective pixel of the plurality of pixels of the respective image, a corresponding pixel weight based on the corresponding pixel sharpness metric of the respective pixel, and determining the output image comprises combining the plurality of pixels of each respective image according to the corresponding pixel weight of each respective pixel (obvious in Lam, paragraphs 0006-0012, 0052, wherein pixel weight involvement is well-known in the art).
In furthering said obviousness, Chen teaches that an image sharpness degree may be determined based on a weight coefficient derived from a global motion amplitude caused by lens movement (paragraph 0078), wherein the global motion amplitude may be obtained based on an optical flow method (paragraph 0075), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for sharpness analysis by design preference.
Claim(s) 10, 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Jouppi et al. (US2020/0082541).
To claim 10, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose further comprising: determining, based on a first image of the plurality of images, a segmentation mask representing two or more visual feature classes present in the scene, wherein determining the corresponding weight comprises: determining, for each respective semantically-distinct region of the segmentation mask, a corresponding class weight based on a corresponding visual feature class represented by the respective semantically-distinct region in the first image, wherein (i), for a first one or more visual feature classes, the corresponding class weight is selected to perform image sharpening and (ii), for a second one or more visual feature classes, the corresponding class weight is selected to perform image denoising.
Jouppi teach a segmentation mask representing two or more visual feature classes present in the scene, wherein determining the corresponding weight comprises: determining, for each respective semantically-distinct region of the segmentation mask, a corresponding class weight based on a corresponding visual feature class represented by the respective semantically-distinct region in the image, wherein (i), for a first one or more visual feature classes, the corresponding class weight is selected to perform image sharpening and (ii), for a second one or more visual feature classes, the corresponding class weight is selected to perform image denoising (paragraphs 0028-0054), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam and Williams for image analysis by design preference.
To claim 14, Lam and Williams teach claim 1.
Lam, Williams and Jouppi teach further comprising: determining, based on a first image of the plurality of images, a segmentation mask representing two or more visual feature classes present in the scene; and determining a first denoised image by denoising the first image based on the segmentation mask, wherein an extent of denoising applied to each respective pixel of a plurality of pixels of the first image is based on a visual feature class represented by the respective pixel, and wherein the output image is determined by combining the first denoised image with other images of the plurality of images (as explained in response to claim 10 above).
To claim 15, Lam, Williams and Jouppi teach claim 14.
Lam, Williams and Jouppi teach wherein determining the first denoised image comprises: determining, for each respective image of the plurality of images, a corresponding denoised image by denoising the respective image based on the segmentation mask, wherein a corresponding extent of denoising applied to each respective pixel of a plurality of pixels of the respective image is based on a corresponding visual feature class represented by the respective pixel, and wherein the output image is determined by combining the corresponding denoised image of each respective image of the plurality of images (as explained in response to claim 10 above).
Claim(s) 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Gilley et al. (US2010/0172551).
To claim 11, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the corresponding weight comprises: determining, for each respective image of the plurality of images, a corresponding elapsed time between (i) a reference time and (ii) a time at which the respective image has been captured; and determining, for each respective image of the plurality of images, the corresponding weight further based on the corresponding elapsed time.
Gilley teach determining the corresponding weight comprises: determining, for each respective image of the plurality of images, a corresponding elapsed time between (i) a reference time and (ii) a time at which the respective image has been captured; and determining, for each respective image of the plurality of images, the corresponding weight further based on the corresponding elapsed time (paragraph 0065), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for image analysis by design preference.
To claim 12, Lam, Williams and Gilley teach claim 11.
Lam, Williams and Gilley teach wherein the reference time represents a time at which a most recent image of the plurality of images has been captured (Gilley, paragraph 0114), and wherein the corresponding weight is inversely proportional to the corresponding elapsed time (Gilley, paragraphs 0065-0067, similarity score positively weights correlation score, wherein a longer elapsed time produces a lower similarity score).
Claim(s) 13, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Gallagher (US2003/0161545).
To claim 13, Lam and Williams teach claim 1.
Lam and Williams teach wherein determining the corresponding weight comprises: determining, for each respective image of the plurality of images, a corresponding signal-to-noise ratio (SNR); and further determining, for each respective image of the plurality of images, the corresponding weight based on the corresponding SNR (obvious in Lam, 630 of Fig. 6; SNR is an obvious representation of a noise metric).
In furthering said obviousness, Gallagher teaches determining a weight based on SNR (Fig. 4; paragraphs 0031-0050), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for image analysis by design preference.
To claim 17, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the output image comprises: determining an intermediate image by determining a weighted sum of the plurality of images according to the corresponding weight of each respective image; and determining the output image by convolving the intermediate image with a predefined kernel.
Gallagher teaches determining an intermediate image by determining a weighted sum of the plurality of images according to the corresponding weight of each respective image; and determining the output image by convolving the intermediate image with a predefined kernel (paragraphs 0035, 0065-0080), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lam and Williams for image analysis by design preference.
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044), Jouppi et al. (US2020/0082541) and Sekiguchi et al. (US2003/0035477).
To claim 16, Lam, Williams and Jouppi teach claim 14.
Lam, Williams and Jouppi teach wherein each respective visual feature class of the two or more visual feature classes is associated with a corresponding predetermined extent of denoising that allows image portions representing the respective visual feature class to be compressed with at least a threshold compression ratio (obvious as explained in response to claim 10 above).
In furthering said obviousness, Sekiguchi teach allowing image portions representing the respective visual feature class to be compressed with at least a threshold compression ratio (paragraphs 0013-0015), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam, Williams and Jouppi for image analysis by design preference.
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Caviedes et al. (US2016/0173752).
To claim 18, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose further comprising: aligning the plurality of images to compensate for variations in different perspectives from which the plurality of images have been captured, wherein the output image is determined based on the plurality of images as aligned.
Caviedes teach aligning the plurality of images to compensate for variations in different perspectives from which the plurality of images have been captured, wherein the output image is determined based on the plurality of images as aligned (paragraphs 0027, 0052), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam and Williams, in order to further video stability processing.
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (US2012/0114235) in view of Williams et al. (US2014/0363044) and Liu et al. (US2022/0392202).
To claim 21, Lam and Williams teach claim 1.
But, Lam and Williams do not expressly disclose wherein determining the corresponding weight comprises: obtaining vehicle task data representing a task to be performed by the vehicle based on the plurality of images, wherein the vehicle task data comprises, for each respective visual feature class of a plurality of visual feature classes, a corresponding importance value that indicates an importance of the respective visual feature class to the task; and determining the corresponding weight based on the vehicle task data.
Liu teach determining the corresponding weight comprises: obtaining vehicle task data representing a task to be performed by the vehicle based on the plurality of images, wherein the vehicle task data comprises, for each respective visual feature class of a plurality of visual feature classes, a corresponding importance value that indicates an importance of the respective visual feature class to the task; and determining the corresponding weight based on the vehicle task data (Figs. 2-3; paragraph 0041), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate teaching into the method of Lam and Williams, in order to further smart video analysis.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571)272-2837. The examiner can normally be reached on weekdays from 8:30 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 February 3, 2026