Prosecution Insights
Last updated: April 18, 2026
Application No. 18/646,640

METHOD, DEVICE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR ENHANCING VEHICLE SURROUNDING IMAGES

Status: Final Rejection (§103)
Filed: Apr 25, 2024
Examiner: HUNTSINGER, PETER K
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Nanning Fulian Fugui Precision Industrial Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 28% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 11m
Grant Probability With Interview: 45%

Examiner Intelligence

Career Allow Rate: 28% (90 granted / 322 resolved; -34.0% vs TC avg)
Interview Lift: +16.7% on resolved cases with interview (a strong lift)
Typical Timeline: 4y 11m average prosecution
Currently Pending: 59 applications
Career History: 381 total applications, across all art units
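The headline figures in this panel are simple arithmetic on the career counts shown above. A quick sanity check (all inputs come from this page; the "lift" is an additive percentage-point bump, which is how 28% becomes 45% with an interview):

```python
# Reproduce the dashboard's headline examiner figures from its own inputs.
granted, resolved = 90, 322

career_allow_rate = granted / resolved      # ~0.2795, displayed as 28%
print(f"Career allow rate: {career_allow_rate:.1%}")

# The interview lift is additive: the baseline rate plus 16.7 percentage points.
baseline, lift = 0.28, 0.167
with_interview = baseline + lift            # ~0.447, displayed as 45%
print(f"Grant probability with interview: {with_interview:.0%}")
```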

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Tech Center averages are estimates, based on career data from 322 resolved cases.
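Each delta above is measured against the Tech Center average, so the implied TC baseline can be backed out as the examiner's rate minus the delta. A quick reconstruction using only the figures shown (no outside data):

```python
# Back out the implied Tech Center average behind each statute delta:
# TC average = examiner rate - (delta vs TC avg), all values in percent.
examiner_rate = {"§101": 9.3, "§102": 19.4, "§103": 50.3, "§112": 19.0}
delta_vs_tc   = {"§101": -30.7, "§102": -20.6, "§103": 10.3, "§112": -21.0}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

All four statutes back out to the same ~40% baseline, which suggests the dashboard compares every statute against a single Tech Center average estimate rather than per-statute averages.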

Office Action

§103
DETAILED ACTION

Claims 1-10 are currently pending. The previous rejection of claims 1-10 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, is withdrawn in view of Applicant's amendment.

Response to Arguments

Applicant's arguments filed 3/16/26 have been fully considered but they are not persuasive.

The Applicant argues on page 7 of the response, in essence, that: The Examiner asserts that it would have been obvious to combine the panoramic stitching of Zheng, the external image integration of Balasubramanian, and the image enhancement of Ren. However, there is no teaching or suggestion in the cited art to combine these features into the specific architecture claimed in amended claim 1.

The Non-Final Rejection of 2/17/26 stated on pages 4 and 5 the motivation to combine Balasubramanian with Zheng and Ren with Zheng. The Applicant has not pointed out any specific deficiency in either motivation statement. Therefore, Applicant's conclusory statement is unpersuasive.

The Applicant argues on page 7 of the response, in essence, that: Specifically, Zheng focuses on basic stitching of vehicle-mounted cameras. Balasubramanian focuses on using external images to eliminate blind spots. Ren discloses general image enhancement (like gamma correction). None of these references, alone or in combination, teach or suggest classifying external images into groups based on similarity to vehicle-mounted camera groups, and then specifically selecting and enhancing only a "benchmark image" before stitching.

In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

The Applicant further argues that this specific sequence provides an unexpected synergy that the cited references fail to address.
By selecting a "benchmark image" for enhancement after similarity classification but before stitching, the present invention significantly reduces the computational burden compared to enhancing all captured images.

In response to Applicant's argument that the present invention significantly reduces the computational burden compared to enhancing all captured images, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985).

The Applicant argues on page 8 of the response, in essence, that: Furthermore, because the external images of Balasubramanian and vehicle images of Zheng originate from different sensors with different optical characteristics, a simple "combination" as suggested by the Examiner would likely result in visual artifacts or stitching failures. The claimed "benchmark-based enhancement" ensures that the images are normalized and optimized specifically for the stitching process, leading to a more coherent panoramic view than what would be achieved by the general methods of Zheng, Balasubramanian, and Ren.

The Applicant's disclosure likewise requires obtaining external images from different sensors with different optical characteristics. Balasubramanian discloses combining the different views by providing a panoramic visual view which eliminates blind spots (paragraph 154). Applicant's contention that this would result in visual artifacts or stitching failures would render Balasubramanian unsatisfactory for its intended purpose.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng, Chinese Publication 115239724A (hereafter "Zheng"); Balasubramanian et al., US Publication 2024/0394931 (hereafter "Balasubramanian"); and Ren et al., US Publication 2024/0112472 (hereafter "Ren").

Referring to claims 1, 9 and 10, Zheng discloses a method for enhancing vehicle surrounding images, the method comprising: obtaining a plurality of images captured by a plurality of vehicle-mounted cameras (page 2, the specific operation mode is using several cameras arranged on the vehicle body to collect the environment image of the shooting angle interval); dividing the plurality of images into M groups of environmental images according to different view angles, wherein M is a natural number greater than 1 (page 2, counting the number of cameras present on the target vehicle, and numbering each camera according to the preset order); selecting a benchmark image of each group of the M groups of image collections (page 4, S66: comparing the high quality index of the environment image in the overlapped shooting angle interval corresponding to each characteristic camera, selecting the characteristic camera with the highest quality index as the preferable camera corresponding to the overlapped shooting angle interval); and stitching the benchmark images of the M groups of image collections to generate an enhanced vehicle surrounding image (page 3, S8: the effective environment image in the shooting angle interval corresponding to each camera is spliced for 360 degrees, obtaining the panoramic image of the target vehicle corresponding to the effective view range).

While Zheng discloses stitching images to generate an enhanced vehicle surrounding image, Zheng does not disclose using external images. Balasubramanian discloses receiving external images captured by external devices (paragraph 149, Referring again to FIG. 13 as an illustrative example, once the C2C server 1360 receives the request from the vehicle 1310 (e.g., first vehicle), the C2C server 1360 can request the vehicle 1310 and other nearby vehicles (e.g., vehicles 1320, 1330, 1340) located in the vicinity of the vehicle 1310 to provide the visual key points and associated feature descriptors of their views to the C2C server 1360); comparing a similarity between each of the external images and each group of the environmental images (paragraph 151, At block 1430, the device (or component thereof) can match the key points and the associated feature descriptors related to the one or more other vehicles to the key points and the associated feature descriptors related to the first vehicle); classifying each of the external images into one group of the environmental images with a highest similarity to form M groups of image collections (paragraph 153, At block 1450, the device (or component thereof) can determine at least one mapping (e.g., at least one homography) between the at least one vehicle and the first vehicle. For instance, referring to FIG. 13, the C2C server 1360 can determine or infer a mapping (e.g., homography) between the vehicle 1310 and the vehicle 1320); and stitching the images of the M groups of image collections to generate an enhanced vehicle surrounding image (paragraph 154, At block 1460, the device (or component thereof) can combine, using the at least one mapping, the at least one ROI view of the at least one vehicle with the view of the first vehicle to generate a combined image having the visual view of the ROI).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use external images to generate an enhanced vehicle surrounding image. The motivation for doing so would have been to improve the vehicle surrounding image by removing blind spots from the image.

While Zheng discloses a benchmark image of each group, Zheng does not expressly disclose enhancing the benchmark image of each group. Ren discloses utilizing a preset enhancement processing to enhance the benchmark image of each group to obtain an enhanced benchmark image of each group (paragraph 72, The method 1000, at block B1002, includes processing captured image data into processed image data using a particular type of image processing. For example, with respect to FIG. 9, the image processing module 905 may process frames of captured image data 902 using any number and type of known image processing techniques to generate frames of processed image data 910. Example image processing may include gamma correction to improve color range, exposure compensation, tone mapping, noise reduction, removing bad pixels, applying white balance, applying color correction to remove lens shading artifacts in fisheye images, and/or others). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to perform an image enhancement process.
The motivation for doing so would have been to improve the quality of the vehicle surrounding image. Therefore, it would have been obvious to combine Balasubramanian and Ren with Zheng to obtain the invention as specified in claims 1, 9 and 10.

Referring to claim 2, Balasubramanian discloses wherein the external devices comprise other vehicles and roadside units (paragraph 149, Referring again to FIG. 13 as an illustrative example, once the C2C server 1360 receives the request from the vehicle 1310 (e.g., first vehicle), the C2C server 1360 can request the vehicle 1310 and other nearby vehicles (e.g., vehicles 1320, 1330, 1340) located in the vicinity of the vehicle 1310 to provide the visual key points and associated feature descriptors of their views to the C2C server 1360).

Referring to claim 3, Balasubramanian discloses wherein the method further comprises: after receiving the external images, preprocessing each of the external images, wherein the preprocessing comprises transposed projection (paragraph 43, The C2C server may then determine or infer a mapping (e.g., a homography) between the first vehicle and each of the appropriate vehicles. For instance, because the corresponding points from the matched descriptors map to the same 3D point, a mapping or transformation (e.g., homography) between two views (including a first image from the first vehicle and a second image from another vehicle) can be obtained from a mapping between the 3D point and the corresponding point of the first image and a mapping between the 3D point and the corresponding point of the second image). Ren discloses wherein the preprocessing comprises fisheye correction (paragraph 72, The method 1000, at block B1002, includes processing captured image data into processed image data using a particular type of image processing. For example, with respect to FIG. 9, the image processing module 905 may process frames of captured image data 902 using any number and type of known image processing techniques to generate frames of processed image data 910. Example image processing may include gamma correction to improve color range, exposure compensation, tone mapping, noise reduction, removing bad pixels, applying white balance, applying color correction to remove lens shading artifacts in fisheye images, and/or others).

Referring to claim 4, Zheng discloses wherein the selecting a benchmark image of each group of the M groups of image collections further comprises: determining an image quality of each image of each group of the M groups of image collections (pages 3-4, S63: respectively extracting the quality index of the environment image in the overlapped shooting angle interval corresponding to each characteristic camera, wherein the quality index comprises resolution, colour depth and signal-to-noise ratio); and selecting the benchmark image based on the image quality of each image of each group of the M groups of image collections (page 4, S66: comparing the high quality index of the environment image in the overlapped shooting angle interval corresponding to each characteristic camera, selecting the characteristic camera with the highest quality index as the preferable camera corresponding to the overlapped shooting angle interval).

Referring to claim 5, Zheng discloses wherein the selecting a benchmark image of each group of the M groups of image collections further comprises: obtaining a pre-stitched image corresponding to each image in each group of image collections by stitching the each image with environmental images in other groups (page 3, S8: the effective environment image in the shooting angle interval corresponding to each camera is spliced for 360 degrees, obtaining the panoramic image of the target vehicle corresponding to the effective view range); and selecting the benchmark image of each group of image collections according to image qualities of the pre-stitched images (page 4, In a further technical solution, the specific operation method corresponding to S9 is as follows: In S91: the target vehicle corresponding to the panoramic image in the effective view range for splicing position mark, and each splicing position of the mark according to the set order number sequentially 1, 2, ..., i, ..., n).

Referring to claim 6, Zheng discloses wherein the selecting the benchmark image of each group of image collections according to image qualities of the pre-stitched images further comprises: configuring an image block corresponding to a view angle of each group of image collections in each of the pre-stitched images as an area of interest (page 3, S3: obtaining the viewing angle range corresponding to each camera, and combining the layout orientation of each camera to obtain the shooting angle interval of each camera in one circumference); ranking each of the pre-stitched images according to an image quality of the area of interest of each of the pre-stitched images (page 4, S66: comparing the high quality index of the environment image in the overlapped shooting angle interval corresponding to each characteristic camera); and selecting an image with highest ranking as the benchmark image (page 4, S66: selecting the characteristic camera with the highest quality index as the preferable camera corresponding to the overlapped shooting angle interval).

Referring to claim 7, Ren discloses wherein the preset enhancement processing comprises at least one of an enhancement algorithm based on a spatial domain or an enhancement algorithm based on a frequency domain (paragraph 141, The video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction).
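The claim mapping above walks a specific pipeline: group the vehicle-camera images by view angle, fold external images into the most similar group, select one "benchmark image" per group by a quality index (claim 4 names resolution, color depth, and signal-to-noise ratio), enhance only the benchmarks, and stitch the result. A minimal sketch of that flow is below; every function body is an illustrative stand-in (the quality score, gamma enhancement, and concatenation-as-stitching are our simplifications, not the application's actual algorithms), and the similarity classification of external images is assumed to have been done upstream:

```python
def quality_index(img):
    """Toy stand-in for claim 4's quality criteria (resolution, color
    depth, SNR): here just pixel count times mean brightness."""
    h, w = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (h * w)
    return h * w * mean

def enhance(img, gamma=0.8):
    """Stand-in for Ren-style enhancement (simple gamma correction)."""
    return [[min(255.0, 255.0 * (p / 255.0) ** gamma) for p in row] for row in img]

def stitch(images):
    """Stand-in for panoramic stitching: naive horizontal concatenation."""
    return [sum((img[r] for img in images), []) for r in range(len(images[0]))]

def enhanced_surround_view(groups, external):
    """groups: M lists of vehicle-camera images, one list per view angle.
    external: (image, group_index) pairs -- the similarity classification
    of claim 1 is assumed to have been done upstream."""
    collections = [list(g) for g in groups]
    for img, idx in external:                     # fold externals into groups
        collections[idx].append(img)
    benchmarks = [max(c, key=quality_index) for c in collections]  # one per group
    enhanced = [enhance(b) for b in benchmarks]   # enhance only the benchmarks
    return stitch(enhanced)                       # stitch into the surround view
```

Structurally, the point of contention in the rejection is the ordering this sketch makes explicit: enhancement is applied only to the M selected benchmarks, after classification but before stitching.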
Referring to claim 8, Ren discloses wherein the preset enhancement processing comprises: adjusting feature parameters of the benchmark image that do not meet corresponding target values to the corresponding target values to obtain the enhanced image of the benchmark image, wherein the feature parameters comprise brightness, white balance, and sharpness (paragraph 72, The method 1000, at block B1002, includes processing captured image data into processed image data using a particular type of image processing. For example, with respect to FIG. 9, the image processing module 905 may process frames of captured image data 902 using any number and type of known image processing techniques to generate frames of processed image data 910. Example image processing may include gamma correction to improve color range, exposure compensation, tone mapping, noise reduction, removing bad pixels, applying white balance, applying color correction to remove lens shading artifacts in fisheye images, and/or others).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
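The reply mechanics in that conclusion reduce to date arithmetic: a three-month shortened statutory period running from the Apr 07, 2026 mailing date shown in the prosecution timeline, extendable for fees under 37 CFR 1.136(a) but never beyond the six-month statutory maximum. A sketch, where the month-addition helper is our own utility (not anything USPTO-specified):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

mailed = date(2026, 4, 7)                 # Final Rejection mailing date (from the timeline)
shortened_period = add_months(mailed, 3)  # reply due without extension fees
statutory_max = add_months(mailed, 6)     # absolute six-month statutory cutoff
print(shortened_period, statutory_max)    # 2026-07-07 2026-10-07
```

A first reply filed within the first two months of the final action also triggers the advisory-action safeguard described above, which can shift when extension fees start accruing.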
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER, whose telephone number is (571) 272-7435. The examiner can normally be reached Monday - Friday, 8:30 - 5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER K HUNTSINGER/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Apr 25, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §103
Mar 16, 2026
Response Filed
Apr 07, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12540884
Determining Fracture Roughness from a Core
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12412381
METHODS AND SYSTEMS FOR CONTROLLING OPERATION OF WIRELINE CABLE SPOOLING EQUIPMENT
Granted Sep 09, 2025 (2y 5m to grant)
Patent 12387360
APPARATUS AND METHOD FOR ESTIMATING UNCERTAINTY OF IMAGE COORDINATE
Granted Aug 12, 2025 (2y 5m to grant)
Patent 12388943
PRINTING SYSTEM USING FLUORESCENT AND NON-FLUORESCENT INK, PRINTING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL METHOD THEREOF
Granted Aug 12, 2025 (2y 5m to grant)
Patent 12374081
DIGITAL IMAGE PROCESSING TECHNIQUES USING BOUNDING BOX PRECISION MODELS
Granted Jul 29, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 28%
With Interview: 45% (+16.7%)
Median Time to Grant: 4y 11m
PTA Risk: Moderate

Based on 322 resolved cases by this examiner; grant probability is derived from the career allow rate.
