Prosecution Insights
Last updated: April 19, 2026
Application No. 18/202,289

AROUND VIEW MONITORING SYSTEM AND THE METHOD THEREOF

Status: Non-Final OA (§103)
Filed: May 26, 2023
Examiner: WERNER, DAVID N
Art Unit: 2487
Tech Center: 2400 (Computer Networks)
Assignee: HL Klemove Corp.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% (above average; 483 granted / 713 resolved; +9.7% vs TC avg)
Interview Lift: +16.2% (with vs. without interview, among resolved cases)
Typical Timeline: 3y 3m avg prosecution; 32 applications currently pending
Career History: 745 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 23.1% (-16.9% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Based on career data from 713 resolved cases; per-statute deltas are measured against Tech Center average estimates.

Office Action

§103
DETAILED ACTION

This Office action for U.S. Patent Application No. 18/202,289 is responsive to the Request for Continued Examination filed 22 December 2025, in reply to the Final Rejection of 22 September 2025. Claims 1–3, 5–12, 14–17, 19, and 20 are pending. In the previous Office action, claims 1, 2, 5–11, 14–16, 19, and 20 were rejected under 35 U.S.C. § 103 as obvious over US 2021/0321049 A1 ("Imura") in view of CN 104786933 A ("Liu"). Claims 3, 4, 12, 13, 17, and 18 were rejected under 35 U.S.C. § 103 as obvious over Imura in view of Liu and in view of US 2020/0013154 A1 ("Jang").

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 C.F.R. § 1.114

A request for continued examination under 37 C.F.R. § 1.114, including the fee set forth in 37 C.F.R. § 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 C.F.R. § 1.114, and the fee set forth in 37 C.F.R. § 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 C.F.R. § 1.114. Applicant's submission filed on 22 December 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Upon further search, the examiner finds that it was known in the art for vehicle exterior cameras to have an envelope of correctable field of view areas, with a distortion outside this envelope indicating the camera is misaligned or out of place, requiring a new calibration. US 2011/0285856 A1 is added to the record as representative.

Claim Rejections - 35 U.S.C.
§ 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5–11, 14–16, 19, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2021/0321049 A1 ("Imura") in view of Chinese Publication No. CN 104786933 A ("Liu")1 and in view of U.S. Patent Application Publication No. 2011/0285856 A1 ("Chung").

Imura, directed to image generation and display for a vehicle, teaches with respect to claim 1 an around view monitoring system, comprising: a first image processor (Fig. 2, birds-eye view image generation unit 11) configured to: generate a top view image by stitching a plurality of images (¶ 0032, generating a birds-eye view image as a projective transformation of captured images) received from a plurality of cameras mounted to a vehicle (Fig.
1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43), process the top view image based on a display setting (¶¶ 0055–56, adjust luminance or contrast), and control a display to display the processed top view image (¶ 0062, output to monitor 45); and a second image processor configured to: process an image received from at least one camera among the plurality of cameras based on a recognition setting (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13 recognizes a three-dimensional object), and detect an object in the processed image (¶ 0034, determine if the recognized object is a vehicle). The claimed invention first differs from Imura in that the claimed invention specifies outputting a warning sound notifying a driver of certain detected objects. Although the examiner believes this is well-known in the art as a component of the Imura driving assistance (¶ 0080), Imura does not explicitly recite a sound output. However, Liu, directed to driving assistance using panoramic images, teaches with respect to claim 1: control an alarm device to output a warning sound that notifies a driver of a detected object (pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p. 4, alarm device 9 that notifies the driver of “danger information recognized by the image processing device” may be a sound or audible voice alarm that is variable dependent on a danger degree). It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Imura to provide an audible warning when approaching a “WRONG WAY” or “DO NOT ENTER” sign, for example, as taught by Liu, in order to prevent wrong-way collisions. The claimed invention further differs from Imura in that the claimed invention recites details of correcting images. 
Imura teaches wherein the second image processor is configured to: extract a correction index according to a correction degree of each of an image received from the at least one camera and the processed image (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion); and inactivate a function of detecting an object in the processed image if the correction index exceeds a predefined value (¶ 0041, not covering portion of distorted area 55 with a substitute image 61). The claimed invention as amended specifies controlling the alarm device to notify the driver that the image is not valid upon determining that the correction index exceeds a predefined value. This differs from the commonplace practice of overlaying an image of the vehicle over an extreme central portion having the most distortion from a composite of panoramic lateral cameras, and instead suggests an error detection process. However, Chung, directed to a vehicle camera, teaches with respect to claim 1: inactivate a function of detecting the object in the processed image if the correction index exceeds a predefined value (¶¶ 0052–57, determine if camera error is outside the maximum correction range); and control the alarm device to notify the driver that the image is not valid (¶ 0054, screen output showing the error is above a maximum error correction range). It would have been obvious to one of ordinary skill in the art at the time of effective filing to incorporate the Chung maximum error detection into the Imura vehicle cameras, to notify the car owner or mechanics if the cameras need physical adjustment. Chung ¶ 0054.

Regarding claim 2, this claim is directed to correcting the image based on the display setting before stitching the plurality of images. Imura, in contrast, at Figures 2 and 3, generates the bird's-eye view image before any other processing.
However, it would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Imura to correct luminance and contrast of the individual camera images before stitching them into the bird's-eye view image, with the predictable result of better color matching at the borders between the images. See M.P.E.P. §§ 2144.04(VI)(C) (rearrangement of parts), 2144.04(IV)(C) (change in process sequence).

Regarding claim 5, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the second image processor is configured to transmit a control signal to control a driving control device according to a detection result (Imura ¶ 0080, driving assistance).

Regarding claim 6, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the display setting comprises information about a setting suitable for image display for each of a plurality of correction techniques (Imura ¶¶ 0055–56, adjusting luminance and contrast).

Regarding claim 7, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the recognition setting comprises information about a setting suitable for object detection for each of a plurality of correction techniques (Imura Fig. 2, shadow superimposition and distortion reducing).

Regarding claim 8, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein: the top view image is a first top view image (Imura ¶ 0032, birds-eye view image), and the second image processor is configured to: generate a second top view image by stitching the plurality of images received from the plurality of cameras (¶¶ 0022, 0025; capturing and displaying a plurality of images), and process the second top view image based on a recognition setting to detect an object in the second top view image (id.).
Regarding claim 9, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the plurality of cameras comprises a front camera configured to capture a front of the vehicle, a rear camera configured to capture a rear of the vehicle, a left camera configured to capture a left side of the vehicle, and a right camera configured to capture a right side of the vehicle (Imura Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43).

Regarding claim 10, Imura in view of Liu and Chung teaches an around view monitoring system, comprising: a first image processor (Imura Fig. 2, birds-eye view image generation unit 11) configured to primarily process a plurality of images received from a plurality of cameras mounted to a vehicle (Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43) based on a display setting (¶¶ 0055–56, adjust luminance or contrast), generate a top view image by stitching the plurality of processed images (¶ 0032, generating a birds-eye view image as a projective transformation of captured images), secondarily process the top view image based on the display setting (¶¶ 0055–56, adjusting other of luminance or contrast), and control a display to display the processed top view image (¶ 0062, output to monitor 45); and a second image processor (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13) configured to process the plurality of processed images or the top view image received from the first image processor based on a recognition setting (id., recognizing a three-dimensional object), detect an object in the plurality of processed images (¶ 0034, determine if the recognized object is a vehicle), and control an alarm device to output a warning sound that notifies a driver of a detected object (Liu pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p.
4, alarm device 9 that notifies the driver of "danger information recognized by the image processing device" may be a sound or audible voice alarm that is variable dependent on a danger degree), wherein the second image processor is configured to: extract a correction index according to the plurality of images received from the at least one camera and a correction degree of each of the plurality of processed images (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion); inactivate a function of detecting the object in the processed image if the correction index exceeds a predefined value (Chung ¶¶ 0052–57, determine if camera error is outside the maximum correction range); and control the alarm device to notify the driver that the image is not valid (¶ 0054, screen output showing the error is above a maximum error correction range).

Regarding claim 11, Imura in view of Liu and Chung teaches the around view monitoring system of claim 10, wherein the second image processor is further configured to determine whether the plurality of processed images or the top view image is suitable for object detection based on the recognition setting (Imura Fig. 3, determine that a three-dimensional object is not a vehicle, a specific object, or a utility pole), and restore a correction of the plurality of processed images or the top view image if the plurality of processed images or the top view image is not suitable for the object detection (id., do not perform processing specific to a vehicle, specific object, or utility pole but instead output the bird's eye view image).

Regarding claim 14, Imura in view of Liu and Chung teaches the around view monitoring system of claim 10, wherein the second image processor is configured to transmit a control signal to control at least one of an alarm device and a driving control device according to a detection result (Imura ¶ 0080, driving assistance).
Regarding claim 15, Imura in view of Liu and Chung teaches an around view monitoring method performed by an around view monitoring system, comprising: by a first image processor of the around view monitoring system (Imura Fig. 2, birds-eye view image generation unit 11): generating a top view image by stitching a plurality of images (¶ 0032, generating a birds-eye view image as a projective transformation of captured images) received from a plurality of cameras mounted to a vehicle (Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43); processing the top view image based on a display setting (¶¶ 0055–56, adjust luminance or contrast) and controlling a display to display the processed top view image (¶ 0062, output to monitor 45); and by a second image processor of the around view monitoring system (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13): processing an image received from at least one of the plurality of cameras based on a recognition setting (Fig. 2, ¶ 0033; recognizing a three-dimensional object); detecting an object in the processed image (¶ 0034, determine if the recognized object is a vehicle); controlling an alarm device to output a warning sound that notifies a driver of a detected object (Liu pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p. 4, alarm device 9 that notifies the driver of "danger information recognized by the image processing device" may be a sound or audible voice alarm that is variable dependent on a danger degree);
extracting a correction index according to the plurality of images received from the at least one camera and a correction degree of each of the plurality of processed images (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion); inactivating a function of detecting the object in the processed image if the correction index exceeds a predefined value (Chung ¶¶ 0052–57, determine if camera error is outside the maximum correction range); and controlling the alarm device to notify the driver that the image is not valid (¶ 0054, screen output showing the error is above a maximum error correction range).

Regarding claim 16, this claim is directed to correcting the image based on the display setting before stitching the plurality of images. Imura, in contrast, at Figures 2 and 3, generates the bird's-eye view image before any other processing. However, it would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Imura to correct luminance and contrast of the individual camera images before stitching them into the bird's-eye view image, with the predictable result of better color matching at the borders between the images. See M.P.E.P. §§ 2144.04(VI)(C) (rearrangement of parts), 2144.04(IV)(C) (change in process sequence).

Regarding claim 19, Imura teaches the around view monitoring method of claim 15, further comprising transmitting a control signal to control a driving control device according to a detection result (¶ 0080, driving assistance).

Regarding claim 20, Imura teaches the around view monitoring method of claim 15, wherein: the display setting comprises information about a setting suitable for image display for each of a plurality of correction techniques (¶¶ 0055–56, adjusting luminance and contrast), and the recognition setting comprises information about a setting suitable for object detection for each of a plurality of correction techniques (Fig.
2, shadow superimposition and distortion reducing).

Claims 3, 12, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Imura in view of Liu and Chung, and further in view of U.S. Patent Application Publication No. 2020/0013154 A1 ("Jang"). Claims 3, 12, and 17 are directed to further details of the around view monitoring system not disclosed by Imura or Liu. However, with respect to claims 3, 12, and 17, Jang teaches: detecting the object in the image based on a model trained to detect an object in an input image (¶¶ 0004–06, machine learning model; ¶ 0046, training image database). It would have been obvious to one having ordinary skill in the art at the time of effective filing to use a training model to recognize objects in Imura, as taught by Jang, so that the camera need not be calibrated during manufacturing. Jang ¶ 0074.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

CN 115511971 A
US 2019/0088011 A1
US 2016/0275683 A1
US 2015/0193916 A1

The following prior art was found using an Artificial Intelligence assisted search with an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See "New Artificial Intelligence Functionality in PE2E Search", 1504 OG 359 (15 November 2022); "Automated Search Pilot Program", 90 F.R. 48,161 (8 October 2025).

US 2015/0353010 A1
KR 20130095525 A
CN 107856608 A
KR 20180074093 A
KR 20130124762 A
KR 20210078614 A

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner whose telephone number is (571) 272-9662.
The examiner can normally be reached M-F 7:30-4:00 Central. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dave Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/David N Werner/
Primary Examiner, Art Unit 2487

1 A machine translation from the European Patent Office is added to the record, and is relied on, including for pagination.

Prosecution Timeline

May 26, 2023: Application Filed
May 13, 2025: Non-Final Rejection (§103)
Aug 14, 2025: Response Filed
Sep 17, 2025: Final Rejection (§103)
Dec 22, 2025: Request for Continued Examination
Jan 08, 2026: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598312: OVERHEAD REDUCTION IN MEDIA STORAGE AND TRANSMISSION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598297: METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593144: SOLID STATE IMAGING ELEMENT, IMAGING DEVICE, AND SOLID STATE IMAGING ELEMENT CONTROL METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587754: METHOD FOR DYNAMIC CORRECTION FOR PIXELS OF THERMAL IMAGE ARRAY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587689: METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 84% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
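The headline probabilities above can be reproduced from the career counts reported earlier on this page. This is a minimal sketch of that arithmetic, assuming the dashboard uses the simple grant/resolved ratio and treats the interview lift as an additive bump (the actual weighting it applies is not disclosed):

```python
# Figures taken from this report: 483 grants out of 713 resolved cases,
# and a reported +16.2% allowance lift for cases with an interview.
granted, resolved = 483, 713
interview_lift = 0.162

allow_rate = granted / resolved          # ~0.677, displayed as 68%
with_interview = allow_rate + interview_lift  # ~0.839, displayed as 84%

print(f"Baseline grant probability: {allow_rate:.1%}")
print(f"With interview:            {with_interview:.1%}")
```

Both rounded values match the dashboard's displayed 68% and 84%, which suggests the "With Interview" figure is simply the career allow rate plus the interview lift.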
