DETAILED ACTION
This Office action for U.S. Patent Application No. 18/202,289 is responsive to the Request for Continued Examination filed 22 December 2025, in reply to the Final Rejection of 22 September 2025.
Claims 1–3, 5–12, 14–17, 19, and 20 are pending.
In the previous Office action, claims 1, 2, 5–11, 14–16, 19, and 20 were rejected under 35 U.S.C. § 103 as obvious over US 2021/0321049 A1 (“Imura”) in view of CN 104786933 A (“Liu”). Claims 3, 4, 12, 13, 17, and 18 were rejected under 35 U.S.C. § 103 as obvious over Imura in view of Liu and further in view of US 2020/0013154 A1 (“Jang”).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 C.F.R. § 1.114
A request for continued examination under 37 C.F.R. § 1.114, including the fee set forth in 37 C.F.R. § 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 C.F.R. § 1.114, and the fee set forth in 37 C.F.R. § 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 C.F.R. § 1.114. Applicant's submission filed on 22 December 2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Upon further search, the examiner finds that it was known in the art for vehicle exterior cameras to have an envelope of correctable field of view areas, with a distortion outside this envelope indicating the camera is misaligned or out of place, requiring a new calibration. US 2011/0285856 A1 is added to the record as representative.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5–11, 14–16, 19, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2021/0321049 A1 (“Imura”) in view of Chinese Publication No. CN 104786933 A (“Liu”)1 and in view of U.S. Patent Application Publication No. 2011/0285856 A1 (“Chung”).
Imura, directed to image generation and display for a vehicle, teaches with respect to claim 1 an around view monitoring system, comprising:
a first image processor (Fig. 2, bird’s-eye view image generation unit 11) configured to:
generate a top view image by stitching a plurality of images (¶ 0032, generating a bird’s-eye view image as a projective transformation of captured images) received from a plurality of cameras mounted to a vehicle (Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43),
process the top view image based on a display setting (¶¶ 0055–56, adjust luminance or contrast), and
control a display to display the processed top view image (¶ 0062, output to monitor 45); and
a second image processor configured to:
process an image received from at least one camera among the plurality of cameras based on a recognition setting (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13 recognizes a three-dimensional object),
and detect an object in the processed image (¶ 0034, determine if the recognized object is a vehicle).
The claimed invention first differs from Imura in that the claimed invention specifies outputting a warning sound notifying a driver of certain detected objects. Although the examiner believes this is well-known in the art as a component of the Imura driving assistance (¶ 0080), Imura does not explicitly recite a sound output. However, Liu, directed to driving assistance using panoramic images, teaches with respect to claim 1:
control an alarm device to output a warning sound that notifies a driver of a detected object (pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p. 4, alarm device 9 that notifies the driver of “danger information recognized by the image processing device” may be a sound or audible voice alarm that is variable dependent on a danger degree).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Imura to provide an audible warning when approaching a “WRONG WAY” or “DO NOT ENTER” sign, for example, as taught by Liu, in order to prevent wrong-way collisions.
The claimed invention further differs from Imura in that the claimed invention recites details of correcting images. Imura teaches wherein the second image processor is configured to:
extract a correction index according to a correction degree of each of an image received from the at least one camera and the processed image (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion); and
inactivate a function of detecting an object in the processed image if the correction index exceeds a predefined value (¶ 0041, not covering a portion of the distorted area 55 with a substitute image 61).
The claimed invention as amended specifies controlling the alarm device to notify the driver that the image is not valid upon determining that the correction index exceeds a predefined value. This goes beyond the commonplace practice of overlaying an image of the vehicle over the central portion of the composite, where the distortion from the panoramic lateral cameras is greatest, and instead suggests an error detection process. However, Chung, directed to a vehicle camera, teaches with respect to claim 1:
inactivate a function of detecting the object in the processed image if the correction index exceeds a predefined value (¶¶ 0052–57, determine if camera error is outside the maximum correction range); and
control the alarm device to notify the driver that the image is not valid (¶ 0054, screen output showing the error is above a maximum error correction range).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Chung maximum error detection into the Imura vehicle cameras, in order to notify the vehicle owner or mechanic if the cameras need physical adjustment. Chung ¶ 0054.
Regarding claim 2, this claim is directed to correcting the image based on the display setting before stitching the plurality of images. Imura, in contrast, at Figures 2 and 3, generates the bird’s-eye view image before any other processing. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Imura to correct luminance and contrast of the individual camera images before stitching them into the bird’s-eye view image, with the predictable result of better color matching at the borders between the images. See M.P.E.P. §§ 2144.04(VI)(C) (rearrangement of parts), 2144.04(IV)(C) (change in process sequence).
Regarding claim 5, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the second image processor is configured to transmit a control signal to control a driving control device according to a detection result (Imura ¶ 0080, driving assistance).
Regarding claim 6, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the display setting comprises information about a setting suitable for image display for each of a plurality of correction techniques (Imura ¶¶ 0055–56, adjusting luminance and contrast).
Regarding claim 7, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the recognition setting comprises information about a setting suitable for object detection for each of a plurality of correction techniques (Imura Fig. 2, shadow superimposition and distortion reducing).
Regarding claim 8, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein:
the top view image is a first top view image (Imura ¶ 0032, bird’s-eye view image), and
the second image processor is configured to:
generate a second top view image by stitching the plurality of images received from the plurality of cameras (¶¶ 0022, 0025; capturing and displaying a plurality of images), and
process the second top view image based on a recognition setting to detect an object in the second top view image (id.).
Regarding claim 9, Imura in view of Liu and Chung teaches the around view monitoring system of claim 1, wherein the plurality of cameras comprises a front camera configured to capture a front of the vehicle, a rear camera configured to capture a rear of the vehicle, a left camera configured to capture a left side of the vehicle, and a right camera configured to capture a right side of the vehicle (Imura Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43).
Regarding claim 10, Imura in view of Liu and Chung teaches an around view monitoring system, comprising:
a first image processor (Imura Fig. 2, bird’s-eye view image generation unit 11) configured to
primarily process a plurality of images received from a plurality of cameras mounted to a vehicle (Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43) based on a display setting (¶¶ 0055–56, adjust luminance or contrast),
generate a top view image by stitching the plurality of processed images (¶ 0032, generating a bird’s-eye view image as a projective transformation of captured images),
secondarily process the top view image based on the display setting (¶¶ 0055–56, adjusting the other of luminance or contrast), and
control a display to display the processed top view image (¶ 0062, output to monitor 45); and
a second image processor (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13) configured to
process the plurality of processed images or the top view image received from the first image processor based on a recognition setting (id., recognizing a three-dimensional object),
detect an object in the plurality of processed images (¶ 0034, determine if the recognized object is a vehicle), and
control an alarm device to output a warning sound that notifies a driver of a detected object (Liu pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p. 4, alarm device 9 that notifies the driver of “danger information recognized by the image processing device” may be a sound or audible voice alarm that is variable dependent on a danger degree),
wherein the second image processor is configured to:
extract a correction index according to the plurality of images received from the at least one camera and a correction degree of each of the plurality of processed images (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion);
inactivate a function of detecting the object in the processed image if the correction index exceeds a predefined value (Chung ¶¶ 0052–57, determine if camera error is outside the maximum correction range); and
control the alarm device to notify the driver that the image is not valid (Chung ¶ 0054, screen output showing the error is above a maximum error correction range).
Regarding claim 11, Imura in view of Liu and Chung teaches the around view monitoring system of claim 10, wherein the second image processor is further configured to determine whether the plurality of processed images or the top view image is suitable for object detection based on the recognition setting (Imura Fig. 3, determine that a three-dimensional object is not a vehicle, a specific object, or a utility pole),
and restore a correction of the plurality of processed images or the top view image if the plurality of processed images or the top view image is not suitable for the object detection (id., do not perform processing specific to a vehicle, specific object, or utility pole but instead output the bird’s-eye view image).
Regarding claim 14, Imura in view of Liu and Chung teaches the around view monitoring system of claim 10, wherein the second image processor is configured to transmit a control signal to control at least one of an alarm device and a driving control device according to a detection result (Imura ¶ 0080, driving assistance).
Regarding claim 15, Imura in view of Liu and Chung teaches an around view monitoring method performed by an around view monitoring system, comprising:
by a first image processor of the around view monitoring system (Imura Fig. 2, bird’s-eye view image generation unit 11):
generating a top view image by stitching a plurality of images (¶ 0032, generating a bird’s-eye view image as a projective transformation of captured images) received from a plurality of cameras mounted to a vehicle (Fig. 1, front view camera 37, right-side view camera 39, left-side view camera 41, and rear view camera 43);
processing the top view image based on a display setting (¶¶ 0055–56, adjust luminance or contrast); and
controlling a display to display the processed top view image (¶ 0062, output to monitor 45); and
by a second image processor of the around view monitoring system (Fig. 2, ¶ 0033; three-dimensional object recognition unit 13):
processing an image received from at least one of the plurality of cameras based on a recognition setting (Fig. 2, ¶ 0033; recognizing a three-dimensional object);
detecting an object in the processed image (¶ 0034, determine if the recognized object is a vehicle); and
controlling an alarm device to output a warning sound that notifies a driver of a detected object (Liu pp. 4, 6, recognizing various road objects such as lane marking and road signs such as prohibited entry; p. 4, alarm device 9 that notifies the driver of “danger information recognized by the image processing device” may be a sound or audible voice alarm that is variable dependent on a danger degree);
extracting a correction index according to the plurality of images received from the at least one camera and a correction degree of each of the plurality of processed images (Imura ¶¶ 0044–46, distortion reduction, correcting fisheye image distortion);
inactivating a function of detecting the object in the processed image if the correction index exceeds a predefined value (Chung ¶¶ 0052–57, determine if camera error is outside the maximum correction range); and
controlling the alarm device to notify the driver that the image is not valid (¶ 0054, screen output showing the error is above a maximum error correction range).
Regarding claim 16, this claim is directed to correcting the image based on the display setting before stitching the plurality of images. Imura, in contrast, at Figures 2 and 3, generates the bird’s-eye view image before any other processing. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Imura to correct luminance and contrast of the individual camera images before stitching them into the bird’s-eye view image, with the predictable result of better color matching at the borders between the images. See M.P.E.P. §§ 2144.04(VI)(C) (rearrangement of parts), 2144.04(IV)(C) (change in process sequence).
Regarding claim 19, Imura in view of Liu and Chung teaches the around view monitoring method of claim 15, further comprising transmitting a control signal to control a driving control device according to a detection result (Imura ¶ 0080, driving assistance).
Regarding claim 20, Imura in view of Liu and Chung teaches the around view monitoring method of claim 15, wherein:
the display setting comprises information about a setting suitable for image display for each of a plurality of correction techniques (¶¶ 0055–56, adjusting luminance and contrast), and
the recognition setting comprises information about a setting suitable for object detection for each of a plurality of correction techniques (Fig. 2, shadow superimposition and distortion reducing).
Claims 3, 12, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Imura in view of Liu and Chung, and further in view of U.S. Patent Application Publication No. 2020/0013154 A1 (“Jang”).
Claims 3, 12, and 17 are directed to further details of the around view monitoring system not disclosed by Imura, Liu, or Chung. However, with respect to claims 3, 12, and 17, Jang teaches:
detecting the object in the image based on a model trained to detect an object in an input image (¶¶ 0004–06, machine learning model; ¶ 0046, training image database). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a trained model to recognize objects in Imura, as taught by Jang, so that the camera need not be calibrated during manufacturing. Jang ¶ 0074.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
CN 115511971 A
US 2019/0088011 A1
US 2016/0275683 A1
US 2015/0193916 A1
The following prior art was found using an Artificial Intelligence assisted search using an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search”, 1504 OG 359 (15 November 2022); “Automated Search Pilot Program”, 90 Fed. Reg. 48,161 (8 October 2025).
US 2015/0353010 A1
KR 20130095525 A
CN 107856608 A
KR 20180074093 A
KR 20130124762 A
KR 20210078614 A
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner, whose telephone number is (571) 272-9662. The examiner can normally be reached M-F 7:30-4:00 Central.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dave Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David N Werner/Primary Examiner, Art Unit 2487
1 A machine translation from the European Patent Office is added to the record, and is relied on, including for pagination.