DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed August 27, 2025, has been entered. Claims 1-20 remain pending in the application. Applicant’s amendments to the Claims have overcome some, but not all, of the objections previously set forth in the Non-Final Office Action mailed June 9, 2025.
Claim Objections
Claims 5 and 10 are objected to because of the following informalities:
In claim 5, line 7, “identified image” should read “identified object”;
In claim 10, lines 1-2, “the first ultra-wide-angle lens is disposed” should read “the first ultra-wide-angle lens disposed”; and
In claim 10, line 3, “the second ultra-wide-angle lens is disposed” should read “the second ultra-wide-angle lens disposed”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-2, 9-13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Soni (KR102069735) in view of Zou et al. (US 2022/0206567), Hyvarinen (US 2014/0376830) and Humpal et al. (US 2022/0192175).
Regarding claim 1, Soni discloses a processor-implemented method with driving control, the method comprising: obtaining a first image (Fig. 1: camera 110; paragraph 0042: “The first image control unit [130] can obtain a first image from the first camera [110]”) captured by a first ultra-wide-angle lens (paragraph 0037: “The present invention can obtain a wide-angle image of about 190 degrees by using a fish-eye lens having a field of view exceeding 180 degrees as a camera lens. A fisheye lens is an ultra-wide-angle retrofocus lens”) disposed on a first position; obtaining a second image captured by a second ultra-wide-angle lens disposed on a second position (Fig. 2: camera 120; paragraphs 0003, 0042: “the second image control unit [140] can obtain a second image from the second camera [120]” in order to provide “a panoramic image using two cameras placed at a certain distance apart”); monitoring a state of a driver and an occupant of the vehicle (paragraphs 0048, 0083: “The user status recognition unit (400) can identify the user's face in a panoramic image…[and] extract facial feature points including the eyes and mouth from the user's face to determine the user's first status” to detect “a drowsy or sleeping driver”) based on the first image and the second image (paragraph 0047: “The panoramic image generation unit (300) can generate a panoramic image by merging the first image and the second image”); while monitoring the state of the driver and the occupant, detecting an object in a blind spot area (paragraph 0056: “The vehicle control unit (700) can determine whether any vehicle exists in the vehicle's blind spot when the user's first state determined by the user state recognition unit (400) is a sleeping state or a drowsy state”) based on a matching of the first image and the second image (paragraphs 0028, 0047: “judging the situation in blind spots that the user cannot see using a panoramic image” where “the panoramic image generation unit [300] extracts feature points from the first image and the second image, and matches the feature points of the first image and the feature points of the second image that exist at corresponding locations to the extracted feature points to generate a panoramic image using the set feature point pairs”); and generating information for control of the vehicle based on a result of the monitoring and a result of the detecting of the object (paragraph 0056: “when there is any vehicle in the blind spot of the vehicle…If the user's first state is a sleeping or drowsy state, and the sensor unit [200] detects the user's attempt to change lanes, the vehicle control unit [700] controls the steering wheel to prevent the vehicle from changing lanes”). However, Soni fails to explicitly disclose that the first and second lenses are positioned in a vehicle; and detecting an object based on information of an object not detected in the first image due to an object obscuration obtained based on a matching of the first image and the second image, and based on a reliability of the first image determined based on an object obscuration factor of the first image.
In related art, Zou discloses the first and second lenses are positioned in a vehicle (Zou paragraph 0031: “the image information…may be acquired through a camera arranged in the vehicle cabin”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soni to incorporate the teachings of Zou to control the vehicle display screen without requiring the manual touch of the driver or occupant, thereby improving the convenience of control of the vehicle display screen, and helping to improve driving safety (Zou paragraph 0025). However, Soni, modified by Zou, still fails to disclose detecting an object based on information of an object not detected in the first image due to an object obscuration obtained based on a matching of the first image and the second image, and based on a reliability of the first image determined based on an object obscuration factor of the first image.
In related art, Hyvarinen discloses detecting an object (Hyvarinen FIGs. 4A-B: scene objects 7) based on information of an object not detected in the first image (Hyvarinen FIG. 4A: first image 10) due to an object obscuration (Hyvarinen FIG. 4A, paragraph 0032: “the forward object 2 obscures the whole of band B' ”) obtained based on a matching of the first image and the second image (Hyvarinen paragraph 0034: “it is possible to remove the object 2 from the composite image by replacing the first obscured portion 12 of the scene 4 at an imaging plane 3 in the first image 10 with a corresponding portion from the second image 20” where matching is performed via the vertical bands C, B, A, A’, B’, C’ and the corresponding portion from the second image 20 is understood to be the matching vertical band B’ of the second image 20 that contains information of the scene object 7). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Hyvarinen to inform a user on whether the forward object can be removed, allowing the user the opportunity to re-frame the scene (e.g. change position) so that the forward object can be removed (Hyvarinen paragraph 0005). However, Soni, modified by Zou and Hyvarinen, still fails to disclose detecting an object based on a reliability of the first image determined based on an object obscuration factor of the first image.
In related art, Humpal discloses detecting an object based on a reliability of the first image (Humpal paragraph 0093: “confidence level generator 234 can generate one or more confidence level metrics or quality level metrics indicative of the quality of the image being analyzed and/or the confidence that the system has in being able to identify targets in that image and apply material to the targets”) determined based on an object obscuration factor of the first image (Humpal paragraph 0093: “it may be that a number of obscurants [such as dust, smoke, etc.] are detected in the air…All of these and other factors may bear on the quality of the image being processed and the confidence with which the system can identify and apply material to targets”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Humpal to accurately identify targets in time to perform a functionality as a result (Humpal paragraph 0061).
Regarding claim 2, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the detecting of the object in the blind spot area comprises: obtaining the information of the object not detected in the first image based on the second image (Hyvarinen paragraph 0034: “it is possible to remove the object 2 from the composite image by replacing the first obscured portion 12 of the scene 4 at an imaging plane 3 in the first image 10 with a corresponding portion from the second image 20” where the corresponding portion from the second image 20 is understood to be the matching vertical band B’ of the second image 20 that contains information of the scene object 7), based on a matching relationship between an object detected in the first image (Hyvarinen FIG. 4A, paragraph 0032: “the whole of the distinct scene features 7 in front of…bands A…and C are visible in the first image 10”) and an object detected in the second image (Hyvarinen FIG. 4B, paragraph 0033: “the whole of the distinct scene features 7 in front of only bands A and C are visible in the second image 20”).
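Examiner's note: purely as an illustrative sketch of the band-replacement concept discussed above (the notation below is the Examiner's own and is not reproduced from Hyvarinen), the composite image may be understood as taking the obscured band from the second image and every other band from the first image:
$$I_{comp}(x) = \begin{cases} I_{2}(x), & x \in B' \ \text{(the band obscured in the first image 10)} \\ I_{1}(x), & \text{otherwise,} \end{cases}$$
where $I_{1}$ and $I_{2}$ denote the first image 10 and the second image 20, respectively.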
Regarding claim 9, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the monitoring of the state of the driver and the occupant of the vehicle based on the first image and the second image comprises: detecting the driver of the vehicle (Soni paragraph 0048: “The user status recognition unit (400) can identify the user's face in a panoramic image generated from an image taken of the interior of the vehicle”) and the occupant of the vehicle based on information of an inner area of the vehicle in the first image and the second image (Zou paragraph 0025: “image information of an occupant in a vehicle cabin is acquired, a rotation angle of a target part of the occupant is detected based on the image information”); and monitoring the detected state of the driver (Soni paragraph 0048: “the user status recognition unit (400) can extract facial feature points including the eyes and mouth from the user's face to determine the user's first status. The first state may include a sleep state, a drowsy state, and a normal state…This method can continuously determine the user's condition”) and the detected state of the occupant (Zou paragraph 0031: “the rotation of the target part of the occupant can be detected in time based on the image information acquired in real time”).
Regarding claim 10, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the first ultra-wide-angle lens disposed on the first position comprises a driver monitoring system (DMS) camera for capturing a front seat of the vehicle (Zou paragraph 0031: “the camera arranged in the vehicle cabin may include a DMS camera…the image information of the driver may be acquired through the DMS camera”), and the second ultra-wide-angle lens disposed on the second position comprises an occupant monitoring system (OMS) camera for capturing a back seat of the vehicle (Zou paragraph 0031: “the camera arranged in the vehicle cabin may include…an OMS camera...and the image information of the occupant except the driver may be acquired through the OMS camera”).
Regarding claim 11, the claim is interpreted and rejected as to claim 1.
Regarding claim 12, the claim is interpreted and rejected as to claim 1.
Regarding claim 13, the claim is interpreted and rejected as to claim 2.
Regarding claim 20, the claim is interpreted and rejected as to claim 9.
Claim(s) 3-4 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Soni, Zou, Hyvarinen and Humpal in view of Sharma et al. (US 10558897).
Regarding claim 3, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the detecting of the object in the blind spot area comprises: identifying an object in the blind spot area detected in both the first image and the second image by matching the first image and the second image (Hyvarinen FIGs. 4A-B: by matching the first image 10 and the second image 20 via the vertical bands C, B, A, A’, B’, C’, the same scene features 7 are detected in both images in front of bands A and C). However, Soni fails to disclose obtaining recognition information of the identified object based on first recognition information of the identified object detected in the first image and second recognition information of the identified object detected in the second image. In related art, Sharma discloses obtaining recognition information of the identified object based on first recognition information of the identified object detected in the first image and second recognition information of the identified object detected in the second image (Sharma col 5 lines 53-63, Equation 1: “the combined [object detection] output is a linear combination of the output of the different object detection modules [e.g., object detection algorithms 210, 212, 214, and 216 of FIG. 2, which may be referred to as sensor detection output S1, S2, S3, etc.] as shown in FIG. 2, as per their respective weights [W1, W2, W3, etc.]”). Sharma additionally discloses that the object detection modules can be object detection classifiers since “existing approaches combine the output of camera and LiDAR using…two distinct object detection classifiers with one for each sensor type and select a classification based on confidence level between the distinct classifiers” (Sharma col 2 lines 47-54). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Sharma to combine outputs of multiple sensors to produce more accurate results for object detection and tracking by an autonomous vehicle (Sharma col 3 lines 13-21).
Regarding claim 4, Soni, modified by Zou, Hyvarinen, Humpal and Sharma, discloses the method claimed in claim 3, wherein the obtaining of the recognition information of the identified object comprises: obtaining recognition information of the identified object by calculating a weighted sum of the first recognition information and the second recognition information (Sharma col 8 lines 18-44: “At 506, a first weight to a first object detection result from sensor data of the first sensor is assigned. At 508, a second weight to a second object detection result from sensor data of the second sensor is assigned. At 510, a combined object detection technique is performed by combining the first object detection result weighted by the first weight and the second object detection result weighted by the second weight”) based on the reliability of the first image corresponding to the identified object and a reliability of the second image corresponding to the identified object (Sharma col 4 lines 36-67: “A relative weighting is applied to each of the object detection algorithm outputs based on…contextual information [that] may be gathered, at least in part, from the sensors 200, 202, 204, or 206. Additional contextual information may be obtained using vibration sensors, olfactory sensors, a GPS unit, an IMU, time of the day, weather sensors, etc. Context may also factor in sensor health and other operational conditions”).
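Examiner's note: solely as an illustration of the claimed weighted sum (the symbols below are the Examiner's own and are not reproduced from Sharma's Equation 1), recognition information combined in this manner may take the form
$$R_{combined} = w_{1}R_{1} + w_{2}R_{2},$$
where $R_{1}$ and $R_{2}$ denote the first and second recognition information (e.g., detection or classification outputs obtained from the first and second images) and the weights $w_{1}$ and $w_{2}$ are set according to the reliability of the first image and the reliability of the second image corresponding to the identified object.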
Regarding claim 14, the claim is interpreted and rejected as to claim 3.
Regarding claim 15, the claim is interpreted and rejected as to claim 4.
Claim(s) 5-6, 8, 16-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Soni, Zou, Hyvarinen and Humpal in view of Han et al. (US 2023/0109473).
Regarding claim 5, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the detecting of the object in the blind spot area comprises: identifying, in the blind spot area, an object detected in both the first image and the second image by matching the first image and the second image (Hyvarinen FIGs. 4A-B: by matching the first image 10 and the second image 20 via the vertical bands C, B, A, A’, B’, C’, the same scene features 7 are detected in both images in front of bands A and C). However, Soni fails to disclose obtaining position information of the identified object based on first position information of the identified object detected in the first image and second position information of the identified object detected in the second image. In related art, Han discloses obtaining position information of the identified object based on first position information of the identified object detected in the first image and second position information of the identified object detected in the second image (Han paragraph 0071: “the processor 141 may generate composite distance value data of composite image information based on the distance value data of the first image sensor 111 and the second image sensor 112 and the reliability data of the first image sensor 111 and the reliability data of the second image sensor 112”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Han to reduce the post-processing cost required to composite an output of a final object and reduce output time synchronization between the plurality of pieces of image information (Han paragraphs 0128-0129).
Regarding claim 6, Soni, modified by Zou, Hyvarinen, Humpal and Han, discloses the method claimed in claim 5, wherein the obtaining of the position information of the identified object comprises: obtaining the position information of the identified object by calculating a weighted sum of the first position information and the second position information (Han Equation 1, paragraphs 0078-0079: “the processor 141 may generate composite distance value data based on the first distance value data, the second distance value data, and the weights”) based on the reliability of the first image corresponding to the identified object and a reliability of the second image (Han paragraph 0077: “the processor 141 may impart a weight based on a first reliability data corresponding to the distance value data [first distance value data] of the first image information, and may impart a weight based on a second reliability data corresponding to the distance value data [second distance value data] of the second image”) corresponding to the identified object (Han paragraph 0113: “R[x, y] may refer to reliability data corresponding to the pixel [x, y]”).
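Examiner's note: as a purely illustrative sketch (not a reproduction of Han's Equation 1), a reliability-weighted composite of the first and second position (distance) information may take the per-pixel form
$$D_{comp}[x,y] = \frac{R_{1}[x,y]\,D_{1}[x,y] + R_{2}[x,y]\,D_{2}[x,y]}{R_{1}[x,y] + R_{2}[x,y]},$$
where $D_{1}$ and $D_{2}$ are the first and second distance value data and $R_{1}$ and $R_{2}$ are the corresponding reliability data (cf. Han paragraph 0113, describing reliability data R[x, y] corresponding to the pixel [x, y]).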
Regarding claim 8, Soni, modified by Zou, Hyvarinen and Humpal, discloses the method claimed in claim 1, wherein the blind spot area is an area surrounding the vehicle that is not visible from the mirrors of the vehicle and a view angle of the driver (Soni FIG. 10, paragraph 0028: “judging the situation in blind spots that the user cannot see”). However, Soni fails to disclose obtaining recognition information of the object; and obtaining position information of the object. In related art, Han discloses obtaining recognition information of the object (Han paragraph 0068: “the processor 141 may identify objects around the vehicle 10 by inputting the composite image information to the surrounding object recognition model. Herein, the objects may include, for example, other vehicles other than the vehicle 10 and/or pedestrians and/or cyclists and/or objects having a size larger than or equal to a predetermined size”); and obtaining position information of the object (Han paragraph 0069: “The processor 141 may calculate distance value data between a point corresponding to a pixel of the image information and the vehicle 10 based on the processing of image information”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Han to reduce the post-processing cost required to composite an output of a final object and reduce output time synchronization between the plurality of pieces of image information (Han paragraphs 0128-0129).
Regarding claim 16, the claim is interpreted and rejected as to claim 5.
Regarding claim 17, the claim is interpreted and rejected as to claim 6.
Regarding claim 19, the claim is interpreted and rejected as to claim 8.
Claim(s) 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Soni, Zou, Hyvarinen, Humpal and Han in view of Wang (WO 2020061794).
Regarding claim 7, Soni, modified by Zou, Hyvarinen, Humpal and Han, discloses the method claimed in claim 5. However, Soni fails to disclose obtaining the position information of the identified object by correcting the first position information and the second position information based on a difference between the first position and the second position. In related art, Wang discloses obtaining the position information of the identified object by correcting the first position information (Wang paragraph 0043: “the first distance information output by the ranging module can be transformed and corrected [for example, the first distance information is converted to the camera coordinate system where the multi-eye camera is located]”) and the second position information based on a difference between the first position and the second position (Wang paragraph 0044: “when the difference between the second distance information and the first distance information in the initial depth map is greater than a preset threshold, the matching relationship of the pixels in the multi-view images is readjusted until the difference between the second distance information and the first distance information in the calculated depth map is less than the preset threshold”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Soni to incorporate the teachings of Wang to facilitate comparison of distance information when the sensors (e.g., ranging module and multi-eye camera) are installed at different positions of the vehicle (Wang paragraph 0043).
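Examiner's note: as a purely illustrative sketch of the correction described in Wang (the notation below is the Examiner's own), the first position information may be transformed into the coordinate system of the second sensor and compared against the second position information:
$$\tilde{d}_{1} = T_{1\rightarrow 2}(d_{1}), \qquad \text{re-match if } \lvert d_{2} - \tilde{d}_{1} \rvert > \tau,$$
where $T_{1\rightarrow 2}$ accounts for the difference between the first position and the second position of the sensors and $\tau$ is a preset threshold (cf. Wang paragraphs 0043-0044).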
Regarding claim 18, the claim is interpreted and rejected as to claim 7.
Response to Arguments
Applicant's arguments with respect to independent claim 1 have been fully considered but they are not persuasive.
Regarding the argument that “there is no discussion in Soni regarding detecting objects in the external environment of the vehicle at the same time point while monitoring the states of the driver and the passenger”, Soni discloses that a vehicle control unit detects other vehicles in the vehicle's blind spot while monitoring whether the driver is in a sleeping state or a drowsy state (Soni paragraph 0056). These actions are performed at the same time in order to prevent accidents that can occur when a vehicle is in the blind spot while the driver is drowsy.
Regarding the argument that “Hyvarinen is unrelated to detecting an object in a blind spot area”, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Soni teaches detecting objects in blind spots using images taken by cameras (Soni paragraph 0028). Hyvarinen teaches detecting objects using images taken by a camera apparatus (Hyvarinen FIGs. 4A-B and FIG. 8). Therefore, the combination of Soni and Hyvarinen teaches that the image processing in Hyvarinen can be applied to images depicting a blind spot area of a vehicle. Furthermore, regarding the argument that “the Office has interpreted the claim elements in isolation”, the Examiner clarifies that “detecting an object in a blind spot area based on information of an object not detected in the first image due to an object obscuration obtained based on a matching of the first image and the second image” is not interpreted separately from the claimed “detecting an object in a blind spot area based on information of an object not detected in the first image”. Hyvarinen teaches detecting an object based on information of an object not detected in the first image due to an object obscuration obtained based on a matching of the first image and the second image (Hyvarinen FIGs. 4A-B). The combination of Soni and Hyvarinen thus teaches that the image processing in Hyvarinen can be applied to images depicting a blind spot area of a vehicle and thereby used to detect an object in the blind spot area.
Regarding the argument that “one can see the foreground object 2 in front of the background and be aware that a portion of the background is blocked by the foreground object…In contrast, the object in a blind spot is an object that is a part of a visual field where one literally cannot see the object, and one is not aware of its existence in the visual field, as the person cannot see the object”, the foreground object 2 in front of the background is being imaged by image sensor 56A and/or image sensor 56B to produce the first image 10 and the second image 20. Hyvarinen does not explicitly disclose whether a person can see the foreground object 2 and/or the background. The scene is visualized by cameras, just as cameras are used to visualize the blind spots in Soni.
Regarding the argument that “There is no discussion in Hyvarinen regarding…both the first and second images show an object (e.g., same object, not two different portions)”, FIG. 4A and FIG. 4B of Hyvarinen both show the same scene objects 7. The only differences are in which scene objects 7 are obscured; for example, in FIG. 4A the scene object 7 in band B is not obscured, whereas in FIG. 4B the same scene object 7 in band B is obscured.
Regarding the argument that “It will not be logical to modify Soni to detect the object in the blind spot and remove the object, as it would defeat the purpose of Soni which is to detect objects in the blind spots to control the vehicle accordingly to avoid crashes”, Hyvarinen teaches removing a forward object that is obscuring objects of interest. In the context of Soni, one of ordinary skill in the art would recognize that such a forward object can be a part of the vehicle, e.g., the rear of the vehicle, that is blocking the cameras’ view of the objects of interest, e.g., other vehicles, in the blind spot. It would be desirable to remove the obstruction in order to detect other vehicles in the blind spot. The forward object can also be an obscurant such as dust, smoke or fog, as suggested by Humpal (Humpal paragraph 0093), in which case it would likewise be desirable to remove the obscurant in order to effectively detect objects in the blind spot.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.Z./Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677