Prosecution Insights
Last updated: April 19, 2026
Application No. 18/596,763

BLIND ZONE OBJECT DETECTION

Non-Final OA §103
Filed: Mar 06, 2024
Examiner: ABDI, AMARA
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: GM Global Technology Operations LLC
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 83% (above average; 677 granted / 816 resolved; +21.0% vs TC avg)
Interview Lift: -7.5% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 7m (33 applications currently pending)
Total Applications: 849 (across all art units)

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 816 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Miu et al. (US-PGPUB 20200195837) in view of Sakai et al. (US-PGPUB 20200406747).

In regards to claim 1, Miu discloses a method for alerting a driver to occluded objects found within a front blind zone of a vehicle, comprising: identifying a plurality of candidate objects within an ambient environment forward of a front fascia of the vehicle (see at least: Fig. 1, Par. 0061, vehicle 1 may include an imaging system 2 for detecting objects 3 (e.g., at least one of a gesture and an obstacle) within a distance 101 of the vehicle 1 [i.e., the imaging system 2 implicitly performs the identifying of a plurality of candidate objects within an ambient environment forward of a front fascia of the vehicle]); and determining positional coordinates for each of the candidate objects relative to the vehicle (see at least: Par. 0071, calculating position information from the distance information, where the information provided may include the X and Y coordinates of an object 3 in the field of view [i.e., determining positional coordinates, "X and Y coordinates of an object 3", for each of the candidate objects, "one or more objects 3", relative to the vehicle, "relative to the vehicle 1"]).

Miu does not expressly disclose determining a visibility curve relative to a forward-looking field of view of the driver, the visibility curve representing a visibility boundary between the front blind zone and a visible zone of the ambient environment coinciding with the forward-looking field of view; and identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility curve, the visible objects having at least a visible portion thereof within the visible zone and the occluded objects having no visible portion within the visible zone and/or an entirety thereof within the front blind zone.

However, Sakai discloses determining a visibility dashed line relative to a forward-looking field of view of the driver, the visibility line representing a visibility boundary between the front blind zone and a visible zone of the ambient environment coinciding with the forward-looking field of view (see at least: Figs. 16, 18, Par. 0079-0080, extracting the invisible area A1 and a visible area A2 based on an ideal road surface area that can be recognized in a captured image of the front camera 1; the blind spot area determining part 20 can thereby determine the location of the blind spot area AB (specifically, a boundary location between the invisible area A1 and the visible area A2) [i.e., determining a visibility line, "extended dashed line between areas A1 and A2", relative to a forward-looking field of view of the driver, implicit from the road surface area in the captured image of the front camera 1]); and identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility dashed line, the visible objects having at least a visible portion thereof within the visible zone and the occluded objects having no visible portion within the visible zone and/or an entirety thereof within the front blind zone (see at least: Fig. 4 and Figs. 16, 18, when the moving obstacle MO jumps out of the blind spot area into the visible area as shown in Fig. 4, the moving obstacle MO is implicitly visible to the vehicle in the road surface area captured by the front camera, and when the moving obstacle MO is located within the invisible area A1 as shown in Figs. 16 and 18, the moving obstacle MO is implicitly invisible to the vehicle [which is technically equivalent to determining whether the moving obstacle MO is visible or invisible to the vehicle, based on a comparison of its positional coordinates to the visibility extended dashed line, i.e., whether the moving obstacle MO is located within the blind spot area or the visible area]).

Miu and Sakai are combinable because they are both concerned with obstacle detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Miu to use the blind spot area determining part 20, as taught by Sakai, in order to determine the boundary location between the invisible area and the visible area (Sakai, Par. 0080).

The combined teaching of Miu and Sakai as a whole does not expressly disclose the visibility curve. At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to determine the visibility curve. Applicant has not disclosed that determining the visibility curve provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well with either the extended dashed line between the invisible area A1 and the visible area A2, as taught by Sakai, or the claimed visibility curve between the front blind zone and the visible zone, because both the dashed line and the curve perform the same function of determining the boundary location between the invisible area and the visible area (Sakai, Par. 0080).

In regards to claim 2, the combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1.
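As an aside, the visible/occluded classification at issue in claim 1 (comparing each candidate object's positional coordinates against a visibility boundary) can be sketched in code. The linear boundary model below and its hood-height and taper-range parameters are hypothetical, not taken from the application or the cited references:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    x: float       # longitudinal distance forward of the front fascia, meters
    y: float       # lateral offset from the vehicle centerline, meters
    height: float  # object height above ground, meters

def visibility_floor(x: float, hood_height: float = 1.0, taper_range: float = 6.0) -> float:
    """Hypothetical visibility boundary: the minimum height visible to the
    driver at longitudinal distance x. Tapers linearly from hood_height at
    the fascia down to ground level at taper_range meters out."""
    if x >= taper_range:
        return 0.0
    return hood_height * (1.0 - x / taper_range)

def classify(candidates):
    """Label each candidate visible or occluded by comparing its height
    to the visibility boundary at its longitudinal distance."""
    visible, occluded = [], []
    for c in candidates:
        (visible if c.height > visibility_floor(c.x) else occluded).append(c)
    return visible, occluded
```

Under these invented parameters, a 0.5 m tall object 2 m out falls below the boundary (about 0.67 m) and would be flagged occluded, which is the condition that triggers the alert recited in claim 2.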
Sakai further discloses, in response to identifying one or more of the occluded objects, providing an alert to apprise the driver of a hidden object having been found within the front blind zone (Sakai, see at least: Par. 0029, the vehicle drive assist system 10 displays an alert image (attention call marking) to call the driver's attention, "providing an alert to apprise the driver of a hidden object", to a moving obstacle (moving object) MO, "object having been found within the front blind zone").

In regards to claim 3, the combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1. Sakai further discloses providing the alert by activating one or more systems onboard the vehicle to generate at least one of a haptic warning, an auditory warning, and/or a visual warning (Sakai, see at least: Par. 0029, implicit from displaying an alert image (attention call marking) to call the driver's attention, "i.e., a visual warning").

In regards to claim 4, the combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1. Sakai further discloses, in response to identifying one or more of the visible objects, providing the alert without providing a specific reference or a dedicated callout for the visible objects (see at least: Par. 0038, display device 5 (display part) is configured not to display an alert image MC when the size of the front obstacle B is less than or equal to the preset threshold value T, "implicitly providing an alert without a dedicated callout for the visible objects").

In regards to claim 6, the combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1. Sakai further discloses determining the visibility curve based on relatively comparing the forward-looking field of view to geometries of one or more of a hood, a dashboard, an A-pillar, a steering wheel, or other structure of the vehicle forward of the driver (see at least: Figs. 3-8 and 10, Par. 0040-0042, as the vehicle 100 approaches the front obstacle B that creates the blind spot area AB, the extended line EL is inclined more toward the intersection line IL side, and the reference angle gradually decreases [i.e., implicitly determining the extended line EL, which is technically determined based on comparing the field of view of the driver to the geometric structure, "point PB as shown in Figs. 7-8", of the vehicle B forward of the driver]), such that the front blind zone corresponds with sectors of the forward-looking field of view obstructed by one or more of the geometries and the visible zone corresponds with sectors of the forward-looking field of view unobstructed by one or more of the geometries (see at least: Figs. 7-8, where the blind spot area AB on top of boundary EL technically corresponds with the forward-looking field of view of the vehicle 100 obstructed by point PB of the edge corner of the vehicle B, "one or more of the geometries", and the visible region at the bottom of the boundary EL corresponds with the forward-looking field of view of the vehicle 100 unobstructed by point PB of the edge corner of the vehicle B, "one or more of the geometries").

In regards to claim 7, the combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1. Miu further discloses determining the positional coordinates based on a longitudinal distance, a lateral distance, and a height separately derived for each of the candidate objects from images of the ambient environment captured with an imaging device included onboard the vehicle (see at least: Par. 0093-0094, digital imaging systems 20, 120 provide advantages in performance (such as high resolution in lateral, vertical and longitudinal dimensions), and an algorithm (e.g., of the control unit 28) can more accurately determine position information of an object 3 or objects; and from Par. 0077, the digital imaging system 20 measures the distance 54 from an object 3 to the host vehicle 1. Accordingly, the digital imaging system 20 technically determines the positional coordinates, "position information of an object 3 or objects", based on a longitudinal distance and a lateral distance, "implicit from the digital imaging system 20 being provided with high resolution in lateral, vertical and longitudinal dimensions"). Further, from Par. 0071, from the distance information, position information may be calculated, for example if the object is off to the side of a center of the vehicle or if the object 3 is at a height above the ground [i.e., determining the positional coordinates, "position information may be calculated", based on a height separately derived for each of the candidate objects, "if the object 3 is at a height above the ground", from images of the ambient environment captured with an imaging device included onboard the vehicle, "from the image data outputted from the image sensor assembly 24 of the imaging system 20"].

In regards to claim 18, Miu discloses a vehicle, comprising: a plurality of wheels operable to facilitate movement of the vehicle; and a powertrain operable to rotate one or more of the wheels in response to mechanical power generated with an internal combustion engine and/or an electric motor (see at least: Par. 0035, the motor vehicle implicitly comprises a plurality of wheels and a powertrain operable to rotate one or more of the wheels to move the vehicle); an imaging system configured for capturing images of an ambient environment forward of a front fascia of the vehicle (see at least: Fig. 3 and Par. 0093, implicit from using the image sensor assembly 24, 124 or camera vision sensors); and an object detection system configured for determining positional coordinates for a plurality of candidate objects forward of the front fascia (see at least: Par. 0071, calculating position information from the distance information, where the information provided may include the X and Y coordinates of an object 3 in the field of view [i.e., determining positional coordinates, "X and Y coordinates of an object 3", for each of the candidate objects, "one or more objects 3", forward of the front fascia, "in front view of the vehicle"]).

Miu does not expressly disclose determining a visibility curve for an occupant of the vehicle; identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility curve; and an alert system configured for providing an alert having a callout for drawing an attention of the occupant to a closest one of the occluded objects.

However, Sakai discloses determining a visibility dashed line for an occupant of the vehicle (see at least: Figs. 16, 18, Par. 0079-0080, the blind spot area determining part 20 can thereby determine the location of the blind spot area AB (specifically, a boundary location between the invisible area A1 and the visible area A2) [i.e., determining a visibility line, "extended dashed line between areas A1 and A2", for an occupant of the vehicle, "the driver"]); identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility dashed line (see at least: Fig. 4 and Figs. 16, 18, when the moving obstacle MO jumps out of the blind spot area into the visible area as shown in Fig. 4, the moving obstacle MO is implicitly visible to the vehicle in the road surface area captured by the front camera, and when the moving obstacle MO is located within the invisible area A1 as shown in Figs. 16 and 18, the moving obstacle MO is implicitly invisible to the vehicle [which is technically equivalent to determining whether the moving obstacle MO is visible or invisible to the driver, based on a comparison of its positional coordinates to the visibility dashed line, i.e., whether the moving obstacle MO is located within the blind spot area or the visible area]); and an alert system configured for providing an alert having a callout for drawing an attention of the occupant to a closest one of the occluded objects (see at least: Par. 0029, the vehicle drive assist system 10 displays an alert image (attention call marking) to call the driver's attention, "generating an alert having a callout for drawing an attention", to a moving obstacle (moving object) MO, "occluded object").

Miu and Sakai are combinable because they are both concerned with obstacle detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Miu to use the blind spot area determining part 20, as taught by Sakai, in order to determine the boundary location between the invisible area and the visible area (Sakai, Par. 0080).

The combined teaching of Miu and Sakai as a whole does not expressly disclose the visibility curve. At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to determine the visibility curve. Applicant has not disclosed that determining the visibility curve provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well with either the extended dashed line between the invisible area A1 and the visible area A2, as taught by Sakai, or the claimed visibility curve between the front blind zone and the visible zone, because both the dashed line and the curve perform the same function of determining the boundary location between the invisible area and the visible area (Sakai, Par. 0080).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Miu et al. and Sakai et al., as applied to claim 7 above, and further in view of Lipchin et al. (US-PGPUB 20210019914) and Igarashi et al. (US-PGPUB 20170197620).

The combined teaching of Miu and Sakai as a whole discloses the limitations of claim 1. The combined teaching of Miu and Sakai as a whole does not expressly disclose determining a physical height for each of the candidate objects based on bounding boxes derived from the images and geometrically triangulating the longitudinal distance and the lateral distance therewith. However, Lipchin discloses determining a physical height for each of the candidate objects based on bounding boxes derived from the images (see at least: Par. 0010, determining an estimated height of the object using a bottom of the sample bounding box). Miu, Sakai, and Lipchin are combinable because they are all concerned with object detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Miu and Sakai to use the sample bounding box obtained from the sample image, as taught by Lipchin, in order to determine an estimated height of the object using a bottom of the sample bounding box (Lipchin, Par. 0010).

The combined teaching of Miu, Sakai, and Lipchin as a whole does not expressly disclose geometrically triangulating the longitudinal distance and the lateral distance therewith. However, Igarashi discloses geometrically triangulating the longitudinal distance and the lateral distance therewith (see at least: Par. 0027, on the basis of principles of triangulation, a point on the distance image is coordinate-transformed to a point in a real space in which a vehicle width direction of a host vehicle, that is, a lateral direction, is set to an X-axis, "geometrically triangulating the lateral distance", a vehicle height direction is set to a Y-axis, and a vehicle longitudinal direction, that is, a distance direction, is set to a Z-axis, "i.e., geometrically triangulating the longitudinal distance"). Miu, Sakai, Lipchin, and Igarashi are combinable because they are all concerned with object detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Miu, Sakai, and Lipchin to use the principles of triangulation, as taught by Igarashi, in order to perform coordinate transformation of the point on the distance image to a point in a real space, thereby three-dimensionally recognizing an obstacle (Igarashi, Par. 0027).

Allowable Subject Matter

Claims 5, 8, 10-13, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
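For illustration, the two techniques relied on in the claim 9 rejection above, Igarashi's triangulation of an image point into real-space axes (X lateral, Y height, Z longitudinal) and Lipchin's height estimation from a bounding box, can both be sketched with a basic pinhole stereo model. All parameter values here are hypothetical and are not drawn from the cited references:

```python
def image_to_real_space(u, v, disparity, focal_px, baseline_m):
    """Triangulate a point on the distance (disparity) image into real-space
    coordinates: X along the vehicle-width (lateral) axis, Y along the
    vehicle-height axis, Z along the longitudinal (distance) axis."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    Z = focal_px * baseline_m / disparity  # longitudinal distance, meters
    X = Z * u / focal_px                   # lateral offset, meters
    Y = Z * v / focal_px                   # height relative to optical axis, meters
    return X, Y, Z

def height_from_bbox(top_px, bottom_px, Z, focal_px):
    """Estimate an object's physical height from its bounding-box extent
    in the image, via similar triangles at longitudinal distance Z."""
    return Z * (bottom_px - top_px) / focal_px
```

With an assumed 1000 px focal length and a 0.2 m stereo baseline, a 20 px disparity triangulates to Z = 10 m, and a 100 px tall bounding box at that range corresponds to a 1 m object.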
With respect to claim 5, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s) (in consideration of the claim as a whole): "determining the visibility curve based on relatively comparing an eye position of the driver to a geometry of the vehicle such that the forward-looking field of view is centered relative to the eye position and the front blind zone corresponds with sectors of the forward-looking field of view obstructed by the geometry".

The prior art of record, Miu, discloses identifying a plurality of candidate objects within an ambient environment of the vehicle (see at least: Fig. 1, Par. 0061, vehicle 1 may include an imaging system 2 for detecting objects 3 (e.g., at least one of a gesture and an obstacle) within a distance 101 of the vehicle 1); and determining positional coordinates for each of the candidate objects relative to the vehicle (see at least: Par. 0071, calculating position information from the distance information, where the information provided may include the X and Y coordinates of an object 3 in the field of view). However, Miu fails to teach or suggest, either alone or in combination with the other cited references, determining the visibility curve based on relatively comparing an eye position of the driver to a geometry of the vehicle such that the forward-looking field of view is centered relative to the eye position and the front blind zone corresponds with sectors of the forward-looking field of view obstructed by the geometry.

A further prior art of record, Sakai, discloses determining a visibility extended dashed line for the driver (see at least: Figs. 16, 18, Par. 0079-0080, the blind spot area determining part 20 can thereby determine the location of the blind spot area AB (specifically, a boundary location between the invisible area A1 and the visible area A2)); and identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility line (see at least: Fig. 4 and Figs. 16, 18, "see the rejection of claim 1 for more details"). However, Sakai fails to teach or suggest, either alone or in combination with the other cited references, determining the visibility curve based on relatively comparing an eye position of the driver to a geometry of the vehicle such that the forward-looking field of view is centered relative to the eye position and the front blind zone corresponds with sectors of the forward-looking field of view obstructed by the geometry.

With respect to claim 8, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s) (in consideration of the claim as a whole): "calibrating the positional coordinates based on the wide-angle view, the focal length, and the mounting position to facilitate deriving the positional coordinates from the images captured therewith".

The relevant prior art of record, Englander (US-PGPUB 20220118911), discloses an imaging device being a camera having a wide-angle view and a focal length (see at least: Par. 0156, dual-vision system 200 is shown, where a front housing 201 contains a first camera opening 202 that allows a first camera 203, and either or both of first camera 203 and second camera 206 may be wide-angle, fixed focal length). A further prior art of record, Oyama (US-PGPUB 20210086789), discloses determining a mounting position of a camera on the vehicle (Par. 0004, implicit from detecting abnormalities in a mounting position of the vehicle-mounted cameras based on a difference from real images). Another prior art of record, Sakano et al. (US-PGPUB 20160275683), discloses a secondary calibration unit, which includes a coordinate system integrating unit that transforms the positions of the cameras in the marker coordinate system calculated by the primary calibration unit into positions in the vehicle coordinate system. However, none of the cited prior art of record, Englander, Oyama, and Sakano, either alone or in combination, teaches or suggests calibrating the positional coordinates based on the wide-angle view, the focal length, and the mounting position to facilitate deriving the positional coordinates from the images captured therewith.

With respect to claim 10, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s) (in consideration of the claim as a whole): "comparing the physical heights relative to the visibility curve; and identifying each of the candidate objects having the physical height above the visibility curve as one of the visible objects".

The prior art of record cited above with respect to claim 5 applies also to claim 10. Sakai further discloses identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility dashed line (see at least: Fig. 4 and Figs. 16, 18, "see the rejection of claim 1 for more details"). Further, from Par. 0038, 0061-0062, and 0091, the vehicle drive assist system 10 determines whether the size of the recognized front obstacle B is larger or smaller than a preset threshold value T, and if the size of the front obstacle B is larger than the preset threshold T, determines the moving obstacle MO is within the blind spot area. However, while disclosing comparing the positional coordinates of the moving obstacle MO to the visibility curve, Sakai fails to teach or suggest, either alone or in combination with the other cited references, comparing the physical heights relative to the visibility curve and identifying each of the candidate objects having the physical height above the visibility curve as one of the visible objects.

Regarding claims 11-13, claims 11-13 are in condition for allowance based at least on their dependency from claim 10.

With respect to claim 19, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s) (in consideration of the claim as a whole): "generating the visibility curve to include a shaped contour extending virtually in a forward direction relative to an upper surface of the front fascia; and identifying the candidate objects with an object height above the shaped curve as the visible objects and the candidate objects with an object height below the shaped contour as the occluded objects".

The prior art of record cited above with respect to claim 5 applies also to claim 19. Sakai further discloses identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility dashed line (see at least: Fig. 4 and Figs. 16, 18, "see the rejection of claim 1 for more details"). Further, from Par. 0038, 0061-0062, and 0091, the vehicle drive assist system 10 determines whether the size of the recognized front obstacle B is larger or smaller than a preset threshold value T, and if the size of the front obstacle B is larger than the preset threshold T, determines the moving obstacle MO is within the blind spot area. However, while disclosing comparing the positional coordinates of the moving obstacle MO to the visibility curve, Sakai fails to teach or suggest, either alone or in combination with the other cited references, generating the visibility curve to include a shaped contour extending virtually in a forward direction relative to an upper surface of the front fascia, and identifying the candidate objects with an object height above the shaped contour as the visible objects and the candidate objects with an object height below the shaped contour as the occluded objects.

Regarding claim 20, claim 20 is in condition for allowance based at least on its dependency from claim 19.

The following is a statement of reasons for the indication of allowable subject matter: claim 14 is allowable over the prior art of record, and claims 15-17 are allowable in view of their dependency from claim 14.

With respect to claim 14, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s) (in consideration of the claim as a whole): "determining a visibility curve for the driver, the visibility curve including a slope that gradually decreases in a forward direction relative to an upper surface of a front fascia of the vehicle".

The prior art of record, Miu, discloses identifying a plurality of candidate objects within an ambient environment of the vehicle (see at least: Fig. 1, Par. 0061, vehicle 1 may include an imaging system 2 for detecting objects 3 within a distance 101 of the vehicle 1); and determining positional coordinates for each of the candidate objects relative to the vehicle (see at least: Par. 0071, calculating position information from the distance information, where the information provided may include the X and Y coordinates of an object 3 in the field of view). A further prior art of record, Sakai, discloses determining a visibility extended dashed line for the driver (see at least: Figs. 16, 18, Par. 0079-0080, the blind spot area determining part 20 can thereby determine the location of the blind spot area AB, specifically a boundary location between the invisible area A1 and the visible area A2); and identifying each of the candidate objects as one of a visible object or an occluded object based on a comparison of the positional coordinates to the visibility line (see at least: Fig. 4 and Figs. 16, 18, "see the rejection of claim 1 for more details"). However, none of the prior art of record, Miu and Sakai, either alone or in combination, teaches or suggests determining a visibility curve for the driver, the visibility curve including a slope that gradually decreases in a forward direction relative to an upper surface of a front fascia of the vehicle.
Regarding claims 15-17, claims 15-17 are in condition for allowance based at least on their dependency from claim 14.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI, whose telephone number is (571) 272-0273. The examiner can normally be reached 9:00am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMARA ABDI/
Primary Examiner, Art Unit 2668
02/18/2026

Prosecution Timeline

Mar 06, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602822
METHOD DEVICE AND STORAGE MEDIUM FOR BACK-END OPTIMIZATION OF SIMULTANEOUS LOCALIZATION AND MAPPING
2y 5m to grant Granted Apr 14, 2026
Patent 12597252
METHOD OF TRACKING OBJECTS
2y 5m to grant Granted Apr 07, 2026
Patent 12576595
SYSTEMS AND METHODS FOR IMPROVED VOLUMETRIC ADDITIVE MANUFACTURING
2y 5m to grant Granted Mar 17, 2026
Patent 12574469
VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12563154
VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
76%
With Interview (-7.5%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
