Prosecution Insights
Last updated: April 19, 2026
Application No. 17/990,399

METHOD AND APPARATUS FOR COMPUTER-VISION-BASED OBJECT DETECTION

Status: Non-Final OA (§103)
Filed: Nov 18, 2022
Examiner: BITAR, NANCY
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: HERE Global B.V.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 83% (above average; 786 granted / 946 resolved; +21.1% vs TC avg)
Interview Lift: +8.2% among resolved cases with interview (moderate lift)
Avg Prosecution: 2y 11m (typical timeline); 32 applications currently pending
Total Applications: 978 across all art units

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 946 resolved cases.

Office Action

§103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/9/2026 has been entered.

Response to Amendment

Applicant's amendments to claims 1, 13, and 17 are acknowledged.

Response to Arguments

Applicant's arguments, in the amendment filed 1/9/2026, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Lakshmi et al. (US 2020/0086879).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 13-14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US 2019/0376809) in view of Lakshmi et al. (US 2020/0086879).

Regarding Claim 1, Hanniel teaches a method comprising:

receiving an image captured from a perspective of a vehicle or a device traveling at street level (Hanniel, Fig. 1, labels 122 and 124: Figure 1 depicts an image capturing device from the perspective of the vehicle (122) and an image capturing device traveling at street level (124); see also paragraph [0415]: "For example, during operation, system 100 may collect navigational information using image capture devices (e.g., devices 122, 124, 126)");

processing the image using computer vision to detect one or more objects, one or more lane markings, a road surface, or a combination thereof depicted in the image (Hanniel, paragraph [0128]: "In a three camera system, a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.");

determining a relative positioning of the one or more objects with respect to the one or more lane markings, the road surface, or a combination thereof (Hanniel, paragraph [0413]: "The supplemental information request may include instructions causing the navigation systems of the vehicles (e.g., vehicles 200 and 3214) to increase a collection parameter or acquire position information relative to the road feature associated with the feature coordinate information (e.g., yield sign 3202, road work sign 3204, or dashed lane markings 3208).");

classifying one or more semantic localization features of the one or more objects based on the relative positioning, wherein each of the one or more semantic localization features comprises a predefined label indicating a relative location of the one or more objects with respect to the one or more lane markings, the road surface, or a combination thereof, distinct from location coordinate data (Hanniel, paragraphs [0198], [0264], [0352]: localization position and trajectory stored in a sparse map (Fig. 8, item 800) to remove navigational errors, in addition to data received from landmark beacons - RFID devices that use short range transceiver devices which are independent from location coordinate data; the sparse maps can also be used for navigation without requiring excessive data storage or data transfer, paragraphs [0174], [0176]);

and providing the one or more semantic localization features as an output (Hanniel, paragraph [0214]: "In some embodiments, signs may be referred to as semantic signs and non-semantic signs. A semantic sign may include any class of signs for which there's a standardized meaning (e.g., speed limit signs, warning signs, directional signs, etc.). A non-semantic sign may include any sign that is not associated with a standardized meaning (e.g., general advertising signs, signs identifying business establishments, etc.)"; and paragraph [0283]: "Process 1900 may also include identifying, based on the plurality of images, a plurality of landmarks associated with the road segment (step 1910). For example, server 1230 may analyze the environmental images received from camera 122 to identify one or more landmarks, such as road sign along road segment 1200.").

While Hanniel teaches several distinct methods of determining the location and position of a vehicle, Hanniel fails to teach "wherein the predefined label serves as a substitute for location coordinate data to simplify data processing for a downstream location-based service." Lakshmi teaches that a first series of image frames of an environment from a moving vehicle may be captured; traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN); and temporal classification may be performed to generate a series of image frames associated with temporal predictions based on a scene classification model built on CNNs and a long short-term memory (LSTM) network (abstract). Lakshmi further teaches that the input may be untrimmed, egocentric sequences of video from the image capture device 106 and CAN signals from the CAN bus 128, while the output may be the tactical driver behavior label of each corresponding image frame; examples of tactical driver behavior labels include intersection passing, turning right, turning left, right lane change, left lane change, U turn, left branch, right branch, crosswalk passing, railroad passing, and merge (paragraph [0062]). Lakshmi also teaches that the fused label can be fed through an LSTM layer, which may result in tactical driver recognition data that may be sent downstream (paragraphs [0075]-[0076]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the label instead of the coordinate in Hanniel in order to perform dynamic scene recognition rapidly and accurately with little attention to objects in the scene. Therefore, the claimed invention would have been obvious to one of ordinary skill in the art.
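The classification step recited in Claim 1 (mapping an object's relative positioning against lane markings to a predefined label rather than to coordinates) can be illustrated with a minimal, hypothetical sketch. The function name, inputs, and boundary values below are illustrative assumptions, not taken from the application or the cited references:

```python
# Hypothetical sketch: derive a predefined semantic label for an object's
# lane-relative position instead of emitting raw location coordinates.

def semantic_lateral_label(object_center_x: float,
                           left_boundary_x: float,
                           right_boundary_x: float) -> str:
    """Return a predefined label for an object's position relative to the
    ego lane's detected lane markings (all positions in image pixels)."""
    if object_center_x < left_boundary_x:
        return "left lane"
    if object_center_x > right_boundary_x:
        return "right lane"
    return "ego lane"

# An object centered at x=120 with ego-lane markings at x=300 and x=700:
print(semantic_lateral_label(120.0, 300.0, 700.0))  # left lane
```

A downstream location-based service could then key its logic off the label string alone, which is the substitution the claim language describes.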
Regarding Claim 2, Hanniel teaches the method of claim 1, further comprising:

determining a relative size of the one or more objects based on an object pixel size of the one or more objects relative to an image pixel size of the image (Hanniel, paragraph [0098]: "…an image sensor can acquire image data associated with each pixel included in a particular scan line."; paragraph [0099]: "…each pixel in a row is read one at a time, and scanning of the rows proceeds on a row-by-row basis until an entire image frame has been captured."; and paragraph [0146]: "… processing unit 110 may track a detected candidate object across consecutive frames and accumulate frame-by-frame data associated with the detected object (e.g., size, position relative to vehicle 200, etc.).");

and filtering the one or more objects based on the relative size (Hanniel, paragraph [0145]: "At step 542, processing unit 110 may filter the set of candidate objects to exclude certain candidates (e.g., irrelevant or less relevant objects) based on classification criteria. Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory 140). Properties may include object shape, dimensions, texture, position (e.g., relative to vehicle 200), and the like."; and paragraph [0268]: "When the physical size of the landmark is known, the distance to the landmark may also be determined based on the following equation: Z=f*W/ω, where f is the focal length, W is the size of the landmark (e.g., height or width), ω is the number of pixels when the landmark leaves the image.").

Regarding Claim 13, Hanniel teaches an apparatus comprising: at least one processor (Hanniel, paragraph [0069]: "In some embodiments, processing unit 110 may include an applications processor 180…"); and at least one memory including computer program code for one or more programs (Hanniel, paragraph [0009]: "In an embodiment, a system for verifying and supplementing information received from a host vehicle may include a memory device storing a database…"; and: "The at least one processing device may further be programmed to compare the received feature coordinate information to map information associated with the environment of the host vehicle…"); the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: receive an image captured from a perspective of a vehicle or a device traveling at street level; process the image using computer vision to detect one or more objects, one or more lane markings, a road surface, or a combination thereof depicted in the image; determine a relative positioning of the one or more objects with respect to the one or more lane markings, the road surface, or a combination thereof; classify one or more semantic localization features of the one or more objects based on the relative positioning, wherein each of the one or more semantic localization features comprises a predefined label indicating a relative location of the one or more objects with respect to the one or more lane markings, the road surface, or a combination thereof, distinct from location coordinate data; and provide the one or more semantic localization features as an output. Hanniel teaches each of these functional limitations for the reasons given for the corresponding steps of Claim 1 (Hanniel, Fig. 1, labels 122 and 124; paragraphs [0415], [0128], [0413], [0198], [0264], [0352], [0214], and [0283]).

As with Claim 1, Hanniel fails to teach "wherein the predefined label serves as a substitute for location coordinate data to simplify data processing for a downstream location-based service," which Lakshmi teaches for the reasons given above (Lakshmi, abstract; paragraphs [0062] and [0075]-[0076]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the label instead of the coordinate in Hanniel in order to perform dynamic scene recognition rapidly and accurately with little attention to objects in the scene. Therefore, the claimed invention would have been obvious to one of ordinary skill in the art.
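The pixel-size relation Z = f*W/ω quoted from Hanniel [0268] in the Claim 2 discussion above reduces to a single line of arithmetic. The numeric values in this sketch are illustrative assumptions only:

```python
# Pinhole-camera distance estimate Z = f * W / w, as quoted from
# Hanniel [0268]. All numbers below are illustrative assumptions.

def distance_to_landmark(focal_length_px: float,
                         landmark_size_m: float,
                         landmark_size_px: float) -> float:
    """Z = f * W / w: distance from focal length f (pixels), known
    physical landmark size W (meters), and its imaged size w (pixels)."""
    return focal_length_px * landmark_size_m / landmark_size_px

# A 0.75 m wide sign imaged at 50 px with a 1000 px focal length:
print(distance_to_landmark(1000.0, 0.75, 50.0))  # 15.0 (meters)
```

This is the same relation the Kainz reference later relies on for converting a bounding box's pixel dimensions into estimated real-world values.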
Regarding Claim 14, Hanniel teaches the apparatus of claim 13, wherein the apparatus is further caused to determine a relative size of the one or more objects based on an object pixel size of the one or more objects relative to an image pixel size of the image, and to filter the one or more objects based on the relative size, for the reasons given for Claim 2 (Hanniel, paragraphs [0098], [0099], [0146], [0145], and [0268]).

Regarding Claim 17, Hanniel teaches a non-transitory computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the steps recited in Claim 1 (Hanniel, paragraph [0010]: "Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executed by at least one processing device and perform any of the methods described herein."). Hanniel teaches the recited receiving, processing, determining, classifying, and providing steps for the reasons given for Claim 1 (Hanniel, Fig. 1, labels 122 and 124; paragraphs [0415], [0128], [0413], [0198], [0264], [0352], [0214], and [0283]).

As with Claim 1, Hanniel fails to teach "wherein the predefined label serves as a substitute for location coordinate data to simplify data processing for a downstream location-based service," which Lakshmi teaches for the reasons given above (Lakshmi, abstract; paragraphs [0062] and [0075]-[0076]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the label instead of the coordinate in Hanniel in order to perform dynamic scene recognition rapidly and accurately with little attention to objects in the scene. Therefore, the claimed invention would have been obvious to one of ordinary skill in the art.
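The relative-size filtering recited in Claims 2, 14, and 18 (object pixel size relative to image pixel size, then filtering on that ratio) can be sketched as follows. The 1% threshold, data shapes, and example detections are assumptions for illustration, not from the application:

```python
# Hypothetical sketch of relative-size filtering: compute each detection's
# bounding-box area as a fraction of the image area, then drop detections
# below an assumed minimum ratio.

def filter_by_relative_size(detections, image_w, image_h, min_ratio=0.01):
    """Keep detections whose bbox area / image area >= min_ratio.

    detections -- list of (label, (x, y, w, h)) boxes in pixels
    """
    image_area = image_w * image_h
    kept = []
    for label, (x, y, w, h) in detections:
        if (w * h) / image_area >= min_ratio:
            kept.append((label, (x, y, w, h)))
    return kept

dets = [("sign", (10, 10, 200, 150)),   # ~1.4% of a 1920x1080 frame: kept
        ("noise", (5, 5, 20, 20))]      # ~0.02% of the frame: filtered out
print(filter_by_relative_size(dets, 1920, 1080))
```

This mirrors Hanniel [0145]'s filtering of candidate objects on properties such as dimensions, restated in terms of the claimed pixel-size ratio.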
Regarding Claim 18, Hanniel teaches the non-transitory computer-readable storage medium of claim 17 (Hanniel, paragraph [0010], as cited above), wherein the apparatus is caused to further perform determining a relative size of the one or more objects based on an object pixel size of the one or more objects relative to an image pixel size of the image, and filtering the one or more objects based on the relative size, for the reasons given for Claim 2 (Hanniel, paragraphs [0098], [0099], [0146], [0145], and [0268]).

Claims 3, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US 2019/0376809) in view of Lakshmi et al. (US 2020/0086879), and further in view of Kainz et al. ("Estimating the object size from static 2D image").

Regarding Claim 3, Hanniel and Lakshmi do not explicitly teach the method of claim 2, wherein the object pixel size is determined from a size of a bounding box corresponding to the one or more objects as detected by the computer vision. However, in an analogous field of endeavor, Kainz teaches this limitation (Kainz, page 2, "Computer vision techniques in the Measurement": "The following step requires to acquire the displayed distance between edges of the object area. These values can be determined if the object is marked by a bounding box and a circle, to obtain a minimal rectangular and circular area which the object occupies. The size of the object will be represented by two numerical values - the width and height of the object area (or one if a circular diameter is chosen)."; and page 3, "Implementing the Software Solution": "The first phase provides the object detection and a filtering out of the background from an image. The second phase is the edge detection and obtaining pixel dimensions of an object in the image. The last phase is the conversion of the pixel size to the estimated real values, which are output of the application."). Hanniel and Kainz are analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Hanniel to incorporate the teachings of Kainz, wherein the object pixel size is determined from a size of a bounding box corresponding to the one or more objects as detected by the computer vision. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to Claim 3.

Regarding Claim 15, the combination of Hanniel and Kainz teaches the apparatus of claim 14, wherein the object pixel size is determined from a size of a bounding box corresponding to the one or more objects as detected by the computer vision (Kainz, pages 2-3, as cited for Claim 3). The rationale and motivation given for Claim 3 apply equally; the Examiner has therefore reached a conclusion of obviousness with respect to Claim 15.

Regarding Claim 19, the combination of Hanniel and Kainz teaches the non-transitory computer-readable storage medium of claim 18 (Hanniel, paragraph [0010], as cited above), wherein the object pixel size is determined from a size of a bounding box corresponding to the one or more objects as detected by the computer vision (Kainz, pages 2-3, as cited for Claim 3). The rationale given for Claim 3 applies equally, with the motivation of improving the accuracy of computer-vision-based object detection; the Examiner has therefore reached a conclusion of obviousness with respect to Claim 19.

Claims 4, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US 2019/0376809) in view of Lakshmi et al. (US 2020/0086879), and further in view of Nedevschi et al. ("Accurate Ego-Vehicle Global Localization at Intersections Through Alignment of Visual Data With Digital Map").
Regarding Claim 4, Hanniel and Lakshmi do not teach: The method of claim 1, wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. However, in an analogous field of endeavor, Nedevschi teaches: wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, (Nedevschi, Section A. Lane Identification Through a BN: "A general remark regarding the usefulness of the information provided by the other vehicles is the following. In the case in which there are more than three lanes per driving direction, the type of lateral lane delimiters is no longer discriminatory information, and it becomes difficult to correctly and uniquely identify the lane. In such cases, the relative position and traveling direction of the detected vehicles can provide useful information in discriminating between the lanes with equal probability.") and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. (Nedevschi, Section A. Lane Identification Through a BN: "Therefore, using each detected vehicle’s relative velocity and relative lateral position, the following features are extracted: 1) vehicle’s traveling direction, which can be either the same or the opposite direction with respect to the ego-vehicle (Outgoing or Oncoming, respectively); 2) vehicle’s lateral position, which can be on a left lane or a right lane with respect to the ego-lane (Left or Right, respectively).") Accordingly, it was obvious to combine Hanniel and Nedevschi to obtain the invention of claim 4. 
Hanniel and Nedevschi are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel to incorporate the teachings of Nedevschi wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the accuracy of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 4. Regarding Claim 16, the combination of Hanniel and Nedevschi teaches: The apparatus of claim 13, wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, (Nedevschi, Section A. Lane Identification Through a BN: "A general remark regarding the usefulness of the information provided by the other vehicles is the following. In the case in which there are more than three lanes per driving direction, the type of lateral lane delimiters is no longer discriminatory information, and it becomes difficult to correctly and uniquely identify the lane. 
In such cases, the relative position and traveling direction of the detected vehicles can provide useful information in discriminating between the lanes with equal probability.") and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. (Nedevschi, Section A. Lane Identification Through a BN: "Therefore, using each detected vehicle’s relative velocity and relative lateral position, the following features are extracted: 1) vehicle’s traveling direction, which can be either the same or the opposite direction with respect to the ego-vehicle (Outgoing or Oncoming, respectively); 2) vehicle’s lateral position, which can be on a left lane or a right lane with respect to the ego-lane (Left or Right, respectively).") Accordingly, it was obvious to combine Hanniel and Nedevschi to obtain the invention of claim 16. Hanniel and Nedevschi are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus as taught by Hanniel to incorporate the teachings of Nedevschi wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 16. 
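The Left/Right lateral-position feature quoted from Nedevschi can be sketched as a simple classifier over a detected vehicle's signed lateral offset from the ego-lane. The lane width, the sign convention, and all names here are illustrative assumptions, not the reference's actual method:

```python
def lateral_lane(lateral_offset_m: float, lane_width_m: float = 3.5) -> str:
    """Classify a detection as in the Left, Right, or Ego lane.

    lateral_offset_m: assumed signed lateral distance from the ego-lane
    centerline in meters (negative = left of the ego-vehicle, positive = right).
    A detection farther than half a lane width from the centerline is assigned
    to the adjacent lane on that side.
    """
    half_lane = lane_width_m / 2.0
    if lateral_offset_m < -half_lane:
        return "Left"
    if lateral_offset_m > half_lane:
        return "Right"
    return "Ego"

print(lateral_lane(-3.6))  # a vehicle roughly one lane to the left -> Left
print(lateral_lane(0.4))   # within the ego-lane -> Ego
```

A real system would derive the offset from tracked image-plane positions and a lane model; the fixed threshold here only illustrates the Left/Right feature extraction the quoted passage describes.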
Regarding Claim 20, the combination of Hanniel and Nedevschi teaches: The non-transitory computer-readable storage medium of claim 17 (Hanniel, Paragraph [0010]: “Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executed by at least one processing device and perform any of the methods described herein.”), wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, (Nedevschi, Section A. Lane Identification Through a BN: "A general remark regarding the usefulness of the information provided by the other vehicles is the following. In the case in which there are more than three lanes per driving direction, the type of lateral lane delimiters is no longer discriminatory information, and it becomes difficult to correctly and uniquely identify the lane. In such cases, the relative position and traveling direction of the detected vehicles can provide useful information in discriminating between the lanes with equal probability.") and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. (Nedevschi, Section A. Lane Identification Through a BN: "Therefore, using each detected vehicle’s relative velocity and relative lateral position, the following features are extracted: 1) vehicle’s traveling direction, which can be either the same or the opposite direction with respect to the ego-vehicle (Outgoing or Oncoming, respectively); 2) vehicle’s lateral position, which can be on a left lane or a right lane with respect to the ego-lane (Left or Right, respectively).") Accordingly, it was obvious to combine Hanniel and Nedevschi to obtain the invention of claim 20. Hanniel and Nedevschi are considered to be analogous to the claimed invention because they are in the same field of image analysis. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the non-transitory computer-readable storage medium as taught by Hanniel to incorporate the teachings of Nedevschi wherein the one or more semantic localization features includes a lateral localization of the one or more objects with respect to the vehicle, and wherein the lateral localization indicates that the one or more objects are in a left lane or a right lane relative to a location of the vehicle or the device. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 20. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US20190376809A1), Lakshmi et al. (US 20200086879), in view of Nedevschi et al. (Accurate Ego-Vehicle Global Localization at Intersections Through Alignment of Visual Data With Digital Map) and in further view of Takemura et al. (US20150161881A1). Regarding Claim 5, the combination of Hanniel and Nedevschi does not teach: The method of claim 4, further comprising: determining a vertical location of the one or more objects in the image; projecting a horizontal line from the vertical location to one or more lane boundaries determined from the one or more lane markings; and determining the lateral location of the one or more objects based on an intersection of the horizontal line with the one or more lane boundaries. However, in an analogous field of endeavor, Takemura teaches: The method of claim 4, further comprising: determining a vertical location of the one or more objects in the image; (Takemura, Paragraph [0183]: “As shown in FIG. 
25(b), the distance d4 is a distance that specifies a height in actual space that is set so as to include the tires of the other vehicle VX and so on. The distance d4 is made to be the length shown in FIG. 25(a) in the image as seen in a bird's-eye view.”) projecting a horizontal line from the vertical location to one or more lane boundaries determined from the one or more lane markings; (Takemura, Fig. 25: Figure 25(a) in Takemura shows a horizontal line projected from d4, which is the vertical location, to the lane. The line of b4 connects with endpoint C2, which indicates a lane marking.) and determining the lateral location of the one or more objects based on an intersection of the horizontal line with the one or more lane boundaries. (Takemura, Paragraph [0138]: “After having counted the number of differential pixels DP, the three dimensional body detection unit 33 obtains the point of intersection CP between the line La and the ground line L1. And, corresponding to the point of intersection CP and the counted number, along with determining a position on the horizontal axis based upon the position of the point of intersection CP, in other words a position in the direction of the vertical axis in the figure at the right of FIG. 17, the three dimensional body detection unit 33 also determines a position on the vertical axis from the counted number, in other words a position in the direction of the horizontal axis in the figure at the right of FIG. 17, and plots this as the counted number at the point of intersection CP.”) Accordingly, it was obvious to combine Hanniel, Nedevschi, and Takemura to obtain the invention of claim 5. Hanniel, Nedevschi, and Takemura are considered to be analogous to the claimed invention because they are in the same field of image analysis. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel and Nedevschi to incorporate the teachings of Takemura to determine a vertical location of the one or more objects in the image; project a horizontal line from the vertical location to one or more lane boundaries determined from the one or more lane markings; and determine the lateral location of the one or more objects based on an intersection of the horizontal line with the one or more lane boundaries. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 5. Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US20190376809A1) in view of Lakshmi et al. (US 20200086879) and further in view of Sivaraman et al. (Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach). Regarding Claim 6, Hanniel teaches: The method of claim 1, wherein the one or more semantic localization features includes an on/off road detection, (Hanniel, Paragraph [0242]: “Such distances may be used, as the geometrical road model is mainly used for two purposes: planning the trajectory ahead and localizing the vehicle on the road model.”) Hanniel does not teach: and wherein the on/off road detection indicates that the one or more objects are on the road surface detected in the image or off the road surface detected in the image. 
However, in an analogous field of endeavor, Sivaraman teaches: and wherein the on/off road detection indicates that the one or more objects are on the road surface detected in the image or off the road surface detected in the image. (Sivaraman, Page 5, Section B. Improved Vehicle Detection: “To determine if an object lies beneath the horizon, we first use the tracked object’s state vector, as given in (9). We then use (10) to calculate the center of the bottom edge of the object pbottom, which is represented in the image plane by its bounding box. If the bottom edge of the object sits lower than the estimated location of the ground plane, we keep this object as a vehicle. Objects whose lower edge sits above the estimated ground plane are filtered out…”) Accordingly, it was obvious to combine Hanniel and Sivaraman to obtain the invention of claim 6. Hanniel and Sivaraman are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel to incorporate the teachings of Sivaraman wherein the one or more semantic localization features includes an on/off road detection, and wherein the on/off road detection indicates that the one or more objects are on the road surface detected in the image or off the road surface detected in the image. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 6. 
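The ground-plane filter quoted from Sivaraman, keeping a detection only when the bottom edge of its bounding box sits below the estimated ground plane in the image, can be sketched as follows. Row indices grow downward, and the sample values and names are illustrative assumptions, not the reference's implementation:

```python
def is_on_road(box_bottom_row: int, horizon_row: int) -> bool:
    """True if the bounding box's bottom edge lies below the assumed
    ground-plane / horizon row (image row indices increase downward)."""
    return box_bottom_row > horizon_row

# Hypothetical detections: bottom-edge row of each bounding box.
detections = [{"id": 1, "bottom": 400}, {"id": 2, "bottom": 150}]
horizon = 220  # assumed estimated ground-plane row for this frame

# Keep only detections whose lower edge sits below the horizon estimate.
on_road = [d for d in detections if is_on_road(d["bottom"], horizon)]
print([d["id"] for d in on_road])  # detection 2 is filtered out
```

In the cited work the ground-plane location comes from lane tracking; the fixed row used here only makes the filtering step concrete.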
Regarding Claim 8, the combination of Hanniel and Sivaraman teaches: The method of claim 1, wherein the one or more semantic localization features includes an in-lane detection, (Sivaraman, Page 4: “We track the ego-vehicle’s position within its lane, the lane width, and lane model parameters using Kalman filtering.”) and (Sivaraman, Page 5, Section IV: Synergistic Integration of Lane-Vehicle Localization and Tracking: “To improve the lane estimation and tracking performance, we integrate knowledge of vehicle locations in the image plane.”) and wherein the in-lane detection indicates that the one or more objects are in a same lane or not the same lane as a location of the vehicle or the device. (Sivaraman, Page 2, Fig. 2: “Typical performance of integrated lane and vehicle tracking on highway with dense traffic. Tracked vehicles in the ego-lane are marked green. To the left of the ego-lane, tracked vehicles are marked blue. To the right of the ego-lane, tracked vehicles are marked red.” Figure 2 of Sivaraman shows the performance of lane and vehicle tracking. The blue and red lines indicate lane boundaries, and the vehicle outlined in green is noted to be in the ego-lane (same lane).) Accordingly, it was obvious to combine Hanniel and Sivaraman to obtain the invention of claim 8. Hanniel and Sivaraman are considered to be analogous to the claimed invention because they are in the same field of image analysis. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel to incorporate the teachings of Sivaraman wherein the one or more semantic localization features includes an in-lane detection, and wherein the in-lane detection indicates that the one or more objects are in a same lane or not the same lane as a location of the vehicle or the device. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 8. Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US20190376809A1), in view of Lakshmi et al. (US 20200086879), and further in view of Sivaraman et al. (Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach), in further view of Seo et al. 
(Utilizing Instantaneous Driving Direction for Enhancing Lane-Marking Detection), in further view of Takemura (US20150161881A1). Regarding Claim 7, the combination of Hanniel, Lakshmi, and Sivaraman does not teach: The method of claim 6, wherein the computer vision is used to perform image segmentation to determine a plurality of pixels of the image corresponding to the road surface, the method further comprising: determining a left road boundary and a right road boundary of the road based on the plurality of pixels; and determining the on/off road detection of the one or more objects by comparing a horizontal location of the one or more objects in the object to the left road boundary and the right road boundary at a vertical location of the one or more objects in the image. However, in an analogous field of endeavor, Seo teaches: The method of claim 6, wherein the computer vision is used to perform image segmentation to determine a plurality of pixels of the image corresponding to the road surface, the method further comprising: determining a left road boundary and a right road boundary of the road based on the plurality of pixels; (Seo, Part II. Longitudinal Lane-Marking Detection: “The filter is designed to emphasize the intensity contrast between lane-marking pixels and their neighboring pixels.”) and (Seo, Part II. Longitudinal Lane-Marking Detection: “Given an input perspective image, we can readily compute the number of pixels used to depict lane-markings on each row of the input image.”) and (Seo, Part II. Longitudinal Lane-Marking Detection: “To produce a binary image of lane-markings from this new intensity image, we first do an intensity thresholding. This intensity thresholding keeps only pixels the values of which are greater than a predefined intensity threshold (e.g., 10). We then apply a connected-component group to identify a set of lane-marking pixel blobs. 
For each lane-marking pixel blob, we compute the eigenvalues and eigenvectors of the pixel coordinates’ dispersion matrix to fit a line segment to the blob.”) and (Seo, Fig. 2: Figure 2 in Seo shows examples of the initial lane-marking detection results, which show how the lane-marking detections determine a left road boundary and a right road boundary, especially in Figure 2(c).) Accordingly, it was obvious to combine Hanniel, Sivaraman, and Seo to obtain the invention of claim 7. Hanniel, Sivaraman, and Seo are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel and Sivaraman to incorporate the teachings of Seo to determine a left road boundary and a right road boundary of the road based on the plurality of pixels. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 7. The combination of Hanniel, Sivaraman, and Seo does not teach: and determining the on/off road detection of the one or more objects by comparing a horizontal location of the one or more objects in the object to the left road boundary and the right road boundary at a vertical location of the one or more objects in the image. However, in an analogous field of endeavor, Takemura teaches: and determining the on/off road detection (Takemura, Paragraph [0071]: “As shown in FIG. 
7, for example, the area setting unit 201 may comprise a road surface area setting unit 201 a…”) and (Takemura, Paragraph [0072]: “…various kinds of functions are implemented by the in-vehicle surrounding environment recognition device 100, such as, for example, lane recognition, other vehicle recognition, pedestrian detection, sign detection, right-turn collision prevention detection, parking box recognition, and moving body detection.”) of the one or more objects by comparing a horizontal location of the one or more objects in the object to the left road boundary and the right road boundary at a vertical location of the one or more objects in the image. (Takemura, Paragraph [0183]: “As shown in FIG. 25(b), the distance d4 is a distance that specifies a height in actual space that is set so as to include the tires of the other vehicle VX and so on. The distance d4 is made to be the length shown in FIG. 25(a) in the image as seen in a bird's-eye view.”) and (Takemura, Fig. 25: Figure 25(a) in Takemura shows a horizontal line projected from d4, which is the vertical location, to the lane. The line of b4 connects with endpoint C2, which indicates a lane marking.) and (Takemura, Paragraph [0138]: “After having counted the number of differential pixels DP, the three dimensional body detection unit 33 obtains the point of intersection CP between the line La and the ground line L1. And, corresponding to the point of intersection CP and the counted number, along with determining a position on the horizontal axis based upon the position of the point of intersection CP, in other words a position in the direction of the vertical axis in the figure at the right of FIG. 17, the three dimensional body detection unit 33 also determines a position on the vertical axis from the counted number, in other words a position in the direction of the horizontal axis in the figure at the right of FIG. 
17, and plots this as the counted number at the point of intersection CP.”) Accordingly, it was obvious to combine Hanniel, Sivaraman, Seo, and Takemura to obtain the invention of claim 7. Hanniel, Sivaraman, Seo, and Takemura are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel, Sivaraman, and Seo to incorporate the teachings of Takemura to determine the on/off road detection of the one or more objects by comparing a horizontal location of the one or more objects in the object to the left road boundary and the right road boundary at a vertical location of the one or more objects in the image. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 7. Regarding Claim 9, the combination of Hanniel, Sivaraman, Seo, and Takemura teaches: The method of claim 8, further comprising: determining a vertical location of the one or more objects in the image; (Takemura, Paragraph [0183]: “As shown in FIG. 25(b), the distance d4 is a distance that specifies a height in actual space that is set so as to include the tires of the other vehicle VX and so on. The distance d4 is made to be the length shown in FIG. 
25(a) in the image as seen in a bird's-eye view.”) projecting a horizontal line from the vertical location to one or more lane boundaries determined from the one or more lane markings; (Takemura, Fig. 25: Figure 25(a) in Takemura shows a horizontal line projected from d4, which is the vertical location, to the lane. The line of b4 connects with endpoint C2, which indicates a lane marking.) and determining the in-lane detection of the one or more objects based on an intersection of the horizontal line with the one or more lane boundaries. (Takemura, Paragraph [0138]: “After having counted the number of differential pixels DP, the three dimensional body detection unit 33 obtains the point of intersection CP between the line La and the ground line L1. And, corresponding to the point of intersection CP and the counted number, along with determining a position on the horizontal axis based upon the position of the point of intersection CP, in other words a position in the direction of the vertical axis in the figure at the right of FIG. 17, the three dimensional body detection unit 33 also determines a position on the vertical axis from the counted number, in other words a position in the direction of the horizontal axis in the figure at the right of FIG. 17, and plots this as the counted number at the point of intersection CP.”) and (Takemura, Paragraph [0072]: “…various kinds of functions are implemented by the in-vehicle surrounding environment recognition device 100, such as, for example, lane recognition, other vehicle recognition, pedestrian detection, sign detection, right-turn collision prevention detection, parking box recognition, and moving body detection.”) 
and (Takemura, Paragraph [0067]: “For example if, in the lane recognition mentioned above, it has been determined that the subject vehicle appears to be deviating from the road lane upon which it is traveling, or if, in the other vehicle detection, the pedestrian detection, the right-turn collision prevention, the moving body detection, or the like, a vehicle has been detected for which there is a possibility of collision with the subject vehicle, then a warning is outputted from the warning output unit 3 according to control by the control unit 2.”) Accordingly, it was obvious to combine Hanniel, Sivaraman, Seo, and Takemura to obtain the invention of claim 9. Hanniel, Sivaraman, Seo, and Takemura are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel, Sivaraman, and Seo to incorporate the teachings of Takemura to determine a vertical location of the one or more objects in the image; project a horizontal line from the vertical location to one or more lane boundaries determined from the one or more lane markings; and determine the in-lane detection of the one or more objects based on an intersection of the horizontal line with the one or more lane boundaries. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 9. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US20190376809A1) in view of Lakshmi et al. (US 20200086879) and further in view of Ki et al. 
(Accident Detection System using Image Processing and MDRs). Regarding Claim 10, Hanniel does not teach: The method of claim 1, further comprising: determining a road incident involving the one or more objects based on the one or more semantic location features; and storing the road incident as a data record of a geographic database. However, in an analogous field of endeavor, Ki teaches: The method of claim 1, further comprising: determining a road incident involving the one or more objects based on the one or more semantic location features; (Ki, Page 3: "An outline of this process is shown in figure 2, and the accident detection algorithm is summarized as follows [12]: step 1: extract the vehicle objects from the video frame; step 2: track the MVs by the tracking algorithm; step 3: extract features such as variation rates of velocity, position, area, direction of the MV as the accident index; step 4: estimate the sum of the accident index flags (VF+PF+SF+DF) and identify the accident…") and storing the road incident as a data record of a geographic database. (Ki, Page 4, "We designed a database for ARRS by entity-relation (E/R) model. As shown in figure 3, the proposed model consists of seven categories; traffic accident, accident information, cause of the accident, driver, vehicle, site condition, and driving pattern at accident time.") Accordingly, it was obvious to combine Hanniel and Ki to obtain the invention of claim 10. Hanniel and Ki are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel to incorporate the teachings of Ki to determine a road incident involving the one or more objects based on the one or more semantic location features; and store the road incident as a data record of a geographic database. 
Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 10. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Hanniel et al. (US20190376809A1), Lakshmi et al. (US 20200086879), in view of Ki et al. (Accident Detection System using Image Processing and MDRs), and further in view of Catten et al. (US20100207787A1). Regarding Claim 11, the combination of Hanniel, Lakshmi, and Ki does not teach: The method of claim 10, wherein the one or more objects includes a construction cone, and wherein the road incident is a construction event. However, in an analogous field of endeavor, Catten teaches: The method of claim 10, wherein the one or more objects includes a construction cone, and wherein the road incident is a construction event. (Catten, Paragraph [0052]: "Traffic cones 607, traffic barrels, barricades, construction equipment, or vehicles may block lane 604 and prevent vehicles 601, 608, 609 from using lane 604 beyond point 61. In the illustrated example, upon reaching construction zone 611, vehicle 609 is forced to slow down below the posted speed limit for road 603 due to traffic congestion. Vehicle monitoring system 610 may send updated road condition information to the central server.") and (Catten, Paragraph [0045]: "Vehicle monitoring system 302 may retrieve and use locally stored or centrally stored road condition information and/or may use a combination of information from multiple sources. 
It will be understood that the terms “road condition” or “road condition information” as used herein is intended to broadly include any road or route information, such as, … construction zone… or any other road or traffic conditions.")

Accordingly, it would have been obvious to combine Hanniel, Ki, and Catten to obtain the invention of claim 11. Hanniel, Ki, and Catten are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel and Ki to incorporate the teachings of Catten wherein the one or more objects includes a construction cone, and wherein the road incident is a construction event. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 11.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hanniel (US20190376809A1) and Lakshmi et al. (US20200086879), in view of Horita et al. (Employing a fully convolutional neural network for road marking detection).

Regarding Claim 12, Hanniel and Lakshmi do not teach: The method of claim 1, wherein the computer vision uses an object detection algorithm to detect the one or more objects, uses a spatial neural network to detect the one or more lane markings, uses an image segmentation classifier to detect the road surface; or a combination thereof.

However, in an analogous field of endeavor, Horita teaches: The method of claim 1, wherein the computer vision uses an object detection algorithm to detect the one or more objects, (Horita, Section B.
Convolutional Neural Network for Image Segmentation: “An important advantage of using FCNN architecture is that it allows jointing multiple functionalities. In [9], the authors proposed the MultiNet, a unified architecture with shared encoder for image classification, object detection and road estimation by image segmentation. Considering that an autonomous navigation system consists of several perception subsystems, having a way to integrate all of them may be a key point to the development of autonomous vehicles.”) and (Horita, Section III. Methodology: “The architecture employed for this work was the only part of the MultiNet [9] used for image segmentation, and it can be seen in Fig. 1.”) uses a spatial neural network to detect the one or more lane markings, uses an image segmentation classifier to detect the road surface; or a combination thereof. (Horita, Section B. Convolutional Neural Network for Image Segmentation: "For this work, CNN architectures designed for image segmentation are more interesting, since the objective of road marking detection is to classify each pixel in road marking or non-road marking. Early CNN-based approaches for image segmentation employed conventional CNN architecture to classify sliding windows [20].")

Accordingly, it would have been obvious to combine Hanniel and Horita to obtain the invention of claim 12. Hanniel and Horita are considered to be analogous to the claimed invention because they are in the same field of image analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Hanniel to incorporate the teachings of Horita wherein the computer vision uses an object detection algorithm to detect the one or more objects, uses a spatial neural network to detect the one or more lane markings, uses an image segmentation classifier to detect the road surface; or a combination thereof.
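The pixel-wise classification Horita describes (labeling each pixel as road marking or non-road marking) can be illustrated with a minimal sketch. Note the stand-in: a real system would use an FCNN such as the MultiNet segmentation branch cited above; here a simple brightness threshold plays that role, since painted lane markings are typically brighter than asphalt. All names and the threshold value are assumptions for illustration only.

```python
# Illustrative stand-in for FCNN-based road-marking segmentation: classify
# each pixel of a grayscale image as road-marking (1) or non-road-marking (0).
# A brightness threshold replaces the learned per-pixel classifier.

def segment_road_markings(gray_image, threshold=200):
    """Return a binary mask with the same shape as the input grayscale image."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray_image]

# 4x4 grayscale patch: a bright vertical stripe (painted line) on dark asphalt.
patch = [
    [30, 240, 250, 40],
    [35, 245, 255, 38],
    [32, 238, 249, 41],
    [30, 242, 251, 39],
]
mask = segment_road_markings(patch)
print(mask[0])  # -> [0, 1, 1, 0]
```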
Even though Horita does not explicitly use the words “object detection algorithm to detect one or more objects”, one of ordinary skill in the art recognizes that the architecture was the part of the MultiNet (a Fully Convolutional Neural Network) used for image segmentation, which includes the tasks of object detection and road estimation. Therefore, the prior art of Horita would encompass an object detection algorithm to detect the one or more objects. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the robustness of computer-vision-based object detection. It is at least for the aforementioned reason that the Examiner has reached a conclusion of obviousness with respect to Claim 12.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 am to 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANCY BITAR/
Primary Examiner, Art Unit 2664

Prosecution Timeline

Nov 18, 2022: Application Filed
Feb 25, 2025: Non-Final Rejection (§103)
Jun 04, 2025: Response Filed
Jul 07, 2025: Final Rejection (§103)
Sep 09, 2025: Response after Non-Final Action
Jan 09, 2026: Request for Continued Examination
Jan 23, 2026: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12599437: PRE-PROCEDURE PLANNING, INTRA-PROCEDURE GUIDANCE FOR BIOPSY, AND ABLATION OF TUMORS WITH AND WITHOUT CONE-BEAM COMPUTED TOMOGRAPHY OR FLUOROSCOPIC IMAGING (2y 5m to grant; granted Apr 14, 2026)
Patent 12597132: IMAGE PROCESSING METHOD AND APPARATUS (2y 5m to grant; granted Apr 07, 2026)
Patent 12597240: METHOD AND SYSTEM FOR AUTOMATED CENTRAL VEIN SIGN ASSESSMENT (2y 5m to grant; granted Apr 07, 2026)
Patent 12597189: METHODS AND APPARATUS FOR SYNTHETIC COMPUTED TOMOGRAPHY IMAGE GENERATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12591982: MOTION DETECTION ASSOCIATED WITH A BODY PART (2y 5m to grant; granted Mar 31, 2026)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 91% (+8.2%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 946 resolved cases by this examiner. Grant probability derived from career allow rate.
