Prosecution Insights
Last updated: April 19, 2026
Application No. 18/339,819

SYSTEMS AND METHODS FOR USING IMAGE DATA TO IDENTIFY LANE WIDTH

Final Rejection §103
Filed: Jun 22, 2023
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Torc Robotics, Inc.
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% — above average (47 granted / 69 resolved; +6.1% vs TC avg)
Interview Lift: +35.7% — strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units
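
For readers who want to reproduce the arithmetic, the interview lift is the difference in allow rate between resolved cases with and without an interview. A minimal Python sketch follows; the with/without split of the 69 resolved cases is hypothetical (only the 47/69 career totals appear on this page), chosen so the numbers roughly reproduce the stated +35.7% lift.

```python
# Interview-lift arithmetic. The 25/28 (with) and 22/41 (without) split is
# hypothetical: it sums to the 47 granted / 69 resolved career totals shown
# above and roughly reproduces the stated +35.7% lift.

def allow_rate(granted: int, resolved: int) -> float:
    return granted / resolved

career = allow_rate(47, 69)               # ~68.1% career allow rate
with_interview = allow_rate(25, 28)       # hypothetical: ~89.3%
without_interview = allow_rate(22, 41)    # hypothetical: ~53.7%

lift = with_interview - without_interview
print(f"career {career:.1%}, interview lift {lift:+.1%}")  # ~+35.6 points
```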

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center average is an estimate • Based on career data from 69 resolved cases
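
The per-statute deltas above let you back out the implied Tech Center baseline directly (examiner rate minus delta). A small Python sketch, using only the figures shown in this table:

```python
# Back out the implied Tech Center baseline from the deltas above.
# Examiner rates and deltas are copied from the table; nothing else is sourced.

examiner_rate = {"101": 0.089, "103": 0.526, "102": 0.112, "112": 0.261}
delta_vs_tc   = {"101": -0.311, "103": 0.126, "102": -0.288, "112": -0.139}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")

# Every statute backs out to the same ~40.0% baseline, consistent with the
# footnote's single "Tech Center average estimate".
```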

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 were pending for examination in Application No. 18/339,819, filed June 22nd, 2023. In the remarks and amendments received on December 10th, 2025, claims 1, 8, 15, and 18 are amended and claims 2, 9, and 16 are canceled. Accordingly, claims 1, 3-8, 10-15, and 17-20 are currently pending for examination in the application.

Response to Amendment

Applicant’s amendments filed December 10th, 2025, to the Specification, Drawings, and Claims have overcome each and every objection and 35 U.S.C. § 112(b) rejection previously set forth in the Non-Final Office Action mailed August 11th, 2025. Accordingly, the objection(s) and 35 U.S.C. § 112(b) rejection(s) are withdrawn in response to the remarks and amendments filed. Examiner warmly thanks Applicant for considering the suggested amendments to be made to the disclosure.

Response to Arguments

Applicant’s arguments filed December 10th, 2025, regarding the rejection(s) of the independent claim(s) have been fully considered but are moot because the arguments do not apply to the new combination of the references being used in the current rejection below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 10th, 2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered and attached by the examiner.

Priority (Previously Presented)

Acknowledgment is made of applicant’s status as a continuation-in-part (CIP) of Application No. 18/303,460, filed on April 19th, 2023, which claims priority to prior-filed applications under 35 U.S.C. 119(e): Application No.’s 63/447,766 filed on February 23rd, 2023; 63/434,843 filed on December 22nd, 2022; and 63/376,860 filed on September 23rd, 2022. Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. However, applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of the prior-filed applications, Application No.’s 63/376,860 and 63/434,843, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The prior-filed applications do not disclose the subject matter of labeling a set of image data with a total number of lanes for a roadway and using the labeled set of image data to train a machine learning model to predict a new total number of lanes for a new roadway as output. Accordingly, claims 1-20 are not entitled to the benefit of the prior-filed applications Application No.’s 63/376,860 and 63/434,843.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 6-8, 11, 13-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Habib et al. (Habib; WO 2021/006870 A1) in view of Pham et al. (Pham; US 2020/0341466 A1), and further in view of Tran (US 2021/0157330 A1).

Regarding claim 1, Habib discloses a method, comprising:

identifying, by one or more processors coupled to non-transitory memory (para(s). [0019] and [0144], recite(s) [0019] “…The system includes memory storing instructions and one or more processors in communication with the memory…” [0144] “Computer-readable instructions stored on a computer-readable medium (e.g., the program 1755 stored in the memory 1710) are executable by the processor 1705 of the computing device 1700. A hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device…”), a set of image data captured by at least one autonomous vehicle when the at least one autonomous vehicle was positioned in a lane of a roadway (para(s). [0011], [0016], [0065], and [0071], recite(s) [0011] “According to a first aspect of the present disclosure, there is provided a computer-implemented method for performing autonomy-level functions associated with a moving vehicle. The method includes extracting, using convolutional layers of a convolutional neural network, a plurality of features from a plurality of image frames obtained by a camera within the moving vehicle…” [0016] “…a lane number indication of a road lane the moving vehicle is traveling on…” [0065] “…A training database is used which includes a sequence of images of the road from the driver’s perspective recorded while driving along a route and the corresponding consumer graded GPS data, industrial graded GPS data, and a lane line map of their route.
Road parameters ground truth (which are the true values of the road parameters that are associated with the images) can be obtained from the lane line map and the high precision geolocation provided by the industrial graded GPS data. The dataset consists of a wide variety of road geometry, a viewpoint from driver’s perspective, weather, and different time of the day…” [0071] “In some aspects, the training data 102 can include input data 103, such as image data taken from a driver’s perspective… The training data 102 can also include road parameters ground truth data corresponding to the input data 103. The input data 103 and the output data 105 are used during the DL model training 108 to train the DL model 110…”, where the “training database” or “training data” is a set of image data (e.g., a “sequence of images”) captured by at least one autonomous vehicle (e.g., a “moving vehicle” with “autonomy-level functions”) when it was positioned in a lane of the roadway (e.g., a “road lane the moving vehicle is traveling on”)), and respective ground truth localization data of the at least one autonomous vehicle (para(s). [0065]—see preceding citation above—, where the “road parameters ground truth” being obtained from the “lane line map and high precision geolocation” is respective ground truth localization data);

determining, by the one or more processors, a total number of lanes for the roadway (para(s). [0068], recite(s) [0068] “Techniques disclosed herein estimate novel lane contextual information such as… a number of lanes associated with the road the vehicle is traveling on (also referred to as “noOfLanes”)…”, where the “number of lanes associated with the road the vehicle is traveling on” is a total number of lanes for the roadway);

determining, by the one or more processors, a lane line type associated with a lane line of at least one lane of the roadway (para(s). [0065] further recite(s) [0065] “…The trained NN is capable of generalizing, so it learns to detect the lane lines and extract road parameters for roads it has never seen before. This trained NN can later be used by level 0 vehicles using a computing device camera as an input to generate road parameters estimation data, including lane parameters information and lane contextual information for output. The lane parameters information can include a lane marker heading angle, a lane marker offset, a lane marker curvature, a lane marker curvature derivative, and a lane marker type.”, where a “lane marker type” is a lane line type);

determining, by the one or more processors, a direction of travel along the at least one lane (para(s). [0101] and [0097], recite(s) [0101] “…The DNN 604… generates road parameters estimation data 610, which can include lane parameters information 608 and lane contextual information 606. …The lane parameters information 608 can include lane marker heading angle information, lane marker offset information, lane marker curvature information, lane marker curvature derivative information, and lane marker type, as discussed in connection with FIG. 4. …” [0097] “…the lane marker heading angle at Z=0 (e.g., the angle between the direction of the road 412 and the direction of the vehicle 410). …”, where determining “lane marker heading angle information” includes determining a “direction of the road”, which is determining a direction of travel along the at least one lane);

labeling, by the one or more processors, the set of image data (para(s).
[0077], recite(s) [0077] “The machine learning algorithms utilize the training data 102 to find correlations among the identified features that affect the outcome of assessments 116. In some example embodiments, the training data 102 includes labeled data, which is known data for one or more identified features and one or more outcomes. With the training data 102 (which can include identified features), the DL model is trained using the DL model training 108 within the DLA 106. The result of the training is the trained DL model 110…”, where the “labeled [training] data” is a set of image data labeled with “one or more identified features and one or more outcomes”);

and training, by the one or more processors, using the labeled set of image data, a machine learning model (para(s). [0077]—see citation above—, where para(s). [0069] further recite(s): [0069] “FIG. 1 is a block diagram 100 illustrating the training of a deep learning (DL) model to generate a trained DL model 110 using a DL architecture (DLA), according to some example embodiments. In some example embodiments, machine-learning programs (MLPs), including deep learning programs, also collectively referred to as machine-learning algorithms or tools, are utilized to perform operations associated with correlating data or other artificial intelligence (AI)-based functions.”, where training the “deep learning (DL) model” includes training the machine learning model using the labeled set of image data (e.g., the “training data 102 includes labeled data”)), such that the machine learning model is configured to predict a new total number of lanes for a new roadway as output (para(s). [0065] and [0068], recite(s) [0065] “…The trained NN is capable of generalizing, so it learns to detect the lane lines and extract road parameters for roads it has never seen before. This trained NN can later be used by level 0 vehicles using a computing device camera as an input to generate road parameters estimation data, including lane parameters information and lane contextual information for output.…” [0068] “Techniques disclosed herein estimate novel lane contextual information such as… a number of lanes associated with the road the vehicle is traveling on (also referred to as “noOfLanes”)…”, where the “trained NN” being trained to “detect the lane lines and extract road parameters for roads it has never seen before” is training the machine learning model to predict (i.e., “estimate”) new “road parameters estimation data”, including new “lane contextual information”, for a new roadway as output; wherein the new “lane contextual information” includes determining a new total number of lanes as disclosed in para. [0068] above);

and training, by the one or more processors, using the labeled set of image data, the machine learning model to predict a direction of travel along at least one lane of the new roadway (para(s). [0065] and [0068]—see preceding citation immediately above—, wherein the prediction of new “road parameters estimation data” further includes a direction of travel along at least one lane of the new roadway (i.e., a “direction of the road”) as part of the “lane marker heading angle information” as disclosed in para(s). [0101] and [0097] below: [0101] “…The lane parameters information 608 can include lane marker heading angle information, lane marker offset information, lane marker curvature information, lane marker curvature derivative information, and lane marker type, as discussed in connection with FIG. 4.
…” [0097] “…the lane marker heading angle at Z=0 (e.g., the angle between the direction of the road 412 and the direction of the vehicle 410). …”, where determining “lane marker heading angle information” includes determining a “direction of the road”, which is determining a direction of travel along at least one lane of the new roadway).

Where Habib does not specifically disclose determining… a direction of travel along the at least one lane, based on the lane line type…; labeling… the set of image data with the total number of lanes for the roadway and the direction of travel along the at least one lane;

Pham teaches, in the same field of endeavor of training machine learning models to predict at least a total number of lanes for a roadway, determining… a direction of travel along the at least one lane, based on the lane line type… (para(s). [0045] and [0058], recite(s) [0045] “Further, labels for the lanes 204, 206, and 208 may further be annotated with a corresponding heading direction, as indicated by arrows 202A-202V. The heading direction may represent a direction of the traffic pertaining to a certain lane. In some examples, the heading directions may be associated with a center (or key) point of its corresponding lane label. For example, heading direction 202S may be associated with a center point of lane 204. The different classification labels may be represented in FIG. 2A by different line types—e.g., solid lines, dashed lines, etc.—to represent different classifications. However, this is not intended to be limiting, and any visualization of the lane labels and their classifications may include different shapes, patterns, fills, colors, symbols, and/or other identifiers to illustrate differences in classification labels for features (e.g., lanes) in the images.” [0058] “The machine learning model(s) 104 may use the sensor data 102 to compute the output(s) 106, which may ultimately be applied to a decoder or one or more other post-processing components (described in more detail herein at least with respect to FIG. 5) to generate key points, classifications, lane widths, a number of lanes, lane heading, lane directionality, and/or other information. Although examples are described herein with respect to using deep neural networks (DNNs), and specifically convolutional neural networks (CNNs), as the machine learning model(s) 104 (e.g., with respect to FIGS. 1 and 5), this is not intended to be limiting…”, where determining a direction (e.g., “lane directionality” or “heading direction”) for at least one lane within the new roadway based on lane line type (i.e., based on a lane’s “associat[ion] with a center (or key) point of its corresponding lane label”, such that the lane labels are “different classification labels” including “different line type[s]”) is determining a direction of travel along the at least one lane based on the lane line type (e.g., “line type[s]”));

labeling… the set of image data with the total number of lanes for the roadway and the direction of travel along the at least one lane (para(s). [0047], recite(s) [0047] “Referring again to FIG. 1, the encoder 120 may be configured to encode the ground truth information corresponding to the intersection structure and pose using the annotation(s) 118.
For example, as described herein, even though the annotations may be limited to lane labels 118A and classifications 118B, information such as key points, number of lanes, heading direction, directionality, width, and/or other structure and pose information may be determined from the annotations 118. …”, where annotating “ground truth information” with “number of lanes” and “heading direction” is labeling the set of image data with a total number of lanes for the roadway (e.g., “number of lanes”) and a direction of travel along at least one lane (e.g., para. [0045]—see citation in preceding claim limitation above—recites that the “heading direction” is “a direction of the traffic pertaining to a certain lane”)).

Since Habib further discloses training the machine learning model to predict “lane parameters information” for at least one lane within the new roadway, including a lane type and lane heading angle, which includes a direction of travel along the at least one lane (para(s). [0065]—see citation in claim 1 limitation “…, such that the machine learning model is configured to predict…” above—, where para. [0065] further recites: [0065] “…The lane parameters information can include a lane marker heading angle, a lane marker offset, a lane marker curvature, a lane marker curvature derivative, and a lane marker type.”), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Habib to incorporate labeling the image data set with the total number of lanes for the roadway and a direction of travel along at least one lane determined based on a lane line type of the at least one lane by the processor, to improve training the machine learning model to output the predictions of the road parameters of at least the new total number of lanes for a new roadway and the lane parameters information of at least the direction of travel of at least one lane for a new roadway, by incorporating a loss between the ground truth labels and estimated labels of said road parameters and lane parameters information to be accounted for within the training of the machine learning model, as taught by Pham (para(s). [0057], recite(s) [0057] “Once the ground truth data 122 is generated for each instance of the sensor data 102 (e.g., for each image where the sensor data 102 includes image data), the machine learning model(s) 104 may be trained using the ground truth data 122. For example, the machine learning model(s) 104 may generate output(s) 106, and the output(s) 106 may be compared—using the loss function(s) 130—to the ground truth data corresponding to the respective instance of the sensor data 102. As such, feedback from the loss function(s) 130 may be used to update parameters (e.g., weights and biases) of the machine learning model(s) 104 in view of the ground truth data 122 until the machine learning model(s) 104 converges to an acceptable or desirable accuracy. Using the process 100, the machine learning model(s) 104 may be trained to accurately predict the output(s) 106 (and/or associated classifications) from the sensor data 102 using the loss function(s) 130 and the ground truth data 122…”).
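
To make the training flow mapped above concrete (label each image with a ground-truth lane count, then train a model against those labels with a loss function, as in Pham’s para. [0057]), here is a minimal Python/PyTorch sketch. It illustrates the general technique only, not Habib’s, Pham’s, or the applicant’s actual implementation; the network architecture, tensor shapes, and the MAX_LANES cap are all assumptions.

```python
# Minimal sketch: a CNN whose output head predicts a total lane count from
# an image, trained against ground-truth lane-count labels. Everything here
# (names, shapes, MAX_LANES) is illustrative, not from the cited filings.
import torch
import torch.nn as nn

MAX_LANES = 6  # assumed cap; lane count treated as a classification target

class LaneCountNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # small convolutional feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lane_count = nn.Linear(32, MAX_LANES + 1)  # predicts 0..MAX_LANES

    def forward(self, x):
        return self.lane_count(self.features(x))

model = LaneCountNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # loss between ground-truth and estimated labels

# One labeled batch: images plus ground-truth lane counts (synthetic here).
images = torch.randn(8, 3, 128, 256)
lane_labels = torch.randint(1, MAX_LANES + 1, (8,))

logits = model(images)
loss = loss_fn(logits, lane_labels)  # feedback used to update weights/biases
opt.zero_grad(); loss.backward(); opt.step()
```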
Where Habib in view of Pham does not specifically disclose determining, by the one or more processors, a direction of travel along the at least one lane, based on the lane line type by: determining the direction of travel along the at least one lane as being the same as a lane on the other side of the lane line when the lane line type indicates lanes on either side of the lane line have the same direction of travel; and determining the direction of travel along the at least one lane as being opposite from the lane on the other side of the lane line when the lane line type indicates the lanes on the either side of the lane line have opposite directions of travel;

Tran teaches, in the same field of endeavor of determining a direction of travel along at least one lane based on lane line type detected in image data, determining, by the one or more processors, a direction of travel along the at least one lane, based on the lane line type by (para(s). [0117], [0159], and [0161], recite(s) [0117] “The HD map represents portions of the lanes as lane elements. A lane element specifies the boundaries of the lane and various constraints including the legal direction in which a vehicle can travel within the lane element, the speed with which the vehicle can drive within the lane element, whether the lane element is for left turn only, or right turn only, and so on. The HD map represents a lane element as a continuous geometric portion of a single vehicle lane. The HD map stores objects or data structures representing lane elements that comprise information representing geometric boundaries of the lanes; driving direction along the lane; vehicle restriction for driving in the lane, for example, speed limit, relationships with connecting lanes including incoming and outgoing lanes; a termination restriction, for example, whether the lane ends at a stop line, a yield sign, or a speed bump; and relationships with road features that are relevant for autonomous driving, for example, traffic light locations, road sign locations and so on.” [0159] “As noted above, cameras can still be used to detect short range objects/symbols useful for navigation. For example, objects can include pavement markings which are used to convey messages to roadway users and to the camera and vision system. They indicate which part of the road to use, provide information about conditions ahead, and indicate where passing is allowed. Yellow lines separate traffic flowing in opposite directions. The autonomous vehicle is controlled to stay to the right of yellow lines. A solid yellow line indicates that passing is prohibited. A dashed yellow line indicates that passing is allowed. White lines separate lanes for which travel is in the same direction. A double white line indicates that lane changes are prohibited. A single white line indicates that lane changes are discouraged. A dashed white line indicates that lane changes are allowed. …” [0161] “…The “Yield Lines at Unsignalized Crosswalk, Two-Way Traffic” has a roadway with two lanes traveling in each direction, with opposing directions separated by a solid double yellow line. The lanes in the same direction are separated from each other by a broken white line.
…”, where determining “directions” or the “legal direction in which a vehicle can travel within the lane element” is determining a direction of travel along at least one lane based on at least the lane line type (e.g., “lane elements” including line types like “solid double yellow”, “broken white line”, etc.):

determining the direction of travel along the at least one lane as being the same as a lane on the other side of the lane line when the lane line type indicates lanes on either side of the lane line have the same direction of travel (para(s). [0117], [0159], and [0161]—see preceding citation immediately above—, where a lane line type of a “dashed” or “broken white line” is a lane line type indicating lanes on either side of the lane line have the same direction of travel (e.g., “travel is in the same direction”)); and

determining the direction of travel along the at least one lane as being opposite from the lane on the other side of the lane line when the lane line type indicates the lanes on the either side of the lane line have opposite directions of travel (para(s). [0117], [0159], and [0161]—see preceding citation immediately above—, where a lane line type of a “solid double yellow line” is a lane line type indicating lanes on either side of the lane line have opposite directions of travel (e.g., “traffic flowing in opposite directions”)).

Since Pham and Tran each disclose determining a direction of travel along the at least one lane based on different lane line types, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Habib in view of Pham to incorporate determining the direction of travel along the at least one lane as being the same as, or opposite to, the lane on the other side of the lane line based on the lane line type, as different lane line types are known to indicate different directions of traffic, as taught by Tran above.

Regarding claim 4, Habib, as modified by Pham and Tran, discloses the method of claim 1, wherein Habib further discloses the ground truth localization data includes data derived from a high-definition (HD) map (para(s). [0060], [0061], and [0065], recite(s) [0060] “As used herein, the term “High Definition map” (or HD map) refers to a category of maps built for self-driving purposes in connection with higher SAE autonomy level vehicles. The HD maps are characterized by extremely high precision such as centimeter-level accuracy. HD maps contain information such as where the lanes are, where the road boundaries are, where the curbs are and how high the curbs are, where the traffic signs and road markers are located, and so forth.” [0061] “As used herein, the term “lane line map” refers to a category of maps including geo-referenced lane lines of the road. A lane line map is a subset of HD maps which includes the lane line information only.” [0065] “…Road parameters ground truth (which are the true values of the road parameters that are associated with the images) can be obtained from the lane line map and the high precision geolocation provided by the industrial graded GPS data…”, where determining the “ground truth” localization data from at least a “lane line map”—which is a “subset of HD maps”—is the ground truth localization data including data derived from a high-definition (HD) map).
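
The Tran-based rule mapped to claim 1 above reduces to a small lookup: line types that separate same-direction traffic (white markings) imply the adjacent lane shares the ego lane’s heading, while types separating opposing traffic (yellow markings) imply the opposite heading. A minimal Python sketch; the type names and the function are illustrative, not drawn from the cited filings.

```python
from enum import Enum

class Direction(Enum):
    SAME = "same as the lane across the line"
    OPPOSITE = "opposite of the lane across the line"

# Pavement-marking conventions as quoted from Tran: white lines separate
# same-direction lanes; yellow lines separate opposing traffic flows.
# The string keys are illustrative names, not identifiers from the filings.
LINE_TYPE_RULES = {
    "dashed_white": Direction.SAME,
    "single_white": Direction.SAME,
    "double_white": Direction.SAME,
    "dashed_yellow": Direction.OPPOSITE,
    "solid_yellow": Direction.OPPOSITE,
    "solid_double_yellow": Direction.OPPOSITE,
}

def adjacent_lane_heading(ego_heading_deg: float, line_type: str) -> float:
    """Heading of the lane on the other side of `line_type`, per the rule."""
    if LINE_TYPE_RULES[line_type] is Direction.SAME:
        return ego_heading_deg
    return (ego_heading_deg + 180.0) % 360.0

print(adjacent_lane_heading(90.0, "dashed_white"))         # 90.0 (same)
print(adjacent_lane_heading(90.0, "solid_double_yellow"))  # 270.0 (opposite)
```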
Regarding claim 6, Habib, as modified by Pham and Tran, discloses the method of claim 1, wherein Habib further discloses the machine learning model comprises a plurality of neural network layers (para(s). [0089], recite(s) [0089] “A deep neural network (DNN), also referred to as a convolutional neural network (CNN), is a stacked neural network, which is composed of multiple convolutional layers…”, where “multiple convolutional layers” are a plurality of neural network layers).

Regarding claim 7, Habib, as modified by Pham and Tran, discloses the method of claim 1, wherein Habib further discloses the method of claim 1 further comprising: executing, by the one or more processors, the machine learning model for a second autonomous vehicle (para(s). [0003] and [0095], recite(s) [0003] “With time, vehicles are getting “smarter” by achieving some level of autonomy (e.g., by incorporating perceptive sensor technology and artificial intelligence, or AI) and by cooperation with neighboring vehicles and infrastructure through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication. For example, vehicles on a highway communicate via V2V with each other so every vehicle on the road can be aware of nearby vehicles.” [0095] “In some aspects, a computing device 330 (such as a smartphone or another consumer device) can be used within the level 0 vehicle 302 to perform technique 326 (e.g., for determining road parameters estimation data) and technique 328 (e.g., for determining lane level vehicle localization) as discussed herein. In this regard, by using the computing device 330 to perform techniques 326 and 328, the level 0 vehicle 302 can be upgraded to a higher level vehicle 332 without the need of costly high precision equipment such as the LKAS camera 312, the high precision GPS 318, and the LIDAR 324.”, where the machine learning model being executable by other vehicles on the road, including different levels of autonomous vehicles (e.g., a “level 0” or “higher level” vehicle), is executing the machine learning model for at least a second autonomous vehicle (e.g., a “level 0 vehicle 302” or a “higher level vehicle 332”)).

Regarding claim 8, the claim is the non-transitory memory of claim 1. Therefore, claim 8 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 11, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).

Regarding claim 13, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).

Regarding claim 14, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).

Regarding claim 15, the claim differs from claim 1 in that the claim is in the form of a system. Therefore, claim 15 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 18, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).

Regarding claim 20, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).

Claims 3, 10, and 17 are rejected under 35 U.S.C.
103 as being unpatentable over Habib, as modified by Pham and Tran, as applied to claim(s) 1, 8, and 15 above, and further in view of Halfaoui et al. (Halfaoui; US 2022/0180646 A1, cited in Applicant’s IDS filed March 12th, 2025).

Regarding claim 3, Habib, as modified by Pham and Tran, discloses the method of claim 1, wherein Halfaoui teaches, in the same field of endeavor of determining a total number of lanes for a roadway, the total number of lanes for the roadway is determined using an image recognition or image segmentation protocol (para(s). [0086] and [0091], recite(s) [0086] “According to a further embodiment illustrated in FIG. 7, the processing circuitry of the lane detection system 100 is configured to implement a semantic segmentation network 116, wherein the semantic segmentation network 116 is configured to semantically segment the current image of the multi-lane road and wherein the neural network 109 is configured to determine the current lane of the vehicle in the current image of the multi-lane road on the basis of the left lane ID, i.e. the first candidate current lane, the right lane ID, i.e. the second candidate current lane, and the semantically segmented current image. By understanding the semantic content of the current image this embodiment allows to improve the accuracy of the estimates thanks to an improved detection of semantically relevant objects such as road, cars, lane markings and the like. According to an embodiment, the semantic segmentation network 116 can either be pre-trained or trained simultaneously with the original model 109.” [0091] “For the neural network architecture shown in FIG. 7, a pixel-level image segmentation is performed on the input processed image prior to the lane ID/count estimation…”, where using “semantic segmentation” to determine lane “count estimation” is using at least image segmentation to determine a total number of lanes for the roadway).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Habib, as modified by Pham and Tran, to incorporate using at least image segmentation to determine the total number of lanes for the roadway, to improve the accuracy of the machine learning model in estimating a new total number of lanes for a new roadway by improving detection of semantically relevant objects on the roadway, such as lane markings, as taught by Halfaoui above (see para. [0086] above).

Regarding claim 10, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Regarding claim 17, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Habib, as modified by Pham and Tran, as applied to claim(s) 4, 11, and 15 above, and further in view of Zeng et al. (Zeng; US 2022/0153315 A1).

Regarding claim 5, Habib, as modified by Pham and Tran, discloses the method of claim 4, wherein Habib further discloses a plurality of lane indications of the set of image data are defined at least in part as a feature on a (para(s). [0060], [0061], and [0065]—see citations in claim 4 above—, where the “lane line map” is a layer of the high-definition (HD) map).
Where Habib, as modified by Pham and Tran, does not specifically disclose …feature on a raster layer of the high-definition (HD) map;

Zeng teaches, in the same field of endeavor of HD maps, …feature on a raster layer of the high-definition (HD) map (para(s). [0084], recite(s) [0084] “The map data 404 can be associated with the environment in which the computing system 400 (e.g., an autonomous vehicle) is operating. The map data 404 can include a BEV raster HD map, lane graph, etc. For example, the map data 404 can be indicative of the lanes and associated semantic attributes (e.g., turning lane, traffic light controlled lane, etc.) of the environment. Actors 416A-B can be more likely to follow lanes represented in the map data 404. The map data 404 can help determine the right of way, which can in turn affect the interactions among actors 416A-B.”, where a “BEV [Bird’s-Eye-View] raster HD map” is a raster layer of an HD map and “lanes” are features on the raster layer of the HD map).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Habib, as modified by Pham and Tran, to incorporate the lane line map layer of the HD map as a raster layer of the HD map, to improve defining the plurality of lane indications of the set of image data while still yielding the predictable result of defining the plurality of lane indications of the set of image data.

Regarding claim 12, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Regarding claim 19, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO, whose telephone number is (571) 272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.Z.Y./
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Jun 22, 2023: Application Filed
Aug 05, 2025: Non-Final Rejection — §103
Oct 28, 2025: Interview Requested
Nov 05, 2025: Applicant Interview (Telephonic)
Nov 05, 2025: Examiner Interview Summary
Dec 10, 2025: Response Filed
Mar 18, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
2y 5m to grant • Granted Apr 07, 2026

Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
2y 5m to grant • Granted Mar 24, 2026

Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
2y 5m to grant • Granted Mar 17, 2026

Patent 12562063
METHOD FOR DETECTING ROAD USERS
2y 5m to grant • Granted Feb 24, 2026

Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
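
One plausible reading of how these projections follow from the career stats, consistent with the footnote above: take the career allow rate as the base grant probability and add the interview lift, capping the result below 100%. This is a guess at the methodology, not a documented formula; the 99% cap in particular is an assumption.

```python
# Sketch of a possible projection derivation (assumed methodology; the page
# only states that grant probability is derived from the career allow rate).

career_allow_rate = 47 / 69   # ~68.1% -> displayed "Grant Probability: 68%"
interview_lift = 0.357        # from the examiner-intelligence panel

# Assumed 99% cap keeps the with-interview figure a valid probability.
with_interview = min(career_allow_rate + interview_lift, 0.99)
print(f"base {career_allow_rate:.0%}, with interview {with_interview:.0%}")
# base 68%, with interview 99%
```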
