Prosecution Insights
Last updated: April 19, 2026
Application No. 18/161,954

BUILDING AND TRAINING A LANELET CLASSIFICATION SYSTEM FOR AN AUTONOMOUS VEHICLE

Non-Final OA: §103, §112

Filed: Jan 31, 2023
Examiner: PAIGE, TYLER D
Art Unit: 3664
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: GM Global Technology Operations LLC
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (1166 granted / 1276 resolved), +39.4% vs Tech Center average (above average)
Interview Lift: +8.2% on resolved cases with interview (moderate)
Avg Prosecution: 2y 1m (fast prosecutor); 28 applications currently pending
Career History: 1304 total applications across all art units

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 1276 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to an application filed on 01/31/2023. The applicant submits an Information Disclosure Statement dated 01/31/2023. The applicant does not make a claim for domestic or foreign priority.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Specifically, claims 1, 3, and 6 contain the feature of “higher dimension feature” without defining with particularity the scope of the feature. Dependent claims 3 and 6 state that the feature is defined as a summarization of local attributes of local lanelets, and claim 6 states that the feature is defined over a neighborhood of particular local lanelets. However, the drawings and the specification do not define the scope of these definitions to give one of ordinary skill in the art context as to how the lanelet is identified based upon the features.

Claims 1-8 are further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Specifically, claims 4-6 contain the features of “attention score” and “attention mechanism” without defining in the claims the scope of the features. The specification and drawings do not define the features in a meaningful way such that one of ordinary skill in the art would be able to identify the scope of the features.

Claims 10, 19, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Specifically, claims 10 and 19 contain the feature “completely,” which makes the other claims unclear as to whether the operations are performed in an incomplete state. Also, the claims do not define what the feature “completely” constitutes with respect to training. The question is whether the feature is satisfied based upon manual input or a specific number of data points. Therefore, one of ordinary skill in the art would not know the threshold for satisfying the feature.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6, 7, and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kang US 2019/0095722 in view of Xu US 2019/0266418.

As per claim 1, A lanelet classification system for an autonomous vehicle, the lanelet classification system comprising: (Kang paragraph 0080 discloses, “the driving lane identifying apparatus extracts the road marking including a line and/or a road sign from the input image using, for example, a convolutional neural network (CNN), a deep neural network (DNN), and a support vector machine (SVM) that are trained in advance to recognize the road marking.”) one or more controllers including a classifier having a neural network that classifies lanelets of a lane graph structure based on one or more lane attributes, the one or more controllers executing instructions to build the neural network by: (Kang paragraph 0033 discloses, “the processor may be configured to identify the driving lane of the vehicle based on the relative location in the multi-virtual lane that may include determined based on the number of the lanes on the road.” And paragraph 0102 discloses, “The driving lane identifying apparatus may generate the segmentation image by segmenting the input image into the objects included in the input image by a semantic unit using a classification network” And paragraph 0176 discloses, “The processor 1930 may recognize a change in a driving environment, and identify the driving lane of the vehicle by performing the driving lane identifying method described with reference to FIGS. 1 through 18.
The change in the driving environment may include changes, such as, for example, at least one of a departure from the driving lane by the vehicle, an entry into the driving lane by a nearby vehicle, or a change of the road marking.”) determining a higher dimension feature for a plurality of local lanelets and a subject lanelet, wherein a spatial relationship exists between the subject lanelet and the local lanelets; (Kang paragraph 0104 discloses, “In operation 540, the driving lane identifying apparatus extracts information corresponding to a road marking of a road from the top-view input image,… In operation 550, the driving lane identifying apparatus generates a multi-virtual lane using the information corresponding to the road marking and the top-view segmentation image.”) computing an attention score for each of the local lanelets based on the higher dimension feature, wherein the attention score indicates a weight value that the subject lanelet has on a particular local lanelet; (Xu paragraph 0057 teaches, “In a non-limiting example, the segmentation mask(s) 110 may further represent confidence scores corresponding to a probability of each of the portions of the mask corresponding to potential lanes and/or road boundaries. 
In addition, in some examples, the segmentation mask(s) 110 may further represent confidence scores corresponding to probabilities of each of the portions of the mask corresponding to a certain class of lane marking or road boundary (e.g., a lane marking type and/or a road boundary type).”) determining a normalized shared attention mechanism applicable to all of the local lanelets based on the attention score; (Xu paragraph 0078 teaches, “alternative layers 134 may be used in the convolutional stream(s) 132, such as normalization layers, SoftMax layers, and/or other layer types.” And paragraph 0073 teaches, “The sensor data 102 and/or pre-processed sensor data 106 may be input into a convolutional layer(s) 132 of the convolutional network 108 (e.g., convolutional layer 134A). The convolutional stream 132 may include any number of layers 134, such as the layers 134A-134C. One or more of the layers 134 may include an input layer. The input layer may hold values associated with the sensor data 102 and/or pre-processed sensor data 106.”) computing a transformed feature vector of the local lanelet based on the higher dimension feature and the normalized shared attention mechanism; (Kang paragraph 0080 discloses, “the driving lane identifying apparatus extracts the road marking including a line and/or a road sign from the input image using, for example, a convolutional neural network (CNN), a deep neural network (DNN), and a support vector machine (SVM) that are trained in advance to recognize the road marking.”) and (Xu paragraphs 0073 and 0078 with respect to the normalization of the input data) and fusing the transformed feature vector for each of the local lanelet together to determine a single fused feature vector, wherein the single fused feature vector is input to build a subsequent layer of the neural network. 
(Kang paragraph 0081 discloses, “using the CNN trained with lines and/or road signs of various road images, the driving lane identifying apparatus may extract a line robustly against various situations. The driving lane identifying apparatus may extract the road marking using various machine learning methods in addition to the examples described above.”) and (Xu paragraphs 0073 and 0078 for creating the layers used in the identification) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a score weight value for identifying a lanelet. Xu teaches scoring a weight value for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 2, The lanelet classification system of claim 1, wherein the spatial relationship indicates an upstream, downstream, left, and right relationship between the subject lanelet and the local lanelets. (Xu paragraph 0042 teaches, “The training process (e.g., forward pass computations—backward pass computations—parameter updates) may be reiterated until the trained parameters converge to optimum, desired, or acceptable values.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose detecting spatial relationships. Xu teaches determining spatial relationships. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.
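The upstream, downstream, left, and right relationships recited in claim 2 imply a lane-graph node whose neighborhood N(i) is the union of its four spatial relations. A minimal sketch of such a structure, with all names and attribute choices hypothetical rather than taken from the application:

```python
from dataclasses import dataclass, field

# Hypothetical lanelet node: the four claim-2 spatial relations define
# the neighborhood N(i) used by the attention equations of claims 3-7.
@dataclass
class Lanelet:
    lanelet_id: int
    attributes: list  # illustrative local lane attributes, e.g. width, curvature
    upstream: list = field(default_factory=list)
    downstream: list = field(default_factory=list)
    left: list = field(default_factory=list)
    right: list = field(default_factory=list)

    def neighborhood(self):
        # N(i): IDs of all lanelets with a spatial relationship to this one
        return self.upstream + self.downstream + self.left + self.right

# Three-lanelet fragment: lanelet 0 is upstream of lanelet 1,
# and lanelet 2 sits to the left of lanelet 1
ll0 = Lanelet(0, [3.5, 0.0])
ll1 = Lanelet(1, [3.5, 0.1], upstream=[0])
ll2 = Lanelet(2, [3.2, 0.1])
ll1.left.append(2)
```

Under this reading, the "subject lanelet" of claim 1 is any node and its "local lanelets" are exactly the members of its neighborhood.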
As per claim 3, The lanelet classification system of claim 1, wherein the higher dimension feature is determined based on:

z_i^(l) = W^(l) h_i^(l)

wherein z_i^(l) is the higher dimension feature, h_i^(l) represents a set of node features that represent a summarization of local attributes of the local lanelets and the subject lanelet, and W^(l) represents a weight matrix. (Xu paragraph 0074 teaches, “The convolutional layers may compute the output of neurons that are connected to local regions in an input layer (e.g., the input layer), each neuron computing a dot product between their weights and a small region they are connected to in the input volume.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a weight matrix for identifying a lanelet. Xu teaches a weight matrix for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 4, The lanelet classification system of claim 1, wherein the attention score is computed based on a nonlinear activation function. (Xu paragraph 0115 teaches, “The machine learning model(s) 108 may perform forward pass computations on the original and/or augmented images. In some examples, the machine learning model(s) 108 may extract features of interest from the image(s) and predict a probability of a boundary class, a lane marking class, or another feature class in the images (e.g., on a pixel-by-pixel basis). The loss function 318 may be used to measure loss (e.g., error) in the segmentation mask(s) 110 (e.g., predictions generated by the machine learning model(s) 108) as compared to the ground truth data (e.g., the original and/or augmented labels, annotations, and/or masks).
In one example, a binary cross entropy function may be used as the loss function 318. In any example, backward pass computations may be performed to recursively compute gradients of the loss function with respect to training parameters. In some examples, weight and biases of the machine learning model(s) 108 may be used to compute these gradients.” and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 6, The lanelet classification system of claim 1, wherein the transformed feature vector is determined based on:

h_i*^(l) = σ( Σ_(j ∈ N(i)) a_ij^(l) z_i^(l) )

wherein h_i*^(l) represents the transformed feature vector, z_i^(l) is the higher dimension feature, N(i) represents a neighborhood of the particular local lanelet i, a_ij^(l) represents the normalized shared attention mechanism, and σ represents a nonlinear transform function. (Xu paragraph 0115 and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.
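Claims 3 through 6 together recite a graph-attention style update: project each node feature (claim 3), score each neighbor through a nonlinear activation (claim 4), normalize the scores into a shared attention mechanism (claims 5-6), and apply a nonlinear transform. Read as a standard graph attention layer, this can be sketched as follows; the concatenation-based score, the leaky-ReLU and tanh choices, and summing over the neighbors' projected features z_j are our assumptions, since (as the OA notes) the claims do not define these features:

```python
import math

def matvec(W, h):
    # Claim 3: z = W h, the higher dimension feature
    return [sum(w * x for w, x in zip(row, h)) for row in W]

def leaky_relu(x, slope=0.2):
    # Assumed nonlinear activation for the claim-4 attention score
    return x if x > 0 else slope * x

def softmax(xs):
    # Normalization step yielding the shared attention mechanism a_ij
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_layer(h_subject, h_neighbors, W, a):
    """One attention step over a subject lanelet and its neighborhood N(i).

    W: weight matrix (claim 3); a: shared attention vector (assumed form,
    applied to the concatenation [z_i, z_j] as in a standard GAT layer).
    """
    z_i = matvec(W, h_subject)
    z_js = [matvec(W, h) for h in h_neighbors]
    scores = [leaky_relu(sum(ai * zi for ai, zi in zip(a, z_i + z_j)))
              for z_j in z_js]
    alphas = softmax(scores)
    # Claim 6 transform: sigma applied to the attention-weighted sum
    dim = len(z_i)
    return [math.tanh(sum(al * z_j[d] for al, z_j in zip(alphas, z_js)))
            for d in range(dim)]

# Toy example: identity projection, one subject lanelet, two neighbors
W = [[1.0, 0.0], [0.0, 1.0]]
a_vec = [0.1, 0.2, 0.3, 0.4]
h_star = attention_layer([1.0, 2.0], [[0.5, 0.5], [1.0, 0.0]], W, a_vec)
```

Claim 7's fusion step would then concatenate several such heads (the ||_(k=0)^(3) operator) before weighting by D^(l).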
As per claim 7, The lanelet classification system of claim 1, wherein the single fused feature vector is determined based on:

h_i^(l+1) = D^(l) · ( ||_(k=0)^(3) h_i*^(l) )

wherein h_i^(l+1) represents the single fused feature vector, D^(l) represents a vector having the same length as the single fused feature vector h_i^(l+1), and h_i*^(l) represents the transformed feature vector. (Xu paragraph 0115 and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 9, A lanelet classification system for an autonomous vehicle, the lanelet classification system comprising: one or more controllers including a classifier having a neural network that classifies lanelets of a lane graph structure based on one or more lane attributes, the one or more controllers executing instructions to: (Kang paragraph 0033 discloses, “the processor may be configured to identify the driving lane of the vehicle based on the relative location in the multi-virtual lane that may include determined based on the number of the lanes on the road.” And paragraph 0102 discloses, “The driving lane identifying apparatus may generate the segmentation image by segmenting the input image into the objects included in the input image by a semantic unit using a classification network” And paragraph 0176 discloses, “The processor 1930 may recognize a change in a driving environment, and identify the driving lane of the vehicle by performing the driving lane identifying method described with reference to FIGS. 1 through 18.
The change in the driving environment may include changes, such as, for example, at least one of a departure from the driving lane by the vehicle, an entry into the driving lane by a nearby vehicle, or a change of the road marking.”) receive simulated data, wherein the simulated data is a combination of map data and simulated perception data; (Kang paragraph 0105 discloses, “The driving lane identifying apparatus may obtain the number of the lanes on the road from, for example, global positioning system (GPS) information, map information, and navigation information. In an example, the GPS information, the map information, and the navigation information is directly detected by the driving lane identifying apparatus through a GPS sensor. In another example, the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.”) and (Xu paragraph 0273 teaches, “The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). 
In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning).”) combine the simulated data with manual annotations that label the simulated perception data together to create a ground truth data set; (Xu paragraph 0096 teaches, “the training process, and generation of the training and/or ground truth data, may contribute to increasing the processing speeds for the current system such that lane and road boundary detection may happen in real-time at an acceptable level of accuracy for safe operation of an autonomous vehicle (or other object).”) determine training data by mapping labels of one or more groups of labeled ground truth data points that are part of the ground truth data set to one or more groups of perturbed lane edge points that have been displaced from an original group of labeled ground truth data points to another group of labeled ground truth data points that are part of the ground truth data set; (Kang paragraph 0105 discloses, “The driving lane identifying apparatus may obtain the number of the lanes on the road from, for example, global positioning system (GPS) information, map information, and navigation information. In an example, the GPS information, the map information, and the navigation information is directly detected by the driving lane identifying apparatus through a GPS sensor. 
In another example, the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.”) and (Xu paragraph 0096 teaches, “the training process, and generation of the training and/or ground truth data, may contribute to increasing the processing speeds for the current system such that lane and road boundary detection may happen in real-time at an acceptable level of accuracy for safe operation of an autonomous vehicle (or other object).”) and train the neural network to classify each lanelet of the lane graph structure based on the training data. (Kang paragraph 0101 discloses, “The driving lane identifying apparatus may segment the input image into a plurality of regions using a classifier model that is trained to output a training output from a training image. The classifier model may be, for example, a CNN. For example, the training image may be a color image, and the training output may indicate a region image obtained by segmenting a training input.” And paragraph 0100 discloses, “the driving lane identifying apparatus may generate the segmentation image through a classification network including a convolution layer in several stages and a fully connected layer. While passing through the classification network, the input image may be reduced by 1/32 in size from an original size. For such a pixel-unit dense prediction, the original size may need to be restored.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a score weight value for identifying a lanelet. Xu teaches scoring a weight value for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang.
Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 10, The lanelet classification system of claim 9, wherein the one or more controllers execute instructions to: determine the neural network of the classifier is completely trained; (Xu paragraph 0086 teaches, “The training process may be reiterated until the trained parameters converge to optimum, desired, and/or acceptable values.”) and in response to determining the neural network of the classifier is completely trained, evaluate perception data generated by a plurality of sensors and the map data. (Kang paragraph 0105 discloses, “the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.” And paragraph 0126, “The driving lane identifying method may be performed by transforming an input image into a top-view image. The operations in FIG. 9 may be performed in the sequence and manner as shown, although the order of some operations may be changed or some of the operations omitted without departing from the spirit and scope of the illustrative examples described.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.
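The §112 rejection of claim 10 turns on what threshold makes the network "completely trained." One common reading of Xu's "reiterated until the trained parameters converge" language is a loss-plateau criterion; the tolerance and patience values below are illustrative assumptions, not definitions from the application:

```python
def train_until_converged(step, tol=1e-4, patience=3, max_epochs=1000):
    """Hypothetical 'completely trained' criterion: stop once the loss
    improvement stays below tol for `patience` consecutive epochs.
    `step` runs one training epoch and returns the epoch's loss.
    """
    prev, stale = float("inf"), 0
    loss = None
    for epoch in range(max_epochs):
        loss = step()
        if prev - loss < tol:
            stale += 1
            if stale >= patience:
                return epoch + 1, loss  # converged: "completely trained"
        else:
            stale = 0
        prev = loss
    return max_epochs, loss  # budget exhausted without convergence

# Toy loss sequence that decays geometrically toward zero
losses = iter([1 / 2 ** k for k in range(50)])
epochs, final = train_until_converged(lambda: next(losses))
```

Under this reading, classifier evaluation on live perception data would begin only after the criterion fires, which is one way an applicant might cure the indefiniteness the examiner identifies.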
As per claim 11, The lanelet classification system of claim 9, wherein the one or more controllers execute instructions to: identify an amount of overlap between a particular group of perturbed lane edge points and a particular group of labeled ground truth data points by calculating an intersection-over-union evaluation metric. (Xu paragraph 0096 teaches, “the training process, and generation of the training and/or ground truth data, may contribute to increasing the processing speeds for the current system such that lane and road boundary detection may happen in real-time at an acceptable level of accuracy for safe operation of an autonomous vehicle (or other object).”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 12, The lanelet classification system of claim 11, wherein the one or more controllers execute instructions to: calculate the intersection-over-union evaluation metric based on:

IOU = |P ∈ GT_i ∩ SEG_j| / |P ∈ GT_i ∪ SEG_j|

wherein IOU is the intersection-over-union evaluation metric, SEG_j represents a particular group of perturbed lane edge points, GT_i represents a particular group of labeled ground truth data points, and P represents lane edge points expressed as unique identification (ID) numbers.
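Because the lane edge points P are expressed as unique ID numbers, the claim-12 metric reduces to plain set arithmetic; this reading of the flattened formula is ours, not the Office Action's:

```python
def lanelet_iou(gt_points, seg_points):
    """IOU = |GT_i ∩ SEG_j| / |GT_i ∪ SEG_j| over point-ID sets.

    gt_points: labeled ground truth lane edge point IDs (GT_i)
    seg_points: perturbed lane edge point IDs (SEG_j)
    """
    gt, seg = set(gt_points), set(seg_points)
    union = gt | seg
    return len(gt & seg) / len(union) if union else 0.0

# A perturbed group sharing 3 of its 4 point IDs with the ground truth:
# intersection 3, union 5, so IOU = 0.6
iou = lanelet_iou([101, 102, 103, 104], [102, 103, 104, 105])
```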
(Kang paragraph 0144 discloses, “However, when the calibration information of the camera is not obtained in advance, the driving lane identifying apparatus may discover points on two parallel lines in an input image, and obtain approximate calibration information using an actual distance and a pixel distance between the discovered points. Such values, for example, the actual distance between the points on the two parallel lines, may be obtained because lines are parallel to one another in a general road environment and a width between the lines follows a road regulation.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 13, The lanelet classification system of claim 9, wherein the one or more controllers execute instructions to: introduce noise to the simulated data to create noisy lanelet training samples. (Xu paragraph 0055 teaches, “Where noise reduction is employed by the sensor data pre-processor 104, it may include bilateral denoising in the Bayer domain. Where demosaicing is employed by the sensor data pre-processor 104, it may include bilinear interpolation. Where histogram computing is employed by the sensor data pre-processor 104, it may involve computing a histogram for the C channel, and may be merged with the decompanding or noise reduction in some examples.
Where adaptive global tone mapping is employed by the sensor data pre-processor 104, it may include performing an adaptive gamma-log transform.”)

As per claim 14, The lanelet classification system of claim 13, wherein the noise is modeled based on a variance profile of an error in a lane edge of the lane graph structure and a covariance. (Xu paragraph 0085 teaches, “the region based weighted loss function may result in back-propagation of more error at further distances during training, thereby reducing the error in predictions by the machine learning model(s) 108 at further distances during deployment of the machine learning model(s) 108.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 15, The lanelet classification system of claim 14, wherein the error in the lane edge is modeled as a Gaussian Process, and wherein a kernel function models a spatial correlation of the error between two lane edge points. (Xu paragraph 0086 teaches, “In one example, an Adam optimizer may be used, while in other examples, stochastic gradient descent, or stochastic gradient descent with a momentum term, may be used. The training process may be reiterated until the trained parameters converge to optimum, desired, and/or acceptable values.” and paragraph 0115 and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet.
Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 16, The lanelet classification system of claim 14, wherein a Matern 3/2 kernel function determines the variance profile. (Xu paragraph 0086 teaches, “In one example, an Adam optimizer may be used, while in other examples, stochastic gradient descent, or stochastic gradient descent with a momentum term, may be used. The training process may be reiterated until the trained parameters converge to optimum, desired, and/or acceptable values.” and paragraph 0115 and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 17, The lanelet classification system of claim 14, wherein the covariance is determined based on:

Σ = S^(1/2) C S^(1/2)

wherein Σ is the covariance, C is a correlation matrix, and S is a diagonal scale matrix. (Xu paragraph 0115 and equation 00001) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang.
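Claims 14 through 17 describe perturbing lane edges with correlated Gaussian noise: a Matern 3/2 kernel supplies the spatial correlation C, and the diagonal scale matrix S holds the per-point variance profile, combined as Σ = S^(1/2) C S^(1/2). A minimal sketch of that construction, assuming arc-length positions for the lane edge points and illustrative scale values:

```python
import math

def matern32(d, length_scale=1.0):
    # Matern 3/2 kernel (claim 16): k(d) = (1 + sqrt(3) d/l) * exp(-sqrt(3) d/l)
    r = math.sqrt(3.0) * d / length_scale
    return (1.0 + r) * math.exp(-r)

def covariance(points, scales, length_scale=1.0):
    """Claim 17 construction: Sigma = S^(1/2) C S^(1/2).

    points: assumed arc-length positions of lane edge points
    scales: per-point variances forming the diagonal of S (claim 14's
    variance profile); both inputs are illustrative assumptions.
    """
    n = len(points)
    # C: correlation matrix from the kernel over pairwise distances
    C = [[matern32(abs(points[i] - points[j]), length_scale)
          for j in range(n)] for i in range(n)]
    s_half = [math.sqrt(s) for s in scales]
    # (S^1/2 C S^1/2)_ij = sqrt(s_i) * C_ij * sqrt(s_j)
    return [[s_half[i] * C[i][j] * s_half[j] for j in range(n)]
            for i in range(n)]

# Error variance growing with distance along the lane edge
sigma = covariance(points=[0.0, 1.0, 2.0], scales=[0.1, 0.2, 0.4])
```

Sampling from a zero-mean Gaussian with this Σ would then yield the spatially correlated perturbations applied to the ground-truth lane edge points.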
Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 18, The lanelet classification system of claim 9, wherein the one or more controllers execute instructions to: introduce occlusion features to the simulated data, wherein the occlusion features represent an occluded region of a roadway that the autonomous vehicle is traveling along. (Kang paragraph 0075 discloses, “A road used herein refers to a road on which vehicles travel and includes various types of roads, for example, an expressway, a national highway, a local road, a national expressway, a by-lane, a toll road, frontage road, side road, a driveway, a Limited access grade-separated highway, and a limited-access road.”)

As per claim 19, A lanelet classification system for an autonomous vehicle, the lanelet classification system comprising: a plurality of sensors collecting perception data indicative of an environment surrounding the autonomous vehicle; (Kang paragraph 0073 discloses, “The capturing device may include devices, such as, for example, a mono camera, a vision sensor, and an image sensor. Other capturing devices, such as, for example light detection and ranging (LiDAR) may be used without departing from the spirit and scope of the illustrative examples described.
The input image may be an image captured by the capturing device included in the driving lane identifying apparatus or by other devices.”)

and one or more controllers in electronic communication with the plurality of sensors, wherein the one or more controllers include a classifier having a neural network that classifies lanelets of a lane graph structure based on one or more lane attributes, the one or more controllers executing instructions to: (Kang paragraph 0033 discloses, “the processor may be configured to identify the driving lane of the vehicle based on the relative location in the multi-virtual lane that may include determined based on the number of the lanes on the road.” And paragraph 0102 discloses, “The driving lane identifying apparatus may generate the segmentation image by segmenting the input image into the objects included in the input image by a semantic unit using a classification network” And paragraph 0176 discloses, “The processor 1930 may recognize a change in a driving environment, and identify the driving lane of the vehicle by performing the driving lane identifying method described with reference to FIGS. 1 through 18. The change in the driving environment may include changes, such as, for example, at least one of a departure from the driving lane by the vehicle, an entry into the driving lane by a nearby vehicle, or a change of the road marking.”)

receive simulated data, wherein the simulated data is a combination of map data and simulated perception data; (Kang paragraph 0105 discloses, “The driving lane identifying apparatus may obtain the number of the lanes on the road from, for example, global positioning system (GPS) information, map information, and navigation information. In an example, the GPS information, the map information, and the navigation information is directly detected by the driving lane identifying apparatus through a GPS sensor.
In another example, the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.”) and (Xu paragraph 0273 teaches, “The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning).”)

combine the simulated data with manual annotations that label the simulated perception data together to create a ground truth data set; introduce noise and occlusion features to the simulated data to create noisy lanelet training samples; (Xu paragraph 0055 teaches, “Where noise reduction is employed by the sensor data pre-processor 104, it may include bilateral denoising in the Bayer domain. Where demosaicing is employed by the sensor data pre-processor 104, it may include bilinear interpolation. Where histogram computing is employed by the sensor data pre-processor 104, it may involve computing a histogram for the C channel, and may be merged with the decompanding or noise reduction in some examples.
Where adaptive global tone mapping is employed by the sensor data pre-processor 104, it may include performing an adaptive gamma-log transform.”)

determine training data by mapping labels of one or more groups of labeled ground truth data points that are part of the ground truth data set to one or more groups of perturbed lane edge points of the noisy lanelet training samples that have been displaced from an original group of labeled ground truth data points to another group of labeled ground truth data points that are part of the ground truth data set; (Kang paragraph 0105 discloses, “The driving lane identifying apparatus may obtain the number of the lanes on the road from, for example, global positioning system (GPS) information, map information, and navigation information. In an example, the GPS information, the map information, and the navigation information is directly detected by the driving lane identifying apparatus through a GPS sensor. In another example, the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.”) and (Xu paragraph 0096 teaches, “the training process, and generation of the training and/or ground truth data, may contribute to increasing the processing speeds for the current system such that lane and road boundary detection may happen in real-time at an acceptable level of accuracy for safe operation of an autonomous vehicle (or other object).”)

train the neural network to classify each lanelet of the lane graph structure based on the training data and the noisy lanelet training samples; (Kang paragraph 0101 discloses, “The driving lane identifying apparatus may segment the input image into a plurality of regions using a classifier model that is trained to output a training output from a training image. The classifier model may be, for example, a CNN.
For example, the training image may be a color image, and the training output may indicate a region image obtained by segmenting a training input.” And paragraph 0100 discloses, “the driving lane identifying apparatus may generate the segmentation image through a classification network including a convolution layer in several stages and a fully connected layer. While passing through the classification network, the input image may be reduced by 1/32 in size from an original size. For such a pixel-unit dense prediction, the original size may need to be restored.”)

determine the neural network of the classifier is completely trained; (Xu paragraph 0086 teaches, “The training process may be reiterated until the trained parameters converge to optimum, desired, and/or acceptable values.”) and (Kang paragraph 0105 discloses, “the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.” And paragraph 0126, “The driving lane identifying method may be performed by transforming an input image into a top-view image. The operations in FIG. 9 may be performed in the sequence and manner as shown, although the order of some operations may be changed or some of the operations omitted without departing from the spirit and scope of the illustrative examples described.”)

and in response to determining the neural network of the classifier is completely trained, evaluate perception data generated by a plurality of sensors and the map data. (Kang paragraph 0105 discloses, “The driving lane identifying apparatus may obtain the number of the lanes on the road from, for example, global positioning system (GPS) information, map information, and navigation information.
In an example, the GPS information, the map information, and the navigation information is directly detected by the driving lane identifying apparatus through a GPS sensor. In another example, the GPS information, the map information, and the navigation information is received from a map database, or provided by a sensor, a database, or other electronic devices disposed outside the driving lane identifying apparatus.”) and (Xu paragraph 0096 teaches, “the training process, and generation of the training and/or ground truth data, may contribute to increasing the processing speeds for the current system such that lane and road boundary detection may happen in real-time at an acceptable level of accuracy for safe operation of an autonomous vehicle (or other object).”)

Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a score weight value for identifying a lanelet. Xu teaches scoring a weight value for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 20, The lanelet classification system of claim 19, wherein the one or more controllers execute instructions to: identify an amount of overlap between a particular group of perturbed lane edge points and a particular group of labeled ground truth data points by calculating an intersection-over-union evaluation metric. (Kang paragraph 0144 discloses, “However, when the calibration information of the camera is not obtained in advance, the driving lane identifying apparatus may discover points on two parallel lines in an input image, and obtain approximate calibration information using an actual distance and a pixel distance between the discovered points.
Such values, for example, the actual distance between the points on the two parallel lines, may be obtained because lines are parallel to one another in a general road environment and a width between the lines follows a road regulation.”)

Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet. Xu teaches a function for identifying a lanelet. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kang US 2019/0095722 in view of Xu US 2019/0266418, further in view of He US 2022/0076447.

As per claim 5, The lanelet classification system of claim 1, wherein the normalized shared attention mechanism is determined based on a normalization function that sums to 1. (He paragraph 0203 teaches, “The pooling stage 920 uses a pooling function that replaces the output of the convolutional layer 906 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet or normalizing to a sum of 1. Xu teaches a function for identifying a lanelet. He teaches a summation that includes the sum of 1. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. and He et al. into the invention of Kang.
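On claim 5's "normalization function that sums to 1": in attention mechanisms this is conventionally implemented as a softmax. A minimal sketch follows; the scores are illustrative assumptions and the construction is not taken from He.

```python
import math

def softmax(scores):
    """Normalize raw attention scores so the weights sum to 1 (numerically stable)."""
    m = max(scores)                        # subtract the max to avoid overflow
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

weights = softmax([2.0, 1.0, 0.1])         # higher score -> larger weight
```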
Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

As per claim 8, The lanelet classification system of claim 1, wherein the neural network of the classifier is a graph attention network. (He paragraph 0437 teaches, “The node classification model may include but not limited to GNN (Graph Neural Network) based node classification model. The GNN based node classification model may include GAT (Graph Attention Networks), GraphSAGE and GraphSAINT.”) Kang discloses a method and apparatus for identifying a driving lane through a neural network. Kang does not disclose a function for identifying a lanelet or a graph attention network. Xu teaches a function for identifying a lanelet. He teaches a graph attention network used in an autonomous vehicle. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Xu et al. and He et al. into the invention of Kang. Such incorporation is motivated by the need to ensure accurate detection as an autonomous vehicle traverses a lane marking.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER D PAIGE, whose telephone number is (571) 270-5425. The examiner can normally be reached M-F 7:00am-6:00pm (MST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson, can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER D PAIGE/
Primary Examiner, Art Unit 3664
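For reference on the intersection-over-union evaluation metric recited in claim 20 above, a minimal set-based sketch appears below. Treating point identifiers as stand-ins for the overlapping groups of lane edge points is an illustrative simplification, not a computation taken from either cited reference.

```python
def iou(predicted, ground_truth):
    """Intersection-over-union of two collections of point identifiers."""
    a, b = set(predicted), set(ground_truth)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two of six distinct points overlap, so IoU = 2/6.
score = iou([1, 2, 3, 4], [3, 4, 5, 6])
```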

Prosecution Timeline

Jan 31, 2023
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §112
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597357
AUTOMATIC AIRCRAFT TAXIING
2y 5m to grant Granted Apr 07, 2026
Patent 12592102
OPERATION DATA SUPPORT SYSTEM FOR INDUSTRIAL MACHINERY
2y 5m to grant Granted Mar 31, 2026
Patent 12586424
DRIVING DIAGNOSIS DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586425
RARE EVENT DETECTION SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12579849
DETECTING AN UNUSUAL OPERATION OF A VEHICLE OUTSIDE OF A TIME FENCE AND NOTIFYING NEIGHBORING VEHICLES
2y 5m to grant Granted Mar 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
91%
Grant Probability
99%
With Interview (+8.2%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 1276 resolved cases by this examiner. Grant probability derived from career allow rate.
