Prosecution Insights
Last updated: April 19, 2026
Application No. 18/603,078

LANE GRAPH GENERATION USING NEURAL NETWORKS

Final Rejection under §103
Filed: Mar 12, 2024
Examiner: WEI, XIAOMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%
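For context on how the "with interview" figure might relate to the base number, here is a minimal sketch assuming a simple additive model (an assumption; the dashboard's actual methodology is not disclosed). Note that a purely additive lift saturates at 100% for these inputs, so the 99% shown is evidently produced by a more careful estimator:

```python
def with_interview(base: float, lift: float) -> float:
    """Additive interview-lift model, capped at 100%.

    `base` and `lift` are fractions (e.g. 0.82, 0.261). This is an
    illustrative assumption, not the dashboard's disclosed methodology.
    """
    return min(base + lift, 1.0)

# Figures from this page: 82% base grant probability, +26.1% interview lift.
print(with_interview(0.82, 0.261))  # saturates at 1.0, vs. the 99% shown
```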

Examiner Intelligence

Career Allow Rate: 82% (28 granted / 34 resolved; +20.4% vs TC avg), above average
Interview Lift: +26.1% in resolved cases with an interview versus without, a strong lift
Avg Prosecution: 2y 5m typical timeline; 24 applications currently pending
Total Applications: 58 career applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 83.6% (+43.6% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 34 resolved cases.
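The per-statute deltas above can be sanity-checked in a few lines, assuming "vs TC avg" is a simple percentage-point difference (an assumption; the comparison basis is not stated on this page):

```python
# Rejection-statute rates shown above, paired with their stated deltas
# versus the Tech Center average estimate (all as fractions).
rates = {
    "§101": (0.071, -0.329),
    "§103": (0.836, +0.436),
    "§102": (0.044, -0.356),
    "§112": (0.022, -0.378),
}

# If each delta is rate minus the TC average, the implied average is:
implied_tc_avg = {s: round(rate - delta, 3) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)
```

Interestingly, every implied average lands at 40.0%, which would be consistent with the chart comparing against a single common Tech Center baseline rather than per-statute averages.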

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Office Action is in response to Applicant’s amendment filed 01/26/2026, which has been entered and made of record. Claims 1, 15 and 18 have been amended. No claim has been newly added. Claims 1-20 are pending in the application. The claim interpretation under 35 U.S.C. 112(f) has been withdrawn based on the amendments of claims 15 and 18.

Response to Arguments

Applicant’s arguments, filed 01/26/2026, with respect to the rejection(s) under 35 U.S.C. 103 have been fully considered, but they are not persuasive. Applicant argues that Anastassov, Liu and Kaku, taken individually or in combination, do not teach the previous claim language of neural networks with decoders, or the newly amended limitations of cross-sections and the connections between the cross-sections in the newly amended independent claims. Examiner respectfully disagrees.

First, as to the previously claimed limitation of neural networks with decoders: Liu does teach a neural network with decoders. In Figure 2 on page 3, a transformer decoder is used as a map element detector; as shown in Figure 2, the input to the neural network is generated from Lidar points and images as cell data. The prior art of Kaku also teaches a neural network with decoders in paragraph [0031]: “the decoder 306 or 308 perform a 1d convolutional process from 16-to-8 channels using a kernel size three and padding one for particular applications. Here, although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values.” Kaku further teaches an encoder that outputs features for the decoder based on lateral slices in paragraph [0030]: “After further maxpooling, the encoder 304 extracts features about a lane boundary (LB) and a road boundary (RB) of a lane for decoding”.

Second, as to the newly amended limitation of one or more cross-sections of the one or more lanes associated with the one or more cells: Kaku teaches the decoder outputting road boundary and lane boundary values as the cross-sections of the cell in paragraph [0031]: “although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values. …… Another process for 1-to-1 uses a 1d convolutional process for outputting confidence values and boundary placements for a RB and LB per lateral slice.”

Finally, as to the newly amended limitation of and one or more connections between the one or more cross-sections: Kaku teaches an estimation system that links the lane boundaries as the connections between cross-sections in paragraph [0039]: “At 540, the estimation system 170 generates a map by linking lane boundaries along the road edges. ……For example, the confidence values and the boundary positions per lateral slice derived from decoding are associated with potential lane boundaries. The estimation system 170 can identify relationships between lane characteristics that satisfy a threshold for an inverse distance and feature clarity along a road edge.”

Applicant further argues: Claims 7-9, 13-15 and 17-20 depend on independent claims 1, 15 and 18, and Anastassov combined with Liu does not teach the newly amended independent claims. Claims 2-4, 6 and 16 depend on independent claims 1 and 15, and Anastassov combined with Liu and Kaku does not teach the newly amended independent claims. Claim 5 depends on independent claim 1, and Anastassov combined with Liu, Kaku and Chen does not teach the newly amended independent claims. Claims 10-11 depend on independent claim 1, and Anastassov combined with Liu, Kaku and Pham does not teach the newly amended independent claims. Examiner respectfully disagrees; please refer to the replies above for a detailed explanation.

Conclusions

The rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below. New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by the Applicant's amendments to the claims. Therefore, the present Office Action is made final.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-9, 13-20 are rejected under 35 U.S.C.
103 as being unpatentable over Anastassov (US 20220161817 A1), hereinafter Anastassov, in view of NPL Liu et al. (“VectorMapNet: End-to-end Vectorized HD Map Learning”), hereinafter Liu, further in view of Kaku et al. (US 20250146835 A1), hereinafter Kaku.

Regarding claim 1, Anastassov teaches A method (Anastassov paragraph [0007] “a method comprises retrieving probe data collected from one or more sensors of one or more probe devices traveling within a geographic area including at least one geographic partition.”) comprising:

generating, for each cell of one or more cells of a grid representing a region of an environment (Anastassov paragraph [0038] “FIG. 2A is a diagram illustrating an example geo-spatial partitioning scheme, according to one embodiment. The diagram 200 shows an area of interest (e.g., a Washington D.C. map 201) divided into partitions 203 for the purposes of distributed/parallelized processing……By way of example, a partition 203m containing a portion of Washington D.C. borderline 201n is divided into grid cells 205”), a cell representation indicating one or more points that correspond with the cell and that represent corresponding sensor detections generated by a plurality of ego-machines in the environment (Anastassov teaches the probe data as the cell representation, paragraph [0008] “cause the apparatus to retrieve probe data collected from one or more sensors of one or more probe devices traveling within a geographic area including at least one geographic partition.” and paragraph [0036] “the system 100 can prepare probe data into seed points, then continue with the seed points (without the probe data)”);

generating, based at least on applying the cell representation for at least one of the one or more cells to ……, lane data indicating one or more lanes associated with the one or more cells (Anastassov paragraph [0065] “In one embodiment, in step 409, the road segment module 305 can create at least one continuous road path based on the density maxima locations for the plurality of grid cells.” and paragraph [0066] “the seed points 225 created in step 407 (e.g., FIGS. 2B-2C) can be used to create continuous road paths represented by polylines 1-6 in FIG. 2D.”);

…… and generating, based at least on the lane data, a lane graph that represents the one or more lanes on one or more roads in the environment (Anastassov paragraph [0082] “In one embodiment, in step 411, the output module 309 can include the at least one continuous road path in an output representing a base map of the geographic area. In other embodiments, the base map further includes connections, intersections, splits/merges, etc. as a graph.”).
Anastassov does not explicitly teach …… one or more decoders of one or more neural networks …… one or more cross-sections of the one or more lanes associated with the one or more cells, and one or more connections between the one or more cross-sections.

Liu teaches …… one or more decoders of one or more neural networks …… (Liu teaches a transformer decoder for the map element detector and multiple transformer decoder layers for the polyline generator. Page 3, top: “Figure 2: The network architecture of VectorMapNet. The top row is the pipeline of VectorMapNet generating polylines from raw sensor inputs. The bottom row illustrates detailed structures and inference procedures of three primary components of VectorMapNet: BEV feature extractor, map element detector, and polyline generator.”; page 6, left column, last paragraph: “To model these local geometric structures of polylines, the autoregressive network we choose is Transformer (Vaswani et al., 2017) (see the bottom-right of Figure 2)…… Each polyline’s keypoint coordinates and class label are tokenized and fed in as the query inputs of the transformer decoder.”).

Anastassov and Liu are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu page 9, right column, fourth paragraph: “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov to achieve accuracy and efficiency.
Anastassov in view of Liu does not explicitly teach …… one or more cross-sections of the one or more lanes associated with the one or more cells, and one or more connections between the one or more cross-sections.

Kaku teaches …… one or more cross-sections of the one or more lanes associated with the one or more cells (Kaku teaches using lateral slices as cells for input to an encoder, and further teaches a neural decoder, based on output from the encoder, that generates lane boundary and road boundary indications as the cross-sections, paragraphs [0030-0031]: “the encoder 304 extracts features about a lane boundary (LB) and a road boundary (RB) of a lane for decoding …… the decoder 306 or 308 perform a 1d convolutional process from 16-to-8 channels using a kernel size three and padding one for particular applications. Here, although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values…… Another process for 1-to-1 uses a 1d convolutional process for outputting confidence values and boundary placements for a RB and LB per lateral slice.”), and one or more connections between the one or more cross-sections (Kaku teaches connecting road boundaries based on the confidence value, paragraph [0039]: “At 540, the estimation system 170 generates a map by linking lane boundaries along the road edges. In one approach, the estimation system 170 links lane boundaries individually along road edges using heuristics. For example, the confidence values and the boundary positions per lateral slice derived from decoding are associated with potential lane boundaries. The estimation system 170 can identify relationships between lane characteristics that satisfy a threshold for an inverse distance and feature clarity along a road edge.”).

Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data. Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017]: “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” and paragraph [0029]: “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency.

Regarding claim 2, Anastassov in view of Liu and Kaku teach The method of claim 1, and further teach wherein the one or more decoders comprise a cross-section decoder that outputs one or more indications of cross-sections of the one or more lanes associated with the one or more cells (Kaku teaches using lateral slices as cells, and further teaches a neural decoder, based on the lateral slices, that outputs lane boundary indications, paragraph [0005]: “the neural model includes a decoder that computes confidence values and boundary placements for the lane boundaries using a histogram of the aggregated features…...the estimation system generates a map with updated and fuller lane boundaries by processing sliced data individually and linking slices, thereby improving the accuracy and efficiency of generating maps”). Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data.
Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017]: “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” and paragraph [0029]: “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency.

Regarding claim 3, Anastassov in view of Liu and Kaku teach The method of claim 2, and further teach wherein generating the lane graph comprises aggregating at least two cross-sections associated with a bin in the region (Kaku paragraphs [0016-0017]: “the histogram can aggregate and compress features with reduced dimensions through bins that are each associated with a lateral slice, thereby improving efficiency……This can involve counting compressed data within the bins for relevancy and correlation of features. In one approach, the estimation system automatically recombines the lane boundaries individually along the road edges for generating a map, such as by merging lateral slices being adjacent that have defined features……the estimation system selects the features using the neural model by factoring a distance between compressed data in the bins and the relationship with the lane boundary.”).

Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data. Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017]: “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” and paragraph [0029]: “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency.

Regarding claim 4, Anastassov in view of Liu and Kaku teach The method of claim 2, and further teach wherein generating the lane graph comprises stitching at least two cross-sections together at least based on proximity or orientation of the at least two cross-sections relative to one another (Kaku paragraph [0016]: “the estimation system automatically recombines the lane boundaries individually along the road edges for generating a map, such as by merging lateral slices being adjacent that have defined features.”). Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data.
Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017]: “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” and paragraph [0029]: “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency.

Regarding claim 6, Anastassov in view of Liu and Kaku teach The method of claim 2, and further teach wherein the one or more decoders comprise a connection decoder to connect at least a first portion of a lane and a second portion of the lane (Kaku paragraph [0032]: “the estimation system 170 can link lane boundaries individually along road edges heuristically using the confidence values and the boundary positions outputted per lateral slice. This can include identifying relationships between lane characteristics that satisfy a threshold for an inverse distance and feature clarity along a road edge. For example, two end lateral slices have a dashed line with elevated confidence values with a middle lateral slice that is adjacent and includes missing paint. As such, the estimation system 170 can reliably merge the lateral slices together using a dashed line across three lateral slices if within the threshold for confidence and position.” and paragraph [0031]: “although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values……Another process for 1-to-1 uses a 1d convolutional process for outputting confidence values and boundary placements for a RB and LB per lateral slice. Such boundary positions use an inverse distance between the RB/LB and inferred features that the neural model 300 assembles into a map.”).

Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data. Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017]: “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” and paragraph [0029]: “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency.
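As an aside for readers outside the art: the linking heuristic the examiner repeatedly cites from Kaku [0032] and [0039] (merging adjacent lateral slices whose confidence values and boundary positions fall within thresholds) can be sketched in a few lines. Everything here, including the function name, threshold values, and data shape, is an illustrative assumption, not part of the prosecution record:

```python
def link_slices(slices, conf_thresh=0.5, dist_thresh=2.0):
    """Link per-slice lane-boundary detections into one boundary polyline.

    Hypothetical sketch of the heuristic Kaku [0039] describes: each slice
    is a (confidence, lateral_position) pair, and a slice is linked when its
    confidence and its distance to the previously linked position satisfy
    the thresholds.
    """
    linked = []
    for conf, pos in slices:
        if conf < conf_thresh:
            continue  # low-confidence slice (e.g., missing paint) is skipped
        if linked and abs(pos - linked[-1]) > dist_thresh:
            continue  # too far from the previous linked boundary point
        linked.append(pos)
    return linked

# Two confident end slices bridge a low-confidence middle slice, mirroring
# Kaku's dashed-line-with-missing-paint example quoted above:
print(link_slices([(0.9, 1.0), (0.2, 5.0), (0.8, 1.2)]))  # [1.0, 1.2]
```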
Regarding claim 7, Anastassov in view of Liu and Kaku teach The method of claim 1, and further teach wherein the one or more decoders comprise an edge decoder that outputs one or more indications of Bezier curves or polyline parameterizations associated with edges of the one or more lanes (Liu teaches an edge decoder in the polyline generator. Page 5, right column, third paragraph: “the polyline generator focuses on the detailed geometry of HD map, which entails calculating variable-length polyline vertices and their order.” and page 6, right column, first paragraph: “Each polyline’s keypoint coordinates and class label are tokenized and fed in as the query inputs of the transformer decoder. Then a sequence of vertex tokens are fed into the transformer iteratively, integrating BEV features with cross-attention, and decoded as polyline vertices.”).

Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu page 9, right column, fourth paragraph: “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov and Kaku to achieve accuracy and efficiency.
Regarding claim 8, Anastassov in view of Liu and Kaku teach The method of claim 7, and further teach wherein generating the lane graph includes projecting the one or more indications of the Bezier curves or the polyline parameterizations to a map view of the environment (Anastassov teaches a polyline of route 1117 on a map view of the environment. Figure 11, paragraph [0122]: “a user interface (UI) 1100 (e.g., a navigation application 113) is generated for a UE 111 (e.g., a mobile device, an embedded navigation system, a client terminal, etc.) that includes a map 1101, …… However, the system 100 determines an optimum route 1109 which nevertheless involves a mobile obstacle 1111 (e.g., a street cleaning vehicle) and a work area 1113, and shows an alert: “Warning! Work Areas Detected Along Route.” In response to an input 1115 of “Show Alternative Route,” the UI 1100 presents an alternative route 1117.”).

Regarding claim 9, Anastassov in view of Liu and Kaku teach The method of claim 8, and further teach wherein generating the lane graph further includes stitching the one or more indications of the Bezier curves or the polyline parameterizations together across neighboring regions in the map view of the environment (Anastassov paragraph [0081]: “The most common connection type in across-tiles merging is a segment A of Tile 1 terminated by a node of valence 1 can be connected to segment B of Tile 2 starting at a node of valence 1 in FIG. 7A. The arrows indicate the direction of travel. In this case, a combined polyline is created in FIG. 7B, defining a new map segment A+B associated with Tile 2”).
Regarding claim 13, Anastassov in view of Liu and Kaku teach The method of claim 1, and further teach wherein each cell representation comprises a set of point representations including a set of attributes values for attributes associated with the one or more points that correspond with the cell (Anastassov paragraph [0047]: “the sensor data includes probe data may be reported as probes, which are individual data records collected at a point in time that records telemetry data for that point in time. A probe point can include attributes such as: (1) source ID, (2) longitude, (3) latitude, (4) elevation, (5) heading, (6) speed, (7) time, and (8) access type. A source/probe can be a vehicle, a drone, a user device travelling with the vehicle, etc. Probe data can be used to define probe (e.g., a vehicle) travel paths, count numbers of contributing vehicles, forming “drives” by a location point (together with time information), etc.”).

Regarding claim 14, Anastassov in view of Liu and Kaku teach The method of claim 1, and further teach wherein the method is performed by at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system for performing digital twin operations; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for generating synthetic data; or a system implemented at least partially using cloud computing resources (Anastassov paragraph [0047]: “the system 100 can process sensor data from one or more vehicles 103a-103n (also collectively referred to as vehicles 103) (e.g., standard vehicles, autonomous vehicles, heavily assisted driving (HAD) vehicles, semi-autonomous vehicles, etc.).” and paragraph [0119]: “the machine learning system 125 can continuously provide and/or update a machine learning model (e.g., a support vector machine (SVM), neural network, decision tree, etc.) during training using, for instance, supervised deep convolution networks or equivalents.”).

Regarding claim 15, Anastassov teaches One or more processors comprising one or more circuits to (Anastassov paragraphs [0162-0163]: “FIG. 13 illustrates a computer system 1300 upon which an embodiment of the invention may be implemented. Computer system 1300 is programmed (e.g., via computer program code or instructions) to create a base map and identify special areas as described herein……One or more processors 1302 for processing information are coupled with the bus 1310.”):

generating, for each cell of one or more cells of a grid representing a region of an environment (Anastassov paragraph [0038] “FIG. 2A is a diagram illustrating an example geo-spatial partitioning scheme, according to one embodiment. The diagram 200 shows an area of interest (e.g., a Washington D.C. map 201) divided into partitions 203 for the purposes of distributed/parallelized processing……By way of example, a partition 203m containing a portion of Washington D.C. borderline 201n is divided into grid cells 205”), a cell representation indicating one or more points that correspond with the cell and that represent corresponding sensor detections generated by a plurality of ego-machines in the environment (Anastassov teaches the probe data as the cell representation, paragraph [0008] “cause the apparatus to retrieve probe data collected from one or more sensors of one or more probe devices traveling within a geographic area including at least one geographic partition.” and paragraph [0036] “the system 100 can prepare probe data into seed points, then continue with the seed points (without the probe data)”);

generating, based at least on applying the cell representations for the one or more cells to ……, lane data indicating one or more lanes associated with the one or more cells (Anastassov paragraph [0065] “In one embodiment, in step 409, the road segment module 305 can create at least one continuous road path based on the density maxima locations for the plurality of grid cells.” and paragraph [0066] “the seed points 225 created in step 407 (e.g., FIGS. 2B-2C) can be used to create continuous road paths represented by polylines 1-6 in FIG. 2D.”);

…… and generating, based at least on the lane data, a lane graph that represents the one or more lanes on one or more roads in the environment (Anastassov paragraph [0082] “In one embodiment, in step 411, the output module 309 can include the at least one continuous road path in an output representing a base map of the geographic area. In other embodiments, the base map further includes connections, intersections, splits/merges, etc. as a graph.”).
Anastassov does not explicitly teach …… one or more decoders of a transformer machine learning model …… one or more cross-sections the one or more lanes associated with the one or more cells, and one or more connections between the one or more cross-sections…… Liu teaches …… one or more decoders of a transformer machine learning model……(Liu teaches a transformer decoder for the map element detector and multiple transformer decoder layers for polyline generator. Page 3, top “Figure 2: The network architecture of VectorMapNet. The top row is the pipeline of VectorMapNet generating polylines from raw sensor inputs. The bottom row illustrates detailed structures and inference procedures of three primary components of VectorMapNet: BEV feature extractor, map element detector, and polyline generator.”, page 6, left column, last paragraph, “To model these local geometric structures of polylines, the autoregressive network we choose is Transformer (Vaswani et al., 2017) (see the bottom-right of Figure 2)…… Each polyline’s keypoint coordinates and class label are tokenized and fed in as the query inputs of the transformer decoder.”). Anastassov and Liu are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu Page 9, Right Column, fourth paragraph, “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassove to achieve accuracy and efficiency. 
Anastassov in view of Liu fails to teach …… one or more cross-sections the one or more lanes associated with the one or more cells, and one or more connections between the one or more cross-sections…… Kaku teaches …… one or more cross-sections the one or more lanes associated with the one or more cells (Kaku teaches using lateral slices as cells for input to an encoder, and further teaches a neural decoder based on output from the encoder to generate lane boundary and road boundary indications as the cross-sections, paragraph [0030-0031] “the encoder 304 extracts features about a lane boundary (LB) and a road boundary (RB) of a lane for decoding …… the decoder 306 or 308 perform a 1d convolutional process from 16-to-8 channels using a kernel size three and padding one for particular applications. Here, although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values…… Another process for 1-to-1 uses a 1d convolutional process for outputting confidence values and boundary placements for a RB and LB per lateral slice.”), and one or more connections between the one or more cross-sections…… (Kaku teaches connecting road boundaries based on the confidence value paragraph [0039] “At 540 , the estimation system 170 generates a map by linking lane boundaries along the road edges. In one approach, the estimation system 170 links lane boundaries individually along road edges using heuristics. For example, the confidence values and the boundary positions per lateral slice derived from decoding are associated with potential lane boundaries. The estimation system 170 can identify relationships between lane characteristics that satisfy a threshold for an inverse distance and feature clarity along a road edge.”); Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. 
Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017] “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” And paragraph [0029] “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency. Regarding claim 16, claim 16 has similar limitations as claim 2, therefore it is rejected under the same rationale as claim 2. 
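Kaku's quoted decoder parameters (a 1-D convolution from 16 to 8 channels with kernel size three and padding one) can be sketched directly; the point of pairing kernel three with padding one is that each lateral slice keeps its length while the channel count is reduced. The sketch below is an illustrative reimplementation of that arithmetic, not code from the reference:

```python
# Illustrative sketch (not Kaku's actual implementation): a 1-D convolution
# mapping 16 input channels to 8 output channels with kernel size 3 and
# padding 1, which preserves the length of each lateral slice.

def conv1d(x, weights, padding=1):
    """x: [in_ch][length]; weights: [out_ch][in_ch][k]. Returns [out_ch][length]."""
    in_ch, length = len(x), len(x[0])
    out_ch, k = len(weights), len(weights[0][0])
    # zero-pad each channel so the output keeps the input length when k == 2*padding + 1
    padded = [[0.0] * padding + row + [0.0] * padding for row in x]
    out = []
    for oc in range(out_ch):
        row = []
        for t in range(length):
            s = 0.0
            for ic in range(in_ch):
                for j in range(k):
                    s += weights[oc][ic][j] * padded[ic][t + j]
            row.append(s)
        out.append(row)
    return out

# 16-to-8 channels, kernel size 3, padding 1, as quoted from Kaku [0031].
x = [[float(i + c) for i in range(5)] for c in range(16)]       # 16 channels, length 5
w = [[[0.1] * 3 for _ in range(16)] for _ in range(8)]          # 8 output channels
y = conv1d(x, w, padding=1)
assert len(y) == 8 and len(y[0]) == 5   # channels reduced 16 -> 8, length preserved
```

In a deployed model this would of course be a learned layer in a deep-learning framework; the hand-rolled loop is only there to make the shape bookkeeping in the quoted passage explicit.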
Regarding claim 17, Anastassov in view of Liu and Kaku teach The one or more processors of claim 15, and further teach wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Anastassov paragraph [0047] “the system 100 can process sensor data from one or more vehicles 103a-103n (also collectively referred to as vehicles 103) (e.g., standard vehicles, autonomous vehicles, heavily assisted driving (HAD) vehicles, semi-autonomous vehicles, etc.).”, paragraph [0119] “the machine learning system 125 can continuously provide and/or update a machine learning model (e.g., a support vector machine (SVM), neural network, decision tree, etc.) during training using, for instance, supervised deep convolution networks or equivalents.”). 
Regarding claim 18, Anastassov teaches A system comprising: one or more circuits to (Anastassov paragraph [0019] “FIG. 1 is a diagram of a system capable of creating a base map and identifying special areas, according to one embodiment.”): generate, based at least on applying at least one cell representation indicating one or more points corresponding with a cell of a region and representing sensor detections generated by a plurality of ego-machines in an environment (Anastassov teaches the probe data as the cell representation, paragraph [0008] “cause the apparatus to retrieve probe data collected from one or more sensors of one or more probe devices traveling within a geographic area including at least one geographic partition.” And paragraph [0036] “the system 100 can prepare probe data into seed points, then continue with the seed points (without the probe data)” and paragraph [0038] “FIG. 2A is a diagram illustrating an example geo-spatial partitioning scheme, according to one embodiment. The diagram 200 shows an area of interest (e.g., a Washington D.C. map 201) divided into partitions 203 for the purposes of distributed/parallelized processing……By way of example, a partition 203m containing a portion of Washington D.C. borderline 201n is divided into grid cells 205”) ……, lane data indicating one or more lanes associated with the cell of the region (Anastassov paragraph [0065] “In one embodiment, in step 409, the road segment module 305 can create at least one continuous road path based on the density maxima locations for the plurality of grid cells.” And paragraph [0066] “the seed points 225 created in step 407 (e.g., FIGS. 2B-2C) can be used to create continuous road paths represented by polylines 1-6 in FIG. 
2D.”); …… and generate, based at least on the lane data, a lane graph that represents the one or more lanes on one or more roads in the environment (Anastassov paragraph [0082] “In one embodiment, in step 411, the output module 309 can include the at least one continuous road path in an output representing a base map of the geographic area. In other embodiments, the base map further includes connections, intersections, splits/merges, etc. as a graph.”). Anastassov does not explicitly teach ……to one or more decoders of one or more neural networks …… one or more cross-sections of the one or more lanes associated with the cell of the regions, and one or more connections between the one or more cross-sections…… Liu teaches ……to one or more decoders of one or more neural networks……(Liu teaches a transformer decoder for the map element detector and multiple transformer decoder layers for polyline generator. Page 3, top “Figure 2: The network architecture of VectorMapNet. The top row is the pipeline of VectorMapNet generating polylines from raw sensor inputs. The bottom row illustrates detailed structures and inference procedures of three primary components of VectorMapNet: BEV feature extractor, map element detector, and polyline generator.”, page 6, left column, last paragraph, “To model these local geometric structures of polylines, the autoregressive network we choose is Transformer (Vaswani et al., 2017) (see the bottom-right of Figure 2)…… Each polyline’s keypoint coordinates and class label are tokenized and fed in as the query inputs of the transformer decoder.”). Anastassov and Liu are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. 
Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu Page 9, Right Column, fourth paragraph, “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov to achieve accuracy and efficiency. Anastassov in view of Liu fails to teach …… one or more cross-sections of the one or more lanes associated with the cell of the regions, and one or more connections between the one or more cross-sections…… Kaku teaches …… one or more cross-sections of the one or more lanes associated with the cell of the regions (Kaku teaches using lateral slices as cells for input to an encoder, and further teaches a neural decoder based on output from the encoder to generate lane boundary and road boundary indications as the cross-sections, paragraph [0030-0031] “the encoder 304 extracts features about a lane boundary (LB) and a road boundary (RB) of a lane for decoding …… the decoder 306 or 308 perform a 1d convolutional process from 16-to-8 channels using a kernel size three and padding one for particular applications. 
Here, although the neural model 300 illustrates implementing two decoders, the neural model 300 could implement a 2-depth layer for outputting the RB and LB values…… Another process for 1-to-1 uses a 1d convolutional process for outputting confidence values and boundary placements for a RB and LB per lateral slice.”), and one or more connections between the one or more cross-sections (Kaku teaches connecting road boundaries based on the confidence value paragraph [0039] “At 540 , the estimation system 170 generates a map by linking lane boundaries along the road edges. In one approach, the estimation system 170 links lane boundaries individually along road edges using heuristics. For example, the confidence values and the boundary positions per lateral slice derived from decoding are associated with potential lane boundaries. The estimation system 170 can identify relationships between lane characteristics that satisfy a threshold for an inverse distance and feature clarity along a road edge.”); Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Kaku teaches using lateral slices with a decoder to achieve accuracy and efficiency (Kaku paragraph [0017] “the estimation system improves the definition of lane boundaries and reduces computation costs by slicing road data that allows simpler geometric modeling and map generation.” And paragraph [0029] “slicing data has benefits because segments can solve a locally optimizable problem repeatedly. Slicing also improves solving disconnections or merging among inputs that impact decoding and improving global inferences as detection accuracy among lateral slices increases. For FIG. 3, the neural model 300 may select channels from lateral slices having keypoints that will expand detection areas and improve accuracy by applying various factoring.”). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaku with the method of Anastassov and Liu to achieve accuracy and efficiency. Regarding claim 19, Anastassov in view of Liu and Kaku teach The system of claim 18, and further teach wherein the one or more decoders comprise a cross- section decoder, a connection decoder, an edge decoder, or a combination thereof (Liu teaches an edge decoder in the polyline generator. Page 5, right column, third paragraph, “the polyline generator focuses on the detailed geometry of HD map, which entails calculating variable-length polyline vertices and their order.” And Page 6, right column, first paragraph, “Each polyline’s keypoint coordinates and class label are tokenized and fed in as the query inputs of the transformer decoder. Then a sequence of vertex tokens are fed into the transformer iteratively, integrating BEV features with cross-attention, and decoded as polyline vertices.”). Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu Page 9, Right Column, fourth paragraph, “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov and Kaku to achieve accuracy and efficiency. 
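The linking heuristic quoted from Kaku paragraph [0039] — keep boundary candidates whose confidence clears a threshold, then chain them across adjacent lateral slices when the lateral gap is small — might be sketched as follows. The function name, thresholds, and nearest-chain rule are editorial assumptions, not Kaku's code:

```python
# Hypothetical sketch of the slice-linking heuristic described in Kaku [0039]:
# per-slice boundary candidates are filtered by confidence, then each kept
# candidate extends the nearest open chain within a distance threshold
# (inverse distance recast as a maximum gap), otherwise it starts a new chain.

def link_boundaries(slices, min_conf=0.5, max_gap=1.0):
    """slices: list (front-to-back) of per-slice candidates [(lateral_pos, confidence), ...].
    Returns chains of lateral positions, one chain per linked boundary."""
    chains = []
    for candidates in slices:
        extended = set()  # chains already grown in this slice (one candidate per chain)
        for pos, conf in candidates:
            if conf < min_conf:
                continue  # low-confidence detections are dropped
            best_i, best_gap = None, max_gap
            for i, chain in enumerate(chains):
                gap = abs(chain[-1] - pos)
                if i not in extended and gap <= best_gap:
                    best_i, best_gap = i, gap
            if best_i is not None:
                chains[best_i].append(pos)
                extended.add(best_i)
            else:
                chains.append([pos])
                extended.add(len(chains) - 1)
    return chains

# Two boundaries persist across slices; the low-confidence detection is ignored.
chains = link_boundaries([[(0.0, 0.9), (3.0, 0.9)],
                          [(0.2, 0.8), (3.1, 0.7)],
                          [(0.1, 0.2)]])
assert chains == [[0.0, 0.2], [3.0, 3.1]]
```

This is the sense in which per-slice "cross-sections" plus a linking rule yield the claimed "connections between the cross-sections" on the examiner's mapping.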
Regarding claim 20, Anastassov in view of Liu and Kaku teach The system of claim 18, and further teach wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Anastassov paragraph [0047] “the system 100 can process sensor data from one or more vehicles 103a-103n (also collectively referred to as vehicles 103) (e.g., standard vehicles, autonomous vehicles, heavily assisted driving (HAD) vehicles, semi-autonomous vehicles, etc.).”, paragraph [0119] “the machine learning system 125 can continuously provide and/or update a machine learning model (e.g., a support vector machine (SVM), neural network, decision tree, etc.) during training using, for instance, supervised deep convolution networks or equivalents.”). Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anastassov (US 20220161817 A1), hereinafter as Anastassov, in view of NPL Liu et al. (“VectorMapNet: End-to-end Vectorized HD Map Learning”), hereinafter as Liu, further in view of Kaku et al. (US 20250146835 A1), hereinafter as Kaku, and Chen et al. 
(US 20240067207 A1), hereinafter as Chen. Regarding claim 5, Anastassov in view of Liu and Kaku teach The method of claim 2, but fail to teach further comprising using one or more trajectories to connect at least a first portion of a lane and a second portion of the lane. Chen teaches further comprising using one or more trajectories to connect at least a first portion of a lane and a second portion of the lane (Chen teaches using historical trajectory data in neural-network-based lane boundary detection, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the decoder of Kaku, paragraph [0026] “lane-boundary detection training system 200 can communicate with other computing systems and local or remote databases to acquire image data 230 and historical vehicle trajectory data 235 for use in training a neural-network-based model to detect roadway lane boundaries.”). Anastassov, Liu, Kaku and Chen are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Chen teaches using historical vehicle trajectory data to improve performance of detecting lane boundaries based on sensor data (Chen paragraph [0013] “First, during the training of a neural network to detect lane boundaries based on image input, historical vehicle trajectory data is used as weak supervision to improve the performance of the resulting trained network. Second, the trained network, when deployed in a semi-autonomous or autonomous vehicle (inference or test time), can continue to use historical vehicle trajectory data as an input to improve performance, particularly when HD map data is unavailable or when weather conditions (e.g., heavy rain or snow) interfere with the ability of the system to detect lane boundaries based on sensor (image) data.”). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the method of Anastassov, Liu and Kaku to achieve accuracy and efficiency. Claim(s) 10 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anastassov (US 20220161817 A1), hereinafter as Anastassov, in view of NPL Liu et al. (“VectorMapNet: End-to-end Vectorized HD Map Learning”), hereinafter as Liu, further in view of Kaku et al. (US 20250146835 A1), hereinafter as Kaku, and Pham et al. (US 20200341466 A1), hereinafter as Pham. Regarding claim 10, Anastassov in view of Liu and Kaku teach The method of claim 1, and further teach wherein the one or more decoders comprise an edge decoder that autoregressively outputs one or more indications of keypoints along one or more centerlines of the one or more lanes (Liu Page 6, Left Column, Last Paragraph, “Architecture. To model these local geometric structures of polylines, the autoregressive network we choose is Transformer (Vaswani et al., 2017) (see the bottom-right of Figure 2).”, Page 4, right column, first paragraph, “we divide the task into three distinct components: (1) A BEV feature extractor (§ 3.2) that lifts various sensor modality inputs into a canonical feature space. (2) A map element detector (§ 3.3) that locates and classifies all map elements by predicting element keypoints A = {Ai ∈ R k×2 |i = 1, . . . , N} and their class labels L = {li ∈ Z|i = 1, . . . , N}. The definition of element keypoint representation A is described in § 3.3. (3) A polyline generator (§ 3.4) that produces a sequence of ordered polyline vertices which describes the local geometry of each detected map element (Ai , li).”, and Page 8, Right Column, First paragraph, “Centerline prediction by VectorMapNet. 
As discussed in § 3.1 and above, the polyline is a versatile primitive, capable of representing map element classes that extend beyond the elements in the HD semantic map setting. To further demonstrate this flexibility, we expand VectorMapNet to predict the centerline, an imaginary line commonly used as a reference for driving direction, vehicle positioning, and navigation.”). Anastassov, Liu and Kaku are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu Page 9, Right Column, fourth paragraph, “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov and Kaku to achieve accuracy and efficiency. Anastassov in view of Liu and Kaku fail to teach and one or more indications of offsets to lane edges of the one or more lanes. 
Pham teaches and one or more indications of offsets to lane edges of the one or more lanes (Pham teaches a decoder outputting the width of a lane corresponding to the key points as the offset to the lane edges, paragraph [0026] “The sensor data may be applied to a neural network (e.g., a deep neural network (DNN), such as a convolutional neural network (CNN)) that is trained to identify areas of interest pertaining to road markings, road boundaries, intersections……More specifically, the neural network may be designed to compute key points corresponding to segments of an intersection (e.g., corresponding to lanes, bike paths, etc., and/or corresponding to features therein—such as cross walks, intersection entry points, intersection exit points, etc.), and to generate outputs identifying, for each key point, a width of a lane corresponding to the key point, a directionality of the lane”). Anastassov, Liu, Kaku and Pham are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Pham teaches using a machine learning network to decide lanes and crosswalks in order to achieve accuracy and efficiency for road map (Pham paragraph [0007] “As a result of using live perception to generate an understanding of each intersection, the process of generating paths for navigating the intersection may be comparatively less time-consuming, less computationally intense, and more scalable as the system may learn to diagnose each intersection in real-time or near real-time, without requiring prior experience or knowledge of the intersection.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Pham with the method of Anastassov in view of Liu and Kaku to achieve accuracy and efficiency of road map. 
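The claim-10 mapping combines centerline keypoints (Liu) with per-keypoint lane-width offsets (Pham); claim 11 then projects both into a map view. One natural geometric reading of that combination is offsetting each keypoint perpendicular to the local heading by half the width to recover the lane edges. The sketch below is an editorial illustration under that reading, not code from any cited reference:

```python
# Hypothetical illustration: given centerline keypoints and per-keypoint lane
# widths (as in Pham's description), derive left/right lane edges by offsetting
# along the unit normal to the local heading. Names and geometry are assumptions.
import math

def edges_from_centerline(keypoints, widths):
    """keypoints: [(x, y), ...] along a centerline (at least two points);
    widths: lane width per keypoint. Returns (left_edge, right_edge)."""
    left, right = [], []
    for i, (x, y) in enumerate(keypoints):
        # heading from this keypoint toward the next; the last point reuses
        # the previous segment's heading
        j = min(i, len(keypoints) - 2)
        dx = keypoints[j + 1][0] - keypoints[j][0]
        dy = keypoints[j + 1][1] - keypoints[j][1]
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm      # unit normal to the heading
        half = widths[i] / 2.0
        left.append((x + nx * half, y + ny * half))
        right.append((x - nx * half, y - ny * half))
    return left, right

# Straight centerline along x with a 3-unit lane width: edges sit at y = +/-1.5.
kps = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
left, right = edges_from_centerline(kps, [3.0, 3.0, 3.0])
assert left[0] == (0.0, 1.5) and right[0] == (0.0, -1.5)
```

Stitching such keypoint/offset outputs across neighboring regions is precisely the limitation the examiner indicates as allowable in claim 12.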
Regarding claim 11, Anastassov in view of Liu, Kaku and Pham teach The method of claim 10, and further teach wherein generating the lane graph includes projecting the one or more indications of the keypoints and the one or more indications of the offsets to a map view of the environment (Liu teaches showing a map of the lane graph with the green boundary line and gray centerline. Each vector represents the direction of key points. Page 7, Right Column, Figure 5, “Figure 5: The centerline predictions by VectorMapNet, where the gray lines are the predicted centerlines.” And Page 5, Figure 3, “The arrow line indicates the direction of the example polyline, and the arrow dash lines indicate the vertices order of keypoint representations.”). Anastassov, Liu, Kaku and Pham are in the same field of endeavor, namely computer graphics, especially in the field of road map generation based on sensor data. Liu teaches using VectorMapNet to generate the HD semantic map in order to achieve accuracy and efficiency (Liu Page 9, Right Column, fourth paragraph, “Our experiments show that VectorMapNet can generate coherent and complex geometries for urban map elements, benefiting from the polyline primitives. We believe that this novel way to learn HD maps provides a new perspective on the HD semantic map learning problem.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Anastassov, Kaku and Pham to achieve accuracy and efficiency. Allowable Subject Matter Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 12, the closest prior art of Pham teaches deciding center key points, left and right edge key points, further teaches connecting pair of key points together to generate a path (paragraph [0078] “The heat map(s) 108 may be used by the decoder to determine locations of the center key points, left key points, and/or right key points corresponding to a center of a lane, a left edge of a lane, or a right edge of a lane, respectively.” And paragraph [0025] “Computer vision and/or machine learning algorithm(s) may be trained to detect the key points of an intersection, and the (center) key points may be connected together—using one or more filters—to generate paths and/or trajectories for the vehicle to effectively and accurately navigate the intersection.”). However, Pham fails to teach the combined limitation as a whole, “wherein generating the lane graph further includes stitching the one or more indications of the keypoints and the one or more indications of the offsets across neighboring regions in the map view of the environment”. Furthermore, no prior art of record either alone or in combination teaches the above limitation as a whole. Therefore, claim 12 is considered allowable. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI whose telephone number is (571)272-3831. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /XIAOMING WEI/ Examiner, Art Unit 2611 /KEE M TUNG/ Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Mar 12, 2024
Application Filed
Oct 22, 2025
Non-Final Rejection — §103
Jan 12, 2026
Interview Requested
Jan 22, 2026
Examiner Interview Summary
Jan 22, 2026
Applicant Interview (Telephonic)
Jan 26, 2026
Response Filed
Feb 08, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603064
CIRCUIT AND METHOD FOR VIDEO DATA CONVERSION AND DISPLAY DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12597246
METHOD AND APPARATUS FOR GENERATING ADVERSARIAL PATCH
2y 5m to grant Granted Apr 07, 2026
Patent 12597175
Avatar Creation From Natural Language Description
2y 5m to grant Granted Apr 07, 2026
Patent 12586280
TECHNIQUES FOR GENERATING DUBBED MEDIA CONTENT ITEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12586318
METHOD AND APPARATUS FOR LABELING ROAD ELEMENT, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+26.1%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
