Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,004

MACHINE LEARNING BASED LANDMARK PERCEPTION FOR LOCALIZATION IN AUTONOMOUS SYSTEMS AND APPLICATIONS

Final Rejection (§102, §103)
Filed: Feb 17, 2023
Examiner: DOUGLAS, SHANE EMANUEL
Art Unit: 3665
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Nvidia Corporation
OA Round: 4 (Final)
Grant Probability: 17% (At Risk)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 4m
Grant Probability with Interview: 39%

Examiner Intelligence

Career Allow Rate: 17% (2 granted / 12 resolved; -35.3% vs Tech Center average). This examiner grants only 17% of cases.
Interview Lift: +22.2% for resolved cases with an interview versus without, a strong lift.
Typical Timeline: 2y 4m average prosecution; 44 applications currently pending.
Career History: 56 total applications across all art units.

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§102: 30.3% (-9.7% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

This action is in response to the amendments and remarks filed on 11/13/2025. Claims 1-20 are considered in this office action. Claims 5, 9, 11, and 18 have been amended. Claims 1-20 are pending examination. Applicant's amendment necessitated new grounds of rejection; therefore, claims 1-20 are rejected.

Response to Arguments

Applicant presents the following arguments regarding the previous office action: (A) against the rejection of claim 1 under 35 U.S.C. 102, that Chen's confidence values can only be produced by classification and that Chen is not enabling; and (B) against the rejection of the claims under 35 U.S.C. 103, that Chen teaches away and therefore should not be available for use in any obviousness rejection, and that Chen does not disclose lidar.

Regarding argument A: Applicant's arguments have been fully considered but are not persuasive. Claim 1 recites generating, using one or more neural networks and based at least on first data representing a view, second data representing one or more classifications of one or more candidate anchor points of parametric curves fitted to detected landmarks corresponding to the view, and third data representing the parametric curves. Chen anticipates these limitations because Chen discloses inputting an image (the view data) into a lane line detection network and outputting a central point thermodynamic diagram having confidence values over the view, selecting candidate points from that diagram based on confidence thresholds, and then determining a parametric curve representing each lane line using control points derived from the network outputs. Under BRI, the thermodynamic diagram constitutes the "second data" that represents classifications corresponding to the view. The thresholded central points constitute the "candidate anchor points" because they are the anchor locations from which the system determines the corresponding curve representation of each detected landmark. Chen further discloses determining Bezier curves that represent lane lines, which meets the claim's third data representing the one or more parametric curves.

Applicant's reliance on Chen's background discussion is misplaced. Even if Chen discusses and criticizes certain prior techniques in the background, this does not negate what Chen affirmatively teaches in its operative disclosure and embodiments. Anticipation is determined by whether the reference discloses each and every limitation as arranged in the claim, not by whether the reference praises or criticizes alternative approaches. The operative steps and network outputs disclosed by Chen describe a process that produces a confidence map over the view, selects candidate points based on confidence, and produces fitted parametric curves representing the detected lane landmarks. Therefore, the claim limitations are disclosed in the Chen reference.

Applicant's argument that Chen's confidence values can only be produced by classification is also not persuasive. Even if Chen labels a component as a "regression" network or trains it using regression losses, the claim does not require any particular training method or loss function. The claim requires that the generated second data represent classifications of candidate anchor points.
The heatmap, or thermodynamic diagram, of confidence values is reasonably interpreted as classification data because it assigns a likelihood or score over locations in the view for the presence of a landmark center. Chen uses those confidence values to select candidate points by thresholding. Thus, Chen's disclosure meets the classification requirement under BRI regardless of whether Chen internally characterizes the predictor as regression.

Applicant's enablement assertions are likewise not persuasive. Chen does not merely name the subject matter; rather, Chen describes the network outputs (the thermodynamic diagram), the selection of candidate points via confidence thresholding, and the generation of Bezier curves to represent lane lines. This level of detail is sufficient for a person of ordinary skill in the art to practice the claimed subject matter without undue experimentation, and therefore Chen is enabling for the claimed subject matter. Accordingly, the rejection of claim 1 under 35 U.S.C. 102 as anticipated by Chen is maintained. For the same reasons, independent claims 11 and 18, and the claims depending therefrom, remain anticipated by Chen.

Regarding argument B: With respect to Applicant's assertion that Chen teaches away and therefore should not be available for use in any obviousness rejection, teaching away does not eliminate reliance on a reference in a 103 rejection, and a reference's discussion of the disadvantages of certain prior art techniques does not render the reference unusable in a combination where the rejection is based on the reference's affirmative teachings. Furthermore, Chen's background discussion critiques particular implementations of segmentation or instance segmentation and certain polynomial parameter regressions; Chen does not criticize the use of confidence outputs to select candidate points followed by fitting parametric curves to represent lane lines. Rather, Chen affirmatively teaches generating a confidence map over the view. Accordingly, Chen does not teach away from the claimed subject matter, and even if Chen's background discussion were construed as discouraging a particular alternative approach, that would at most be a factor to weigh and would not preclude an obviousness rejection where the combination is supported by a reasoned rationale and yields predictable results.

Regarding Applicant's argument asserting that Chen does not disclose lidar, the office agrees that Chen is directed toward processing image data. However, the 103 rejection infra does not require Chen by itself to disclose the lidar or a projected representation of lidar data. As set forth in the rejection, the lidar sensing and the projected representation of lidar data are taught by Smolyanskiy; Chen is relied upon for the neural-network-based generation of a confidence representation corresponding to the view, candidate point selection, and parametric curve representations corresponding to detected lane landmarks. A person of ordinary skill in the art would have been motivated to apply Chen's known neural network output structure to the projected representation of lidar data taught by Smolyanskiy because both are view-like grid representations used for perceiving lane and road structures, and substituting one known input representation for another in a known perception architecture to obtain the same type of structured landmark output constitutes a predictable use of prior art elements according to their established functions. Accordingly, the 35 U.S.C. 103 rejection is maintained.
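To make the disputed mapping concrete, here is a minimal sketch of the pipeline the rejection attributes to Chen: threshold a confidence map (the "second data") to select candidate anchor points, then decode a cubic Bezier curve (the "third data") from control points associated with each anchor. The function names, grid size, and offset values are illustrative assumptions, not Chen's actual implementation.

```python
import numpy as np

def select_anchor_points(heatmap, threshold=0.5):
    # Keep every cell whose confidence clears the threshold; these are
    # the candidate anchor points, ordered by descending confidence.
    rows, cols = np.where(heatmap >= threshold)
    scores = heatmap[rows, cols]
    order = np.argsort(-scores)
    return list(zip(rows[order], cols[order], scores[order]))

def cubic_bezier(control_points, num_samples=50):
    # Evaluate a cubic Bezier curve from four (x, y) control points.
    p = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

# Toy example: one confident landmark center, plus regressed offsets
# from that anchor to four control points.
heatmap = np.zeros((8, 8))
heatmap[2, 3] = 0.9
anchor_r, anchor_c, _ = select_anchor_points(heatmap)[0]
offsets = np.array([[0, 0], [5, 2], [10, 4], [15, 6]], dtype=float)
curve = cubic_bezier(np.array([anchor_r, anchor_c]) + offsets)  # (50, 2) polyline
```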
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 6, 12, 14-15, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (CN114694109B).

Regarding claim 1, Chen discloses a method comprising: determining, using sensor data obtained using at least one sensor, first data representing a view of at least a portion of an environment (Abstract, inputting an image to be detected into a lane line detection network, and acquiring output information of the lane line detection network); and generating, using one or more Neural Networks (NNs) (Detailed Description, Paragraph 3, with the development of the deep neural network, a plurality of lane line detection methods based on the deep network appear), and based at least on the first data, second data representing one or more classifications of one or more candidate anchor points (Detailed Description, Paragraph 3, such as segmentation or instance segmentation based methods based on anchor point (anchor) target detection) of one or more parametric curves fitted to one or more detected landmarks corresponding to the view (Detailed Description, Paragraph 3, these methods indirectly express lane lines based on a segmentation map or a detection of a large number of points, the global expressiveness is not enough, and lane line detection may fail under the condition of occlusion or extreme weather. Some methods express lane lines by polynomial curves, and model lane line curves by using parameters of deep network regression), and third data representing the one or more parametric curves (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line).
Regarding claim 6, Chen discloses the method of claim 1, further comprising decoding the one or more parametric curves fitted to the one or more detected landmarks based at least on: identifying one or more anchor points of the one or more parametric curves based at least on thresholding one or more channels of the second data (Detailed Description, respectively determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Detailed Description, n central points may be screened from the predicted central point thermodynamic diagram according to the confidence; specifically, a point with the highest region confidence and greater than or equal to a certain threshold may be selected as the central point).

Regarding claim 12, Chen discloses the processor of claim 11, wherein the one or more processing units are further to generate the classification data and the shape regression data based at least on jointly predicting: the classification data representing the one or more classifications of the one or more candidate anchor points using a classification head of the one or more NNs (Disclosure of Invention, the first regression network is used for predicting the central point thermodynamic diagram based on the characteristic information of the image to be detected), and the shape regression data representing the one or more regressed parametric curves using a shape regression head of the one or more NNs (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Disclosure of Invention, the second regression network is used for predicting the position deviation between the central point and the control point based on the characteristic information of the image to be detected).

Regarding claim 14, Chen discloses the processor of claim 11, wherein the one or more processing units are further to decode the one or more regressed parametric curves fitted to the one or more detected landmarks based at least on: identifying one or more anchor points of the one or more regressed parametric curves based at least on the classification data (Abstract, a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Detailed Description, the lane line detection device provided by the embodiment of the invention predicts the central point thermodynamic diagram by using the lane line detection network … and the control point related to the central point can be used for determining the Bezier curve of the lane line corresponding to the central point), and identifying one or more parameters for the one or more regressed parametric curves (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line), based at least on a portion of the shape regression data corresponding to the one or more anchor points (Detailed Description, determining the second Bezier curve based on the new control point).
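The "jointly predicting" structure at issue in claims 12 and 14 (one shared trunk feeding a classification head and a shape-regression head) can be sketched in a few lines. This is a hypothetical PyTorch illustration, not the applicant's or Chen's network; the module name and channel counts are assumptions, with eight regression channels chosen to mirror Chen's four control-point deviations in x and y.

```python
import torch
import torch.nn as nn

class JointLandmarkHeads(nn.Module):
    # One shared trunk; two sibling 1x1-conv heads predicted jointly:
    # per-cell anchor-point confidence logits and per-cell curve parameters.
    def __init__(self, in_ch=64, num_classes=1, num_params=8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.cls_head = nn.Conv2d(64, num_classes, kernel_size=1)
        self.reg_head = nn.Conv2d(64, num_params, kernel_size=1)

    def forward(self, feats):
        z = self.trunk(feats)
        return self.cls_head(z), self.reg_head(z)

feats = torch.randn(1, 64, 32, 32)                  # backbone features
cls_logits, shape_params = JointLandmarkHeads()(feats)
print(cls_logits.shape, shape_params.shape)         # (1,1,32,32), (1,8,32,32)
```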
Regarding claim 15, Chen discloses the processor of claim 11, wherein the one or more processing units are further to decode the one or more regressed parametric curves fitted to the one or more detected landmarks based at least on: identifying one or more anchor points of the one or more regressed parametric curves (Abstract, a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Detailed Description, the lane line detection device provided by the embodiment of the invention predicts the central point thermodynamic diagram by using the lane line detection network … and the control point related to the central point can be used for determining the Bezier curve of the lane line corresponding to the central point), based at least on thresholding one or more channels of the classification data (Detailed Description, respectively determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Detailed Description, n central points may be screened from the predicted central point thermodynamic diagram according to the confidence; specifically, a point with the highest region confidence and greater than or equal to a certain threshold may be selected as the central point).

Regarding claim 19, Chen discloses the system of claim 18, wherein the one or more processors are further to generate the second data and the third data based at least on jointly predicting: the second data representing the one or more classifications of the one or more candidate anchor points using a classification head of the one or more NNs, and the third data representing the one or more parametric curves using a shape regression head of the one or more NNs (Cai, 0030, the framework may specifically include a neural network, a feature fusion model, a confidence level adjustment model, a prediction head model, and a predicted lane line integration model. The neural network is configured to extract a feature from an input to-be-detected image after being trained) … (Cai, 0095, for each grid Gij, the prediction head model predicts a group of offsets Δxijz and an endpoint location (that is, a location at which a remote end of a lane line disappears on a feature map), where Δxijz is a horizontal distance between a real lane line on the ground and a vertical line passing through an anchor point) … (Cai, 0166, it should be noted that the regions, the sub-regions, and the like shown in FIG. 23 and FIG. 24 are illustrated by using rectangular regions. In some implementations of this disclosure, shapes of the regions or the sub-regions are not limited. For example, the regions or the sub-regions may be circular regions, oval regions, trapezoidal regions, or even irregular regions, provided that a function of dividing the first predicted lane line can be implemented in the regions).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 10-11, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Smolyanskiy (US20210150230A1).

Regarding claim 2, Chen discloses the method of claim 1 as discussed supra. Additionally, Smolyanskiy, who is in the same field of endeavor of neural-network-based lidar perception, discloses that generating the second and third data comprises fusing first extracted features generated in the view using a first input head of the one or more NNs with a projected representation of second extracted features generated in a second view of the environment using a second input head of the one or more NNs (Smolyanskiy, Paragraph 0056, Lines 3-10, the machine learning model(s) 408 may include separate feature extractors in multiple stages chained together to sequentially process data from multiple views of a 3D environment. For example, the machine learning model(s) 408 may include a first stage with a first feature extractor configured to extract classification data from an image with a first view of the environment (e.g., a perspective view), and the output of the first feature extractor may be transformed to a second view of the environment (e.g., a top down view) and fed into a second feature extractor) … (Smolyanskiy, Paragraph 0056, Lines 13-17, additionally or alternatively, multiple images may be generated with different views, each image may be fed into separate side-by-side feature extractors, and the latent space tensors output by the separate feature extractors may be combined to form classification data and/or object instance data).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the two disclosures. This would serve to resolve discrepancies in autonomous driving perception, as the two disclosures use compatible tensors and deep-learning frameworks and invite adding specialized output heads. Integrating the proven spline regressor onto the existing fused feature backbone would yield higher-precision landmark curves. Additionally, using parametric curves would be advantageous because they provide a compact, differentiable representation that can be learned by neural networks, and they are less sensitive to noise and variations in LiDAR data compared to raw point clouds. Justification for combining the two disclosures comes not only from the state of the art but from Chen (Final Paragraph, the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention).
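Claim 2's fusion limitation (two input heads, one per view, with the second view's features projected into the first view and fused) can be sketched as below. This is a minimal assumed illustration, not Smolyanskiy's actual architecture; the class name, channel widths, and the premise that the second view has already been projected onto the first view's grid are all hypothetical.

```python
import torch
import torch.nn as nn

class TwoViewFusion(nn.Module):
    # Two input heads extract features from two views of the environment;
    # the second view's features are assumed already projected onto the
    # first view's grid, then the two tensors are concatenated and fused.
    def __init__(self, ch_a=3, ch_b=1, out_ch=64):
        super().__init__()
        self.head_a = nn.Sequential(nn.Conv2d(ch_a, 32, 3, padding=1), nn.ReLU())
        self.head_b = nn.Sequential(nn.Conv2d(ch_b, 32, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(64, out_ch, kernel_size=1)

    def forward(self, view_a, view_b_projected):
        fa = self.head_a(view_a)
        fb = self.head_b(view_b_projected)
        return self.fuse(torch.cat([fa, fb], dim=1))

# e.g. a perspective image and a projected lidar view on the same grid
fused = TwoViewFusion()(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
```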
Regarding claim 3, Chen discloses the method of claim 1 as discussed supra. Additionally, Chen discloses that generating the second and third data comprises jointly predicting: the second data representing the one or more classifications of the one or more candidate anchor points (Disclosure of Invention, the first regression network is used for predicting the central point thermodynamic diagram based on the characteristic information of the image to be detected), and the third data representing the one or more parametric curves fitted to the one or more detected landmarks using a shape regression head of the one or more NNs (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Disclosure of Invention, the second regression network is used for predicting the position deviation between the central point and the control point based on the characteristic information of the image to be detected). However, Chen does not explicitly disclose the second data representing the one or more classifications of the one or more candidate anchor points using a classification head of the one or more NNs. Nevertheless, Smolyanskiy discloses the second data representing the one or more classifications of the one or more candidate anchor points using a classification head of the one or more NNs (0070, the encoder/decoder trunk 650 may extract features into some latent space tensor, which may be input into the class confidence head 655 and the instance regression head 660).

Regarding claim 4, Chen discloses the method of claim 1 as discussed supra. Additionally, Smolyanskiy discloses that generating the second and third data comprises sequentially: predicting the second data representing the one or more classifications of the one or more candidate anchor points using a first stage of the one or more NNs (Smolyanskiy, Paragraph 0006, Lines 9-11, an example multi-view perception DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view)) … (Smolyanskiy, Paragraph 0007, Lines 1-3, the first stage may extract classification data (e.g., confidence maps, segmentation masks, etc.) from a LiDAR range image or an RGB image), and predicting the third data representing the one or more parametric curves fitted to the one or more detected landmarks based at least on processing a representation of the second data using a second stage of the one or more NNs (0069, the second stage may extract features from a representation of the transformed classification data and/or geometry data (e.g., a tensor having M+N channels), and may perform class segmentation and/or regress instance geometry in the second view). Additionally, Chen discloses the parametric curves (Detailed Description, the lane line detection device provided by the embodiment of the invention predicts the central point thermodynamic diagram by using the lane line detection network, then determines the central point of the lane line based on the central point thermodynamic diagram, and obtains the control point related to the central point based on the central point and the position deviation between the control point predicted by the lane line detection network and the central point, and the control point related to the central point can be used for determining the Bezier curve of the lane line corresponding to the central point).
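By contrast with the joint heads shown earlier, claim 4's "sequentially" limitation chains two stages: the first predicts the classification data, and the second regresses the curve parameters from a representation that includes that classification data. A minimal sketch under assumed channel counts (not the applicant's or Smolyanskiy's actual stages):

```python
import torch
import torch.nn as nn

# Stage 1: predict a per-cell classification map from the view.
stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 1), nn.Sigmoid())
# Stage 2: regress curve parameters from view + classification map.
stage2 = nn.Sequential(nn.Conv2d(3 + 1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 8, 1))

view = torch.randn(1, 3, 64, 64)
cls_map = stage1(view)                                     # "second data"
curve_params = stage2(torch.cat([view, cls_map], dim=1))   # "third data"
```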
Regarding claim 10, Chen discloses the method of claim 1 as discussed supra. Additionally, Smolyanskiy discloses the method being performed by at least one of: a control system for an autonomous or semi-autonomous machine (Smolyanskiy, Paragraph 0006, Lines 3-6, systems and methods described herein use object detection techniques to identify or detect instances of obstacles (e.g., cars, trucks, pedestrians, cyclists, etc.) and other objects such as environmental parts for use by autonomous vehicles, semi-autonomous vehicles, robots, and/or other object types); a perception system for an autonomous or semi-autonomous machine (Smolyanskiy, Paragraph 0006, Lines 1-2, embodiments of the present disclosure relate to LiDAR perception for autonomous machines using deep neural networks (DNNs)); a system for performing simulation operations (Smolyanskiy, Paragraph 0173, Lines 3-8, the real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects within a world model, to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation); a system for performing real-time streaming (Smolyanskiy, Paragraph 0229, Lines 1-7, the vehicle 1600 may further include the infotainment SoC 1630 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as a SoC, the infotainment system may not be a SoC, and may include two or more discrete components. The infotainment SoC 1630 may include a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.) and video (e.g., TV, movies, streaming, etc.)); a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content (Smolyanskiy, Paragraph 0166, Lines 3-6, the PVA(s) may be designed and configured to accelerate computer vision algorithms for the advanced driver assistance systems (ADAS), autonomous driving, and/or augmented reality (AR) and/or virtual reality (VR) applications); a system for performing digital twin operations (Smolyanskiy, Paragraph 0066, Lines 7-11, sensor data representing 3D locations of detected objects in the environment may be sampled or otherwise processed to represent characteristics of the detected objects in a particular dimension (e.g., the orthogonal dimension), for example, by taking one or more slices of the sensor data in the particular dimension); a system for performing deep learning operations (Smolyanskiy, Paragraph 0163, Lines 1-2, the accelerator(s) 1614 (e.g., the hardware acceleration cluster) may include a deep learning accelerator(s) (DLA)); a system implemented using an edge device (Smolyanskiy, Paragraph 0138, Lines 1-8, the vehicle 1600 further includes a network interface 1624 which may use one or more wireless antenna(s) 1626 and/or modem(s) to communicate over one or more networks. For example, the network interface 1624 may be capable of communication over LTE, WCDMA, UMTS, GSM, CDMA2000, etc.
The wireless antenna(s) 1626 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth LE, Z-Wave, ZigBee, etc., and/or low power wide-area network(s) (LPWANs), such as LoRaWAN, SigFox, etc.); a system implemented using a robot (Smolyanskiy, Paragraph 0006, Lines 3-6, systems and methods described herein use object detection techniques to identify or detect instances of obstacles (e.g., cars, trucks, pedestrians, cyclists, etc.) and other objects such as environmental parts for use by autonomous vehicles, semi-autonomous vehicles, robots, and/or other object types); a system incorporating one or more virtual machines (VMs) (Smolyanskiy, Paragraph 0041, Lines 10-13, various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory); a system implemented at least partially in a data center (Smolyanskiy, Paragraph 0028, Lines 1-3, FIG. 16D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle of FIG. 16A, in accordance with some embodiments of the present disclosure; FIG. 16D shows the system diagram for communication between cloud-based servers that could be a data center and the autonomous vehicle, see Figure 1); a system for performing light transport simulation (Smolyanskiy, Paragraph 0044, Lines 3-7, a LiDAR system may include a transmitter that emits pulses of laser light. The emitted light waves reflect off of certain objects and materials, and one of the LiDAR sensors may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, reflectivity, signal-to-noise ratio (SNR), and/or the like); a system for performing collaborative content creation for 3D assets (Smolyanskiy, Paragraph 0038, Lines 24-26, annotations and/or links between different types of sensor data for the same object may be generated manually and/or automatically, and may be used to generate training data for the multi-view perception DNN); a system for generating synthetic data (Smolyanskiy, Paragraph 0038, Lines 24-26, annotations and/or links between different types of sensor data for the same object may be generated manually and/or automatically, and may be used to generate training data for the multi-view perception DNN); or a system implemented at least partially using cloud computing resources (Smolyanskiy, Paragraph 0028, Lines 1-3, FIG. 16D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle of FIG. 16A, in accordance with some embodiments of the present disclosure).

[Figure 1: system diagram for communication between cloud-based servers, which could include a data center, and the autonomous vehicle (Smolyanskiy FIG. 16D).]
Regarding claim 11, Chen discloses a processor comprising: one or more processing units to: determine, using sensor data obtained using at least one sensor in an environment, first data representing a view of at least a portion of the environment (Disclosure of Invention, the computer program, when executed by the processor, implementing any of the lane line detection methods of the first aspect), and generate, using one or more Neural Networks (NNs) and based at least on the first data, classification data representing one or more classifications of one or more candidate anchor points of one or more regressed parametric curves fitted to one or more detected landmarks corresponding to the view, and shape regression data representing the one or more regressed parametric curves (Detailed Description, Paragraph 3, with the development of the deep neural network, a plurality of lane line detection methods based on the deep network appear) … (Detailed Description, Paragraph 3, such as segmentation or instance segmentation based methods based on anchor point (anchor) target detection) … (Detailed Description, Paragraph 3, these methods indirectly express lane lines based on a segmentation map or a detection of a large number of points, the global expressiveness is not enough, and lane line detection may fail under the condition of occlusion or extreme weather. Some methods express lane lines by polynomial curves, and model lane line curves by using parameters of deep network regression) … (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line). However, Chen does not explicitly disclose determining, using three-dimensional (3D) sensor data obtained using at least one 3D sensor in an environment, first data representing a projected view of the 3D sensor data. Nevertheless, Smolyanskiy discloses using three-dimensional (3D) sensor data obtained using at least one 3D sensor in an environment, first data representing a projected view of the 3D sensor data (0062, a classification value from a predicted confidence map may be associated with 3D locations in a similar manner as the previous example, by identifying the 3D location of a sensor detection (e.g., a point in a point cloud) represented by a corresponding range scan pixel in the range image) … (0060, classification data extracted by the encoder/decoder 605 may be used to label 620 corresponding 3D locations in the environment, and the labeled 3D locations may be projected 625 into the second view).

Regarding claim 13, Chen discloses the processor of claim 11 as discussed supra. Additionally, Chen discloses the parametric curves (Detailed Description, the lane line detection device provided by the embodiment of the invention predicts the central point thermodynamic diagram by using the lane line detection network, then determines the central point of the lane line based on the central point thermodynamic diagram, and obtains the control point related to the central point based on the central point and the position deviation between the control point predicted by the lane line detection network and the central point, and the control point related to the central point can be used for determining the Bezier curve of the lane line corresponding to the central point).
However, Chen does not explicitly disclose that the one or more processing units are further to generate the classification data and the shape regression data based at least on sequentially: predicting the classification data representing the one or more classifications of the one or more candidate anchor points using a first stage of the one or more NNs. Nevertheless, Smolyanskiy discloses the one or more processing units being further to generate the classification data and the shape regression data based at least on sequentially: predicting the classification data representing the one or more classifications of the one or more candidate anchor points using a first stage of the one or more NNs (Smolyanskiy, Paragraph 0006, Lines 9-11, an example multi-view perception DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view)) … (Smolyanskiy, Paragraph 0007, Lines 1-3, the first stage may extract classification data (e.g., confidence maps, segmentation masks, etc.) from a LiDAR range image or an RGB image), and predicting the shape regression data representing the one or more regressed parametric curves based at least on processing a representation of the classification data using a second stage of the one or more NNs (0069, the second stage may extract features from a representation of the transformed classification data and/or geometry data (e.g., a tensor having M+N channels), and may perform class segmentation and/or regress instance geometry in the second view).

Regarding claim 18, Chen discloses a system comprising: one or more processors to: generate a projected representation of sensor data of an environment (Disclosure of Invention, the computer program, when executed by the processor, implementing any of the lane line detection methods of the first aspect), and generate, using one or more Neural Networks (NNs) and based at least on the projected representation, second data representing one or more classifications of one or more candidate anchor points of one or more parametric curves fitted to one or more detected landmarks (Detailed Description, Paragraph 3, with the development of the deep neural network, a plurality of lane line detection methods based on the deep network appear) … (Detailed Description, Paragraph 3, such as segmentation or instance segmentation based methods based on anchor point (anchor) target detection), and third data representing the one or more parametric curves fitted to the one or more detected landmarks (Detailed Description, image is input to a neural network (for example, a convolutional neural network) for feature extraction to obtain a feature map that is output by a last layer of the neural network, and the extracted feature map is decoded by using a prediction head model to generate dense line clusters (that is, a plurality of predicted lane lines). Finally, line non-maximum suppression (Line-NMS) processing is performed on the line cluster to output a final prediction result for a lane line in the to-be-detected image). However, Chen does not explicitly disclose generating a projected representation of Lidar data representing an environment. Nevertheless, Smolyanskiy discloses generating a projected representation of Lidar data representing an environment (0007, the first stage may extract classification data (e.g., confidence maps, segmentation masks, etc.) from a LiDAR range image or an RGB image. The extracted classification data may be transformed to a second view of the environment).
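The "projected representation of Lidar data" drawn from Smolyanskiy is, in the range-image case, a spherical projection of the point cloud onto an azimuth/elevation grid. The following is a rough numpy sketch under assumed field-of-view values; the bin sizes, FOV limits, and last-write-wins collision handling are simplifications for illustration, not Smolyanskiy's actual procedure.

```python
import numpy as np

def project_to_range_image(points, h=32, w=512,
                           fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    # Project an (N, 3) lidar point cloud into an (h, w) range image:
    # each pixel stores the range of a point falling into that
    # azimuth/elevation bin (collisions resolved by last write).
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.arctan2(y, x)                        # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))    # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w), dtype=float)
    img[v, u] = r
    return img

cloud = np.random.uniform(-20, 20, size=(1000, 3))    # synthetic points
range_image = project_to_range_image(cloud)           # the projected "first data"
```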
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Tabelini (PolyLaneNet: Lane Estimation via Deep Polynomial Regression).

Regarding claim 7, Chen discloses the method of claim 1 as discussed supra. Additionally, Chen discloses that generating the one or more parametric curves comprises predicting, in one or more channels of the third data using the one or more NNs (Detailed Description, the second regression network may also be composed of two layers of convolution networks, and outputs: 8 (h/4) (w/4), predicting the position deviation between the 4 control points and the central point. The second regression network outputs a corresponding position deviation value for each point on the predicted central point thermodynamic diagram, specifically, 8 position deviation values for each point (4 points, one deviation value for each x-axis and one deviation value for each y-axis of each point)), one or more parameters of a parameterization of the at least one individual fitted Bezier spline (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line). However, Chen does not explicitly disclose the at least one individual fitted polynomial. Nevertheless, Tabelini, who is in the same field of endeavor of lane estimation, discloses the at least one individual fitted polynomial (Section III, PolyLaneNet adopts a polynomial representation for the lane markings instead of a set of points. Therefore, for each output j, j = 1, …, Mmax, the model estimates the coefficients Pj = {a_{k,j}}, k = 0, …, K, representing the polynomial, where K is a parameter that defines the order of the polynomial).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the two disclosures. This would serve to have the regression head predict normalized coefficients directly by using fitted polynomials. Justification for combining the two disclosures comes not only from the state of the art but from Chen (Final Paragraph, the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention).

Regarding claim 16, Chen discloses the processor of claim 11 as discussed supra.
Additionally, Chen discloses the one or more regressed parametric curves comprising at least one individual fitted Bezier spline (Detailed Description, in some optional specific embodiments, after determining a first bezier curve based on a plurality of control points associated with each central point, respectively), or at least one individual fitted polynomial (Detailed Description, some methods express lane lines by polynomial curves, and model lane line curves by using parameters of deep network regression), and wherein the one or more processing units are further to generate the shape regression data based at least on predicting, in one or more channels of the shape regression data using the one or more NNs (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Disclosure of Invention, the second regression network is used for predicting the position deviation between the central point and the control point based on the characteristic information of the image to be detected), one or more parameters of a parameterization of the at least one individual fitted Bezier spline (Abstract, determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line). However, Chen does not explicitly disclose the at least one individual fitted polynomial. Nevertheless, Tabelini discloses the at least one individual fitted polynomial (Section III, PolyLaneNet adopts a polynomial representation for the lane markings instead of a set of points. Therefore, for each output j, j = 1, …, Mmax, the model estimates the coefficients Pj = {a_{k,j}}, k = 0, …, K, representing the polynomial, where K is a parameter that defines the order of the polynomial).
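Tabelini's polynomial representation is compact: a lane reduces to the K+1 coefficients of a polynomial, which a network can regress directly. A minimal numpy sketch of fitting and decoding such a representation; the sample points, noise level, and degree are made-up values, not PolyLaneNet's training data.

```python
import numpy as np

# Synthetic lane points x = f(y) along normalized image rows.
ys = np.linspace(0.0, 1.0, 20)
xs = (0.5 + 0.3 * ys - 0.2 * ys ** 2 + 0.05 * ys ** 3
      + np.random.normal(0, 0.01, ys.shape))

# The K+1 coefficients a_k are the entire lane representation
# (what a PolyLaneNet-style head would regress directly).
coeffs = np.polyfit(ys, xs, deg=3)

# Decoding the lane is just evaluating the polynomial.
lane_x = np.polyval(coeffs, ys)
```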
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Smolyanskiy (US20210150230A1) further in view of Tabelini (PolyLaneNet: Lane Estimation via Deep Polynomial Regression).

Regarding claim 8, Chen discloses the method of claim 1 as discussed supra. Additionally, Smolyanskiy discloses that generating the second data representing the one or more classifications of the one or more candidate anchor points comprises predicting, for at least one individual pixel of one or more pixels using the one or more NNs, a likelihood that the at least one individual pixel is associated with a first class of the one or more detected landmarks (Smolyanskiy, Paragraph 0059, Lines 8-12, by way of nonlimiting example, the encoder/decoder 605 may output a tensor with N channels corresponding to N classes (e.g., one confidence map per channel). Thus, each pixel in the tensor may store depth-wise pixel values representing a probability, score, or logit that the pixel is part of a corresponding class for each channel), and one or more parameters of a parameterization of a fitted shape associated with the at least one individual pixel (Smolyanskiy, Paragraph 0053, Lines 8-13, the instance regression data 412 representing detected objects in the 3D environment. The classification data and object instance data may be post-processed 414 to generate class labels and 2D and/or 3D bounding boxes, closed polylines, or other bounding shapes identifying the locations, geometry, and/or orientations of the detected object instances).

Additionally, Tabelini discloses that generating the third data representing the one or more parametric curves comprises predicting, for the at least one individual pixel using the one or more NNs (Detailed Description, for each central point, determining a plurality of control points related to the central point according to the first predicted position deviation; and respectively determining a first Bezier curve based on a plurality of control points related to each central point, wherein each first Bezier curve is used for representing a lane line) … (Detailed Description, the control point related to the central point can be used for determining the Bezier curve of the lane line corresponding to the central point, so that any number of lane lines can be detected in a self-adaptive mode). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the two disclosures. This would serve to have the regression head predict normalized coefficients directly by using fitted polynomials. Justification for combining the two disclosures comes not only from the state of the art but from Chen (Final Paragraph, the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention).
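Claim 8's per-pixel reading of Smolyanskiy (an N-channel tensor holding per-class scores at each pixel, alongside shape parameters at the same pixel) can be illustrated as follows; the tensor shapes, the softmax normalization, and the chosen pixel are arbitrary assumptions for illustration.

```python
import numpy as np

# logits: (num_classes, H, W) classification channels;
# shape_data: (num_params, H, W) shape-regression channels.
logits = np.random.randn(3, 16, 16)
shape_data = np.random.randn(8, 16, 16)

# Per-pixel class likelihoods via a softmax over the class axis.
probs = np.exp(logits - logits.max(axis=0, keepdims=True))
probs /= probs.sum(axis=0, keepdims=True)

px, py = 4, 7                           # one individual pixel (col px, row py)
likelihood_first_class = probs[0, py, px]
params_at_pixel = shape_data[:, py, px]  # fitted-shape parameters at that pixel
```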
Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Smolyanskiy (US20210150230A1) further in view of Muthler (US10885698B2).

Regarding claim 17, Chen discloses the processor of claim 11 as discussed supra. Additionally, Smolyanskiy discloses the processor being comprised in at least one of: a control system for an autonomous or semi-autonomous machine (Smolyanskiy, Paragraph 0006, Lines 3-6, systems and methods described herein use object detection techniques to identify or detect instances of obstacles (e.g., cars, trucks, pedestrians, cyclists, etc.) and other objects such as environmental parts for use by autonomous vehicles, semi-autonomous vehicles, robots, and/or other object types); a perception system for an autonomous or semi-autonomous machine (Smolyanskiy, Paragraph 0006, Lines 1-2, embodiments of the present disclosure relate to LiDAR perception for autonomous machines using deep neural networks (DNNs)); a system for performing simulation operations (Smolyanskiy, Paragraph 0173, Lines 3-8, the real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects within a world model, to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation); a system for performing digital twin operations (Smolyanskiy, Paragraph 0066, Lines 7-11, sensor data representing 3D locations of detected objects in the environment may be sampled or otherwise processed to represent characteristics of the detected objects in a particular dimension (e.g., the orthogonal dimension), for example, by taking one or more slices of the sensor data in the particular dimension); a system for performing light transport simulation (Smolyanskiy, Paragraph 0044, Lines 3-7, a LiDAR system may include a transmitter that emits pulses of laser light. The emitted light waves reflect off of certain objects and materials, and one of the LiDAR sensors may detect these reflections and reflection characteristics such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, reflectivity, signal-to-noise ratio (SNR), and/or the like); a system for performing collaborative content creation for 3D assets (Smolyanskiy, Paragraph 0038, Lines 24-26, annotations and/or links between different types of sensor data for the same object may be generated manually and/or automatically, and may be used to generate training data for the multi-view perception DNN); a system for performing deep learning operations (Smolyanskiy, Paragraph 0163, Lines 1-2, the accelerator(s) 1614 (e.g., the hardware acceleration cluster) may include a deep learning accelerator(s) (DLA)); a system for performing remote operations (Smolyanskiy, Paragraph 0234, Lines 15-18, once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network(s) 1690), and/or the machine learning models may be used by the server(s) 1678 to remotely monitor the vehicles); a system for performing real-time streaming (Smolyanskiy, Paragraph 0229, Lines 1-7, the vehicle 1600 may further include the infotainment SoC 1630 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as a SoC, the infotainment system may not be a SoC, and may include two or more discrete components. The infotainment SoC 1630 may include a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.) and video (e.g., TV, movies, streaming, etc.)); a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content (Smolyanskiy, Paragraph 0166, Lines 3-6, the PVA(s) may be designed and configured to accelerate computer vision algorithms for the advanced driver assistance systems (ADAS), autonomous driving, and/or augmented reality (AR) and/or virtual reality (VR) applications); a system implemented using an edge device (Smolyanskiy, Paragraph 0138, Lines 1-8, the vehicle 1600 further includes a network interface 1624 which may use one or more wireless antenna(s) 1626 and/or modem(s) to communicate over one or more networks. For example, the network interface 1624 may be capable of communication over LTE, WCDMA, UMTS, GSM, CDMA2000, etc. The wireless antenna(s) 1626 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth LE, Z-Wave, ZigBee, etc., and/or low power wide-area network(s) (LPWANs), such as LoRaWAN, SigFox, etc.); a system implemented using a robot (Smolyanskiy, Paragraph 0006, Lines 3-6, systems and methods described herein use object detection techniques to identify or detect instances of obstacles (e.g., cars, trucks, pedestrians, cyclists, etc.)
and other objects such as environmental parts for use by autonomous vehicles, semi-autonomous vehicles, robots, and/or other object types); a system for generating synthetic data (Smolyanskiy, Paragraph 0038, Lines 24-26, annotations and/or links between different types of sensor data for the same object may be generated manually and/or automatically, and may be used to generate training data for the multi-view perception DNN); a system incorporating one or more virtual machines (VMs) (Smolyanskiy, Paragraph 0041, Lines 10-13, various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory); a system implemented at least partially in a data center (Smolyanskiy, Paragraph 0028, Lines 1-3, FIG. 16D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle of FIG. 16A, in accordance with some embodiments of the present disclosure; see Figure 1 supra); or a system implemented at least partially using cloud computing resources (Smolyanskiy, Paragraph 0028, Lines 1-3, FIG. 16D is a system diagram for communication between cloud-based server(s) and the example autonomous vehicle of FIG. 16A, in accordance with some embodiments of the present disclosure).

However, Smolyanskiy does not explicitly disclose a system for performing conversational AI operations. Nevertheless, Muthler, who is in the same field of endeavor of utilizing ray tracing in hardware for enhancing autonomous systems, teaches a system for performing conversational AI operations (Muthler, Paragraph 312, Lines 1-7, the architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the system 1965 may take the form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA)).

One of ordinary skill in the art prior to the effective filing date of the given invention would have been motivated to combine Chen and Smolyanskiy's disclosures with Muthler's teaching. This combination would enhance Chen and Smolyanskiy's system by adding Muthler's ray tracing methods to enhance the processing and visualization of the 3D point cloud from the lidar. Additionally, the architecture disclosed by Muthler excels at real-time decision making, which enables the personal digital assistant; this could be applied to an autonomous vehicle to process sensor data and make quick decisions. Further justification can be found from Muthler (Muthler, Paragraph 221, Lines 1-4, one or more PPUs 1700 may be configured to accelerate thousands of High Performance Computing (HPC), data center, and machine learning applications. The PPU 1700 may be configured to accelerate numerous deep learning systems and applications including autonomous vehicle platforms).

Regarding claim 20, Chen discloses the system of claim 18 as discussed supra.
Additionally, Smolyanskiy discloses the system being comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources, on the same citations mapped for claim 17 supra (Smolyanskiy, Paragraphs 0006, 0028, 0038, 0041, 0044, 0066, 0138, 0163, 0166, 0173, 0229, and 0234).

However, Smolyanskiy does not explicitly disclose a system for performing conversational AI operations. Nevertheless, Muthler teaches a system for performing conversational AI operations (Muthler, Paragraph 312, Lines 1-7, the architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system.
Nevertheless, Muthler teaches a system for performing conversational AI operations (Muthler, Paragraph 312, Lines 1-7, the architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the system 1965 may take the form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA)).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Smolyanskiy (US20210150230A1), further in view of Meier (Visual-inertial curve simultaneous localization and mapping: Creating a sparse structured world without feature points).

Regarding claim 9, Chen discloses the method of claim 1, as discussed supra. Additionally, Smolyanskiy discloses that the first data comprises a projected representation of measured three-dimensional (3D) lidar points (Smolyanskiy, Paragraph 0033, the input to the DNN may be formed from LiDAR data (e.g., a LiDAR range image, a projection of a LiDAR point cloud, etc.) and/or data from other sensors (e.g., images from any number of cameras)), the method further comprising decoding the one or more parametric curves fitted to the one or more detected landmarks represented in the measured 3D lidar points based at least on the second data and the third data (Smolyanskiy, Paragraph 0108, the outputs of the second stage (e.g., the class confidence data 410 and the instance regression data 412) may be post-processed (e.g., decoded) to generate bounding boxes, closed polylines, or other bounding shapes identifying the locations, geometry, and/or orientations of the detected object instances).

However, Smolyanskiy does not explicitly disclose localizing one or more positions of one or more ego-machines based at least on the one or more parametric curves fitted to the one or more detected landmarks represented in the measured 3D lidar points. Nevertheless, Meier, who is in the same field of endeavor of visual-inertial curve simultaneous localization and mapping, discloses localizing one or more positions of one or more ego-machines based at least on the one or more parametric curves fitted to the one or more detected landmarks represented in the measured 3D lidar points (Abstract, we present a simultaneous localization and mapping (SLAM) algorithm that uses Bézier curves as static landmark primitives rather than feature points) … (Introduction, with a stereo camera and inertial measurement unit (IMU), we reconstruct the three-dimensional (3D) location of these curves while simultaneously estimating the six degrees of freedom (6-DOF) pose of a robot).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the two disclosures, which would provide improved localization superior to point-based SLAM. Justification for combining the two disclosures comes not only from the state of the art but also from Chen (Final Paragraph, while the present invention has been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention).
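To make the disputed limitation concrete, the following is a minimal, hypothetical Python sketch of fitting a cubic Bézier curve (the parametric-curve primitive Chen and Meier both rely on) to landmark points obtained by projecting measured 3D lidar returns to a top-down view, roughly the "projected representation" recited in claim 9. All function names and toy data are assumptions for illustration, not code from any cited reference.

# Hypothetical sketch: least-squares fit of a cubic Bezier curve to 2D
# landmark points projected from a 3D lidar point cloud. Illustrative
# only; names and data are not taken from any cited reference.
import numpy as np

def bernstein_matrix(t):
    # Cubic Bernstein basis evaluated at parameters t; shape (n, 4).
    t = t[:, None]
    return np.hstack([(1 - t) ** 3,
                      3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t),
                      t ** 3])

def fit_cubic_bezier(points):
    # Chord-length parameterization assigns each ordered point a t in [0, 1].
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(seg)])
    t = d / d[-1]
    # Solve the linear least-squares problem A @ ctrl = points.
    ctrl, *_ = np.linalg.lstsq(bernstein_matrix(t), points, rcond=None)
    return ctrl  # (4, 2) array of Bezier control points

# Toy data: lidar returns along a curving lane boundary, projected to a
# top-down (x, y) representation by dropping the height coordinate.
xyz = np.array([[x, 0.02 * x ** 2, -1.6] for x in np.linspace(0.0, 20.0, 30)])
control_points = fit_cubic_bezier(xyz[:, :2])
print("fitted control points:\n", control_points)

A SLAM stage of the kind Meier describes would then treat such fitted curves, rather than individual feature points, as the static landmarks against which the ego-machine's pose is estimated.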
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN114694109B) in view of Kacyra et al. (US20050099637A1).

Regarding claim 5, Chen discloses the method of claim 1, as discussed supra. Additionally, Kacyra, who is in the same field of endeavor of imaging and modeling three-dimensional objects, discloses that the one or more parametric curves comprise one or more circles fitted to one or more detected landmarks (Kacyra, Paragraph 0195, one can project the scan points onto the plane. The projected points will be well described by a circle in this plane, since the plane is normal to the cylinder axis. A best fit circle can be calculated using the projected points on the plane to give an estimate of the cylinder radius. The center of the circle on the plane can be converted to a 3-D point to give a point on the cylinder axis).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the two disclosures. Circles are a well-known parametric form for compactly modeling curved landmark geometries such as poles or curved segments; including circles among the parametric curves would decrease representational complexity and result in parametric curves that include circles fitted to detected landmarks such as poles. Justification for combining the two disclosures comes not only from the state of the art but also from Kacyra (Paragraph 0204, the segmentation techniques disclosed above can be used to create a variety of useful fitting tools based on combinations of the previously described shapes. For instance, a corner, consisting of an intersection of three planes which may or may not be orthogonal, is a very common feature to scan).
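The circle-fitting step Kacyra describes in Paragraph 0195 can be sketched in a few lines. This minimal example uses an algebraic (Kasa-style) least-squares fit; Kacyra does not name a particular fitting method, so the method choice, names, and toy data here are assumptions.

# Hypothetical sketch: best-fit circle for scan points projected onto a
# plane normal to a cylinder axis, estimating the cylinder radius.
# Fits x^2 + y^2 + D*x + E*y + F = 0 by linear least squares.
import numpy as np

def fit_circle(points):
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0           # circle center
    r = np.sqrt(cx ** 2 + cy ** 2 - F)    # circle radius
    return (cx, cy), r

# Toy data: noisy projected returns from a pole of radius 0.3 m.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([2.0 + 0.3 * np.cos(theta),
                       5.0 + 0.3 * np.sin(theta)])
pts += rng.normal(scale=0.005, size=pts.shape)
(center_x, center_y), radius = fit_circle(pts)
print(f"center ~ ({center_x:.3f}, {center_y:.3f}), radius ~ {radius:.3f} m")

The recovered center can then be converted back to a 3-D point on the cylinder axis, as the cited paragraph describes.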
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. (A worked example of this date arithmetic appears after the signature block below.)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE E DOUGLAS, whose telephone number is (703) 756-1417. The examiner can normally be reached Monday - Friday, 7:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christian Chace, can be reached on (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.E.D./
Examiner, Art Unit 3665

/CHRISTIAN CHACE/
Supervisory Patent Examiner, Art Unit 3665
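To make the reply-window arithmetic in the Conclusion concrete, here is a minimal sketch assuming the Jan 27, 2026 mailing date shown in the prosecution timeline below. The add_months helper is a hypothetical simplification: real statutory-period computation must also account for weekends, federal holidays, and 37 CFR 1.136(a) extensions.

# Hypothetical sketch of the shortened-statutory-period dates described
# above. Simplified calendar-month arithmetic only.
from datetime import date

def add_months(d, months):
    # Same day-of-month N months later (clamped to the month's length).
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, days_in_month))

mailed = date(2026, 1, 27)  # final action mailing date (from the timeline)
print("3-month shortened period:", add_months(mailed, 3))    # 2026-04-27
print("2-month 'first reply' date:", add_months(mailed, 2))  # 2026-03-27
print("6-month statutory maximum:", add_months(mailed, 6))   # 2026-07-27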

Prosecution Timeline

Feb 17, 2023
Application Filed
Dec 30, 2024
Non-Final Rejection — §102, §103
Mar 05, 2025
Examiner Interview (Telephonic)
Mar 05, 2025
Examiner Interview Summary
Mar 06, 2025
Response Filed
May 20, 2025
Final Rejection — §102, §103
Jul 14, 2025
Request for Continued Examination
Jul 16, 2025
Response after Non-Final Action
Aug 18, 2025
Non-Final Rejection — §102, §103
Oct 28, 2025
Interview Requested
Nov 06, 2025
Examiner Interview Summary
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 13, 2025
Response Filed
Jan 27, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592101
INFORMATION COMMUNICATION DEVICE OF VEHICLE, INFORMATION MANAGEMENT SERVER, AND INFORMATION COMMUNICATION SYSTEM
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner, based on the most recent grant.


Prosecution Projections

5-6
Expected OA Rounds
17%
Grant Probability
39%
With Interview (+22.2%)
2y 4m
Median Time to Grant
High
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
