Prosecution Insights
Last updated: April 19, 2026
Application No. 18/769,977

SYSTEM AND METHODS FOR UPDATING HIGH DEFINITION MAPS

Status: Non-Final OA (§101, §102, §103)
Filed: Jul 11, 2024
Examiner: THOMAS, ANA D
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 8m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 88% (above average; 359 granted / 408 resolved; +36.0% vs TC avg)
Interview Lift: +6.4% on resolved cases with interview (moderate, roughly +6%)
Typical Timeline: 2y 8m average prosecution; 20 applications currently pending
Career History: 428 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§102: 30.2% (-9.8% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 408 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED CORRESPONDENCE

This Office action is in response to the application filed 7/11/2024.

Claim Status: Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement: The information disclosure statements (IDS) submitted on 12/13/2024 and 5/12/2025 comply with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings: The drawings are objected to under 37 CFR 1.83(a) because items 301, 304, and 306 and the "chosen path" and "pixelwise labels" blocks fail to clearly show the structural details, since these items contain a photograph of a view that is capable of being illustrated as a line drawing. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites: applying, to one or more neural networks, sensor data obtained using one or more image sensors and one or more LiDAR sensors; determining, using the one or more neural networks and based at least on the applying, semantic labels corresponding to one or more objects in an environment; storing, in association with one or more locations in a semantic map of the environment, object information corresponding to the semantic labels; and determining a path for a machine through the environment using the semantic map.

Step 1: Statutory category - Yes. Claim 1 recites a method including at least one step, so the claim falls within one of the four statutory categories. See MPEP 2106.03.
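For orientation before the eligibility analysis, claim 1's four steps form a perception-to-planning pipeline. Below is a minimal, hypothetical Python sketch of that pipeline; every name (DetectedObject, SemanticMap, the stub network and planner) is an illustrative assumption, not the applicant's implementation or Adams's.

```python
# Hypothetical sketch of the four steps of claim 1; all names are
# illustrative assumptions, not the applicant's implementation.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str        # semantic label, e.g. "lane_marker"
    location: tuple   # map location associated with the object

class SemanticMap:
    def __init__(self):
        self.entries = {}  # location -> object information

    def store(self, location, object_info):
        self.entries[location] = object_info

def run_claimed_pipeline(image_data, lidar_data, segmentation_net, planner):
    # Steps 1-2: apply camera + LiDAR data to one or more neural networks
    # and determine semantic labels for objects in the environment.
    detected = segmentation_net(image_data, lidar_data)  # -> [DetectedObject]

    # Step 3: store object information, keyed by location, in a semantic map.
    semantic_map = SemanticMap()
    for obj in detected:
        semantic_map.store(obj.location, {"label": obj.label})

    # Step 4: determine a path for the machine using the semantic map.
    return planner(semantic_map)

# Toy usage with stub components:
stub_net = lambda img, pts: [DetectedObject("lane_marker", (10, 2))]
stub_planner = lambda m: ["waypoint_a", "waypoint_b"]
print(run_claimed_pipeline(None, None, stub_net, stub_planner))
```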
Step 2A, Prong One: Judicial Exception - Yes (mathematical processes). In Step 2A, Prong One of the 2019 Patent Eligibility Guidance (PEG), a claim is analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: (a) mathematical concepts, (b) mental processes, and/or (c) certain methods of organizing human activity. The Office submits that the foregoing bolded limitations constitute judicial exceptions as "mathematical concepts" because, under the broadest reasonable interpretation, the limitations are a "relationship between variables or numbers." See MPEP 2106.04(a)(2)(I). The claim recites (in part): determining, using the one or more neural networks and based at least on the applying, semantic labels corresponding to one or more objects in an environment; determining a path for a machine through the environment using the semantic map. These "determining" limitations, as drafted and under their broadest reasonable interpretation, cover performance using mathematical concepts. For example, but for the "determining" language, the claim encompasses a model on a processor computing the mathematical relationship between the data collected from the various sensors. The mere recitation of determining a path for a machine through the environment using the semantic map does not take the claim limitations out of the mathematical-process grouping, because the claim language does not positively recite controlling the subject vehicle based on the determined path (emphasis added). Thus, claim 1 recites a mathematical process.

Step 2A, Prong Two: Practical Application - No. In Step 2A, Prong Two of the 2019 PEG, a claim is evaluated to determine whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." The Office submits that the foregoing underlined limitations recite additional elements that do not integrate the recited judicial exception into a practical application. The claim recites the additional elements of: applying, to one or more neural networks, sensor data obtained using one or more image sensors and one or more LiDAR sensors [selecting a particular data source or type of data to be manipulated]; and storing, in association with one or more locations in a semantic map of the environment, object information corresponding to the semantic labels [data gathering, assessing data derived from the sensors]. These elements do not use the judicial exception in a manner that imposes a meaningful limit on it, such that the claim is more than a drafting effort designed to monopolize the exception. The additional limitations are no more than mere data gathering and data manipulation.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Likewise, dependent claims 2-8, 10-15, and 17-20, as a whole, do not integrate the recited judicial exception into a practical application.

Regarding claim 9: Claim 9 is the processor that performs the method of claim 1; therefore, claim 9 is rejected under the same rationale as claim 1. Regarding claim 16: Claim 16 is the system that uses the method of claim 1; therefore, claim 16 is rejected under the same rationale as claim 1.

Step 2B: Inventive Concept - No. In Step 2B of the 2019 PEG, the claims are evaluated as to whether each claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As discussed with respect to Step 2A, Prong Two, the additional elements in claims 2-8, 10-15, and 17-20 amount to no more than mere data gathering, data manipulation, insignificant extra-solution activity, and/or data output. The same analysis applies here in Step 2B: data manipulation and/or data output to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. MPEP 2106.05(f). Thus, these claims are ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 5, 7, 9, 10, 13, 16, 17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Adams et al., US 2020/0372285, hereinafter "Adams."

Claims 1, 9, and 16: Adams teaches a method comprising: applying, to one or more neural networks, sensor data obtained using one or more image sensors and one or more LiDAR sensors ([0021]-[0022] teach the neural-network concept as such: "The neural network may be part of a machine-learned model trained to detect features of repeated object classifications, where the machine-learned model uses the output of the neural network (e.g., the semantically segmented image) to detect and output features of repeated object classifications in an environment.
As above, a single network may be used to both segment an image and extract features associated with particular classifications, and/or multiple steps may be used (e.g., output of a first network trained to segment an image and provide masks of relevant portions of the image is then used to determine associated features)," while [0011] along with [0014] teaches "[s]ensor data captured by the vehicle can include lidar data, radar data, image data, time of flight data, sonar data, odometer data (such as wheel encoders), IMU data, and the like. In some cases, the sensor data can be provided to a perception system configured to determine a type (classification) of an object (e.g., vehicle, pedestrian, bicycle, motorcycle, animal, parked car, tree, building, and the like) in the environment." Taken together, the cited sections read on this element. Fig. 6, step 604 teaches inputting the image into a machine-learned model trained to segment images.);

determining, using the one or more neural networks and based at least on the applying, semantic labels corresponding to one or more objects in an environment ([0012] along with [0022] reads on this element as such: "[f]or instance, the sensor data may be captured by the vehicle as the vehicle traverses an environment. In some examples, the vehicle may include one or more cameras configured to capture sequences of images (e.g., individual images and/or video) of the surrounding environment. Images in the sequence of images may be semantically segmented to associate pixels of an image with a label indicating the associated classification, e.g., drivable region, car, pedestrian, sidewalk, traffic control signal, and so forth. ... In some cases, detecting the feature may be performed by inputting the image 108 and/or the segmented image 112 into a machine-learned model trained to determine features of objects in images, and receiving the feature from the machine-learned model. ... For instance, an area of interest may correspond to a subset of pixels of the image 108 associated with one or more of the labels indicated above. Thus, by masking the segmented image 130, processing resources can be conserved by focusing image analysis on an area of interest, such as the lane markers 124.");

storing, in association with one or more locations in a semantic map of the environment, object information corresponding to the semantic labels ([0020]-[0022] along with [0045] read on this element as such: "[a]n operation 110 includes segmenting the images into different object classifications in the environment. Segmenting may include associating subsets of pixels of the image with one or more class labels. A segmented image 112 corresponds to the image 108 of the environment surrounding the vehicle 104. The segmented image 112 includes areas (e.g., subsets of pixels of the image 108) which may be labeled as a drivable region 114, a non-drivable region 116, vegetation 118, other vehicles 120, a pedestrian 122, lane markers 124, and street signs 126. ...
In the illustrated example, the memory 418 of the vehicle computing device(s) 404 stores a localization component 420, a perception component 422, one or more maps 424, one or more system controllers 426, a semantic segmentation (SemSeg) localization component 428, a semantic segmentation component 430, location determination component 432, and a planning component 434."); and

determining a path for a machine through the environment using the semantic map (taken together, [0046], [0048], and [0051] describe this element as such: "In some instances, the localization component 420 can provide data to various components of the vehicle 402 to determine an initial position of an autonomous vehicle for generating a trajectory, for determining to retrieve map data, and/or determining a speed of the vehicle 402 when a sequence of images is captured for determining a velocity of an object. ... In some examples, the vehicle 402 can be controlled based at least in part on the maps 424. That is, the maps 424 can be used in connection with the localization component 420, the perception component 422, the SemSeg localization component 428, or the planning component 434 to determine a location of the vehicle 402, identify objects in an environment, and/or generate routes and/or trajectories to navigate within an environment. ... In some instances, the SemSeg localization component 428 can provide location information generated by the semantic segmentation component 430 and/or the location determination component 432 to the planning component 434 to determine when and/or how to control the vehicle 402 to traverse an environment. As discussed herein, the SemSeg localization component 428 can receive image data, map data, lidar data, and the like to determine location-related information about objects in an environment.").

Claims 2, 10, and 17: Adams teaches the method of claim 1 and further teaches wherein the semantic labels include pixelwise labels of a semantic segmentation of one or more images corresponding to the sensor data ([0052] and [0078] along with [0086] read on this element as such: "The semantic segmentation component 430 included in the SemSeg localization component 428 receives images, such as from a camera of the sensor system 406, and labels pixels of the received images according to object classifications of objects identified in the images. At operation 504, the process can include associating pixels of the image with a label representing an object of an object type (or classification). In some examples, associating the pixels of the image with the label may be part of a semantic segmentation process performed on the image, where pixels in different areas of the image are associated with different labels. Semantically segmenting the image may involve a neural network, though any other computer vision algorithm is contemplated. ... An operation 604 includes inputting the image into a machine-learned model trained to segment images. In some examples, the machine-learned model may be trained to associate pixels of the image with the label, where pixels in different areas of the image are associated with different labels. The machine-learned model used to segment the image in some cases may be a neural network, though any other computer vision algorithm is contemplated.").
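The pixelwise labeling mapped to claims 2, 10, and 17 is ordinary semantic segmentation: each pixel carries a class index, and masking one label isolates an area of interest. A small illustrative sketch (NumPy; the array shape and class list are assumptions, with the class names echoing the labels Adams recites):

```python
# Illustrative pixelwise semantic labels, as in a segmented image; the class
# list mirrors the labels named in Adams but is otherwise an assumption.
import numpy as np

CLASSES = ["drivable_region", "non_drivable", "vegetation",
           "vehicle", "pedestrian", "lane_marker", "street_sign"]

# A (height, width) array of class indices stands in for a segmentation
# network's argmax output over per-pixel class scores.
seg = np.zeros((4, 6), dtype=np.int64)       # all drivable_region by default
seg[3, :] = CLASSES.index("lane_marker")     # bottom row: lane markers
seg[0, 2] = CLASSES.index("pedestrian")      # one pedestrian pixel

# Masking an area of interest (e.g., lane markers) to focus later analysis,
# in the spirit of Adams [0022]:
lane_mask = seg == CLASSES.index("lane_marker")
print("lane-marker pixels:", int(lane_mask.sum()))
```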
Claims 5, 13, and 20: Adams teaches the method of claim 1 and further teaches wherein the one or more locations include one or more absolute locations corresponding to the one or more objects ([0037] teaches the concept of ground truth).

Claim 7: Adams teaches the method of claim 1 and further teaches wherein at least a portion of the object information represents one or more shapes of the one or more objects, the one or more shapes determined using the sensor data ([0050] reads on this element as such: "In some examples, the one or more maps 424 can store sizes or dimensions of objects associated with individual locations in an environment. For example, as the vehicle 402 traverses the environment and as maps representing an area proximate to the vehicle 402 are loaded into memory, one or more sizes or dimensions of objects associated with a location can be loaded into memory as well. In some examples, a known size or dimension of an object at a particular location in the environment may be used to determine a depth of a feature of an object relative to the vehicle 402 when determining a location of the vehicle 402.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3, 8, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Efland et al., US 2021/0406559, hereinafter "Efland."

Claims 3, 11, and 18: Adams teaches the method of claim 1; however, Adams is silent as to cost.
Yet, Efland teaches wherein the determining the path includes selecting the path from a plurality of paths using traversability costs associated with the plurality of paths ([0030] along with [0171] teaches that the "behavior plan for the ego vehicle ... defines the desired driving behavior of the ego vehicle (e.g., the ego vehicle's trajectory) for some future period of time (e.g., the next 5 seconds). ... Further, in practice, planning subsystem 602c may derive the behavior plan for vehicle 600 in various manners. For instance, as one possibility, planning subsystem 602c may be configured to derive the behavior plan for vehicle 600 by (i) deriving a plurality of different 'candidate' behavior plans for vehicle 600 based on the one or more derived representations of the vehicle's surrounding environment (and perhaps other data), (ii) evaluating the candidate behavior plans relative to one another (e.g., by scoring the candidate behavior plans using one or more cost functions) in order to identify which candidate behavior plan is most desirable when considering factors such as proximity to other objects, velocity, acceleration, time and/or distance to destination, road conditions, weather conditions, traffic conditions, and/or traffic laws, among other possibilities, and then (iii) selecting the candidate behavior plan identified as being most desirable as the behavior plan to use for vehicle 600."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Efland with the invention of Adams, because such a combination would provide functionality to a vehicle with an on-board computing system that is configured to perform functions such as localization, object detection, prediction, and path planning using a variety of data, including but not limited to sensor data captured by the vehicle and map data related to the vehicle's surrounding environment (see [0001], Efland).
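The candidate-plan scoring Efland describes in [0171] follows a standard pattern: generate several candidate paths, score each with one or more cost functions, and select the lowest-cost candidate. A minimal sketch under assumed inputs (the cost terms and weights are illustrative, not Efland's actual functions):

```python
# Illustrative cost-based path selection in the style the examiner cites
# from Efland [0171]; cost terms and weights are assumptions.
def traversability_cost(path, weights=(1.0, 0.5)):
    w_obstacle, w_length = weights
    # Sum a per-waypoint obstacle-proximity penalty plus a length penalty.
    obstacle_penalty = sum(wp["obstacle_proximity"] for wp in path)
    return w_obstacle * obstacle_penalty + w_length * len(path)

def select_path(candidate_paths):
    # Evaluate candidates relative to one another; keep the lowest cost.
    return min(candidate_paths, key=traversability_cost)

candidates = [
    [{"obstacle_proximity": 0.1}, {"obstacle_proximity": 0.2}],  # cost 1.3
    [{"obstacle_proximity": 0.9}],                               # cost 1.4
]
best = select_path(candidates)
print("selected path with cost", traversability_cost(best))
```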
Claim 8: Adams teaches the method of claim 1; however, Adams is silent on high-definition maps. Yet, Efland teaches wherein the semantic map includes a local high definition map stored on the machine ([0068] along with [0149] reads on this element as such: "the generated geometric data and the updated semantic data may be combined together into the final set of map data that defines the high-resolution map for the given real-world environment."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Efland with the invention of Adams, because such a combination would provide functionality to a vehicle with an on-board computing system that is configured to perform functions such as localization, object detection, prediction, and path planning using a variety of data, including but not limited to sensor data captured by the vehicle and map data related to the vehicle's surrounding environment (see [0001], Efland).

Claims 4, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of M. Scholtes et al., "6-Layer Model for a Structured Description and Categorization of Urban Traffic and Environment," IEEE Access, vol. 9, pp. 59131-59147, 2021, hereinafter "Scholtes."

Claims 4, 12, and 19: Adams teaches the method of claim 1; however, Adams does not teach layer-specific attribute information. Yet, Scholtes teaches wherein the semantic map includes one or more first layers comprising road geometry and one or more second layers comprising features corresponding to static objects in the environment, wherein the object information is stored in association with one or more static objects in the one or more second layers (Section IV.A, pp. 59135-59137, teaches layer 1 as a road network (i.e., road geometry), and Section IV.B, p. 59137, teaches layer 2 as roadside structures (i.e., static objects), while Section VI, pp. 59143-59144, further reads on layer 1 and layer 2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Scholtes with the invention of Adams, because such a combination would provide the possibility to categorize the environment and, therefore, function as a structured basis for subsequent scenario description (see Abstract, Scholtes).
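Scholtes's layered description separates the road network (layer 1) from roadside structures (layer 2). A toy sketch of how first and second map layers of the kind claimed might be organized; the layer names follow Scholtes, but the data structure itself is an assumption:

```python
# Toy layered semantic map: layer 1 holds road geometry, layer 2 holds
# static objects, echoing Scholtes's layers 1 and 2; structure is assumed.
layered_map = {
    "layer_1_road_network": {
        "lane_centerlines": [[(0.0, 0.0), (50.0, 0.0)]],  # road geometry
    },
    "layer_2_roadside_structures": {
        # object information stored in association with static objects
        "static_objects": [
            {"label": "street_sign", "location": (12.0, 3.5)},
            {"label": "building",    "location": (30.0, -8.0)},
        ],
    },
}

# Storing new object information in association with a second-layer object:
layered_map["layer_2_roadside_structures"]["static_objects"].append(
    {"label": "vegetation", "location": (22.0, 4.1)}
)
print(len(layered_map["layer_2_roadside_structures"]["static_objects"]))
```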
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Stojanovic et al., US 2019/0050648, hereinafter "Stojanovic."

Claims 6 and 14: Adams teaches the method of claim 5, further including determining one or more relative locations of the one or more objects in the environment ([0029] along with [0050] reads on this element as such: "In some cases, the frequency with which the location of the vehicle 104 is determined relative to the location of the feature (e.g., corner 132) of the repeating object (in this case lane markers 124) may be based on an amount of difference between an estimated location based on data provided by another sensor (e.g., odometer or wheel encoder) and the location determined using the described semantic segmentation techniques. ... In some examples, a known size or dimension of an object at a particular location in the environment may be used to determine a depth of a feature of an object relative to the vehicle 402 when determining a location of the vehicle 402."). Adams teaches the concepts of retrieving map data, ground truth, and relative locations of one or more objects in the environment; however, Adams does not explicitly recite the absolute locations.

Yet, Stojanovic teaches retrieving, from the semantic map, one or more absolute locations corresponding to one or more second objects in the environment (fig. 1 illustrates the semantic map requester/receiver (i.e., retrieving), while [0046] teaches: "That is, a semantic image is an image that encodes semantic representations of tangible objects. As noted herein, a corresponding semantic image may be generated from a corresponding visual image via semantic segmentation. As such, the semantic image data encoded in the pixel values of the semantic pixels encode semantic labels that are associated with tangible objects that were previously imaged (via a visual image camera) based on the detections of EM waves/photons that may have been reflected from and/or emitted by, and imaged in the corresponding visual image. ... At least based on the associated coordinate system, in addition to indicating semantic labels of tangible objects, the semantic representations included in a semantic map may indicate the absolute positions, with respect to the surface, of the corresponding tangible object."); and determining, using the one or more relative locations corresponding to the one or more objects and the one or more absolute locations corresponding to the one or more second objects, the one or more absolute locations corresponding to the one or more objects ([0034] along with [0104] describes this element as such: "Localizing an object may include determining an absolute position (or location) of the object, wherein the absolute position (or location) is with respect to the surface. The embodiments herein register, in the semantic-domain, a previously generated semantic map of the surface and real-time semantic images of the object's current environment. That is, based on correlating semantic features encoded in the semantic map and corresponding semantic features encoded in the real-time semantic image, a registration of the semantic map and the semantic images is generated. ... In response to determining that the label is indicated by both the first and second semantic representations, a spatial correspondence between the absolute position of the object and the relative position of the object may be generated. In some embodiments, a relative rotational correspondence between the absolute position of the object and the relative position of the object is also generated. An absolute position of the vehicle, with respect to the surface, may be determined based on the spatial correspondence between the absolute position of the object and the relative position of the object. An orientation of the vehicle may be determined based on the relative rotational correspondence between the absolute position of the object and the relative position of the object. That is, the vehicle may be localized via one or more correspondences between the first and second semantic representations of the object.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Stojanovic with the invention of Adams, because such a combination would provide the localization accuracy and precision required for safe and efficient navigation of autonomous vehicles (see [0003], Stojanovic).
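The Stojanovic scheme, as the examiner reads it, anchors relative observations to absolute map positions: once an observed object is matched to a labeled object in the semantic map, the offset between the map's absolute position and the observed relative position converts every other relative observation into an absolute location. A minimal 2D, translation-only sketch (the single-correspondence simplification and all values are assumptions):

```python
# Minimal 2D illustration of deriving absolute locations from one semantic
# correspondence; translation-only is a simplification assumed for clarity.
import numpy as np

# Semantic map: absolute position of a known second object (e.g., a sign).
map_sign_abs = np.array([105.0, 40.0])

# Perception: positions relative to the vehicle for the sign and another object.
sign_rel = np.array([5.0, 2.0])
other_obj_rel = np.array([-3.0, 7.0])

# The correspondence gives the vehicle's absolute position...
vehicle_abs = map_sign_abs - sign_rel
# ...which converts any relative observation into an absolute location.
other_obj_abs = vehicle_abs + other_obj_rel
print("vehicle:", vehicle_abs, "object:", other_obj_abs)
```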
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

S. Bittel, T. Rehfeld, M. Weber and J. M. Zöllner, "Estimating high definition map parameters with convolutional neural networks," 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 2017, pp. 52-56, doi: 10.1109/SMC.2017.8122577. This reference teaches a method to estimate abstract parameters of high-definition (HD) maps from sensor data, including the distance from the ego vehicle to the road boundary, the orientation of the ego vehicle with respect to lanes, the number of lanes, and the street type. The method is realized as a convolutional neural network (CNN) that takes pre-processed sensor information in the form of grid-map images as input. The estimated parameters can then be used either for localization or to validate existing map data.

Ilci, V.; Toth, C. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation. Sensors 2020, 20, 899. https://doi.org/10.3390/s20030899. This reference assesses the feasibility of creating a high-definition 3D map using only auto-industry-grade mobile LiDAR sensors; in other words, whether LiDAR sensors deployed on an AV can create an accurate mapping of the environment, i.e., the corridor the vehicle travels.

Kendall, Alex, Vijay Badrinarayanan, and Roberto Cipolla. "Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding." arXiv preprint arXiv:1511.02680 (2015). This reference teaches a deep learning framework for probabilistic pixel-wise semantic segmentation, termed Bayesian SegNet.

Badrinarayanan, Vijay, Ankur Handa, and Roberto Cipolla. "SegNet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling." arXiv preprint arXiv:1505.07293 (2015). This reference teaches a novel deep architecture, SegNet, for semantic pixel-wise image labelling.

Düser, US 2024/0311279. This reference teaches the 6-layer model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA D THOMAS, whose telephone number is (571) 272-8549. The examiner can normally be reached Monday-Friday, 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramya Burgess, can be reached at 571-272-6011. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.D.T/ Examiner, Art Unit 3661
/RUSSELL FREJD/ Primary Examiner, Art Unit 3661

Prosecution Timeline

Jul 11, 2024
Application Filed
Jan 03, 2026
Non-Final Rejection — §101, §102, §103
Mar 24, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589741
DRIVE ASSIST APPARATUS FOR VEHICLE AND DRIVE ASSIST SYSTEM FOR VEHICLE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12583448
OBSTACLE DETECTION CONTROLLER OF ARTICULATED VEHICLE, OPERATION SYSTEM OF ARTICULATED VEHICLE, AND OBSTACLE DETECTION METHOD OF ARTICULATED VEHICLE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12576836
CONDITIONAL OBJECT POSITION PREDICTION BY A MACHINE LEARNED MODEL
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12571185
PROPEL LIMITING SYSTEM AND METHOD FOR REAR COLLISION AVOIDANCE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12565236
METHOD FOR CONTROLLING AUTONOMOUS VEHICLE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 94% (+6.4%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 408 resolved cases by this examiner. Grant probability is derived from the career allow rate.
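The headline projections appear to compose additively; a quick sanity check, assuming the with-interview figure is simply the career allow rate plus the interview lift (an assumption about this dashboard's model, which is not disclosed):

```python
# Sanity check: career allow rate plus interview lift, rounded, matches the
# displayed with-interview probability. Additive composition is an assumption.
career_allow_rate = 0.88   # 359 granted / 408 resolved
interview_lift = 0.064
with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.1%}")  # 94.4%, displayed as 94%
```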
