Prosecution Insights
Last updated: April 19, 2026
Application No. 18/143,360

SYSTEMS AND METHODS FOR SAFE AND RELIABLE AUTONOMOUS VEHICLES

Non-Final OA: §102, §103
Filed: May 04, 2023
Examiner: PARIHAR, SUCHIN
Art Unit: 2851
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 88% (1001 granted / 1141 resolved; +19.7% vs TC avg; above average)
Interview Lift: +9.7% across resolved cases with interview (moderate, ~+10% lift)
Typical Timeline: 2y 6m avg prosecution; 35 currently pending
Career History: 1176 total applications across all art units

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 17.4% (-22.6% vs TC avg)
§102: 55.7% (+15.7% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates, based on career data from 1141 resolved cases.
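The deltas above are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute. A quick sketch of that check (the dictionary names are illustrative, not taken from the dashboard):

```python
# Examiner rejection rates (%) per statute and deltas vs the Tech Center
# average, copied from the table above.
examiner_rate = {"101": 15.8, "103": 17.4, "102": 55.7, "112": 7.7}
delta_vs_tc   = {"101": -24.2, "103": -22.6, "102": 15.7, "112": -32.3}

# Recover the implied Tech Center average estimate for each statute.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute recovers the same 40.0% baseline estimate
```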

Office Action

DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Non-Final Office action is in response to application 18/143,360, filed on 05/04/2023, and the Preliminary Amendment filed on 10/24/2023.

3. In the Preliminary Amendment, claims 1-20 are presented as new. Examiner notes that there do not appear to be any claims presented in the original claim set filed on 05/04/2023.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on 09/11/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6. Claims 1, 4, 7-12, and 14-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Moloney et al. (US PG Pub No. 2021/0166464).

7.
With respect to independent claim 1, Moloney teaches: A system-on-a-chip (on-chip network, para 49; system on a chip SoC, para 151) for an autonomous vehicle (autonomous vehicles using system-on-chip SoC processors, para 148-151) including: at least one central processing unit (CPU) cluster or CPU complex supporting virtualization (see processors, clusters, para 55-64, 149-151; see CPU, para 52, 92; see virtualization, para 155-164), wherein the CPU cluster or CPU complex includes multiple CPU cores and associated caches (see processor with cache, para 174), at least one graphics processing unit (GPU) providing multi-core parallel processing (see parallel processing GPUs, para 49), an embedded hardware accelerator cluster (see embedded accelerators, para 67-68), and at least one memory device interface structured to connect the system-on-a-chip to at least one memory device storing instructions that when executed by the system-on-the-chip (see system on chip, host processors, and associated memory, para 49), configure the system-on-a-chip to operate as an autonomous vehicle controller configured to receive optical sensor data (see camera sensors, plurality of sensors and cameras, working with CPU, GPU, and system-on-chip to operate autonomous vehicle controller, para 50, 91), wherein the at least one CPU cluster or complex, the at least one GPU providing multi-core parallel processing, and the at least embedded hardware accelerator cluster comprising the system-on-a-chip (see hardware accelerator for computer vision and graphics, para 46-48), interoperate to process at least the received optical sensor data to perform autonomous driving (see camera sensors, plurality of sensors and cameras, working with CPU, GPU, and system-on-chip to operate autonomous vehicle, para 50, 91; operating a vehicle to self-navigate, autonomous navigation, para 93). 8. 
With respect to independent claim 4, Moloney teaches: A system-on-a-chip (on-chip network, para 49; system on a chip SoC, para 151) for use in an autonomous vehicle (autonomous vehicles using system-on-chip SoC processors, para 148-151) including: at least one central processing unit (CPU) (see processors, clusters, para 55-64, 149-151; see CPU, para 52, 92; see virtualization, para 155-164), at least one programmable graphics processing unit (GPU) providing parallel processing (see parallel processing GPUs, para 49) and configured to use a tensor instruction set (see GPU using tensor data, para 91) including mixed-precision processing cores (see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120) partitioned into multiple processing blocks (see processing blocks, para 75-79, 90-95), at least one programmable vision accelerator and/or at least one deep learning accelerator (see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48), and at least one memory device interface structured to connect the system-on-a-chip to at least one memory device storing program code that when executed by the system-on-the-chip (see system on chip, host processors, and associated memory, para 49), configures the system-on-a-chip to operate as an autonomous vehicle controller configured to receive optical sensor data (see camera sensors, plurality of sensors and cameras, working with CPU, GPU, and system-on-chip to operate autonomous vehicle controller, para 50, 91), wherein the at least one CPU, the at least one GPU, and the at least one programmable vision accelerator and/or the at least one deep learning accelerator (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48), interoperate to process the optical sensor data and a trajectory estimation and/or route plan to provide autonomous driving control of an automobile (see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

9. With respect to claim 7, Moloney teaches: The system-on-a-chip of claim 4 wherein the at least one GPU providing parallel processing is programmable and the deep learning accelerator comprises a tensor processing unit configured to execute the neural networks based on a tensor instruction set (see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

10. With respect to claim 8, Moloney teaches: The system-on-a-chip of claim 4 wherein the at least one GPU is power-optimized for performance in automotive embedded use applications (see reduction of power dissipation, para 60, 71, 74).

11. With respect to claim 9, Moloney teaches: The system-on-a-chip of claim 4 wherein the at least one GPU is fabricated on a FinFET (Fin field effect transistor) high-performance manufacturing process (see FINFET technology, para 160).

12.
With respect to independent claim 10, Moloney teaches: A system-on-a-chip for an autonomous vehicle (on-chip network, para 49; system on a chip SoC, para 151; autonomous vehicles using system-on-chip SoC processors, para 148-151) including: at least one central processing unit (CPU) (see processors, clusters, para 55-64, 149-151; see CPU, para 52, 92; see virtualization, para 155-164), at least one graphics processing unit (GPU) providing parallel processing (see parallel processing GPUs, para 49), a cache available to both the at least one CPU and the at least one GPU (see processors with cache, para 174), an embedded hardware accelerator cluster comprising at least one hardware-based accelerator configured to accelerate neural networks and/or accelerate programmable vision (see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48); and at least one memory device interface structured to connect the system-on-a-chip to at least one memory device storing code that when executed by the system-on-the-chip (see system on chip, host processors, and associated memory, para 49), configures the system-on-a-chip to operate as an autonomous vehicle controller configured to receive sensor data (see camera sensors, plurality of sensors and cameras, working with CPU, GPU, and system-on-chip to operate autonomous vehicle controller, para 50, 91), wherein the at least one CPU, the at least one GPU providing parallel processing (see parallel processing GPUs, para 49), and the at least one hardware-based accelerator (see hardware accelerator for computer vision and graphics, para 46-48) interoperate to process the received sensor data to perform autonomous driving (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

13. With respect to independent claim 11, Moloney teaches: A system-on-a-chip for an autonomous vehicle (on-chip network, para 49; system on a chip SoC, para 151; autonomous vehicles using system-on-chip SoC processors, para 148-151) including: at least one central processing unit (CPU) (see processors, clusters, para 55-64, 149-151; see CPU, para 52, 92; see virtualization, para 155-164), at least one graphics processing unit (GPU) providing parallel processing cores (see parallel processing GPUs, para 49), an embedded hardware accelerator cluster comprising at least one hardware-based accelerator (see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48), and at least one memory device interface structured to connect the system-on-a-chip (see system on chip, host processors, and associated memory, para 49) to at least one memory device storing program instructions that when executed by the system-on-the-chip, configure the system-on-a-chip to operate as an autonomous vehicle controller configured to receive sensor data (see camera sensors, plurality of sensors and cameras, working with CPU, GPU, and system-on-chip to operate autonomous vehicle controller, para 50, 91), wherein the at least one CPU, the at least one GPU providing parallel processing cores, and the at least one hardware-based accelerator (see hardware accelerator for computer vision and graphics, para 46-48) interoperate to process at least the received sensor data to perform autonomous driving of an automobile (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

14. With respect to claim 12, Moloney teaches: The system-on-a-chip of claim 11 wherein the at least one hardware-based accelerator includes one or more tensor processing units (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

15. With respect to claim 14, Moloney teaches: The system-on-a-chip of claim 11 wherein the at least one accelerator is configured to accelerate computer vision algorithms for autonomous driving (see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

16. With respect to claim 15, Moloney teaches: The system-on-a-chip of claim 11 wherein the at least one GPU is power-optimized for performance in automotive embedded use applications (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

17. With respect to claim 16, Moloney teaches:
The system-on-a-chip of claim 11 wherein the at least one GPU is fabricated on a FinFET (Fin field effect transistor) high-performance manufacturing process (see FINFET technology, para 160).

18. With respect to independent claim 17, Moloney teaches: A system-on-a-chip for an autonomous vehicle (on-chip network, para 49; system on a chip SoC, para 151; autonomous vehicles using system-on-chip SoC processors, para 148-151) including: at least one central processing unit (CPU) supporting virtualization (see processors, clusters, para 55-64, 149-151; see CPU, para 52, 92; see virtualization, para 155-164), at least one graphics processing unit (GPU) providing parallel processing (see parallel processing GPUs, para 49), a cache memory available to both the at least one CPU and the at least one GPU (see processors with cache, para 174), at least one accelerator (see hardware accelerator for computer vision and graphics, para 46-48), and at least one memory device interface structured to connect the system-on-a-chip (see system on chip, host processors, and associated memory, para 49) to at least one memory device storing instructions that when executed by the system-on-the-chip, configure the system-on-a-chip to operate as an autonomous vehicle controller configured to receive LIDAR sensor data (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120), wherein the at least one CPU, the at least one GPU providing parallel processing (see parallel processing GPUs, para 49), and the at least one accelerator (see hardware accelerator for computer vision and graphics, para 46-48) interoperate to process the LIDAR sensor data and a trajectory estimation and/or route planning to provide autonomous driving (see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

19. With respect to claim 18, Moloney teaches: The system-on-a-chip of claim 17 wherein the accelerator is configured to accelerate computer vision algorithms for autonomous driving (machine-learning accelerator, para 92; see embedded accelerators, para 67-68; see hardware accelerator for computer vision and graphics, para 46-48; see depth and vision processing, camera sensors and GPU output, scene geometry and maps, para 50; see different precisions, para 78; see precision measurements from 2D and 3D volumetric shapes using embedded devices, LIDAR, resulting maps, para 120).

20. With respect to claim 19, Moloney teaches: The system-on-a-chip of claim 17 wherein the accelerator is configured for deep neural network acceleration (see neural network acceleration, para 56).

21. With respect to claim 20, Moloney teaches: The system-on-a-chip of claim 17 wherein the system-on-a-chip is configured to comprise at least a part of an autonomous vehicle controller (on-chip network, para 49; system on a chip SoC, para 151; autonomous vehicle control using system-on-chip SoC processors, para 148-151).

Claim Rejections - 35 USC § 103

22. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

23. Claims 2 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Moloney et al. (US PG Pub No. 2021/0166464) in view of CREUSOT (US PG Pub No. 2017/0242436).

24. With respect to claim 2, while Moloney appears to be silent regarding the limitations recited in claim 2, CREUSOT teaches: The system-on-a-chip of claim 1 wherein the system-on-a-chip is configured to enable the autonomous vehicle controller to be substantially compliant with level 5 full autonomous driving as defined by SAE specification J3016 (SAE J3016 Standard, level 5 automation, para 22).
It would have been obvious to one of ordinary skill in the art before the time of the invention to have incorporated CREUSOT's SAE J3016 standard into the autonomous system controller of Moloney for at least the following reason(s): CREUSOT provides a standard for autonomous vehicle control that can accommodate road construction, which would be a desirable improvement in the art.

25. With respect to claim 6, while Moloney appears to be silent regarding the limitations recited in claim 6, CREUSOT teaches: The system-on-a-chip of claim 4 wherein the system-on-a-chip includes at least one memory device connected to the system-on-a-chip, the memory device storing instructions that when executed by the CPU and/or the GPU provides autonomous vehicle control that is substantially compliant with level 5 full autonomous driving as defined by SAE specification J3016 (SAE J3016 Standard, level 5 automation, para 22). It would have been obvious to one of ordinary skill in the art before the time of the invention to have incorporated CREUSOT's SAE J3016 standard into the autonomous system controller of Moloney for at least the following reason(s): CREUSOT provides a standard for autonomous vehicle control that can accommodate road construction, which would be a desirable improvement in the art.

26. Claims 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Moloney et al. (US PG Pub No. 2021/0166464) in view of Habel et al. (US PG Pub No. 2015/0033357).

27. With respect to claim 3, while Moloney appears to be silent regarding the limitations recited in claim 3, Habel teaches: The system-on-a-chip of claim 1 wherein the system-on-a-chip is configured to enable the autonomous vehicle controller to be substantially compliant with integrity level "D" defined by ISO Standard 26262 (autonomous control, ISO 26262, level D, para 3-4). It would have been obvious to one of ordinary skill in the art before the time of the invention to have incorporated Habel's ISO 26262 standard into the autonomous system controller of Moloney for at least the following reason(s): as described in Habel, it was well known to utilize standards such as ISO 26262 to provide a focus on safety and stabilization of acceleration in autonomous control systems such as those described in both Moloney and Habel.

28. With respect to claim 5, while Moloney appears to be silent regarding the limitations recited in claim 5, Habel teaches: The system-on-a-chip of claim 4 wherein the at least one CPU, the at least one GPU, and the at least one programmable vision accelerator and/or the at least one deep learning accelerator are structured and interconnected to be substantially compliant with integrity level "D" defined by Standard 26262 of the International Organization for Standardization (autonomous control, ISO 26262, level D, para 3-4). It would have been obvious to one of ordinary skill in the art before the time of the invention to have incorporated Habel's ISO 26262 standard into the autonomous system controller of Moloney for at least the following reason(s): as described in Habel, it was well known to utilize standards such as ISO 26262 to provide a focus on safety and stabilization of acceleration in autonomous control systems such as those described in both Moloney and Habel.

29. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Moloney et al. (US PG Pub No. 2021/0166464) in view of KUNDU et al. (US PG Pub No. 2018/0314940).

30.
With respect to claim 13, while Moloney appears to be silent regarding the limitations recited in claim 13, KUNDU teaches: The system-on-a-chip of claim 12 wherein the one or more tensor processing units are configured for supporting INT8/INT16/FP16 data type for both features and weights (see tensor weights, INT8, INT16, para 170-179, 215). It would have been obvious to one of ordinary skill in the art before the time of the invention to have incorporated KUNDU's tensor weights at INT8/INT16 standard into the autonomous system controller of Moloney for at least the following reason(s): KUNDU illustrates that conversion to weight tensors that improve precision can reduce loss accumulation across layers of parallel computation, which may improve the GPU computational model of Moloney and improve processing speeds in an autonomous vehicle control unit.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUCHIN PARIHAR, whose telephone number is (703) 756-1970. The examiner can normally be reached M-F, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jack Chiang, can be reached at 571-272-7483. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SUCHIN PARIHAR/
Primary Examiner, Art Unit 2851

Prosecution Timeline

May 04, 2023
Application Filed
Oct 24, 2023
Preliminary Amendment
Feb 20, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603465
AUTOMOTIVE DC/AC POWER INVERTER AND POWER OUTLET WITH PLUG-DETECT MODE
2y 5m to grant; granted Apr 14, 2026
Patent 12596945
METHOD AND SYSTEM FOR COMPILING BARE QUANTUM-LOGIC CIRCUITS
2y 5m to grant; granted Apr 07, 2026
Patent 12594849
OVERHEAD CHARGING APPARATUS FOR ELECTRIC VEHICLES
2y 5m to grant; granted Apr 07, 2026
Patent 12591727
LANE REPAIR AND LANE REVERSAL IMPLEMENTATION FOR DIE-TO-DIE (D2D) INTERCONNECTS
2y 5m to grant; granted Mar 31, 2026
Patent 12591729
ALIGNMENT OF MACROS BASED ON ANCHOR LOCATIONS
2y 5m to grant; granted Mar 31, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+9.7%): 97%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 1141 resolved cases by this examiner. Grant probability derived from career allow rate.
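The headline projections can be reproduced from the career counts shown above. A minimal sketch, assuming the 97% figure is the unrounded allow rate plus the 9.7-point interview lift, capped at 100% (the cap is an assumption, not a dashboard rule):

```python
granted, resolved = 1001, 1141      # career counts from the dashboard
base = granted / resolved           # ~0.877, displayed rounded to 88%

interview_lift = 0.097              # +9.7 percentage points (dashboard figure)
with_interview = min(base + interview_lift, 1.0)  # assumed cap at 100%

print(f"base {base:.0%}, with interview {with_interview:.0%}")  # base 88%, with interview 97%
```

Note that adding the lift to the rounded 88% would give 97.7% (rounding to 98%); only the unrounded base reproduces the displayed 97%.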
