Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,546

HEALTH AND ERROR MONITORING OF SENSOR FUSION SYSTEMS

Non-Final OA — §102, §103
Filed: Mar 08, 2024
Examiner: RUSIN, KAYO LISA
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (21 granted / 23 resolved; +36.3% vs TC avg), above average
Interview Lift: +13.3% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 3m (typical timeline)
Career History: 33 total applications across all art units; 10 currently pending

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 23 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Interview

Regarding the telephonic interview conducted on January 16th, 2026, proposed amendments to independent claim 1 were discussed. The Examiner noted that the proposed amendment would most likely overcome the 35 U.S.C. 112 rejection. The Examiner did not make such comments regarding the 35 U.S.C. 103 rejection, noting instead that the proposed amendment would add more detail to independent claim 1, which would require further search.

Response to Amendment

Applicant's arguments filed February 24th, 2026 have been fully considered. Regarding the 35 U.S.C. 112(b) rejection, the necessary amendments were made and the Examiner withdraws the rejection. Regarding the 35 U.S.C. 103 rejections, the Examiner maintains the rejection.

The Applicant argues that because the following limitation of claim 9 was recited as allowable subject matter in the prior Office Action, including a similar limitation would render independent claim 1 allowable (claim limitation from claim 9): "determining a first number of cycles during which the one or more first objects are determined as invalid; and in response to determining that the first number of cycles is equal to a first threshold, determining the one or more fused objects as invalid."

However, the Examiner respectfully disagrees. When the limitation of claim 9 is taken as a whole, it makes clear that the first objects are separate from the fused objects, and that a distinct sequencing is involved in their determination: the first objects serve as input to the process that outputs the fused objects. Because the limitations that supply this context in claim 9 are not present in claim 1, the language brought into claim 1 can be read broadly. For instance, the first object is limited in scope only in that it is based on the perception data and found within the plurality of execution cycles. Under that broad interpretation, the fusion result itself can be interpreted as the first object recited in claim 1. For these reasons, the Examiner finds that the claim language added to claim 1 differs in scope from that presented in claim 9. Furthermore, because the amended claim 1 is found in the prior art as described in the sections below, it is not considered allowable subject matter.

Allowable Subject Matter

Claims 9-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the elements of dependent claims 9, 10, and 11 were neither found through a search of the prior art nor considered obvious by the Examiner.
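Read literally, the disputed limitation describes debounce-style logic: count the cycles in which the sensor-level ("first") objects are judged invalid, and invalidate the fused objects once that count reaches a threshold. The following is a minimal sketch under that reading; the threshold value, the reset-on-valid behavior, and all names are hypothetical and not taken from the application:

```python
# Hypothetical sketch of the claim 9 cycle-counting invalidation logic.
# Assumption (not in the claim text): the counter resets whenever the
# first objects are valid again, i.e. invalid cycles are counted
# consecutively.

FIRST_THRESHOLD = 5  # hypothetical "first threshold" in cycles

class FusionValidityMonitor:
    def __init__(self, threshold: int = FIRST_THRESHOLD):
        self.threshold = threshold
        self.invalid_cycle_count = 0  # the claim's "first number of cycles"

    def update(self, first_objects_valid: bool) -> bool:
        """Run once per execution cycle; returns fused-object validity."""
        if first_objects_valid:
            self.invalid_cycle_count = 0
            return True
        self.invalid_cycle_count += 1
        # "in response to determining that the first number of cycles is
        # equal to a first threshold, determining the one or more fused
        # objects as invalid"
        return self.invalid_cycle_count < self.threshold
```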
In particular, the prior art of record does not teach or suggest, in combination with the remaining limitations and in the context of their claims as a whole:

Claim 9: "wherein the output data is generated at least in part, by: detecting, during the plurality of execution cycles, one or more fused objects by performing fusion of at least the one or more first objects and one or more second objects, the one or more first objects being detected based at least on the perception data from one or more first sensors of a machine, the one or more second objects being detected based at least on data from one or more third sensors of the machine; determining, during the plurality of execution cycles, that the one or more first objects are invalid; determining the first number of cycles during which the one or more first objects are determined as invalid; and in response to determining that the first number of cycles is equal to a first threshold, determining the one or more fused objects as invalid."

Claims 10 and 11 depend on claim 9 and thus inherit its limitations.

The most relevant prior art, Ditty (complete reference described further in the Office action below), teaches using a confidence level to determine when a detected fused object is invalid ([00197]); however, this is different from the claim limitation. Other prior art has been examined. For instance, Roheda et al. ("Decision Level Fusion: An Event Driven Approach," 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 2018) teaches decision-level fusion, which explicitly teaches detecting the two objects from raw sensor data before they are used as input to generate the fused object (Roheda, page 2598). Another example is Jahja et al. ("Kalman Filter, Sensor Fusion, and Constrained Regression: Equivalences and Insights," 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada), which teaches the use of a Kalman filter when combining sensor data. Although a Kalman filter involves continuously gathering data and adjusting the estimate and its confidence over time, it does not teach details such as comparing the number of cycles during which the first object is determined invalid against a threshold and invalidating the fused object when the two are equal.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 8, and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being unpatentable over Ditty et al. (WO 2019094843 A1), henceforth referred to as Ditty.

Per claim 1, Ditty teaches one or more processors ([00291] processors), comprising: one or more circuits ([00191] circuitry) to: receive perception data obtained using one or more first sensors of a machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... one or more Light Detection and Ranging ("LIDAR") sensors); receive position data obtained using one or more second sensors of the machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... GPS unit); generate output data by, at least in part, performing, during a plurality of execution cycles, fusion of at least the perception data and the position data ([00230] pre-processing may include sensor fusion, which may be used to combine the outputs of different sensors, such as by using Kalman filtering, artificial intelligence, or the like, in order to learn more from a combination of sensors than is possible from any individual sensor and to increase performance. Although the "plurality of execution cycles" is not explicitly mentioned, the prior art provides numerous examples, such as [0019] and [0020], in which automatic emergency braking and lane-departure warning features, respectively, use newly incoming sensor data in order to function. Because sensor data is gathered continuously, this teaches the plurality of execution cycles); evaluate a plurality of criteria according to at least a subset of the perception data, the position data, and the output data, the plurality of criteria corresponding to functionality and safety of performing the fusion ([00247] the Advanced SoCs and dGPUs may use deep neural networks to perform some, or all, of the high-level functions necessary for autonomous vehicle control, including lane detection, object detection, and/or free space detection; the GPU complex is further configured to run trained neural networks to perform any AI function desired for vehicle control, vehicle management, or safety, including the functions of perception, planning, and control); output a signal according to the evaluation ([00152] outputs a signal); and based at least on the signal including an error signal and on a first number of cycles, among the plurality of execution cycles, during which one or more first objects detected based at least on the perception data are determined as invalid: set validity information of the result of the fusion ([00152] the deep-learning infrastructure is capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and associated hardware. For example, the deep-learning infrastructure receives periodic updates including the objects that have been located. The deep-learning infrastructure runs its own neural network to identify the objects and compares them with the identified objects; if the results do not match, the infrastructure concludes that the AI, which includes the fusion function, is malfunctioning. Although not explicitly stated, the identification of a malfunction teaches setting the validity information of the result of the fusion, since it would yield the predictable result of allowing the information to persist).

Per claim 2, Ditty teaches the one or more processors of claim 1, wherein the position data is monitored for a period of time shorter than a period of time for which the perception data is monitored ([00464] a temporal sliding window can be used over a Kalman filter when analyzing data. The example embodiment shows a case in which the perception data (for lane centers and lane edges) does not change temporarily across time compared to motion/position data. The Examiner's interpretation is that, due to the nature of the data, it is appropriate to monitor the perception data for a longer period when considering a temporal sliding window).

Per claim 3, Ditty teaches the one or more processors of claim 1, wherein the one or more first sensors comprise at least one of a RADAR sensor, a light detection and ranging (LiDAR) sensor, an ultrasonic sensor, a stereo camera, a wide-view camera, an infrared camera, a surround camera, a long-range camera, or a mid-range camera ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... one or more Light Detection and Ranging ("LIDAR") sensors).

Per claim 4, Ditty teaches the one or more processors of claim 1, wherein: the one or more circuits are to detect a first object from the perception data ([00594] example of the system using a computer vision algorithm to detect objects, such as vehicle detection); and the plurality of criteria corresponding to the perception data comprise at least one of validity of the perception data, whether data is missing in the perception data, whether the perception data is stale, validity of a timestamp, delay of a timestamp, position of the first object within a range of a predetermined position, velocity of the first object within a range of a predetermined velocity, acceleration of the first object within a range of a predetermined acceleration, vertical position of the first object with respect to a ground, size of the first object, or class of the first object ([00363] sensor data includes information regarding whether the frame has been dropped, which would affect the integrity of the data; the Examiner's interpretation is that this relates to the validity of the data).

Per claim 5, Ditty teaches the one or more processors of claim 1, wherein the one or more second sensors comprise at least one of a global navigation satellite systems (GNSS) sensor, a Global Positioning System sensor, an inertial measurement unit (IMU) sensor, an accelerometer, a gyroscope, a magnetic compass, a magnetometer, a microphone, a speed sensor, a vibration sensor, a steering sensor, or a brake sensor ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... GPS unit).
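The per-cycle criteria list mapped for claim 4 can be pictured concretely as a set of plausibility checks. The following is a minimal sketch; the field names and limits are hypothetical and are not taken from the application or from Ditty:

```python
from dataclasses import dataclass

# Hypothetical limits for the kinds of checks claim 4 lists
# (staleness, missing data, timestamp validity, kinematic ranges).
MAX_AGE_S = 0.2          # hypothetical staleness limit
MAX_RANGE_M = 200.0      # hypothetical position range
MAX_SPEED_MPS = 70.0     # hypothetical velocity range
MAX_ACCEL_MPS2 = 12.0    # hypothetical acceleration range

@dataclass
class DetectedObject:
    timestamp: float
    position_m: float
    velocity_mps: float
    accel_mps2: float

def evaluate_perception_criteria(obj: DetectedObject | None,
                                 now: float) -> list[str]:
    """Return the list of failed criteria; empty means the object passes."""
    errors: list[str] = []
    if obj is None:
        return ["missing_data"]
    if obj.timestamp > now:
        errors.append("invalid_timestamp")   # timestamp from the future
    elif now - obj.timestamp > MAX_AGE_S:
        errors.append("stale_data")
    if abs(obj.position_m) > MAX_RANGE_M:
        errors.append("position_out_of_range")
    if abs(obj.velocity_mps) > MAX_SPEED_MPS:
        errors.append("velocity_out_of_range")
    if abs(obj.accel_mps2) > MAX_ACCEL_MPS2:
        errors.append("acceleration_out_of_range")
    return errors
```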
Per claim 6, Ditty teaches the one or more processors of claim 1, wherein the plurality of criteria according to the position data comprises at least one of validity of data, whether data is missing, whether data is stale, velocity of the machine within a range of a predetermined velocity, or acceleration of the machine within a range of a predetermined acceleration ([00363] a software-based watchdog performs integrity checks and detects the occurrence of integrity errors, which the Examiner interprets as including checking the criteria related to the validity of data and whether the data is missing).

Per claim 8, Ditty teaches the one or more processors of claim 1, wherein in response to the error signal, the one or more circuits are to perform at least one of: degrading one or more functions of a system performing the fusion; or sending one or more health messages to a health server for debugging purposes ([00320] a safety monitor (watchdog) server monitors all the other functions running on the CPU cores and reports status to on-chip hardware or other resources; see also [00341] and [00344] for additional information on the watchdog).

Per claim 12, Ditty teaches the one or more processors of claim 1, wherein the one or more processors is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing one or more simulation operations; a system for performing one or more digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing one or more deep learning operations; a system for generating or presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing one or more conversational AI operations; a system implementing one or more large language models (LLMs); a system implementing one or more language models; a system for performing one or more generative AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources ([00157] includes a software system with sensors to gather perception data for autonomous or semi-autonomous vehicles).

Per claim 13, Ditty teaches a system, comprising: one or more processors ([00291] processors) to perform operations comprising: receiving perception data from one or more first sensors of a machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... one or more Light Detection and Ranging ("LIDAR") sensors); receiving position data from one or more second sensors of the machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... GPS unit); generating output data by performing, during a plurality of execution cycles, fusion of at least the perception data and the position data ([00230] pre-processing may include sensor fusion, which may be used to combine the outputs of different sensors, such as by using Kalman filtering, artificial intelligence, or the like, in order to learn more from a combination of sensors than is possible from any individual sensor and to increase performance. Although the "plurality of execution cycles" is not explicitly mentioned, the prior art provides numerous examples, such as [0019] and [0020], in which automatic emergency braking and lane-departure warning features, respectively, use newly incoming sensor data in order to function. Because sensor data is gathered continuously, this teaches the plurality of execution cycles); evaluating a plurality of criteria according to at least a subset of the perception data, the position data, and the output data, the plurality of criteria corresponding to functionality and safety of performing the fusion ([00247] the Advanced SoCs and dGPUs may use deep neural networks to perform some, or all, of the high-level functions necessary for autonomous vehicle control, including lane detection, object detection, and/or free space detection; the GPU complex is further configured to run trained neural networks to perform any AI function desired for vehicle control, vehicle management, or safety, including the functions of perception, planning, and control); outputting a signal according to the evaluation ([00152] outputs a signal); and based at least on the signal including an error signal and on a first number of cycles, among the plurality of execution cycles, during which one or more first objects detected based at least on the perception data are determined as invalid, setting validity information of the result of the fusion ([00152] the deep-learning infrastructure is capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and associated hardware. For example, the deep-learning infrastructure receives periodic updates including the objects that have been located. The deep-learning infrastructure runs its own neural network to identify the objects and compares them with the identified objects; if the results do not match, the infrastructure concludes that the AI, which includes the fusion function, is malfunctioning. Although not explicitly stated, the identification of a malfunction teaches setting the validity information of the result of the fusion, since it would yield the predictable result of allowing the information to persist).

Per claim 14, the claim recites limitations similar to those of claim 2 and is thus rejected for reasons similar to claim 2. Per claim 15, the claim recites limitations similar to those of claim 4 and is thus rejected for reasons similar to claim 4. Per claim 16, the claim recites limitations similar to those of claim 6 and is thus rejected for reasons similar to claim 6. Per claim 17, the claim recites limitations similar to those of claim 8 and is thus rejected for reasons similar to claim 8. Per claim 18, the claim recites limitations similar to those of claim 12 and is thus rejected for reasons similar to claim 12.
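Ditty's paragraph [00230], cited throughout these mappings for the fusion step, describes combining sensor outputs by Kalman filtering. As a toy illustration only, and not Ditty's or the application's actual implementation, here is a one-dimensional Kalman-style fusion of a perception-derived position and a GPS-derived position; all values are hypothetical:

```python
# Toy 1-D Kalman-style measurement update fusing two noisy estimates.
# The estimate with lower variance receives more weight.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two Gaussian estimates of the same quantity."""
    k = var_a / (var_a + var_b)      # Kalman gain
    fused_est = est_a + k * (est_b - est_a)
    fused_var = (1.0 - k) * var_a    # fused variance is always reduced
    return fused_est, fused_var

# Hypothetical example: perception says 10.2 m (variance 0.5),
# GPS says 9.8 m (variance 0.3).
est, var = fuse(10.2, 0.5, 9.8, 0.3)  # est = 9.95, var = 0.1875
```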
Per claim 19, Ditty teaches a method comprising: receiving perception data from one or more first sensors of a machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... one or more Light Detection and Ranging ("LIDAR") sensors); receiving position data from one or more second sensors of the machine ([00113] controller generates autonomous driving outputs in response to an array of sensor inputs including ... GPS unit); generating output data by performing, during a plurality of execution cycles, fusion of at least the perception data and the position data ([00230] pre-processing may include sensor fusion, which may be used to combine the outputs of different sensors, such as by using Kalman filtering, artificial intelligence, or the like, in order to learn more from a combination of sensors than is possible from any individual sensor and to increase performance. Although the "plurality of execution cycles" is not explicitly mentioned, the prior art provides numerous examples, such as [0019] and [0020], in which automatic emergency braking and lane-departure warning features, respectively, use newly incoming sensor data in order to function. Because sensor data is gathered continuously, this teaches the plurality of execution cycles); evaluating a plurality of criteria according to at least a subset of the perception data, the position data, and the output data, the plurality of criteria corresponding to functionality and safety of performing the fusion ([00247] the Advanced SoCs and GPUs may use deep neural networks to perform some, or all, of the high-level functions necessary for autonomous vehicle control, including lane detection, object detection, and/or free space detection; the GPU complex is further configured to run trained neural networks to perform any AI function desired for vehicle control, vehicle management, or safety, including the functions of perception, planning, and control); outputting a signal according to the evaluation ([00152] outputs a signal); and based at least on the signal including an error signal and on a first number of cycles, among the plurality of execution cycles, during which one or more first objects detected based at least on the perception data are determined as invalid, setting validity information of the result of the fusion ([00152] the deep-learning infrastructure is capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and associated hardware. For example, the deep-learning infrastructure receives periodic updates including the objects that have been located. The deep-learning infrastructure runs its own neural network to identify the objects and compares them with the identified objects; if the results do not match, the infrastructure concludes that the AI, which includes the fusion function, is malfunctioning. Although not explicitly stated, the identification of a malfunction teaches setting the validity information of the result of the fusion, since it would yield the predictable result of allowing the information to persist).

Per claim 20, the claim recites limitations similar to those of claim 8 and is thus rejected for reasons similar to claim 8.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ditty in view of Jang, Jinhwan ("Pavement slipperiness detection using wheel speed and acceleration sensor data," Transportation Research Interdisciplinary Perspectives, Volume 11, 2021, 100431, ISSN 2590-1982, https://doi.org/10.1016/j.trip.2021.100431), henceforth referred to as Jang.

Per claim 7, Ditty teaches the one or more processors of claim 1, wherein: the one or more circuits are to detect a first object from the output data ([00157] example of using fusion data to detect objects). Ditty fails to explicitly teach the plurality of criteria according to the output data comprise at least one of whether a system time increases between fusion cycles, whether a time difference between input modality data is larger than a threshold, whether a prediction time is larger than a threshold, whether a gap between positions of the first object is greater than a threshold, whether a gap between velocities of the first object is greater than a threshold, or whether a gap between accelerations of the first object is greater than a threshold.

However, Jang teaches these criteria ([Abstract] the paper teaches a wheel-acceleration-based approach for detecting road slipperiness using sensor data; page 2, "Wheel slip-based approach" section, first paragraph, teaches that the difference between the speeds is evaluated; page 3, first column, last paragraph of the "Wheel slip-based approach" section, teaches checking whether the wheel slip aggregation exceeds a predefined threshold). Prior to the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Ditty to incorporate the teachings of Jang, because Ditty teaches detecting slippery road conditions as one of the uses for sensor fusion algorithms (Ditty, [00157]-[00158]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYO LISA RUSIN, whose telephone number is (703) 756-1679. The examiner can normally be reached Monday-Friday, 8:30-5:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ashish Thomas, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/K.L.R./ Examiner, Art Unit 2114
/ASHISH THOMAS/ Supervisory Patent Examiner, Art Unit 2114
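As a closing illustration of the claim 8 behavior mapped above (responding to an error signal by degrading fusion-dependent functions or sending a health message for debugging), here is a minimal sketch; the class, mode names, and message format are hypothetical and not from the application or from Ditty:

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    DEGRADED = "degraded"

@dataclass
class FusionSystem:
    mode: Mode = Mode.NOMINAL
    # Stand-in for messages sent to a health server for debugging.
    health_log: list = field(default_factory=list)

    def on_error_signal(self, cycle: int) -> None:
        """React to an error signal: degrade and report, per claim 8's options."""
        self.mode = Mode.DEGRADED
        self.health_log.append({"cycle": cycle, "event": "fusion_error"})
```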

Prosecution Timeline

Mar 08, 2024
Application Filed
May 12, 2025
Non-Final Rejection — §102, §103
Aug 15, 2025
Response Filed
Nov 19, 2025
Final Rejection — §102, §103
Jan 14, 2026
Examiner Interview Summary
Jan 14, 2026
Applicant Interview (Telephonic)
Feb 24, 2026
Request for Continued Examination
Mar 08, 2026
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591500
Event Monitoring and Code Autocorrecting Batch Processing System
2y 5m to grant Granted Mar 31, 2026
Patent 12579040
Optimized Snapshot Storage And Restoration Using An Offload Target
2y 5m to grant Granted Mar 17, 2026
Patent 12566670
SUPPORTING AUTOMATIC AND FAILSAFE BOOTING OF BMC AND BIOS FIRMWARE IN A CRITICAL SECURED SERVER SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12554601
ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF FOR HANDLING A CEC MALFUNCTION
2y 5m to grant Granted Feb 17, 2026
Patent 12554609
DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR PROVIDING ENVIRONMENT TRACKING CONTENT
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91%
With Interview: 99% (+13.3%)
Median Time to Grant: 2y 3m
PTA Risk: High

Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
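The headline figure can be checked directly against the career data shown above (how the with-interview figure composes with the +13.3% lift is the tool's own adjustment model and is not reproduced here):

```python
# Reproduce the displayed career allow rate from the examiner's record.
granted, resolved = 21, 23
allow_rate = granted / resolved   # 0.913...
print(f"{allow_rate:.0%}")        # -> "91%", the displayed grant probability
```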
