Prosecution Insights
Last updated: April 19, 2026
Application No. 18/435,897

OPERATOR AND OCCUPANT MONITORING VALIDATION FOR AUTONOMOUS AND SEMI AUTONOMOUS MACHINES

Non-Final OA: §102, §112
Filed: Feb 07, 2024
Examiner: HUYNH, VAN D
Art Unit: 2665
Tech Center: 2600 (Communications)
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; +25.4% vs TC avg), 630 granted / 721 resolved
Interview Lift: +13.4% (moderate), based on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 746 total applications across all art units
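The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic (illustrative only; the Tech Center baseline of 62% is an assumption back-solved from the displayed +25.4% delta, not a value from this tool):

```python
# Recompute the examiner-level stats shown above from the raw counts.
granted = 630
resolved = 721
tc_avg_allow = 0.62  # assumed TC baseline implied by the +25.4% delta

allow_rate = granted / resolved          # career allow rate, ~87.4%
delta_vs_tc = allow_rate - tc_avg_allow  # ~+25.4% vs Tech Center average

# Interview lift: grant probability with an interview minus without.
with_interview = 0.99
without_interview = with_interview - 0.134  # baseline implied by +13.4% lift

print(f"allow rate: {allow_rate:.1%}, vs TC: {delta_vs_tc:+.1%}")
```

Note the rounding: 630/721 is 87.4%, which the panel displays as 87%.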

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§102: 30.9% (-9.1% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 721 resolved cases.
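The per-statute deltas can be sanity-checked by recovering the implied Tech Center baseline from each pair of numbers. A short sketch (assuming, as the panel suggests, that each delta is a simple difference against the TC average):

```python
# Per-statute rates and deltas vs TC average, taken from the panel above.
examiner = {"101": 0.088, "102": 0.309, "103": 0.320, "112": 0.142}
delta    = {"101": -0.312, "102": -0.091, "103": -0.080, "112": -0.258}

# Implied Tech Center average for each statute: examiner rate minus delta.
tc_average = {s: examiner[s] - delta[s] for s in examiner}
for s, avg in tc_average.items():
    print(f"§{s}: examiner {examiner[s]:.1%} vs TC avg {avg:.1%}")
```

Interestingly, the implied baseline works out to 40.0% for all four statutes, which suggests the tool compares every statute against a single TC-wide figure rather than statute-specific averages.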

Office Action

Grounds of rejection: §102 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Group I (claims 1-11) in the reply filed on 01/09/2026 is acknowledged. Claims 12-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected invention. Since the restriction requirement was properly made, the restriction requirement is now made final.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 10 recites the limitation “…execute the one or more neural networks on second hardware that is unrated or rated for a lower safety or reliability level than the first safety or reliability level,” which is unclear and confusing. The Examiner cannot determine why it is necessary to execute the neural network to achieve a lower safety or reliability level. Figure 4 and paragraph [0062] of the Specification disclose only a higher safety or reliability level, and the Specification does not provide any information regarding the lower safety or reliability level.
For the purpose of examination, the Examiner will interpret the lower safety or reliability level as the higher safety or reliability level.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-11 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ning et al., US 2021/0090284.

Regarding claim 1, Ning discloses one or more processors comprising one or more processing units (fig. 1, element 112; para 0084-0085; the processor may be a central processing unit (CPU) which is configured to control operation of the computing device) to: detect one or more first positions of one or more human keypoints based at least on processing a first frame of sensor data using one or more neural networks (figs. 1 and 6; para 0092, 0094 and 0129; first frame of the video…detect objects (or targets, or candidates) from the first frame…include coordinates of the bounding box in the first frame…estimate pose for each object…determine keypoints in the bounding box, for example using a convolutional neural network (CNN)); detect one or more second positions of the one or more human keypoints based at least on processing a second frame of sensor data using the one or more neural networks (figs. 1 and 6; para 0093-0094 and 0130; process the second frame…the location and pose of an object are generally similar in two sequential frames…perform CNN using the inferred bounding box and the second frame, generate heatmaps, and determine keypoints using the heatmaps, where all the keypoints in the second frame are located in the area that is enclosed by the inferred bounding box); and generate a representation of one or more identified faults based at least on executing one or more validity checks based on at least one of the one or more first positions or the one or more second positions of the one or more human keypoints (figs. 1 and 6; para 0093-0094 and 0130; determine object state in the second frame based on the average confidence score of the keypoints. The object state is “tracked” in the second frame if the average confidence score is greater than a threshold value, and the object state is “lost” in the second frame if the average confidence score equals to or is lower than the threshold value. When the object state is “tracked,” the pose tracking module 160 is configured to generate an enclosing box using four keypoints from the second frame, and infer an inferred bounding box from the enclosing box. When the object state in the second frame is determined as “tracked,” the inferred bounding box from the first frame is regarded as the bounding box of the corresponding object in the second frame).
Regarding claim 2, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on comparing a designated threshold to a spatial displacement of the one or more human keypoints from the one or more first positions to the one or more second positions (para 0093, 0112, and 0130).

Regarding claim 3, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold to an angular displacement of a joint represented by a plurality of human keypoints comprising the one or more human keypoints from a plurality of first positions comprising the one or more first positions to a plurality of second positions comprising the one or more second positions (fig. 5; para 0079-0080, 0093, and 0130).

Regarding claim 4, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold to a difference between a detected limb length represented by a plurality of human keypoints comprising the one or more human keypoints at a plurality of first positions comprising the one or more first positions and at a plurality of second positions comprising the one or more second positions (fig. 5; para 0092-0093, and 0130).
Regarding claim 5, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold to a detected joint angle represented by at least one of: a plurality of human keypoints comprising the one or more human keypoints at a plurality of first positions comprising the one or more first positions, or the plurality of human keypoints at a plurality of second positions comprising the one or more second positions (fig. 5; para 0079-0080, 0093, and 0130).

Regarding claim 6, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold to a detected limb length represented by at least one of: a plurality of human keypoints comprising the one or more human keypoints at a plurality of first positions comprising the one or more first positions, or the plurality of human keypoints at a plurality of second positions comprising the one or more second positions (fig. 5; para 0092-0093, and 0130).

Regarding claim 7, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold associated with one or more spatial constraints that are external to a human body represented by the one or more human keypoints (para 0093, 0112, and 0130).

Regarding claim 8, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold on relative positions of keypoints within a detected instance of a plurality of human keypoints comprising the one or more human keypoints (figs. 1 and 6; para 0093-0094 and 0130).
Regarding claim 9, the one or more processors of claim 1, Ning further discloses wherein first frame represents a first modality of sensor data from a first time slice, the second frame represents a second modality of sensor data from the first time slice, and the one or more processing units are further to execute the one or more validity checks based at least on applying a designated threshold to a spatial displacement of the one or more human keypoints from the one or more first positions detected using the first modality of sensor data from the first time slice to the one or more second positions detected using the second modality of sensor data from the first time slice (para 0093-0094, 0112, and 0130).

Regarding claim 10, the one or more processors of claim 1, Ning further discloses wherein the one or more processing units are further to execute the one or more validity checks on first hardware rated for a first safety or reliability level, and to execute the one or more neural networks on second hardware that is unrated or rated for a lower safety or reliability level than the first safety or reliability level (para 0004 and 0172-0175).

Regarding claim 11, the one or more processors of claim 1, Ning further discloses wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations (fig. 4; para 0115; Siamese Graph Convolutional Network (SGCN)); a system for performing remote operations; a system for performing real-time streaming (para 0091; an online video or a live video); a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (para 0084; a cloud computer).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al., US 2022/0101556 discloses computer automated interactive activity recognition based on keypoint detection. Au et al., US 2022/0079472 discloses a fall-detection system for detecting personal fall while preserving the privacy of a detected person. Hayakawa et al., US 2021/0271866 discloses a first set of image data is received, the first set of image data corresponding to images of a first type and being of a person in an environment of a vehicle and including a first plurality of images of the person over a time interval.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571)270-1937. The examiner can normally be reached 8AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN D HUYNH/
Primary Examiner, Art Unit 2665
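Claim 2, which the Office Action maps to Ning's confidence-score tracking, applies a designated threshold to the spatial displacement of keypoints between the first and second positions. A hypothetical sketch of what such a validity check could look like (function name, keypoint labels, and the 50-pixel threshold are all illustrative, not from the application or from Ning):

```python
import math

def keypoint_displacement_check(first_positions, second_positions, threshold):
    """Flag a fault for any keypoint whose frame-to-frame spatial
    displacement exceeds the designated threshold."""
    faults = []
    for name in first_positions:
        x1, y1 = first_positions[name]
        x2, y2 = second_positions[name]
        displacement = math.hypot(x2 - x1, y2 - y1)  # Euclidean distance
        if displacement > threshold:
            faults.append((name, displacement))
    return faults

# A plausible pair of consecutive frames: the right wrist jumps 90 px,
# which exceeds the 50 px threshold and is therefore flagged.
frame1 = {"left_wrist": (100.0, 200.0), "right_wrist": (300.0, 200.0)}
frame2 = {"left_wrist": (100.0, 205.0), "right_wrist": (390.0, 200.0)}
print(keypoint_displacement_check(frame1, frame2, threshold=50.0))
```

The other claimed checks (joint-angle displacement in claim 3, limb-length difference in claim 4) would follow the same pattern with a different displacement metric, which is one way to frame the argument that Ning's single confidence-score test does not disclose each distinct check.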

Prosecution Timeline

Feb 07, 2024
Application Filed
Feb 15, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12602798
METHOD AND APPARATUS FOR GENERATING SUBJECT-SPECIFIC MAGNETIC RESONANCE ANGIOGRAPHY IMAGES FROM OTHER MULTI-CONTRAST MAGNETIC RESONANCE IMAGES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602784
MEDICAL DEVICE FOR TRANSCRIPTION OF APPEARANCES IN AN IMAGE TO TEXT WITH MACHINE LEARNING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12594046
METHOD AND APPARATUS FOR ASSISTING DIAGNOSIS OF CARDIOEMBOLIC STROKE BY USING CHEST RADIOGRAPHIC IMAGES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586186
JAUNDICE ANALYSIS SYSTEM AND METHOD THEREOF
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12582345
Systems and Methods for Identifying Progression of Hypoxic-Ischemic Brain Injury
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+13.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 721 resolved cases by this examiner. Grant probability derived from career allow rate.
