Prosecution Insights
Last updated: April 19, 2026
Application No. 18/410,801

PERSONALIZED CALIBRATION FUNCTIONS FOR USER GAZE DETECTION IN AUTONOMOUS DRIVING APPLICATIONS

Final Rejection §112
Filed
Jan 11, 2024
Examiner
HAUSMANN, MICHELLE M
Art Unit
2671
Tech Center
2600 — Communications
Assignee
Nvidia Corporation
OA Round
4 (Final)
76%
Grant Probability
Favorable
5-6
OA Rounds
3y 1m
To Grant
98%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
658 granted / 863 resolved
+14.2% vs TC avg
Strong +21.6% interview lift
+21.6%
Interview Lift
with vs. without an interview, across resolved cases
Typical timeline
3y 1m
Avg Prosecution
23 currently pending
Career history
886
Total Applications
across all art units
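The headline figures above follow from simple arithmetic on the examiner's career counts. A hypothetical sketch of that derivation (variable names are illustrative, not an actual API; the 21.6% lift is taken from the dashboard, not recomputed):

```python
# Derive the dashboard's headline probabilities from the career counts shown above.
granted = 658    # career grants
resolved = 863   # career resolved cases

career_allow_rate = granted / resolved             # ~0.762 -> displayed as "76%"

interview_lift = 0.216                             # "+21.6%" across resolved cases with interview
with_interview = career_allow_rate + interview_lift  # ~0.978 -> displayed as "98%"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

Note the additive treatment of the lift is an assumption about how the dashboard combines the two numbers; 658/863 plus 0.216 does land on the displayed 98%.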

Statute-Specific Performance

§101
14.6%
-25.4% vs TC avg
§103
61.2%
+21.2% vs TC avg
§102
5.7%
-34.3% vs TC avg
§112
10.1%
-29.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 863 resolved cases
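The Tech Center average (the "black line") can be backed out from each statute's examiner rate and its stated delta. A small sketch using the figures above:

```python
# Recover the implied Tech Center average for each statute:
# examiner rate minus the "vs TC avg" delta.
statute_stats = {
    "§101": (14.6, -25.4),
    "§102": (5.7, -34.3),
    "§103": (61.2, +21.2),
    "§112": (10.1, -29.9),
}

for statute, (examiner_rate, delta_vs_tc) in statute_stats.items():
    tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: TC average ≈ {tc_avg:.1f}%")
```

Each statute's implied TC average works out to 40.0%, so the chart appears to plot every delta against a single 40% baseline rather than per-statute baselines.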

Office Action

§112
DETAILED ACTION

Response to Amendment

Claims 1-20 are pending. Claims 1-20 are amended directly or by dependency on an amended claim.

Response to Arguments

Applicant’s arguments, see pages 9-10, filed 5 January 2026, with respect to the 35 USC 112(a) rejections of claims 1-20, along with the accompanying amendments, have been fully considered and are persuasive. The original 35 USC 112(a) rejections of claims 1-20 have been withdrawn. However, in view of the amendments, a new set of 35 USC 112(a) rejections has been necessitated.

Applicant’s arguments, see page 10, filed 5 January 2026, with respect to the 35 USC 112(b) rejections of claims 1-20, along with the accompanying amendments, have been fully considered and are persuasive. The original 35 USC 112(b) rejections of claims 1-20 have been withdrawn. However, in view of the amendments, a new set of 35 USC 112(b) rejections has been necessitated.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 1 (and by similarity, claims 11 and 16, and by dependency, claims 2-10, 12-15, and 17-20) is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The specification states: “In contrast to conventional systems, such as those described herein, the system of the present disclosure generates and applies personalized calibration functions to gaze predictions—e.g., such as those computed using a machine learning model (e.g., a deep neural network (DNN))—to determine a personalized gaze prediction for a particular user” (paragraph 23 of the publication of the application, US 20240143072 A1). This is different from the present claim language “…analyzing, using one or more machine learning models (MLMs), a visual appearance of the user in image data to compute one or more observed values of the one or more conditions”. The specification and original claim language never use the term “observed values of the one or more conditions”. There is only one citation of “computed”, and it relates to “personalized calibration functions” rather than “observed values of the one or more conditions”.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 (and by similarity, claims 11 and 16, and by dependency, claims 2-10, 12-15, and 17-20) is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The present claim language “…analyzing, using one or more machine learning models (MLMs), a visual appearance of the user in image data to compute one or more observed values of the one or more conditions” is unclear. This should likely read “analyzing, using one or more machine learning models (MLMs), a visual appearance of the user in image data to compute one or more predicted gaze values”, since those would be “computed”. Anything computed would at a minimum be a computed value, not an observed value. Therefore, to resolve the ambiguity created by calculating an observed value (as opposed to computing a computed value, or observing an observed value), the examiner recommends the language “analyzing, using one or more machine learning models (MLMs), a visual appearance of the user in image data to compute one or more predicted gaze values” or similar.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20210049410 A1, which discloses:

“The term ‘gaze-related parameter’ as used within this specification intends to describe a gaze direction, a cyclopean gaze direction, a 3D gaze point, a 2D gaze point, eye pose as 3D position and orientation, a pair of 3D gaze directions (left and right eye), a visual axis orientation, an optical axis orientation, a pupil axis orientation, a line of sight orientation, an orientation and/or a position and/or an eyelid closure, a pupil area, a pupil size, a pupil diameter, a sclera characteristic, an iris diameter, a characteristic of a blood vessel, a cornea characteristic of at least one eye, a cornea radius, an eyeball radius, a distance pupil-center to cornea-center, a distance cornea-center to eyeball-center, a distance pupil-center to limbus center, a cornea keratometric index of refraction, a cornea index of refraction, a vitreous humor index of refraction, a distance crystalline lens to eyeball-center, to cornea center and/or to corneal apex, a crystalline lens index of refraction, a degree of astigmatism, an orientation angle of a flat and/or a steep axis, a limbus major and/or minor axes orientation, an eye cyclo-torsion, an eye intra-ocular distance, an eye vergence, statistics over eye adduction and/or eye abduction, statistics over eye elevation and/or eye depression, data about cognitive load, blink events, drowsiness and/or awareness of the user, and a parameter for the user iris verification and/or identification. Points and directions can be specified for example within a scene camera image, an eye camera coordinate system, scene camera coordinate system, device coordinate system, head coordinate system, world coordinate system or any other suitable coordinate system.

A companion app, i.e. a computer program designed to run on a mobile device such as a phone/tablet or watch (mobile app), running on the companion smartphone 627 can be the primary user interaction point. The user may be able to control recordings, user profiles, calibrations and validations via the companion app. The user may also be able to update and manage personal profiles, network models, and calibrations with the app. Such interactions may be low or minimal. The smartphone 627 is typically able to operate autonomously in a fully automated fashion. The companion app may control the device and may send firmware and model updates. The head wearable device 620 may also include components that allow determining the device orientation in 3D space, accelerometers, GPS functionality and the like.

In an exemplary embodiment, the calibration method includes instructing a user wearing the device 720 to look at a particular known marker point, pattern or object in space, whose coordinates within the video images recorded by a scene camera connected to or provided by the device 720 can be precisely determined in an automated way by state of the art machine learning, computer vision or image processing techniques (block 7253 in FIG. 6). The image or images recorded by the cameras facing the eye(s) of the user are used to predict the user's gaze direction (gaze point) in block 7251. The offset of the predicted gaze direction (gaze point) g_pr and the expected gaze direction (gaze point) g_e defined by the marker position can then be calculated and used to generate a correction mapping or function F_corr in a block 7254 to be applied henceforth (block 7252) to the prediction of the universal NN to arrive at a calibrated gaze-value g_cpr. The described calibration methods yield pairs of images labeled with the ground truth gaze location, which can be used to improve the universal NN as described above.”
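The calibration flow in the cited reference reduces to: measure the gap between predicted and expected gaze points, turn it into a correction F_corr, and apply that correction to later raw predictions. A minimal sketch, where the names mirror the excerpt (g_pr, g_e, g_cpr, F_corr) but the constant-offset form of the correction is an assumption made for illustration:

```python
def fit_f_corr(predicted, expected):
    """Average the per-sample offsets (g_e - g_pr) into a constant 2D correction."""
    n = len(predicted)
    dx = sum(e[0] - p[0] for p, e in zip(predicted, expected)) / n
    dy = sum(e[1] - p[1] for p, e in zip(predicted, expected)) / n

    def f_corr(g_pr):
        # Apply the learned offset to a raw prediction to get the calibrated g_cpr.
        return (g_pr[0] + dx, g_pr[1] + dy)

    return f_corr

# Calibration pairs: raw NN gaze predictions vs. known marker positions
# (normalized image coordinates; values here are made up for the sketch).
g_pr_samples = [(0.48, 0.52), (0.24, 0.77), (0.73, 0.26)]
g_e_samples  = [(0.50, 0.50), (0.26, 0.75), (0.75, 0.24)]

f_corr = fit_f_corr(g_pr_samples, g_e_samples)
g_cpr = f_corr((0.48, 0.52))   # calibrated gaze value, here (0.50, 0.50)
```

In practice F_corr is often a richer mapping (affine or learned), which is exactly the kind of personalized calibration function the application at issue claims; the constant offset just shows the mechanic.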
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN, whose telephone number is (571) 270-5084. The examiner can normally be reached 10-7 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent M Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE M ENTEZARI HAUSMANN/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Jan 11, 2024
Application Filed
Jan 07, 2025
Non-Final Rejection — §112
Apr 14, 2025
Response Filed
May 28, 2025
Final Rejection — §112
Sep 02, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Oct 06, 2025
Non-Final Rejection — §112
Jan 05, 2026
Response Filed
Feb 18, 2026
Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602775
INTERPOLATION OF MEDICAL IMAGES
2y 5m to grant Granted Apr 14, 2026
Patent 12602793
Systems and Methods for Predicting Object Location Within Images and for Analyzing the Images in the Predicted Location for Object Tracking
2y 5m to grant Granted Apr 14, 2026
Patent 12602949
SYSTEM AND METHOD FOR DETECTING HUMAN PRESENCE BASED ON DEPTH SENSING AND INERTIAL MEASUREMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12597261
OBJECT MOVEMENT BEHAVIOR LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12597244
METHOD AND DEVICE FOR IMPROVING OBJECT RECOGNITION RATE OF SELF-DRIVING CAR
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
98%
With Interview (+21.6%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 863 resolved cases by this examiner. Grant probability derived from career allow rate.
