Prosecution Insights
Last updated: April 19, 2026
Application No. 18/390,084

FOCUS ADJUSTMENTS BASED ON ATTENTION

Non-Final OA — §102, §103
Filed: Dec 20, 2023
Examiner: SMITH, STEPHEN R
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 82%

Examiner Intelligence

Grants 71% — above average
Career Allow Rate: 71% (306 granted / 433 resolved; +12.7% vs TC avg)
Interview Lift: +11.2% — moderate lift, resolved cases with an interview vs. without
Typical timeline: 2y 7m avg prosecution; 13 currently pending
Career history: 446 total applications across all art units

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 433 resolved cases
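The per-statute figures above are plain differences between this examiner's rate and the Tech Center average, so the implied TC baseline can be back-computed from the stated deltas. A minimal sketch, using only the values in the table:

```python
# Examiner's statute-specific rates (from the table above), in percent.
examiner = {"101": 4.4, "103": 57.9, "102": 23.7, "112": 4.8}
# Stated deltas vs. the Tech Center average, in percentage points.
delta = {"101": -35.6, "103": 17.9, "102": -16.3, "112": -35.2}

# Back-compute the implied TC average for each statute: avg = rate - delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # implied Tech Center baselines
```

Every implied baseline comes out to 40.0, which suggests the "black line" is a single TC-wide estimate rather than a separately computed average per statute.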

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to independent claims 1, 10 and 19 have been fully considered but are moot because the arguments do not directly apply to the new combination of references being used in the current rejection.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 7-10, 12 and 16-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 10871823 B1 to Burgess et al. (hereinafter “Burgess”).

Consider claim 1, Burgess discloses a method comprising: at an electronic device having a processor, a display, and one or more sensors (Fig. 1): obtaining sensor data from the one or more sensors in a physical environment (Fig. 1: sensors 104; Col. 4 Ln. 30-37: “In some embodiments, the system can determine scene understanding based on locations of objects in a scene or environment around the HMD or AR system, such as by using sensor data from cameras or depth sensors”); obtaining a scene understanding that identifies one or more objects and positions of the one or more objects within the physical environment (Col. 4 Ln. 30-37: “In some embodiments, the system can determine scene understanding based on locations of objects in a scene or environment around the HMD or AR system, such as by using sensor data from cameras or depth sensors”; Col. 8 Ln. 36-58: “the object position detector 120 can detect or identify objects represented by the image”); determining at least one gaze direction of at least one eye based on gaze vector data associated with the sensor data (Col. 9 Ln. 44-57: “determine a vector from the eyes of the user to the position of the object”); determining a distance associated with user attention corresponding to a particular object of one or more objects within a view of a 3D representation of the physical environment (Col. 4 Ln. 30-37: “The system can calibrate a varifocal system using the gaze direction information as a vergence cue, such as by using the gaze direction or the position of the object to estimate a vergence plane so that the varifocal system can adjust focus to match the vergence plane”), the distance associated with user attention determined by: (a) determining a convergence based on an intersection of gaze directions of the at least one gaze direction towards the particular object within the physical environment (Col. 12 Ln. 8-23: “The vergence planes can correspond to planes perpendicular to the respective gaze directions (e.g., planes where lines of sight from the eyes would meet)”), (b) combining different types of data, the different types of data including the gaze vector data (Col. 17 Ln. 3-4: FIG. 3 shows a method for using scene understanding for calibration eye tracking; Col. 9 Ln. 44-57: “determine a vector from the eyes of the user to the position of the object”), depth map data (Col. 9 Ln. 29-43: “using depth information to determine the position of the object”), and data associated with the obtained scene understanding (Col. 17 Ln. 62 - Col. 18 Ln. 25: “sensor data regarding the eyes of the user or the scene (e.g., contextual information), can be used to detect that the user is gazing at the object. A confidence score can be assigned to the determination of whether the user is gazing at the object, so that calibration of the eye tracking operation as discussed herein can be selectively performed … The user can be determined to be gazing at the object based on detecting information such as the user interacting with the object”; Col. 18 Ln. 51-64: “the eye tracking operation is calibrated responsive to a confidence score of at least one of scene understanding regarding the object or detecting that the user is gazing at the object meeting or exceeding a threshold”), and (c) determining a distance of the particular object in the 3D representation of the physical environment based on the combined different types of data (Col. 4 Ln. 30-55: “the system can determine scene understanding based on locations of objects in a scene or environment … The system can calibrate a varifocal system using the gaze direction information as a vergence cue, such as by using the gaze direction or the position of the object to estimate a vergence plane so that the varifocal system can adjust focus to match the vergence plane … The system can selectively calibrate the eye tracking operation using confidence scores”); and adjusting a focus of a camera of the one or more sensors based on the distance associated with the user attention corresponding to the particular object, the camera capturing image data of the physical environment that is displayed on the display (Col. 4 Ln. 30-55: “The system can calibrate a varifocal system using the gaze direction information as a vergence cue, such as by using the gaze direction or the position of the object to estimate a vergence plane so that the varifocal system can adjust focus to match the vergence plane”).

Consider claim 3, Burgess discloses the method of claim 1, wherein determining the distance associated with user attention is based on detecting that a first gaze direction of the at least one gaze direction is oriented towards an object or an area in the 3D representation (Col. 9 Ln. 29-57: “the object position detector 120 determines the position of the object as a position in three-dimensional space (e.g., real world space, AR or VR space, space in the environment around the HMD or AR system), such as by using depth information to determine the position of the object. The object position detector 120 can determine a gaze direction using the position of the object, such as a gaze direction towards the position of the object”; Col. 4 Ln. 30-37: “The system can calibrate a varifocal system using the gaze direction information as a vergence cue, such as by using the gaze direction or the position of the object to estimate a vergence plane so that the varifocal system can adjust focus to match the vergence plane”).

Consider claim 7, Burgess discloses the method of claim 1, wherein the at least one gaze direction is determined based on a reflective property associated with infrared (IR) reflections on the at least one eye (Col. 7 Ln. 34-56: “The eye tracker 144 can identify, using the eye tracking data 148, the eye position 136 based on pixels corresponding to light (e.g., light from sensors 104, such as infrared or near-infrared light from sensors 104, such as 850 nm light eye tracking) reflected by the one or more eyes of the user”).

Consider claim 8, Burgess discloses the method of claim 1, wherein the display presents an extended reality (XR) environment based at least in part on the physical environment, wherein clarity of virtual content in the XR environment is adjusted to match the image data captured by the camera (Col. 16 Ln. 3-18: “the varifocal system 224 can change a focus (e.g., a point or plane of focus) as focal length or magnification changes … by receiving an indication of a vergence plane from the calibrator 132 which can be used to change the focus of the varifocal system 224. In some embodiments, the varifocal system 224 can enable a depth blur of one or more objects in the scene by adjusting the focus based on information received from the calibrator 132 so that the focus is at a different depth than the one or more objects”).

Consider claim 9, Burgess discloses the method of claim 1, wherein the electronic device is a head-mounted device (HMD) (Col. 4 Ln. 58-62: “The system 100 can be implemented using the HMD system 200 described with reference to FIG. 2”).
Consider claim 10, Burgess discloses the device of claim 10 based on the same rationale as the method of claim 1 and because Burgess further teaches a non-transitory computer-readable storage medium comprising program instructions to be executed by the processor for controlling the device (Col. 20 Ln. 33-47).

Consider claims 12 and 16-18, the device is rejected based on the same rationale as the method of claims 3 and 7-9, respectively.

Consider claim 19, Burgess discloses the non-transitory computer-readable storage medium of claim 19 based on the same rationale as the method of claim 1 and because Burgess further teaches a non-transitory computer-readable storage medium comprising program instructions to be executed by the processor for controlling the device (Col. 20 Ln. 33-47).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Burgess in view of US 20130300634 A1 to White et al. (hereinafter “White”).

Consider claim 5, Burgess discloses the method of claim 1, wherein the different types of data further comprise gaze convergence data (Col. 12 Ln. 8-23), but fails to disclose wherein the different types of data include user interface content. In analogous art, White discloses wherein the different types of data further comprise gaze convergence data and user interface content (Par. [0030]-[0031]: “the convergence distance is then used as the focus distance … In addition or alternatively, the focus distance can be determined through user interface interaction by a user (e.g., selecting a specific point in the user's field of view of display with an input device to indicate the focus distance)”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of Burgess in view of the above teachings of White in order to provide the user the option of designating a particular focus distance.

Consider claim 14, the device is rejected based on the same rationale as the method of claim 5.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN R SMITH whose telephone number is (571)270-1318. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Q Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

STEPHEN R. SMITH
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484

Prosecution Timeline

Dec 20, 2023 — Application Filed
Nov 01, 2024 — Response after Non-Final Action
Apr 14, 2025 — Non-Final Rejection (§102, §103)
Jul 10, 2025 — Applicant Interview (Telephonic)
Jul 10, 2025 — Examiner Interview Summary
Jul 18, 2025 — Response Filed
Oct 08, 2025 — Final Rejection (§102, §103)
Dec 09, 2025 — Applicant Interview (Telephonic)
Dec 11, 2025 — Response after Non-Final Action
Dec 12, 2025 — Examiner Interview Summary
Jan 20, 2026 — Request for Continued Examination
Jan 27, 2026 — Response after Non-Final Action
Feb 17, 2026 — Non-Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598272
PT/PT-Z CAMERA COMMAND, CONTROL & VISUALIZATION SYSTEM AND METHOD UTILIZING ARTIFICIAL INTELLIGENCE
2y 5m to grant • Granted Apr 07, 2026
Patent 12598280
VIDEO DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, COMPUTER READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant • Granted Apr 07, 2026
Patent 12597256
PARKING LOT MONITORING AND PERMITTING SYSTEM
2y 5m to grant • Granted Apr 07, 2026
Patent 12587623
IMAGE SYNTHESIS
2y 5m to grant • Granted Mar 24, 2026
Patent 12567443
METHOD FOR READING AND WRITING FRAME IMAGES HAVING VARIABLE FRAME RATES AND SYSTEM THEREFOR
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 82% (+11.2%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
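The headline projections follow directly from the career data cited in the note above (306 granted of 433 resolved, +11.2% interview lift). A minimal sketch of the arithmetic, assuming the interview lift is simply additive on top of the baseline:

```python
granted, resolved = 306, 433

# Career allow rate drives the baseline grant probability.
allow_rate = granted / resolved               # ~0.7067
grant_prob = round(allow_rate * 100)          # 71

# Interview lift is stated as an additive bump in percentage points.
interview_lift = 11.2
with_interview = round(allow_rate * 100 + interview_lift)  # 82

print(grant_prob, with_interview)
```

This reproduces the dashboard's 71% and 82% figures exactly, confirming that "Grant Probability" here is just the rounded career allow rate rather than a model output.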
