Prosecution Insights
Last updated: April 19, 2026
Application No. 19/213,382

HEAD MOUNTABLE DISPLAY

Non-Final Office Action (§103)

Filed: May 20, 2025
Examiner: REED, STEPHEN T
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 1y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 72%, above average (342 granted / 474 resolved; +10.2% vs TC avg)
Interview Lift: +15.9% among resolved cases with interview (a strong, roughly +16% lift)
Avg Prosecution: 1y 10m (a fast prosecutor; 23 applications currently pending)
Career History: 497 total applications across all art units
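The headline figures above are simple ratios over career counts. As a minimal sketch of how a dashboard like this could derive them (the variable names are mine, and the 62% Tech Center baseline is an assumption backed out from the displayed +10.2% delta, not a value this page exposes):

    # Illustrative reconstruction of the examiner-intelligence metrics above.
    # The 0.62 Tech Center baseline is an assumption inferred from the
    # displayed +10.2% delta; it is not reported by this page.
    granted = 342                      # career grants (from the card above)
    resolved = 474                     # career resolved cases
    tc_avg = 0.62                      # assumed Tech Center average allow rate

    allow_rate = granted / resolved    # 342 / 474 = 0.7215...
    print(f"Career allow rate: {allow_rate:.1%}")            # -> 72.2%
    print(f"Delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # -> +10.2%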

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 474 resolved cases.
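Each row above is consistent with a single subtraction against a Tech Center estimate. A small sketch, assuming the displayed delta is simply the examiner's rate minus the TC estimate (the dict layout and variable names are mine; the rates are transcribed from the table):

    # Illustrative: back out the Tech Center estimate implied by each row,
    # assuming delta = (examiner rate - TC estimate). Values from the table.
    rows = {
        "§101": (0.023, -0.377),
        "§103": (0.565, +0.165),
        "§102": (0.206, -0.194),
        "§112": (0.180, -0.220),
    }
    for statute, (rate, delta) in rows.items():
        implied_tc_avg = rate - delta
        print(f"{statute}: examiner {rate:.1%}, implied TC estimate {implied_tc_avg:.1%}")

Notably, every row backs out to the same 40.0% estimate, which suggests the underlying chart drew one shared Tech Center baseline rather than per-statute averages.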

Office Action (§103, Non-Final)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and prosecuted.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after allowance or after an Office action under Ex Parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 20 May 2025 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 20 May 2025 was considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shin et al., US PG-Pub 2023/0280591, hereinafter Shin, in view of Faulkner et al., US PG-Pub 2021/0096726, hereinafter Faulkner.

Regarding Claim 1, Shin teaches a wearable electronic device (head mounted device 100), comprising: a housing (frame 102); an input (gaze tracking camera 114); a first camera (front-facing camera 110) configured to capture an environment external to the housing (Fig. 1A, and corresponding descriptions; [0028], “The front-facing camera 110 can face, and/or capture images from, a front of the head-mounted device 100, and/or away from a user wearing the head-mounted device 100”), the camera facing a first direction (Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108”); an optical module comprising a display (display 108) facing a second direction different than the first direction (Fig. 1A, and corresponding descriptions; [0031], “When the head-mounted device 100 is worn on the head 170 of the user, the display 108 can face toward the user's eyes 174”) and a processor (processor 416) electrically coupled to the display, the first camera, and the input (Fig. 4, and corresponding descriptions, showing how the processor is connected to the input/output 422, which includes the display and cameras); wherein the processor is configured to cause the display to present augmented reality content including an amount of virtual content superimposed over video passthrough content captured by the first camera (Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108. In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application”). However, Shin does not explicitly teach the amount based on a manipulation of the input.

Faulkner teaches the amount based on a manipulation of the input (Faulkner: Figs. 5-7B, and corresponding descriptions; [0098]-[0102], describing how gaze tracking input is verified and accepted; [0219], noting how the user’s gaze or a virtual user interface object may be manipulated to confirm or change an input). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to incorporate the input manipulation taught by Faulkner into the device taught by Shin in order to provide computer generated reality experiences for the user (Faulkner: [0102]), thereby allowing for a more immersive user experience.

Regarding Claim 2, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 1, further comprising a second camera (Faulkner: sensors 190 and image sensors 314) oriented to face a third direction different from the first direction and the second direction (Faulkner: Figs. 1, 3 and 7A-7C, and corresponding descriptions; [0106], “the input gestures described with regard to FIGS. 7A-7C are detected by analyzing data or signals captured by a sensor system (e.g., sensors 190, FIG. 1; image sensors 314, FIG. 3)”), the second camera configured to detect a hand gesture of a user donning the wearable electronic device (Faulkner: Figs. 7A-7C, and corresponding descriptions; [0105], “FIGS. 7A-7C illustrate examples of input gestures (e.g., discrete, small motion gestures performed by movement of the user's finger(s) relative to other finger(s) or part(s) of the user's hand”).

Regarding Claim 3, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 2, wherein: the input comprises the second camera (Faulkner: Figs. 7A-7C, and corresponding descriptions; [0105], “FIGS. 7A-7C illustrate examples of input gestures (e.g., discrete, small motion gestures performed by movement of the user's finger(s) relative to other finger(s) or part(s) of the user's hand”); and the amount of virtual content is based on the hand gesture (Faulkner: Figs. 7A-7C, and corresponding descriptions; [0119], “the hand of the user that performs a gesture input that causes an operation to be performed in the mixed reality environment is visible to the user on the display of the device.”).

Regarding Claim 4, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 1, wherein: the optical module further comprises: a chassis (Shin: Fig. 1A, and corresponding descriptions; [0027], “The frame 102 can include one or more rims 104 that support the display 108”); and a second camera (gaze tracking camera 114) adjacent the display and facing the second direction (Shin: Fig. 1A, and corresponding descriptions); the display and the second camera are secured to the chassis (Shin: Fig. 1A, and corresponding descriptions); and the chassis is secured to the housing via a frame (Shin: Fig. 1A, and corresponding descriptions; [0027], “The frame 102 can include one or more rims 104 that support the display 108”).
Regarding Claim 5, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 4, wherein the second camera is configured to detect a facial feature of a user donning the wearable electronic device (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on. The objects that the user is focusing on could include physical objects in front of the head-mounted device 100, and/or graphical objects presented by the display 108.”).

Regarding Claim 6, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 5, wherein the facial feature comprises a gaze direction of the user (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on.”).

Regarding Claim 7, Shin, as modified by Faulkner, teaches the wearable electronic device of claim 1, wherein: the input comprises a capacitive touch sensor (Shin: Fig. 1A, and corresponding descriptions; [0030], “The display of 152 can include a touchscreen display”); and the manipulation comprises a contact with the touch sensor (Shin: Fig. 1A, and corresponding descriptions; [0030], “The display of 152 can include a touchscreen display, which presents graphical output to the user and receives touch input from the user.”).

Regarding Claim 8, Shin teaches a wearable display (head mounted device 100), comprising: a housing defining an external surface (frame 102); an outward facing camera (front-facing camera 110) secured within the housing (Fig. 1A, and corresponding descriptions); and an optical module (display 108) secured within the housing (Fig. 1A, and corresponding descriptions), the optical module comprising: a display (display 108) configured to project light toward an eye of a user donning the wearable display (Fig. 1A, and corresponding descriptions; [0031], “When the head-mounted device 100 is worn on the head 170 of the user, the display 108 can face toward the user's eyes 174”); wherein: the display is configured to display augmented reality content including a first immersion level of passthrough content captured by the outward facing camera (Fig. 1A, and corresponding descriptions; [0031], “When the head-mounted device 100 is worn on the head 170 of the user, the display 108 can face toward the user's eyes 174”) and a second immersion level of virtual content (Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108. In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application”). However, Shin does not explicitly teach a dial, or that a manipulation of the dial changes the first immersion level and the second immersion level.
Faulkner teaches a dial (Faulkner: [0219], “the eleventh operation causes a virtual object (e.g., that is selected and/or held by the user (e.g., using gaze)) or a user interface object (e.g., a virtual dial control) to rotate in accordance with the hand rotation gesture”); and a manipulation of the dial changes the first immersion level and the second immersion level (Faulkner: [0215], “the computer system displays a visual indication of an operating context (e.g., displaying a menu of selectable options, a dial for adjusting a value”). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to incorporate the input manipulation taught by Faulkner into the device taught by Shin in order to provide computer generated reality experiences for the user (Faulkner: [0102], [0215]), thereby allowing for a more immersive user experience.

Regarding Claim 9, Shin, as modified by Faulkner, teaches the wearable display of claim 8, wherein: the outward facing camera is oriented to capture the passthrough content in front of the user when donning the wearable display (Shin: Fig. 1A, and corresponding descriptions; [0028], “The front-facing camera 110 can face, and/or capture images from, a front of the head-mounted device 100, and/or away from a user wearing the head-mounted device 100”); and the wearable display further comprises an inward facing camera (Shin: gaze tracking camera 114) oriented to face toward the eyes of the user when donning the wearable display (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on. The objects that the user is focusing on could include physical objects in front of the head-mounted device 100, and/or graphical objects presented by the display 108.”).

Regarding Claim 10, Shin, as modified by Faulkner, teaches the wearable display of claim 9, wherein: the inward facing camera is a first inward facing camera (Shin: Fig. 1A, and corresponding descriptions; [0028], “The gaze-tracking camera 114 can capture images of the user's eyes”); and the wearable display further comprises a second inward facing camera (Shin: Fig. 1A, and corresponding descriptions; [0028], “The gaze-tracking camera 114 can capture images of the user's eyes”. It would have been obvious to one having ordinary skill in the art at the time the invention was made to use two gaze tracking cameras, with one for each eye, since it has been held that mere duplication of the essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.).

Regarding Claim 11, Shin, as modified by Faulkner, teaches the wearable display of claim 10, wherein the second inward facing camera is configured to detect a facial feature (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on.”).

Regarding Claim 12, Shin, as modified by Faulkner, teaches the wearable display of claim 11, wherein the facial feature includes a gaze direction (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on.”).
Regarding Claim 13, Shin, as modified by Faulkner, teaches the wearable display of claim 8, wherein: the outward facing camera is a first outward facing camera (Shin: Fig. 1A, and corresponding descriptions; [0028], “The front-facing camera 110 can face, and/or capture images from, a front of the head-mounted device 100, and/or away from a user wearing the head-mounted device 100”); and the wearable display further comprises a second outward facing camera (Shin: Fig. 1A, and corresponding descriptions; [0028], “The front-facing camera 110 can face, and/or capture images from, a front of the head-mounted device 100, and/or away from a user wearing the head-mounted device 100”. It would have been obvious to one having ordinary skill in the art at the time the invention was made to use two cameras, since it has been held that mere duplication of the essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.) configured to capture the passthrough content (Shin: Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108. In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application”.).

Regarding Claim 14, Shin, as modified by Faulkner, teaches the wearable display of claim 13, wherein the first immersion level and the second immersion level are based on a facial feature (Shin: Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108. In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application. The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on. The objects that the user is focusing on could include physical objects in front of the head-mounted device 100, and/or graphical objects presented by the display 108.”).

Regarding Claim 15, Shin, as modified by Faulkner, teaches the wearable display of claim 8, wherein: the dial is rotatable (Faulkner: [0219], “the eleventh operation causes a virtual object (e.g., that is selected and/or held by the user (e.g., using gaze)) or a user interface object (e.g., a virtual dial control) to rotate in accordance with the hand rotation gesture”); and the manipulation comprises a rotation of the dial (Faulkner: [0219], “the eleventh operation causes a virtual object (e.g., that is selected and/or held by the user (e.g., using gaze)) or a user interface object (e.g., a virtual dial control) to rotate in accordance with the hand rotation gesture”).
Regarding Claim 16, Shin, as modified by Faulkner, teaches the wearable display of claim 8, wherein: the dial is depressible (Faulkner: [0271], “a button on the housing that is physically coupled with the display generation component for initiating a welcome interface has just been activated by the user”, noting a button may be similar to a dial); and the manipulation includes a depression of the dial (Faulkner: [0271], “a button on the housing that is physically coupled with the display generation component for initiating a welcome interface has just been activated by the user”).

Regarding Claim 17, Shin teaches a head-mountable display device (head mounted device 100), comprising: a housing (frame 102); a first camera (front-facing camera 110) configured to capture passthrough content (Fig. 1A, and corresponding descriptions; [0028], “The front-facing camera 110 can face, and/or capture images from, a front of the head-mounted device 100, and/or away from a user wearing the head-mounted device 100”), the first camera facing a first direction (Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108”); an optical module (display 108) within the housing, the optical module comprising: a display (display 108) facing a second direction different than the first direction (Fig. 1A, and corresponding descriptions; [0031], “When the head-mounted device 100 is worn on the head 170 of the user, the display 108 can face toward the user's eyes 174”); and a second camera (gaze tracking camera 114) facing the second direction (Shin: [0028], “The gaze-tracking camera 114 can capture images of the user's eyes, and determine a direction that the users eyes are pointing to and/or objects that the user is focusing on. The objects that the user is focusing on could include physical objects in front of the head-mounted device 100, and/or graphical objects presented by the display 108.”); wherein the display is configured to project mixed reality content including a level of virtual content superimposed over the passthrough content (Fig. 1A, and corresponding descriptions; [0028], “The field of view 112 captured by the front-facing camera 110 can be presented to the user via the display 108. In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application”). However, Shin does not explicitly teach a button, or the level based on a manipulation of the button.

Faulkner teaches a button (Faulkner: [0278], “wherein the user input causes activation of a first input device of the electronic device (e.g., a mechanical button on the housing that is physically coupled with the display generation component)”); and the level based on a manipulation of the button (Faulkner: [0278], “In response to detecting the user input that causes activation of the first input device of the electronic device, the computer system replaces the first view of the three-dimensional environment with the second view of the three-dimensional environment”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to incorporate the input manipulation taught by Faulkner into the device taught by Shin in order to provide computer generated reality experiences for the user (Faulkner: [0102], [0278]), thereby allowing for a more immersive user experience.

Regarding Claim 18, Shin, as modified by Faulkner, teaches the head-mountable display device of claim 17, further comprising a frame (Shin: rims 104) secured to the housing (Shin: Fig. 1A, and corresponding descriptions; [0027], “The frame 102 can include one or more rims 104 that support the display 108”), wherein the optical module is secured to the frame (Shin: Fig. 1A, and corresponding descriptions; [0027], “The frame 102 can include one or more rims 104 that support the display 108”).

Regarding Claim 19, Shin, as modified by Faulkner, teaches the head-mountable display device of claim 18, wherein: the head-mountable display device further comprises a dial disposed on a first side of the housing (Faulkner: [0215], “the computer system displays a visual indication of an operating context (e.g., displaying a menu of selectable options, a dial for adjusting a value”; [0219], “the eleventh operation causes a virtual object (e.g., that is selected and/or held by the user (e.g., using gaze)) or a user interface object (e.g., a virtual dial control) to rotate in accordance with the hand rotation gesture”); the button is disposed on a second side of the housing (Faulkner: [0271], “a button on the housing that is physically coupled with the display generation component for initiating a welcome interface has just been activated by the user”); and the optical module is disposed between the dial and the button (It would have been an obvious matter of design choice to place the button and dial in various locations, since the applicant has not disclosed that placing the button and dial in specific locations solves any stated problem or serves any particular purpose, and it appears that the invention would perform equally well without the optical module disposed between the button and the dial).

Regarding Claim 20, Shin, as modified by Faulkner, teaches the head-mountable display device of claim 17, wherein: the display comprises a pixelated screen (Shin: [0028], “In the example shown in FIG. 1A, the display 108 superimposes a menu including the options ‘C’, which can represent a calendar application, ‘U’, which can represent a ride service application, and ‘W’, which can represent a word processing application”, noting how these menu icons would be pixelated when presented to the user); and the second camera is disposed adjacent the pixelated screen (Shin: Fig. 1A, showing the gaze tracking camera is located next to the display).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN T REED, whose telephone number is (571) 272-7234. The examiner can normally be reached M-F: 0800-1800.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ke Xiao, can be reached at 571-272-7776.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

STEPHEN T. REED
Primary Examiner
Art Unit 2627

/Stephen T. Reed/
Primary Examiner, Art Unit 2627

Prosecution Timeline

May 20, 2025: Application Filed
Feb 21, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596455: CONTROL METHOD FOR A TOUCHPAD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12573253: TOUCHSCREEN FOR ELECTRONIC LOCKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572443: DIAGNOSIS DEVICE FOR DETERMINING NOISE LEVEL (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572248: DETECTING DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566488: INTERFACE APPARATUS AND BOARD SPORT EXPERIENCE SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 88% (+15.9%)
Median Time to Grant: 1y 10m
PTA Risk: Low

Based on 474 resolved cases by this examiner. Grant probability derived from career allow rate.
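The 88% figure is consistent with simply adding the interview lift to the base grant probability. A minimal sketch of that additive reading (the model is inferred from the displayed numbers, not a documented formula):

    # Illustrative: the "with interview" projection matches an additive
    # model on the displayed figures (an assumption, not a documented formula).
    base_grant_prob = 0.72   # career allow rate
    interview_lift = 0.159   # percentage-point lift with interview

    with_interview = min(base_grant_prob + interview_lift, 1.0)
    print(f"With interview: {with_interview:.0%}")  # 0.72 + 0.159 = 0.879 -> 88%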
