Prosecution Insights
Last updated: April 19, 2026
Application No. 18/662,826

HEAD MOUNTABLE DISPLAY

Non-Final OA §103
Filed: May 13, 2024
Examiner: WILSON, DOUGLAS M
Art Unit: 2622
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 75%, above average (320 granted / 427 resolved; +12.9% vs TC avg)
Interview Lift: +16.1% among resolved cases with interview (strong)
Avg Prosecution: 2y 9m (typical timeline; 25 applications currently pending)
Total Applications: 452 (career history, across all art units)
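
The headline numbers above are internally consistent. Here is a minimal sketch of the arithmetic, assuming the dashboard computes the allow rate directly from the granted/resolved counts and simply adds the interview lift in percentage points (an assumption; the tool's actual model is not shown):

```python
# Sketch of the dashboard arithmetic (assumed, not the tool's actual model).
granted, resolved = 320, 427

career_allow_rate = granted / resolved               # 0.749... -> shown as 75%
interview_lift = 0.161                               # +16.1 points per the chart
with_interview = career_allow_rate + interview_lift  # 0.910... -> shown as 91%
implied_tc_avg = career_allow_rate - 0.129           # "+12.9% vs TC avg" -> ~62%

print(f"career allow rate: {career_allow_rate:.1%}")  # 74.9%
print(f"with interview:    {with_interview:.1%}")     # 91.0%
print(f"implied TC avg:    {implied_tc_avg:.1%}")     # 62.0%
```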

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      1.9%     -38.1%
§103      56.5%    +16.5%
§102      22.5%    -17.5%
§112      14.4%    -25.6%

Tech Center averages are estimates (the chart's black line). Based on career data from 427 resolved cases.
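
The "vs TC avg" column reads as a simple percentage-point difference, so the Tech Center baselines can be recovered from the table; a quick sketch under that assumption (note every implied baseline works out to 40.0%, consistent with the baselines being estimates):

```python
# Recover the implied Tech Center baseline for each statute
# (assumes "vs TC avg" is a percentage-point difference).
examiner_rate = {"§101": 1.9, "§103": 56.5, "§102": 22.5, "§112": 14.4}
vs_tc_avg     = {"§101": -38.1, "§103": 16.5, "§102": -17.5, "§112": -25.6}

for statute, rate in examiner_rate.items():
    baseline = rate - vs_tc_avg[statute]   # e.g. 1.9 - (-38.1) = 40.0
    print(f"{statute}: examiner {rate}% vs implied TC avg {baseline:.1f}%")
```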

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09 January 2026 has been entered. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed 28 January 2026 have been fully considered but they are not persuasive. Regarding Claim 1, Applicant states Veiga (US 2016/0027216) fails to teach adjusting the field of view (FOV) of virtual reality content displayed on an HMD. The Examiner respectfully disagrees with Applicant's statement. Veiga teaches displaying a 3D viewport on an HMD [fig. 3 @310], and teaches at [0029]: “The HMD device 104 locates a virtual 3D viewport 310 that is interoperable with the 3D application so that a model 315 can be rendered in 3D in the viewport and in 2D on the monitor. The HMD device 104 can expose controls to enable the user to configure the viewport 310 in terms of its location, size, shape, and other characteristics in some implementations in order to tailor the viewport to particular needs.”

The remainder of Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Veiga (US 2016/0027216) in view of Kim (US 2021/0158715). All reference is to Veiga unless otherwise indicated.

Regarding Claim 1 (Currently Amended), Veiga teaches a head-mountable electronic device, comprising: a housing [fig. 13 @1300]; a camera configured to capture content external [¶0047, “A see-through display may be used in some implementations while an opaque (i.e., non-see-through) display using a camera-based pass-through or outward facing sensor, for example, may be used in other implementations … Display system 1300 further comprises one or more outward-facing image sensors 1306 configured to acquire images of a background scene and/or physical environment being viewed by a user”] to the housing [fig. 13 @1300]; an optical module secured to the housing, the optical module comprising: a display [fig. 13 @1302]; and an inward facing sensor [fig. 13 @1314; ¶0048, “one or more image sensors 1314, such as inward-facing sensors, that are configured to capture an image of each eyeball of the user”]; and a processor [fig. 14 @1320] electrically coupled to the camera, the inward facing sensor, and the display [¶0053, “The display system 1300 can further include a controller 1320 having a logic subsystem 1322 and a data storage subsystem 1324 in communication with the sensors, gaze detection subsystem 1310, display subsystem 1304, and/or other components through a communications subsystem 1326”]; wherein the processor is configured to cause the display [fig. 3 @110] to project mixed reality content [fig. 3 @300] including a first amount of video passthrough content [fig. 3 @210 and 215] captured by the camera [¶0027, “as shown in FIG. 1, a user 102 can employ an HMD device 104 to experience a mixed reality environment 100 that is rendered visually on an optics display”] and a second amount [size of 3D viewport (fig. 3 @310) in HMD field of view (fig. 3 @110) represents field of view of second amount of virtual reality content; ¶0029, “The HMD device 104 locates a virtual 3D viewport 310 that is interoperable with the 3D application so that a model 315 can be rendered in 3D in the viewport”] of virtual reality content [¶0028, “the monitor is incorporated into a mixed-reality environment 300, as shown in FIG. 3, and is visible to the user within the field of view 110 on the HMD device 104”]; and a field of view of the second amount of virtual reality content [size of viewport (fig. 3 @310)] is based on user input [¶0029, “The HMD device 104 can expose controls to enable the user to configure the viewport 310 in terms of its location, size, shape, and other characteristics in some implementations in order to tailor the viewport to particular needs”].

Veiga does not teach the user input is based on a detection of the inward facing sensor.

Kim teaches a user input [biometric eye measurement to determine if immersion level needs to be changed] is based on a detection of an inward facing sensor [¶0074, “In the meantime, the biometric sensor may include a sensor function for measuring at least one signal of … an eyeball … and the like. In the meantime, the sensor device 120 may be included in the VR device 110”; ¶0104, “The indirect information for evaluating the immersion level may include at least one of various biometric information which may be acquired from the user who uses the VR device”; ¶0114, “The operation for improving an immersion level may include an operation of changing an output of the VR device. For example, an amount of visual information among virtual reality information of the VR device may be changed”].

Regarding Claim 2 (Previously Presented), Veiga in view of Kim teaches the head-mountable electronic device of Claim 1, wherein the camera [fig. 13 @1306] is oriented downward toward a hand [while holding the device at chest level, 1306 is pointed toward the hand not holding the device] when a user is donning [the act of putting on a garment or piece of equipment: holding the device at chest level with the camera pointing down and the ear pieces viewed spread apart to ensure each will pass on either side of the user's head, then raising the device vertically until the nose bridge is level with the user's nose, then rotating 90 degrees and sliding the earpieces over the ears so the device frame sits on the user's nose] the head-mountable electronic device [fig. 13 @1300].

Regarding Claim 3 (Original), Veiga in view of Kim teaches the head-mountable electronic device of Claim 2, wherein: the camera [fig. 13 @1306] is a first camera; and the head-mountable electronic device further comprises a second camera configured to capture the content external to the housing [¶0047, “Display system 1300 further comprises one or more outward-facing image sensors 1306 configured to acquire images of a background scene and/or physical environment being viewed by a user”].

Regarding Claim 5 (Original), Veiga in view of Kim teaches the head-mountable electronic device of Claim 1, wherein: the inward facing sensor is configured to detect a facial feature [fig. 13 @1314; ¶0048, “one or more image sensors 1314, such as inward-facing sensors, that are configured to capture an image of each eyeball of the user”]; the inward facing sensor comprises a visual camera [fig. 13 @1314]; the second amount of virtual reality content [fig. 3, size of 3D viewport] is based on the facial feature [¶0029 teaches changing size of viewport based on user input]; and the facial feature includes a gaze direction [¶0039, “the sensor package can support gaze tracking 720 to ascertain a direction of the user's gaze 725 which may be used along with the head position and orientation data when implementing the present viewport”].

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Veiga in view of Kim and Lessman (US 2023/0214025). All reference is to Veiga unless indicated otherwise.

Regarding Claim 4 (Original), Veiga in view of Kim teaches the head-mountable electronic device of Claim 1, wherein the camera [fig. 13 @1306] is configured to capture a hand gesture including a position of the hand [Kim: ¶0074, “The sensor device 120 may sense an external environment … The sensor device may include at least one of a gesture sensor … and the like”].

Veiga in view of Kim does not teach a hand gesture including a position of the hand.

Lessman teaches a hand gesture [¶0011, “an XR device can utilize a camera to detect an orientation and/or motion of a hand of a user. Gestures can be performed in the user's field of view”] including a position [equivalent to orientation] of the hand.

Before the application was filed it would have been obvious to one of ordinary skill in the art to incorporate the concept of capturing a hand gesture using an external looking camera, as taught by Lessman, into the head-mounted electronic device, taught by Veiga in view of Kim, to allow the wearer of a head-mounted display to provide input without need for a mechanical or electrical input device.

Claims 6, 10-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Veiga in view of Strawn (US 2024/0248527). All reference is to Veiga unless otherwise indicated.
Regarding Claim 6 (Currently Amended), Veiga teaches a wearable electronic device, comprising: a housing [fig. 13]; a first environmental camera [fig. 13 @1306] configured to capture content external to the housing [¶0047, “Display system 1300 further comprises one or more outward-facing image sensors 1306 configured to acquire images of a background scene and/or physical environment being viewed by a user”]; a second environmental camera [¶0047 teaches one or more cameras]; an optical module secured to the housing, the optical module comprising: a display screen [fig. 13 @1302]; and an inward facing sensor configured to detect a facial feature [¶0048, “one or more image sensors 1314, such as inward-facing sensors, that are configured to capture an image of each eyeball of the user … a location of a user's pupil, as determined from image data gathered using the image sensor(s) 1314, may be used to determine a direction of gaze”]; and a processor [fig. 14 @1320] electrically coupled to the first environmental camera [fig. 13 @1306-1], the second environmental camera [¶0047; fig. 13 @1306-2], the inward facing sensor [¶0048, 1314], and the display screen [fig. 13 @1302]; wherein: the processor is configured to cause the display screen to project mixed reality content [fig. 3 @300; ¶0028, “the monitor is incorporated into a mixed-reality environment 300, as shown in FIG. 3, and is visible to the user within the field of view 110 on the HMD device 104”] including an amount of video passthrough content captured by the first environmental camera [¶0047, “Display system 1300 further comprises one or more outward-facing image sensors 1306 configured to acquire images of a background scene and/or physical environment being viewed by a user”].

Veiga does not teach a camera to detect a hand gesture; and the amount of video passthrough content is based on a combination of the hand gesture and the facial feature.

Strawn teaches a camera to detect a hand gesture [¶0060, “The user 402 may interact with the displayed mixed reality (MR) content 406 through gaze detection in conjunction with secondary inputs such as gesture detection 408, which may detect hand and/or finger gestures 410, and input device feedback such as a wrist band device 412, among other things”]; and the amount [zooming and/or panning changes the amount of displayed content] of video passthrough content [¶0061, “In some examples, interaction with the displayed mixed reality (MR) content 406 may include selection of a portion of the displayed mixed reality (MR) content 406, modifications on the selected portion such as rotation, zooming, panning, etc”] is based on a combination [¶0063, “FIG. 5A illustrates control of interaction with displayed content based on eye tracking in conjunction with finger gestures or hand gestures, according to examples. Diagram 500A shows a combination of gaze detection 502 and finger gestures 504 and a combination of gaze detection 502 and hand gestures 506 to interact with displayed mixed reality (MR) content”] of the hand gesture [fig. 5A @506] and the facial feature [fig. 5A @502].

Before the application was filed it would have been obvious to one of ordinary skill in the art to incorporate the concept of controlling displayed mixed reality content using a combination of sensed user inputs, as taught by Strawn, into the wearable electronic device taught by Veiga, in order to increase the number of user interactions that can be used to control a displayed image while an HMD is being worn.

Regarding Claim 10 (Original), Veiga in view of Strawn teaches the wearable electronic device of Claim 6, wherein the processor is configured to cause the passthrough content to be superimposed over virtual reality content projected by the display screen [¶0027, “The field of view (represented by the dashed area 110 in FIG. 1) of the cityscape provided by HMD device 104 changes as the user moves through the environment and the device can render virtual elements over the real world view. Here, the virtual elements include a tag 115 that identifies a business and directions 120 to a place of interest in the environment”].

Regarding Claim 11 (Currently Amended), Veiga teaches a head-mountable display device, comprising: a housing defining an external surface [fig. 16 @1504]; a frame [fig. 16 @1605] coupled to the housing; an outward facing sensor secured to the frame [¶0047, “A see-through display may be used in some implementations while an opaque (i.e., non-see-through) display using a camera-based pass-through or outward facing sensor”]; an optical module secured to the frame [fig. 17 @1702], the optical module comprising: a display configured to project light toward an eye of a user donning the head-mountable display device [¶0059, “an optics display subassembly 1702 (shown in the disassembled view in FIG. 17)”]; and an inward facing sensor configured to detect a facial feature of the user [fig. 13 @1314; ¶0048, “one or more image sensors 1314, such as inward-facing sensors, that are configured to capture an image of each eyeball of the user”]; and a processor [fig. 14 @1320] electrically coupled to the camera, the inward facing sensor, and the display [¶0053, “The display system 1300 can further include a controller 1320 having a logic subsystem 1322 and a data storage subsystem 1324 in communication with the sensors, gaze detection subsystem 1310, display subsystem 1304, and/or other components through a communications subsystem 1326”]; wherein the processor is configured to cause the display [fig. 3 @300; ¶0028, “the monitor is incorporated into a mixed-reality environment 300, as shown in FIG. 3, and is visible to the user within the field of view 110 on the HMD device 104”] to simultaneously project a first immersion level [construed as a first amount] of real-world content [¶0028, “real world environment 200 that the user occupies when using the HMD device 104 can contain various real world objects including a PC 205, monitor 210, and work surface 215 captured by the outward facing sensor”; ¶0047, “A see-through display may be used in some implementations while an opaque (i.e., non-see-through) display using a camera-based pass-through or outward facing sensor, for example, may be used in other implementations”] and a second immersion level [construed as a second amount] of virtual content [size of 3D viewport (fig. 3 @310) in HMD field of view (fig. 3 @110) represents field of view of second amount of virtual reality content; ¶0029, “The HMD device 104 locates a virtual 3D viewport 310 that is interoperable with the 3D application so that a model 315 can be rendered in 3D in the viewport”], wherein the second immersion level changes in response to second user input [¶0029, “The HMD device 104 can expose controls to enable the user to configure the viewport 310 in terms of its location, size, shape, and other characteristics in some implementations in order to tailor the viewport to particular needs”].

Veiga does not teach: the outward facing sensor is configured to capture a hand gesture; the first amount of real world content changes in response to the hand gesture and the facial feature; and the second user input comprises the hand gesture and the facial feature.

Strawn teaches an outward facing sensor is configured to capture a hand gesture [¶0061, “the camera may be an outward facing camera to capture a hand gesture, a finger gesture, or a body movement”]; the first amount [zooming and/or panning changes the amount of displayed content] of real world content changes in response to the hand gesture and the facial feature [¶0061, “In some examples, interaction with the displayed mixed reality (MR) content 406 may include selection of a portion of the displayed mixed reality (MR) content 406, modifications on the selected portion such as rotation, zooming, panning, etc”]; and the second user input comprises the hand gesture and the facial feature [¶0063, “FIG. 5A illustrates control of interaction with displayed content based on eye tracking in conjunction with finger gestures or hand gestures, according to examples. Diagram 500A shows a combination of gaze detection 502 and finger gestures 504 and a combination of gaze detection 502 and hand gestures 506 to interact with displayed mixed reality (MR) content”].

Regarding Claim 12 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 11, wherein: the outward facing sensor is oriented to detect an environment external to the housing [Strawn: ¶0061, “the camera may be an outward facing camera to capture a hand gesture, a finger gesture, or a body movement”]; and the inward facing sensor is oriented toward a face of the user [Strawn: ¶0061, “the image sensor may be an inward facing camera to capture an eye gesture”].

Regarding Claim 13 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 12, wherein: the outward facing sensor [Strawn: fig. 3A @350C] is oriented in a first direction [forward]; and the inward facing sensor [Strawn: fig. 3A @312] is oriented in a second direction [rearward] opposite the first direction.

Regarding Claim 15 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 11, wherein the gesture includes a position of the hand [¶0065, “the hand gestures 506 may include different positions of the hand (e.g., palm-up, palm-down), sideways or vertical movement of the hand, rotation of the hand, a fist formation, open hand formation”].

Regarding Claim 16 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 11, wherein the facial feature includes a gaze direction [Strawn: ¶0035, “The eye tracking unit 130 may include one or more eye tracking systems. As used herein, ‘eye tracking’ may refer to determining an eye's position or relative position, including orientation, location, and/or gaze of a user's eye”].
Regarding Claim 17 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 11, the display comprising a pixelated screen [Strawn: ¶0027, “the display electronics 122 may include any number of pixels to emit light”].

Regarding Claim 18 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 17, wherein the inward facing sensor [Strawn: fig. 3A @312] is disposed adjacent the pixelated screen [Strawn: ¶0027; fig. 3A @310].

Regarding Claim 19 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 11, wherein: the outward facing sensor is a first outward facing sensor [Strawn: fig. 3A @350E]; and the head-mountable display device further comprises a second outward facing sensor [Strawn: fig. 3A @350C].

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Veiga in view of Strawn and Xiao (US 2025/0106376). All reference is to Veiga unless indicated otherwise.

Regarding Claim 7 (Original), Veiga in view of Strawn teaches the wearable electronic device of Claim 6, wherein the inward facing sensor [¶0048, “one or more image sensors 1314, such as inward-facing sensors, that are configured to capture an image of each eyeball of the user”] is oriented to face a third direction [rearward].

Veiga in view of Strawn does not teach: the first environmental camera is oriented to face a first direction; the second environmental camera is oriented to face a second direction different than the first direction; and the third direction is different than the first direction and the second direction.

Xiao teaches the first environmental camera [fig. 2 @231] is oriented to face a first direction [forward]; the second environmental camera [fig. 2 @228] is oriented to face a second direction [forward and downward] different than the first direction; and the third direction [rearward] is different than the first direction [forward] and the second direction [forward-downward].

Before the application was filed it would have been obvious to one of ordinary skill in the art to face a first external camera forward and face a second external camera forward and downward, as taught by Xiao, in the wearable electronic device taught by Veiga in view of Strawn, in order to position the first external camera to capture the user's viewpoint, the second external camera to capture hand gestures, and the inward facing sensor to capture facial features or eye movements to determine the user's gaze direction.

Regarding Claim 8 (Original), Veiga in view of Strawn and Xiao teaches the wearable electronic device of Claim 7, wherein when a user dons [the act of putting on a garment or piece of equipment] the wearable electronic device: the first direction [forward] includes a forward direction; the second direction [forward and downward] includes a downward direction; and the third direction [rearward] includes a rearward direction.

Regarding Claim 9 (Original), Veiga in view of Strawn and Xiao teaches the wearable electronic device of Claim 7, wherein: the display screen [fig. 13 @1302; ¶0047] is oriented to face the third direction [rearward]; and the inward facing sensor [fig. 13 @1314] is disposed adjacent to the display screen [fig. 13 @1302].

Claims 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Veiga in view of Strawn and Garcia (US 2023/0215084). All reference is to Veiga unless indicated otherwise.
Regarding Claim 14 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 12, wherein the outward facing sensor detects a hand position of the user [Strawn: ¶0061, “The head-mounted display 404 (near-eye display device) may include an image sensor to capture hand, finger, eye gestures or body movements (arm, head, torso, leg, etc.) … the camera may be an outward facing camera to capture a hand gesture”; ¶0065, “the hand gestures 506 may include different positions of the hand”].

Veiga in view of Strawn does not teach the outward facing sensor is oriented downward.

Garcia teaches an outward facing sensor is oriented downward [¶0026, “Referring again to FIG. 1A, the HMD 104 may have external-facing cameras … the HMD 104 may have any number of cameras facing any direction … a downward-facing camera to capture a portion of the user's face and/or body”].

Before the application was filed it would have been obvious to one of ordinary skill in the art to incorporate the concept of orienting an external facing camera in the downward direction, as taught by Garcia, into the head-mounted electronic device taught by Veiga in view of Strawn, in order to align the camera optical axis with the expected gesture area.

Regarding Claim 20 (Original), Veiga in view of Strawn teaches the head-mountable display device of Claim 19.

Veiga in view of Strawn does not teach: the first outward facing sensor is oriented in a forward direction when the user dons the head-mountable display device; and the second outward facing sensor is oriented in a downward direction when the user dons the head-mountable display device.

Garcia teaches a first outward facing sensor is oriented in a forward direction when the user wears the head-mountable display device; and a second outward facing sensor is oriented in a downward direction when the user wears the head-mountable display device [¶0026, “Referring again to FIG. 1A, the HMD 104 may have external-facing cameras … the HMD 104 may have any number of cameras facing any direction … a downward-facing camera to capture a portion of the user's face and/or body”].

Before the application was filed it would have been obvious to one of ordinary skill in the art to incorporate the concept of orienting an external facing camera in the downward direction, as taught by Garcia, into the head-mounted electronic device taught by Veiga in view of Strawn, in order to align the camera optical axis with the expected gesture area.

Conclusion

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Douglas Wilson, whose telephone number is (571) 272-5640. The Examiner can normally be reached 1000-1700 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Patrick Edouard, can be reached at 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Douglas Wilson/
Primary Examiner, Art Unit 2622
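
For readers tracking the technical dispute rather than the procedure: the crux of Claim 1 is a display that mixes camera passthrough with virtual content whose field of view is driven by input from an inward-facing (eye) sensor. Below is a minimal sketch of that claimed behavior; all names are hypothetical and the code is an illustration of the claim language, not code from any cited reference:

```python
# Hypothetical illustration of the Claim 1 behavior at issue: the field of
# view given to virtual content is driven by a reading from an inward-facing
# (eye) sensor, and passthrough fills the remainder of the display.
from dataclasses import dataclass

@dataclass
class MixedRealityFrame:
    passthrough_fov_deg: float  # "first amount": video passthrough from the camera
    virtual_fov_deg: float      # "second amount": virtual reality content

def compose_frame(eye_engagement: float,
                  display_fov_deg: float = 100.0) -> MixedRealityFrame:
    """eye_engagement in [0, 1] stands in for an immersion estimate derived
    from eye-sensor data (the role Kim's biometric measurement is cited for)."""
    virtual = display_fov_deg * max(0.0, min(1.0, eye_engagement))
    return MixedRealityFrame(passthrough_fov_deg=display_fov_deg - virtual,
                             virtual_fov_deg=virtual)

print(compose_frame(0.3))  # mostly passthrough
print(compose_frame(0.9))  # mostly virtual content
```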

Prosecution Timeline

May 13, 2024: Application Filed
May 29, 2025: Non-Final Rejection — §103
Sep 02, 2025: Response Filed
Sep 18, 2025: Examiner Interview Summary
Sep 18, 2025: Applicant Interview (Telephonic)
Nov 07, 2025: Final Rejection — §103
Jan 06, 2026: Examiner Interview Summary
Jan 06, 2026: Applicant Interview (Telephonic)
Jan 09, 2026: Response after Non-Final Action
Jan 28, 2026: Request for Continued Examination
Jan 31, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103
Mar 27, 2026: Applicant Interview (Telephonic)
Mar 27, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596431: VIRTUAL REALITY CONTENT DISPLAY SYSTEM AND VIRTUAL REALITY CONTENT DISPLAY METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596279: ACTIVE MATRIX SUBSTRATE AND A LIQUID CRYSTAL DISPLAY
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12583317: INPUT DEVICE FOR A VEHICLE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585480: USE OF GAZE TECHNOLOGY FOR HIGHLIGHTING AND SELECTING DIFFERENT ITEMS ON A VEHICLE DISPLAY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579947: DISPLAY DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 91% (+16.1%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month