Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,930

Handheld Electronic Devices with Contextual Input-Output Capabilities

Final Rejection — §102, §103

Filed: Jan 29, 2024
Examiner: NGUYEN, QUYNH H
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (941 granted / 1078 resolved), +25.3% vs Tech Center average — above average
Interview Lift: +17.2% across resolved cases with an interview — a strong lift
Typical Timeline: 2y 8m average prosecution; 29 applications currently pending
Career History: 1107 total applications across all art units
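The headline figures above can be reproduced from the raw counts. A minimal sketch (illustrative Python, not part of any analytics tool; the counts and the 25.3-point delta are copied from the cards above):

```python
# Career allow rate from the raw counts reported above.
granted, resolved = 941, 1078
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 87.3%, shown rounded to 87%

# The +25.3% delta implies a Tech Center average of about 62%.
tc_average = allow_rate - 25.3
print(f"Implied TC average: {tc_average:.1f}%")  # 62.0%
```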

Statute-Specific Performance

Examiner rate vs Tech Center average, by statute:

§101: 18.6% (-21.4% vs TC avg)
§103: 42.7% (+2.7% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 1078 resolved cases.
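The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline. A quick check (illustrative Python; figures copied from the table above):

```python
# (examiner rate %, delta vs TC average in percentage points)
rates = {
    "101": (18.6, -21.4),
    "103": (42.7, +2.7),
    "102": (7.4, -32.6),
    "112": (10.3, -29.7),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # examiner rate minus delta = TC baseline
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
# Every statute implies the same TC baseline of 40.0%.
```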

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

DETAILED ACTION

Claim Rejections - 35 USC § 102

2. Claims 1-3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vaught et al. (US 2013/0187835).

As to claim 1, Vaught teaches a system comprising: a handheld electronic device having a microphone configured to detect a voice command ([0046]-[0047]: computing system 1100 may take the form of a mobile computing device such as a smart phone or media player, a mobile communication device, a see-through display system, etc., and may also optionally include user input devices such as keyboards, mice, game controllers, cameras, microphones, and/or touch screens, as well as the other input devices described for the see-through display system; [0020]: the see-through (head-mounted) display system sends a user input command to the handheld mobile device selecting the weather tile 208, resulting in the display of weather information on the external display screen 102 of the mobile device 104); and a head-mounted device (head-mounted display system 100) having a display configured to display an image (Figs. 2-4, 6 and related text) in response to the voice command ([0023]-[0024]: display system 100 displays a menu of items of information that a user browses via gaze control, voice control, or in any other suitable manner, with the user's intent determined via voice recognition analysis of audio input received from the user), wherein the location of the image is based on a location of the handheld electronic device (Figs. 2-4 and 6 and related text: referring to FIG. 2, gaze lines 200, 202 are determined to lead from the eyes of the user to an email user interface tile 204, and in response the see-through display system 100 retrieves and displays a list 206 of messages currently in the inbox of a linked email account right next to the user's handheld mobile device 104. Referring to FIG. 3, the user has ceased gazing at the email user interface tile 204 and is instead gazing at a weather user interface tile 208 displayed on the external display screen 102; in response, the see-through display system 100 retrieves and displays a current weather report 210 right next to the user's mobile device 104. FIG. 4 shows the display of additional weather-related user interface items 300 on the see-through display; a user may interact with these items via gaze, and selection of one may result in the display of relevant information by, or right next to, the mobile device 104. Referring to FIG. 6, the see-through display system 100 may determine by gaze and image analysis that the user is gazing at the actress 502 and, in response, obtain and display information 504 regarding the actress. Therefore, Vaught shows or suggests that a location of the image is based on a location of the handheld electronic device; when the user gazes at the wall, no image is displayed next to the user's mobile device 104).

As to claim 2, Vaught teaches the system defined in claim 1 wherein the handheld electronic device comprises a motion sensor that receives gesture input ([0046]-[0047]: computing system 1100 may take the form of a mobile computing device such as a smart phone or media player, a mobile communication device, a see-through display system, etc., and may optionally include user input devices such as keyboards, mice, game controllers, cameras, microphones, and/or touch screens; [0035]: motion sensors 818 detect movements of a user's head and may be employed as user input devices, such that a user interacts with the see-through display system via gestures of the eye, neck, and/or head, as well as via verbal commands).

As to claim 3, Vaught teaches the system defined in claim 2 wherein the image comprises a virtual control interface manipulated using the gesture input ([0033]: the gaze detection subsystem 810 comprises one or more glint sources 812, such as infrared light sources, configured to cause a glint of light to reflect from each eyeball of a user, and one or more image sensors 814 configured to capture an image of each eyeball. Changes in the glints from the user's eyeballs, as determined from image data gathered via image sensor(s) 814, may be used to determine a direction of gaze. Further, a location at which gaze lines projected from the user's eyes intersect the external display may be used to determine an object at which the user is gazing (e.g. a virtual object displayed on an external display). The gaze detection subsystem 810 may have any suitable number and arrangement of light sources and image sensors. [0035]: the motion sensors 818, as well as the microphone(s) 808 and the gaze detection subsystem 810, also may be employed as user input devices, such that a user may interact with see-through display system 800 via gestures of the eye, neck and/or head, as well as via verbal commands).

Claim Rejections - 35 USC § 103

3. Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Vaught et al. (US 2013/0187835) in view of North et al. (US 2013/0342629).
As to claim 4, Vaught does not explicitly discuss the system defined in claim 1 wherein the image comprises a live video feed from a video call. North teaches a live video feed, a still image, a screen capture image from a video, or a graphically designed or altered image, the modified image being derived from the captured video stream at the devices that are part of the video call ([0059]). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of North into the teachings of Vaught for the purpose of allowing consumers to communicate with one another using a variety of media including voice, text, and video.

As to claim 5, Vaught teaches the system defined in claim 4 wherein the handheld electronic device comprises a camera ([0047]). North teaches a user interface including an image sensor, i.e. a camera to capture images during a video call session ([0040], [0048]).

As to claim 6, North teaches the system defined in claim 5 wherein the display is configured to display the captured images ([0048], [0051]).

4. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Vaught et al. (US 2013/0187835) in view of Otsuka (US 2023/0139216).

As to claim 7, Vaught does not explicitly discuss the system defined in claim 1 further comprising control circuitry configured to control whether the image is viewable from an additional head-mounted device. However, Vaught teaches sending image information regarding the object to a remote computing device (claim 2). Otsuka teaches an image processing device that causes a flat panel display to display an image in a similar field of view, thereby allowing other people to view the image that the user wearing the head-mounted display is viewing ([0026]). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Otsuka into the teachings of Vaught for the purpose of allowing other people to view the image that the user wearing the head-mounted display is viewing.

5. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Vaught et al. (US 2013/0187835).

As to claim 8, Vaught does not explicitly discuss the system defined in claim 1 wherein the handheld electronic device comprises a motion sensor that tracks motion of the handheld electronic device and wherein the image moves on the display in response to motion of the handheld electronic device. However, Vaught teaches that computing system 1100 may take the form of a mobile computing device such as a smart phone or media player, a mobile communication device, a see-through display system, etc., and may optionally include user input devices such as keyboards, mice, game controllers, cameras, microphones, and/or touch screens, as well as the other input devices described for the see-through display system ([0046]-[0047]); and that the see-through display system 800 further includes one or more motion sensors 818 to detect movements of a user's head when the user is wearing the see-through display system 800 ([0035]). Hence it would have been obvious that the motion sensor tracks motion of the handheld electronic device, since the user wears the see-through display system on his or her head and the see-through display system moves when the user moves; and that the image moves on the display in response to motion of the handheld electronic device, since when the user moves, gaze lines are determined to lead from the eyes of the user to the user interface displayed on the mobile device and different images are displayed on the handheld electronic device (Figs. 2-4, 6 and related text).

6. Claims 9-10, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vaught et al. (US 2013/0187835) in view of Prada Gomez et al. (US Patent 8,179,604).

As to claim 9, Vaught does not explicitly discuss the system defined in claim 1 wherein the handheld electronic device comprises a visual marker with which the head-mounted device tracks the location of the handheld electronic device. Prada Gomez teaches a wearable marker comprising an infrared visual marker (abstract; col. 1, line 67 through col. 2, line 1 and lines 65-66) with a surface pattern configured to be visible to an IR detection device such as an IR camera or other IR detector (col. 3, lines 5-8), and tracking the position of the hand-wearable item relative to the wearable HMD by measuring the position and motion of the particular surface pattern relative to the wearable HMD (col. 1, lines 42-45; col. 2, lines 8-12). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Prada Gomez into the teachings of Vaught for the purpose of tracking the position and motion of the handheld device and recognizing known patterns of motion that correspond to known hand gestures.

As to claim 10, Prada Gomez teaches the system defined in claim 9 wherein the wearable marker comprises an infrared visual marker (abstract; col. 1, line 67 through col. 2, line 1 and lines 65-66) and wherein the head-mounted device comprises an infrared camera (col. 3, lines 5-8; col. 5, lines 32-35) that detects the infrared marker.
As to claim 16, Vaught teaches a head-mounted device comprising: a microphone configured to detect a voice command ([0032]: the see-through display system comprises one or more microphones 808 configured to detect sounds such as voice commands from a user); and a display configured to display a virtual image in response to the voice input ([0020]: the see-through display system (the head-mounted display system) allows a user to select a particular user interface control displayed on the external display screen; in the context of the user gazing at the weather tile, the head-mounted display sends a user input command to the mobile device selecting the weather tile, resulting in the display of weather information; [0023]-[0024]: the see-through display system displays information about the actress, which a user browses via gaze control, voice control, or in any suitable manner, with the user's intent to interact with the displayed object determined via voice recognition analysis of audio input received from the user and from sensor input indicating that the user's gaze is directed at the displayed object), wherein the virtual image is anchored to the handheld electronic device (Figs. 2-4 and 6 and related text, showing images and/or information displayed near or next to the handheld electronic device 104 at which the user gazes; for example, in FIG. 2, gaze lines 200, 202 are determined to lead from the eyes of the user to an email user interface tile 204, and in response the see-through display system 100 retrieves and displays a list 206 of messages currently in the inbox of a linked email account right next to the user's mobile device 104). Vaught does not explicitly discuss an infrared camera configured to track a location of a handheld electronic device. Prada Gomez teaches a surface pattern configured to be visible to an IR detection device such as an IR camera or other IR detector (col. 3, lines 5-8), and tracking the position of the hand-wearable item relative to the wearable HMD by measuring the position and motion of the particular surface pattern relative to the wearable HMD (col. 1, lines 42-45; col. 2, lines 8-12). It would have been obvious before the effective filing date of the claimed invention, it being well known to track the location of a device with an infrared camera, to incorporate the teachings of Prada Gomez into the teachings of Vaught for the purpose of tracking the position and motion of the handheld device and recognizing known patterns of motion that correspond to known hand gestures.

As to claim 17, Vaught teaches that the head-mounted display system sends a user input command to the mobile device selecting the weather tile, thereby resulting in the display of weather information (current weather conditions) on the external display of the mobile device (Figs. 3-4, [0020]).

As to claim 20, Vaught teaches the head-mounted device defined in claim 16 wherein a position of the virtual image on the display changes according to changes in the location of the handheld electronic device (Figs. 3-4, 6 and related text).

7. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Vaught and Prada Gomez in view of Gharpure et al. (US 2021/0183379).

As to claim 18, Vaught and Prada Gomez do not explicitly teach the head-mounted device defined in claim 17 further comprising a speaker configured to play weather sounds in response to the weather-related voice input. Gharpure teaches such a speaker ([0031]: a computer device may include a smart speaker, mobile device, wearable device, or smart watch that functions as a voice command device; [0034]: physical effects of the environment that can be perceived by a user or a computing device include human or animal sounds, e.g. voices or noises, and atmospheric sounds, e.g. thunder, rain, wind, or other weather sounds). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Gharpure into the teachings of Vaught and Prada Gomez for the purpose of detecting a current reading location and instructing one or more physical effect devices to enhance the listening experience.

8. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Vaught et al. (US 2013/0187835) and Prada Gomez et al. (US Patent 8,179,604) in view of Rodriguez, II (US 2023/0333378).

As to claim 19, Vaught and Prada Gomez do not explicitly teach the head-mounted device defined in claim 16 wherein the virtual image comprises a virtual control interface for controlling an electronic device and wherein an item on the virtual control interface is selected using gesture input. Rodriguez teaches such a head-mounted device ([0097], [0099]; claim 10). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Rodriguez into the teachings of Vaught and Prada Gomez for the purpose of allowing the user to control both the augmented reality eyewear display and the smartwatch display by making hand gestures in the air in close proximity to either the smartwatch or to the graphics displayed by the eyewear.
Response to Arguments

9. Applicant's arguments, filed 12/30/25, with respect to claims 11-15 have been fully considered and are persuasive. The rejections of claims 11-15 have been withdrawn. Applicant's arguments filed 12/30/25 with respect to claims 1-10 and 16-20 have been fully considered but are not persuasive.

With respect to claim 1, Applicant argues that Vaught fails to show or suggest that a location of the image is based on a location of the handheld electronic device. Examiner respectfully submits that Vaught teaches, referring to FIG. 2, that gaze lines 200, 202 are determined to lead from the eyes of the user to an email user interface tile 204, and that in response the see-through display system 100 retrieves and displays a list 206 of messages currently in the inbox of a linked email account right next to the user's mobile device 104. Referring to FIG. 3, the user has ceased gazing at the email user interface tile 204 and is instead gazing at a weather user interface tile 208 displayed on the external display screen 102; in response, the see-through display system 100 retrieves and displays a current weather report 210 right next to the user's mobile device 104. FIG. 4 shows the display of additional weather-related user interface items 300 on the see-through display; a user may interact with these items via gaze, and selection of one may result in the display of relevant information by, or right next to, the mobile device 104. Referring to FIG. 6, the see-through display system 100 may determine by gaze and image analysis that the user is gazing at the actress 502 and, in response, obtain and display information 504 regarding the actress. Therefore, Vaught shows or suggests that a location of the image is based on a location of the handheld electronic device.

With respect to claim 16, Applicant argues that Vaught fails to show or suggest images displayed anchored to the handheld electronic device, and that Prada Gomez uses an infrared camera to track an infrared reflective surface on a wearable marker such as a ring, a bracelet, an artificial fingernail configured to be affixed to a fingernail, a decal configured to be affixed to a fingernail, or a glove, among other possible wearable items; that the wearable marker is a passive item that does not include any electronics; and that an infrared camera that tracks the location of a passive wearable item without electronics is not equivalent to an infrared camera that tracks a location of a handheld electronic device. Examiner respectfully submits that Vaught teaches in Figs. 2-4 and 6 and related text that images and/or information are displayed near or next to the handheld electronic device 104 at which the user gazes; for example, as can be seen in FIG. 2, gaze lines 200, 202 are determined to lead from the eyes of the user to an email user interface tile 204, and in response the see-through display system 100 retrieves and displays a list 206 of messages currently in the inbox of a linked email account right next to the user's mobile device 104. Prada Gomez teaches a surface pattern configured to be visible to an IR detection device such as an IR camera or other IR detector (col. 3, lines 5-8), and tracking the position of the hand-wearable item relative to the wearable HMD by measuring the position and motion of the particular surface pattern relative to the wearable HMD (col. 1, lines 42-45; col. 2, lines 8-12). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Prada Gomez into the teachings of Vaught for the purpose of tracking the position and motion of the handheld device and recognizing known patterns of motion that correspond to known hand gestures. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Allowable Subject Matter

10. Claims 11-15 are allowed.

Conclusion

11. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUYNH H NGUYEN, whose telephone number is (571) 272-7489. The examiner can normally be reached Monday-Friday, 7:30 AM-3:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUYNH H NGUYEN/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Jan 29, 2024 — Application Filed
Oct 28, 2025 — Non-Final Rejection (§102, §103)
Dec 11, 2025 — Applicant Interview (Telephonic)
Dec 11, 2025 — Examiner Interview Summary
Dec 30, 2025 — Response Filed
Mar 07, 2026 — Final Rejection (§102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591740 — METHODS AND SYSTEMS FOR GENERATING TEXTUAL FEATURES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12567409 — RESTRICTING THIRD PARTY APPLICATION ACCESS TO AUDIO DATA CONTENT (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566920 — System and Method to Generate and Enhance Dynamic Interactive Applications from Natural Language Using Artificial Intelligence (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563141 — SYSTEM AND METHOD OF CONNECTING A CALLER TO A RECIPIENT BASED ON THE RECIPIENT'S STATUS AND RELATIONSHIP TO THE CALLER (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554761 — DATA SOURCE CURATION FOR LARGE LANGUAGE MODEL (LLM) PROMPTS (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+17.2%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 1078 resolved cases by this examiner. Grant probability is derived from the career allow rate.
