Prosecution Insights
Last updated: April 19, 2026
Application No. 18/665,189

Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments

Non-Final OA · §101, §103
Filed: May 15, 2024
Examiner: TRAN, TAM T
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (318 granted / 397 resolved) · +25.1% vs TC avg · above average
Interview Lift: +11.9% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 2y 5m average prosecution · 18 applications currently pending
Career History: 415 total applications across all art units
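
The headline numbers in this panel relate by simple arithmetic. The sketch below is a rough reconstruction under stated assumptions (allow rate taken as granted over resolved, interview lift applied additively); it is not the dashboard's documented methodology, and the function names are invented for illustration.

```python
# Rough reconstruction of how the headline examiner figures appear to relate.
# Assumptions: allow rate = granted / resolved, and the interview lift is
# applied additively; the dashboard's actual methodology is not documented here.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved


def grant_probability_with_interview(base_rate: float, interview_lift: float) -> float:
    """Grant probability if the interview lift simply adds to the base rate (capped at 100%)."""
    return min(base_rate + interview_lift, 1.0)


base = career_allow_rate(318, 397)                       # ~0.801 -> shown as 80%
lifted = grant_probability_with_interview(base, 0.119)   # ~0.920 -> shown as 92%
print(f"career allow rate: {base:.1%}")                  # career allow rate: 80.1%
print(f"with interview:    {lifted:.1%}")                # with interview:    92.0%
```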

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates · Based on career data from 397 resolved cases

Office Action

§101 §103
DETAILED ACTION
This Office Action is in response to Application 18/665,189, filed on 05/15/2024. In the instant application, claims 1, 31 and 32 are independent claims; claims 1-32 have been examined and are pending. This action is made non-final.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings
The drawings submitted with Application 18/665,189 are acceptable.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/19/2025 was filed before the mailing date of the first office action on the merits. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Allowable Subject Matter
Claims 4-8, 10-15, 17-18, 22-24 and 27-28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 31 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because claim 31 recites "A computer-readable storage medium." Applicant has failed to define or limit the claimed "computer-readable storage medium." Therefore, it would be reasonable to interpret the claimed "computer-readable storage medium" to comprise a signal or a carrier wave, neither of which falls into one of the four statutory categories of invention.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 16, 19 and 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over STRAWN et al. ("Strawn," US 2024/0248527), filed on January 24, 2023, in view of Rockel ("Rockel," US 2022/0101613), published on March 31, 2022.

Regarding claim 1, Strawn teaches: 1. A method, comprising: at a computer system that is in communication with a display generation component and with one or more input devices, including a remote input device (Strawn: ¶0018 and Fig. 4; mixed reality user interactions may be enabled through a combination of eye tracking and secondary inputs such as finger gestures, hand gestures, eye gestures, body movements, wrist band device input, handheld controller input, and similar ones): while a view of an environment is visible via the display generation component (Strawn: ¶0071 and Fig. 6; mixed reality content such as a 3D interactive map is displayed), detecting a first motion input that includes movement of the remote input device in a physical environment (Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs. ¶0069; the wrist band device 518 (e.g., a smart watch, a wrist band controller, etc.) may detect hand, arm, or even finger gestures through one or more sensors and provide a detected gesture to the head-mounted display); in response to detecting the first motion input: in accordance with a determination that a gaze detected by the computer system was directed to a first object when the first motion input was detected (Strawn: ¶0070; a processor on the HMD 512 may receive gaze detection input and secondary input (e.g., through wireless means from a separate device or through gesture detection on the HMD 512) and allow the user to interact with the displayed content. ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input), moving the first object in the environment in accordance with the first motion input (Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs); and [in accordance with a determination that the gaze detected by the computer system was not directed to the first object when the first motion input was detected, forgoing moving the first object in the environment in accordance with the first motion input].

Strawn does not explicitly teach: in accordance with a determination that the gaze detected by the computer system was not directed to the first object when the first motion input was detected, forgoing moving the first object in the environment in accordance with the first motion input. However, Rockel teaches a device and method for interacting with three-dimensional environments. Rockel further teaches: in accordance with a determination that the gaze detected by the computer system was not directed to the first object when the first motion input was detected, forgoing moving the first object in the environment in accordance with the first motion input (Rockel: ¶0250; in response to detecting the movement of the first set of fingers relative to the portion of the hand connected to the first set of fingers and in accordance with a determination that the gaze input is not directed to the first user interface object (e.g., when the gaze input fails to meet predefined stability and duration criteria at the first position, when the gaze input has shifted to another portion of the environment, etc.), the computer system forgoes performance of the first or second operation, and does not change the appearance of the first user interface object).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Rockel and Strawn in front of them, to include the method of forgoing operations when the gaze is no longer focused on the object, as taught by Rockel, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide computer-generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user (Rockel: ¶0005).

Regarding claim 2, Strawn and Rockel teach the method of claim 1. Strawn and Rockel also teach the method including: in response to detecting the first motion input: in accordance with a determination that the gaze detected by the computer system was directed to a second object, different from the first object, when the first motion input was detected, moving the second object in the environment in accordance with the first motion input without moving the first object in accordance with the first motion input (Rockel: ¶0125; when there are a first user interface object and a second user interface object in the first view 7202 of the three-dimensional environment, the hand movement of the user will affect a respective user interface object when the user's gaze is detected at a position of the respective user interface object with sufficient stability and/or duration. Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs. ¶0069; the wrist band device 518 (e.g., a smart watch, a wrist band controller, etc.) may detect hand, arm, or even finger gestures through one or more sensors and provide a detected gesture to the head-mounted display. Note: the movement of the wrist band device is interpreted as a first motion input).

Regarding claim 3, Strawn and Rockel teach the method of claim 2. Strawn and Rockel also teach the method including: in response to detecting the first motion input: in accordance with a determination that the gaze detected by the computer system was directed to the first object when the first motion input was detected, moving the first object in the environment in accordance with the first motion input without moving the second object in accordance with the first motion input (Rockel: ¶0125; when there are a first user interface object and a second user interface object in the first view 7202 of the three-dimensional environment, the hand movement of the user will affect a respective user interface object when the user's gaze is detected at a position of the respective user interface object with sufficient stability and/or duration. Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs. ¶0069; the wrist band device 518 (e.g., a smart watch, a wrist band controller, etc.) may detect hand, arm, or even finger gestures through one or more sensors and provide a detected gesture to the head-mounted display).

Regarding claim 16, Strawn and Rockel teach the method of claim 1. Strawn and Rockel also teach: wherein: the first object displays content of a first application (Strawn: ¶0042; the virtual reality engine may execute applications within the artificial reality system environment and receive position information of the near-eye display. The virtual reality engine may also receive estimated eye position and orientation information from the eye tracking module. Based on the received information, the virtual reality engine may determine content to provide to the near-eye display for presentation to the user) or is concurrently displayed with the content of the first application; and moving the first object in the environment corresponds to moving a display position of the content of the first application in the environment (Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs. ¶0069; the wrist band device 518 (e.g., a smart watch, a wrist band controller, etc.) may detect hand, arm, or even finger gestures through one or more sensors and provide a detected gesture to the head-mounted display. Note: the movement of the wrist band device is interpreted as a first motion input).

Regarding claim 19, Strawn and Rockel teach the method of claim 1. Strawn and Rockel also teach: wherein: the environment is a three-dimensional environment (Strawn: ¶0042; the virtual reality engine 116 may execute applications within the artificial reality system environment 100), where the view of the three-dimensional environment changes in accordance with movement of a viewpoint of a user of the environment via the display generation component (Strawn: ¶0022, 0045 and 0053; the near-eye display may include various sensors configured to generate image data representing different fields of view in one or more different directions).

Regarding claim 31, claim 31 is directed to a computer-readable storage medium for executing the method as claimed in claim 1. Claim 31 is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding claim 32, claim 32 is directed to a computer system for executing the method as claimed in claim 1. Claim 32 is similar in scope to claim 1 and is therefore rejected under similar rationale.

Claims 9 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Strawn and Rockel as applied to claim 1 above, and further in view of Dearman et al. ("Dearman," US 2017/0329419), published on November 16, 2017.

Regarding claim 9, Strawn and Rockel teach the method of claim 1. Strawn and Rockel further teach the method, including: [while the view of the environment is visible via the display generation component, detecting a first touch input that includes movement of a first contact on the remote input device]; and in response to detecting the first touch input: in accordance with a determination that the gaze detected by the computer system was directed to the first object when the first touch input was detected, moving the first object in the environment in accordance with the first touch input (Rockel: ¶0125; when there are a first user interface object and a second user interface object in the first view 7202 of the three-dimensional environment, the hand movement of the user will affect a respective user interface object when the user's gaze is detected at a position of the respective user interface object with sufficient stability and/or duration. Strawn: ¶0071; diagram 600 shows identification of a focus area 608 on displayed content 606 through detection of a gaze 604 of an eye 602 and performance of an action on the focus area 612 based on a secondary input 610. ¶0018; the gaze based location of interest identification may be considered a primary input. Actions such as zoom, rotate, pan, move, open actionable menus, select from presented options, etc. may be performed on the location of interest based on the secondary inputs. ¶0069; the wrist band device 518 (e.g., a smart watch, a wrist band controller, etc.) may detect hand, arm, or even finger gestures through one or more sensors and provide a detected gesture to the head-mounted display. Note: the movement of the wrist band device is interpreted as a first motion input).

Strawn and Rockel teach all the limitations above but do not explicitly teach the method, including: while the view of the environment is visible via the display generation component, detecting a first touch input that includes movement of a first contact on the remote input device. Dearman teaches the method, including: while the view of the environment is visible via the display generation component (Dearman: ¶0020 and Fig. 2A; the virtual display 420 may be viewed by the user in the HMD 100. The user may choose to select one of the virtual objects A-F for interaction and/or manipulation and the like in numerous different manners, for example directing a head gaze and/or eye gaze at the virtual object to be selected), detecting a first touch input that includes movement of a first contact on the remote input device; and in response to detecting the first touch input: in accordance with a determination that the gaze detected by the computer system was directed to the first object when the first touch input was detected, moving the first object in the environment in accordance with the first touch input.

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Dearman, Strawn and Rockel in front of them, to include the system for combining a gyromouse input with a touch surface input, as taught by Dearman, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide a controller having improved functionality and utility in the AR/VR environment and enhancing the user's experience (Dearman: see abstract).

Regarding claim 25, Strawn and Rockel teach the method of claim 1. Strawn and Rockel do not explicitly teach: while displaying the view of the environment, detecting that the remote input device is oriented to point at a respective object in the view of the environment; while the remote input device is oriented to point at the respective object in the view of the environment, detecting a tap input via a touch-sensitive surface of the remote input device; and in response to detecting the tap input while the remote input device is oriented to point at the respective object in the view of the environment, selecting the respective object as a target for a subsequent operation performed by the remote input device. Dearman teaches: while displaying the view of the environment, detecting that the remote input device is oriented to point at a respective object in the view of the environment; while the remote input device is oriented to point at the respective object in the view of the environment, detecting a tap input via a touch-sensitive surface of the remote input device; and in response to detecting the tap input while the remote input device is oriented to point at the respective object in the view of the environment, selecting the respective object as a target for a subsequent operation performed by the remote input device (Dearman: ¶0021; the user directs a virtual ray 450 from the controller 102 toward the virtual object A by, for example, manipulating the touch surface 108 and/or another manipulation device 106 of the controller 102. The user may then select, for example, the virtual object A, by manipulating the controller 102, such as, for example, releasing the touch from the touch surface 108, releasing the depression of one of the manipulation devices 106, and the like).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Dearman, Strawn and Rockel in front of them, to include the system for combining a gyromouse input with a touch surface input, as taught by Dearman, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide a controller having improved functionality and utility in the AR/VR environment and enhancing the user's experience (Dearman: see abstract).

Regarding claim 26, Strawn and Rockel teach the method of claim 1. Strawn and Rockel do not explicitly teach: while displaying the view of the environment, detecting that the remote input device is oriented to point at a respective object in the view of the environment; while the remote input device is oriented to point at the respective object in the view of the environment, detecting touch-down of a first swipe input via a touch-sensitive surface of the remote input device; and in response to detecting the first swipe input after the touch-down of the first swipe input, scrolling content within the respective object in accordance with the first swipe input. Dearman teaches: while displaying the view of the environment, detecting that the remote input device is oriented to point at a respective object in the view of the environment; while the remote input device is oriented to point at the respective object in the view of the environment, detecting touch-down of a first swipe input via a touch-sensitive surface of the remote input device; and in response to detecting the first swipe input after the touch-down of the first swipe input, scrolling content within the respective object in accordance with the first swipe input (Dearman: ¶0027; the user may cause movement of the features A1-An available in association with the selected virtual object A by, for example, inputting a selection input by pointing at the display area 420 to set an anchor point, and then implementing a touch and drag input on the touch surface 108 of the controller 102, for example, an upward drag on the touch surface 108 to cause the display of features to scroll upward, a downward drag on the touch surface 108 to cause the display of features to scroll downward, and the like).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Dearman, Strawn and Rockel in front of them, to include the system for combining a gyromouse input with a touch surface input, as taught by Dearman, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide a controller having improved functionality and utility in the AR/VR environment and enhancing the user's experience (Dearman: see abstract).

Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Strawn and Rockel as applied to claim 1 above, and further in view of Erivantcev et al. ("Erivantcev," US 2022/0291753), published on September 15, 2022.

Regarding claim 20, Strawn and Rockel teach the method of claim 1. Strawn and Rockel do not appear to teach: wherein the remote input device is an electronic device with a touch-screen display and that displays a user interface that is distinct from the view of the environment. Erivantcev teaches spatial gesture recognition, wherein the remote input device is an electronic device with a touch-screen display and that displays a user interface that is distinct from the view of the environment (Erivantcev: ¶0059; the motion input module 121 can have a touch pad usable to generate an input of swipe gesture, such as swipe left, swipe right, swipe up, swipe down, or an input of tap gesture, such as a single tap, double tap, long tap, etc. ¶0092; Fig. 7 illustrates the use of an eye gaze direction vector 118 determined using an additional input module 131 and a tap gesture generated using a motion input module 121 for interaction within the context of an active application 105).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Erivantcev, Strawn and Rockel in front of them, to incorporate the gesture recognition using inputs from different devices, as taught by Erivantcev, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide an enhanced interface for the user to control the device in a convenient and flexible manner (Erivantcev: ¶0035).

Regarding claim 21, Strawn, Rockel and Erivantcev teach the method of claim 20. Strawn, Rockel and Erivantcev also teach: wherein: the remote input device has a plurality of hardware affordances, including a first hardware button associated with a first function and a second hardware button associated with a second function; detecting an input directed at a respective hardware button of the plurality of hardware affordances; and in response to detecting the input directed at the respective hardware button: in accordance with a determination that the input is directed at the first hardware button, performing the first function with respect to the environment; and in accordance with a determination that the input is directed at the second hardware button, performing the second function with respect to the environment (Strawn: ¶0068-0069 and Fig. 5C: combination of gaze detection and handheld controller 520 such as a game controller to provide the secondary input by pressing buttons or scrolling wheels on the controller).

Claims 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Strawn and Rockel as applied to claim 1 above, and further in view of SMOCHKO et al. ("Smochko," US 2022/0035521), published on February 3, 2022.

Regarding claim 29, Strawn and Rockel teach the method of claim 1. Strawn and Rockel do not appear to teach: detecting a selection input directed to a text input region displayed in the environment; and in response to detecting the selection input that is directed to the text input region displayed in the environment, causing display of a virtual keyboard on a display of the remote input device. Smochko teaches: detecting a selection input directed to a text input region displayed in the environment (Smochko: ¶0275; in response to the selection input, electronic device 500 optionally enters a text entry mode); and in response to detecting the selection input that is directed to the text input region displayed in the environment, causing display of a virtual keyboard on a display of the remote input device (Smochko: ¶0279; in response to the rightward swipe of text input alert 1242, device 511 displays user interface 1244 as shown in Fig. 12K, which optionally includes soft keyboard 1246 and text field 1248).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Smochko, Strawn and Rockel in front of them, to incorporate the display of a soft keyboard on a remote device, as taught by Smochko, with the method of interacting in a mixed reality environment using a combination of eye tracking and secondary inputs, as disclosed by Strawn, to provide an easy and efficient way for users to interact with content. Enhancing these interactions improves the user's experience with the device and decreases user interaction time, which is particularly important where input devices are battery-operated (Smochko: ¶0004).

Regarding claim 30, Strawn, Rockel and Smochko teach the method of claim 29. Strawn, Rockel and Smochko further teach: while displaying the text input region in the environment and while the virtual keyboard is displayed on the display of the remote input device, receiving textual input via the virtual keyboard displayed on the display of the remote input device; and in response to receiving the textual input, displaying one or more symbols in the text input region, wherein the one or more symbols correspond to the textual input received via the virtual keyboard displayed on the display of the remote input device (Smochko: ¶0279; input detected on user interface 1244 optionally causes device 511 to provide text input, for entry into text input user interface 1202, to electronic device 500).

Conclusion
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tam T. Tran, whose telephone number is (571) 270-5029. The examiner can normally be reached M-F, 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L. Bashore, can be reached on 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAM T TRAN/
Primary Examiner, Art Unit 2174
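
As a plain-language aid: the gaze-gating that independent claim 1 recites, and that the rejection maps onto Strawn's gaze-plus-secondary-input scheme and Rockel's forgoing behavior, reduces to a single conditional: apply the remote device's motion input to an object only if the detected gaze was directed to that object when the input was detected, and otherwise forgo the move. The sketch below is an illustrative paraphrase under that reading; it is not code from the application or from any cited reference, and every name in it is hypothetical.

```python
# Illustrative paraphrase of the gaze-gated manipulation recited in claim 1.
# Not code from the application or the cited references; all names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MotionInput:
    """Movement of the remote input device detected in the physical environment."""
    dx: float
    dy: float
    dz: float


@dataclass
class SceneObject:
    """An object (e.g., an application window) placed in the 3D environment."""
    name: str
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0


def handle_motion_input(motion: MotionInput,
                        gaze_target: Optional[SceneObject],
                        first_object: SceneObject) -> bool:
    """Move first_object only if gaze was directed to it when the motion input was detected.

    Returns True if the object was moved, False if the move was forgone.
    """
    if gaze_target is first_object:
        # Gaze was on the object: move it in accordance with the motion input.
        first_object.x += motion.dx
        first_object.y += motion.dy
        first_object.z += motion.dz
        return True
    # Gaze was elsewhere (or absent): forgo moving the object.
    return False


window = SceneObject("app window")
handle_motion_input(MotionInput(0.1, 0.0, 0.0), gaze_target=window, first_object=window)  # moved
handle_motion_input(MotionInput(0.1, 0.0, 0.0), gaze_target=None, first_object=window)    # forgone
```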

Prosecution Timeline

May 15, 2024 · Application Filed
Apr 25, 2025 · Response after Non-Final Action
Mar 19, 2026 · Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592007
LYRICS AND KARAOKE USER INTERFACES, METHODS AND SYSTEMS
2y 5m to grant · Granted Mar 31, 2026
Patent 12591312
WEARABLE TERMINAL APPARATUS, PROGRAM, AND IMAGE PROCESSING METHOD
2y 5m to grant · Granted Mar 31, 2026
Patent 12585419
AUDIO PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 24, 2026
Patent 12572272
METHOD FOR COMPUTER KEY AND POINTER INPUT USING GESTURES
2y 5m to grant · Granted Mar 10, 2026
Patent 12572260
PRESENTATION AND CONTROL OF USER INTERACTIONS WITH A USER INTERFACE ELEMENT
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 92% (+11.9%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
