Prosecution Insights
Last updated: April 19, 2026
Application No. 18/641,711

TWO-HANDED GESTURE INTERPRETATION

Non-Final OA: §101, §102, §103, §112
Filed
Apr 22, 2024
Examiner
MERCADO, GABRIEL S
Art Unit
2171
Tech Center
2100 — Computer Architecture & Software
Assignee
Apple Inc.
OA Round
1 (Non-Final)
42%
Grant Probability
Moderate
1-2
OA Rounds
3y 1m
To Grant
69%
With Interview

Examiner Intelligence

Grants 42% of resolved cases
42%
Career Allow Rate
84 granted / 198 resolved
-12.6% vs TC avg
Strong +26% interview lift
+26.4%
Interview Lift
allow-rate delta for resolved cases with vs. without an interview
Typical timeline
3y 1m
Avg Prosecution
43 currently pending
Career history
241
Total Applications
across all art units
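These headline numbers reduce to simple ratios over disposal records. A minimal sketch of the arithmetic, assuming hypothetical record fields (`granted`, `had_interview`) rather than any actual data schema:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # allowed vs. abandoned/rejected at disposal
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: grants / resolved cases."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Percentage-point gap in allow rate, with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data only; the panel above reflects 84 grants out of 198 resolved cases,
# i.e., an allow rate of about 42.4%.
cases = [ResolvedCase(granted=True, had_interview=True),
         ResolvedCase(granted=False, had_interview=False)]
print(f"{allow_rate(cases):.1%}, lift {interview_lift(cases):+.1%}")
```

On the figures above, 84/198 is roughly 42.4%, and the +26.4% lift is the percentage-point gap between the with-interview and without-interview allow rates.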

Statute-Specific Performance

§101
12.7%
-27.3% vs TC avg
§103
47.2%
+7.2% vs TC avg
§102
11.6%
-28.4% vs TC avg
§112
23.3%
-16.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 198 resolved cases
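The per-statute deltas in this panel are straightforward subtractions from the Tech Center average. A sketch that reproduces the displayed numbers; the flat ~40% Tech Center average is back-solved from the deltas shown and is an assumption about how the panel is computed, not verified data:

```python
# Per-statute success rates from the panel; TC averages back-solved from the
# displayed deltas (each works out to ~40%). Assumed, not verified data.
examiner_rate = {"101": 0.127, "102": 0.116, "103": 0.472, "112": 0.233}
tc_avg = {"101": 0.400, "102": 0.400, "103": 0.400, "112": 0.400}

for statute, rate in examiner_rate.items():
    delta = rate - tc_avg[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```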

Office Action

§101 §102 §103 §112
DETAILED ACTION

This office action is responsive to communication(s) filed on 4/22/2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims Status

Claims 1-20 are pending and are currently being examined. Claims 1, 18 and 20 are independent.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Representative independent claim 1 recites a method comprising: receiving data corresponding to user activity involving two hands of a user in a three-dimensional (3D) coordinate system; determining gestures performed by the two hands based on the data corresponding to the user activity; identifying actions performed by the two hands based on the determined gestures, each of the two hands performing one of the identified actions; determining whether the identified actions satisfy a criterion for a gesture type based on the data corresponding to the user activity; and, in accordance with determining that the identified actions satisfy the criterion for the gesture type, interpreting the identified actions based on a reference element corresponding to the gesture type, wherein different gesture types correspond to different reference elements.

Each step in the described method can be performed by the human mind, with or without pen and paper, because the steps represent fundamental cognitive processes of perception, categorization, and logical deduction:

Receiving data: A human can observe and record physical movements or coordinates using their eyes and hands.
Determining gestures: The mind naturally groups observed movements into recognizable patterns based on memory and visual processing.
Identifying actions: A person can logically assign a specific meaning or label to those recognized patterns through simple association.
Determining criteria satisfaction: A human can use a checklist or mental rules to compare observed actions against predefined standards.
Interpreting based on reference: The mind can translate a specific action into a final conclusion by cross-referencing it with a known guide or context (e.g., American Sign Language).

As such, the claim recites an abstract idea grouped under “Mental Processes”. This judicial exception is not integrated into a practical application because the claim further includes the limitation “at a device having a processor and one or more sensors”, but this is mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, see MPEP 2106.05(f). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims as a whole include only the abstract idea with mere instructions to implement it on a computer, or merely use a computer as a tool to perform it, which cannot provide a practical application or satisfy the “significantly more” requirement. As such, representative claim 1 is ineligible under § 101.
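As an illustration (this is an editorial sketch, not part of the office action text), the five claim-1 steps paraphrased above can be read as a simple pipeline. Every name, data shape, and criterion below is hypothetical and is not drawn from the application's actual disclosure:

```python
from typing import Callable

# Hypothetical stand-ins for the claim's steps; illustrative only.
def determine_gestures(data: dict) -> list[str]:
    return data["hand_gestures"]                 # e.g., ["pinch", "pinch"]

def identify_actions(gestures: list[str]) -> list[str]:
    return [f"{g}-and-hold" for g in gestures]   # one action per hand

# A criterion per gesture type, and a distinct reference element per type
# ("different gesture types correspond to different reference elements").
CRITERIA: dict[str, Callable[[list[str]], bool]] = {
    "zoom": lambda actions: actions[0] == actions[1],  # symmetric two-hand action
}
REFERENCE_ELEMENTS = {"zoom": "midpoint between the hands"}

def interpret_two_handed_activity(data: dict) -> str | None:
    """Claim 1's flow: data -> gestures -> actions -> criterion -> interpretation."""
    actions = identify_actions(determine_gestures(data))
    for gesture_type, criterion in CRITERIA.items():
        if criterion(actions):
            ref = REFERENCE_ELEMENTS[gesture_type]
            return f"interpret {actions} relative to the {ref}"
    return None

print(interpret_two_handed_activity({"hand_gestures": ["pinch", "pinch"]}))
```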
Claims 18 and 20 are directed to a device and a computer-readable storage medium for accomplishing the steps of the method of claim 1, and are rejected under a similar rationale. These claims do nothing more than add the limitations “A device comprising: a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors to perform operations comprising” and “A non-transitory computer-readable storage medium, storing program instructions executable on a device including one or more processors to perform operations comprising”, which are reflective of mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, see MPEP 2106.05(f).

Claims 2-4 and 19 further describe what the “user activity” and/or its purpose are. This description doesn't make the related steps any less abstract.

Claim 5 further recites an abstract concept by adding the step of “associating a pivot point on a body of the user as the reference element”. Associating a pivot point on a user's body as a reference element can be performed mentally, with or without pen and paper, because it relies on innate proprioception and basic spatial awareness to visualize a joint or body part as a fixed, central anchor point for movement.

Claims 6-9, 11, 14 and 15 further recite or further limit the abstract “identifying” and/or “determining” steps. These limitations don't make the related steps any less abstract.

Claim 10 further limits the reference element. This limitation doesn't make the receiving-data step any less abstract.

Claims 12-13 recite limitations describing “the 3D environment” mentioned in claim 11. These limitations don't make the related steps any less abstract.

Claims 14, 15 and 17 involve a step of determining a “user intended action” (herein interpreted as “identifying actions performed”; see the § 112(b) rejection section below). These limitations don't make the identifying/determining steps any less abstract.

Claim 16 adds the limitation of “using at least a portion of the identified actions performed by the two hands to directly enable a functional component displayed within an extended reality (XR) environment.” Here, enabling a functional component within an Extended Reality (XR) environment is interpreted as making a digital, interactive element—such as a button, 3D model, slider, or dashboard—fully operational and responsive to user input (gestures, voice, or controllers) while immersed in virtual, augmented, or mixed reality. However, the limitations of “using at least a portion of the identified actions performed by the two hands to directly enable a functional component displayed” and “within an extended reality (XR) environment” are ineffective to provide a practical application of the abstract idea because they are directed to insignificant extra-solution activity and to applying that activity to a specific technological environment, respectively, see MPEP 2106.05(h). There is nothing in the claim that adds significantly more than the abstract idea.

The limitation of “using at least a portion of the identified actions performed by the two hands to directly enable a functional component displayed” is extra-solution activity and ineffective to show significantly more because it is directed to well-understood, routine, conventional activity. For example, the following references teach this concept:

AU; Kin Chung et al., US 20120154313 A1, [0062]: “Another example of a pop-up interface is a virtual keyboard 800, as shown in FIG. 8. The virtual keyboard 800 is a two-hand interface and can be activated with a two-hand gesture, such as a two-hand five finger tap.”
El Dokor; Tarek, US 8928590 B1: “Users can also lift a single hand or both hands above the keyboard. Thus, the inventive system provides for a delineation of gesture recognition between an active zone that is enabled once the user's hand (or hands) is visible for the cameras and inactive zone when the user is typing on the keyboard or using the mouse”, col. 2:26-32.
Wong; Yoon Kean, US 20110320982 A1, [0002]: “…Traditionally, a user is required to use two hands and multiple input mechanisms to activate a palmtop computer application.”
Karlsson; David et al., US 20120131488 A1, [0108]: “…The controls can accept two-hand inputs, one to activate one of the movable icons and one to carry out a touch gesture to select a choice that carries out a desired interaction associated with the activated alternate interaction icon (block 218).”

The limitation of “within an extended reality (XR) environment” simply applies the extra-solution activity to a specific technological environment and is therefore also ineffective to provide significantly more than the abstract idea. For the reasons above, claims 2-20 are ineligible under § 101 for reasons similar to those explained for representative claim 1.

Claim Rejections - 35 USC § 112(a) or 112(1st)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 2-4, 6, 14 and 19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for pre-AIA, the inventor(s)), at the time the application was filed, had possession of the claimed invention.
Claims 2-4, 6 and 19 are rejected because “user centric” and “app centric” gestures are not recognized terms of art, and the instant specification doesn't sufficiently describe the metes and bounds of these terms, providing only examples rather than precise definitions, ¶ 5 (as published).

Claim 14 recites “user activity is associated with two simultaneous two-handed gestures”. However, the specification doesn't sufficiently describe how more than one two-handed gesture can occur simultaneously when the user activity involves only “two hands” (see claim 1, upon which claim 14 depends).

Claim Rejections - 35 USC § 112(b) or 112(2nd)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4, 6, 8, 14, 15, 17 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claims 2-4, 6 and 19 are rejected for indefiniteness because “user centric gesture” and “app centric gesture” are not recognized terms of art, and their scope is unclear. The instant specification does not further illuminate the metes and bounds of these terms, providing only examples rather than precise definitions, ¶ 5 (as published). In a broad sense, the terms are unclear because all gestures performed by a user may be interpreted as “user centric”, since they are gestures performed by the user. All user gestures used for interaction with an application can also be interpreted as being both user centric and app centric, as they involve user gestures for interacting with an app (application). For purposes of compact prosecution only, the examiner interprets “user centric” as gestures that are identified using user position references, such as the user's joint positions, and “app centric” as gestures that don't require user position references. Correction required.

Claim 6 recites “wherein determining to associate each of the actions performed by each of the two hands with the user intended action corresponding to the gesture type”. Here, it is unclear whether this phrase refers to the step of “identifying actions performed by the two hands based on the determined gestures, each of the two hands performing one of the identified actions” in claim 1. For purposes of compact prosecution only, the examiner interprets the limitation as referring to the abovementioned “identifying actions…” limitation of claim 1. Correction required. Furthermore, claim 6 recites “the user intended action” in the abovementioned phrase. There is insufficient antecedent basis for this limitation in the claim. The same issue is present in claims 14, 15 and 17.

Claim 8 recites “the device worn on the head” in the phrase “positioning information associated with a head or the device worn on the head”. There is insufficient antecedent basis for this limitation in the claim.

Claim 14 recites “user activity is associated with two simultaneous two-handed gestures”. This is unclear because it can be interpreted as a plurality of two-handed gestures performed simultaneously; however, it is unclear how more than one two-handed gesture can occur simultaneously when the user activity involves only “two hands” (see claim 1, upon which claim 14 depends). For purposes of compact prosecution only, the examiner interprets the limitation as being directed to a single two-handed gesture that can be used to control two functions at the same time, e.g., zoom and pan, see ¶¶ 16 and 64. Correction required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Iyer; Vivek Viswanathan et al. (hereinafter Iyer, US 20190384408 A1).

Independent claim 1: Iyer teaches a method (e.g., fig. 4), at a device having a processor and one or more sensors (Abstract and the systems in figs. 2 and 3), comprising:

receiving data corresponding to user activity involving two hands of a user (receiving hand tracking information from camera(s) or optical sensor(s), ¶¶ 65 and 91, for one- or two-handed gesture sequence detection by a processor(s) of the system, based on start of gesture, start of gesture motion, and gesture end sequence recognition data [receiving data], Abstract, fig. 4:404,405,406, and ¶ 89) in a three-dimensional (3D) coordinate system (tracking a physical location (e.g., Euclidean or Cartesian coordinates x, y, and z) or translation, ¶¶ 52 and 82);

determining gestures (start, motion, and end gestures) performed by the two hands based on the data corresponding to the user activity (one- or two-handed gesture sequences are detected at 407, by a processor(s) of the system, based on recognizing [determining] start, motion, and end of gestures in a sequence of gesture recognition data, Abstract, fig. 4:404,405,406,407, and ¶¶ 89-90);

identifying actions (a sequence of gestures) performed by the two hands based on the determined gestures (same citations as above), each of the two hands performing one of the identified actions (because the gestures in the sequence are two-handed, ¶ 89, each of the two hands performs at least one of the identified actions; an example of two-handed gestures is illustrated in ¶ 152 and figs. 24A-24B);
determining whether the identified actions satisfy a criterion for a gesture type based on the data corresponding to the user activity (a positive gesture sequence recognition meets an accuracy threshold [satisfies a criterion], ¶ 166; the sequences are also detected by the specific series of gestures determined to match certain attributes and/or reference images, ¶¶ 86-87, e.g., direction, position, and/or location, ¶ 119, and gesture duration, ¶ 135; herein, a “gesture type” is broadly interpreted as including one or more candidate/identified gesture sequences);

and, in accordance with determining that the identified actions satisfy the criterion for the gesture type, interpreting the identified actions (a positive gesture sequence recognition meets an accuracy threshold [satisfies a criterion], ¶ 166; the sequences are detected by the series of gestures determined to match certain attributes and/or reference images, ¶¶ 86-87, e.g., direction, position, and/or location, ¶ 119) based on a reference element (reference parameters [element] assigned to each joint, e.g., coordinates and/or other parameters specifying a conformation of the body part, e.g., hand open, etc., ¶¶ 81-82) corresponding to the gesture type, wherein different gesture types correspond to different reference elements (by tracking the positional changes, trajectories, and interactions of skeletal joints and segments over time, the system can identify, classify, and interpret specific gestures, actions, or behavioral patterns [corresponding to the gesture type], see ¶¶ 81-82; because combinations of reference elements, such as points in a trajectory, are used to identify specific sequences, different gesture types correspond to different reference elements).

Claim 2: The rejection of claim 1 is incorporated. Iyer further teaches: wherein the user activity is determined to provide a user centric gesture as the gesture type (a gesture can be identified by fitting a virtual skeleton to the user and analyzing the positional changes of its joints and segments over time, ¶ 82; this can be described as “user centric” due to its focus on modeling the user's physical characteristics).

Claim 3: The rejection of claim 1 is incorporated. Iyer further teaches: wherein the user activity is determined to provide an app centric gesture as the gesture type (a gesture can be identified without fitting a virtual skeleton to the user and analyzing the positional changes of its joints and segments over time, by sending raw point-cloud data directly to a feature extraction routine within gesture sequence recognition, ¶¶ 82-83; this can be seen as “app centric” because it prioritizes direct data processing for the application).

Claim 5: The rejection of claim 1 is incorporated. Iyer further teaches: associating a pivot point on a body of the user as the reference element (each of the body's joints [a pivot point on a body] is assigned a number of parameters, such as Cartesian coordinate points, ¶¶ 81-82).

Claim 6: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining to associate each of the actions performed by each of the two hands with the user intended action corresponding to the gesture type is based on an app centric technique (because the gestures in the sequence are two-handed, ¶ 89, each of the two hands performs at least one of the identified actions; an example of two-handed gestures is found in ¶ 152 and figs. 24A-24B. A gesture can be identified without fitting a virtual skeleton to the user, by sending raw point-cloud data directly to a feature extraction routine within gesture sequence recognition, ¶¶ 82-83, which can be seen as an “app centric technique” because it prioritizes direct data processing for the application) and a reference orientation of a space defined based on positioning information of the user (using positional tracking devices to map an environment where an HMD [Head-Mounted Device] is located, its orientation, and/or pose, ¶¶ 16 and 47, where Heads-Up Displays (HUDs) and eyeglasses are collectively referred to as “HMDs”, ¶¶ 16 and 46, and the same reference information is used for tracking the user's state, e.g., orientation and movement, ¶¶ 51-52).

Claim 7: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining whether the identified actions satisfy the criterion for the gesture type is based on determining spatial positioning between the user and a user interface (gestures can control an opened menu, e.g., closing or repositioning a menu [a user interface] by placing a hand in a certain way “behind” the opened menu [based on determining spatial positioning between the user and a user interface], ¶¶ 124-125).

Claim 8: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining whether the identified actions satisfy the criterion for the gesture type is based on determining: i) positioning information associated with a head or the device worn on the head (using positional tracking devices to map an environment where an HMD is located, its orientation, and/or pose, ¶¶ 16 and 47, where HUDs and eyeglasses are collectively referred to as “HMDs”, ¶¶ 16 and 46, and the mapping information is used for tracking the user's state, e.g., orientation and movement, ¶¶ 51-52; as mentioned above, gesture sequences are identified by tracking user movements in the environment, e.g., see ¶ 82, with specific sequences mapped to specific functions, e.g., minimizing all workspaces, ¶ 151 and figs. 23A-23B); ii) positioning information associated with a torso; iii) a gaze; or iv) a combination thereof.

Claim 9: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining whether the identified actions satisfy the criterion for the gesture type is based on determining a motion type associated with motion data for each of the two hands (for a successful sequence identification, the motion characteristics [type] for both hands have to match according to an accuracy threshold, ¶ 166, e.g., concerning motion velocity, ¶ 84).

Claim 10: The rejection of claim 1 is incorporated. Iyer further teaches: wherein the reference element comprises a reference point in the 3D coordinate system (reference parameters [element] assigned to each joint, e.g., coordinates and/or other parameters specifying a conformation of the body part, e.g., hand open, etc., ¶¶ 81-82, e.g., Euclidean or Cartesian coordinates x, y, and z, ¶ 52).

Claim 11: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining whether the identified actions satisfy the criterion for the gesture type is based on determining a context of the user within a 3D environment (“two-handed gesture sequences for opening and closing files, applications, or workspaces (or for any other opening and closing action, depending on application or context)”, ¶ 152 and figs. 24A-25B).

Claim 12: The rejection of claim 11 is incorporated. Iyer further teaches: wherein the 3D environment comprises a physical environment (using distinctive visual characteristics of the physical environment to identify specific images or shapes, which are then usable to calculate HMD 102's position and orientation, ¶ 64).

Claim 13: The rejection of claim 11 is incorporated. Iyer further teaches: wherein the 3D environment comprises an extended reality (XR) environment (a virtual, augmented, or mixed reality environment, ¶ 15; it was well within the capabilities of a person having ordinary skill in the art to have realized that Mixed Reality (MR) is a component of Extended Reality (XR)).

Claim 15: The rejection of claim 1 is incorporated. Iyer further teaches: wherein determining the user intended action comprises determining that the user activity is associated with simultaneous one-handed gestures (as mentioned above for claim 1, the identified gestures are two-handed gestures, e.g., see ¶¶ 65 and 89; it was well within the capabilities of a person having ordinary skill in the art to have realized that two-handed gestures can be interpreted as simultaneous one-handed gestures because they consist of either two identical, independent motions or a dominant hand moving relative to a static, supportive, or passive “base” hand. Iyer's description supports that two-handed gestures are meant to be performed simultaneously by describing a method that compensates for “natural human asynchronicity”, the tendency for hands to start at different times, to recognize the intended coordinated two-handed movement, ¶ 104).

Claim 16: The rejection of claim 1 is incorporated. Iyer further teaches: using at least a portion of the identified actions performed by the two hands to directly enable a functional component displayed within an extended reality (XR) environment (the user may place both hands 2107 and 2108 over the object 2106 to capture the area, with palms facing out and all ten fingers extended; after waiting in position for a predetermined amount of time, the xR object 2106 is highlighted with surround effect 2109 and object menu 2110 appears [enabling a functional component], as shown in frame 2103, ¶ 148 and fig. 21).

Claim 17: The rejection of claim 1 is incorporated. Iyer further teaches: wherein the user intended action comprises a pan interaction, a zoom interaction, or a rotation of one or more elements of a user interface (an xR application may provide a workspace that enables operations on a single VO [virtual object] or a group of VOs in the workspace, e.g., rotate [rotation of one or more elements of a user interface], ¶¶ 126 and 128).

Independent claims 18 and 20: Claims 18 and 20 are directed to a device and a computer-readable storage medium for accomplishing the steps of the method of claim 1, and are rejected under a similar rationale.

Claim 19: The rejection of claim 18 is incorporated. Claim 19 is directed to a device for accomplishing the steps of the method of claim 2, and is rejected under a similar rationale.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Iyer (US 20190384408 A1), as applied to claim 1 above, and further in view of Gorski; Ryan Joseph et al. (hereinafter Gorski, US 20240312247 A1).

Claim 4: The rejection of claim 1 is incorporated. Iyer further teaches that the gestures can be user centric (a gesture can be identified by fitting a virtual skeleton to the user and analyzing the positional changes of its joints and segments over time, ¶ 82; this can be described as “user centric” due to its focus on modeling the user's physical characteristics) or app centric (a gesture can be identified without fitting a virtual skeleton, by sending raw point-cloud data directly to a feature extraction routine within gesture sequence recognition, ¶¶ 82-83; this can be seen as “app centric” because it prioritizes direct data processing for the application). Iyer does not appear to expressly teach, but Gorski teaches: wherein the user activity is determined to provide a hybrid gesture as the gesture type, wherein the hybrid gesture comprises a portion of a user centric gesture and a portion of an app centric gesture (point-cloud data [app centric] combined with a 3D skeleton model [user centric] enables precise tracking of an occupant's articulating joints, gestures, and six-degree-of-freedom head movement to accurately predict gaze direction and indication direction, ¶ 94). Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the method of Iyer to include wherein the user activity is determined to provide a hybrid gesture as the gesture type, wherein the hybrid gesture comprises a portion of a user centric gesture and a portion of an app centric gesture, as taught by Gorski. One would have been motivated to make such a combination in order to improve the functionality and accuracy of the method, e.g., improved accuracy in tracking articulating joints, enhanced gesture detection using depth information, and improved prediction of indication direction or gaze direction, Gorski ¶ 94.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Iyer (US 20190384408 A1), as applied to claim 1 above, and further in view of Rao; Liang et al. (hereinafter Rao, US 20170192668 A1).

Claim 14: The rejection of claim 1 is incorporated. Iyer does not appear to expressly teach, but Rao teaches: wherein determining the user intended action comprises determining that the user activity is associated with two simultaneous two-handed gestures (a single gesture bears two functions…refresh and exit, ¶ 82). Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the method of Iyer to include wherein determining the user intended action comprises determining that the user activity is associated with two simultaneous two-handed gestures, as taught by Rao. One would have been motivated to make such a combination in order to improve the user experience and interface consistency offered by the method, Rao ¶ 82.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Below is a list of these references, including why they are pertinent:

Kin; Kenrick Cheng-kuo, US 10261595 B1, is pertinent to claim 1 for disclosing a console that detects gestures performed using both of a user's hands, Abstract and col. 14:46-49.
Ravasz; Jonathan et al., US 20200387229 A1, is pertinent to claim 1 for disclosing an artificial reality system for rendering, presenting, and controlling elements in an artificial reality environment, Abstract, wherein a gesture detector identifies two-handed inputs in addition to one-handed inputs, ¶¶ 43 and 94.
Schwesinger; Mark et al., US 20150193107 A1, is pertinent to claim 1 for disclosing a device that detects one or more hands, tracks one or more hands, and recognizes gestures, col. 13:2-4.

The following references are pertinent to claim 16 for disclosing that the limitation of “using at least a portion of the identified actions performed by the two hands to directly enable a functional component displayed” is well-understood, routine, conventional activity:

AU; Kin Chung et al., US 20120154313 A1, [0062]: “Another example of a pop-up interface is a virtual keyboard 800, as shown in FIG. 8. The virtual keyboard 800 is a two-hand interface and can be activated with a two-hand gesture, such as a two-hand five finger tap.”
El Dokor; Tarek, US 8928590 B1: “Users can also lift a single hand or both hands above the keyboard. Thus, the inventive system provides for a delineation of gesture recognition between an active zone that is enabled once the user's hand (or hands) is visible for the cameras and inactive zone when the user is typing on the keyboard or using the mouse”, col. 2:26-32.
Wong; Yoon Kean, US 20110320982 A1, [0002]: “…Traditionally, a user is required to use two hands and multiple input mechanisms to activate a palmtop computer application.”
Karlsson; David et al., US 20120131488 A1, [0108]: “…The controls can accept two-hand inputs, one to activate one of the movable icons and one to carry out a touch gesture to select a choice that carries out a desired interaction associated with the activated alternate interaction icon (block 218).”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL S MERCADO, whose telephone number is (408) 918-7537. The examiner can normally be reached Mon-Fri, 8am-5pm (Eastern Time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Gabriel Mercado/
Primary Examiner, Art Unit 2171
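An editorial note on the § 112(b) constructions quoted above: under the examiner's compact-prosecution reading, “user centric” gestures resolve against user position references (e.g., joint positions) while “app centric” gestures do not. A hypothetical sketch of that distinction; nothing here comes from the application or the cited art:

```python
Coord = tuple[float, float, float]

def reference_element(gesture_type: str, joints: dict[str, Coord],
                      app_anchor: Coord) -> Coord:
    """Pick the reference element per the examiner's constructions (assumed)."""
    if gesture_type == "user_centric":
        return joints["wrist"]  # anchored to a user position reference
    return app_anchor           # app centric: no user position reference needed

joints = {"wrist": (0.12, 0.95, 0.30)}
print(reference_element("user_centric", joints, (0.0, 0.0, 0.0)))  # wrist coords
print(reference_element("app_centric", joints, (0.0, 0.0, 0.0)))   # app anchor
```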

Prosecution Timeline

Apr 22, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12543983
SYSTEMS AND METHODS FOR EMOTION PREDICTION
2y 5m to grant Granted Feb 10, 2026
Patent 12535942
BLOWOUT PREVENTER SYSTEM WITH DATA PLAYBACK
2y 5m to grant Granted Jan 27, 2026
Patent 12511024
Multi-Application Interaction Method
2y 5m to grant Granted Dec 30, 2025
Patent 12498838
CONTEXT-AWARE ADAPTIVE CONTENT PRESENTATION WITH USER STATE AND PROACTIVE ACTIVATION OF MICROPHONE FOR MODE SWITCHING USING VOICE COMMANDS
2y 5m to grant Granted Dec 16, 2025
Patent 12498843
Display of Book Section-Specific Fullscreen Recommendations for Digital Readers
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
69%
With Interview (+26.4%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 198 resolved cases by this examiner. Grant probability derived from career allow rate.
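The "With Interview" projection appears to be the unrounded career allow rate plus the interview lift in percentage points. A quick check of that arithmetic; the additive blending is an assumption about how the panel is computed:

```python
base = 84 / 198              # career allow rate ~ 42.4% (84 granted / 198 resolved)
lift = 0.264                 # interview lift, in percentage points
print(f"{base + lift:.0%}")  # -> 69%, matching the panel above
```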
