Prosecution Insights
Last updated: April 19, 2026
Application No. 19/059,003

METHODS OF DETERMINING AN INPUT REGION ON A PHYSICAL SURFACE IN A THREE-DIMENSIONAL ENVIRONMENT

Non-Final OA: §102, §103
Filed: Feb 20, 2025
Examiner: DAVIS, DAVID DONALD
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 70% (above average); 631 granted / 900 resolved; +8.1% vs TC avg
Interview Lift: +9.1% (moderate lift) among resolved cases with interview
Avg Prosecution (typical timeline): 3y 2m; 41 currently pending
Total Applications (career history): 941, across all art units
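
These card figures follow directly from the raw counts above. A minimal sketch of the arithmetic in Python; the Tech Center baseline is not reported on this page and is inferred here only from the stated +8.1% delta:

    # Illustrative only: recompute the examiner card figures from the raw counts above.
    granted, resolved, pending = 631, 900, 41

    allow_rate = granted / resolved              # 0.701 -> displayed as 70% "Career Allow Rate"
    total_applications = resolved + pending      # 941 "Total Applications"

    reported_delta = 0.081                       # "+8.1% vs TC avg" as reported on the card
    implied_tc_average = allow_rate - reported_delta   # ~0.620 (inferred, not shown on the page)

    print(f"Career allow rate: {allow_rate:.1%}")           # 70.1%
    print(f"Total applications: {total_applications}")      # 941
    print(f"Implied TC average: {implied_tc_average:.1%}")  # 62.0%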

Statute-Specific Performance

§101: 1.2% (-38.8% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 40.8% (+0.8% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Deltas compare each statute's share against a Tech Center average estimate; based on career data from 900 resolved cases.
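
Each delta is simply the examiner's statute-level share minus the Tech Center estimate. A minimal sketch of that subtraction; the 0.40 baseline is an assumption inferred from the reported deltas, not a value the page states:

    # Illustrative: reproduce the "vs TC avg" deltas from the statute shares listed above.
    tc_estimate = 0.40   # inferred baseline; the page only reports the deltas

    statute_share = {"§101": 0.012, "§103": 0.416, "§102": 0.408, "§112": 0.106}

    for statute, share in statute_share.items():
        delta = share - tc_estimate
        print(f"{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")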

Office Action

§102 §103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Information Disclosure Statement The information disclosure statement (IDS) submitted on June 13, 2025 has been considered by the examiner. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-13, 17-23 and 27-32 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wilytsch et al (US 11,054,896). As per claim 1 Wilytsch et al discloses: A method comprising: at a computer system in communication with a display generation component 100 {figure 1}, one or more input devices: while a three-dimensional environment 200 {figure 2} is visible via the display generation component 100, wherein the three-dimensional environment 200 includes a first physical surface 115 that does not include sensors for detecting touch inputs { [column 3, lines 33-37] While FIG. 1 includes an integrated camera 120c and two external cameras 120a, 120b, alternative camera configurations are possible. For example, one or more cameras integrated into the HMD 100 sufficiently capture images of the user's hands 105, 110.}, detecting, via the one or more input devices, a first input corresponding to a request to designate a first portion of the first physical surface 115 as a first input region 325 {figure 3}; while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the input devices, movement of a first portion of a person 101 {figure 1} directed to the first input region 325 {[column 4, lines 38-44] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}; and in response to detecting the movement of the first portion of the person 101: in accordance with a determination that the movement of the first portion of the person 101 was directed to the first input region 325, performing an operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325; and in accordance with a determination that the movement of the first portion of the person 101 was not directed to the first input region 325, forgoing performing the operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325 {[column 4, lines 51-64] User interactions with virtual interaction objects can cause one or more virtual interaction images to be displayed to the user 101. A virtual interaction image is a visual indicator that shows an interaction with a virtual interaction object occurred. Virtual interaction images can also include audio and haptic indicators. In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. 
For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.}. As per claim 2 Wilytsch et al discloses: The method of claim 1, wherein detecting, via the one or more input devices, the first input corresponding to the request to designate the first portion of the first physical surface 115 as the first input region 325 comprises detecting a second portion of the person 101 performing a first gesture directed to the first portion of the first physical surface 115 that satisfies one or more criteria {[column 4, lines 51-64] In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.}. As per claim 3 Wilytsch et al discloses: The method of claim 2, wherein the one or more criteria include a criterion that is satisfied when the first gesture is a first pose performed within a threshold distance of the first portion of the first physical surface 115 {[column 4, lines 47-50] For example, tapping the reference plane 315 (e.g., bringing finger within a threshold distance of the reference plane 315) at a location of a key of the keyboard 325 can corresponded to pressing the key.}. As per claim 4 Wilytsch et al discloses: The method of claim 3, wherein the second portion of the person 101 comprises a palm of the person's hand 105 & 110, and wherein the one or more criteria include a criterion that is satisfied when the computer system detects that the palm of the person 101 is located within the threshold distance from the first portion of the surface {[column 6, lines 41-55] FIGS. 5B and 5C illustrate the user 101 in front of the surface 115 with the left hand 105 over the surface 115 and the right hand 110 on the surface 115, according to an embodiment. The right hand 110 is flat on the surface 115, and thus the user's palms are parallel to the surface 115. In the examples of FIGS. 5B and 5C the right hand 110 is in a first predetermined shape 515a. In FIG. 5B, the fingertips 505 and distances between adjacent fingers (referred to as fingertip distances 510) of the right hand 110 are indicated. In some embodiments, the HMD 100 tracks the fingertips 505 and fingertip distances 510 of each hand in real time. In FIG. 5C, a reference plane 315 is defined parallel to the palm of the right hand 110 and a perimeter of the reference plane 315 encloses the right hand 110 (as seen from a view perpendicular to the surface 115).}. As per claim 5 Wilytsch et al discloses: The method of claim 3, wherein the second portion of the person 101 comprises one or more fingers 505 & 605 of a hand 105 & 110 of the person 101, and wherein the one or more criteria include a criterion that is satisfied when the computer system detects that the one or more fingers 505 & 605 of the person’s hand 105 & 110 are extended and are located within the threshold distance from the first portion of the first physical surface 115 {[column 6, lines 41-55] FIGS. 
5B and 5C illustrate the user 101 in front of the surface 115 with the left hand 105 over the surface 115 and the right hand 110 on the surface 115, according to an embodiment. The right hand 110 is flat on the surface 115, and thus the user's palms are parallel to the surface 115. In the examples of FIGS. 5B and 5C the right hand 110 is in a first predetermined shape 515a. In FIG. 5B, the fingertips 505 and distances between adjacent fingers (referred to as fingertip distances 510) of the right hand 110 are indicated. In some embodiments, the HMD 100 tracks the fingertips 505 and fingertip distances 510 of each hand in real time. In FIG. 5C, a reference plane 315 is defined parallel to the palm of the right hand 110 and a perimeter of the reference plane 315 encloses the right hand 110 (as seen from a view perpendicular to the surface 115).}. As per claim 6 Wilytsch et al discloses: The method of claim 2, wherein the one or more criteria include a criterion that is satisfied when the computer system detects the second portion of the person 101 performing a framing gesture directed to the first portion of the first physical surface 115 {figure 5B}. As per claim 7 Wilytsch et al discloses: The method of claim 2, wherein the second portion of the person 101 comprises a plurality of fingers 505 & 605 of the person's hand 105 & 110, and wherein the one or more criteria include a criterion that is satisfied when the computer system detects that the plurality of fingers 505 & 605 is concurrently performing a gesture directed to the first portion of the first physical surface 115 {[column 6, lines 41-55] FIGS. 5B and 5C illustrate the user 101 in front of the surface 115 with the left hand 105 over the surface 115 and the right hand 110 on the surface 115, according to an embodiment. The right hand 110 is flat on the surface 115, and thus the user's palms are parallel to the surface 115. In the examples of FIGS. 5B and 5C the right hand 110 is in a first predetermined shape 515a. In FIG. 5B, the fingertips 505 and distances between adjacent fingers (referred to as fingertip distances 510) of the right hand 110 are indicated. In some embodiments, the HMD 100 tracks the fingertips 505 and fingertip distances 510 of each hand in real time. In FIG. 5C, a reference plane 315 is defined parallel to the palm of the right hand 110 and a perimeter of the reference plane 315 encloses the right hand 110 (as seen from a view perpendicular to the surface 115).}. As per claim 8 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: before detecting, via the one or more input devices, the first input corresponding to the request to designate a first portion of the first physical surface 115 as the first input region 325, displaying a designation user interface that prompts the person 101 to provide the first input corresponding to the request to designate the first portion of the first physical surface 115 as the first input region 325 {figure 3}. 
As per claim 9 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: in response to detecting the first input corresponding to the request to designate the first portion of the first physical surface 115 as the first input region 325, displaying a calibration user interface that prompts the person 101 to provide one or more second inputs for calibrating the first input region 325 {[column 5, lines 40-47] A predetermined shape is a hand shape that indicates a desired location for a reference plane 315 to be established. Specifically, the predetermined shape may indicate the desired position in space, desired orientation, and desired size for a reference plane 315. The predetermined shape may be defined by the HMD 100 or the user, for example during a calibration mode.}. As per claim 10 Wilytsch et al discloses: The method of claim 9, wherein the one or more second inputs for calibrating the first input region 325 comprise a plurality of taps at a plurality of locations on the first input region 325 {[column, 6 lines 30-35] FIGS. 5A-5D are a sequence of diagrams illustrating the establishment of a reference plane, according to some embodiments. In some embodiments, FIGS. 5A-5D illustrate steps that occur during a calibration mode of the HMD 100. The diagrams illustrated in the figures may be views captured from a camera 120, such as camera 120a.}. As per claim 11 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325 {figure 1}: detecting one or more second inputs at the first input region 325 {[column 4, lines 38-44] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}; and in response to detecting the one or more second inputs at the first input region 325, calibrating the first input region 325 according to a first set of calibration parameters corresponding to the one or more second inputs; and detecting one or more third inputs, different from the one or more second inputs, at the first input region 325; and in response to detecting the one or more third inputs, different from the one or more second inputs, at the first input region 325, calibrating the first input region 325 according to a second set of calibration parameters corresponding to the one or more third inputs {[column, 6 lines 30-35] FIGS. 5A-5D are a sequence of diagrams illustrating the establishment of a reference plane, according to some embodiments. In some embodiments, FIGS. 5A-5D illustrate steps that occur during a calibration mode of the HMD 100. The diagrams illustrated in the figures may be views captured from a camera 120, such as camera 120a.}. As per claim 12 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, displaying, on the first physical surface 115, a visual indicator that indicates an extent of the first input region 325 {figure 3}. 
As per claim 13 Wilytsch et al discloses: The method of claim 12, wherein the visual indicator is an input region user interface, and wherein the input region user interface comprises one or more selectable options for providing input to the computer system, and wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, receiving, via the one or more input devices, a second input directed to a first selectable option of the one or more selectable options of the input region user interface; and in response to receiving the second input directed to the first selectable option, performing an operation at the computer system in accordance with the received second input {[column 4, lines 56-64] In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.}. As per claim 17 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, detecting via the one or more input devices, a first air gesture not directed to the first input region 325; and in response to detecting the first air gesture, performing an operation at the computer system in accordance with the detected first air gesture while maintaining designation of the first portion of the first physical surface 115 as the input region {[column 3, lines 29-33] Additionally, the cameras 120 are generally located to capture images of the user's hands 105, 110. Based on the image data from the cameras, the HMD 100 can track movement, gestures, locations, shapes, etc. of the user's hands 105, 110}. As per claim 18 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: detecting, via the one or more input devices, a second input corresponding to a request to designate a second portion of the first physical surface 115 as a second input region 330, different from the first portion of the first physical surface 115 {figure 3}; in response to detecting the second input: designating the second portion of the first physical surface 115 as the second input region 330 {figure 3}; while the second portion of the first physical surface 115 is designated as the second input region 330, detecting, via the one or more input devices, movement of a second portion of the person 101 directed to the second input region 330 {figure 3}; and in response to detecting the movement of the second portion of the person 101 directed to the second input region 330, performing an operation at the computer system in accordance the movement of the second portion of the person 101 relative to the second input region 330 {[column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}. 
As per claim 19 Wilytsch et al discloses: The method of claim 1, wherein the movement of the first portion of the person 101 is a tap input {[column 4, lines 47-50] For example, tapping the reference plane 315 (e.g., bringing finger within a threshold distance of the reference plane 315) at a location of a key of the keyboard 325 can corresponded to pressing the key.}. As per claim 20 Wilytsch et al discloses: The method of claim 1, wherein the movement of the first portion of the person 101 is an input that includes movement {[column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}. As per claim 21 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, displaying a content user interface on the first input region 325, wherein the content user interface includes one or more interactive controls; and wherein the movement of the first portion of the person 101 includes an interaction with the one or more interactive controls of the content user interface displayed on the first input region 325 {[column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}. As per claim 22 Wilytsch et al discloses: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, displaying a content user interface on the first input region 325, wherein the content user interface includes content; and wherein performing the operation at the computer system in accordance with the detected first gesture comprises performing an operation on the content of the content user interface displayed on the first input region 325 { figure 3 & [column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}. As per claim 23 Wilytsch et al discloses: The method of claim 1, wherein performing the operation at the computer system in accordance with the detected first gesture includes: in accordance with a determination that attention of the person 101 is directed to a first location in the three-dimensional environment 200 {figure 2}, performing a first operation associated with the first location at the computer system in accordance with the detected first gesture; and in accordance with a determination that the attention of the person 101 is directed to a second location, different from the first location, in the three-dimensional environment 200, performing a second operation, different from the first operation, associated with the second location at the computer system in accordance with the detected first gesture {[column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. 
Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}. As per claim 27 Wilytsch et al discloses: The method of claim 1, wherein the one or more input devices include one or more remote body tracking devices, and wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, detecting movement of the one or more remote body tracking devices; and in response to detecting the movement of the one or more input devices: in accordance with a determination that at least a portion of the first input region 325 is outside of a field of sensing limit of the one or more remote body tracking devices, displaying, via the display generation component 100, a visual indicator in the three-dimensional environment 200 indicating that at least a portion of the first input region 325 is outside of the field of sensing limit of the one or more remote body tracking devices. { [column 6, lines 41-65] FIGS. 5B and 5C illustrate the user 101 in front of the surface 115 with the left hand 105 over the surface 115 and the right hand 110 on the surface 115, according to an embodiment. The right hand 110 is flat on the surface 115, and thus the user's palms are parallel to the surface 115. In the examples of FIGS. 5B and 5C the right hand 110 is in a first predetermined shape 515a. In FIG. 5B, the fingertips 505 and distances between adjacent fingers (referred to as fingertip distances 510) of the right hand 110 are indicated. In some embodiments, the HMD 100 tracks the fingertips 505 and fingertip distances 510 of each hand in real time. In FIG. 5C, a reference plane 315 is defined parallel to the palm of the right hand 110 and a perimeter of the reference plane 315 encloses the right hand 110 (as seen from a view perpendicular to the surface 115). In some embodiments, HMD 100 tracks the fingertip distances 510 between two or more fingers to determine if one or more hands 105, 110 are in a predetermined shape. For example, the HMD 100 determines a hand 105 or 110 is in a predetermined shape if a fingertip distance 510 between two fingertips 505 substantially equals a value (e.g., within a few millimeters). Referring to FIG. 5B, the HMD 100 may determine that the right hand 110 is in a predetermined shape (e.g., flat) by determining that each of the fingertip distances 510 are above one or more threshold values.} As per claim 28 Wilytsch et al discloses: The method of claim 27, wherein displaying the visual indicator indicating that at least the portion of the first input region 325 is outside of the field of sensing limit of the one or more input devices includes: in accordance with a determination that the first input region 325 does not include interactive content, displaying the visual indicator at a first content window displayed in the three-dimensional environment 200, the first content window including interactive content that is interactable via the first input region 325; and in accordance with a determination that the first input region 325 includes interactive content, displaying the visual indicator at the first input region 325. { figure 3 & [column 4, lines 38-43] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. 
Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.} As per claim 29 Wilytsch et al discloses: The method of claim 1, wherein designating the first portion of the first physical surface 115 as the first input region 325 includes: in accordance with a determination that a first object in the three-dimensional environment 200 is located at a first location in the three-dimensional environment 200, orienting the first input region 325 at a first orientation with respect to the three-dimensional environment 200; and in accordance with a determination that the first object in the three-dimensional environment 200 is located at a second location, different from the first location, in the three-dimensional environment 200, orienting the first input region 325 at a second orientation, different from the first orientation, with respect to the three-dimensional environment 200 {[column 4, lines 4-21] The reference plane 315 is a virtual plane that determines the position and orientation of interaction objects. The virtual plane 315 is generated by the HMD 100 and may be displayed to the user. As further described below, the position, orientation, and size of the reference plane 315 can be defined by the user 101 by placing their hands 105, 110 in predetermined shapes. This allows the user 101 to establish a reference plane 315 and interact with virtual interaction objects regardless of the physical environment he or she is in. In FIG. 3, the reference plane 315 is on the surface 115 of a real-world object (e.g., a table). The reference plane 315 can be generated on other physical surfaces, such as a desk, a canvas on an easel, a wall, etc. Additionally, the reference plane 315 may not be generated on a surface of a real-world object. For example, the reference plane 315 may be generated in mid-air with respect to surfaces and a user may not require interaction with a surface to form the predefined shape.} As per claim 30 Wilytsch et al discloses: The method of claim 1, wherein the first physical surface 115 is not an input device. {[column 4, lines 14-16] The reference plane 315 can be generated on other physical surfaces, such as a desk, a canvas on an easel, a wall, etc} As per claim 31 Wilytsch et al discloses: A computer system that is in communication with a display generation component 100 and one or more input devices, the computer system comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs {[column 9, lines 46-53] Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.} including instructions for: while a three-dimensional environment 200 is visible via the display generation component 100, wherein the three-dimensional environment 200 includes a first physical surface 115 that does not include sensors for detecting touch inputs { [column 3, lines 33-37] While FIG. 
1 includes an integrated camera 120c and two external cameras 120a, 120b, alternative camera configurations are possible. For example, one or more cameras integrated into the HMD 100 sufficiently capture images of the user's hands 105, 110.}, detecting, via the one or more input devices, a first input corresponding to a request to designate a first portion of the first physical surface 115 as a first input region 325; while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the input devices, movement of a first portion of a person 101 directed to the first input region 325 {[column 4, lines 38-44] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}; and in response to detecting the movement of the first portion of the person 101: in accordance with a determination that the movement of the first portion of the person 101 was directed to the first input region 325, performing an operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325; and in accordance with a determination that the movement of the first portion of the person 101 was not directed to the first input region 325, forgoing performing the operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325 {[column 4, lines 51-64] User interactions with virtual interaction objects can cause one or more virtual interaction images to be displayed to the user 101. A virtual interaction image is a visual indicator that shows an interaction with a virtual interaction object occurred. Virtual interaction images can also include audio and haptic indicators. In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.}. As per claim 32. Wilytsch et al discloses: A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a computer system that is in communication with a display generation component 100 and one or more input devices {[column 9, lines 46-53] Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. 
Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.}, cause the computer system to perform a method comprising: while a three-dimensional environment 200 is visible via the display generation component 100, wherein the three-dimensional environment 200 includes a first physical surface 115 that does not include sensors for detecting touch inputs { [column 3, lines 33-37] While FIG. 1 includes an integrated camera 120c and two external cameras 120a, 120b, alternative camera configurations are possible. For example, one or more cameras integrated into the HMD 100 sufficiently capture images of the user's hands 105, 110.}, detecting, via the one or more input devices, a first input corresponding to a request to designate a first portion of the first physical surface 115 as a first input region 325; while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the input devices, movement of a first portion of a person 101 directed to the first input region 325 {[column 4, lines 38-44] The virtual keyboard 325 and trackpad 330 are examples of virtual interaction objects. Interaction with these virtual interaction objects is typically provided by the user interacting with the real-world environment at the location corresponding to the location of the interaction objects.}; and in response to detecting the movement of the first portion of the person 101: in accordance with a determination that the movement of the first portion of the person 101 was directed to the first input region 325, performing an operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325; and in accordance with a determination that the movement of the first portion of the person 101 was not directed to the first input region 325, forgoing performing the operation at the computer system in accordance with the movement of the first portion of the person 101 relative to the first input region 325 {[column 4, lines 51-64] User interactions with virtual interaction objects can cause one or more virtual interaction images to be displayed to the user 101. A virtual interaction image is a visual indicator that shows an interaction with a virtual interaction object occurred. Virtual interaction images can also include audio and haptic indicators. In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.}. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wilytsch et al (US 11,054,896) in view of Kandur Raja et al (US 2016/0209928). Regarding claim 14 Wilytsch et al is silent as to: The method of claim 1, wherein the first input includes detecting a second portion of the person 101 in a first pose, and wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the one or more input devices, the second portion of the person 101 changing from having the first pose to having a second pose, different from the first pose; and in response to detecting the second portion of the person 101 changing from having the first pose to having the second pose, ceasing designation of the first physical surface 115 as the first input region 325. With respect to claim 14 Kandur Raja et al discloses: wherein the first input includes detecting a second portion of the person in a first pose, and wherein the method further comprises: while the first portion of the first physical surface is designated as the first input region, detecting, via the one or more input devices, the second portion of the person changing from having the first pose to having a second pose, different from the first pose {[0096] In operation 801, the first sensor 110 may detect the number of user hands. According to various embodiments of the present disclosure, the first sensor 110 may detect the number of fingers, and may detect the number of hands based on the number of the fingers. & [0097] In operation 803, the processor 200 may determine the number of the hands detected in operation 801. For example, the processor 200 may determine the number of the hands detected in operation 801 through specific image processing. In the case where the number of the hands is one, the procedure may proceed to operation 823. In the case where the number of the hands is two, the procedure may proceed to operation 805. 
In the case where the number of the hands is greater than or equal to three, the procedure may proceed to operation 829.}; and in response to detecting the second portion of the person changing from having the first pose to having the second pose, ceasing designation of the first physical surface as the first input region {figure 8: END}. It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Wilytsch et al with the first input includes detecting a second portion of the person in a first pose, while the first portion of the first physical surface is designated as the first input region, detecting, via the one or more input devices, the second portion of the person changing from having the first pose to having a second pose, different from the first pose; and in response to detecting the second portion of the person changing from having the first pose to having the second pose, ceasing designation of the first physical surface as the first input region as taught by Kandur Raja et al. The rationale is as follows: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with the first input includes detecting a second portion of the person in a first pose, while the first portion of the first physical surface is designated as the first input region, detecting, via the one or more input devices, the second portion of the person changing from having the first pose to having a second pose, different from the first pose; and in response to detecting the second portion of the person changing from having the first pose to having the second pose, ceasing designation of the first physical surface as the first input region so that the screen is able to be cleared and display alternate objects for the user. Claims 15-16 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Wilytsch et al (US 11,054,896) in view of Demir et al (US 2024/0355076). Regarding claim 15 Wilytsch et al is silent as to: The method of claim 1, wherein the method further comprises: while the first portion of the of the first physical surface 115 is designated as the first input region 325, detecting that an elapsed time since input has been detected at the first input region 325 is greater than a time threshold; and in response to detecting that the elapsed time is greater than the time threshold, ceasing designation of the first physical surface 115 as the first input region 325. Regarding claim 16 Wilytsch et al is silent as to: The method of claim 1, wherein the method further comprises: while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the one or more input devices, a second gesture corresponding to a request to terminate the designation of the first portion of the first physical surface 115 as the first input region 325; and in response to detecting the second gesture, ceasing designation of the first physical surface 115 as the first input region 325. With respect to claims 15-16 Demir et al discloses: [0054] In some embodiments, the virtual reality system 100 may employ the gesture detection software to determine whether the user 104 is no longer performing a gesture (e.g., the first gesture 402, the second gesture 410, and/or the third gesture 500) and may stop translation or rotation of the user 104 though the VR environment based on the determination. 
For instance, if the virtual reality system determines the user 104 is no longer performing the second gesture 410, then the virtual reality system 100 may stop modifying the image data based on the adjusted user coordinate related to translation. As another example, if the virtual reality system 100 determines the user 104 is no longer performing the third gesture 500, then the virtual reality system 100 may stop modifying the image data based on the adjusted user coordinate related to rotation. It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Wilytsch et al with the first portion of the of the first physical surface is designated as the first input region, detecting that an elapsed time since input has been detected at the first input region is greater than a time threshold; and in response to detecting that the elapsed time is greater than the time threshold, ceasing designation of the first physical surface as the first input region as taught by Demir et al. The rationale is as follows: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with the first portion of the of the first physical surface is designated as the first input region, detecting that an elapsed time since input has been detected at the first input region is greater than a time threshold; and in response to detecting that the elapsed time is greater than the time threshold, ceasing designation of the first physical surface as the first input region. so that the screen is able to be cleared and display alternate objects for the user. Regarding claim 24 Wilytsch et al is silent as to: The method of claim 1, the method further comprising: while the first portion of the first physical surface 115 is designated as the first input region 325, detecting, via the one or more input devices, a second input; and in response to detecting the second input: in accordance with a determination that the second input corresponds to a request to relocate the first input region 325 in a first manner, relocating the first input region 325 from the first portion of the first physical surface 115 to a respective portion of a second surface in the three-dimensional environment 200; and in accordance with a determination that the second input corresponds to a request to relocate the first input region 325 in a second manner, different from the first manner, relocating the first input region 325 from the first portion of the first physical surface 115 to a respective portion of a third surface in the three-dimensional environment 200. Regarding claim 25 Wilytsch et al is silent as to: The method of claim 24, wherein the method further comprises: while a movement portion of the second input is ongoing: detecting that the first input region 325 reaches a first movement threshold; and in response to detecting that the first input region 325 has reached a first movement threshold, displaying a first visual indicator at the first input region 325 indicating that the first movement threshold has been reached. 
Regarding claim 26 Wilytsch et al is silent as to: The method of claim 25, wherein the method further comprises: while displaying the first visual indicator at the first input region 325 indicating that the first movement threshold has been reached, detecting further input corresponding to movement of the first input region 325 beyond the first movement threshold followed by termination of the further input; in response to detecting the further input, moving the first input region 325 beyond the first movement threshold in accordance with further input; and in response to detecting the termination of the further input, displaying an animation of the first input region 325 moving back to within the first movement threshold. With respect to claims 24-26 Demir et al depicts in figure 4 and discloses: [0053] At process block 314, the virtual reality system 100 may modify the image data based on the adjusted user coordinate. That is, the 3D coordinate system 406 may be associated with the VR environment, such that the translation and rotation within the 3D coordinate system 406 corresponds to movement and rotation within the VR environment. The movement vector (e.g., 418 and 508) may be indicative of the user 104 translation and rotation within the VR environment. As an example, illustrated in FIG. 4, if the 3D user coordinate translates 30 units up and 90 units to the left, then the image data being displayed to the user 104 may depict the user 104 moving through the VR environment 30 units up and 90 units to left (e.g., as shown in the second visualization 414). It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Wilytsch et al with detecting that the first input region reaches a first movement threshold; and in response to detecting that the first input region has reached a first movement threshold, displaying a first visual indicator as taught by Demir et al. The rationale is as follows: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with detecting that the first input region reaches a first movement threshold; and in response to detecting that the first input region has reached a first movement threshold, displaying a first visual indicator so that the user is able to visually see the boundaries when moving regions. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID D DAVIS whose telephone number is (571)272-7572. The examiner can normally be reached Monday - Friday, 8 a.m. - 4 p.m.. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DAVID D DAVIS/Primary Examiner, Art Unit 2627 DDD

Prosecution Timeline

Feb 20, 2025: Application Filed
Feb 04, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602106: Ambience-Driven User Experience (2y 5m to grant; granted Apr 14, 2026)
Patent 12602128: DISPLAY DEVICE HAVING PIXEL DRIVE CIRCUITS AND SENSOR DRIVE CIRCUITS (2y 5m to grant; granted Apr 14, 2026)
Patent 12602121: TOUCH DEVICE FOR PASSIVE RESONANT STYLUS, DRIVING METHOD FOR THE SAME AND TOUCH SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12596265: Aiming Device with a Diffractive Optical Element and Reflective Image Combiner (2y 5m to grant; granted Apr 07, 2026)
Patent 12592178: Display Device Including an Electrostatic Discharge Circuit for Discharging Static Electricity (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 79% (+9.1%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 900 resolved cases by this examiner. Grant probability derived from career allow rate.
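
The "With Interview" projection is the career allow rate plus the reported interview lift, capped at 100%. A minimal sketch of that combination; project_grant_probability is an illustrative helper name, not a published methodology:

    # Illustrative: combine the career allow rate with the reported interview lift.
    def project_grant_probability(base_rate: float, interview_lift: float = 0.0) -> float:
        """Return a projected grant probability, clamped to [0, 1]."""
        return min(max(base_rate + interview_lift, 0.0), 1.0)

    base = 631 / 900                                   # ~70% career allow rate
    with_interview = project_grant_probability(base, 0.091)

    print(f"Baseline grant probability: {base:.0%}")   # 70%
    print(f"With interview: {with_interview:.0%}")     # 79%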
