Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,132

TWO-DIMENSIONAL AND THREE-DIMENSIONAL CURSOR MOVEMENT

Non-Final OA — §103
Filed: May 16, 2024
Examiner: BOCAR, DONNA V
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 58% (212 granted / 367 resolved; -4.2% vs TC avg)
Interview Lift: strong, +19.4% for resolved cases with interview
Typical Timeline: 2y 7m avg prosecution; 35 currently pending
Career History: 402 total applications across all art units
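The card's headline percentages can be reproduced from the raw counts it shows. Below is a minimal sketch in Python, assuming (as the 58% and 77% figures suggest, though the page does not state it) that the interview lift is an additive bump in percentage points on top of the career allow rate:

```python
# Quick check of the dashboard's headline numbers from the raw counts above.
# Assumption (not confirmed by the page): "interview lift" adds percentage
# points directly onto the career allow rate.

granted, resolved = 212, 367           # career counts from the card
interview_lift = 19.4                  # percentage points, per the card

allow_rate = 100 * granted / resolved          # 57.77... -> shown as 58%
with_interview = allow_rate + interview_lift   # 77.2     -> shown as 77%

print(f"career allow rate: {allow_rate:.1f}%")      # 57.8%
print(f"with interview:    {with_interview:.1f}%")  # 77.2%
```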

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 367 resolved cases
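A quick consistency check on these rows: subtracting each statute's "vs TC avg" delta from its allow rate recovers the Tech Center baseline the page compares against. All four rows imply the same 40.0% figure, which suggests a single TC-wide estimate rather than per-statute baselines. A minimal sketch:

```python
# Sanity check: each statute's allow rate minus its "vs TC avg" delta should
# recover the implied Tech Center baseline. Every row here yields 40.0%,
# suggesting the page uses one TC-wide estimate for all four statutes.

rows = {"§101": (1.9, -38.1), "§103": (56.8, +16.8),
        "§102": (22.5, -17.5), "§112": (15.1, -24.9)}

for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")  # 40.0% each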

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 13, 18, and 20 are amended. Claim 16 is cancelled. Claims 1-15 and 17-21 are currently under review.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 13, 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-15 and 17-21 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 1, 13, and 20 are objected to because of the following informalities: typographic error. Appropriate correction is required. The following is suggested:

Claim 1, line 13: "motion, wherein the 3D motion of the hand results in 3D movement of [[an]]the virtual element in the [[3D]]XR environment."
Claim 13, line 17: "motion, wherein the 3D motion of the hand results in 3D movement of [[an]]the virtual element in the [[3D]]XR environment."
Claim 20, line 14: "motion, wherein the 3D motion of the hand results in 3D movement of [[an]]the virtual element in the [[3D]]XR environment."

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 11, 13-14, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (Pub. No.: US 2024/0061657 A1), hereinafter referred to as Singh, in view of Zhang et al. (Pub. No.: US 2019/0073109 A1) and in view of Ueno et al. (Pub. No.: US 2015/0312559 A1), hereinafter referred to as Ueno.

With respect to Claim 1, Singh teaches a method (fig. 8; ¶67, "the methods herein may be implemented as software instructions executed by a processor, suitably configured application specific integrated circuits (ASIC) or field programmable gate array (FPGA) modules, or any other convenient manner"; ¶114) comprising: at an electronic device (fig. 3, item 216; ¶90) having a processor (fig. 3, item 302; ¶90) and a display (fig. 3, item 304; ¶90): presenting an extended reality (XR) environment (¶90) comprising a virtual element (¶92, "the headset may be established by an AR headset that may have a transparent display that is able to present 3D virtual objects/content") and a cursor (¶26); obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (¶48, "Interactions in 3D space (that may be translated to 2D interactions using App Space) may occur using any number of different 3D input modalities, including but not limited to gaze pointer, raycast, hand/arm gestures"; ¶91, "The camera 306 may also be used for gesture recognition to recognize gestures made by the user using their hand, arm, etc. consistent with present principles"; ¶97, "App Space may detect gaze, raycast, keyboard, and keypress events from any buttons on the head-mounted headset 400 itself or even other controller devices (such as 3D hand-held controllers) via the headset's own SDK for 3D rendering"); based on the hand data, operating in a first mode where 3D motion is converted to 2D motion on a 2D surface displayed within a 3D space of the XR environment (¶26, "App Space may intercept the 3D AR coordinates from a 3D cursor and convert them to 2D coordinates"; ¶93, "App Space may therefore render the 2D apps in a 3D spatial environment, as well as convert 3D coordinates in the 3D spatial coordinate system into 2D coordinates in the 2D coordinate system at runtime (and vice versa)").

Singh does not explicitly mention that, based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion.

Zhang teaches a method (figs. 13A to 13B; ¶65) comprising: at an electronic device (figs. 1-2, item 10) having a processor (fig. 1, item 32; ¶26) and a display (figs. 1-2, item 20 comprises item 21: display; ¶24): presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56); obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) cursor motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the spatial input signals is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the spatial input signals is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of Singh, such that spatial input signals are hand gestures associated with hand data that correspond to a cursor, resulting in: based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).

Singh and Zhang combined do not explicitly teach wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment.

Ueno teaches a method (figs. 17-18, 20, 23, 25) comprising: at an electronic device (fig. 5) having a processor (fig. 5, item 4D; ¶92) and a display (fig. 5, items 32a and 32b; ¶93): presenting an extended reality (XR) environment (fig. 7; ¶255) comprising a virtual element (fig. 7, item BL1 or fig. 15, item OB1; ¶108; ¶137); obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (fig. 7, item H1; ¶96; ¶98, "the imaging units 40 and 42 function as both of the detection unit 44 and the distance measuring unit 46"); detecting a 3D user input criteria (¶114, "the user moves the hand H1 in the direction of an arrow A1 … the display device 1 determines that the movement of the hand H1 is operation to move the three-dimensional object to the right while holding the three-dimensional object, and moves the position of the three-dimensional object to the right in the virtual space according to the operation"); wherein the 3D motion of the hand results in 3D movement of an element in the XR environment (¶114).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh and Zhang, wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment, as taught by Ueno, so as to provide the user with a highly convenient operation method (¶22).

With respect to Claim 2, claim 1 is incorporated. Singh does not mention wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion.

Zhang teaches a method (figs. 13A to 13B; ¶65) comprising: at an electronic device (figs. 1-2, item 10) having a processor (fig. 1, item 32; ¶26) and a display (figs. 1-2, item 20 comprises item 21: display; ¶24): presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56); obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) pointer motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the cursor is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment); wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion (¶32; ¶38).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh and Ueno, wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).

With respect to Claim 11, claim 1 is incorporated. Singh does not teach wherein: the first mode moves the cursor on a surface of the virtual element; and the second mode moves the virtual element within the XR environment while a position of the cursor on the surface of the virtual element is maintained.

Zhang teaches a method (figs. 13A to 13B; ¶65) comprising: at an electronic device (figs. 1-2, item 10) having a processor (fig. 1, item 32; ¶26) and a display (figs. 1-2, item 20 comprises item 21: display; ¶24): presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56); obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) pointer motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the cursor is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment); wherein: the first mode moves the cursor on a surface of the virtual element (fig. 8, the cursor 214 is on the surface of a virtual element within the desktop window 40; ¶46, "while displaying the two-dimensional pointer 214 hit-testing a second application window 208 located within the desktop window 40, a user selection of the application window may be received"); and the second mode moves the virtual element within the XR environment while a position of the cursor on the surface of the virtual element is maintained (fig. 9, item 208 is moved from item 40 to environment 56; ¶46, "the user 36 may move the second application window 208 outside the boundary of the desktop window 40 via interaction with the mouse 204. In response to determining that the user moves the second application window 208 outside the boundary, view management of the second application window may be transitioned from the operating system shell 46 to the three-dimensional (holographic) shell 60 corresponding to the three-dimensional environment 56"; ¶47).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh and Ueno, wherein: the first mode moves the cursor on a surface of the virtual element; and the second mode moves the virtual element within the XR environment while a position of the cursor on the surface of the virtual element is maintained, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).

With respect to Claim 13, Singh teaches an electronic device (fig. 3, item 216; ¶90) comprising: a non-transitory computer-readable storage medium (fig. 3, item 308; ¶12; ¶69-70; ¶92); and at least one processor (fig. 3, item 302; ¶90) coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions (¶66-67) that, when executed on the one or more processors, cause the electronic device to perform operations comprising: presenting an extended reality (XR) environment (¶90) comprising a virtual element (¶92, "the headset may be established by an AR headset that may have a transparent display that is able to present 3D virtual objects/content") and a cursor (¶26), obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (¶48, "Interactions in 3D space (that may be translated to 2D interactions using App Space) may occur using any number of different 3D input modalities, including but not limited to gaze pointer, raycast, hand/arm gestures"; ¶91, "The camera 306 may also be used for gesture recognition to recognize gestures made by the user using their hand, arm, etc. consistent with present principles"; ¶97, "App Space may detect gaze, raycast, keyboard, and keypress events from any buttons on the head-mounted headset 400 itself or even other controller devices (such as 3D hand-held controllers) via the headset's own SDK for 3D rendering"); based on the hand data, operating in a first mode where 3D motion is converted to 2D motion on a 2D surface displayed within a 3D space of the XR environment (¶26; ¶93, "App Space may therefore render the 2D apps in a 3D spatial environment, as well as convert 3D coordinates in the 3D spatial coordinate system into 2D coordinates in the 2D coordinate system at runtime (and vice versa)").

Singh does not explicitly mention that, based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion.

Zhang teaches an electronic device (figs. 1-2, items 10, 20, and 16) comprising: a non-transitory computer-readable storage medium (fig. 1, item 30; ¶26); and a processor (fig. 1, item 32; ¶26) coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions (¶26) that, when executed on the processor, cause the electronic device to perform operations comprising: presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56), obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) cursor motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the spatial input signals is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the spatial input signals is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the electronic device of Singh, such that spatial input signals are hand gestures associated with hand data that correspond to a cursor, resulting in: based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).

Singh and Zhang combined do not explicitly teach wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment.

Ueno teaches an electronic device (fig. 5) comprising: a processor (fig. 5, item 4D; ¶92) and a display (fig. 5, items 32a and 32b; ¶93), the electronic device to perform operations comprising: presenting an extended reality (XR) environment (fig. 7; ¶255) comprising a virtual element (fig. 7, item BL1 or fig. 15, item OB1; ¶108; ¶137); obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (fig. 7, item H1; ¶96; ¶98, "the imaging units 40 and 42 function as both of the detection unit 44 and the distance measuring unit 46"); detecting a 3D user input criteria (¶114, "the user moves the hand H1 in the direction of an arrow A1 … the display device 1 determines that the movement of the hand H1 is operation to move the three-dimensional object to the right while holding the three-dimensional object, and moves the position of the three-dimensional object to the right in the virtual space according to the operation"); wherein the 3D motion of the hand results in 3D movement of an element in the XR environment (¶114).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined electronic device of Singh and Zhang, wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment, as taught by Ueno, so as to provide the user with a highly convenient operation method (¶22).

With respect to Claim 14, claim 13 is incorporated. Singh does not mention wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion.

Zhang teaches an electronic device (figs. 1-2, items 10, 20, and 16) comprising: a non-transitory computer-readable storage medium (fig. 1, item 30; ¶26); and a processor (fig. 1, item 32; ¶26) coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions (¶26) that, when executed on the one or more processors, cause the system to perform operations comprising: presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56), obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) pointer motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the cursor is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment); wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion (¶32; ¶38).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined electronic device of Singh and Ueno, wherein, based on said operating in the first mode, cursor movement, of the cursor, is limited to 2D motion, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).
With respect to Claim 20, Singh teaches a non-transitory computer-readable storage medium (fig. 3, item 308; ¶12; ¶69-70; ¶92) storing program instructions (¶66-67) executable via at least one processor (fig. 3, item 302; ¶90), of an electronic device (fig. 3, item 216; ¶90), to perform operations comprising: presenting an extended reality (XR) environment (¶90) comprising a virtual element (¶92, "the headset may be established by an AR headset that may have a transparent display that is able to present 3D virtual objects/content") and a cursor (¶26), obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (¶48, "Interactions in 3D space (that may be translated to 2D interactions using App Space) may occur using any number of different 3D input modalities, including but not limited to gaze pointer, raycast, hand/arm gestures"; ¶91, "The camera 306 may also be used for gesture recognition to recognize gestures made by the user using their hand, arm, etc. consistent with present principles"; ¶97, "App Space may detect gaze, raycast, keyboard, and keypress events from any buttons on the head-mounted headset 400 itself or even other controller devices (such as 3D hand-held controllers) via the headset's own SDK for 3D rendering"); based on the hand data, operating in a first mode where 3D motion is converted to 2D motion on a 2D surface displayed within a 3D space of the XR environment (¶26; ¶93, "App Space may therefore render the 2D apps in a 3D spatial environment, as well as convert 3D coordinates in the 3D spatial coordinate system into 2D coordinates in the 2D coordinate system at runtime (and vice versa)").

Singh does not explicitly mention that, based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion.

Zhang teaches a non-transitory computer-readable storage medium (fig. 1, item 30; ¶26) storing program instructions (¶26) executable via a processor (fig. 1, item 32; ¶26), of an electronic device (figs. 1-2, items 10, 20, and 16), to perform operations comprising: presenting an extended reality (XR) environment (¶57) comprising a virtual element (fig. 11, items 240 and 244; ¶56) and a cursor (fig. 3, depicted as item 210 when outside a desktop window; fig. 11, depicted as item 214 when inside a desktop window or application window; ¶31, "In some examples, if the pointer 210 collides with virtual content in the three-dimensional environment 56, the pointer may be moved closer to the user 36 to overlap the virtual object or other content"; ¶56), obtaining spatial input signals associated with hand data corresponding to a three-dimensional (3D) cursor motion of a hand in a 3D environment (¶30, "the three-dimensional pointer 210 may be controlled by other user input modalities, such as gaze detection using a targeting ray and/or gesture detection"); based on the spatial input signals, operating in a first mode where 3D motion of the spatial input signals is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment (fig. 7; fig. 13B, items 280-282; ¶45, "the system may determine that the location of the three-dimensional pointer 210 moves from outside the boundary of the desktop window 40 to inside the window. In response, the translation of the spatial input signals may be changed from three-dimensional motion of the three-dimensional pointer 210 to two-dimensional motion of the two-dimensional pointer 214 within the desktop window 40"); detecting a 3D user input criteria (fig. 13B, item 293); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the spatial input signals is maintained without conversion to the 2D motion (fig. 13B, item 294; ¶41, "when the location of the two-dimensional pointer crosses the boundary area of the desktop window 40 into the surrounding three-dimensional environment 56, the two-dimensional pointer is replaced with the three-dimensional pointer 210 at the corresponding location"; ¶46; ¶65-66 – the 3D motion of the cursor is maintained since the cursor and the application (desktop window) are operating in a three-dimensional environment).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the non-transitory computer-readable storage medium of Singh, such that spatial input signals are hand gestures associated with hand data that correspond to a cursor, resulting in: based on the hand data, operating in a first mode where 3D motion of the hand is converted to 2D motion to move the cursor on a 2D surface displayed within a 3D space of the XR environment; detecting a 3D user input criteria; and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the hand is maintained without conversion to the 2D motion, as taught by Zhang, so as to enable input to freely migrate between a desktop window and virtual space, thereby enabling a user to conveniently interact with desktop and non-desktop virtual content, and also such that desktop applications may be moved into and out of a desktop window to provide continuum between an operating system shell and a holographic/three-dimensional shell displayed by an HMD device (¶23).

Singh and Zhang combined do not explicitly teach wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment.

Ueno teaches a non-transitory computer-readable storage medium (¶100) storing program instructions executable via one or more processors of an electronic device (fig. 5) to perform operations comprising: presenting an extended reality (XR) environment (fig. 7; ¶255) comprising a virtual element (fig. 7, item BL1 or fig. 15, item OB1; ¶108; ¶137); obtaining image data associated with hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (fig. 7, item H1; ¶96; ¶98, "the imaging units 40 and 42 function as both of the detection unit 44 and the distance measuring unit 46"); detecting a 3D user input criteria (¶114, "the user moves the hand H1 in the direction of an arrow A1 … the display device 1 determines that the movement of the hand H1 is operation to move the three-dimensional object to the right while holding the three-dimensional object, and moves the position of the three-dimensional object to the right in the virtual space according to the operation"); wherein the 3D motion of the hand results in 3D movement of an element in the XR environment (¶114).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer-readable storage medium of Singh and Zhang, wherein the 3D motion of the hand results in 3D movement of an element in the 3D environment, as taught by Ueno, so as to provide the user with a highly convenient operation method (¶22).

With respect to Claim 21, claim 1 is incorporated. Singh teaches wherein the 2D surface is an interface element (fig. 6, item 602; ¶110).

Claims 3-6, 8-10, 12, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Singh, Zhang, and Ueno as applied to claims 1 and 13 above, and further in view of Cho et al. (Pub. No.: US 2019/0384420 A1), hereinafter referred to as Cho.

With respect to Claim 3, claim 1 is incorporated. Singh, Zhang, and Ueno combined do not explicitly mention wherein the 3D user input criteria enables a particular type of UI element to be selected.

Cho teaches a method (fig. 20; ¶147) comprising: at an electronic device (figs. 1 and 6, item 101; ¶44) having a processor (fig. 1, item 120; ¶45) and a display (fig. 1, item 160; ¶48): presenting an extended reality (XR) environment comprising a virtual element and a cursor (fig. 6; ¶87, "the electronic device 101 may form a viewport 610 in a real or virtual environment, and may output real external objects, virtual objects, or data information through the viewport 610 … In addition, the viewport 610 may refer to an area in which the pointer object is movable"); obtaining hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (via a control device – see figs. 1, 7, 8A/8B, 9, item 102; ¶50; ¶91; ¶97, "More specifically, the controller 770 may obtain a magnetic vector using the origin of the three-axis magnetic field (i.e., a reference point) generated in the electronic device 101 and information measured by the first sensor unit 710 (e.g., the intensity of current, the intensity of voltage, and the phase of a magnetic field signal), and may determine the coordinates of the control device 102 by means of a magnetic field formula" – since the hand data comprises three axes, it therefore corresponds to 3D motion; or ¶145, "If a camera of the electronic device 101 detects the hand shape of the user wearing the control devices 102, the three-dimensional pointer object may be displayed in the form of a user's hand or control device 102"); operating in a first mode where 3D motion of the cursor is converted to 2D motion (fig. 17, items 1701, 1703, and 1705; fig. 19, the electronic device operates in a first mode when it is determined that the control device is close to ground; ¶142; ¶143-144); detecting a 3D user input criteria (fig. 17, after item 1705 has ended, the process begins again and the 3D user input criteria is detected, which comprises the control device not being close to ground); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 17, items 1701, 1707, and 1709; ¶145-146); wherein the 3D user input criteria enables a particular type of UI element to be selected (fig. 19B; ¶146, "in FIG. 19B, the electronic device 101 may move the three-dimensional pointer object 1903 up, down, left, right, forward, or backwards. In this case, if the electronic device 101 receives an input signal from the control device 102, the electronic device 101 may control the pointer object 1903 so as to click and move one of a plurality of windows" – "click" corresponds to selecting).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh, Zhang, and Ueno, such that input via a control device is substituted with gesture input, resulting in wherein the 3D user input criteria enables a particular type of UI element to be selected, as taught by Cho, so as to allow seamless input in a variety of uses and applications.

With respect to Claim 4, claim 3 is incorporated. Singh, Zhang, and Ueno combined do not explicitly mention wherein the 3D user input criteria determines that the cursor is on an object having an object type while a selection input is provided.

Cho teaches a method (fig. 20; ¶147) comprising: at an electronic device (figs. 1 and 6, item 101; ¶44) having a processor (fig. 1, item 120; ¶45) and a display (fig. 1, item 160; ¶48): presenting an extended reality (XR) environment comprising a virtual element and a cursor (fig. 6; ¶87, "the electronic device 101 may form a viewport 610 in a real or virtual environment, and may output real external objects, virtual objects, or data information through the viewport 610 … In addition, the viewport 610 may refer to an area in which the pointer object is movable"); obtaining hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (via a control device – see figs. 1, 7, 8A/8B, 9, item 102; ¶50; ¶91; ¶97, "More specifically, the controller 770 may obtain a magnetic vector using the origin of the three-axis magnetic field (i.e., a reference point) generated in the electronic device 101 and information measured by the first sensor unit 710 (e.g., the intensity of current, the intensity of voltage, and the phase of a magnetic field signal), and may determine the coordinates of the control device 102 by means of a magnetic field formula" – since the hand data comprises three axes, it therefore corresponds to 3D motion; or ¶145, "If a camera of the electronic device 101 detects the hand shape of the user wearing the control devices 102, the three-dimensional pointer object may be displayed in the form of a user's hand or control device 102"); operating in a first mode where 3D motion of the cursor is converted to 2D motion (fig. 17, items 1701, 1703, and 1705; fig. 19, the electronic device operates in a first mode when it is determined that the control device is close to ground; ¶142; ¶143-144); detecting a 3D user input criteria (fig. 17, after item 1705 has ended, the process begins again and the 3D user input criteria is detected, which comprises the control device not being close to ground); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 17, items 1701, 1707, and 1709; ¶145-146); wherein the 3D user input criteria enables a particular type of UI element to be selected (fig. 19B; ¶146, "in FIG. 19B, the electronic device 101 may move the three-dimensional pointer object 1903 up, down, left, right, forward, or backwards. In this case, if the electronic device 101 receives an input signal from the control device 102, the electronic device 101 may control the pointer object 1903 so as to click and move one of a plurality of windows" – "click" corresponds to selecting); wherein the 3D user input criteria determines that the cursor is on an object having an object type while a selection input is provided (fig. 19B; ¶146, "in FIG. 19B, the electronic device 101 may move the three-dimensional pointer object 1903 up, down, left, right, forward, or backwards. In this case, if the electronic device 101 receives an input signal from the control device 102, the electronic device 101 may control the pointer object 1903 so as to click and move one of a plurality of windows" – the object is a window and the object type is an active window to be moved; click and move corresponds to a selection input).

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh, Zhang, and Ueno, such that input via a control device is substituted with gesture input, resulting in wherein the 3D user input criteria determines that the cursor is on an object having an object type while a selection input is provided, as taught by Cho, so as to allow seamless input in a variety of uses and applications.

With respect to Claim 5, claim 1 is incorporated. Singh, Zhang, and Ueno combined do not explicitly mention wherein the 3D user input criteria comprises performing a user gesture and moving the hand in a z-direction.

Cho teaches a method (fig. 20; ¶147) comprising: at an electronic device (figs. 1 and 6, item 101; ¶44) having a processor (fig. 1, item 120; ¶45) and a display (fig. 1, item 160; ¶48): presenting an extended reality (XR) environment comprising a virtual element and a cursor (fig. 6; ¶87, "the electronic device 101 may form a viewport 610 in a real or virtual environment, and may output real external objects, virtual objects, or data information through the viewport 610 … In addition, the viewport 610 may refer to an area in which the pointer object is movable"); obtaining hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (via a control device – see figs. 1, 7, 8A/8B, 9, item 102; ¶50; ¶91; ¶97, "More specifically, the controller 770 may obtain a magnetic vector using the origin of the three-axis magnetic field (i.e., a reference point) generated in the electronic device 101 and information measured by the first sensor unit 710 (e.g., the intensity of current, the intensity of voltage, and the phase of a magnetic field signal), and may determine the coordinates of the control device 102 by means of a magnetic field formula" – since the hand data comprises three axes, it therefore corresponds to 3D motion; or ¶145, "If a camera of the electronic device 101 detects the hand shape of the user wearing the control devices 102, the three-dimensional pointer object may be displayed in the form of a user's hand or control device 102"); operating in a first mode where 3D motion of the cursor is converted to 2D motion (fig. 17, items 1701, 1703, and 1705; fig. 19, the electronic device operates in a first mode when it is determined that the control device is close to ground; ¶142; ¶143-144); detecting a 3D user input criteria (fig. 17, after item 1705 has ended, the process begins again and the 3D user input criteria is detected, which comprises the control device not being close to ground); and in response to said 3D user input criteria, modifying a mode of operation to a second mode where the 3D motion of the cursor is maintained without conversion to the 2D motion (fig. 17, items 1701, 1707, and 1709; ¶145-146); wherein the 3D user input criteria enables a particular type of UI element to be selected (fig. 19B; ¶146, "in FIG. 19B, the electronic device 101 may move the three-dimensional pointer object 1903 up, down, left, right, forward, or backwards. In this case, if the electronic device 101 receives an input signal from the control device 102, the electronic device 101 may control the pointer object 1903 so as to click and move one of a plurality of windows" – "click" corresponds to selecting); wherein the 3D user input criteria comprises performing a user gesture and moving the hand in a z-direction (fig. 19B; ¶146, "For example, the electronic device 101 may display the rotational motion of hands, fingers, or the control device 102 using the three-dimensional direction coordinates (e.g., a roll angle, a pitch angle, and a yaw angle) received from the control device 102").

Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Singh, Zhang, and Ueno, such that input via a control device is substituted with gesture input, resulting in wherein the 3D user input criteria comprises performing a user gesture and moving the hand in a z-direction, as taught by Cho, so as to allow seamless input in a variety of uses and applications.

With respect to Claim 6, claim 5 is incorporated. Singh, Zhang, and Ueno combined do not explicitly mention wherein the method determines to provide the second mode while the user gesture is maintained.

Cho teaches a method (fig. 20; ¶147) comprising: at an electronic device (figs. 1 and 6, item 101; ¶44) having a processor (fig. 1, item 120; ¶45) and a display (fig. 1, item 160; ¶48): presenting an extended reality (XR) environment comprising a virtual element and a cursor (fig. 6; ¶87, "the electronic device 101 may form a viewport 610 in a real or virtual environment, and may output real external objects, virtual objects, or data information through the viewport 610 … In addition, the viewport 610 may refer to an area in which the pointer object is movable"); obtaining hand data corresponding to a three-dimensional (3D) motion of a hand in a 3D environment (via a control device – see figs. 1, 7, 8A/8B, 9, item 102; ¶50; ¶91; ¶97, "More specifically, the controller 770 may obtain a magnetic vector using the origin of the three-axis magnetic field (i.e., a reference point) generated in the electronic device 101 and information measured by the first sensor unit 710 (e.g., the intensity of current, the intensity of voltage, and the phase of a magnetic field signal), and may determine the coordinates of the control device 102 by means of a magnetic field formula" – since the hand data comprises three axes, it therefore corresponds to 3D motion or ¶145, "If a camera of the electronic device 101 detects the hand shape of the user wearing the control devices 102, the three-dimensional pointer object may be displayed in the form of a user's hand or control device 102"); operating in a first mode where 3D motion of the cursor is converted to 2D motion (fig. 17, items 1701
Read full office action
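For readers tracking the technology rather than the prosecution posture: the limitation in dispute is a mode switch in which 3D hand motion is first projected into 2D cursor motion on a window surface and then, once a "3D user input criteria" is detected, passed through unconverted so that the hand's 3D motion moves the virtual element itself. Below is a minimal sketch of that behavior, using a hypothetical pinch-plus-z gesture as the criterion; it is illustrative only, not the applicant's claimed implementation or any cited reference's code.

```python
# A minimal sketch (not the applicant's or any cited reference's code) of the
# claimed mode switch: in "2D" mode, 3D hand motion is projected onto a window
# surface to move a cursor; when a hypothetical pinch-while-moving-in-z
# criterion fires, the controller switches to "3D" mode, where the hand's 3D
# motion is kept as 3D and moves the virtual element instead.

from dataclasses import dataclass, field

Vec3 = tuple[float, float, float]

@dataclass
class CursorController:
    mode: str = "2D"  # "2D": project onto the window; "3D": pass through
    cursor_2d: list = field(default_factory=lambda: [0.0, 0.0])
    element_3d: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

    def meets_3d_criteria(self, pinching: bool, delta: Vec3) -> bool:
        # Hypothetical 3D user input criterion: pinch combined with z motion.
        return pinching and abs(delta[2]) > 0.02

    def update(self, delta: Vec3, pinching: bool) -> None:
        if self.mode == "2D" and self.meets_3d_criteria(pinching, delta):
            self.mode = "3D"  # second mode: keep the 3D motion as 3D
        if self.mode == "2D":
            # First mode: drop the z component, i.e. project the 3D hand
            # motion onto the 2D surface to move the cursor.
            self.cursor_2d[0] += delta[0]
            self.cursor_2d[1] += delta[1]
        else:
            # Second mode: 3D hand motion moves the virtual element in 3D
            # while the cursor keeps its position on the element's surface.
            for i in range(3):
                self.element_3d[i] += delta[i]

c = CursorController()
c.update((0.1, 0.0, 0.0), pinching=False)  # 2D mode: cursor moves on surface
c.update((0.0, 0.0, 0.1), pinching=True)   # criterion met: switch to 3D mode
print(c.mode, c.cursor_2d, c.element_3d)   # 3D [0.1, 0.0] [0.0, 0.0, 0.1]
```

The sketch also illustrates why Zhang's window-boundary crossing and Cho's controller-height test are cited as the "3D user input criteria": any detectable condition can gate the same two-state switch.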

Prosecution Timeline

May 16, 2024 — Application Filed
Apr 04, 2025 — Non-Final Rejection (§103)
Jun 23, 2025 — Interview Requested
Jul 01, 2025 — Examiner Interview Summary
Jul 01, 2025 — Applicant Interview (Telephonic)
Jul 09, 2025 — Response Filed
Aug 07, 2025 — Final Rejection (§103)
Oct 31, 2025 — Interview Requested
Nov 12, 2025 — Examiner Interview Summary
Nov 12, 2025 — Applicant Interview (Telephonic)
Nov 14, 2025 — Request for Continued Examination
Nov 15, 2025 — Response after Non-Final Action
Dec 08, 2025 — Non-Final Rejection (§103)
Mar 16, 2026 — Interview Requested
Mar 24, 2026 — Applicant Interview (Telephonic)
Mar 24, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591297 — MULTIMODAL TASK EXECUTION AND TEXT EDITING FOR A WEARABLE SYSTEM
2y 5m to grant · Granted Mar 31, 2026
Patent 12536977 — BRIGHTNESS CONTROL METHOD AND APPARATUS FOR DISPLAY PANEL
2y 5m to grant · Granted Jan 27, 2026
Patent 12475825 — DISPLAY SUBSTRATE INCLUDING SHIFT REGISTER AND DISPLAY DEVICE
2y 5m to grant · Granted Nov 18, 2025
Patent 12451088 — LIQUID CRYSTAL DISPLAY DEVICE AND CONTROL MODULE THEREOF, AND INTEGRATED BOARD
2y 5m to grant · Granted Oct 21, 2025
Patent 12451091 — TEMPERATURE CONTROL CIRCUIT AND TEMPERATURE CONTROL METHOD OF DRIVER CHIP AND TIMING CONTROL DRIVER BOARD
2y 5m to grant · Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 77% (+19.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
