Prosecution Insights
Last updated: April 19, 2026
Application No. 18/757,461

HEAD-MOUNTED DISPLAY, CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM THEREOF

Final Rejection — §103
Filed: Jun 27, 2024
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: HTC Corporation
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
Predicted OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 69% (294 granted / 428 resolved), +10.7% vs TC avg — above average
Interview Lift: +21.8% (allowance rate in resolved cases with an interview vs. without) — strong
Avg Prosecution: 2y 11m typical timeline
Career History: 459 total applications across all art units; 31 currently pending
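
To make the figures above concrete, here is a minimal sketch of how a career allow rate and an interview lift could be computed from resolved-case records. The record layout, the field names, and the definition of interview lift (allowance rate in cases with an interview minus the rate in cases without one) are assumptions for illustration, not a description of this dashboard's actual data pipeline.

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    # Hypothetical per-case record; field names are illustrative only.
    granted: bool          # True if the application issued as a patent
    had_interview: bool    # True if at least one examiner interview is of record

def allow_rate(cases: list) -> float:
    """Share of resolved cases that issued as patents (e.g. 294/428 ~= 69%)."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list) -> float:
    """Allowance rate with an interview minus the rate without one
    (one plausible reading of the '+21.8% interview lift' figure)."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)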

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 428 resolved cases
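
The statute-specific deltas above reduce to percentage-point arithmetic. The sketch below assumes each figure is the share of the examiner's resolved cases in which that statute was applied, compared against a flat Tech Center baseline; the ~40% baseline is inferred from the displayed deltas, and the true per-statute baselines may differ.

EXAMINER_RATES = {"101": 0.043, "103": 0.601, "102": 0.127, "112": 0.112}
TC_BASELINE = {"101": 0.40, "103": 0.40, "102": 0.40, "112": 0.40}  # inferred, hypothetical

def statute_deltas(examiner: dict, baseline: dict) -> dict:
    """Signed percentage-point delta vs the Tech Center average for each statute."""
    return {s: round((examiner[s] - baseline[s]) * 100, 1) for s in examiner}

print(statute_deltas(EXAMINER_RATES, TC_BASELINE))
# {'101': -35.7, '103': 20.1, '102': -27.3, '112': -28.8}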

Office Action

§103
Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . 2. This Office action is in response to Applicant’s amendments/remarks received on September 29, 2025. 3. Claims 1-9 and 11-20 are pending in this application. Claims 1, 11 and 20 have been amended. Claim 10 has been canceled. Response to Arguments 4. Applicant's arguments filed September 29, 2025 have been fully considered but they are not persuasive. 5. Applicant contends that “Liu, Wu or any combination thereof fails to disclose the emphasized technical features of claim 1: “…calculate a posture corresponding to the hand based on the real-time image, wherein the posture comprises a reference point located on the hand; in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receive a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus, and select a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus”. Examiner respectfully disagrees. Liu discloses a head mounted display performing an interaction method for presenting a virtual keyboard in an extended reality space; detecting a position and an action of a user finger relative to the virtual keyboard; and outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard. More specifically, the position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard. Further on, the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit. Also referring to FIG. 2, in the extended reality device, a sensor for posture detection (for example, a nine-axis sensor) is provided, for detecting a posture change of the extended reality device in real time; if a user wears the extended reality device, when a posture of the user head changes, the real-time posture of the head will be transmitted to a processor, to calculate a gaze point of a line of sight of the user in a virtual environment; and, according to the gaze point, an image within a user gaze range (i.e., a virtual field of view) in a three-dimensional model of the virtual environment is calculated, and the image is displayed on a display screen, thereby making the user have an immersion experience as if watching in a real environment. [See Liu: at least Figs. 
1-6 and par. 37-53, 69-77, 110]. Further on, Wu discloses systems and method for providing dynamic haptic playback in augmented or virtual reality environments. More specifically, Wu discloses a system 100 for providing dynamic haptic playback or effects for an augmented or virtual reality environment in substantially real time . The processor 102 that can receive a sensor signal from the sensor 136 when the sensor 136 detects the user's interaction with a simulated reality environment using the user device 120. For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment. The haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion. In an example, the user can provide additional user input via one or more interactive control elements to indicate or modify parameters of the dynamic haptic effect including, for example, providing user input to indicate whether the dynamic haptic effect is a periodic dynamic haptic effect, a first (e.g., starting) dynamic haptic effect of the dynamic haptic effect, a second (e.g., ending) dynamic haptic effect of the dynamic haptic effect, a starting time or position of the dynamic haptic effect, an ending time or position of the dynamic haptic effect, and a type of a model (e.g., a linear model) for rendering or generating the dynamic haptic effect [See Wu: at least par. 21, 37-43, 53, 79, 99 ]. Accordingly, the cited prior art of record meet with the contended limitations. Therefore, the Office respectfully stands their position that the cited prior art meets with the contended limitations. All remaining arguments that are dependent on the aforementioned arguments are therefore deemed unpersuasive. Information Disclosure Statement 6. The information disclosure statement (IDS) submitted on January 7, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Rejections - 35 USC § 103 7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 8. 
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 9. Claims 1, 4, 5, 9, 11, 14, 15, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over LIU et al.(US 2024/0103625 A1)(hereinafter Liu) in view of Wu et al.(US 2019/0324539 A1)(hereinafter Wu). Regarding claim 1, Liu discloses a head-mounted display[See Liu: at least Figs. 1-2, 6 and par. 33-37 regarding head mounted display (HMD)], comprising: a display[See Liu: at least Figs. 1-6 and par. 35 and 37 regarding display screen]; a communication interface, communicatively connected to a wearable apparatus[See Liu: at least Figs. 1-6 and par. 55, 58, 92, 103-106 regarding the vibration signal is formed by a vibration motor, wherein the vibration motor is provided in an extended reality device, and/or the vibration motor is provided in a haptic feedback device in communication connection with the extended reality device. Further, in Fig. 6, the communication device 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information.]; a camera, configured to capture a real-time image comprising the wearable apparatus worn on a hand of a user[ See Liu: at least Figs. 1-6 and par. 37, 40-53, 56-58, 94, 103 regarding the vibration motor is re-used as a focusing motor of a camera. Specifically, a camera is configured in the extended reality device, and the vibration motor is re-used as a focusing motor of the camera in the extended reality device. Alternatively, a camera is configured in the haptic feedback device, and the vibration motor is re-used as a focusing motor of the camera in the haptic feedback device…Further, the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; and a processor, coupled to the display, the communication interface, and the camera [See Liu: at least Figs. 1-6 and par. 
56-58, 94, 102-103, 119-123 regarding the electronic device 1000 may comprise a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 1001 which may perform various appropriate actions and processes…The processing device 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004…Further on, there is provided an electronic device comprising: one or more processors], configured to: calculate a posture corresponding to the hand based on the real-time image, wherein the posture comprises a reference point located on the hand [See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; and in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receive a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…], and select a first gesture based on a plurality of inertial measurement parameters received from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. 
The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]. Liu does not explicitly disclose in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receive a plurality of inertial measurement parameters captured in a time interval after activating the input operation form the wearable apparatus, and select a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus. However, Wu teaches in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receive a plurality of inertial measurement parameters captured in a time interval after activating the input operation form the wearable apparatus[See Wu: at least par. 21, 37-43, 53, 79, 99 regarding a system 100 for providing dynamic haptic playback or effects for an augmented or virtual reality environment in substantially real time … For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. 
For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion… In an example, the user can provide additional user input via one or more interactive control elements to indicate or modify parameters of the dynamic haptic effect including, for example, providing user input to indicate whether the dynamic haptic effect is a periodic dynamic haptic effect, a first (e.g., starting) dynamic haptic effect of the dynamic haptic effect, a second (e.g., ending) dynamic haptic effect of the dynamic haptic effect, a starting time or position of the dynamic haptic effect, an ending time or position of the dynamic haptic effect, and a type of a model (e.g., a linear model) for rendering or generating the dynamic haptic effect.], and select a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus [See Wu: at least par. 37-43, 79, 99 regarding A detection module 158 can configure the processor 102 to receive sensor signals from the sensor 136…As an example, the processor 102 can receive a sensor signal from the sensor 136 when the sensor 136 detects the user's interaction with a simulated reality environment using the user device 120. For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion…]. 
Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu with Wu teachings by including “in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receive a plurality of inertial measurement parameters captured in a time interval after activating the input operation form the wearable apparatus, and select a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus” because this combination has the benefit of providing dynamic haptic effects in an augmented or virtual reality environment in substantially real time as the user interacts with the augmented or virtual reality environment [See Wu: at least par. 1-4, 48]. Further on, when combined teachings, Liu and Wu teach generate an input event corresponding to the virtual object based on the first gesture and an input target corresponding to the input operation [See Liu: at least Figs. 1-6 and par. 40-54, 69-77, 110 regarding S130, outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard. See Wu: at least Figs. 1-4 and par. 37-43, 79, 99 regarding In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion. For instance, if the user moves the user device 120 with a high velocity to interact with the simulated reality environment, the processor 102 can determine a dynamic haptic effect that includes a strong vibration or a series of strong vibrations. Continuing with this example, if the user subsequently moves the user device 120 with a low or lower velocity, the processor 102 can determine another characteristic of the haptic effect or vary a characteristic of the haptic effect such as, for example, by reducing a magnitude of the vibration or series of vibrations such that user perceives a weaker vibration as the user reduces the velocity of the user device 120…]. Regarding claim 11, Liu discloses a control method, being adapted for use in an electronic apparatus[See Liu: at least Figs. 1-6 and par. 4-5, 39 regarding interaction method for electronic device], wherein the electronic apparatus is communicatively connected to a wearable apparatus[See Liu: at least Figs. 1-6 and par. 
55, 58, 92, 103-106 regarding the vibration signal is formed by a vibration motor, wherein the vibration motor is provided in an extended reality device, and/or the vibration motor is provided in a haptic feedback device in communication connection with the extended reality device. Further, in Fig. 6, the communication device 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information.], and the control method comprises the following steps: capturing a real-time image comprising the wearable apparatus worn on a hand of a user[ See Liu: at least Figs. 1-6 and par. 37, 40-53, 56-58, 94, 103 regarding the vibration motor is re-used as a focusing motor of a camera. Specifically, a camera is configured in the extended reality device, and the vibration motor is re-used as a focusing motor of the camera in the extended reality device. Alternatively, a camera is configured in the haptic feedback device, and the vibration motor is re-used as a focusing motor of the camera in the haptic feedback device…Further, the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; calculating a posture corresponding to the hand based on the real-time image, wherein the posture comprises a reference point located on the hand[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; and in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. 
The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…], and selecting a first gesture based on the plurality of inertial measurement parameters received from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]. Liu does not explicitly disclose in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus, and selecting a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus. However, Wu teaches in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus[See Wu: at least par. 21, 37-43, 53, 79, 99 regarding a system 100 for providing dynamic haptic playback or effects for an augmented or virtual reality environment in substantially real time … For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... 
In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion… In an example, the user can provide additional user input via one or more interactive control elements to indicate or modify parameters of the dynamic haptic effect including, for example, providing user input to indicate whether the dynamic haptic effect is a periodic dynamic haptic effect, a first (e.g., starting) dynamic haptic effect of the dynamic haptic effect, a second (e.g., ending) dynamic haptic effect of the dynamic haptic effect, a starting time or position of the dynamic haptic effect, an ending time or position of the dynamic haptic effect, and a type of a model (e.g., a linear model) for rendering or generating the dynamic haptic effect.], and selecting a first gesture corresponding to the reference point from a plurality of gestures based on a plurality of inertial measurement parameters received from the wearable apparatus [See Wu: at least par. 37-43, 79, 99 regarding A detection module 158 can configure the processor 102 to receive sensor signals from the sensor 136…As an example, the processor 102 can receive a sensor signal from the sensor 136 when the sensor 136 detects the user's interaction with a simulated reality environment using the user device 120. For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion…]. 
Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu with Wu teachings by including “in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus, and selecting a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus” because this combination has the benefit of providing dynamic haptic effects in an augmented or virtual reality environment in substantially real time as the user interacts with the augmented or virtual reality environment [See Wu: at least par. 1-4, 48]. Further on, when combined teachings, Liu and Wu teach generating an input event corresponding to the virtual object based on the first gesture and an input target corresponding to the input operation[See Liu: at least Figs. 1-6 and par. 40-54, 69-77, 110 regarding S130, outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard. See Wu: at least Figs. 1-4 and par. 37-43, 79, 99 regarding In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion. For instance, if the user moves the user device 120 with a high velocity to interact with the simulated reality environment, the processor 102 can determine a dynamic haptic effect that includes a strong vibration or a series of strong vibrations. Continuing with this example, if the user subsequently moves the user device 120 with a low or lower velocity, the processor 102 can determine another characteristic of the haptic effect or vary a characteristic of the haptic effect such as, for example, by reducing a magnitude of the vibration or series of vibrations such that user perceives a weaker vibration as the user reduces the velocity of the user device 120…]. Regarding claim 20, Liu discloses a non-transitory computer readable storage medium, having a computer program stored therein, wherein the computer program comprises a plurality of codes, the computer program executes a control method after being loaded into an electronic apparatus[See Liu: at least Fig. 6 and par. 
104-113, 117-122 regarding a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated by the flow diagrams so as to achieve the above-mentioned interaction method.], the controlling method comprises: capturing a real-time image comprising a wearable apparatus worn on a hand of a user[ See Liu: at least Figs. 1-6 and par. 37, 40-53, 56-58, 94, 103 regarding the vibration motor is re-used as a focusing motor of a camera. Specifically, a camera is configured in the extended reality device, and the vibration motor is re-used as a focusing motor of the camera in the extended reality device. Alternatively, a camera is configured in the haptic feedback device, and the vibration motor is re-used as a focusing motor of the camera in the haptic feedback device…Further, the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; calculating a posture corresponding to the hand based on the real-time image, wherein the posture comprises a reference point located on the hand[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]; and in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. 
The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…], and selecting a first gesture based on the plurality of inertial measurement parameters received from the wearable apparatus[See Liu: at least Figs. 1-6 and par. 40-53, 69-77, 110 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… the implementation of this step comprises: periodically acquiring a user hand image by using an image sensor integrated in the extended reality device; acquiring user hand action information by using a hand action acquisition device worn by a user hand; and detecting the position and the action of the user finger relative to the virtual keyboard based on the user hand image and the user hand action information. The hand action acquisition device may specifically comprise one or more of a myoelectricity sensor, a vibration sensor, a pulse sensor, and an inertia measurement unit…]. Liu does not explicitly disclose in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus, and selecting a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus. However, Wu teaches in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus[See Wu: at least par. 21, 37-43, 53, 79, 99 regarding a system 100 for providing dynamic haptic playback or effects for an augmented or virtual reality environment in substantially real time … For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... 
In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion… In an example, the user can provide additional user input via one or more interactive control elements to indicate or modify parameters of the dynamic haptic effect including, for example, providing user input to indicate whether the dynamic haptic effect is a periodic dynamic haptic effect, a first (e.g., starting) dynamic haptic effect of the dynamic haptic effect, a second (e.g., ending) dynamic haptic effect of the dynamic haptic effect, a starting time or position of the dynamic haptic effect, an ending time or position of the dynamic haptic effect, and a type of a model (e.g., a linear model) for rendering or generating the dynamic haptic effect.], and selecting a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus [See Wu: at least par. 37-43, 79, 99 regarding A detection module 158 can configure the processor 102 to receive sensor signals from the sensor 136…As an example, the processor 102 can receive a sensor signal from the sensor 136 when the sensor 136 detects the user's interaction with a simulated reality environment using the user device 120. For instance, the detection module 158 can include instructions that, when executed by the processor 102, cause the processor 102 to receive a sensor signal from the sensor 136 when the sensor 136 captures information about the user's motion of the user device 120 as the user interacts with the simulated reality environment... In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion…]. 
Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu with Wu teachings by including “in response to determining to activate an input operation based on the reference point and a relative position of a virtual object displayed by the display, receiving a plurality of inertial measurement parameters captured in a time interval after activating the input operation from the wearable apparatus, and selecting a first gesture corresponding to the reference point from a plurality of gestures based on the plurality of inertial measurement parameters received from the wearable apparatus” because this combination has the benefit of providing dynamic haptic effects in an augmented or virtual reality environment in substantially real time as the user interacts with the augmented or virtual reality environment [See Wu: at least par. 1-4, 48]. Further on, when combined teachings, Liu and Wu teach generating an input event corresponding to the virtual object based on the first gesture and an input target corresponding to the input operation[See Liu: at least Figs. 1-6 and par. 40-54, 69-77, 110 regarding S130, outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard. See Wu: at least Figs. 1-4 and par. 37-43, 79, 99 regarding In some examples, the haptic effect determination module 160 may cause the processor 102 to determine a user's motion (e.g., body gesture or motion of the user device 120) and/or a characteristic of the motion and determine or vary a characteristic (e.g., a magnitude, duration, location, type, pitch, frequency, etc.) of a dynamic haptic effect based on the motion and/or characteristic of the motion. For example, the haptic effect determination module 160 may cause the processor 102 to access one or more lookup tables or databases that include data corresponding to a characteristic of a dynamic haptic effect associated with a user's motion (e.g., body motion or motion of the user device 120) and/or characteristic of the motion. In this embodiment, the processor 102 can access the one or more lookup tables or databases and determine or vary a characteristic of one or more dynamic haptic effects associated with the motion and/or characteristic of the motion. For instance, if the user moves the user device 120 with a high velocity to interact with the simulated reality environment, the processor 102 can determine a dynamic haptic effect that includes a strong vibration or a series of strong vibrations. Continuing with this example, if the user subsequently moves the user device 120 with a low or lower velocity, the processor 102 can determine another characteristic of the haptic effect or vary a characteristic of the haptic effect such as, for example, by reducing a magnitude of the vibration or series of vibrations such that user perceives a weaker vibration as the user reduces the velocity of the user device 120…]. Regarding claims 4 and 14, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. 
Further on, Liu teaches wherein the processor is further configured to / further comprises: determine / determining whether the relative position of the reference point is located in a vertical extension area of the virtual object, wherein the vertical extension area is constituted by vertically extending a distance from a plurality of subobjects in the virtual object; and in response to the relative position of the reference point is located in the vertical extension area, determine / determining to activate the input operation[See Liu: at least Figs. 1-6 and par. 40-53, 59-64, 66-76, 78-96, 108-111 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… An area of overlap between the vertical projection of the fingertip of the finger on the plane where the virtual keyboard(virtual object) is located and a triggerable range of a certain virtual key(subobject) in the virtual keyboard is greater than a set threshold, and the user keeps the position of the finger relative to the virtual keyboard unchanged, and completes a click action in a direction close to the plane where the virtual keyboard is located, which will trigger the virtual key, that is, input a character corresponding to the virtual key…S130, outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard.]. Regarding claims 5 and 15, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Further on, Liu teaches or suggests wherein when the first gesture is a tap gesture, the input target is generated through the following operations/ steps : calculating a projection point of the reference point on a virtual plane corresponding to the virtual object; and selecting a first subobject from a plurality of subobjects corresponding to the virtual object as the input target based on the projection point[See Liu: at least Figs. 1-6 and par. 40-53, 59-64, 66-76, 78-96, 108-111 regarding S120, detecting a position and an action of a user finger relative to the virtual keyboard. 
The position of the user finger relative to the virtual keyboard refers to a position where, when a fingertip of the finger is vertically projected on a plane where the virtual keyboard is located, a projection of the fingertip of the finger overlaps with the virtual keyboard… An area of overlap between the vertical projection of the fingertip of the finger on the plane where the virtual keyboard(virtual object) is located and a triggerable range of a certain virtual key(subobject) in the virtual keyboard is greater than a set threshold, and the user keeps the position of the finger relative to the virtual keyboard unchanged, and completes a click action in a direction close to the plane where the virtual keyboard is located, which will trigger the virtual key, that is, input a character corresponding to the virtual key…S130, outputting a vibration signal based on the position and the action of the user finger relative to the virtual keyboard, wherein the vibration signal is used for prompting a position of the user finger in the virtual keyboard and/or an operation action of the user finger on a virtual key in the virtual keyboard.]. Regarding claims 9 and 19, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Further on, Liu teaches or suggests wherein the virtual object comprises a plurality of subobjects, and the processor is further configured to / the control method further comprises: in response to determining to activate the input operation, mark / marking one of the subobjects closest to the reference point [See Liu: at least Figs. 1-6 and par. 40-53, 59-64, 66-76, 78-96, 108-111 regarding the method further comprises: outputting visual prompt information based on the position and the action of the user finger relative to the virtual keyboard, wherein the visual prompt information is used for prompting the position of the user finger in the virtual keyboard and/or the operation action of the user finger on the virtual key in the virtual keyboard. The visual prompt information helps to intuitively convey the position of the current user finger in the virtual keyboard and/or the operation action of the current user finger on the virtual key in the virtual keyboard to the user, in a visually perceptible manner in the case that the user observes the virtual keyboard… There are various specific implementations of “outputting visual prompt information based on the position and the action of the user finger relative to the virtual keyboard”, which are not limited in the present application. Exemplarily, the outputting visual prompt information based on the position and the action of the user finger relative to the virtual keyboard comprises at least one of: if the user finger hovers over the virtual key, setting a presentation state of the virtual key hovered as a first highlight state; if the user finger clicks the virtual key, setting a presentation state of the virtual key clicked as a second highlight state; or, if the user finger moves in a first direction, setting a presentation state of the virtual key passed in the process of the moving of the user finger as a third highlight state…In the present application, any two of the first highlight state, the second highlight state, and the third highlight state may have the same or different display effects, which is not limited in the present application…]. 10. Claims 2, 3, 12 and 13 are rejected under 35 U.S.C. 
103 as being unpatentable over LIU et al.(US 2024/0103625 A1)(hereinafter Liu) in view of Wu et al.(US 2019/0324539 A1)(hereinafter Wu) in further view of Kin(US 10,261,595 B1)(hereinafter Kin). Regarding claims 2 and 12, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Liu and Wu do not explicitly disclose wherein the operation of / wherein the step of calculating the posture comprises: calculating a plurality of keypoints of the hand in the real-time image; and generating the posture of the hand in a three-dimensional space based on the keypoints and a depth information in the real-time image. However, Kin teaches wherein the operation of / wherein the step of calculating the posture comprises: calculating a plurality of keypoints of the hand in the real-time image; and generating the posture of the hand in a three-dimensional space based on the keypoints and a depth information in the real-time image [See Kin: at least Figs. 1-10 and col. 7 line 28-col. 8 line 7, col. 19 line 19-55 regarding the tracking module 150 is used to track movement of the digits of the user's hands and the hands themselves in order to recognize various poses for the user's hand. Each pose indicates a position of a user's hand. By detecting a combination of multiple poses over time, the tracking module 150 is able to determine a gesture for the user's hand. the tracking module 150 uses a deep learning model to determine the poses of the user's hands. The neural network may take as input feature data extracted from raw data from the imaging device 135 of the hand, e.g., depth information of the user's hand, or data regarding the location of locators on any input device 140 worn on the user's hands. The neural network may output the most likely pose that the user's hands are in. Alternatively, the neural network may output an indication of the most likely positions of the joints of the user's hands. The joints are positions of the user's hand, and may correspond to the actual physical joints in the user's hand, as well as other points on the user's hand that may be needed to sufficiently reproduce the motion of the user's hand in a simulation. If the neural network outputs the positions of joints, the tracking module 150 additionally converts the joint data into a pose, e.g., using inverse kinematics principles...(Here, the joint positions of the hand are keypoints of the hand)]. Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu and Wu with Kin teachings by including “wherein the operation of / wherein the step of calculating the posture comprises: calculating a plurality of keypoints of the hand in the real-time image; and generating the posture of the hand in a three-dimensional space based on the keypoints and a depth information in the real-time image” because this combination has the benefit of providing a hand tracking method to accurately track positions of the user's fingers and thumbs and to track the precise movements of the user's digits and hand through space and time within the simulated environment[See Kin: at least col. 1 line 7- col. 2 line 2]. Regarding claims 3 and 13, Liu, Wu and Kin teach all of the limitations of claims 2 and 12, and are analyzed as previously discussed with respect to those claims. 
Regarding claims 3 and 13, Liu, Wu and Kin teach all of the limitations of claims 2 and 12, and are analyzed as previously discussed with respect to those claims. Further on, Kin teaches wherein the operation of / wherein the step of calculating the posture further comprises: selecting one of the keypoints as the reference point [See Kin: at least Figs. 1-10 and col. 7 line 28-col. 8 line 7, col. 19 line 19-55 regarding If the neural network outputs the positions of joints, the tracking module 150 additionally converts the joint data into a pose, e.g., using inverse kinematics principles. For example, the position of various joints of a user's hand, along with the natural and known restrictions (e.g., angular, length, etc.) of joint and bone positions of the user's hand allow the tracking module 150 to use inverse kinematics to determine a most likely pose of the user's hand based on the joint information. The pose data may also include an approximate structure of the user's hand, e.g., in the form of a skeleton, point mesh, or other format...].

11. Claims 6, 7, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over LIU et al. (US 2024/0103625 A1) (hereinafter Liu) in view of Wu et al. (US 2019/0324539 A1) (hereinafter Wu) in further view of O’LEARY et al. (US 2023/0109787 A1) (hereinafter O’Leary).

Regarding claims 6 and 16, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Further on, Liu teaches wherein at least one edge position of the virtual object comprises a virtual label [See Liu: at least Figs. 1-6 and par. 40-53, 59-64, 66-76, 78-96, 108-111 regarding the virtual keyboard comprises a plurality of virtual keys and a non-key area surrounding the virtual key, wherein the non-key area corresponds to a third vibration signal; and the output module 330 is configured to: output the third vibration signal if the user finger hovers over the non-key area…].

Liu and Wu do not explicitly disclose and when the first gesture is a double tap gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: determining whether the reference point is located on a space position of the virtual label; and in response to the reference point is located on the space position of the virtual label, generating the input event corresponding to the virtual object based on a displacement distance of the double tap gesture. However, O’Leary teaches and when the first gesture is a double tap gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: determining whether the reference point is located on a space position of the virtual label; and in response to the reference point is located on the space position of the virtual label, generating the input event corresponding to the virtual object based on a displacement distance of the double tap gesture [See O’Leary: at least Figs. 1A-3, 9G, par. 100, 103, 115, 125, 184, 467 regarding In one example, the definition for event 1 (186) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object.
The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190… At FIG. 9G, first electronic device 906a detects user input 950d (e.g., a tap gesture, a double tap gesture, a de-pinch gesture, and/or a long press gesture) at a location corresponding to sub-region 944 of table view region 940. In response to detecting user input 950d, first electronic device 906a causes second communication user interface 938a to modify (e.g., enlarge) display of table view region 940 and/or magnify an appearance of first representation 944a of surface 908b.].

Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu and Wu with O’Leary teachings by including “and when the first gesture is a double tap gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: determining whether the reference point is located on a space position of the virtual label; and in response to the reference point is located on the space position of the virtual label, generating the input event corresponding to the virtual object based on a displacement distance of the double tap gesture” because this combination has the benefit of providing an alternate double tap gesture operation to generate an input event to drag or zoom an object on the display.

Regarding claims 7 and 17, Liu, Wu and O’Leary teach all of the limitations of claims 6 and 16, and are analyzed as previously discussed with respect to those claims. Further on, O’Leary teaches wherein the input event comprises a zooming operation and a virtual object dragging operation [See O’Leary: at least Figs. 1A-3, 9G, par. 100, 103, 115, 125, 184, 467 regarding In one example, the definition for event 1 (186) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190… At FIG. 9G, first electronic device 906a detects user input 950d (e.g., a tap gesture, a double tap gesture, a de-pinch gesture, and/or a long press gesture) at a location corresponding to sub-region 944 of table view region 940. In response to detecting user input 950d, first electronic device 906a causes second communication user interface 938a to modify (e.g., enlarge) display of table view region 940 and/or magnify an appearance of first representation 944a of surface 908b.].
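Purely as an illustration of the kind of event definitions O’Leary is cited for (the thresholds and class structure below are assumptions, not O’Leary's implementation or the application's), the double tap versus drag distinction can be sketched as a small touch-event recognizer:

class TapDragRecognizer:
    """Classifies a completed touch sequence as 'tap', 'double_tap', or 'drag'."""

    def __init__(self, double_tap_window=0.4, drag_distance=0.02):
        self.double_tap_window = double_tap_window  # seconds allowed between lift-off and the next touch
        self.drag_distance = drag_distance          # movement that turns a touch into a drag
        self.last_tap_time = None
        self.down_pos = None
        self.max_move = 0.0

    def touch_begin(self, pos, t):
        self.down_pos = pos
        self.max_move = 0.0

    def touch_move(self, pos, t):
        dx, dy = pos[0] - self.down_pos[0], pos[1] - self.down_pos[1]
        self.max_move = max(self.max_move, (dx * dx + dy * dy) ** 0.5)

    def touch_end(self, pos, t):
        if self.max_move > self.drag_distance:
            self.last_tap_time = None
            return "drag"            # e.g. mapped to a virtual-object dragging operation
        if self.last_tap_time is not None and t - self.last_tap_time <= self.double_tap_window:
            self.last_tap_time = None
            return "double_tap"      # e.g. mapped to a zooming operation on the object
        self.last_tap_time = t
        return "tap"

In the claimed mapping, a double tap at the virtual label would then generate the input event for the virtual object based on the gesture's displacement distance.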
12. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over LIU et al. (US 2024/0103625 A1) (hereinafter Liu) in view of Wu et al. (US 2019/0324539 A1) (hereinafter Wu) in further view of Mahalingam et al. (US 2024/0070994 A1) (hereinafter Mahalingam).

Regarding claims 8 and 18, Liu and Wu teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Liu and Wu do not explicitly disclose wherein when the first gesture is a flick gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: moving the virtual object to an initial position; and adjusting a size of the virtual object. However, Mahalingam teaches wherein when the first gesture is a flick gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: moving the virtual object to an initial position; and adjusting a size of the virtual object [See Mahalingam: at least Figs. 4A-7B, par. 70-79, 103-108 regarding the gesture component categorizer 406 recognizes continuous movement gesture components composed of continuous movement temporal segments of the skeletal model data 422. Continuous movement temporal segments are temporal segments with definite movement gesture components and their derivatives recognized as additional features, such as a displacement of a hand or a velocity of a hand… An articulate start or stop movement is a type of continuous movement temporal segment with an abrupt beginning movement or an abrupt stop to a movement. An articulate start or stop's salient feature is a starting, or stopping movement that has an abrupt start or end where the acceleration is not uniform. For example, pointing at something with a definite halt that has a start of the movement that is arbitrary and vague, but has an end that is sharp. As another example, a flicking gesture is an example of the opposite (starting) movement where a start is definite, and an end is indefinite... FIG. 6A and FIG. 6B are illustrations of a user interaction with a virtual object during a reduction in size or zooming out from the virtual object in accordance with some examples. FIG. 7A and FIG. 7B are illustrations of a user interaction with a virtual object during an increase in size or zooming in to a virtual object in accordance with some examples.].

Therefore, it would have been obvious to one of ordinary skill in the art to modify Liu and Wu with Mahalingam teachings by including “wherein when the first gesture is a flick gesture, the operation of / the step of generating the input event corresponding to the virtual object further comprises: moving the virtual object to an initial position; and adjusting a size of the virtual object” because this combination has the benefit of providing an alternate flick gesture operation to generate an input event to move or resize an object on the display.
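As a loose sketch of how a flick (abrupt start, indefinite end, per the Mahalingam passage) might be picked out of inertial measurement samples captured over a time interval, and of the claimed response to it, the following uses illustrative thresholds and a hypothetical virtual-object record; none of this is code from Mahalingam or from the application:

import numpy as np

def looks_like_flick(accel_magnitudes, onset_threshold=8.0, settle_threshold=1.5):
    # A flick has a definite, high-acceleration onset and an indefinite tail:
    # require a sharp peak in the first quarter of the window and a low average
    # acceleration in the last quarter. Thresholds are illustrative only.
    samples = np.asarray(accel_magnitudes, dtype=float)
    if samples.size < 8:
        return False
    quarter = samples.size // 4
    return samples[:quarter].max() > onset_threshold and samples[-quarter:].mean() < settle_threshold

def apply_flick_event(virtual_object):
    # The claimed input event: move the object back to its initial position and
    # adjust (here, reset) its size. 'virtual_object' is a hypothetical dict.
    virtual_object["position"] = virtual_object["initial_position"]
    virtual_object["scale"] = virtual_object.get("initial_scale", 1.0)
    return virtual_object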
Conclusion

13. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO, whose telephone number is (571) 272-5252. The examiner can normally be reached Monday-Friday 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ana Picon-Feliciano/
Examiner, Art Unit 2482

/CHRISTOPHER S KELLEY/
Supervisory Patent Examiner, Art Unit 2482

Prosecution Timeline

Jun 27, 2024
Application Filed
Jul 12, 2025
Non-Final Rejection — §103
Sep 29, 2025
Response Filed
Jan 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
90%
With Interview (+21.8%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
