DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
2. Claims 2-6, 9-13, and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nguyen et al. (US PGPub. No. 2020/0097077).
Regarding Claim 2 (New),
Nguyen et al. teach
a method for providing a virtual experience ([0014], FIG. 2 & 6, i.e. method for displaying and modifying a visualization) to a user of an artificial-reality headset ([0053], FIG. 1, i.e. user 104 via a head mounted device 106), the method comprising:
detecting ([0055], FIG. 1, i.e. detect a gesture command performed by a user 104; note that Applicant defines “… ‘posture’ is a position or configuration of one or more parts of a user's body …” [0023]) a first posture ([0098], FIG. 8, i.e. head mounted device 106 may detect a gesture command 802 performed by the user 104) of a user wearing the artificial-reality headset (FIG. 8, i.e. as shown by the figure(s));
determining positions of one or more virtual objects ([0098], FIG. 8, i.e. virtual industrial automation device 806) within the virtual experience around the user while the user is in the first posture ([0098], FIG. 8, i.e. within the visualization 114 associated with the AR environment);
detecting a change in the user's posture ([0099], FIG. 9, i.e. detect a grasping gesture command 808);
in accordance with a determination that the change in posture of the user changes from the first posture to a second posture ([0098], [0099], FIG. 8 & 9, i.e. gesture command 802 to gesture command 808), the second posture different from the first posture ([0098], [0099], FIG. 8 & 9, i.e. 808 ≠ 802), adjusting the positions of the one or more virtual objects (FIG. 8 & 9, i.e. as shown by the figure(s) the position of 806 is adjusted) within the virtual experience (i.e. please see above citation(s)) such that each respective virtual object moves from a first position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is away from the hand performing 802) within the virtual experience to a second position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is in the hand performing 808) within the virtual experience around the user (i.e. please see above citation(s)).
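For orientation only, the mapping above can be summarized as the following minimal Python sketch. It is not part of the record; all names (adjust_positions, within_reach, the posture labels) are hypothetical and are not drawn from the claims or from Nguyen et al.

    # Illustrative sketch only; all names are hypothetical and are not drawn
    # from the claims or from Nguyen et al.
    def adjust_positions(posture_before, posture_after, positions, within_reach):
        """Move each virtual object to a second position when the detected
        posture changes from a first posture to a different second posture."""
        if posture_after != posture_before:        # e.g., 802 -> 808 in FIG. 8-9
            return {obj: within_reach(obj) for obj in positions}
        return dict(positions)                     # no change: keep first positions

    # Example: object "806" moves from a first position to a position in hand
    first_positions = {"806": (2.0, 0.0, 1.0)}
    second_positions = adjust_positions(
        "802", "808", first_positions, lambda obj: (0.4, 0.0, 1.2))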
Regarding Claim 3 (New),
Nguyen et al. teach
the method of claim 2, wherein each respective virtual object in the second position (i.e. please see above citation(s)) is within reach of the user ([0099], FIG. 9, i.e. grasping gesture) in the virtual experience (i.e. please see above citation(s)).
Regarding Claim 4 (New),
Nguyen et al. teach
the method of claim 2, wherein the first position of each respective virtual object (i.e. please see above citation(s)) is a first distance ([0106], FIG. 11, i.e. as shown by the figure(s) distance between 1102 and 1104) away with respect to the user and the second position of each respective virtual object (i.e. please see above citation(s)) is a second distance away ([0107], FIG. 12, i.e. as shown by the figure(s) distance between 1102 and 1104) with respect to the user (i.e. please see above citation(s)).
Regarding Claim 5 (New),
Nguyen et al. teach
the method of claim 4, wherein the first distance (i.e. please see above citation(s)) is larger than (FIG. 11 & 12, i.e. as shown by the figure(s) the distance between 1102 and 1104 in FIG. 11 is greater than that of FIG. 12) the second distance (i.e. please see above citation(s)).
Regarding Claim 6 (New),
Nguyen et al. teach
the method of claim 2, wherein the method (i.e. please see above citation(s)) further includes:
detecting another change in the user's posture ([0108], FIG. 12, i.e. after the user 104 has completed a pushing or pulling motion); and
in accordance with a determination that the change in posture of the user changes from the second posture to a third posture, the third posture different from the first posture and the second posture (FIG. 11 & 12, i.e. as shown by the figure(s) the first posture being before the push gesture in FIG. 11, the second posture being the push gesture on 1102 in FIG. 12, and the third posture being after the push gesture is completed), adjusting the positions of the one or more virtual objects within the artificial-reality headset such that each respective virtual object moves from the second position within the virtual experience ([0106], [0107], FIG. 11 & 12, i.e. as shown by the figure(s) 1102 before the push in FIG. 11, being pushed in FIG. 12) to a third position ([0108], FIG. 12, i.e. 1104 is moving after the user 104 has completed a pushing or pulling motion … apply the virtual force to … 1104 … to simulate a movement of the virtual industrial automation device 1104) within the virtual experience around the user (i.e. please see above citation(s)).
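As a further illustration of the second-to-third posture reading above, the following hedged sketch (hypothetical names; not part of the record) models the completed push/pull of [0108] as a displacement applied to each object's second position.

    # Illustrative sketch only; hypothetical names, not from the record.
    def apply_completed_push(second_positions, push_vector):
        """Simulate the virtual force of a completed push/pull (cf. [0108]) by
        displacing each object from its second position to a third position."""
        return {
            obj: tuple(p + d for p, d in zip(pos, push_vector))
            for obj, pos in second_positions.items()
        }

    # Example: virtual device "1104" displaced along the push direction
    third_positions = apply_completed_push({"1104": (0.4, 0.0, 1.2)}, (0.0, 0.0, 0.8))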
Regarding Claim 9 (New),
Nguyen et al. teach
the method of claim 2, wherein the method (i.e. please see above citation(s)) further includes:
before detecting the change in the user's posture ([0098], FIG. 8, i.e. as shown by the figure(s) before grasping gesture command 808) and while the user is in a first posture ([0098], FIG. 8, i.e. as shown by the figure(s) executing gesture command 802), displaying the virtual experience with a first point of view (FIG. 8, i.e. as shown by the figure(s)); and
after detecting the second posture ([0099], FIG. 9, i.e. grasping gesture command 808), displaying the virtual experience with a second point of view (FIG. 9, i.e. as shown by the figure(s)).
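For claim 9's point-of-view limitation, a minimal sketch (hypothetical names and parameter values; not part of the record) is a lookup from the detected posture to display parameters, one point of view per posture.

    # Illustrative sketch only; hypothetical names and values.
    POINTS_OF_VIEW = {
        "first_posture":  {"eye_height_m": 1.7, "fov_degrees": 90},  # cf. FIG. 8
        "second_posture": {"eye_height_m": 1.6, "fov_degrees": 70},  # cf. FIG. 9
    }

    def point_of_view_for(posture):
        """Return the display point of view for the detected posture."""
        return POINTS_OF_VIEW.get(posture, POINTS_OF_VIEW["first_posture"])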
Regarding Claim 10 (New),
Nguyen et al. teach
the method of claim 2, wherein the first posture and the second posture (i.e. please see above citation(s)) include at least a user standing up (FIG. 8-9 & 11-12, i.e. as shown by the figure(s) user is standing) or sitting down (i.e. alternative limitation(s) omitted).
Regarding Claim 11 (New),
Nguyen et al. teach
an artificial-reality headset configured to provide a virtual experience to a user ([0053], FIG. 1-2, i.e. head mounted device 106 of user 104 displays a visualization 114 that includes a virtual representation of an industrial automation device 102 (e.g., virtual industrial automation device) in an AR environment), comprising:
one or more processors ([0068], FIG. 1-2, i.e. processor 208); and
one or more programs ([0069], FIG. 1-2, i.e. processor-executable code, data), wherein the one or more programs are stored in memory ([0068], FIG. 1-2, i.e. memory 210) and configured to be executed by the one or more processors, the one or more programs including instructions ([0069], FIG. 1-2, i.e. executing computer-executable code) for:
detecting ([0055], FIG. 1, i.e. detect a gesture command performed by a user 104; note that Applicant defines “… ‘posture’ is a position or configuration of one or more parts of a user's body …” [0023]) a first posture ([0098], FIG. 8, i.e. head mounted device 106 may detect a gesture command 802 performed by the user 104) of a user wearing the artificial-reality headset (FIG. 8, i.e. as shown by the figure(s));
determining positions of one or more virtual objects ([0098], FIG. 8, i.e. virtual industrial automation device 806) within the virtual experience around the user while the user is in the first posture ([0098], FIG. 8, i.e. within the visualization 114 associated with the AR environment);
detecting a change in the user's posture ([0099], FIG. 9, i.e. detect a grasping gesture command 808); and
in accordance with a determination that the change in posture of the user changes from the first posture to a second posture ([0098], [0099], FIG. 8 & 9, i.e. gesture command 802 to gesture command 808), the second posture different from the first posture ([0098], [0099], FIG. 8 & 9, i.e. 808 ≠ 802), adjusting the positions of the one or more virtual objects (FIG. 8 & 9, i.e. as shown by the figure(s) the position of 806 is adjusted) within the virtual experience (i.e. please see above citation(s)) such that each respective virtual object moves from a first position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is away from the hand performing 802) within the virtual experience to a second position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is in the hand performing 808) within the virtual experience around the user (i.e. please see above citation(s)).
Regarding Claim 12 (New),
Nguyen et al. teach
the artificial-reality headset of claim 11, wherein the first position of each respective virtual object (i.e. please see above citation(s)) is a first distance ([0106], FIG. 11, i.e. as shown by the figure(s) distance between 1102 and 1104) away with respect to the user and the second position of each respective virtual object (i.e. please see above citation(s)) is a second distance away ([0107], FIG. 12, i.e. as shown by the figure(s) distance between 1102 and 1104) with respect to the user (i.e. please see above citation(s)).
Regarding Claim 13 (New),
Nguyen et al. teach
the artificial-reality headset of claim 11, wherein the one or more programs further include instructions (i.e. please see above citation(s)) for:
detecting another change in the user's posture ([0108], FIG. 12, i.e. after the user 104 has completed a pushing or pulling motion); and
in accordance with a determination that the change in posture of the user changes from the second posture to a third posture, the third posture different from the first posture and the second posture (FIG. 11 & 12, i.e. as shown by the figure(s) the first posture being before the push gesture in FIG. 11, the second posture being the push gesture on 1102 in FIG. 12, and the third posture being after the push gesture is completed), adjusting the positions of the one or more virtual objects within the artificial-reality headset such that each respective virtual object moves from the second position within the virtual experience ([0106], [0107], FIG. 11 & 12, i.e. as shown by the figure(s) 1102 before the push in FIG. 11, being pushed in FIG. 12) to a third position ([0108], FIG. 12, i.e. 1104 is moving after the user 104 has completed a pushing or pulling motion … apply the virtual force to … 1104 … to simulate a movement of the virtual industrial automation device 1104) within the virtual experience around the user (i.e. please see above citation(s)).
Regarding Claim 16 (New),
Nguyen et al. teach
the artificial-reality headset of claim 11, wherein the one or more programs further include instructions (i.e. please see above citation(s)) for:
before detecting the change in the user's posture ([0098], FIG. 8, i.e. as shown by the figure(s) before grasping gesture command 808) and while the user is in a first posture ([0098], FIG. 8, i.e. as shown by the figure(s) executing gesture command 802), displaying the virtual experience with a first point of view (FIG. 8, i.e. as shown by the figure(s)); and
after detecting the second posture ([0099], FIG. 9, i.e. grasping gesture command 808), displaying the virtual experience with a second point of view (FIG. 9, i.e. as shown by the figure(s)).
Regarding Claim 17 (New),
Nguyen et al. teach
a non-transitory computer readable storage medium ([0069], FIG. 1-2, i.e. memory 210 may store non-transitory processor-executable code) including instructions ([0069], FIG. 1-2, i.e. processor-executable code, data) that, when executed by a computing device ([0068], FIG. 1-2, i.e. processor 208), cause the computing device (i.e. please see above citation(s)) to perform operations including:
detecting ([0055], FIG. 1, i.e. detect a gesture command performed by a user 104; note that Applicant defines “… ‘posture’ is a position or configuration of one or more parts of a user's body …” [0023]) a first posture ([0098], FIG. 8, i.e. head mounted device 106 may detect a gesture command 802 performed by the user 104) of a user wearing the artificial-reality headset (FIG. 8, i.e. as shown by the figure(s));
determining positions of one or more virtual objects ([0098], FIG. 8, i.e. virtual industrial automation device 806) within the virtual experience around the user while the user is in the first posture ([0098], FIG. 8, i.e. within the visualization 114 associated with the AR environment);
detecting a change in the user's posture ([0099], FIG. 9, i.e. detect a grasping gesture command 808); and
in accordance with a determination that the change in posture of the user changes from the first posture to a second posture ([0098], [0099], FIG. 8 & 9, i.e. gesture command 802 to gesture command 808), the second posture different from the first posture ([0098], [0099], FIG. 8 & 9, i.e. 808 ≠ 802), adjusting the positions of the one or more virtual objects (FIG. 8 & 9, i.e. as shown by the figure(s) the position of 806 is adjusted) within the virtual experience (i.e. please see above citation(s)) such that each respective virtual object moves from a first position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is away from the hand performing 802) within the virtual experience to a second position ([0098], [0099], FIG. 8 & 9, i.e. as shown by the figure(s) 806 is in the hand performing 808) within the virtual experience around the user (i.e. please see above citation(s)).
Regarding Claim 18 (New),
Nguyen et al. teach
the non-transitory computer readable storage medium of claim 17, further including instructions that cause the computing device (i.e. please see above citation(s)) to perform operations including:
detecting another change in the user's posture ([0108], FIG. 12, i.e. after the user 104 has completed a pushing or pulling motion); and
in accordance with a determination that the change in posture of the user changes from the second posture to a third posture, the third posture different from the first posture and the second posture (FIG. 11 & 12, i.e. as shown by the figure(s) the first posture being before the push gesture in FIG. 11, the second posture being the push gesture on 1102 in FIG. 12, and the third posture being after the push gesture is completed), adjusting the positions of the one or more virtual objects within the artificial-reality headset such that each respective virtual object moves from the second position within the virtual experience ([0106], [0107], FIG. 11 & 12, i.e. as shown by the figure(s) 1102 before the push in FIG. 11, being pushed in FIG. 12) to a third position ([0108], FIG. 12, i.e. 1104 is moving after the user 104 has completed a pushing or pulling motion … apply the virtual force to … 1104 … to simulate a movement of the virtual industrial automation device 1104) within the virtual experience around the user (i.e. please see above citation(s)).
Response to Argument/Amendment
3. Applicants' Preliminary Amendment dated 08/25/2025 has been entered and made of record. Claim 1 is cancelled, and Claims 2-21 are new. Thus, Claims 2-21 are pending in this application.
Allowable Subject Matter
4. Claims 7-8, 14-15, and 19-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
5. The following is an examiner’s statement of reasons for allowance:
Nguyen et al. (US PGPub. No. 2020/0097077) teach a method that may include receiving, via a processor, image data associated with a user's surroundings and generating, via the processor, a visualization that may include a virtual industrial automation device. The virtual industrial automation device may depict a virtual object within the image data, and the virtual object may correspond to a physical industrial automation device. The method may include displaying, via the processor, the visualization via an electronic display and detecting, via the processor, a gesture in image data that may include the user's surroundings and the visualization. The gesture may be indicative of a request to move the virtual industrial automation device. The method may include tracking, via the processor, a user's movement, generating, via the processor, a visualization that may include an animation of the virtual industrial automation device moving based on the user's movement, and displaying, via the processor, the visualization via the electronic display.
Caron et al. (US Patent No. 10,852,814) teach a head-mounted display system including a head-mounted display, an imaging sensor, and a processor. The processor may convey instructions to the head-mounted display to display a virtual object at a world-locked location in a physical environment. The processor may receive imaging data of the physical environment from the imaging sensor. The processor may determine, based on the imaging data, that a world-space distance between a hand of a user and the world-locked location is below a predetermined distance threshold. The processor may convey instructions to the head-mounted display to display a bounding virtual object that covers at least a portion of the virtual object. The processor may detect, based on the imaging data, a change in the world-space distance. The processor may convey instructions to the head-mounted display to modify a visual appearance of the bounding virtual object based on the change in the world-space distance.
The following subject matter of the claims could neither be found or suggested in, nor rendered obvious by, the prior art of record. The subject matter is a device/method including
“…detecting another change in the user's posture; and
in accordance with a determination that the change in posture of the user changes from the second posture to a third posture, the third posture different from the first posture and the second posture, pausing the virtual experience and generating an indication to the user to return to the first posture or the second posture.” (Claim 7; Claims 14 and 19 are similar), in combination with the other elements (or steps) of the device or apparatus and method recited in the claims.
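For orientation only, the distinguishing limitation can be paraphrased as the following hedged sketch (hypothetical names and posture labels; not drawn from the claims or the cited art): on a change to a third posture, the experience is paused and the user is prompted to return to a prior posture.

    # Illustrative sketch only; hypothetical names, not from the claims or the
    # cited art.
    class Experience:
        def __init__(self):
            self.paused = False

    def on_posture_change(new_posture, experience):
        """Pause and prompt when the posture changes to a third posture that
        differs from both the first posture and the second posture."""
        if new_posture not in ("first_posture", "second_posture"):
            experience.paused = True
            return "Please return to the first posture or the second posture."
        experience.paused = False
        return None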
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINH TANG LAM whose telephone number is (571) 270-3704. The examiner can normally be reached Monday to Friday 8:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin K Patel can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VINH T LAM/Primary Examiner, Art Unit 2628