DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is responsive to the application filed on October 14, 2024.
Claims 2–21 are pending and are being considered on the merits.
Drawings
The drawings, filed on October 14, 2024, are accepted.
Specification
The specification, filed on October 14, 2024, is accepted.
Double Patenting
No double patenting rejection is warranted at this time.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2–21 are rejected under 35 U.S.C. 103 as being unpatentable over US 20130342564 A1 to Kinnebrew et al. (hereinafter "Kinnebrew") in view of US 20200160607 A1 to Kjallstrom et al. (hereinafter "Kjallstrom").
Regarding claim 2, Kinnebrew teaches a computer-implemented method for creating shared virtual spaces, the method comprising: providing a shared virtual space capable of being viewed by multiple users; [Kinnebrew, para. 2 discloses a configured virtual environment comprising a set of virtual objects defined with respect to a source physical environment. The configured environments may be rendered in a head mounted display system which may comprise a see-through head mounted display system. Each configured environment may be selected by a user and loaded for rendering to the user in the display system. Configured environments based on a source environment may be rendered for a user in the source environment or in a different rendering environment. Configured environments may be associated with users, environments or locations. This association allows control over the rendering of configured environments to users as virtual objects may comprise manifestations of physical features of an environment.] mapping a physical object in a physical environment to create a representation of the physical object at a first position in the shared virtual space; [Kinnebrew, para. 37 discloses a 3D mapping application may be executing on the one or more computer systems 12 and a user's personal A/V apparatus 5. In some embodiments, the application instances may perform in a master and client role in which a client copy is executing on the personal A/V apparatus 5 and performs 3D mapping of its display field of view, receives updates of the 3D mapping from the computer system(s) 12 including updates of objects in its view from the master 3D mapping application and sends image data, and depth and object identification data, if available, back to the master copy.] modifying the first position of the representation of the physical object in the shared virtual space; [Kinnebrew, para. 68 discloses each virtual object may be rendered through an understanding of a description of the object used by the display device to render the object and interactions between the object and other real and virtual objects. In order to learn new functions and inputs, the description of the object may be modified to reflect the new inputs and functions.] But Kinnebrew does not teach anchoring a virtual object to an anchor point associated with the representation of the physical object, the anchor point having a second position in the shared virtual space; displaying, to the multiple users, the virtual object relative to the second position of the anchor point; modifying, based on the modified first position of the representation of the physical object, the second position of the anchor point in the shared virtual space; and displaying, to the multiple users, the virtual object relative to the modified second position of the anchor point, such that a relative position of the anchor point and the virtual object remains consistent in the shared virtual space.
However, Kjallstrom does teach anchoring a virtual object to an anchor point associated with the representation of the physical object, the anchor point having a second position in the shared virtual space; [Kjallstrom, para. 4 discloses identifying a location on the second instance of the object corresponding to the point on the physical object; attaching an anchor to the identified location on the second instance of the object corresponding to the point on the physical object;] displaying, to the multiple users, the virtual object relative to the second position of the anchor point; [Kjallstrom, para. 26 discloses the system may receive 75 digital information from the user about the object (or the point on the object which may be an element of the object (for example, a bolt or a crack in the surface)). The system may generate 80 an anchor (for example, a stored coordinate and icon associated with the coordinate) for the point on the object. The system may associate 85 the digital information with the anchor point. An icon may be displayed 90 on the anchor point which can be selected by the user. Selection of the icon may generate the stored digital information as a digital note to the user.] modifying, based on the modified first position of the representation of the physical object, the second position of the anchor point in the shared virtual space; [Kjallstrom, para. 35 discloses the markers 280, 285, and 295 may be sub-digital notes which may include their own respective embedded information anchored to the points a user attaches them to. The manager modified version of the digital note may have its information automatically synchronized back to a file associated with the physical engine 170 in the real-world environment as shown in FIG. 7.] and displaying, to the multiple users, the virtual object relative to the modified second position of the anchor point, such that a relative position of the anchor point and the virtual object remains consistent in the shared virtual space. [Kjallstrom, para. 35 discloses the worker puts on his wearable device and sees the synchronized digital note superimposed on top of the physical engine in an AR environment. When the user selects the digital note 150, the modified information may be retrieved and becomes visible (or audible in the case of audio files) as a run-time file is executed. The note is automatically synchronized which includes the initial diagnosis from the worker, along with detailed step-by-step instructions from the manager. The markers 280, 285, and 295 may become visible to the user in the real-world as AR markers 180, 185, and 190. With this information at hand, the worker fixes the noise issue with ease. In such a way, the manager may communicate a task along with instructions in how to proceed to the technician.]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kjallstrom's system with Kinnebrew's system, with a motivation to facilitate virtual training, employee on-boarding, knowledge transfer, and remote assistance: embodiments of the subject technology provide a training environment where a user (for example, U1) donning the device 100 may be trained virtually on how to operate a piece of machinery by a user U2 operating the device 200. As will be shown, in some embodiments, the user is working on a physical piece of equipment (for example, a machine) 110 (sometimes referred to as "machine 110" in the context of the disclosure) or in a physical environment. [Kjallstrom, para. 24]
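By way of illustration only, and forming no part of the prosecution record, the anchoring arrangement recited in claim 2 may be summarized in the following Python sketch; all class, attribute, and function names are hypothetical and are not drawn from Kinnebrew or Kjallstrom:

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

@dataclass
class PhysicalRepresentation:
    """Mapped stand-in for a physical object ("first position" in claim 2)."""
    position: Vec3

@dataclass
class AnchorPoint:
    """Anchor associated with the representation; its "second position"
    is defined as a fixed offset from the representation's position."""
    parent: PhysicalRepresentation
    offset: Vec3

    @property
    def position(self) -> Vec3:
        return add(self.parent.position, self.offset)

@dataclass
class VirtualObject:
    """Virtual object displayed relative to the anchor point."""
    anchor: AnchorPoint
    offset: Vec3

    @property
    def display_position(self) -> Vec3:
        return add(self.anchor.position, self.offset)

# Modifying the representation's first position (e.g., after the physical
# object moves) modifies the anchor's second position, and the virtual
# object follows, so the anchor/object relative position stays consistent.
rep = PhysicalRepresentation(position=(0.0, 0.0, 0.0))
anchor = AnchorPoint(parent=rep, offset=(0.5, 0.0, 0.0))
note = VirtualObject(anchor=anchor, offset=(0.0, 0.2, 0.0))
rep.position = (1.0, 0.0, 0.0)                    # modified first position
assert note.display_position == (1.5, 0.2, 0.0)   # follows the moved anchor
```

Under this sketch, the virtual object is always rendered relative to the anchor, which tracks the representation, matching the consistency limitation at the end of claim 2.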
As per claim 3, modified Kinnebrew teaches the computer-implemented method of claim 2, wherein the first position of the representation of the physical object is modified based on a change in position of the physical object in the physical environment. [Kinnebrew, para. 68 discloses each virtual object may be rendered through an understanding of a description of the object used by the display device to render the object and interactions between the object and other real and virtual objects. In order to learn new functions and inputs, the description of the object may be modified to reflect the new inputs and functions. In order to make the interaction with the objects as natural for humans as possible, a multitude of inputs may be used to provide input data creating the input actions which drive the functions of a virtual object.]
As per claim 4, modified Kinnebrew teaches the computer-implemented method of claim 3, wherein the change in position of the physical object in the physical environment is determined based on a scan of the physical environment. [Kinnebrew, para. 89 discloses virtual object rendering engine 525 renders each instance of a three dimensional holographic virtual object within the display of a display device 2. Object rendering engine 528 works in conjunction with object tracking engine 524 to track the positions of virtual objects within the display. The virtual objects rendering engine 525 uses the object definition contained within the local object store as well as the instance of the object created in the processing engine 520 and the definition of the objects visual and physical parameters to render the object within the device.]
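As a further illustration only (hypothetical names, no part of the record), claims 3 and 4 amount to updating the representation's first position from a rescan of the physical environment, along the lines of:

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class PhysicalRepresentation:
    position: Vec3  # "first position" of the mapped physical object

def update_from_scan(rep: PhysicalRepresentation,
                     scanned_position: Vec3,
                     threshold: float = 0.01) -> bool:
    """If a rescan of the physical environment reports the object at a new
    position, move the representation; any anchor points parented to the
    representation (as in the sketch above) follow automatically."""
    moved = any(abs(s - r) > threshold
                for s, r in zip(scanned_position, rep.position))
    if moved:
        rep.position = scanned_position
    return moved

rep = PhysicalRepresentation(position=(0.0, 0.0, 0.0))
assert update_from_scan(rep, (0.25, 0.0, 0.0))   # scan shows the object moved
assert rep.position == (0.25, 0.0, 0.0)          # modified first position
```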
As per claim 5, modified Kinnebrew teaches the computer-implemented method of claim 2, further comprising: displaying, in the shared virtual space, the representation of the physical object at the modified first position. [Kinnebrew, para. 39 discloses the shared data in some examples may be referenced with respect to a common coordinate system for the environment. In other examples, one head mounted display (HMD) system 10 may receive data from another HMD system 10 including image data or data derived from image data, position data for the sending HMD, e.g. GPS or IR data giving a relative position, and orientation data. An example of data shared between the HMDs is depth map data including image data and depth data captured by its front facing capture devices 113, object identification data, and occlusion volumes for real objects in the depth map.]
Regarding claim 6, modified Kinnebrew teaches the computer-implemented method of claim 2, but Kinnebrew does not teach wherein the representation of the physical object comprises a second virtual object.
However, Kjallstrom does teach wherein the representation of the physical object comprises a second virtual object. [Kjallstrom, para. 4 discloses identifying a location on the second instance of the object corresponding to the point on the physical object; attaching an anchor to the identified location on the second instance of the object corresponding to the point on the physical object;]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kjallstrom's system with Kinnebrew's system, with a motivation to facilitate virtual training, employee on-boarding, knowledge transfer, and remote assistance: embodiments of the subject technology provide a training environment where a user (for example, U1) donning the device 100 may be trained virtually on how to operate a piece of machinery by a user U2 operating the device 200. As will be shown, in some embodiments, the user is working on a physical piece of equipment (for example, a machine) 110 (sometimes referred to as "machine 110" in the context of the disclosure) or in a physical environment. [Kjallstrom, para. 24]
As per claim 7, modified Kinnebrew teaches the computer-implemented method of claim 6, wherein the second virtual object is displayed in the shared virtual space. [Kinnebrew, para. 25 discloses each configured virtual environment may include a set of virtual objects defined with respect to a source physical environment. The configured environments may be rendered in a head mounted display system which may comprise a see-through, near eye head mounted display system.]
Regarding claim 8, modified Kinnebrew teaches the computer-implemented method of claim 2, but Kinnebrew does not teach wherein the virtual space is configured to be viewed by at least a portion of the multiple users from the same physical location.
However, Kjallstrom does teach wherein the virtual space is configured to be viewed by at least a portion of the multiple users from the same physical location. [Kjallstrom, para. 35 discloses the worker puts on his wearable device and sees the synchronized digital note superimposed on top of the physical engine in an AR environment. When the user selects the digital note 150, the modified information may be retrieved and becomes visible (or audible in the case of audio files) as a run-time file is executed. The note is automatically synchronized which includes the initial diagnosis from the worker, along with detailed step-by-step instructions from the manager. The markers 280, 285, and 295 may become visible to the user in the real-world as AR markers 180, 185, and 190. With this information at hand, the worker fixes the noise issue with ease. In such a way, the manager may communicate a task along with instructions in how to proceed to the technician.]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kjallstrom's system with Kinnebrew's system, with a motivation to facilitate virtual training, employee on-boarding, knowledge transfer, and remote assistance: embodiments of the subject technology provide a training environment where a user (for example, U1) donning the device 100 may be trained virtually on how to operate a piece of machinery by a user U2 operating the device 200. As will be shown, in some embodiments, the user is working on a physical piece of equipment (for example, a machine) 110 (sometimes referred to as "machine 110" in the context of the disclosure) or in a physical environment. [Kjallstrom, para. 24]
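Purely for illustration (hypothetical names, no part of the record), claim 8's limitation that the shared virtual space be viewable by multiple users, including co-located users, corresponds to broadcasting each pose update to every connected viewer, e.g.:

```python
from typing import Callable

Vec3 = tuple[float, float, float]

class SharedSpaceSession:
    """Several viewers (co-located in the same room or remote) subscribe to
    one shared space; every pose update is pushed to all of them, so each
    user sees the virtual object at the same anchor-relative position."""

    def __init__(self) -> None:
        self._viewers: list[Callable[[Vec3], None]] = []

    def join(self, on_update: Callable[[Vec3], None]) -> None:
        self._viewers.append(on_update)

    def broadcast(self, position: Vec3) -> None:
        for notify in self._viewers:
            notify(position)

session = SharedSpaceSession()
seen: list[Vec3] = []
session.join(seen.append)   # e.g., headset 1 in the room
session.join(seen.append)   # e.g., headset 2 in the same room
session.broadcast((1.5, 0.2, 0.0))
assert seen == [(1.5, 0.2, 0.0)] * 2   # both users see the same position
```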
Regarding claims 9–15, they recite features similar to those of claims 2–8 and are therefore rejected in a similar manner, as set forth above.
Regarding claims 16–21, they recite features similar to those of claims 2–7 and are therefore rejected in a similar manner, as set forth above.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20190310757 A1 to Lee et al.
“Disclosed herein are system, method, and computer program product embodiments for providing a local scene recreation of an augmented reality meeting space to a mobile device, laptop computer, or other computing device. By decoupling the augmented reality meeting space from virtual reality headsets, the user-base expands to include users that could otherwise not participate in the collaborative augmented reality meeting spaces. Users participating on mobile devices and laptops may choose between multiple modes of interaction including an auto-switch view and manual views as well as interacting with the augmented reality meeting space by installing an augmented reality toolkit. Users may deploy and interact with various forms of avatars representing other users in the augmented reality meeting space.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Phuc Pham whose telephone number is (571)272-8893. The examiner can normally be reached Monday - Thursday 7:30 AM - 4:30 PM; Friday 8:00 AM - 12:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards can be reached at (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/P.P./Patent Examiner, Art Unit 2408
/LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408