DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/17/2025 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 12 recites the limitation "adjusting the dimensions" in line 10. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitation "where the step of changing the displayed position or rendering order" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim. It should be noted that the above limitation implies that a step of changing the displayed position or rendering order is recited in claim 12; however, that limitation was deleted from amended claim 12.
Claims 14-16 are rejected based on their dependency from rejected claims 12 and 13.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Maciocci et al. (US 2012/0249741, hereinafter Maciocci), in view of Aronzon et al. (US 2013/0065692, hereinafter Aronzon), in view of Light et al. (US 8416083, hereinafter Light), in view of Hosenpud et al. (US 2014/0300547, hereinafter Hosenpud), and further in view of Sheaffer et al. (US 2015/0070388, hereinafter Sheaffer).
Regarding claim 7, Maciocci teaches a method of displaying images on a display of an immersive display (abstract: A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users), the method comprising:
(b) dimensions within the first participant's view of the virtual multi-user event ([0086]: the head mounted devices 10 may transmit to each other three-dimensional virtual object models and/or data sets for rendering on their respective displays);
determining at least one object (virtual object 14a, 14b fig. 2, fig. 3) at which the first participant is looking (based on the line of sight of the user, the virtual object (virtual object 14a anchored to the surface of a table or a virtual table) that the user is viewing is determined; [0080]: For example, the processor may detect that a user is pointing to a particular surface. The processor may detect the surface and determine an angle of the surface with respect to the line of sight of the user, and anchor the virtual object 14 on the particular surface where the user pointed with an orientation and perspective consistent with the determined angle of the surface; [0085]: a first user may view a virtual object 14 connected to a wall on a head mounted display; [0165]: In an embodiment, the virtual object may be anchored on a virtual physical surface, such as, for example, a virtual table, which may appear within the rendered image);
determining whether the at least one object (virtual object 14a, 14b fig. 2, fig. 3) at which the first participant is looking is obstructed by the one or more obscuring virtual objects (real objects such as a hand or body part occluding virtual objects; fig. 7 step 708: recognizing objects in the image; [0031]: a head mounted display output showing a virtual object with a user's hands and with other individual's hands occluding the virtual object; [0190] and fig. 13 step 1306: the processor may determine whether any objects are recognized within the images. For example, an anatomical detection algorithm may be applied to detect images and data to determine if a body part is detected in the image. For example, an anatomical or a skeletal detection algorithm may be applied to the captured images to detect a body part, such as, an arm or a hand; fig. 14 and [0194]: objects (such as another individual's body parts) that are made invisible by the image of the virtual object are functionally equivalent to objects that obstruct the virtual environment; as shown in fig. 4, the user's hands (real object) are recognized as objects interfering with the virtual object 14b anchored in the virtual environment (virtual table, [0165]); [0194]: the processor may render a virtual object as partially transparent in places where a user's feature (e.g., hands and arms) occludes the virtual object and as nontransparent where the user's feature does not occlude the virtual object; [0202]: The user's hands will appear over and will occlude the virtual object 14; [0254]: FIG. 27 illustrates an embodiment method 2700 where a user's hands may be occluded over a virtual object by tracking images and applying an anatomical model to the tracked images to detect body parts contained in the image; [0257]: In block 2804, the processor may identify the user's hands and a second individual's hands. The processor may render the image in block 2805 on an anchor surface contained in the image, for example, on a desktop or on a wall. At determination block 2806, the processor may determine whether to superimpose the virtual object over a body part. If the processor determines to superimpose the virtual object over a body part (i.e., determination block 2806="Yes"), in block 2807, the processor may render the second individual's hands transparent or absent over the virtual object, or when the body part occludes the virtual object, the virtual object may be displayed over a top surface of the body part so the body part appears to be transparent. In another embodiment, an outline of the body part may still remain visible with virtual object superimposed over a body part surface. In block 2808, the displayed image of the virtual object may be updated by the processor to account for changes in the content and movement of the body part. If the processor determines not to superimpose the virtual object over a body part (i.e., determination block 2806="No"), the processor will display the virtual object in block 2809 and update the virtual object for movement of the user; [0136]: In determination block 704, the processor may determine whether the position of the user or the anchor surface has changed. For example, the user may anchor the image on a surface such as a wall or in free space and the user may walk away from the wall during collaboration with another user, thus changing position.
If the position has changed (i.e., determination block 704="Yes") which indicates the user has moved away from the anchored virtual object or the anchor surface has moved, the processor may determine and calculate a change of the anchored virtual object based on the new position in block 705);
in response to determining that the at least one object at which the first participant is looking is obstructed by the one or more obscuring virtual objects, adjusting the virtual reality space as displayed for the first participant (rendering the virtual object in front of the obstructing object; [0191]: In an embodiment, if the unnecessary object is in front of a wall or table, a virtual object may be generated that resembles the wall and/or table and superimpose the virtual wall/table over the unnecessary object; [0194]: The adjustment to the display may involve rendering the virtual object in front of, behind or blended with a recognized body part. For example, the processor may render a virtual object as partially transparent in places where a user's feature (e.g., hands and arms) occludes the virtual object and as nontransparent where the user's feature does not occlude the virtual object).
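For illustration of the occlusion handling Maciocci describes (detecting an object on the viewer's line of sight to a viewed virtual object and rendering the occluded object partially transparent, cf. [0194]), a minimal sketch follows; the ray-sphere test and all identifiers are hypothetical assumptions offered only as a sketch of the described mechanism, not as Maciocci's implementation:

import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    # True if the gaze ray (unit direction) passes within radius of the sphere center.
    oc = center - origin
    t = np.dot(oc, direction)            # distance along the ray to the closest point
    if t < 0:
        return False                     # the sphere lies behind the viewer
    closest = oc - t * direction         # perpendicular offset from the ray
    return np.dot(closest, closest) <= radius ** 2

def alpha_for_viewed_object(eye, object_center, obstacles, base_alpha=1.0):
    # Render the viewed virtual object partially transparent when any recognized
    # obstacle (e.g., a detected hand, modeled here as a bounding sphere) sits on
    # the gaze ray toward it; otherwise render it nontransparent.
    gaze = object_center - eye
    gaze = gaze / np.linalg.norm(gaze)
    occluded = any(ray_hits_sphere(eye, gaze, c, r) for c, r in obstacles)
    return 0.3 * base_alpha if occluded else base_alpha  # 0.3: partial transparency

For example, alpha_for_viewed_object(np.zeros(3), np.array([0.0, 0.0, 2.0]), [(np.array([0.0, 0.0, 1.0]), 0.2)]) returns a reduced alpha because the obstacle sphere sits between the eye and the viewed object.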
Maciocci does not explicitly teach receiving virtual reality location data from a first participant in a virtual multi-user event, where the virtual reality location data comprises one or more of: (a) a location of the first participant within the virtual multi-user event; (c) a view direction in which the first participant is looking within the virtual multi-user event; (d) a location of one or more objects within the virtual multi-user event; and (e) a location of a second participant relative to the first participant; adjusting one or more of the dimensions of the virtual reality space for the first participant as used for display on the immersive display in a manner that causes the one or more obscuring virtual objects to no longer be in the line of sight of the at least one object; and the object obstructing the virtual object is a virtual object.
Aronzon teaches receiving virtual reality location data from a first participant in a virtual multi-user event ([0046]: the gaming goggles 708 and 748 can allow the gaming server 730 to determine the physical and virtual field of view of the player; [0062]: the gaming server 730 can capture first position and orientation information for the first player 814 at the first physical location 810 and the second position and orientation information for the second player 854 at the second physical location 850 … The gaming server 730 can further collect information based on other sensory devices, such as gyroscopes, compasses, accelerometers, level detectors, diode arrays, infrared detectors and/or antennas, to detect position and orientation information for the first player 814 as this player moves about in the first gaming space 810. Similarly, the gaming server 730 can identify presence, position, and orientation information for a second player 854 who has entered the second gaming space 850; [0063]: the gaming server 730 can map the first position and orientation information for the first player and the second position and orientation information for the second player to the virtual gaming space to generate a first virtual player corresponding to the first player and a second virtual player corresponding to the second player. Since the first physical location 810 can be mapped to a virtual gaming space, the gaming server 730 can map the first player's position and orientation information on to the virtual gaming space. Similarly, the gaming server 730 can map the position and orientation information for the second player, who is physically located at the second physical location 850, onto the virtual gaming space; [0066]: the gaming server 730 can identify a computer-controlled player that is being commanded by a software application at a computer device 782, mobile device 790, or gaming controller 784. If a computer-controlled player is identified, then, in step 1108, the gaming server 730 can map position and orientation from the computer-controlled player to the virtual gaming space to generate a virtual computer-controlled player 834 and 874; [0067]: the gaming server 730 can detect any virtual objects 828 and 868 that would virtually obstruct at least a part of a view of one of the virtual players from a perspective of another of the virtual players; [0070]: the gaming server 730 can transmit to the goggles 708 of a physical player 814, such as first physical player 814 at a first physical location 810, information representative of a virtual player 824, such as the virtual player 824 representing the second player, who is physically present at the second location 850 but only virtually present at the first location 810; [0074]: The gaming server 730 can capture and track position and orientation information for the first and second players 934 and 938 at the physical location 910), where the virtual reality location data comprises one or more of:
(a) a location of the first participant within the virtual multi-user event ([0062]: the gaming server 730 can capture first position and orientation information for the first player 814 at the first physical location 810 and the second position and orientation information for the second player 854 at the second physical location 850; [0063]: the gaming server 730 can map the first position and orientation information for the first player and the second position and orientation information for the second player to the virtual gaming space to generate a first virtual player corresponding to the first player and a second virtual player corresponding to the second player. Since the first physical location 810 can be mapped to a virtual gaming space, the gaming server 730 can map the first player's position and orientation information on to the virtual gaming space. Similarly, the gaming server 730 can map the position and orientation information for the second player, who is physically located at the second physical location 850, onto the virtual gaming space);
(c) a view direction in which the first participant is looking within the virtual multi-user event ([0046]: the gaming goggles 708 and 748 can allow the gaming server 730 to determine the physical and virtual field of view of the player; [0063]: Since the first physical location 810 can be mapped to a virtual gaming space, the gaming server 730 can map the first player's position and orientation information on to the virtual gaming space; [0067]: virtual objects in line of sight of the players wearing goggles);
(d) a location of one or more objects within the virtual multi-user event ([0066]: the gaming server 730 can identify a computer-controlled player that is being commanded by a software application at a computer device 782, mobile device 790, or gaming controller 784. If a computer-controlled player is identified, then, in step 1108, the gaming server 730 can map position and orientation from the computer-controlled player to the virtual gaming space to generate a virtual computer-controlled player 834 and 874; [0067]: the gaming server 730 can detect any virtual objects 828 and 868 that would virtually obstruct at least a part of a view of one of the virtual players from a perspective of another of the virtual players); and
(e) a location of a second participant ([0062]: the gaming server 730 can capture first position and orientation information for the first player 814 at the first physical location 810 and the second position and orientation information for the second player 854 at the second physical location 850; [0063]: the gaming server 730 can map the first position and orientation information for the first player and the second position and orientation information for the second player to the virtual gaming space to generate a first virtual player corresponding to the first player and a second virtual player corresponding to the second player. Since the first physical location 810 can be mapped to a virtual gaming space, the gaming server 730 can map the first player's position and orientation information on to the virtual gaming space. Similarly, the gaming server 730 can map the position and orientation information for the second player, who is physically located at the second physical location 850, onto the virtual gaming space). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Aronzon's knowledge as described and modify the system of Maciocci because, with such a system, players can experience a virtual presence of each player at each location at the same time, thereby enhancing the user experience.
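For illustration of Aronzon's mapping of per-location position and orientation information into a shared virtual gaming space, and of Light's location of a second participant relative to the first, a minimal sketch follows; the calibration transform and all names are hypothetical assumptions, not the references' disclosed implementations:

from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray   # (x, y, z) in the local physical space, meters
    yaw: float             # heading about the vertical axis, radians

def map_to_virtual(pose: Pose, space_origin: np.ndarray, space_yaw: float) -> Pose:
    # Apply the physical space's calibration transform (rotation about the
    # vertical axis plus an origin offset) to place a tracked player in the
    # shared virtual gaming space.
    c, s = np.cos(space_yaw), np.sin(space_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return Pose(position=space_origin + rot @ pose.position,
                yaw=pose.yaw + space_yaw)

# Two players tracked in different physical rooms, merged into one virtual space.
p1 = map_to_virtual(Pose(np.array([1.0, 2.0, 0.0]), 0.0), np.zeros(3), 0.0)
p2 = map_to_virtual(Pose(np.array([0.5, 0.5, 0.0]), 1.5), np.array([10.0, 0.0, 0.0]), np.pi)

# Light-style relative location: where the second participant appears
# relative to the first within the shared space.
relative_location = p2.position - p1.position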
Light teaches a location of a second participant relative to the first participant (claim 6: determining a location associated with the second user; and displaying a graphic representation of the second user at a second location within the virtual space, the second location being determined relative to the first location). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Light's knowledge as described and modify the system of Maciocci and Aronzon because using the relative location of matching attendees in a virtual space facilitates the exchange of protected information and protected content (abstract).
Hosenpud teaches the object obstructing the virtual object is a virtual object ([0045]: The virtual scene 110 can include a first object 104, depicted here as a building, and a second object 108, depicted here as a pear. The first virtual object 104 and the second virtual object, when rendered and subsequently imaged on the screen 102 for the first perspective 106, can image the first object 104 as opaque with any virtual object beyond the first virtual object (from the corresponding user's viewpoint) being unrendered as if the first virtual object 104 obstructs the user's view of the second object 108). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Hosenpud's knowledge as described and modify the system of Maciocci, Aronzon and Light because such a system allows the user to interact with a virtual space in a more realistic fashion ([0008]).
Sheaffer teaches adjusting one or more of the dimensions of the virtual reality space (decreasing a size of the virtual object) for the first participant (user 102, fig. 1) as used for display on the immersive display in a manner that causes the one or more obscuring virtual objects to no longer be in the line of sight of the at least one object (when a virtual object is obstructing a user's line of sight viewing the real object, the size of the virtual object can be adjusted so that it does not interfere with the user's line of sight viewing the real object; as seen at 106 in fig. 1, the user 102's line of sight viewing the real object 120 is obstructed by virtual object 130; as seen at 108 in fig. 1, the user's line of sight viewing the real object is not obstructed by modifying the virtual object 132; [0064]: The virtual object may be modified by stopping the rendering of the virtual object, decreasing a size of the virtual object so it does not interfere with the real object, flashing the virtual object such as prior to stopping the rendering or decreasing the size, imaging the virtual object as transparent, highlighting the virtual object, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Sheaffer's knowledge of adjusting the size of the virtual object as taught and modify the system of Maciocci, Aronzon, Light and Hosenpud because such a modification improves a user's experience by enhancing a user's view of the real world ([0002] and fig. 1).
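For illustration of the size adjustment Sheaffer describes (decreasing a virtual object's size so it no longer interferes with the line of sight), a minimal sketch follows, with the virtual object approximated as a bounding sphere; the geometry and all identifiers are hypothetical assumptions, not Sheaffer's implementation:

import numpy as np

def shrink_to_clear_line_of_sight(eye, target, obj_center, obj_radius, margin=0.05):
    # If the virtual object's bounding sphere intersects the segment from the
    # eye to the viewed target, return a reduced radius that clears the segment
    # by `margin`; otherwise return the radius unchanged (cf. Sheaffer [0064]).
    to_target = target - eye
    seg_len = np.linalg.norm(to_target)
    gaze = to_target / seg_len
    t = np.dot(obj_center - eye, gaze)   # closest approach along the gaze segment
    if t <= 0.0 or t >= seg_len:
        return obj_radius                # object is not between eye and target
    perp = (obj_center - eye) - t * gaze
    clearance = np.linalg.norm(perp)     # distance from object center to the segment
    if obj_radius + margin <= clearance:
        return obj_radius                # already outside the line of sight
    return max(clearance - margin, 0.0)  # shrink until the segment is unobstructed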
Regarding claim 8, the combination of Maciocci, Aronzon, Light, Hosenpud and Sheaffer teaches the method of claim 7, where the adjusting comprises adjusting (decreasing a size of the virtual object), for at least the first participant, one or more dimensions of the virtual reality space such that at least one object is not obstructed (Sheaffer - when a virtual object is obstructing a user's line of sight viewing the real object, the size of the virtual object can be adjusted so that it does not interfere with the user's line of sight viewing the real object; Sheaffer - as seen at 106 in fig. 1, the user 102's line of sight viewing the real object 120 is obstructed by virtual object 130; Sheaffer - as seen at 108 in fig. 1, the user's line of sight viewing the real object is not obstructed by modifying the virtual object 132; Sheaffer - [0064]: The virtual object may be modified by stopping the rendering of the virtual object, decreasing a size of the virtual object so it does not interfere with the real object, flashing the virtual object such as prior to stopping the rendering or decreasing the size, imaging the virtual object as transparent, highlighting the virtual object, etc.).
Regarding claim 9, the combination of Maciocci, Aronzon, Light, Hosenpud and Sheaffer teaches the method of claim 8, where dimensions adjustment information is sent (data transmitted from the first head mounted device to the second head mounted device) to at least one additional participant (second user) and is adjusted (decreasing the size of the virtual object) so the first participant (a first user) appears to be interacting with the at least one object at a location at which the at least one object is displayed for the at least one additional participant (a second user) (the first user selecting a new anchor point at which to display the virtual object), where a location of the at least one object (14a, fig. 4) as seen by the first participant (10a, fig. 4; as shown in fig. 4, the virtual object is displayed to user 10a on a wall surface 16a) and the location at which the at least one object (14b, fig. 4) is displayed for the at least one additional participant (10b, fig. 4; as shown in fig. 4, the virtual object is displayed to user 10b on a table surface 16b) are not the same (when the first user selects a new anchor point to display the virtual object, the second user continues to display the virtual object at the same anchor point; Maciocci - [0086]: Others viewing a collaboration session wearing head mounted devices or using another mobile device such as a smartphone or tablet may not only see the virtual objects and user interactions with them, but have limited interaction capabilities with the virtual augmentations seen by one of the head mounted device users. This limited interaction may include touching the augmentation to cause an effect, defining an interactive area or anchor point on the physical surface (effectively adding a new augmentation to the shared experience), and interacting with the shared mixed reality scene via gestural and/or audio inputs; Maciocci - [0088]: For collaborative purposes, a second user may wear a second head mounted device 10b to view the same virtual object within the same physical space. The processor within or coupled to the second head mounted device 10b may render the virtual object on a user-selected anchor surface 16. The second head mounted device 10b may display the virtual object 14b on the same anchor surface or position as designated for the first head mounted device 10a. The second user may also designate a different position or anchor surface for rendering the virtual object 14b as seen through the second head mounted device 10b. In order to enable the second head mounted device 10b to properly render the virtual object 14b on the anchor surface from the second user's perspective, the data transmitted from the first head mounted device to the second head mounted device may include the shape or object data. This data may enable the second head mounted device processor to render a displayed image of the virtual object corresponding to the second user's viewing perspective.
The virtual object data may be in the form of a geometric model, coordinates and fill data, or similar rendering data that may be used in a three-dimensional object rendering module implemented in a processor within the second head mounted device 10b; Maciocci - [0089]: In this application, the second user views the scene and the anchored virtual object 14a on the second head mounted display from the first user's perspective; Maciocci - [0095]: a second user wearing a second head mounted device 10b sits across from the first user. The second head mounted device 10b may either receive an input to select the desktop to be the anchor surface 16 or may receive data from the first head mounted device 10a identifying the selected anchor surface 16. Using this information the second head mounted device 10b may generate a display of the virtual object 14b reoriented to appear right side up and with the proper perspective for the second user. To generate this display, the second head mounted device 10b may receive data regarding the virtual object to be rendered, such as its content and data regarding its general shape and orientation. The second head mounted device 10b may use the anchor surface selected by the first user (or another anchor surface selected by the second user) to determine a location, orientation and perspective for displaying the virtual object. This may include determining a proper top of the object, and an angle of projection of the object to match the anchor surface that results in the proper perspective of the rendered object. Thus, as illustrated in FIG. 3, the second user views the same virtual object 14b anchored to the desk top surface 16 but right side up from the second user's perspective; Maciocci - [0109]: the second user wearing the second head mounted device 10b may provide an input to summon a new virtual object from a personal data space (e.g., cloud or mobile device) and add the new virtual object to a shared display so the first user also sees it in the first head mounted device 10a. In an embodiment, the first head mounted device 10a may receive a prompt which informs the user that a third virtual object is present and requests a user input or command to accept and display the third virtual object. The user may select a new physical surface to anchor the new virtual object to, or may accept the anchor surface selected by the second user; Maciocci - [0137]: A first input may be provided anchoring a virtual object on a first anchor surface. Later, the processor may receive a second input to anchor the virtual object on a second different anchor surface; Maciocci - [0140]: the processor may generate images presented on the head mounted display so that the virtual object 14 appears to be moved from the first anchor surface to the second new anchor surface. In block 717, the processor may modify the image of virtual object to correspond to changes of position, and thus viewing perspective, of the user; Maciocci - [0143]: a second head mounted device may display an image of the virtual object anchored to either the same anchor surface as designated by the user of the first head mounted device, or to a different anchor surface identified by the second user; Maciocci - [0149]: FIG. 8B illustrates an embodiment method 815 for correctly orienting an anchored virtual object in an image that is output on a display of a first user from the first user's point of view, and on another display for a second user's point of view; Sheaffer - as seen at 106 in fig. 1, the user 102's line of sight viewing the real object 120 is obstructed by virtual object 130; Sheaffer - as seen at 108 in fig. 1, the user's line of sight viewing the real object is not obstructed by modifying the virtual object 132; Sheaffer - [0064]: The virtual object may be modified by stopping the rendering of the virtual object, decreasing a size of the virtual object so it does not interfere with the real object, flashing the virtual object such as prior to stopping the rendering or decreasing the size, imaging the virtual object as transparent, highlighting the virtual object, etc.).
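For illustration of the per-user anchoring Maciocci describes in [0088] (the same shared virtual object data rendered at a different anchor surface for each head mounted device), a minimal sketch follows; the data layout and all names are hypothetical assumptions, not Maciocci's implementation:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class SharedVirtualObject:
    # Geometry shared between head mounted devices; each device keeps its own
    # anchor transform, so the object can appear on a wall for one user and on
    # a tabletop for another while remaining the same underlying object.
    model_vertices: np.ndarray                     # N x 3 model-space vertices
    anchors: dict = field(default_factory=dict)    # user_id -> 4x4 anchor transform

    def set_anchor(self, user_id: str, transform: np.ndarray) -> None:
        self.anchors[user_id] = transform

    def vertices_for(self, user_id: str) -> np.ndarray:
        # Place the shared model at this user's chosen anchor surface.
        t = self.anchors[user_id]
        homog = np.hstack([self.model_vertices,
                           np.ones((len(self.model_vertices), 1))])
        return (homog @ t.T)[:, :3]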
Regarding claim 10, the combination of Maciocci, Aronzon, Light, Hosenpud and Sheaffer teaches the method of claim 8, where the first participant (a first user) points to the at least one object, and a direction of the pointing (Maciocci - [0078]: The processor may recognize an input command via a detected gesture (e.g., a finger pointing to a point in space); Maciocci - [0080]: the processor may detect that a user is pointing to a particular surface) is modified (pointing to a different surface to select a new anchor point is functionally equivalent to modifying the direction of pointing) for at least one additional participant (a second user) so that both the first participant and the at least one additional participant see the pointing as directed to the same at least one object (fig. 2, fig. 4), where a location of the at least one object as seen by the first participant and a location at which the at least one object is displayed for the at least one additional participant are not the same (Maciocci - [0074]: The head mounted device 10 may receive a second input (gesture, audio, from an input device, etc.) indicating a new or a second anchor surface 16 within the image that is different from the first anchor surface 16. The second anchor surface 16 may correspond to a second different surface located in the image. Further, the first and second anchor surfaces may not be adjacent and the first surface may not be in view of the head mounted device cameras when the second/alternative surface is designated. For example, one surface might be a desktop 16 as shown in FIG. 2, while another surface may be a horizontal wall 16 or a ceiling as shown in FIG. 1. For example, a first user may select a first anchor surface 16 for personal usage and then select a second anchor surface 16 for a second user in a different geographic location. In an embodiment, the user inputs may be voice inputs, inputs provided using a tangible input device (keyboard or mouse), detected gestures, or may be provided by different users. A processor within or coupled to the head mounted device 10 may calculate parameters, including distance and orientation with respect to the head mounted or body mounted camera that corresponds to the second anchor surface 16. The processor within or coupled to the head mounted device 10 may then display the generated virtual object 14 so the virtual object appears to the user to be anchored to the selected second anchor surface 16. In another embodiment, instead of or in addition to a head mounted device 10, a pico projector may be used to project a virtual object 14 onto the selected anchor surface 16. The pico projector may be a separate modular device, and or may be included within the head mounted device 10; Maciocci - [0109]: the second user wearing the second head mounted device 10b may provide an input to summon a new virtual object from a personal data space (e.g., cloud or mobile device) and add the new virtual object to a shared display so the first user also sees it in the first head mounted device 10a. In an embodiment, the first head mounted device 10a may receive a prompt which informs the user that a third virtual object is present and requests a user input or command to accept and display the third virtual object. The user may select a new physical surface to anchor the new virtual object to, or may accept the anchor surface selected by the second user; Maciocci - [0137]: A first input may be provided anchoring a virtual object on a first anchor surface.
Later, the processor may receive a second input to anchor the virtual object on a second different anchor surface; Maciocci - [0140]: the processor may generate images presented on the head mounted display so that the virtual object 14 appears to be moved from the first anchor surface to the second new anchor surface. In block 717, the processor may modify the image of virtual object to correspond to changes of position, and thus viewing perspective, of the user; Maciocci - [0143]: a second head mounted device may display an image of the virtual object anchored to either the same anchor surface as designated by the user of the first head mounted device, or to a different anchor surface identified by the second user; Maciocci - [0149]: FIG. 8B illustrates an embodiment method 815 for correctly orienting an anchored virtual object in an image that is output on a display of a first user from the first user's point of view, and on another display for a second user's point of view; Maciocci - [0182]: For example, a user may gesture to anchor the virtual object 14 on a physical surface by pointing. Upon recognizing this gesture, the processor may generate a prompt, such as an audible tone or message presented in the head mounted display requesting the user to confirm the command. To do so, the user may speak words like "okay," "confirm" or "make it so" to confirm that a gesture command recognized by the head mounted device should be executed. Thus, when the processor detects the confirmatory or audible command, the processor may present images in the head mounted display that shows the virtual object anchored on the physical surface to which the user is pointing).
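For illustration of modifying the direction of pointing for an additional participant whose copy of the object is anchored at a different location, a minimal sketch follows; all names are hypothetical assumptions, not the references' implementations:

import numpy as np

def remap_pointing(hand_position, object_position_for_viewer):
    # Recompute the rendered pointing direction so that, in this viewer's scene,
    # the gesture aims at wherever this viewer's copy of the object is anchored.
    d = object_position_for_viewer - hand_position
    return d / np.linalg.norm(d)

# The object is anchored to a wall for user 1 but to a desktop for user 2
# (cf. Maciocci fig. 4); user 1's rendered gesture is redirected toward the
# desktop copy so both users see the pointing directed to the same object.
hand_in_user2_scene = np.array([0.0, 1.2, 0.5])
object_on_desktop = np.array([1.0, 0.8, 0.0])
pointing_direction = remap_pointing(hand_in_user2_scene, object_on_desktop)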
Response to Arguments
Applicant's arguments with respect to claims 12-16 (see page 3 of Applicant's Remarks filed on 10/17/2025) have been considered but are moot because the amended claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Please refer to the rejection of claims 12-16 above for details.
Applicant’s arguments with respect to claim(s) 7-10 have been considered but are moot because the new ground of rejection does not rely on the same combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Response to the argument that, for amended claim 7, none of the cited references teaches adjusting the dimensions of the virtual reality space for the second participant where an obstruction is found.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., adjusting the dimensions of the virtual reality space for the second participant) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). It should be noted that the amended claim 7 recites to adjust the dimensions of the virtual reality space for the first participant and not the second participant.
The limitations of amended claim 7 stand rejected in view of Maciocci, Aronzon, Light, Hosenpud and Sheaffer, as set forth above.
In particular, Sheaffer teaches adjusting one or more of the dimensions of the virtual reality space (decreasing a size of the virtual object) for the first participant (user 102, fig. 1) as used for display on the immersive display in a manner that causes the one or more obscuring virtual objects to no longer be in the line of sight of the at least one object (when a virtual object is obstructing a user's line of sight viewing the real object, the size of the virtual object can be adjusted so that it does not interfere with the user's line of sight viewing the real object; as seen at 106 in fig. 1, the user 102's line of sight viewing the real object 120 is obstructed by virtual object 130; as seen at 108 in fig. 1, the user's line of sight viewing the real object is not obstructed by modifying the virtual object 132; [0064]: The virtual object may be modified by stopping the rendering of the virtual object, decreasing a size of the virtual object so it does not interfere with the real object, flashing the virtual object such as prior to stopping the rendering or decreasing the size, imaging the virtual object as transparent, highlighting the virtual object, etc.).
Response to the argument that none of the cited references teach solving the obstruction by reconfiguring the space itself that contains the object.
Maciocci, in view of Aronzon, in view of Light, in view of Hosenpud and further in view of Sheaffer teaches the limitations recited in claim 8.
In particular, Sheaffer teaches the adjusting comprises adjusting (decreasing a size of the virtual object), for at least the first participant, one or more dimensions of the virtual reality space such that at least one object is not obstructed (Sheaffer - when a virtual object is obstructing a user's line of sight viewing the real object, the size of the virtual object can be adjusted so that it does not interfere with the user's line of sight viewing the real object; Sheaffer - as seen at 106 in fig. 1, the user 102's line of sight viewing the real object 120 is obstructed by virtual object 130; Sheaffer - as seen at 108 in fig. 1, the user's line of sight viewing the real object is not obstructed by modifying the virtual object 132; Sheaffer - [0064]: The virtual object may be modified by stopping the rendering of the virtual object, decreasing a size of the virtual object so it does not interfere with the real object, flashing the virtual object such as prior to stopping the rendering or decreasing the size, imaging the virtual object as transparent, highlighting the virtual object, etc.). It should be noted that a virtual object is inherently a part of the virtual space, and hence adjusting the dimensions of the virtual object implies changing a part of the virtual space.
Allowable Subject Matter
Claims 12-16 would be allowable if rewritten or amended to overcome the rejections under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 12-16, none of the cited prior art references of record teaches, either individually or in combination, in response to determining the object with which the first participant is interacting is obstructed, adjusting dimensions of the virtual reality space that the second participant is viewing so that the object with which the first participant is interacting is no longer obstructed from the second participant's view.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday, 10:00 am-6:30 pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JWALANT AMIN/Primary Examiner, Art Unit 2612