DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Remarks
This is a reply to the amendments filed on 11/07/2025, in which claims 1, 4, 6-7, and 9-14 are amended and claim 8 is cancelled. Claims 1-7 and 9-14 remain pending in the present application, with claims 1, 6, 7, 9, 10, 11, and 12 being independent claims.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner, together with their equivalents, as they may most broadly and appropriately apply to any anticipated claim amendments.
Response to Arguments
Regarding the outstanding 35 U.S.C. §112(a) rejections, Applicant has amended the claims to remove “augmented reality mode and a virtual reality mode” and has cancelled claim 8, rendering the rejections moot. Therefore, the outstanding 35 U.S.C. §112(a) rejections of claims 1-7 and 9-14 are withdrawn.
Applicant's arguments filed on 11/07/2025 with respect to amended claims 1, 6, 7, 9, 10, 11, and 12 have been considered but are moot in view of the new ground(s) of rejection.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Use of the word “device” (or “step for”, “unit”, “element”, “mechanism”, “module”, “means”, “engine”, “component”, “member”, “apparatus”, “machine”, “system”, “assembly”, “portion”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.
Absence of the word “device” in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.
The claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
a real-point virtualization device configured to receive a measurement signal… in claim 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06).
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 and 9-14 are rejected under 35 U.S.C. 103 as being unpatentable over Spivack et al. (US 20190108578 A1, hereinafter referred to as “Spivack”) in view of Mate et al. (US 20180300040 A1, hereinafter referred to as “Mate”), and further in view of Ord et al. (US 20180361260 A1, hereinafter referred to as “Ord”).
Regarding claim 1, Spivack discloses an interaction peripheral comprising:
a rangefinder (see Spivack, FIG. 1 and FIG. 3B and paragraph [0278]: “The location sensor 340 can include … an optical rangefinder”),
in answer to a command of a user of the virtual reality headset by using the interaction peripheral to sight a real point in a real room of the real space while displaying the real room on the display (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”).
Regarding claim 1, Spivack discloses all the claimed limitations with the exception of an interaction peripheral able to be connected to a virtual reality headset having a display able to reproduce a virtual point in a virtual space and a real point in a real space, the interaction peripheral being distinct from the virtual reality headset and comprising: configured to provide the virtual reality headset with a measurement signal, the measurement signal comprising a measurement of a relative position of a real point of the real room, the real point being sighted by the rangefinder during a survey of positions in the real room, the measurement signal being able to allow reproduction of the measured real point in the virtual space rendered by the virtual reality headset during use of the virtual space.
Mate from the same or similar fields of endeavor discloses the real point being sighted by the rangefinder (see Mate, paragraph [0088]: “The user input circuitry 44 detects the user's real point of view 14 using user point of view sensor 45. The user's real point of view is used by the controller 42 to determine the point of view 24 within the virtual space 20, changing the virtual scene 22”), the measurement signal being able to allow reproduction of the measured real point in the virtual space rendered by the virtual reality headset (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Mate with the teachings of Spivack. The motivation for doing so would be to ensure that the system has the ability to use the system and method disclosed in Mate to display to a user a current virtual scene of a virtual space from a current point of view at a current position; to transmit the computer program as a computer data signal; to detect the user's real point of view using a user point of view sensor, wherein the user's real point of view is used by the controller to determine the point of view within the virtual space; and to track movement of a user's eye or eyes and thereby determine a direction of the user's gaze and consequential changes in the real direction of the real point of view. In combination, these teachings reproduce a virtual point in a virtual space on a display, compute a signal using a computer, sight the real point with the rangefinder, and allow reproduction of the measured real point in a virtual space rendered by the virtual reality headset, in order to transform the location of the sighted point (or measurement point) into a virtual location of the measurement point with user inputs, so that the measurement tools can be manipulated, which could reduce the risk of errors during reproduction in the real space.
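By way of non-limiting illustration only, and not as a characterization of the claims or of any cited reference, the transformation discussed above (from a sighted measurement point to usable coordinates) can be pictured as follows. This is a minimal Python sketch; the function name and its inputs (peripheral position, sighting direction, range) are hypothetical assumptions, not items taken from the record.

    import numpy as np

    # Hypothetical sketch: convert a rangefinder reading into coordinates of
    # the sighted real point, assuming the interaction peripheral reports its
    # own position and its sighting direction in the real room's frame.
    def measure_real_point(peripheral_position, sighting_direction, distance):
        d = np.asarray(sighting_direction, dtype=float)
        d /= np.linalg.norm(d)  # normalize the sighting direction
        return np.asarray(peripheral_position, dtype=float) + distance * d

    # Example: peripheral at the room origin, sighting along +x, 2.5 m range.
    real_point = measure_real_point([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 2.5)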
Regarding claim 1, Spivack and Mate disclose all the claimed limitations with the exception of an interaction peripheral able to be connected to a virtual reality headset having a display able to reproduce a virtual point in a virtual space and a real point in a real space, the interaction peripheral being distinct from the virtual reality headset and comprising: configured to provide the virtual reality headset with a measurement signal, the measurement signal comprising a measurement of a relative position of a real point of the real room, during a survey of positions in the real room, during use of the virtual space.
Ord from the same or similar fields of endeavor discloses an interaction peripheral able to be connected to a virtual reality headset (see Ord, paragraph [0018]: “a virtual reality platform may include a virtual reality headset (e.g., goggles, glasses, and/or other headset), and/or other virtual reality platform”) having a display able to reproduce a virtual point in a virtual space and a real point in a real space (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”), the interaction peripheral being distinct from the virtual reality headset (see Ord, paragraph [0023]: “Executing machine-readable instructions 106 may cause one or more physical processors 104 to facilitate user interactions with virtual objects depicted as being present in a real-world space. The machine-readable instructions 106 may include one or more computer program components. The one or more computer program components may include one or more of a space component 108, a an orientation component 110, a presentation component 112, a user component 114, a game component 116, and/or other components”) and comprising:
configured to provide the virtual reality headset with a measurement signal (see Ord, paragraph [0022]: “a location sensor of a computing platform may be configured to generate output signals conveying location information and/or other information. Location information derived from output signals of a location sensor may define one or more of a geo-location of the computing platform, an elevation of the computing platform, and/or other measurements”),
the measurement signal comprising a measurement of a relative position of a real point of the real room (see Ord, FIG. 6 and paragraph [0050]: “FIG. 6 illustrates display 406 of computing platform 404 of FIG. 4. The display 406 shows a representation 500 of first location 300 (FIG. 3) within a second field of view of the real-world space. The one or more virtual objects depicted as being present in first location 300 within the second field of view may include second virtual object 510, and/or other virtual objects. The second field of view of the real-world space may be determined based on an orientation of computing platform 404 (FIG. 4) at a second point in time. As shown, the second field of view may not include first virtual object 508. The relative position of second virtual object 510 relative a second object representation 506 of object 306 may also change”),
during a survey of positions in the real room (see Ord, paragraph [0060]: “The space information may define one or more representations of one or more locations in a real-world space. By way of non-limiting illustration, space information may define a representation of a first location in a real-world space. The first location in the real-world space may include real-world objects and/or people present in and/or moving through the real-world space. The representation of the first location including one or more virtual objects depicted in the representation of the first location as being present in the first location. Individual virtual objects may be configured to experience locomotion within the representation of the first location”), during use of the virtual space (see Ord, paragraph [0029]: “appearance information and spatial information may be combined to generate a model. For example, real-world objects, people, surfaces, and/or other content depicted in one or more images of the real-world space may be mapped to corresponding locations specified by the spatial information in order to realistically model both the visual appearance and the physical dimensions of the physical space. By way of non-limiting illustration, a model of a location of a real-world space may comprise a real-world virtual reality model providing 360 degree views of the location”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Ord with the teachings of Spivack and Mate. The motivation for doing so would be to ensure that the system has the ability to use the systems and methods disclosed in Ord for facilitating user interactions with virtual objects depicted as being present in a real-world space: to include a virtual reality headset in a virtual reality platform, wherein the virtual reality platform is configured to provide the user an immersive experience into a different location and user input via the virtual reality platform may be provided by an external device; to select virtual objects in a representation of a location within a field of view of the real-world space presented on the computing platform via the user input on a display; to generate output signals conveying location information and/or other information via a location sensor to define measurements; to generate a virtual reality model that maps real-world objects in the real-world space to corresponding virtual views of the physical space; and to display a representation of the real-world space using virtual objects and their relative positions in the real room. The combination thus uses a virtual reality headset having a display to display a virtual object in a virtual space and a real object in a real space, and provides the virtual reality headset with a measurement signal comprising a measurement of a relative position of a real point of the real room during a survey of positions in the real room and during use of the virtual space, in order to create a virtual space based on a survey of a real space so that the real space can be reproduced as virtual points of the virtual space.
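Again by way of non-limiting illustration only, the reproduction of a measured real point in the virtual space, as discussed above, can be modeled as a linear correspondence between the two spaces. The rotation, scale, and origin parameters below are hypothetical assumptions rather than features taken from the claims or the references.

    import numpy as np

    # Hypothetical sketch: map a measured real point into the virtual space
    # rendered by the headset, assuming a simple scale/rotation/translation
    # correspondence between the real room and the virtual space.
    def to_virtual_point(real_point, rotation, scale, virtual_origin):
        r = np.asarray(rotation, dtype=float)   # 3x3 rotation matrix
        p = np.asarray(real_point, dtype=float)
        return scale * (r @ p) + np.asarray(virtual_origin, dtype=float)

    # Example: identity rotation, 1:1 scale, virtual origin offset along x.
    virtual_point = to_virtual_point([2.5, 0.0, 0.0], np.eye(3), 1.0, [10.0, 0.0, 0.0])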
Regarding claim 2, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the interaction peripheral as claimed in claim 1, wherein the interaction peripheral comprises a direction detector able to provide the virtual reality headset with a signal comprising a direction in which the rangefinder measured the real point (see Mate, paragraph [0049]: “as illustrated in FIGS. 1A, 1B, 1C a position 23 of the point of view 24 within the virtual space 20 may be changed and/or a direction or orientation 25 of the point of view 24 within the virtual space 20 may be changed”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 3, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the interaction peripheral as claimed in claim 1, wherein the interaction peripheral comprises a location detector able to provide the virtual reality headset with a signal comprising a position of the interaction peripheral (see Spivack, paragraph [0132]: “The client application can sense, detect or recognize virtual objects and/or other human users, actors, non-player characters or any other human or computer participants that are within range of their physical location, and can enable the users to observe, view, act, interact, react with respect to the VOBs”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 4, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the interaction peripheral as claimed in claim 1, wherein the interaction peripheral comprises a controller able to be manipulated by the user wearing the virtual reality headset, the controller being configured to activate the rangefinder upon command of the user (see Mate, paragraph [0086]: “The user input circuitry 44 detects user actions using user input 43. These user actions are used by the controller 42 to determine what happens within the virtual space 20. This may enable interaction with a visual element 28 within the virtual space”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 5, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the interaction peripheral as claimed in claim 1, wherein the interaction peripheral is a portable peripheral (see Spivack, paragraph [0100]: “using IR range finding device or sensors that can be built into portable devices”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 6, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a surveying method for surveying a real space intended to be integrated into a virtual space by a virtual reality headset (see Ord, paragraph [0018]: “a virtual reality platform may include a virtual reality headset (e.g., goggles, glasses, and/or other headset), and/or other virtual reality platform”) having a display able to reproduce a virtual point in a virtual space and a real point in a real space (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”), the surveying method being implemented by an interaction peripheral and comprising:
pointing at the real point in the real space by using a rangefinder of the interaction peripheral (see Spivack, paragraph [0121]: “Augmented reality enabled technology and devices can therefore facilitate and enable various types of activities with respect to and within virtual locations in the virtual world. Due to the inter connectivity and relationships between the physical world and the virtual world in the augmented reality environment, activities in the virtual world can drive traffic to the corresponding locations in the physical world”); and
in answer to a command of a user of the virtual reality headset while displaying the real space on the display (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”), measuring a relative position of the real point in a real room of the real space sighted by the interaction peripheral during the pointing (see Ord, FIG. 6 and paragraph [0050]: “FIG. 6 illustrates display 406 of computing platform 404 of FIG. 4. The display 406 shows a representation 500 of first location 300 (FIG. 3) within a second field of view of the real-world space. The one or more virtual objects depicted as being present in first location 300 within the second field of view may include second virtual object 510, and/or other virtual objects. The second field of view of the real-world space may be determined based on an orientation of computing platform 404 (FIG. 4) at a second point in time. As shown, the second field of view may not include first virtual object 508. The relative position of second virtual object 510 relative a second object representation 506 of object 306 may also change”), the measurement of the relative position providing a measurement signal able to make it possible to reproduce the real point in the virtual space rendered by a virtual reality headset (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”) during use of the virtual space (see Ord, paragraph [0029]: “appearance information and spatial information may be combined to generate a model. For example, real-world objects, people, surfaces, and/or other content depicted in one or more images of the real-world space may be mapped to corresponding locations specified by the spatial information in order to realistically model both the visual appearance and the physical dimensions of the physical space. By way of non-limiting illustration, a model of a location of a real-world space may comprise a real-world virtual reality model providing 360 degree views of the location”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 7, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a virtual reality headset able to be connected to an interaction peripheral, the virtual reality headset comprising:
a display configured to reproduce a virtual point in a virtual space and a real point in a real space (see Mate, paragraph [0141]: “In FIG. 12A, the apparatus 30 is displaying to a user a current virtual scene 22 of a virtual space 20 from a current point of view 24 at a current position 23”); and
a processor which is configured to receive (see Spivack, paragraph [0051]: “platforms (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment”) a measurement signal for the real point sighted by the interaction peripheral in a real room of the real space (see Ord, FIG. 6 and paragraph [0050]: “FIG. 6 illustrates display 406 of computing platform 404 of FIG. 4. The display 406 shows a representation 500 of first location 300 (FIG. 3) within a second field of view of the real-world space. The one or more virtual objects depicted as being present in first location 300 within the second field of view may include second virtual object 510, and/or other virtual objects. The second field of view of the real-world space may be determined based on an orientation of computing platform 404 (FIG. 4) at a second point in time. As shown, the second field of view may not include first virtual object 508. The relative position of second virtual object 510 relative a second object representation 506 of object 306 may also change”) in answer to a command of a user of the virtual reality headset by using the interaction peripheral (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”) while displaying the real space during a survey of positions in the real room and to display the virtual point in the virtual space on the display during use of the virtual space (see Ord, paragraph [0029]: “appearance information and spatial information may be combined to generate a model. For example, real-world objects, people, surfaces, and/or other content depicted in one or more images of the real-world space may be mapped to corresponding locations specified by the spatial information in order to realistically model both the visual appearance and the physical dimensions of the physical space. By way of non-limiting illustration, a model of a location of a real-world space may comprise a real-world virtual reality model providing 360 degree views of the location”), wherein the virtual point is defined by a relative position of the virtual point in the virtual space based on a measurement signal for a real point sighted by the interaction peripheral in the real room of the real space (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”).
The motivation for combining the references has been discussed in claim 1 above.
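For orientation only, the measurement signal received by the headset's processor, as mapped in claim 7 above, can be pictured as a small data record. The field names below are hypothetical assumptions; neither the claims nor the cited references prescribe a particular format.

    from dataclasses import dataclass

    # Hypothetical sketch of the claimed measurement signal: the relative
    # position of the sighted real point plus the sighting direction.
    @dataclass
    class MeasurementSignal:
        relative_position: tuple  # (x, y, z) of the real point relative to the peripheral
        direction: tuple          # unit vector along which the rangefinder sighted the point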
Regarding claim 9, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a method comprising:
reproducing a real point in a virtual space rendered by a virtual reality headset (see Ord, paragraph [0018]: “a virtual reality platform may include a virtual reality headset (e.g., goggles, glasses, and/or other headset), and/or other virtual reality platform”) able to be connected to an interaction peripheral and having a display able to reproduce a virtual point in a virtual space and a real point in a real space (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”), the reproduction method comprising:
displaying on the display a real room in the real space during a surveying of positions in the real room by the interaction peripheral (see Ord, FIG. 6 and paragraph [0050]: “FIG. 6 illustrates display 406 of computing platform 404 of FIG. 4. The display 406 shows a representation 500 of first location 300 (FIG. 3) within a second field of view of the real-world space. The one or more virtual objects depicted as being present in first location 300 within the second field of view may include second virtual object 510, and/or other virtual objects. The second field of view of the real-world space may be determined based on an orientation of computing platform 404 (FIG. 4) at a second point in time. As shown, the second field of view may not include first virtual object 508. The relative position of second virtual object 510 relative a second object representation 506 of object 306 may also change”);
receiving a measurement signal for the real point sighted by the interaction peripheral in the real room (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”) in answer to a command of a user of the virtual reality headset by using the interaction peripheral while displaying the real room on the display (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”); and
displaying on a display the virtual point in the virtual space (see Mate, paragraph [0141]: “In FIG. 12A, the apparatus 30 is displaying to a user a current virtual scene 22 of a virtual space 20 from a current point of view 24 at a current position 23”), the virtual point being defined by a relative position of the virtual point in the virtual space (see Spivack, paragraph [0122]: “By virtual of the inter-relationship and connections between virtual spaces and real world locations enabled by or driven by AR, just as there is a value to real-estate in the real world locations, there can be inherent value or values for the corresponding virtual real-estate in the virtual spaces”) based on a measurement signal for a real point sighted by the interaction peripheral in the real space (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 10, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a virtualization device for virtualizing a real point in a virtual space rendered by a virtual reality headset having a display able to reproduce a virtual point in a virtual space and a real point in a real space and able to be connected to an interaction peripheral (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”), the virtualization device comprising:
a computer which is configured to compute (see Mate, paragraph [0068]: “The apparatus 30 may propagate or transmit the computer program 48 as a computer data signal”) a relative position of the virtual point in the virtual space based on a measurement signal for the real point sighted by the interaction peripheral in a real room of the real space (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”) in answer to a command of a user of the virtual reality headset while displaying the real room on the display (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”), the computer being configured to receive (see Spivack, paragraph [0051]: “platforms (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment”) the measurement signal from the interaction peripheral and command display of the virtual point in the virtual space by providing the relative position of the virtual point to the virtual reality headset rendering the virtual space on the display (see Ord, FIG. 6 and paragraph [0050]: “FIG. 6 illustrates display 406 of computing platform 404 of FIG. 4. The display 406 shows a representation 500 of first location 300 (FIG. 3) within a second field of view of the real-world space. The one or more virtual objects depicted as being present in first location 300 within the second field of view may include second virtual object 510, and/or other virtual objects. The second field of view of the real-world space may be determined based on an orientation of computing platform 404 (FIG. 4) at a second point in time. As shown, the second field of view may not include first virtual object 508. The relative position of second virtual object 510 relative a second object representation 506 of object 306 may also change”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 11, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a method comprising:
virtualizing a real point in a virtual space rendered by a virtual reality headset (see Ord, paragraph [0018]: “a virtual reality platform may include a virtual reality headset (e.g., goggles, glasses, and/or other headset), and/or other virtual reality platform”) having a display able to reproduce a virtual point in the virtual space and a real point in a real space and able to be connected to an interaction peripheral (see Ord, paragraph [0042]: “The user input may include entry and/or selection of individual virtual objects depicted in a representation of a location within a field of view of the real-world space presented on the computing platform. In some implementations, entry and/or selection may be facilitated through a display of the computing platform” and paragraph [0048]: “computing platform 404 may comprise a virtual reality platform and/or other computing platform. A virtual reality platform may be configured to provide the user an immersive experience into a different location. User input via a virtual reality platform may be provided by one or more of gesture and/or motion tracking/detection, input via an external device (e.g., a handheld controller, and/or other devices), and/or other input mechanisms”), the virtualizing comprising:
computing a relative position of the virtual point in the virtual space (see Mate, paragraph [0088]: “The user input circuitry 44 detects the user's real point of view 14 using user point of view sensor 45. The user's real point of view is used by the controller 42 to determine the point of view 24 within the virtual space 20, changing the virtual scene 22”) based on a measurement signal for the real point sighted by the interaction peripheral in a real room of the real space (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”) while displaying the real room on the display in answer to a command of a user of the virtual reality headset during a survey of positions in the real room (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”); and
providing the relative position of the virtual point to the virtual reality headset so as to trigger reproduction on the display of the virtual point in the virtual space (see Ord, paragraph [0029]: “appearance information and spatial information may be combined to generate a model. For example, real-world objects, people, surfaces, and/or other content depicted in one or more images of the real-world space may be mapped to corresponding locations specified by the spatial information in order to realistically model both the visual appearance and the physical dimensions of the physical space. By way of non-limiting illustration, a model of a location of a real-world space may comprise a real-world virtual reality model providing 360 degree views of the location”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 12, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose a device for virtualizing a real room of a real space in a virtual space rendered by a virtual reality headset (see Ord, paragraph [0018]: “a virtual reality platform may include a virtual reality headset (e.g., goggles, glasses, and/or other headset), and/or other virtual reality platform”) having a display and able to be connected to an interaction peripheral, the real room comprising at least one real object (see Ord, paragraph [0023]: “Executing machine-readable instructions 106 may cause one or more physical processors 104 to facilitate user interactions with virtual objects depicted as being present in a real-world space. The machine-readable instructions 106 may include one or more computer program components. The one or more computer program components may include one or more of a space component 108, a an orientation component 110, a presentation component 112, a user component 114, a game component 116, and/or other components”), the device for virtualizing comprising:
a real-point virtualization device configured to receive a measurement signal for real points defining the real object sighted by the interaction peripheral in the real room of the real space (see Mate, paragraph [0092]: “pupil tracking technology, based for example on computer vision, may be used to track movement of a user's eye or eyes and therefore determine a direction of a user's gaze and consequential changes in the real direction 15 of the real point of view 14”) while displaying the real room on the display in answer to a command of a user of the virtual reality headset by using the interaction peripheral (see Spivack, paragraph [0059]: “The virtual object can appear and be accessed (preview, view, shared, edited, modified), acted on and/or interacted with via an imaging device such as a smartphone camera, wearable device such as an augmented reality (AR) or virtual reality (VR) headset”) and to perform a virtualization of the real points defining the real object (see Spivack, paragraph [0119]: “interactive virtual objects that correspond to content or physical objects in the physical world are detected and/or generated”), and
a virtual object generator configured to generate a virtual object based on relative positions of the virtualized real points provided by the real-point virtualization device (see Spivack, paragraph [0122]: “By virtual of the inter-relationship and connections between virtual spaces and real world locations enabled by or driven by AR, just as there is a value to real-estate in the real world locations, there can be inherent value or values for the corresponding virtual real-estate in the virtual spaces”), the virtual object generator being configured to trigger reproduction on the display of the virtual object in the virtual space by providing at least one dimension of the virtual object and a relative position of the virtual object in the virtual space to the virtual reality headset rendering the virtual space (see Mate, paragraph [0141]: “In FIG. 12A, the apparatus 30 is displaying to a user a current virtual scene 22 of a virtual space 20 from a current point of view 24 at a current position 23”).
The motivation for combining the references has been discussed in claim 1 above.
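By way of non-limiting illustration only, the virtual object generator mapped in claim 12 above can be pictured as fitting a box to the virtualized points that define one real object, yielding the claimed dimension and relative position. The axis-aligned-box model is a hypothetical assumption, not a feature of the claims or the references.

    import numpy as np

    # Hypothetical sketch: derive a virtual object (dimensions and a relative
    # position in the virtual space) from the virtualized points that define
    # one real object, modeled here as an axis-aligned bounding box.
    def generate_virtual_object(virtual_points):
        pts = np.asarray(virtual_points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        return {"dimensions": hi - lo,        # at least one dimension of the object
                "position": (lo + hi) / 2.0}  # relative position (box center)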
Regarding claim 13, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the method of claim 11, further comprising:
virtualizing a real area of the real space in the virtual space rendered by the virtual reality headset, the real area comprising at least one real object, the virtualizing the real area comprising:
performing the virtualizing for real points defining the real object (see Mate, paragraph [0050]: “FIG. 3A illustrates a real space 10 comprising real objects 11 that partially corresponds with the virtual space 20 of FIG. 1A. In this example, each real object 11 in the real space 10 has a corresponding virtual object 21 in the virtual space 20”), and
generating virtual objects based on relative positions of multiple virtual points as provided by the virtualizing (see Mate, paragraph [0051]: “A linear mapping exists between the real space 10 and the virtual space 20 and the same mapping exists between each real object 11 in the real space 10 and its corresponding virtual object 21. The relative relationship of the real objects 11 in the real space 10 is therefore the same as the relative relationship between the corresponding virtual objects 21 in the virtual space 20”), the generating triggering reproduction of the virtual object in the virtual space by providing at least one dimension of the virtual object and a relative position of the virtual object to the virtual reality headset rendering the virtual space (see Mate, paragraph [0145]: “performing user-perspective mediated reality, augmented reality or virtual reality, for example via a headset apparatus 33, then the object of interest 21 may be determined by a direction of the user's gaze when performing the three-dimensional gesture 80. As previously described, the user's gaze determines the point of view 24 from the current position 23 and therefore defines the content of the virtual scene 22”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 14, the combination teachings of Spivack, Mate, and Ord as discussed above also disclose the method as claimed in claim 13, wherein the method comprises repeating the virtualizing and virtual object generating for multiple objects in the real area, to generate a virtual plan of the real area, the virtual plan comprising the generated virtual objects (see Spivack, paragraph [0261]: “multiple virtual objects are rendered by the product offering rendering engine 372 to represent each of the product listings in the augmented reality environment. One of the multiple virtual objects can be associated with a given physical location in the real world environment. The given physical location can be specified by one of the seller entities who listed a given product offering associated with the one of the multiple virtual objects associated with the given physical location. The given virtual object can then be rendered (e.g., by the product offering rendering engine 372) in the marketplace in the augmented reality environment, at the given physical location or is rendered in the augmented reality environment to appear to be located at the given physical location.”).
The motivation for combining the references has been discussed in claim 1 above.
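For orientation only, the repetition recited in claim 14 can be pictured as applying the object-generation step of the previous sketch to each surveyed object in turn, collecting the results into a virtual plan. The list-of-boxes representation is a hypothetical assumption, not taken from the claims or the references.

    import numpy as np

    # Hypothetical sketch: repeat point virtualization and virtual object
    # generation for several real objects to assemble a virtual plan of the
    # surveyed real area (a list of box-shaped virtual objects).
    def generate_virtual_plan(objects_points):
        plan = []
        for pts in objects_points:
            pts = np.asarray(pts, dtype=float)
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            plan.append({"dimensions": hi - lo, "position": (lo + hi) / 2.0})
        return plan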
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG whose telephone number is (571)272-4212. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NIENRU YANG/Examiner, Art Unit 2484
/THAI Q TRAN/Supervisory Patent Examiner, Art Unit 2484