DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 2, 4, 5, 9, 10, 13-24 and 26-29 are objected to because of the following informalities:
For claim 1, Examiner believes this claim should be amended in the following manner:
A method for delivering extended reality (XR) content comprising:
providing a physical environment having a physical entity, a first physical surface and a second physical surface;
providing a first projected surface that is co-planar with the first physical surface and a second projected surface that is co-planar with the second physical surface, wherein the first projected surface intersects with the second projected surface along a first elongate intersection;
providing an XR system comprising an XR content generation system (XGS) for generating the XR content and input-output (I/O) components including an input device for receiving inputs for interacting with the XR content and an output device for outputting the XR content including at least visual XR content;
with the XGS, generating a three-dimensional XR environment having an XR entity that corresponds to the physical entity, a first XR surface, a second XR surface, and a XR intersecting plane, wherein the XR environment may be traversed by receiving the inputs via the input device;
aligning the XR environment with the physical environment via an alignment process such that a physical location in three-dimensional physical space, and movement of the physical entity within the physical environment is identically aligned and identically mirrored by a corresponding XR location in three-dimensional XR space, and corresponding movement of the XR entity within the XR environment, the alignment process including assigning a location to the XR environment with respect to the physical environment such that:
the first XR surface is co-planar with the first physical surface;
the second XR surface is co-planar with the second physical surface;
the XR intersecting plane intersects with the first XR surface along a fourth elongate intersection and intersects with the second XR surface along a fifth elongate intersection; and
an intersection point is located at an intersection of the fourth elongate intersection with the fifth elongate intersection, such that the intersection point is disposed along the first elongate intersection.
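(For illustration only, and not as part of the suggested amendment: the geometry recited above can be visualized with a short numeric sketch. The sketch below is a non-authoritative example with invented plane equations and function names; it computes two plane/plane intersection lines, analogous to the claimed "fourth" and "fifth" elongate intersections, and the intersection point where they cross.)

    import numpy as np

    def plane_plane_line(n1, d1, n2, d2):
        # Line where planes n1.x = d1 and n2.x = d2 meet (an "elongate intersection").
        v = np.cross(n1, n2)                          # direction of the line
        A = np.array([n1, n2, v], dtype=float)        # third row pins one point on the line
        p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
        return p, v / np.linalg.norm(v)

    def line_line_point(p1, v1, p2, v2):
        # Point where two crossing lines meet (least-squares formulation).
        t, *_ = np.linalg.lstsq(np.stack([v1, -v2], axis=1), p2 - p1, rcond=None)
        return p1 + t[0] * v1

    # Invented axis-aligned example: two wall-like XR surfaces and a floor-like
    # XR intersecting plane.
    wall1 = (np.array([1.0, 0.0, 0.0]), 0.0)          # first XR surface, plane x = 0
    wall2 = (np.array([0.0, 1.0, 0.0]), 0.0)          # second XR surface, plane y = 0
    floor = (np.array([0.0, 0.0, 1.0]), 0.0)          # XR intersecting plane, z = 0

    p4, v4 = plane_plane_line(*floor, *wall1)         # "fourth elongate intersection"
    p5, v5 = plane_plane_line(*floor, *wall2)         # "fifth elongate intersection"
    print(line_line_point(p4, v4, p5, v5))            # "intersection point": [0. 0. 0.]

(In this axis-aligned example the intersection point falls at the origin, which also lies on the line where the two wall planes meet, that is, on the first elongate intersection.)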
For claim 2, Examiner believes this claim should be amended in the following manner:
The method of claim 1 further comprising aligning the XR environment with the physical environment such that a physical position, including the physical location and an orientation in the three-dimensional physical space, is identically aligned and identically mirrored by a corresponding XR position, including the XR location and an orientation in the three-dimensional XR space, the alignment process further including assigning a position, including said XR location and an orientation, to the XR environment with respect to the physical environment.
For claim 4, Examiner believes this claim should be amended in the following manner:
The method of claim 3 further comprising, with the output device, outputting visual XR content that identically mirrors movement of the XR entity from the first XR position to the second XR position in the XR environment that identically mirrors [[the]] movement of the physical entity from the first physical position to the second physical position in the physical environment.
For claim 5, Examiner believes this claim should be amended in the following manner:
The method of claim 1 further comprising providing a selected intersecting plane that intersects the first physical surface along a second elongate intersection and that intersects the second physical surface along a third elongate intersection, the alignment process further comprising assigning a position to the intersection point at a selected height with respect to the physical environment by positioning the XR intersecting plane at [[a]] the selected height relative to the selected intersecting plane.
For claim 9, Examiner believes this claim should be amended in the following manner:
The method of claim 6 wherein the first alignment vector extends between a pair of immutable features in the physical environment or wherein the second alignment vector extends between a pair of immutable features in the XR environment.
For claim 10, Examiner believes this claim should be amended in the following manner:
The method of claim 1 wherein the XR system includes an XR peripheral and a locator device configured to locate and to track a position and movement of the XR peripheral in the physical environment, the method further including establishing a linkage between the XR peripheral and the locator device to establish an initial position of the XR peripheral within the physical environment; and, using the locator device and while the linkage is established, tracking changes in a position of the XR peripheral in the three-dimensional physical space.
For claim 13, Examiner believes this claim should be amended in the following manner:
The method of claim 1 wherein the XR system includes at least two I/O components and a calibrated sensor associated with one of the at least two I/O components, wherein the calibrated sensor is configured to sense tracked information comprising at least one of location, orientation, or movement of the one of the at least two I/O components, the method further comprising using the calibrated sensor to sense the tracked information of the one of the at least two I/O components and making the tracked information available to another one of the at least two I/O components.
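(As context for the claim 13 language, the following is a minimal sketch, using hypothetical class and field names not found in the application, of the pattern in which a calibrated sensor's tracked information, namely location, orientation, and movement, is made available to another I/O component:)

    from dataclasses import dataclass, field

    @dataclass
    class TrackedInfo:
        location: tuple       # e.g., (x, y, z)
        orientation: tuple    # e.g., (yaw, pitch, roll)
        movement: tuple       # e.g., a velocity vector

    @dataclass
    class IOComponent:
        name: str
        received: list = field(default_factory=list)

        def receive(self, info):
            # Another I/O component consumes the shared tracked information.
            self.received.append(info)

    def share_tracked_info(reading, other_components):
        # Make the calibrated sensor's reading available to the other components.
        for component in other_components:
            component.receive(reading)

    hmd, controller = IOComponent("HMD"), IOComponent("controller")
    reading = TrackedInfo((0.0, 1.6, 0.0), (0.0, 0.0, 0.0), (0.1, 0.0, 0.0))
    share_tracked_info(reading, [controller])   # HMD sensor data shared with controller
    print(controller.received)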
For claim 14, Examiner believes this claim should be amended in the following manner:
The method of claim 13 wherein a separate calibrated sensor is associated with each of the at least two I/O components, the method comprising using each calibrated sensor to sense [[the]] tracked information of [[the]] an associated I/O component and then making the tracked information of the associated I/O component available to another one of the at least two I/O components.
For claim 15, Examiner believes this claim should be amended in the following manner:
The method of claim 13 wherein the calibrated sensor is configured to sense tracked information comprising each of the location, the orientation, and the movement of the one of the at least two I/O components.
For claim 16, Examiner believes this claim should be amended in the following manner:
The method of claim 13 wherein the [[one]] calibrated sensor comprises a magnetometer.
For claim 17, Examiner believes this claim should be amended in the following manner:
An alignment method for extended reality (XR) content comprising:
providing a first environment and a second environment, each having a first surface and a second surface;
in each of the first environment and the second environment, defining [[an]] intersection points;
defining at least one of the intersection points by:
providing a first line A that is coplanar with the first surface and a second line B that is coplanar with the second surface;
providing an intersecting plane that intersects the first surface and the second surface;
projecting line A onto the intersecting plane to provide projected line segment AP and projecting line B onto the intersecting plane to provide projected line segment BP, wherein the line segment AP and the line segment BP are sized and configured to intersect with one another at the at least one of the intersection points;
aligning a position of the first environment with a position of the second environment via an alignment process comprising aligning the intersection points,
wherein one environment of the first environment and the second environment is a XR environment comprising the XR content generated by an XR content generation system (XGS) and wherein another environment of the first environment or the second environment is a physical environment.
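(Again for illustration only: a hedged sketch, with invented points and values, of the projection construction recited in suggested claim 17, projecting line A and line B onto the intersecting plane and locating the intersection point of the projected segments AP and BP.)

    import numpy as np

    def project_onto_plane(p, n, d):
        # Orthogonal projection of point p onto plane n.x = d (n unit length).
        return p - (np.dot(n, p) - d) * n

    def projected_line(a1, a2, n, d):
        # Project two points of a line; the projections define the projected segment.
        q1, q2 = project_onto_plane(a1, n, d), project_onto_plane(a2, n, d)
        return q1, q2 - q1                            # point and direction of AP (or BP)

    n, d = np.array([0.0, 0.0, 1.0]), 0.0             # intersecting plane z = 0
    pA, vA = projected_line(np.array([0.0, -1.0, 1.0]), np.array([0.0, 2.0, 1.0]), n, d)  # AP
    pB, vB = projected_line(np.array([-1.0, 0.0, 2.0]), np.array([2.0, 0.0, 2.0]), n, d)  # BP

    # Intersection of AP and BP within the plane (least-squares formulation).
    t, *_ = np.linalg.lstsq(np.stack([vA, -vB], axis=1), pB - pA, rcond=None)
    print(pA + t[0] * vA)                             # shared intersection point: [0. 0. 0.]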
For claim 18, Examiner believes this claim should be amended in the following manner:
The method of claim 17, wherein, in at least one of the first environment and [[a]] the second environment, providing points A1 and A2 that each has a location defined as co-planar with the first surface and providing points B1 and B2 that each has a location defined as co-planar with the second surface, wherein line A passes through the points A1 and A2 and line B passes through the points B1 and B2.
For claim 19, Examiner believes this claim should be amended in the following manner:
The method of claim 18 wherein at least one of the points A1, A2, B1, B2 is provided by selecting a portion of the first surface or the second surface of the at least one of the first environment and [[a]] the second environment in order to specify a location of the at least one [[point]] of the points A1, A2, B1, B2.
For claim 20, Examiner believes this claim should be amended in the following manner:
The method of claim 19 further comprising, using the XGS, automatically selecting the at least one of the points A1, A2, B1, B2 in response to the at least one [[point]] of the points A1, A2, B1, B2 being selected.
For claim 21, Examiner believes this claim should be amended in the following manner:
The method of claim 17 further comprising:
providing an alignment vector in each of the first environment and the second environment, wherein each alignment vector has an alignment angle measured relative to either the first surface or the second surface; and
aligning an orientation of the first environment with an orientation of the second environment by aligning the alignment vectors.
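(For illustration only, a sketch, under the assumption of a vertical-axis rotation and invented vectors, of how aligning the two alignment vectors of suggested claim 21 could align the orientations of the two environments:)

    import numpy as np

    def yaw_between(v_phys, v_xr):
        # Signed angle, about the vertical axis, that rotates the XR alignment
        # vector onto the physical alignment vector (plan-view projection).
        return np.arctan2(v_phys[1], v_phys[0]) - np.arctan2(v_xr[1], v_xr[0])

    def rotate_z(p, a):
        c, s = np.cos(a), np.sin(a)
        return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]])

    v_phys = np.array([1.0, 1.0, 0.0])                # alignment vector, physical environment
    v_xr   = np.array([0.0, 1.0, 0.0])                # alignment vector, XR environment
    a = yaw_between(v_phys, v_xr)
    print(np.round(rotate_z(v_xr, a), 6))             # now parallel with v_phys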
For claim 22, Examiner believes this claim should be amended in the following manner:
The method of claim 17 wherein the line segment AP and the line segment BP, when not extended, do not intersect one another, the method further comprising extending at least one of the line segment AP and the line segment BP in order to provide the intersection.
For claim 23, Examiner believes this claim should be amended in the following manner:
The method of claim 17 wherein:
said one environment further includes a three-dimensional XR environment having an XR entity;
said [[another]] other environment further includes a three-dimensional physical environment having a physical entity that corresponds to the XR entity; and
the XGS comprising an input device for receiving inputs for interacting with the XR content, and an output device for outputting the XR content including at least visual XR content;
the method further comprising:
after the position of the first environment is aligned with [[a]] the position of the second environment, when the physical entity is in a first physical position, using the output device, outputting visual XR content where the XR entity is located at a corresponding first XR position;
after the position of the first environment is aligned with [[a]] the position of the second environment, with the input device, receiving an input corresponding to a change in position of the physical entity in the three-dimensional physical environment from the first physical position to a second and different physical position; and
after the position of the first environment is aligned with [[a]] the position of the second environment, in response to the change of position of the physical entity, with the output device, outputting visual XR content where the XR entity is located at a corresponding second XR position.
For claim 24, Examiner believes this claim should be amended in the following manner:
The method of claim 23 further comprising, after the position of the first environment is aligned with [[a]] the position of the second environment, with the output device, outputting visual XR content that identically mirrors movement of the XR entity from the first XR position to the second XR position in the three-dimensional XR environment that identically mirrors [[the]] movement of the physical entity from the first physical position to the second physical position in the three-dimensional physical environment.
For claim 26, Examiner believes this claim should be amended in the following manner:
The method of claim 17 wherein the XR system includes at least two I/O components and a calibrated sensor associated with one of the at least two I/O components, wherein the calibrated sensor is configured to sense tracked information comprising at least one of location, orientation, or movement of the one of the at least two I/O components, the method further comprising using the calibrated sensor to sense the tracked information of the one of the at least two I/O components and making the tracked information available to another one of the at least two I/O components.
For claim 27, Examiner believes this claim should be amended in the following manner:
The method of claim 26 wherein a separate calibrated sensor is associated with each of the at least two I/O components, the method comprising using each calibrated sensor to sense [[the]] tracked information of [[the]] an associated I/O component and then making the tracked information of the associated I/O component available to another one of the at least two I/O components.
For claim 28, Examiner believes this claim should be amended in the following manner:
The method of claim 26 wherein the calibrated sensor is configured to sense tracked information comprising each of the location, the orientation, and the movement of the one of the at least two I/O components.
For claim 29, Examiner believes this claim should be amended in the following manner:
The method of claim 26 wherein the [[one]] calibrated sensor comprises a magnetometer.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 5-9 and 17-22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 12, 13, 16, 18, 19 and 21 of U.S. Patent 12,190,464 in view of Huston et al. (U.S. Patent Application Publication 2017/0365102 A1).
The following is a claim comparison of claims 1, 5-9 and 17-22 of the instant application and claims 1, 12, 13, 16, 18, 19 and 21 of U.S. Patent 12,190,464.
Application No. 18/970,039 | U.S. Patent 12,190,464
1. A method for delivering extended reality (XR) content comprising:
providing a physical environment having a physical entity, a first physical surface and a second physical surface; providing a first projected surface that is co-planar with the first physical surface and a second projected surface that is co-planar with the second physical surface, wherein the first projected surface intersects with the second projected surface along a first elongate intersection;
providing an XR system comprising an XR content generation system (XGS) for generating XR content and input-output (I/O) components including an input device for receiving inputs for interacting with the XR content and an output device for outputting XR content including at least visual XR content;
with the XGS, generating a three-dimensional XR environment having an XR entity that corresponds to the physical entity, a first XR surface, a second XR surface, and a XR intersecting plane, wherein the XR environment may be traversed by receiving the inputs via the input device; aligning the XR environment with the physical environment via an alignment process such that a physical location in three-dimensional physical space, and movement of the physical entity within the physical environment is identically aligned and identically mirrored by a corresponding XR location in three-dimensional XR space, and corresponding movement of the XR entity within the XR environment,
the alignment process including assigning a location to the XR environment with respect to the physical environment such that: the first XR surface is co-planar with the first physical surface; the second XR surface is co-planar with the second physical surface; the XR intersecting plane intersects with the first XR surface along a fourth elongate intersection and intersects with the second XR surface along a fifth elongate intersection; an intersection point is located at an intersection of the fourth elongate intersection with the fifth intersection, such that the intersection point is disposed along the first elongate intersection.
1. A method for aligning extended reality (XR) content with a physical environment comprising:
providing the physical environment having a first physical surface and a second physical surface; providing a first projected surface that is co-planar with the first physical surface and a second projected surface that is co-planar with the second physical surface, wherein the first projected surface and the second projected surface intersect with one another along a first elongate intersection;
using an XR generation system, generating an XR model having a first virtual surface, a second virtual surface, and a virtual intersecting plane; providing a selected intersecting plane;
assigning a position and orientation to the XR model such that: the first virtual surface is co-planar with the first physical surface; the second virtual surface is co-planar with the second physical surface; the virtual intersecting plane intersects with the first virtual surface along a fourth elongate intersection and intersects with the second virtual surface along a fifth elongate intersection; an intersection point is located at an intersection of the fourth elongate intersection with the fifth elongate intersection, such that the intersection point is disposed along the first elongate intersection;
bisecting an angle formed between the first virtual surface and the second virtual surface to define an alignment angle Θ; and, using the XR generation system to define an alignment vector E that is co-planar with the virtual intersecting plane and extends away from the intersection point at the alignment angle Θ.
5 | 21
6 | 1
7 | 1
8 | 1
9 | 1
17. An alignment method for extended reality (XR) content comprising:
providing a first environment and a second environment, each having a first surface and a second surface; in each of the first environment and the second environment, defining an intersection point; defining at least one of the intersection points by: providing a first line A that is coplanar with the first surface and a second line B that is coplanar with the second surface;
providing an intersecting plane that intersects the first surface and the second surface; projecting line A onto the intersecting plane to provide projected line segment AP and projecting line B onto the intersecting plane to provide projected line segment BP, wherein line segment AP and line segment BP are sized and configured to intersect with one another at the at least one intersection point; aligning a position of the first environment with a position of the second environment via an alignment process comprising aligning the intersection points, wherein one environment of the first environment and the second environment is a XR environment comprising XR content generated by an XR content generation system (XGS) and wherein another environment of the first environment or the second environment is a physical environment.
12. A method for aligning extended reality (XR) content with a physical environment comprising:
providing an X axis, a Y axis, a Z axis, an XY plane defined by the X axis and the Y axis, an XZ plane defined by the X axis and the Z axis, and a YZ plane defined by the Y axis and the Z axis; providing a first physical plane that is parallel with the XY plane; providing a second physical plane that is parallel with the YZ plane and that intersects the first physical plane along a first elongate intersection; using an XR generation system, generating a XR model having: points A1 and A2 that each has a position defined as co-planar with the first physical plane; points B1 and B2 that each has a position defined as co-planar with the second physical plane; in the XR model and using an XR generation system: defining a virtual line A that extends through the points A1 and A2 such that the virtual line A is co-planar with the first physical plane; defining a virtual line B that extends through the points B1 and B2 such that the virtual line B is co-planar with the second physical plane; using the virtual line A and the virtual line B to define an intersection point; and aligning the XR model with the physical environment such that the intersection point is co-linear with the first elongate intersection; bisecting an angle formed between the virtual line A and the virtual line B to define an alignment angle Θ; and, using the XR generation system to define an alignment vector E that extends away from the intersection point at the alignment angle Θ.
13. The method of claim 12 wherein the XR model includes a selected intersecting plane defined as intersecting the first physical plane along a second elongate intersection and as intersecting the second physical plane along a third elongate intersection, the method further comprising projecting the virtual line A and the virtual line B onto the selected intersecting plane to provide a line AP and a line BP, respectively, and wherein the intersection point is located at an intersection of the line AP with the line BP.
18 | 12 and 13
19 | 18 and 19
20 | 18 and 19
21 | 12
22 | 16
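(For orientation while reading the chart above: claims 1 and 12 of U.S. Patent 12,190,464 recite bisecting an angle to define an alignment angle Θ and an alignment vector E. The following is a minimal sketch with invented vectors, not the patent's implementation:)

    import numpy as np

    def bisector(u, v):
        # Unit vector bisecting the angle between u and v (the direction of an
        # alignment vector E at the alignment angle Θ from either input).
        u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
        w = u + v
        return w / np.linalg.norm(w)

    line_a = np.array([1.0, 0.0, 0.0])                # along the first surface
    line_b = np.array([0.0, 1.0, 0.0])                # along the second surface
    e = bisector(line_a, line_b)
    theta = np.degrees(np.arccos(np.dot(e, line_a)))
    print(e, theta)                                   # [0.7071 0.7071 0.], 45 degrees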
For independent claim 1, claim 1 of U.S. Patent 12,190,464 does not disclose input-output (I/O) components including an input device for receiving inputs for interacting with the XR content and an output device for outputting XR content including at least visual XR content; wherein an XR environment may be traversed by receiving the inputs via the input device; aligning the XR environment with a physical environment via an alignment process such that a physical location in three-dimensional physical space, and movement of a physical entity within the physical environment, is identically aligned and identically mirrored by a corresponding XR location in three-dimensional XR space, and corresponding movement of a corresponding XR entity within the XR environment. However, these limitations are well-known in the art as disclosed in Huston et al. (U.S. Patent Application Publication 2017/0365102 A1). It would have been obvious to apply the use of I/O components such as a camera system and depth sensors as input devices for receiving inputs for interacting with the augmented reality content, and head mounted devices (HMD) as display devices for outputting visual augmented reality content, such as a 3D virtual model presented as a 3D augmented reality environment having an avatar as an augmented reality entity corresponding to a physical person, a left virtual wall 760 as a first augmented reality surface, and a right virtual wall as a second augmented reality surface, where the 3D augmented reality environment is traversed by inputs received from the camera system and the depth sensors. In this way, the augmented reality environment is aligned with the room such that a physical location and movement of the physical person in 3D physical space of the room is identically aligned and mirrored by a corresponding augmented reality location and movement of the avatar in 3D augmented reality space of the augmented reality environment, to appropriately present augmented reality (Figs. 19-20; page 18/par. 231-234), as taught in Huston et al. (U.S. Patent Application Publication 2017/0365102 A1). Claim 1 of U.S. Patent 12,190,464 otherwise discloses the same limitations of claim 1 of the instant application as illustrated in the claim chart above. Therefore, claim 1 is not patentably distinct from claim 1 of U.S. Patent 12,190,464.
For dependent claims 5-9, claims 1 and 21 of U.S. Patent 12,190,464 mirror and recite the same limitations of claims 5-9 as set forth in the claim chart above. Therefore, claims 5-9 are not patentably distinct from claims 1 and 21 of U.S. Patent 12,190,464.
For independent claim 17, claims 12 and 13 of U.S. Patent 12,190,464 do not disclose wherein one environment of the first environment and the second environment is a XR environment comprising XR content generated by an XR content generation system (XGS). However, these limitations are well-known in the art as disclosed in Huston et al. (U.S. Patent Application Publication 2017/0365102 A1). It would have been obvious to apply the use of a system for generating a 3D virtual model as an extended reality model to present an augmented reality environment for appropriately presenting augmented reality (Fig. 3; page 2/par. 40) as taught in Huston et al. (U.S. Patent Application Publication 2017/0365102 A1). Claims 12 and 13 of U.S. Patent 12,190,464 otherwise disclose the same limitations of claim 17 of the instant application as illustrated in the claim chart above. Therefore, claim 17 is not patentably distinct from claims 12 and 13 of U.S. Patent 12,190,464.
For dependent claims 18-22, claims 12, 13, 16 and 18-19 of U.S. Patent 12,190,464 mirror and recite the same limitations of claims 18-22 as set forth in the claim chart above. Therefore, claims 18-22 are not patentably distinct from claims 12, 13, 16 and 18-19 of U.S. Patent 12,190,464.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16, 20, 23, 24 and 27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For independent claim 1, this claim establishes a first “extended reality (XR) content” and a second “XR content”. Claim 1 goes on to recite the phrase “the XR content” and it is unclear and ambiguous to which of the previously established first “XR content” or second “XR content” is being referenced by the phrase “the XR content”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
Dependent claims 2-16 depend from claim 1 and inherit the deficiencies of claim 1. Therefore, claims 2-16 are likewise indefinite.
Furthermore, for dependent claim 2, parent claim 1 establishes a “physical location” and an “XR location”. Claim 2 goes on to recite the phrase “said location” and it is unclear and ambiguous to which of the previously established “physical location” and “XR location” is being referenced by the phrase “said location”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
Furthermore, for dependent claim 14, parent claim 13 establishes “tracked information of the one I/O component” and claim 14 goes on to establish “tracked information of the associated I/O component”. Claim 14 goes on to recite the phrase “the tracked information” and it is unclear and ambiguous to which of the previously established “tracked information of the one I/O component” and “tracked information of the associated I/O component” is being referenced by the phrase “the tracked information”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
For dependent claim 20, parent claim 18 establishes points “A1, A2, B1, B2”. Claim 20 goes on to recite the phrase “the at least one point” and it is unclear and ambiguous to which of the previously established points “A1, A2, B1, B2” is being referenced by the phrase “the at least one point”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
For dependent claim 23, parent claim 17 establishes a first “extended reality (XR) content” and a second “XR content”. Claim 23 goes on to recite the phrase “the XR content” and it is unclear and ambiguous to which of the previously established first “XR content” or second “XR content” is being referenced by the phrase “the XR content”. Furthermore, parent claim 17 establishes a “physical environment” and an “XR environment”. Claim 23 goes on to establish a “three-dimensional physical environment” and a “three-dimensional XR environment”. Claim 23 goes on to recite the phrase “the physical environment” and it is unclear and ambiguous to which of the previously established “physical environment” and “three-dimensional physical environment” is being referenced by the phrase “the physical environment”. Claim 23 further goes on to recite the phrase “the XR environment” and it is unclear and ambiguous to which of the previously established “XR environment” and “three-dimensional XR environment” is being referenced by the phrase “the XR environment”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
For dependent claim 24, this claim depends from claim 23 and inherits the deficiencies of claim 23. Therefore, claim 24 is likewise indefinite.
For dependent claim 27, parent claim 26 establishes “tracked information of the one I/O component” and claim 27 goes on to establish “tracked information of the associated I/O component”. Claim 27 goes on to recite the phrase “the tracked information” and it is unclear and ambiguous to which of the previously established “tracked information of the one I/O component” and “tracked information of the associated I/O component” is being referenced by the phrase “the tracked information”. Examiner has suggested amendments in the claim objections to resolve the ambiguities.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4 and 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Huston et al. (U.S. Patent Application Publication 2017/0365102 A1, hereinafter “Huston”) in view of Jovanovic et al. (U.S. Patent Application Publication 2019/0051054 A1, hereinafter “Jovanovic”) and Huang et al. (U.S. Patent Application Publication 2021/0027539 A1, hereinafter “Huang”).
For claim 1, Huston discloses a method for delivering extended reality (XR) content (disclosing a method to align virtual models as extended reality content with a physical environment to present augmented reality as extended reality (Figs. 19-20; page 1/par. 3; page 4/par. 53; and page 17/par. 224-225)) comprising: providing a physical environment having a physical entity, a first physical surface and a second physical surface (disclosing a room as a physical environment having a physical person 710 as a physical entity, a left physical wall 712 as a first physical surface and a right physical wall as a second physical surface (Fig. 19)), wherein the first physical surface intersects with the second physical surface along a first elongate intersection (disclosing the left physical wall 712 and the right physical wall intersect along a line forming a first elongate intersection (Fig. 19)); providing an XR system comprising an XR content generation system (XGS) for generating XR content and input-output (I/O) components including an input device for receiving inputs for interacting with the XR content and an output device for outputting XR content including at least visual XR content (disclosing a system for generating a 3D virtual model as an extended reality model for presenting the augmented reality (Fig. 3; page 2/par. 40); disclosing the system includes I/O components such as a camera system and depth sensors as input devices for receiving inputs for interacting with the augmented reality content and head mounted devices (HMD) as display devices for outputting visual augmented reality content (page 18/par. 233-234)); with the XGS, generating a three-dimensional XR environment having an XR entity that corresponds to the physical entity, a first XR surface, a second XR surface, wherein the XR environment may be traversed by receiving the inputs via the input device (disclosing the system generates a 3D virtual model as a 3D augmented reality environment having an avatar as an augmented reality entity corresponding to the physical person, a left virtual wall 760 as a first augmented reality surface and a right virtual wall as a second augmented reality surface where the 3D augmented reality is traversed by inputs received from the camera system and the depth sensor (Fig. 20; page 18/par. 233-234)); providing a selected intersecting plane (disclosing the room has a physical floor as a selected intersecting plane to intersect the left physical wall 712 and the right physical wall (Fig. 19)); aligning the XR environment with the physical environment via an alignment process such that a physical location in three-dimensional physical space, and movement of the physical entity within the physical environment is identically aligned and identically mirrored by a corresponding XR location in three-dimensional XR space, and corresponding movement of the XR entity within the XR environment (disclosing the augmented reality environment is aligned with the room such that a physical location and movement of the physical person in 3D physical space of the room is identically aligned and mirrored by a corresponding augmented reality location and movement of the avatar in 3D augmented reality space of the augmented reality environment (Figs. 19-20; page 18/par. 231-234)), the alignment process including assigning a location to the XR environment with respect to the physical environment such that: the first XR surface is co-planar with the first physical surface (disclosing the 3D virtual model is assigned a position and orientation to align the left virtual wall to be coplanar with the left physical wall of the room (Figs. 19-20; page 17/par. 224-225)); the second XR surface is co-planar with the second physical surface (disclosing the 3D virtual model is assigned a position and orientation to align the right virtual wall to be coplanar with the right physical wall of the room (Figs. 19-20; page 17/par. 224-225)); the XR intersecting plane intersects with the first XR surface along a fourth elongate intersection and intersects with the second XR surface along a fifth elongate intersection (disclosing the left virtual wall intersects with a floor to form a line as a fourth elongate intersection and the right virtual wall intersects with a floor to form a line as a fifth elongate intersection (Fig. 20)); and an intersection point is located at an intersection of the fourth elongate intersection with the fifth elongate intersection, such that the intersection point is disposed along the first elongate intersection (disclosing an intersection point at the intersection of the lines forming the fourth elongate intersection and the fifth elongate intersection such that the intersection point is aligned with the room for disposal along the line forming the first elongate intersection (Figs. 19-20)).
Huston does not disclose providing a first projected surface that is co-planar with a first physical surface and a second projected surface that is co-planar with a second physical surface.
However, these limitations are well-known in the art as disclosed in Jovanovic.
Jovanovic similarly discloses a system and method for generating a 3D model of a physical space for presenting augmented reality (page 1/par. 2). Jovanovic explains its system may extend vertical planes as projected surfaces from the corners of walls as physical surfaces such that the vertical planes are coplanar with the walls (page 1/par. 2; and pages 7-8/par. 51). It follows Huston may be accordingly modified with the teachings of Jovanovic to provide a first projected surface that is co-planar with its first physical surface and a second projected surface that is co-planar with its second physical surface for intersection along its first elongate intersection.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Huston with the teachings of Jovanovic. Jovanovic is analogous art in dealing with a system and method for generating a 3D model of a physical space for presenting augmented reality (page 1/par. 2). Jovanovic discloses that its use of extending planes as projected surfaces is advantageous in appropriately modeling physical surfaces of a physical space for presenting augmented reality (page 1/par. 2; and pages 7-8/par. 51). Consequently, a PHOSITA would incorporate the teachings of Jovanovic into Huston for appropriately modeling physical surfaces of a physical space for presenting augmented reality.
Huston as modified by Jovanovic does not specifically disclose an extended reality intersecting plane wherein the extended reality intersecting plane intersects with a first extended reality surface along a fourth elongate intersection and intersects with a second extended reality surface along a fifth elongate intersection.
However, these limitations are well-known in the art as disclosed in Huang.
Huang similarly discloses a system and method for generating a 3D model environment of a physical environment for facilitating replacement of physical objects with corresponding simulated objects to present extended reality (page 1/par. 2; and pages 4-5/par. 49). Huang likewise discloses a 3D virtual model with virtual walls and further explains that a virtual floor as an extended reality intersecting plane can be created such that the virtual floor intersects with a left virtual wall along a fourth elongate intersection and intersects with a right virtual wall along a fifth elongate intersection (Fig. 12; pages 4-5/par. 49). It follows Huston and Jovanovic may be accordingly modified with the teachings of Huang to implement an extended reality intersecting plane in its XR model wherein the extended reality intersecting plane intersects with its first extended reality surface along its fourth elongate intersection and intersects with its second extended reality surface along its fifth elongate intersection.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Huston and Jovanovic with the teachings of Huang. Huang is analogous art in dealing with a system and method for generating a 3D model environment of a physical environment for facilitating replacement of physical objects with corresponding simulated objects to present extended reality (page 1/par. 2; and pages 4-5/par. 49). Huang discloses that its use of a virtual floor is advantageous in appropriately aligning a virtual model with a corresponding physical environment to present extended reality (page 1/par. 2; and pages 4-5/par. 49). Consequently, a PHOSITA would incorporate the teachings of Huang into Huston and Jovanovic for appropriately aligning a virtual model with a corresponding physical environment to present extended reality. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
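(To make the combined teaching concrete: once the XR environment is assigned a location and orientation with respect to the physical environment, a single rigid map carries physical positions to XR positions, so movement is mirrored identically. The sketch below is a hedged illustration with an assumed identity alignment, not the actual implementation of Huston, Jovanovic, or Huang:)

    import numpy as np

    def make_alignment(R, t):
        # Rigid map from physical coordinates to XR coordinates fixed by alignment.
        return lambda p: R @ p + t

    # Identity rotation and zero offset: XR space exactly mirrors physical space.
    to_xr = make_alignment(np.eye(3), np.zeros(3))

    first_pos  = np.array([0.0, 0.0, 0.0])            # physical entity's first position
    second_pos = np.array([1.0, 0.0, 2.0])            # after movement
    # The avatar's XR movement mirrors the physical movement one-to-one.
    print(to_xr(second_pos) - to_xr(first_pos))       # [1. 0. 2.]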
For claim 2, depending on claim 1, Huston as modified by Jovanovic and Huang discloses further comprising aligning the XR environment with the physical environment such that a physical position, including the physical location and an orientation in three-dimensional physical space, is identically aligned and identically mirrored by a corresponding XR position, including the XR location and an orientation in three-dimensional XR space, the alignment process further including assigning a position, including said location and an orientation, to the XR environment with respect to the physical environment (Huston discloses a physical position for the physical person including the physical location and orientation in 3D physical space is identically aligned and mirrored by a corresponding augmented reality position of the avatar including the augmented reality location and orientation in 3D augmented reality space to assign a position including the location and orientation to the avatar in the augmented reality environment with respect to the room (Figs. 19-20; page 18/par. 231-234)).
For claim 3, depending on claim 1, Huston as modified by Jovanovic and Huang discloses further comprising: when the physical entity is in a first physical position, using the output device, outputting visual XR content where the XR entity is located at a corresponding first XR position (Huston discloses the physical person is in a first physical position such that a corresponding HMD outputs visual augmented reality content where the avatar is located at a corresponding first augmented reality position (Figs. 19-20; page 18/par. 231-234)); with the input device, receiving an input corresponding to a change in position of the physical entity in the physical environment from the first physical position to a second and different physical position (Huston discloses the camera system and depth sensors receives input corresponding to a change in position of the physical person in the room from the first physical position to a second and different physical position (Figs. 19-20; page 18/par. 231-234)); and in response to the change of position of the physical entity, with the output device, outputting visual XR content where the XR entity is located at a corresponding second XR position (Huston discloses the HMD, in response to the change of position of the physical person, outputs visual augmented reality content where the avatar is located at a second augmented reality position (Figs. 19-20; page 18/par. 231-234)).
For claim 4, depending on claim 3, Huston as modified by Jovanovic and Huang discloses further comprising, with the output device, outputting visual XR content that identically mirrors movement of the XR entity from the first XR position to the second XR position in the XR environment that identically mirrors the movement of the physical entity from the first physical position to the second physical position in the physical environment (Huston discloses the HMD outputs visual augmented reality content that identically mirrors movement of the avatar from the first augmented reality position to the second augmented reality position in the augmented reality environment that identically mirrors the movement of the physical person from the first physical position to the second physical position in the physical environment (Figs. 19-20; page 18/par. 231-234)).
For claim 10, depending on claim 1, Huston as modified by Jovanovic and Huang discloses wherein the XR system includes an XR peripheral and a locator device configured to locate and to track a position and movement of the XR peripheral in the physical environment, the method further including establishing a linkage between the XR peripheral and the locator device to establish an initial position of the XR peripheral within the physical environment; and, using the locator device and while the linkage is established, tracking changes in a position of the XR peripheral in the physical space (Huston discloses its system includes the HMD as an augmented reality peripheral and depth sensors as a locator device for tracking a position and movement of the HMD in the room (Figs. 19-20; page 18/par. 231-234); Huston further explains a communication link is established between the HMD and the depth sensors to determine an initial position of the HMD within the room and using the depth sensors while the communication link is established to track changes in position of the HMD in the room (Figs. 19-20; page 1/par. 8; page 12/par. 129-130; and page 18/par. 231-234)).
For claim 11, depending on claim 10, Huston as modified by Jovanovic and Huang discloses wherein, in establishing the linkage between the XR peripheral and the locator device, the locator device is imaged (Huston discloses, in establishing the communication link between the HMD and the depth sensors, the depth sensors of the HMD may be imaged by the camera system (Figs. 19-20; page 1/par. 8; page 12/par. 129-130; and page 18/par. 231-234)).
For claim 12, depending on claim 10, Huston as modified by Jovanovic and Huang discloses wherein, in establishing the linkage between the XR peripheral and the locator device, the locator device is contacted by the XR peripheral (Huston discloses establishing the communication link between the HMD and the depth sensors so that the depth sensors are contacted by the HMD (Figs. 19-20; page 1/par. 8; page 12/par. 129-130; and page 18/par. 231-234)).
For claim 13, depending on claim 1, Huston as modified by Jovanovic and Huang discloses wherein the XR system includes at least two I/O components and a calibrated sensor associated with one of the at least two I/O components, wherein the calibrated sensor is configured to sense tracked information comprising at least one of location, orientation, or movement of the one I/O component, the method further comprising using the calibrated sensor to sense the tracked information of the one I/O component and making the tracked information available to another one of the at least two I/O components (Huston discloses the system includes at least two HMDs as two I/O components and a calibrated camera as a calibrated sensor associated with at least one of the HMDs to sense tracked information of a location, orientation and movement of at least one of the HMDs and makes the tracked information available to another HMD over a network (Figs. 19-20; page 1/par. 8; page 12/par. 129-130; and page 18/par. 231-234)).
For claim 14, depending on claim 13, Huston as modified by Jovanovic and Huang discloses wherein a separate calibrated sensor is associated with each of the