Prosecution Insights
Last updated: April 19, 2026
Application No. 18/615,309

SYSTEM FOR PROVIDING SYNCHRONIZED SHARING OF AUGMENTED REALITY CONTENT IN REAL TIME ACROSS MULTIPLE DEVICES

Final Rejection — §103, §112, §DP
Filed: Mar 25, 2024
Examiner: FIORILLO, JAMES N
Art Unit: 2444
Tech Center: 2400 — Computer Networks
Assignee: Iris Xr Inc.
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 12m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average); 382 granted / 444 resolved; +28.0% vs TC avg
Interview Lift: +36.9% (strong); allowance on resolved cases with an interview vs. without
Typical Timeline: 2y 12m average prosecution; 30 applications currently pending
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 444 resolved cases
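
Read together, each statute-specific rate and its "vs TC avg" delta imply the same Tech Center baseline, consistent with the single black-line average noted above. A minimal sketch of that consistency check (illustrative only; the variable names are ours and the tool's actual computation is not shown):

```python
# Illustrative only: back out the implied Tech Center baseline from the
# dashboard's statute-specific rates and their "vs TC avg" deltas.
examiner_rates = {"101": 9.2, "103": 55.5, "102": 8.6, "112": 7.9}    # percent
deltas_vs_tc   = {"101": -30.8, "103": 15.5, "102": -31.4, "112": -32.1}

for statute, rate in examiner_rates.items():
    implied_tc_avg = rate - deltas_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")

# Each implied baseline works out to 40.0%, i.e., one shared Tech Center
# average estimate (the "black line" in the chart).
```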

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office correspondence is in response to application number 18/615,309 filed on March 25, 2024. Claims 1–18 are pending.

Priority

This application claims the benefit of prior-filed application 18/075,329 (now U.S. Patent 11,943,282), which in turn claimed benefit of application 17/075,443 (now U.S. Patent 11,522,945), under 35 U.S.C. 120, 121, 365(c), or 386(c). Co-pendency between the current application and the prior application 18/075,329 is required. Since the applications were co-pending at the time of the filing date of the current application, the applicant is entitled to the benefit claim to the prior-filed application, which provides a priority date of October 20, 2020.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1)–706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based e-Terminal Disclaimer may be filled out completely online using web-screens. An e-Terminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about e-Terminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1–18 are rejected on the ground of non-provisional, nonstatutory, anticipatory-type double patenting as being unpatentable over claims 1–13 and 15–20 of U.S. Patent 11,943,282 (herein referred to as ‘282).
Although the conflicting claims are not identical, they are not patentably distinct from each other because both sets of claims are directed to the same invention. This is a non-provisional, nonstatutory, anticipatory-type double patenting rejection since the claims directed to the same invention have in fact been patented.

In regard to claim 1, each limitation of application 18/615,309 claim 1 is paired below with the corresponding limitation of U.S. Patent 11,943,282 claim 1:

Application: 1. A system comprising:
‘282: 1. An augmented reality (AR) platform configured to communicate and exchange data with a plurality of augmented reality (AR)-capable devices over a network,

Application: a plurality of augmented reality (AR)-capable devices for providing respective users with an augmented view of a real-world environment, wherein the real-world environment comprises a teaching-based environment and the respective users comprise one or more instructors and/or one or more students; and
‘282: the plurality of AR-capable devices provide respective users with an augmented view of a real-world environment and each comprises an associated localization system that is specific to a platform of the respective AR-capable device,

Application: an augmented reality (AR) platform configured to communicate and exchange data with each of the plurality of AR-capable devices over a network, the AR platform comprising a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to:
‘282: wherein the AR platform is configured to synchronize sharing of augmented reality content in real time across the plurality of AR-capable devices within the real-world environment, the AR platform comprising a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to:

Application: receive, from each of the AR-capable devices, localization data associated with at least one of an anchor-based system and an image tracking-based system for establishing a location of a respective AR-capable device within the real-world environment;
‘282: receive, from each of the plurality of AR-capable devices, localization data for establishing a location of a respective AR-capable device within the real-world environment, wherein the localization data is based on a platform-specific localization algorithm run by each AR-capable device to thereby localize the respective AR-capable device within the real-world environment, wherein at least a first AR-capable device provides localization data associated with an anchor-based system and at least a second AR-capable device provides localization data associated with an image tracking-based system;

Application: process the localization data and assign a location of each AR-capable device relative to a shared, fixed origin point within the real-world environment; and
‘282: process the localization data, including synchronously aligning the localization data from each of the AR-capable devices relative to a shared, fixed origin point within the real-world environment, and assign a location of each AR-capable device relative to a shared, fixed origin point within the real-world environment, wherein the shared, fixed origin point comprises position data associated with a specific physical location and orientation within the real-world environment to which AR content is to be associated; and

Application: transmit AR content to each of the AR-capable devices, the AR content configured to be displayed and rendered by each AR-capable device based, at least in part, on the assigned location of each respective AR-capable device.
‘282: transmit AR content to each of the AR-capable devices, the AR content configured to be displayed and rendered by each AR-capable device based, at least in part, on the assigned location of each respective AR-capable device, in which visual presentation of the AR content is adapted to each respective user's point of view as a result of the assigned location within the real-world environment relative to the shared, fixed origin point.

It is clear that all of the elements of the instant application 18/615,309 (herein ‘309) claim 1 are to be found in U.S. Patent 11,943,282 (herein ‘282) claim 1 (as the instant application ‘309 claim 1 fully encompasses ‘282 claim 1). The difference between application ‘309 claim 1 and patent ‘282 claim 1 lies in the fact that the ‘282 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the ‘282 patent is in effect a “species” of the “generic” invention of ‘309 claim 1. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since ‘309 claim 1 is anticipated by claim 1 of ‘282, it is not patentably distinct from ‘282 claim 1.

In regard to claim 2, see claim 19 of ‘282.
In regard to claim 3, see claim 20 of ‘282.
In regard to claim 4, see claim 10 of ‘282.
In regard to claim 5, see claim 11 of ‘282.
In regard to claim 6, see claim 2 of ‘282.
In regard to claim 7, see claim 3 of ‘282.
In regard to claim 8, see claim 4 of ‘282.
In regard to claim 9, see claim 5 of ‘282.
In regard to claim 10, see claim 6 of ‘282.
In regard to claim 11, see claim 7 of ‘282.
In regard to claim 12, see claim 8 of ‘282.
In regard to claim 13, see claim 9 of ‘282.
In regard to claim 14, see claim 15 of ‘282.
In regard to claim 15, see claim 16 of ‘282.
In regard to claim 16, see claim 17 of ‘282.
In regard to claim 17, see claim 18 of ‘282.
In regard to claim 18, see claim 1 of ‘282.

Claims 1–17 are rejected on the ground of non-provisional, nonstatutory, anticipatory-type double patenting as being unpatentable over claims 1–13 and 17–20 of U.S. Patent 11,522,945 (herein referred to as ‘945). Although the conflicting claims are not identical, they are not patentably distinct from each other because both sets of claims are directed to the same invention. This is a non-provisional, nonstatutory, anticipatory-type double patenting rejection since the claims directed to the same invention have in fact been patented.

In regard to claim 1, each limitation of application 18/615,309 claim 1 is paired below with the corresponding limitation of U.S. Patent 11,522,945 claim 1:

Application: 1. A system comprising:
‘945: 1. A system comprising:

Application: a plurality of augmented reality (AR)-capable devices for providing respective users with an augmented view of a real-world environment, wherein the real-world environment comprises a teaching-based environment and the respective users comprise one or more instructors and/or one or more students; and
‘945: a plurality of augmented reality (AR)-capable devices for providing respective users with an augmented view of a real-world environment, wherein each of the AR-capable devices comprises an associated localization system that is specific to a platform of the respective AR-capable device; and

Application: an augmented reality (AR) platform configured to communicate and exchange data with each of the plurality of AR-capable devices over a network, the AR platform comprising a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to:
‘945: an augmented reality (AR) platform configured to communicate and exchange data with each of the plurality of AR-capable devices over a network and synchronize sharing of augmented reality content in real time across the plurality of AR-capable devices within the real-world environment, the AR platform comprising a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to:

Application: receive, from each of the AR-capable devices, localization data associated with at least one of an anchor-based system and an image tracking-based system for establishing a location of a respective AR-capable device within the real-world environment;
‘945: receive, from each of the AR-capable devices, localization data for establishing a location of a respective AR-capable device within the real-world environment, wherein each AR-capable device runs a platform-specific localization algorithm to thereby localize the respective AR-capable device within the real-world environment and transmits associated localization data to the AR platform of the system, wherein at least two of the plurality of AR-capable devices comprise different respective platform-specific localization systems running different associated localization algorithms;

Application: process the localization data and assign a location of each AR-capable device relative to a shared, fixed origin point within the real-world environment; and
‘945: process the localization data, including synchronously aligning the localization data from each of the AR-capable devices relative to a shared, fixed origin point within the real-world environment, and assign a location of each AR-capable device relative to the shared, fixed origin point within the real-world environment, wherein the shared, fixed origin point comprises position data associated with a specific physical location and orientation within the real-world environment to which AR content is to be associated; and

Application: transmit AR content to each of the AR-capable devices, the AR content configured to be displayed and rendered by each AR-capable device based, at least in part, on the assigned location of each respective AR-capable device.
‘945: transmit AR content to each of the AR-capable devices, the AR content configured to be displayed and rendered by each AR-capable device based, at least in part, on the assigned location of each respective AR-capable device, in which visual presentation of the AR content is adapted to each respective user's point of view as a result of the assigned location within the real-world environment relative to the shared, fixed origin point.

It is clear that all of the elements of the instant application 18/615,309 (herein ‘309) claim 1 are to be found in U.S. Patent 11,522,945 (herein ‘945) claim 1 (as the instant application ‘309 claim 1 fully encompasses ‘945 claim 1). The difference between application ‘309 claim 1 and patent ‘945 claim 1 lies in the fact that the ‘945 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the ‘945 patent is in effect a “species” of the “generic” invention of ‘309 claim 1. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since ‘309 claim 1 is anticipated by claim 1 of ‘945, it is not patentably distinct from ‘945 claim 1.

In regard to claim 2, see claim 19 of ‘945.
In regard to claim 3, see claim 20 of ‘945.
In regard to claim 4, see claim 17 of ‘945.
In regard to claim 5, see claim 18 of ‘945.
In regard to claim 6, see claim 2 of ‘945.
In regard to claim 7, see claim 3 of ‘945.
In regard to claim 8, see claim 4 of ‘945.
In regard to claim 9, see claim 5 of ‘945.
In regard to claim 10, see claim 6 of ‘945.
In regard to claim 11, see claim 7 of ‘945.
In regard to claim 12, see claim 8 of ‘945.
In regard to claim 13, see claim 9 of ‘945.
In regard to claim 14, see claim 10 of ‘945.
In regard to claim 15, see claim 11 of ‘945.
In regard to claim 16, see claim 12 of ‘945.
In regard to claim 17, see claim 13 of ‘945.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Particularly, claim 18, as a dependent claim of independent claim 1, does not further limit claim 1 because the claim 18 limitation “wherein the real-world environment comprises a teaching-based environment including one or more instructors and one or more students” is already a limitation of claim 1. Appropriate correction is required.

Claim Analysis - 35 USC § 101 (Judicial Exception)

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–18 are directed to statutory subject matter and no 35 USC 101 rejection is applied for the judicial exception. The claims are directed to non-abstract improvements in computer-related technology. The claimed subject matter is integrated into a practical application under Prong 2 of the Step 2A analysis described in MPEP 2106.04(d). A claim is non-statutory when it is directed to a judicial exception (e.g., one of mathematical concepts, mental processes, or certain methods of organizing human activity) without significantly more. The claimed invention is not directed to a judicial exception. Instead, the claimed invention is directed to a technological improvement to augmented reality platforms and, more particularly, to an augmented view of a real-world environment, wherein the real-world environment comprises a teaching-based environment and the respective users comprise one or more instructors and/or one or more students, wherein the AR platform is configured to communicate with a plurality of augmented reality (AR)-capable devices for providing respective users with an augmented view of a real-world environment and exchange data over a network, the AR platform comprising a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to: receive, from each of the AR-capable devices of diverse platforms, localization data associated with at least one of an anchor-based system and an image tracking-based system for establishing a location of a respective AR-capable device within the real-world environment; further process the localization data, including synchronously aligning the localization data from each of the AR-capable devices relative to a shared, fixed origin point within the real-world environment, and assign a location of each AR-capable device relative to a shared, fixed origin point within the real-world environment; and further transmit AR content to each of the AR-capable devices, the AR content configured to be displayed and rendered by each AR-capable device based, at least in part, on the assigned location of each respective AR-capable device. The ordered steps of the claim language impose meaningful limits on the scope of the claims and provide an asserted improvement to augmented reality systems by providing synchronized sharing of AR content in real time and across multiple AR-capable devices within a controlled, physical environment or space.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–8, 10–12, 14–16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (U.S. 2021/0256768 A1; herein referred to as Zhao) in view of Aloisio et al. (U.S. 11,887,505 B1; herein referred to as Aloisio).

In regard to claim 1, Zhao teaches A system comprising (see abstract – “ . . .
A cross reality system enables any of multiple devices to efficiently access previously stored maps. Both stored maps and tracking maps used by portable devices may have any of multiple types of location metadata associated with them. The location metadata may be used to select a set of candidate maps for operations, such as localization or map merge that involve finding a match between a location defined by location information from a portable device and any of a number of previously stored maps . . .”): a plurality of augmented reality (AR)-capable devices (see ¶ [0005] “ . . . an XR system may build a representation of the physical world around a user of the system. This representation, for example, may be constructed by processing images acquired with sensors on a wearable device that forms a part of the XR system. In such a system, a user might perform an initialization routine by looking around a room or other physical environment in which the user intends to use the XR system until the system acquires sufficient information to construct a representation of that environment. As the system operates and the user moves around the environment or to other environments, the sensors on the wearable devices might acquire additional information to expand or update the representation of the physical world. . . .” for providing respective users with an augmented view (see Fig. 3, ¶ [0044] “ . . . a schematic diagram illustrating data flow for a single user in an AR system configured to provide an experience to the user of AR content interacting with a physical world . . .”) of a real-world environment (see Fig. 2, ¶ [0027] “ . . . a portable electronic device configured to operate within a three-dimensional (3D) environment and display virtual content of a cross reality system is provided. The portable device comprises at least one processor; an operating system executing on the at least one processor, the operating system providing a geo-location application programming interface (API); computer executable instructions configured to, when executed by the at least one processor: form a map of the 3D environment based on a local coordinate frame as the portable electronic device moves in the 3D environment, the map comprising persistent locations; an augmented reality application comprising computer executable instructions configured to, when executed by the at least one processor: query the geo-location API; and store geo-location information received through the geo-location API as location metadata associated with the persistent locations. . . .”), an augmented reality (AR) platform configured to communicate and exchange data with each of the plurality of AR-capable devices over a network (see ¶ [0016] “ . . . a cross reality system that supports specification of a position of virtual content relative to stored maps is provided. The system comprises one or more computing devices configured for network communication with one or more portable electronic devices, comprising: a communication component configured to receive, from a portable electronic device, information about a set of features in a three-dimensional (3D) environment of the portable electronic device, position information for the set of features expressed in a first coordinate frame . . .”), the AR platform comprising a hardware processor (see ¶ [0134] “ . . . FIGS. 1 and 2 illustrate scenes with virtual content displayed in conjunction with a portion of the physical world. 
For purposes of illustration, an AR system is used as an example of an XR system. FIGS. 3-6B illustrate an exemplary AR system, including one or more processors, memory, sensors and user interfaces that may operate according to the techniques described herein . . .”) coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the AR platform to (see ¶ [0030] “ . . . According to one aspect a cloud service for a cross reality system is provided. The service comprises: computer storage media storing: a database of maps; a plurality of types of location metadata associated with maps of the database of maps, the types of location metadata comprising wireless finger prints and geo-location information; one or more computing devices configured for network communication with a plurality of portable electronic devices, the one or more computing devices comprising non-transitory computer-readable media comprising computer executable instructions that, when executed perform a method comprising: receiving, from a portable electronic device of the plurality of portable electronic devices, a communication comprising position information for the portable electronic devices, the one or more computing devices comprising non-transitory computer-readable media comprising computer executable instructions that, when executed perform a method . . . “): receive, from each of the AR-capable devices, localization data (e.g. location metadata) (see ¶ [0016] “ . . . location metadata associated with a position of the portable electronic device in the 3D environment; a localization component, connected to the communication component, the localization component configured to: select, based on the received location metadata and location metadata associated with the stored maps, a set of stored maps; identify a stored map from the set of stored maps based on matching the received set of features to features of the identified stored map, wherein the identified stored map comprises a second coordinate frame; generate a transform between the first coordinate frame and the second coordinate frame based on a computed alignment between the received set of features and the matching set of features in the identified stored map; and send the transform to the portable electronic device. . . “) associated with at least one of an anchor-based system (see Fig. 16, ¶ [0304] “ . . . an anchor identification system 128 . . .”) and an image tracking-based system for establishing a location of a respective AR-capable device within the real-world environment (see Fig. 7, ¶ [0207] “ . . . FIG. 7 depicts an exemplary tracking map 700, according to some embodiments. The tracking map 700 may provide a floor plan 706 of physical objects in a corresponding physical world, represented by points 702. In some embodiments, a map point 702 may represent a feature of a physical object that may include multiple features. For example, each corner of a table may be a feature that is represented by a point on a map. The features may be derived from processing images, such as may be acquired with the sensors of a wearable device in an augmented reality system. The features, for example, may be derived by processing an image frame output by a sensor to identify features based on large gradients in the image or other suitable criteria. Further processing may limit the number of features in each frame. For example, processing may select features that likely represent persistent objects. 
One or more heuristics may be applied for this selection . .”); process the localization data (see ¶ [0012] “ . . . communicating over the network to the localization service comprises transmitting the plurality of data structures from the buffer as a localization request . . .”) and assign a location of each AR-capable device (see ¶ [0121] “ . . . Sharing data about the physical world among multiple devices may enable shared user experiences of virtual content. Two XR devices that have access to the same stored map, for example, may both localize with respect to the stored map. Once localized, a user device may render virtual content that has a location specified by reference to the stored map by translating that location to a frame of reference maintained by the user device. The user device may use this local frame of reference to control the display of the user device to render the virtual content in the specified location . . .”) relative to a shared, fixed origin point (see ¶ [0123] “ . . . components may apply transformations that transform information captured or described in relation to one reference frame into another reference frame. For example, sensors may be attached to a head mounted display such that the data read from that sensor indicates locations of objects in the physical world with respect to the head pose of the wearer. One or more transformations may be applied to relate that location information to the coordinate frame associated with a persistent environment map . . .”) within the real-world environment (see ¶ ¶ [0124-0125] “ . . . The stored map may be represented in a canonical form that may be related to a local frame of reference on each XR device. The relationship between the canonical map and a local map for each device may be determined through a localization process. The relationship may be used to enable the device to use its local map to render content in locations specified relative to the canonical map. Alternatively or additionally, a relationship between a local map and a canonical map may enable merging of the local map into a set of canonical maps—either by updating or extending an existing canonical map or by determining that the local map depicts a region for which no canonical map exists and adding the local map as a new canonical map to a set of stored canonical maps. The localization process may be performed on each XR device based on a set of canonical maps selected and sent to the device. Alternatively or additionally, localization may be performed by a localization service, which may be implemented on remote processors, such as might be deployed in the cloud . . . “); and transmit AR content to each of the AR-capable devices (see ¶ [0136] “ . . . The first XR device 12.1 may include a PCF integration unit similar to the PCF integration unit 1300 of the second XR device 12.2. When the map transmitter 122 transmits the canonical map 120 to the first XR device 12.1, the map transmitter 122 may transmit the persistent poses 1332 and PCFs 1330 associated with the canonical map 120 and originating from the second XR device 12.2. The first XR device 12.1 may store the PCFs and the persistent poses within a data store on a storage device of the first XR device 12.1. The first XR device 12.1 may then make use of the persistent poses and the PCFs originating from the second XR device 12.2 for image display relative to the PCFs. 
Additionally or alternatively, the first XR device 12.1 may retrieve, generate, make use, upload, and download PCFs and persistent poses in a manner similar to the second XR device 12.2 as described above . . .”), the AR content configured to be displayed and rendered by each AR-capable device (see ¶ [0302] “ . . . Such an AR scene may be achieved with a system that builds maps of the physical world based on tracking information, enable users to place AR content in the physical world, determine locations in the maps of the physical world where AR content are placed, preserve the AR scenes such that the placed AR content can be reloaded to display in the physical world during, for example, a different AR experience session, and enable multiple users to share an AR experience. The system may build and update a digital representation of the physical world surfaces around the user. This representation may be used to render virtual content so as to appear fully or partially occluded by physical objects between the user and the rendered location of the virtual content, to place virtual objects, in physics based interactions, and for virtual character path planning and navigation, or for other operations in which information about the physical world is used. . . “) based, at least in part, on the assigned location of each respective AR-capable device (see ¶ [0171] “ . . . FIG. 5A illustrates a user 530 wearing an AR display system rendering AR content as the user 530 moves through a physical world environment 532 (hereinafter referred to as “environment 532”). The information captured by the AR system along the movement path of the user may be processed into one or more tracking maps. The user 530 positions the AR display system at positions 534, and the AR display system records ambient information of a passable world (e.g., a digital representation of the real objects in the physical world that can be stored and updated with changes to the real objects in the physical world) relative to the positions 534. That information may be stored as poses in combination with images, features, directional audio inputs, or other desired data. The positions 534 are aggregated to data inputs 536, for example, as part of a tracking map, and processed at least by a passable world module 538, which may be implemented, for example, by processing on a remote processing module 572 of FIG. 4. In some embodiments, the passable world module 538 may include the head pose component 514 and the world reconstruction component 516, such that the processed information may indicate the location of objects in the physical world in combination with other information about physical objects used in rendering virtual content . . . “). Zhao fails to explicitly teach but Aloisio teaches wherein the real-world environment comprises a teaching-based environment (see Col 1: Lines 43-60 “ . . . The present disclosure describes techniques for implementing a system that deploys and monitors training simulations and exercises across a network, and that enables the development and execution of virtual training courses. Virtual reality (VR) based simulations and training exercises enable highly effective, compelling training mechanisms that are more accessible to trainees and more efficient for instructors to manage. The disclosed techniques have the potential to improve many types and aspects of training, particularly operational training, by reducing costs and improving trainee throughput and effectiveness. 
These techniques provide a learning management system for virtual training courses that are deployed and executed in a web browser, and which may in some cases leverage three-dimensional (3D) web rendering technology in conjunction with a training platform to utilize agents that are deployed on an exercise network to collect exercise data and objectively monitor cyber training events . . .”) and the respective users comprise one or more instructors and/or one or more students (see Fig. 3 Col 13: Lines 48-67; Col 14: Lines 1 – 54 “ . . . FIG. 3 is a diagram illustrating an example trainee computing system 20A communicatively coupled to a head-mounted display 62 of a user (e.g., trainee) 60, in accordance with one or more aspects of the present disclosure . . . In addition to rendering the received content for output at a web browser of trainee computing system 20A, client web rendering module 50 may also render the received content for output at head-mounted display 62 of trainee 60. Head-mounted display 62 may be communicatively coupled (e.g., wirelessly coupled) to client web rendering module 50 of trainee computing system 20A. Head-mounted display 62 may be a device (e.g., WebVR enabled device) that is configured to communicate with client web rendering module 50 via one or more application programming interfaces (API's) to receive rendered content for display to trainee 60. Client web rendering module 50 may be configured, for example, to detect the presence of head-mounted display 62 and query its device capabilities. Head-mounted display 62 may be configured to display rendered content provided by client web rendering module 50 at a determined frame rate during a training exercise. In addition, during the exercise, client web rendering module 50 may receive information about the position and/or orientation of head-mounted display 62 as trainee 60 moves and interacts with content that is displayed on head-mounted display 62 . . . trainee computing system 20A may include one or more other modules, in addition to client web rendering module 50, that provide support for augmented reality and/or mixed reality. In these examples, these one or more other modules may render content received from content provider module 10 (FIG. 1), where the rendered content comprises one or more of augmented content or mixed reality content, which may be output in a web browser of trainee computing system 20A and/or at head-mounted display 62. This content may correspond to the one or more scenes of an at least partially virtual environment and an at least partially real-world environment that is output for display. In some instances, a given training exercise may comprise a team-based exercise in which multiple different trainees participate. In these instances, the trainees may be included in one or more teams, and each trainee may use one of trainee computing systems 20. Trainee computing system 20A may be one example of a trainee computing system that is used by each a trainee in these team-based exercises, and each trainee (e.g., user 60) may wear a corresponding head-mounted display 62 while participating in these exercises. The agents 14 may then provide corresponding interaction data 44 back to performance monitoring module 4 for each trainee, and evaluation dashboard module 12 may include trainee and team-based information in corresponding dashboards that are output for display at evaluator computing system 22 (such as shown in the example of FIG. 8). . . 
.”);

It would have been obvious to one with ordinary skill in the art before the effective filing date of the applicant’s invention to incorporate a system and method for implementing a system that deploys and monitors training simulations and exercises across a network, and that enables the development and execution of virtual training, using augmented reality devices, as taught by Aloisio, into a system and method for a cross reality system that uses augmented reality devices to provide cross reality scenes to the users of the augmented reality devices, as taught by Zhao. Such incorporation provides that the cross-reality scenes can be used in an education and training environment.

In regard to claim 2, the combination of Zhao and Aloisio teaches wherein the anchor-based system comprises a cloud anchor system (see Zhao ¶ [0664] “ . . . the transformation may provide a transformation between a PCF, in a format used by cloud services 7130 to represent locations in the 3D environment, into an anchor in a format used by native AR framework 7160. Such a transformation may be applied, for example, in app engine 7185, to transform information specifying the location of virtual content generated by application 7190 into a format that may be supplied to native AR framework 7160 through its native APIs. The transformation may be applied in reverse, for position information from native AR framework 7160 being supplied to other components implementing the XR system . . . “).

In regard to claim 3, the combination of Zhao and Aloisio teaches wherein the anchor-based system comprises a persistent anchor system (see Zhao ¶ [0654] “ . . . identify persistent points that serve as anchors for virtual content. They may also provide interfaces through which applications may specify the virtual content, and its location with respect to the anchors. These devices, however, may not enable applications to access the WiFi chipset to obtain a WiFi signature, but may provide an API through which geo-location information may be provided . . .”).

In regard to claim 4, the combination of Zhao and Aloisio teaches wherein processing of the localization data comprises synchronous alignment of the localization data from each of the AR-capable devices relative to the shared, fixed origin point within the real-world environment (see Zhao Fig. 20A – C, ¶ [0316] “ . . . FIGS. 20A to 20C are schematic diagrams illustrating an example of establishing and using a persistent coordinate frame. FIG. 20A shows two users 4802A, 4802B with respective local tracking maps 4804A, 4804B that have not localized to a canonical map. The origins 4806A, 4806B for individual users are depicted by the coordinate system (e.g., a world coordinate system) in their respective areas. These origins of each tracking map may be local to each user, as the origins are dependent on the orientation of their respective devices when tracking was initiated . . .”).

In regard to claim 5, the combination of Zhao and Aloisio teaches wherein the AR content is configured to be displayed and rendered by each AR-capable device such that one or more digital images associated with the AR content appears at an identical location within the real-world environment (see Zhao Fig. 5A, ¶ [0171] “ . . . FIG. 5A illustrates a user 530 wearing an AR display system rendering AR content as the user 530 moves through a physical world environment 532 (hereinafter referred to as “environment 532”). The information captured by the AR system along the movement path of the user may be processed into one or more tracking maps. The user 530 positions the AR display system at positions 534, and the AR display system records ambient information of a passable world (e.g., a digital representation of the real objects in the physical world that can be stored and updated with changes to the real objects in the physical world) relative to the positions 534. That information may be stored as poses in combination with images, features, directional audio inputs, or other desired data. The positions 534 are aggregated to data inputs 536, for example, as part of a tracking map, and processed at least by a passable world module 538, which may be implemented, for example, by processing on a remote processing module 572 of FIG. 4. In some embodiments, the passable world module 538 may include the head pose component 514 and the world reconstruction component 516, such that the processed information may indicate the location of objects in the physical world in combination with other information about physical objects used in rendering virtual content . . . “).

In regard to claim 6, the combination of Zhao and Aloisio teaches wherein the AR-capable devices comprise at least one of a smartphone, a tablet, and a wearable headset or eyewear (see Zhao Fig. 3, ¶ [0141] “ . . . FIG. 3 depicts an AR system 502 configured to provide an experience of AR contents interacting with a physical world 506, according to some embodiments. The AR system 502 may include a display 508. In the illustrated embodiment, the display 508 may be worn by the user as part of a headset such that a user may wear the display over their eyes like a pair of goggles or glasses. At least a portion of the display may be transparent such that a user may observe a see-through reality 510. The see-through reality 510 may correspond to portions of the physical world 506 that are within a present viewpoint of the AR system 502, which may correspond to the viewpoint of the user in the case that the user is wearing a headset incorporating both the display and sensors of the AR system to acquire information about the physical world . . . “).

In regard to claim 7, the combination of Zhao and Aloisio teaches wherein the AR platform is configured to receive additional data from each of the AR-capable devices associated with at least one of a point of gaze of the associated user within the real-world environment (see Zhao ¶ [0177] “ . . . A depth sensor, for example, may quickly determine whether objects have entered the field of view of the user, either as a result of motion of those objects or a change of pose of the user . . .”), a field of view of the user within the real-world environment (see Zhao ¶ [0157] “ . . . component 520 may be configured to output updates when a representation in a region of interest of the physical world changes. That region of interest, for example, may be set to approximate a portion of the physical world in the vicinity of the user of the system, such as the portion within the view field of the user, or is projected (predicted/determined) to come within the view field of the user . . . “), and a physical setting and objects within the real-world environment (see Zhao ¶ [0156] “ . . . The reconstruction 518 may be used for AR functions, such as producing a surface representation of the physical world for occlusion processing or physics-based processing. This surface representation may change as the user moves or objects in the physical world change. Aspects of the reconstruction 518 may be used, for example, by a component 520 that produces a changing global surface representation in world coordinates, which may be used by other components . . . “).

In regard to claim 8, the combination of Zhao and Aloisio teaches wherein each of the AR-capable devices comprises one or more sensors (see Zhao ¶ [0007] “ . . . a portable electronic device configured to operate within a cross reality system is provided. The portable electronic device comprises one or more sensors configured to capture information about a three-dimensional (3D) environment, the captured information comprising a plurality of images . . .”) selected from the group consisting of a camera (see Zhao ¶ [0146] “ . . . the head pose tracking component may compute relative position and orientation of an AR device to physical objects based on visual information captured by cameras and inertial information captured by IMUs. The head pose tracking component may then compute a headpose of the AR device by, for example, comparing the computed relative position and orientation of the AR device to the physical objects with features of the physical objects. In some embodiments, that comparison may be made by identifying features in images captured with one or more of the sensors 522 that are stable over time such that changes of the position of these features in images captured over time can be associated with a change in headpose of the user . . .”), a motion sensor (see Zhao ¶ [0140] “ . . . the system may include one or more sensors that may measure parameters of the physical portions of the scene, including position and/or motion of the user within the physical portions of the scene . . . “), and a global positioning satellite (GPS) sensor (see Zhao ¶ [0131] “ . . . where direct access to a GPS data is available, GPS data may be given higher priority . . .”).

In regard to claim 10, the combination of Zhao and Aloisio teaches wherein each AR-capable device comprises a display unit for providing respective users with an augmented view of the real-world environment (see Zhao ¶ [0159] “ . . . an AR experience may be provided to a user through an XR device, which may be a wearable display device, which may be part of a system that may include remote processing and or remote data storage and/or, in some embodiments, other wearable display devices worn by other users. FIG. 4 illustrates an example of system 580 (hereinafter referred to as “system 580”) including a single wearable device for simplicity of illustration. The system 580 includes a head mounted display device 562 (hereinafter referred to as “display device 562”), and various mechanical and electronic modules and systems to support the functioning of the display device 562. The display device 562 may be coupled to a frame 564, which is wearable by a display system user or viewer 560 (hereinafter referred to as “user 560”) and configured to position the display device 562 in front of the eyes of the user 560. According to various embodiments, the display device 562 may be a sequential display. The display device 562 may be monocular or binocular. In some embodiments, the display device 562 may be an example of the display 508 in FIG. 3 . . . “).

In regard to claim 11, the combination of Zhao and Aloisio teaches wherein the display unit comprises a lens (e.g. cameras) (see Zhao ¶ [0178] “ . . . world cameras 552 record a greater-than-peripheral view to map and/or otherwise create a model of the environment 532 and detect inputs that may affect AR content. In some embodiments, the world camera 552 and/or camera 553 may be grayscale and/or color image sensors, which may output grayscale and/or color image frames at fixed time intervals . . .”) and a digital display (see Zhao ¶ [0171] “ . . . FIG. 5A illustrates a user 530 wearing an AR display system rendering AR content as the user 530 moves through a physical world environment 532 (hereinafter referred to as “environment 532”). The information captured by the AR system along the movement path of the user may be processed into one or more tracking maps. The user 530 positions the AR display system at positions 534, and the AR display system records ambient information of a passable world (e.g., a digital representation of the real objects in the physical world that can be stored and updated with changes to the real objects in the physical world) relative to the positions 534 . . . “).

In regard to claim 12, the combination of Zhao and Aloisio teaches wherein, when wearing the headset or eyewear (see Zhao ¶ [0145] “ . . . The system may also acquire information about the headpose (or “pose”) of the user with respect to the physical world. In some embodiments, a head pose tracking component of the system may be used to compute headposes in real time. The head pose tracking component may represent a headpose of a user in a coordinate frame with six degrees of freedom including, for example, translation in three perpendicular axes (e.g., forward/backward, up/down, left/right) and rotation about the three perpendicular axes (e.g., pitch, yaw, and roll). In some embodiments, sensors 522 may include inertial measurement units (“IMUs”) that may be used to compute and/or determine . . .
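A recurring limitation in both the double patenting and §103 discussions is processing localization data and assigning each device a location relative to a shared, fixed origin point. For intuition only, here is a minimal sketch of that kind of frame alignment, assuming each device can estimate where the shared origin sits in its own local coordinate frame; the names, the 2D simplification, and the math below are hypothetical illustrations, not taken from the application, the ‘282 or ‘945 patents, or Zhao:

```python
# Illustrative sketch only: re-expressing device-local poses in a shared,
# fixed origin frame. All names and values are hypothetical.
import numpy as np

def pose_to_matrix(position, yaw):
    """Build a 2D homogeneous rigid transform from a position and heading."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, position[0]],
                     [s,  c, position[1]],
                     [0,  0, 1.0]])

def device_pose_in_origin_frame(origin_in_local, device_pose_local):
    """origin_in_local maps origin frame -> local frame; invert it to get
    local -> origin, then re-express the device pose in the origin frame."""
    local_to_origin = np.linalg.inv(origin_in_local)
    return local_to_origin @ device_pose_local

# Two devices with different local frames (e.g., one anchor-based, one
# image-tracking-based) that both observe the same physical origin point:
origin_seen_by_a = pose_to_matrix((2.0, 1.0), yaw=0.0)
origin_seen_by_b = pose_to_matrix((-1.0, 3.0), yaw=np.pi / 2)

pose_a_local = pose_to_matrix((2.0, 2.0), yaw=0.0)          # device A's own frame
pose_b_local = pose_to_matrix((-1.0, 2.0), yaw=np.pi / 2)   # device B's own frame

pose_a_shared = device_pose_in_origin_frame(origin_seen_by_a, pose_a_local)
pose_b_shared = device_pose_in_origin_frame(origin_seen_by_b, pose_b_local)

# Both poses are now in one origin-anchored frame, so content placed at the
# shared origin can be rendered consistently from each assigned location.
print(np.round(pose_a_shared, 3))
print(np.round(pose_b_shared, 3))
```

Once every pose is expressed in the origin-anchored frame, AR content tied to the shared origin renders consistently from each user's assigned location, which is the effect the claim language describes.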

Prosecution Timeline

Mar 25, 2024
Application Filed
Mar 15, 2025
Non-Final Rejection — §103, §112, §DP
Sep 22, 2025
Response Filed
Dec 19, 2025
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602457
PREVENTING ACCIDENTAL PASSWORD DISCLOSURE
2y 5m to grant • Granted Apr 14, 2026
Patent 12585739
IMAGE FORMING DEVICE TRANSMITTING DATA FOR DISPLAYING AUTHENTICATION CHANGING WEB PAGE
2y 5m to grant • Granted Mar 24, 2026
Patent 12572631
System and Method for Watermarking Data for Tracing Access
2y 5m to grant • Granted Mar 10, 2026
Patent 12562921
CERTIFICATE ENROLLMENT FOR SHARED NETWORK ELEMENT
2y 5m to grant • Granted Feb 24, 2026
Patent 12554557
PRECISION GEOMETRY CLIENT FOR THIN CLIENT APPLICATIONS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 99% (+36.9%)
Median Time to Grant: 2y 12m
PTA Risk: Moderate
Based on 444 resolved cases by this examiner. Grant probability derived from career allow rate.
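
These headline figures reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (illustrative only; the dashboard's exact interview-cohort methodology is not disclosed on this page, and the cap at the displayed 99% is our assumption):

```python
# Illustrative arithmetic behind the projection figures shown above.
granted, resolved = 382, 444

career_allow_rate = granted / resolved              # 0.8604 -> displayed "86%"
print(f"Career allow rate: {career_allow_rate:.1%}")

# The dashboard reports a +36.9-point lift on resolved cases with an
# interview; adding it to the base rate exceeds 100%, so it is presumably
# capped at the displayed 99% grant probability.
interview_lift = 0.369
with_interview = min(career_allow_rate + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")
```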
