Prosecution Insights
Last updated: April 19, 2026
Application No. 18/522,575

Dynamic Artificial Reality Coworking Spaces

Final Rejection §102

Filed: Nov 29, 2023
Examiner: AUGUSTINE, NICHOLAS
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Meta Platforms Technologies, LLC
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (596 granted / 814 resolved; +18.2% vs TC avg)
Interview Lift: +27.8%, strong (measured over resolved cases with interview)
Avg Prosecution: 3y 9m typical timeline (44 cases currently pending)
Total Applications: 858 career history (across all art units)
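The tiles above are simple ratios over this examiner's resolved cases. A minimal sketch of the arithmetic, using only the figures reported above (variable names are illustrative, and the Tech Center baseline is back-computed from the reported +18.2% delta rather than taken from PTO data):

```python
# Career statistics for this examiner, as reported on the cards above.
granted = 596    # applications allowed
resolved = 814   # resolved dispositions (allowed + abandoned)

# Career allow rate: 596 / 814, shown as 73% on the card.
allow_rate = granted / resolved

# The "+18.2% vs TC avg" delta implies a Tech Center baseline near 55%.
tc_avg = allow_rate - 0.182

print(f"allow rate: {allow_rate:.1%}")         # allow rate: 73.2%
print(f"implied TC average: {tc_avg:.1%}")     # implied TC average: 55.0%
```

The 73% headline is the rounded career allow rate; the "+18.2% vs TC avg" delta is the same rate compared against the Tech Center baseline.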

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 36.2% (-3.8% vs TC avg)
§102: 50.1% (+10.1% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

Tech Center average is an estimate. Based on career data from 814 resolved cases.
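The deltas in the table can be checked directly against the examiner rates. Back-computing the baseline (examiner rate minus delta) gives the same 40.0% for every statute, which is consistent with the footnote's caveat that the Tech Center average is an estimate rather than a per-statute measurement. A small sketch, with all figures copied from the table above:

```python
# Per-statute rejection rates for this examiner (percent of office
# actions citing each statute) and the reported deltas vs the TC average.
examiner_rate = {"101": 9.6, "103": 36.2, "102": 50.1, "112": 2.3}
delta_vs_tc = {"101": -30.4, "103": -3.8, "102": 10.1, "112": -37.7}

# Back-compute the implied TC baseline for each statute: rate - delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

print(implied_tc_avg)
# {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```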

Office Action

§102
DETAILED ACTION

A. This action is in response to the following communications: Amendment filed 09/24/2025. This action is made FINAL.

B. Claims 1-18 and 21-23 remain pending.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-18 and 21-23 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Agarawala, Anand et al. (US Pub. 2021/0165557 A1), herein referred to as "Agarawala".

As for claims 1, 14, and 18, Agarawala teaches a method for providing a dynamic artificial reality coworking space on an artificial reality device; the corresponding computer-readable storage medium of claim 14 storing instructions that, when executed by a computing system, cause the computing system to perform a process for providing a dynamic artificial reality coworking space on an artificial reality device; and the computer system of claim 18 comprising one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform a process (fig. 44, par. 205-207: hardware environment used to execute the invention), the method/process comprising:

receiving one or more images, captured by the artificial reality device, of a physical workspace in a real-world environment of a user of the artificial reality device, wherein the physical workspace of the user includes a first real-world object (par. 16: FIG. 2A is an example usage of the AR system, according to an embodiment. As may be seen in the example of FIG. 2, different data may be "floating" in the real-world physical environment at different height and depth levels. The data may include any 2D data, such as images, word processing documents, HTML or other web pages, videos, etc.);

mapping, using the one or more images, the physical workspace of the user to a virtual workspace in the dynamic artificial reality coworking space, such that a surface of the first real-world object corresponds to a surface of a first virtual object in the virtual workspace (par. 22-23: FIG. 2B illustrates, through glasses 204, that the system 206 may generate an augmented environment in which a virtual mesh 208 overlays a physical wall 210 of a room in which the user is using the AR glasses 204; however, system 206 may also display the virtual mesh 208 on any surface);

receiving an instruction to combine A) the virtual workspace with B) an other virtual workspace, to create a combined virtual workspace, wherein the other virtual workspace is mapped to an other physical workspace of an other user, such that a surface of a second real-world object corresponds to a surface of a second virtual object in the other virtual workspace (par. 48: users may also import or add new data elements into an existing workspace or environment; in an embodiment, a user may import data elements from a different room or workspace and merge them together with another workspace to create a new workspace of data elements which may or may not be anchored to a particular room or physical environment; par. 54: the AR system allows multiple users who may be co-located in the same room to share and interact with the same display elements within the virtual or augmented reality); and

in response to the instruction, remapping the physical workspace of the user and the other physical workspace of the other user to the combined virtual workspace, such that the surface of the first real-world object and the surface of the second real-world object correspond to separate portions of a same surface of a third virtual object in the combined virtual workspace (par. 50; fig. 8B shows how objects are mapped in a physical space, and this mapping is brought to the shared environment of fig. 9, par. 51-52, such that the physical space is not remapped to a virtual space with virtual object meshes mapped to surfaces for users to collaborate with; other examples are abundant throughout the disclosure: par. 12, users share 2D content to a 3D area; par. 32, the first user may share a particular virtual wall or mesh with a second user without sharing all of the virtual walls of a particular room; par. 54-55, the AR system allows multiple users who may be co-located in the same room to share and interact with the same display elements within the virtual or augmented reality, other remote users (who may not be co-located) participating in the AR workspace or session may also have access to data elements brought from a 2D computing device into the AR environment, and these remote users may similarly drag and drop their own data files and elements from their own computing devices into the AR environment, which would then be made visible to the other users in the same AR session or workspace; par. 126-131 describe in detail three users in two rooms sharing content from different surfaces, such as mobile phone and laptop screens, onto room walls; fig. 48 depicts a user selecting content to share onto an invisible wall; fig. 58 illustrates an example embodiment of the AR/VR system in which a user may be working on an augmented desktop or workspace and, without the use of a physical computer on the table, may (using the AR-enabled headset) retrieve and interact with various documents, applications, and files stored across one or more remotely located computing devices communicating with the AR system; lastly, fig. 61, discussed in par. 223-258, shows various scenarios of meeting-space save and load functionality which allow more than one user to share content from their personal device onto a shared virtual wall; thus user A can interact with their own virtual wall or virtual desktop and share an item onto a shared wall with user B, and user B can do the same, thereby providing two separate physical walls (user A's and user B's own physical spaces) interacting with their own virtual desktops and sharing content from said virtual desktop/space onto a collaboration wall in the saved meeting space 6106).

As for claim 2, Agarawala teaches the method of claim 1, further comprising: assigning the artificial reality device and an other artificial reality device, of the other user, to a cluster; and receiving and transmitting audio signals, between the artificial reality device and the other artificial reality device, within the cluster (par. 104: portions of the client-side functionality may be handled by network-based or cloud-based components or devices; this client-side functionality may include user management functions (login, register, avatar settings, friend list), session management functionality (real-time networking, joining/leaving room, state persistence), WebRTC (real-time communication) integration (ability to send and receive video streams, ability to establish peer-to-peer data streams), and internal browser standalone VR functionality (browse webpages, tabs rendered as display elements in AR/VR environments, video/audio streaming, scroll replication)).

As for claim 3, Agarawala teaches the method of claim 2, wherein the audio signals are not transmitted, in the dynamic artificial reality coworking space, to artificial reality devices outside of the cluster (par. 62: perspective sound/audio is used to denote real-world sound from a location, and if a user is outside a threshold then no sound can be heard).

As for claim 4, Agarawala teaches the method of claim 1, wherein the dynamic artificial reality coworking space includes multiple virtual workspaces each corresponding to a respective artificial reality device, and wherein the method further comprises: assigning the artificial reality device and an other artificial reality device, of the other user, to a cluster, wherein the combined virtual workspace is visible on one or more artificial reality devices of the multiple artificial reality devices outside of the cluster (par. 71-72: a cluster of AR/VR devices can collaborate in a shared environment for a meeting).

As for claim 5, Agarawala teaches the method of claim 1, wherein the dynamic artificial reality coworking space includes multiple virtual workspaces each corresponding to a respective artificial reality device, and wherein the method further comprises: assigning the artificial reality device and an other artificial reality device, of the other user, to a cluster, wherein the combined virtual workspace is not visible on one or more artificial reality devices of the multiple artificial reality devices outside of the cluster (par. 225: a user has to be logged in and authenticated to join a meeting "cluster" of AR/VR devices for collaborating in shared spaces).

As for claim 6, Agarawala teaches the method of claim 1, wherein the combined virtual workspace is larger than the virtual workspace of the user (par. 236: different size rooms).

As for claim 7, Agarawala teaches the method of claim 1, wherein the instruction is made by the user via a gesture detected by the artificial reality device (par. 12: gestures are used for interaction with the shared AR/VR space and the objects rendered within).

As for claim 8, Agarawala teaches the method of claim 1, further comprising: receiving a selection, from the artificial reality device, to exit the combined virtual workspace; and remapping the physical workspace of the user to the virtual workspace of the user, wherein an other artificial reality device, of the other user, renders a shrunken combined virtual workspace (par. 299: thumbnails are used to denote "shrunken" (minimized or unselected) workspaces that the user can select to join).

As for claims 9 and 16, Agarawala teaches the method of claim 1, further comprising: receiving, from the artificial reality device, a selection of an avatar of the other user, the other virtual workspace of the other user, or both; and transmitting an invitation to create the combined virtual workspace to an other artificial reality device of the other user, wherein the instruction is generated upon acceptance of the invitation by the other artificial reality device of the other user (fig. 11; par. 59: operation of the AR/VR system in which both users have created avatars for themselves and are displayed in a physical space while interacting with virtual objects with their avatars in a shared environment).

As for claim 10, Agarawala teaches the method of claim 1, further comprising: extending the virtual workspace of the user into the combined virtual workspace (fig. 11 is just one of many examples of a user joining a shared space, much like fig. 10B shows a user joining a meeting; par. 58-59).

As for claim 11, Agarawala teaches the method of claim 10, wherein the dynamic artificial reality coworking space includes multiple other virtual workspaces of other users, and wherein the virtual workspace is extended into the combined virtual workspace through an outer virtual wall of the dynamic artificial reality coworking space, such that the combined virtual workspace does not encroach on the multiple other virtual workspaces of the other users (par. 138: one example is combining locations to form one location).

As for claims 12 and 17, Agarawala teaches the method of claim 1, wherein the dynamic artificial reality coworking space includes multiple other virtual workspaces of other users, and wherein at least one of the other users meeting predefined criteria can join the combined virtual workspace such that at least one of the multiple other virtual workspaces corresponding to the at least one of the other users is joined to the combined virtual workspace (par. 56-58: AR workspaces that the user can join and combine for a shared experience with other users).

As for claim 13, Agarawala teaches the method of claim 1, further comprising: mapping one or more video conference feeds to the combined virtual workspace (par. 59: users' avatars are combined in the same workspace; par. 301: a video feed can be displayed in the workspace and take the place of an avatar wherever an avatar is mentioned in the disclosure).

As for claim 15, Agarawala teaches the computer-readable storage medium of claim 14, wherein the surface of the first real-world object corresponds to a surface of a first virtual object in the virtual workspace, and wherein the surface of the second real-world object corresponds to a surface of a second virtual object in the other virtual workspace (par. 143: matching the surface of a virtual workspace to a physical workspace for an augmented shared workspace to be rendered for one or more users in a shared space).

As for claim 21, Agarawala teaches the method of claim 1, wherein the first real-world object is a first physical desk, wherein the first virtual object is a first virtual desk, wherein the second real-world object is a second physical desk, wherein the second virtual object is a second virtual desk, wherein the third virtual object is a virtual table, wherein the surface of the first physical desk corresponds to a first portion of the surface of the virtual table, and wherein the surface of the second physical desk corresponds to a second portion of the surface of the virtual table, the second portion being separate from the first portion (fig. 22 depicts virtual desktops, desks, computers, and other users all utilizing different virtual surfaces; par. 177 describes VR desktops; fig. 40 depicts a virtual desktop).

As for claim 22, Agarawala teaches the method of claim 1, wherein the dynamic artificial reality coworking space is a virtual reality environment (fig. 22 depicts a coworking space with VR desktops as shown in fig. 40).

As for claim 23, Agarawala teaches the method of claim 1, wherein one or more first actions made by the user relative to the first real-world object are made relative to a first portion of the separate portions of the same surface of the third virtual object, and wherein one or more second actions made by the other user relative to the second real-world object are made relative to a second portion of the separate portions of the same surface of the third virtual object (par. 126-131; fig. 41 shows three users interacting with display elements a-d, wherein the elements can be laptops, mobile devices, and the like, each of which can be its own surface for each user to view collaboratively and share data across different rooms).

Note: It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Response to Arguments

Applicant's arguments filed 09/24/2025 have been fully considered but are not persuasive. After careful review of the amended claims (given the broadest reasonable interpretation) and the remarks provided by the Applicant, along with the cited reference(s), the Examiner respectfully disagrees with the Applicant for at least the reasons provided below.

A1. Applicant argues that Agarawala fails to disclose two real-world surfaces (e.g., two separate physical walls) of two users' real-world environments such that, for example, a first physical wall is mapped to a first portion of a mesh and a second physical wall is mapped to a second, separate portion of the same mesh.

R1. The Examiner does not agree. First, the claim limitations do not mention a mesh or mapping to the same 'mesh' 3D object. Second, Agarawala teaches: par. 12, users share 2D content to a 3D area; par. 32, the first user may share a particular virtual wall or mesh with a second user without sharing all of the virtual walls of a particular room; par. 54-55, the AR system allows multiple users who may be co-located in the same room to share and interact with the same display elements within the virtual or augmented reality, while other remote users (who may not be co-located) participating in the AR workspace or session may also have access to data elements brought from a 2D computing device into the AR environment, and these remote users may similarly drag and drop their own data files and elements from their own computing devices into the AR environment, which would then be made visible to the other users in the same AR session or workspace; par. 126-131 describe in detail three users in two rooms sharing content from different surfaces, such as mobile phone and laptop screens, onto room walls; fig. 48 depicts a user selecting content to share onto an invisible wall; fig. 58 illustrates an example embodiment of the AR/VR system in which a user may be working on an augmented desktop or workspace and, without the use of a physical computer on the table, may (using the AR-enabled headset) retrieve and interact with various documents, applications, and files stored across one or more remotely located computing devices communicating with the AR system. Lastly, fig. 61, discussed in par. 223-258, shows various scenarios of meeting-space save and load functionality which allow more than one user to share content from their personal device onto a shared virtual wall; thus user A can interact with their own virtual wall or virtual desktop and share an item onto a shared wall with user B, and user B can do the same, thereby providing two separate physical walls (user A's and user B's own physical spaces) interacting with their own virtual desktops and sharing content from said virtual desktop/space onto a collaboration wall in the saved meeting space 6106.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Inquiries

Any inquiry concerning this communication should be directed to NICHOLAS AUGUSTINE at telephone number (571) 270-1056. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/NICHOLAS AUGUSTINE/
Primary Examiner, Art Unit 2178
November 19, 2025

Prosecution Timeline

Nov 29, 2023: Application Filed
Jun 26, 2025: Non-Final Rejection (§102)
Sep 05, 2025: Interview Requested
Sep 11, 2025: Applicant Interview (Telephonic)
Sep 11, 2025: Examiner Interview Summary
Sep 24, 2025: Response Filed
Nov 20, 2025: Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598212: Cybersecurity Risk Analysis and Modeling of Risk Data on an Interactive Display (granted Apr 07, 2026; 2y 5m to grant)
Patent 12584752: Visual Vehicle-Positioning Fusion System and Method Thereof (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586264: Word Evaluation Value Acquisition Method, Apparatus and Program (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578836: User Interface for Interacting with an Affordance in an Environment (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580920: System and Method for Facilitating User Interaction with a Simulated Object Associated with a Physical Location (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+27.8%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate

Based on 814 resolved cases by this examiner; grant probability derived from career allow rate.
