Prosecution Insights
Last updated: April 19, 2026
Application No. 18/141,866

METHOD AND SYSTEM FOR ENABLING ENHANCED USER-TO-USER COMMUNICATION IN DIGITAL REALITIES

Non-Final Office Action (§103, nonstatutory double patenting)

Filed: May 1, 2023
Examiner: Lhymn, Sarah
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: TMRW Group IP
OA Round: 3 (Non-Final)

Predictions (favorable outlook):
Grant probability: 65%
Expected OA rounds: 3-4
Estimated time to grant: 2y 4m
Grant probability with interview: 81%

Examiner Intelligence

Career allow rate: 65% (357 granted / 546 resolved; +3.4% vs TC average, above average)
Interview lift: strong, +15.2% across resolved cases with interview
Typical timeline: 2y 4m average prosecution; 30 applications currently pending
Career history: 576 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Based on career data from 546 resolved cases; Tech Center averages are estimates.
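The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (variable names are illustrative, not taken from the analytics tool):

```python
# Career allow rate: granted cases over all resolved cases.
granted, resolved = 357, 546
allow_rate_pct = 100 * granted / resolved   # ~65.4, reported as 65%

# The "vs TC avg" figures are deltas in percentage points, so the
# implied Tech Center average is the career rate minus the delta.
tc_avg_pct = allow_rate_pct - 3.4           # ~62.0

print(round(allow_rate_pct, 1))  # 65.4
print(round(tc_avg_pct, 1))      # 62.0
```

The statute-specific rates presumably follow the same pattern, computed over the subset of resolved cases that received each rejection type.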

Office Action

Rejections: §103, nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment / Arguments

Applicant's amendments to the independent claims have been considered, but the examiner is not persuaded that the amendments overcome the prior art of record. The added language, "making the notification perceptible to the counterpart user device based on the sending of the notification," is taught by Bradski and Avery. The "perceptible" limitation is taught by Avery and its transparency image processing, which allows a user to see through physical or blocking virtual structures. The remainder of the added language, making the notification perceptible based on the sending of said notification, is taught and mapped by Bradski in the "sending a notification" step, and is also, with respect, somewhat redundant claim language: a notification is sent (the "sending a notification" step of claim 1), and the notification is therefore perceptible (per Avery's transparency image processing, mapped in the subsequent portion of the claim phrase) based on the sending. Accordingly, the §103 rejections are maintained.

Non-Statutory Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-41, 44 and 47 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 11,651,562 in view of U.S. Patent App. Pub. No. 2019/0094981 ("Bradski"). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application contain language that is broader than (see independent claims), or substantially identical to (see dependent claims), the claims of the issued patent, per the correspondence table below. Dependent claims that share the same claim features are grouped together.

Correspondence (U.S. App. Serial No. 18/141,866 → U.S. Patent No. 11,651,562):

Instant claim 21:
A method for enabling communication in a virtual world system implemented by a server comprising memory and at least one processor, the method comprising: detecting presence in the virtual world system of a target user device and a counterpart user device connected to the server via a wireless network; sending a notification to the counterpart user device informing about the presence of the target user device in the virtual world system based on a set of rules; adding transparency to 3D structure data in the virtual world system, and making the notification perceptible to the counterpart user device based on the sending of the notification, and wherein the notification is in the form of a highlighted virtual replica of a target user that is visible within a field of view of a user of the counterpart user device through an object corresponding to the 3D structure data to which the transparency has been added; detecting one or more forms of interaction by the counterpart user device with the virtual replica of the target user in the virtual world system; and opening up a communication channel between the counterpart user device and the target user device responsive to the detected one or more forms of interaction.

Patented claim 1:
A method for enabling communication in a virtual world system implemented by a server comprising memory and at least one processor, the method comprising: generating a marker of a target user that is associated with the target user and presented in the virtual world system; detecting presence in the virtual world system of a target user device and a counterpart user device connected to the server via a wireless network; sending a notification to the counterpart user device informing about the presence of the target user device in the virtual world system based on a set of rules, wherein the notification sent to the counterpart user device is made perceptible by virtually adding transparency to 3D structure data in the virtual world system, and wherein the notification is in the form of a highlighted version of the marker of the target user that is visible within a field of view of a user of the counterpart user device through an object corresponding to the 3D structure data to which the transparency has been added; receiving one or more forms of interaction from the counterpart user device on the marker of the target user in the virtual world system; opening up a communication channel between the counterpart user device and the target user device responsive to the one or more received forms of interaction; and receiving and directing communications between the counterpart user device and the target user device via the communication channel.

Re: virtual replicas (as opposed to markers), Bradski teaches it is known to have user virtual replicas when users are interacting (see Figs. 66-67 and related descriptions). Modifying claim 1, in view of Bradski, such that the marker is a virtual replica would have been obvious to one of ordinary skill, motivated to make use of known virtual and visual means of user interaction.
Claim 22 → Claim 1 (see last receiving step above)
Claims 23, 32, 39 → Claims 2, 11, 17
Claims 24, 33, 40 → Claims 3, 12, 18
Claim 25 → Claim 4
Claim 26 → Claim 5
Claim 27 → Claim 6
Claims 28, 35 → Claims 7, 13
Claim 29 → Claim 8
Claim 30 → Claim 9
Claim 31 (system embodiment of claim 21) → Claim 10 (system embodiment of claim 1)
Claim 34 → Claim 12
Claim 36 → Claim 14
Claim 37 (computer-readable media embodiment of claim 21) → Claim 15
Claim 38 → Claim 16 (computer-readable media embodiment of claim 1)
Claims 41, 44, 47 → Claim 1 in view of Bradski, para. 952 (voice cues to control avatar (virtual replica))

Claims 42, 43, 45, 46, 48 and 49 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 in view of Bradski, and further in view of U.S. Patent App. Pub. No. 2019/0043239 ("Goel"), per the correspondence table below (U.S. App. Serial No. 18/141,866 → U.S. Patent No. 11,651,562):

Claims 42, 45, 48 → Claim 1 in view of Bradski and Goel (para. 16) (see also discussion in the 103 rejection)
Claims 43, 46, 49 → Claim 1 in view of Bradski and Goel (paras. 51-54) (see also discussion in the 103 rejection)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-41, 44, 47 and 50-53 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski (U.S. Patent App. Pub. No. 2019/0094981 A1) in view of Avery, B., Sandor, C., & Thomas, B. H.
(2009, March). Improving spatial perception for augmented reality x-ray vision. In 2009 IEEE Virtual Reality Conference (pp. 79-82). IEEE ("Avery").

Regarding claim 21: Bradski teaches:

a method for enabling communication in a virtual world system (para. 17, methods for facilitating virtual and/or augmented reality interaction for one or more users, in a system such as that of Fig. 1: 10, AR (augmented reality) system) implemented by a server (Fig. 1: 11, servers) comprising memory and at least one processor (para. 170, the servers include memory for storing programs to be executed by processors), the method comprising:

detecting presence in the virtual world system of a target user device and a counterpart user device connected to the server (see para. 172, user devices 12 (one user device corresponding to a target device, and a second to a counterpart device) can communicate with each other and/or the server) via a wireless network (para. 173, via wireless network connections; see also para. 181);

sending a notification to the counterpart user device (e.g. paras. 1365, 1397, 1531, the system can send notifications to users (i.e. a counterpart user device) in the virtual world system) informing about the presence of the target user device in the virtual world system based on a set of rules (see e.g. Figs. 66-68, examples of users (i.e. target and counterpart devices) engaged in a virtual world system game. These images show examples of notifications informing of the presence of another user (see for example Fig. 68: 6836, notifying the counterpart user device (Agent 006) that the target user device (Agent 009) is in the area). The set of rules are those of the game being played. Note that Bradski teaches many different virtual environments, not just these three figures);

... wherein the notification is in the form of a highlighted virtual replica of a target user that is visible within a field of view of a user of the counterpart user device ... (Bradski, e.g. paras. 947, 1260, teaches that it is known to highlight virtual or augmented reality objects to draw attention to said objects. Bradski also gives non-limiting examples of highlighting, such as highlighting around the agent 006 character in the above-described games to indicate the players'/users' status (see paras. 963 and 971 and Figs. 66-68). Here, the agent game character corresponds to the "virtual replica of a target user". Non-limiting examples of virtual replicas of users: Fig. 66: 6618, 6622; Fig. 67: 6718, 6750. Modifying Bradski, in view of same, to highlight a virtual replica of a target user that is visible within a field of view, all of which is taught and suggested by Bradski, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A));

detecting one or more forms of interaction by the counterpart user device with the virtual replica of the target user in the virtual world system (see Fig. 67: 6736; if agent 006 (the counterpart user device's virtual replica) decides to interact with agent 009 (the target device's virtual replica), agent 006 can interact by selecting yes) (alternatively, one or more forms of interaction are taught simply by the users playing the game via their respective virtual replicas); and

opening up a communication channel between the counterpart user device and the target user device responsive to the detected one or more forms of interaction (see Figs. 66-68 and related description; the communication channel is open in Fig. 67 if assistance is required).
However, Bradski does not expressly teach: adding transparency to 3D structure data in the virtual world system and making the notification perceptible to the counterpart user device based on the sending of the notification, ... and wherein the notification is in the form of a highlighted version of a marker of a target user that is visible within a field of view of a user of the counterpart user device through an object corresponding to the 3D structure data to which the transparency has been added.

Consider the following. In analogous art, Avery teaches that augmented reality x-ray vision is known, which allows users to see through walls (i.e. 3D structure data) in order to view occluded objects and locations (Abstract). As expressly taught by Avery, "The edge overlay visualization provides depth cues to make hidden objects appear to be behind walls, rather than floating in front of them." (Abstract and Fig. 1). This corresponds to a teaching of virtually adding transparency to 3D structure data in the virtual world system (i.e. part of a building or built structure, for example), such that the notification (as taught by Bradski and mapped above) is visible through an object (i.e. a wall; see Avery, Figs. 1 and 3) corresponding to the 3D structure data to which the transparency has been added (the 3D building to which transparency has been added). Like Avery, Bradski also teaches 3D models and 3D display (see e.g. Bradski, paras. 1500-1501).

Accordingly, it would have been obvious for one of ordinary skill in the art to have modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). That is, to modify Bradski to include the x-ray vision of Avery in the system of Bradski. One example could be in the game setting of Figs. 66-68, whereby a user is notified that another user is present behind a wall, or inside a building, in the graphical x-ray manner taught by Avery. The notification, therefore, would be made perceptible using transparency, and based on sending the notification (see Bradski, mapping above re: "sending a notification"). Such a modification is taught and suggested by the prior art, as mapped above. The prior art included each element recited in claim 21, although not necessarily in a single embodiment, the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination each element merely performs the same function as it does separately. One of ordinary skill in the art would also have recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 22: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 21, wherein the communication channel enables communication that is perceptible through the object; and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). See the above mapping to claim 21. Modifying the applied references such that the communication channel (e.g. sending and displaying messages between users; Bradski, e.g. paras. 963, 1354, 1546) is perceptible through the object (via x-ray rendering as taught by Avery; see Abstract, Figs. 1 and 3) is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination each element merely performs the same function as it does separately. One of ordinary skill in the art would also have recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 23: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 21, further comprising generating an interactive virtual shape generated from the virtual replica of the target user; and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). Bradski teaches the following interactive virtual shapes: (1) a highlighted outline of an object (see para. 963, which teaches visually presented highlighting around the agent 006 character); and (2) icons positioned on top of objects (see Fig. 95A, which illustrates a highlighted plus sign on the top right portion of the "Sublime" sign; the plus (+) sign is an interactive virtual shape). Modifying the applied references such that the interactive virtual shape is generated from the virtual replica (i.e. avatar or game character) of the target user (i.e. outlining the virtual replica, or placing an icon above the virtual replica to indicate that metadata or other data of the target user is available, per para. 1510) is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. The prior art included each element recited in claim 23, although not necessarily in a single embodiment, the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination each element merely performs the same function as it does separately. One of ordinary skill in the art would also have recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 24: Bradski teaches: the method of claim 23, further comprising, during communications between the user of the counterpart user device and the target user: retrieving, by the server (Fig. 1: 11, server; also mapped in claim 21), sensor data associated with facial expressions or body language of the users (paras. 716-717, the system can obtain facial expression image sensor data; see also para. 547, inward-facing camera (image sensor) to obtain facial expressions of the user); and updating versions of the virtual replica of the corresponding users based on the sensor data (para. 717, the system can update facial expressions of the avatar (virtual replica) based on discerned user facial expressions). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of personalizing user-based images using data obtained from said user.

Regarding claim 25: Bradski teaches: the method of claim 23, further comprising virtually placing the versions of the virtual replicas in spatial proximity to simulate a close-range communication (see e.g. paras. 606-607, which teach that avatars of a user can be rendered in a conference room; another example is in para. 1499 (two user avatars rendered in a conference room). Both of these examples virtually place avatars (virtual replicas) in spatial proximity (i.e. in a conference room) to simulate close-range communication (as with a meeting or conference)).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of providing users with a variety of interactive virtual settings.

Regarding claim 26: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 23, wherein the interactive virtual shape is one of a pointer, an icon positioned on top of the target user, or an outline; and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). Bradski teaches the following interactive virtual shapes: (1) a highlighted outline of an object (see para. 963, which teaches visually presented highlighting around the agent 006 character); and (2) icons positioned on top of objects (see Fig. 95A, which illustrates a highlighted plus sign on the top right portion of the "Sublime" sign; the plus (+) sign is an interactive virtual shape). Modifying the applied references such that the interactive virtual shape is one of an icon on top of the target user, or an outline, as per Bradski (and highlighted, as also taught by Bradski), is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. The prior art included each element recited in claim 26, although not necessarily in a single embodiment, the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination each element merely performs the same function as it does separately.
One of ordinary skill in the art would also have recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 27: Bradski teaches: the method of claim 21, wherein the notification to the counterpart user device is performed by augmenting the virtual replica of the target user with at least one distinctive characteristic comprising one or more distinctive colors (para. 1260, color changes as visual cues), lighting effects (para. 1260, lighting effects as visual cues), sounds (para. 1483, sound notifications), shapes (para. 1483, shape notifications), haptic sensations (paras. 184, 866, haptic feedback), or combinations thereof. It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of providing users with a variety of display communication protocols for notifications, or to attract the attention of users. Alternatively, claim 27 would have been an obvious design choice to one of ordinary skill in the art regarding the design of the notification; Applicant's specification does not describe any criticality of any one notification design to Applicant's invention.

Regarding claim 28: Bradski teaches: the method of claim 21, wherein the set of rules comprises specific entitlements based on social connections between users, interest zones comprising one or more interests assigned to one or more predetermined geographic zones, individual user interests, a distance factor, availability configuration settings, or combinations thereof (see mapping to claim 1 re: the game setting per Figs. 66-68; the rules pertaining to the game teach interest zones, a distance factor, and availability configuration settings).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of managing interaction with users.

Regarding claim 29: Bradski teaches: the method of claim 21, wherein opening up the communication channel between the counterpart user device and the target user device responsive to the one or more received forms of interaction comprises: generating and sending an invitation to the target user device; and if the invited target user accepts the invitation, receiving, from the invited target user, an invitation confirmation (see e.g. para. 1488, which teaches an example invitation and confirmation in the context of a game for users: the generation of a virtual monster character peeking over the cubicle to challenge another user (target user) corresponds to a "virtual invitation to join a game"; if the target user accepts, confirmation is in the form of the target user selecting her own virtual monster and assigning it to a game battleground) (another example: paras. 1502-1503, users can be invited to a group meeting; if a user (i.e. target user) accepts, a handshake protocol can be used as invitation confirmation) (another example: paras. 1530-1533, another example of sending an invitation to a target device for communication, and receiving confirmation of acceptance in the form of the system generating a communications dialog and rendering a virtual representation of the target user in the FOV of the user who generated the invite). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to facilitate multi-user interaction in a system.
Regarding claim 30: Bradski teaches: the method of claim 21, wherein the communication channel enables communications between human users, artificial reality users (see mapping to claim 1; human and artificial reality users (avatars in the game, for example) are taught), or combinations thereof, through sharing of audio, video, text, hand or facial gestures, or movements (see mapping to claim 1, Figs. 66-68; text, movements, and sharing of video (the game) are taught; see also para. 1341, facial gestures). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of providing users with a variety of interactive communication protocols.

Regarding claim 31: see also claim 21. Bradski teaches: a system for enabling communication in a virtual world system (Fig. 1: 10, augmented reality system), the system comprising: a server (Fig. 1: 11, server) comprising memory and at least one processor, the memory comprising instructions that, when executed by the at least one processor (para. 170, servers have processors and memory for storing executable program instructions), trigger the at least one processor to perform operations. The instructions correspond to the method of claim 21; the same rationale for rejection applies.

Regarding claim 32: see claim 22. These claims are similar; the same rationale for rejection applies.

Regarding claim 33: see claim 23. These claims are similar; the same rationale for rejection applies.

Regarding claim 34: see claims 24 and 25. Claim 34 is a combination of the features of claims 24 and 25. Modifying the applied references, in view of same, to have obtained claim 34 is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). The prior art included each element recited in claim 34, although not necessarily in a single embodiment, the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination each element merely performs the same function as it does separately. One of ordinary skill in the art would also have recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 35: see claim 28. These claims are similar; the same rationale for rejection applies.

Regarding claim 36: Bradski teaches: the system of claim 31, wherein the one or more forms of interaction comprise at least one of looking (para. 1256), pointing (para. 1177), clicking (para. 813), grabbing (para. 1256), pinching (para. 1045), swiping (para. 1177), interaction through voice (para. 1387, voice commands), interaction through text (para. 180, keyboard input for text interaction), hand or facial gestures (para. 1341, facial gestures), or movements (paras. 814, 1177, hand gestures), or combinations thereof. It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant's claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system with wide interactive user capabilities.

Regarding claim 37: Bradski teaches: the system of claim 31, wherein the communication channel enables group interactions of more than two users (para. 182, the system is capable of supporting a large number of simultaneous users (e.g. millions of users), each interfacing with the same digital world using some type of user device).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of supporting many users interactively.

Regarding claim 38: see also claim 21. Bradski teaches: one or more non-transitory computer-readable media having stored thereon instructions configured to cause a server computer system comprising memory and at least one processor to perform a method for enabling communication in a virtual world system (see Fig. 1: 10, AR system, 11, server and para. 170, memory storing instructions to cause a processor of a server to perform tasks), comprising the steps of. The steps of claim 38 correspond to the method of claim 21. The same rationale for rejection applies.

Regarding claim 39: see claim 23. These claims are similar; the same rationale for rejection applies.

Regarding claim 40: see claim 24. These claims are similar; the same rationale for rejection applies.

Regarding claim 41: Bradski teaches: the method of claim 21, wherein communication between the counterpart user device and the target user device comprises simulating expressions of the virtual replica of the target user based on audio or text input from the target user device (para. 952, “Subtle voice cues, hand tracking, and head motion may be sent to the remote avatar. Based on the above information, the avatar may be animated.”; see also para. 1237: For example, in addition to gestures, user interfaces and/or other virtual content (e.g., applications, pages, web sites, etc.), may be rendered in response to voice commands, direct inputs, totems, gaze tracking input, eye tracking input or any other type of user input discussed in detail above).
Modifying the applied references, in view of Bradski, such that the avatar/game character (virtual replica), as mapped in claim 21, is controlled via audio input, per Bradski (see para. 952), would have been obvious to one of ordinary skill as of the effective filing date of Applicant’s claims. Motivation would be to allow for immersive interaction and control between a user and their virtual replica.

Regarding claim 44: see claim 41. These claims are similar; the same rationale for rejection applies.

Regarding claim 47: see claim 41. These claims are similar; the same rationale for rejection applies.

Regarding claim 50: Avery teaches: the method of claim 21, wherein adding transparency to the 3D structure data comprises: adding transparency to a relevant portion of the 3D structure data corresponding to a location of the target user in the virtual world (see Section 3.2, “Tunnel Cut-out”, which teaches or suggests adding transparency to relevant portions; in Avery’s example, to provide a user information about occluded objects behind walls, which is a relevant portion). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to have a system capable of providing users with relevant visual information.

Regarding claim 51: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 21 further comprising: adding transparency to 3D structure data associated with all non-transparent physical matter between the target user device and the counterpart user device in the virtual world; and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See mapping to claim 21. Transparency would not need to be added to already-transparent physical matter. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 52: see claim 50. These claims are similar; the same rationale for rejection applies.

Regarding claim 53: see claim 51. These claims are similar; the same rationale for rejection applies.

Claim(s) 42, 43, 45, 46, 48 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski in view of Avery, and further in view of Goel (U.S. Patent App. Pub. No. 2019/0043239 A1).

Regarding claim 42: The applied references to claim 41 do not specify the features of claim 42. Consider the following. In analogous art, Goel teaches: the method of claim 41, wherein simulating the expressions of the virtual replica of the target user based on audio or text input uses at least one artificial intelligence algorithm (para. 16: “Examples disclosed herein modify and/or otherwise control (e.g., generate) one or more audio and/or visual characteristics of an avatar based on a musical input (e.g., input from a musical instrument digital interface (MIDI) protocol/interface) associated with at least one of stored musical data and/or a live musical presentation passed through a model trained utilizing machine learning techniques”). Modifying the applied references, in view of Goel, such that the simulating of the expressions of the virtual replica based on audio or text, per Bradski, includes musical audio data, per a machine learning algorithm, as taught by Goel, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of Applicant’s claims. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 43: Goel teaches: the method of claim 42, wherein the at least one artificial intelligence algorithm is trained using a plurality of labelled or unlabeled data sets comprising audio or text input (para. 53 or 54, audio training data, in combination with para. 51, supervised learning, supervised learning being based on labelled training data). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to make use of known training techniques for machine learning.

Regarding claim 45: see claim 42. These claims are similar; the same rationale for rejection applies.

Regarding claim 46: see claim 43. These claims are similar; the same rationale for rejection applies.

Regarding claim 48: see claim 42. These claims are similar; the same rationale for rejection applies.

Regarding claim 49: see claim 43. These claims are similar; the same rationale for rejection applies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571) 270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sarah Lhymn/
Primary Examiner, Art Unit 2613

Prosecution Timeline

May 01, 2023: Application Filed
Apr 28, 2025: Non-Final Rejection — §103, §DP
Jul 25, 2025: Response Filed
Aug 01, 2025: Final Rejection — §103, §DP
Oct 30, 2025: Applicant Interview (Telephonic)
Oct 30, 2025: Examiner Interview Summary
Oct 31, 2025: Request for Continued Examination
Nov 10, 2025: Response after Non-Final Action
Jan 30, 2026: Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602882: AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602764: METHODS OF ARTIFICIAL INTELLIGENCE-ASSISTED INFRASTRUCTURE ASSESSMENT USING MIXED REALITY SYSTEMS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602746: SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12585888: AUTOMATICALLY GENERATING DESCRIPTIONS OF AUGMENTED REALITY EFFECTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586163: INTERACTIVELY REFINING A DIGITAL IMAGE DEPTH MAP FOR NON DESTRUCTIVE SYNTHETIC LENS BLUR
Granted Mar 24, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 81% (+15.2%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
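As a rough illustration of how these headline figures relate to the underlying counts shown above (the exact formulas used by the tool are an assumption; it may weight recent cases or art units differently):

```python
# Illustrative sketch only: deriving the headline projections from the
# examiner's career counts shown above. The simple ratio-plus-lift
# arithmetic is an assumption, not the tool's documented method.

granted = 357          # career grants (from "357 granted / 546 resolved")
resolved = 546         # career resolved cases
interview_lift = 0.152 # observed lift in resolved cases with an interview

grant_probability = granted / resolved                # ~0.654 -> "65%"
with_interview = grant_probability + interview_lift   # ~0.806 -> "81%"

print(f"Grant probability: {grant_probability:.0%}")  # Grant probability: 65%
print(f"With interview:    {with_interview:.0%}")     # With interview:    81%
```

Under this reading, the "81% With Interview" figure is simply the career allow rate plus the 15.2-point interview lift, rounded to whole percentages.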
