Prosecution Insights
Last updated: April 19, 2026
Application No. 18/561,175

COMMUNICATION DEVICES, ADAPTING ENTITY AND METHODS FOR AUGMENTED/MIXED REALITY COMMUNICATION

Non-Final OA: §103, §112

Filed: Nov 15, 2023
Examiner: PATEL, JITESH
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 78% (312 granted / 398 resolved; +16.4% vs TC avg; above average)
Interview Lift: +12.4% (moderate), among resolved cases with interview
Typical Timeline: 2y 2m avg prosecution; 14 currently pending
Career History: 412 total applications across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Deltas are relative to an estimated Tech Center average • Based on career data from 398 resolved cases
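The statute-specific figures above can be sanity-checked with a few lines of arithmetic: subtracting each stated delta from the examiner's rate recovers the implied Tech Center baseline. This is a minimal sketch, assuming each "vs TC avg" figure is a simple percentage-point difference (the tool's exact methodology is not stated):

```python
# (rate, delta vs TC avg) per statute, in percentage points, from the table above
rates = {"101": (6.2, -33.8), "103": (61.3, +21.3),
         "102": (3.8, -36.2), "112": (16.6, -23.4)}

# Implied TC average = examiner rate minus stated delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same ~40.0% baseline
```

That all four statutes resolve to the same ~40% baseline suggests the deltas were computed against a single estimated average rather than per-statute TC data.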

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9 and 21-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 9 and 23, each, recite the limitation “the recognized objects” in limitation 1. There is insufficient antecedent basis for this limitation in the claim. Claims 21-23, each, recite “the first communication entity”. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 14 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Post et al (US 11200742 B1).

Regarding claim 1, Post discloses a method performed in an adapting entity, for adaptation between different environments in Augmented Reality or Mixed Reality (AR/MR) communication in an AR/MR system, wherein the AR/MR system comprises at least two communication devices, a first communication device and a second communication device, wherein the first and second communication devices are in their respective physical environments and are associated with their respective users (Post fig. 2; col. 2, l. 10, “a method”; col. 9, l. 1, “The augmented reality projection system 252 (an adapting entity) enables the virtual avatar to be presented during interactions between a customer and support agent by rendering virtual content for display via the display 214.”; col. 7, l. 23, “the system 200 (AR/MR system) includes a client device (second user) and AR system (“client device”) 210 and an agent (first user) computing device 250 that are configured to communicate with one another (in their respective physical environments)”; col. 7, l. 33, “the client device 210 includes a device display (“display”) 214 that may comprise an HMD … head mounted display may be semitransparent, thereby enabling the user to view the real-world scene beyond the display, with projected images (such as a virtual avatar) appearing superimposed or overlaid upon the background scene”), the method comprising:

obtaining a virtual representation of the user of the first communication device, wherein the virtual representation comprises information on one or both of gestures and facial expressions of the user of the first communication device (Post col. 6, l. 33, “First agent 112 further initiates a projection of a first virtual avatar (“first avatar”) 150 that is controlled by first agent 112 (a virtual representation of the user of the first communication device) … the first avatar 150 can take any form or appearance, in this case, the first avatar 150 appears as an image of a human person of a size”; col. 10, l. 25, “the augmented reality projection system 252 may be configured to recognize user inputs made through head and hand gestures or other body movements”);

obtaining spatial and semantic characteristics data of the physical environment and user of the second communication device from the second communication device (Post col. 8, l. 20, “the client device 210 can include one or more distance measuring sensors such as a laser or sonic range finder can measure distances to various surfaces within the image. Different types of distance measuring sensors and algorithms may be used for measuring distances to objects within a scene viewed by a user (obtaining spatial and semantic characteristics data of the physical environment)”; col. 12, l. 14, “the second user 300 is joined to a support agent via a network connection, turns his head toward the smart speaker system 320 (obtained spatial and semantic characteristics data of a user)”);

generating an adapted virtual representation of the user of the first communication device by adapting the virtual representation of the user of the first communication device based on the spatial and semantic characteristics data of the physical environment and user of the second communication device (Post col. 12, l. 23, “in FIG. 7, … the second user 300 is still seated on sofa 304 at a third time subsequent to the second time, but a virtual avatar 700 is also being projected in a first area 760 of the living room 302 at a first relative location (“first location”) 710 a few feet away from and directly in front of the second user 300, as well as ‘behind’ and near the table 330 (based on the spatial and semantic characteristics data of the physical environment and user of the second communication device).”); and

providing the adapted virtual representation to the second communication device for displaying in the physical environment of the second communication device using AR/MR technology (Post col. 12, l. 23, “in FIG. 7, … the experience of the second user 300 while wearing smartglasses 420 (displaying in the physical environment of the second communication device using AR/MR technology) … the second user 300 is still seated on sofa 304 at a third time subsequent to the second time, but a virtual avatar 700 is also being projected in a first area 760 of the living room 302 at a first relative location (“first location”) 710 a few feet away from and directly in front of the second user 300, as well as ‘behind’ and near the table 330 (adapted virtual representation to the second communication device for displaying).”).

Post does not expressly disclose in exact words spatial and semantic characteristics data as recited in limitations 2 and 3.
However, spatial and semantic characteristics data as recited in limitations 2 and 3 would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention based on the cited portions above. This would have been done to generate a correct augmented reality environment for a proper client and customer agent interaction. See, for example, Post col. 12, l. 48.

Regarding claim 3, Post discloses the method according to claim 1, wherein generating an adapted virtual representation of the user of the first communication device comprises:

creating a motion model for the virtual representation of the user of the first communication device based on movement data of the user of the second communication device (Post col. 8, l. 12, “In order for the avatar representing support agent 240 to appear to move through and interact with the physical environment associated with customer 212, image data for the physical environment must be shared with the agent computing device 250 … images are used to generate image data that a processor can analyze to estimate distances to objects in the image, while in other implementations, the client device 210 can include one or more distance measuring sensors such as a laser or sonic range finder can measure distances to various surfaces within the image (creating a motion model for the virtual representation of the user of the first communication device)”; col. 12, l. 66, “the support agent can also configure the virtual avatar so that when the user moves their head while wearing an HMD, the virtual avatar ‘keeps up’ or maintains an appearance in their field of view (based on movement data of the user of the second communication device).”); and

combining the motion model and the virtual representation of the user of the first communication device to generate the adapted virtual representation of the user of the first communication device (Post col. 12, l. 66, “the support agent can also configure the virtual avatar so that when the user moves their head while wearing an HMD, the virtual avatar ‘keeps up’ or maintains an appearance in their field of view (combining the motion model and the virtual representation of the user of the first communication device to generate the adapted virtual representation of the user of the first communication device).”).

Regarding claim 4, Post discloses the method according to claim 1, further comprising:

obtaining a virtual representation of one or more objects, with which the user of the first communication device is interacting in its physical environment (Post col. 9, l. 57, “The avatar generation engine 258 can receive information from a mapping module 260 that can convert the image content received from image processor 224 of the client device 210 … avatar generation engine 258 may generate a virtual representation of geo-spatial floor layout corresponding to the real property or physical space associated with the customer”);

generating an adapted virtual representation of the object by adapting the virtual representation of the object based on the spatial and semantic characteristics data of the physical environment of the second communication device (Post col. 9, l. 64, “avatar generation engine 258 may generate a virtual representation of geo-spatial floor layout corresponding to the real property or physical space associated with the customer”); and

providing the adapted virtual representation of the object to the second communication device for displaying in the physical environment of the second communication device using AR/MR technology (Post col. 9, l. 64, “the generated virtual environment can include a functionality where the support agent 240 (using the second device) can view images of the customer's environment (in the second physical environment) and make changes through use of augmented reality to how the environment would appear after adding/removing certain objects”).

Regarding claim 5, Post discloses the method according to claim 4, wherein obtaining the virtual representation of one or more objects comprises:

obtaining the virtual representation of the object from the first communication device; or obtaining information on one or more objects with which the user of the first communication device is interacting in its physical environment from the first communication device (Post col. 9, l. 57, “The avatar generation engine 258 can receive information from a mapping module 260 that can convert the image content received from image processor 224 of the client device 210 (obtaining information from the first device)”); and

generating a virtual representation of the object based on the information on one or more objects with which the user of the first communication device is interacting (Post col. 9, l. 64, “avatar generation engine 258 may generate a virtual representation of geo-spatial floor layout corresponding to the real property or physical space associated with the customer”).

Claim 14 recites an adapting entity which corresponds to the function performed by the method of claim 1. As such, the mapping and rejection of claim 1 above is considered applicable to the adapting entity of claim 14. Claim 18 recites an adapting entity which corresponds to the function performed by the method of claim 3. As such, the mapping and rejection of claim 3 above is considered applicable to the adapting entity of claim 18. Claim 19 recites an adapting entity which corresponds to the function performed by the method of claim 4.
As such, the mapping and rejection of claim 4 above is considered applicable to the adapting entity of claim 19. Claim 20 recites an adapting entity which corresponds to the function performed by the method of claim 5. As such, the mapping and rejection of claim 5 above is considered applicable to the adapting entity of claim 20.

Claims 2 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Post in view of Kim et al (US 20190370544 A1).

Regarding claim 2, Post discloses the method according to claim 1, wherein obtaining the virtual representation of the user of the first communication device comprises:

obtaining the virtual representation of the user of the first communication device from the first communication device (Post col. 6, l. 33, “First agent 112 further initiates a projection of a first virtual avatar (“first avatar”) 150 that is controlled by first agent 112 via second device 114. (an obtained virtual representation of the user of the first communication device 114) … the first avatar 150 can take any form or appearance, in this case, the first avatar 150 appears as an image of a human person of a size”);

obtaining a model representing the user of the first communication device (Post col. 6, l. 37, “the first avatar 150 appears as an image of a human person of a size that generally is in proportion with the surrounding physical environment (a model representing the user of the first communication device)”);

obtaining one or both of gestures data and facial expressions data of the user of the first communication device from the first communication device (Post col. 10, l. 25, “the augmented reality projection system 252 may be configured to recognize user inputs made through head and hand gestures or other body movements (gesture data of the first user obtained from the first communication device)”); and

generating the virtual representation of the user of the first communication device (generated virtual representation 700 based on gesture data; Post col. 12, l. 33, “The appearance of the virtual avatar 700, including its pose, are for the most part controlled by the remote support agent”).

Post does not disclose obtaining a model representing the user of the first communication device, or generating the virtual representation of the user of the first communication device based on the model representing the user of the first communication device and one or both of the gestures data and facial expressions data of the user of the first communication device.

However, Kim discloses obtaining a model representing the user of the first communication device (Kim [0031], “a computerized 3D model (a model representing the user) is created as a hologram after multi-angle video data are extracted … The computer-generated 3D model can then be utilized as a mixed-reality object (MRO)”; [0052], “3D holographic avatar (e.g. 308B) to mirror and mimic the movements and the expressions of the human subject”); and generating the virtual representation of the user of the first communication device based on the model representing the user of the first communication device and one or both of the gestures data and facial expressions data of the user of the first communication device (Kim [0052], “the subject-to-avatar pose matching and real-time movement mirroring and retargeting engine (e.g. 603 in FIG. 6) of the physical target movement-mirroring avatar superimposition and visualization creation system executes motion retargeting by correlating the real-time motion tracking points of the human subject … 3D holographic avatar”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Post with Kim to generate a model of a user for generating a corresponding avatar. This would have been done to generate accurate and easily customizable user representations.

Claim 17 recites an adapting entity which corresponds to the function performed by the method of claim 2. As such, the mapping and rejection of claim 2 above is considered applicable to the adapting entity of claim 17.

Claims 6-7, 15 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Post in view of Kim et al (US 20190370544 A1).

Regarding claim 6, Post discloses a method performed in a first communication device for adaptation between different environments in Augmented Reality or Mixed Reality (AR/MR) communication in an AR/MR system, wherein the AR/MR system comprises at least two communication devices, the first communication device and a second communication device, wherein the first and second communication devices are in their respective physical environments and are associated with their respective users (Post fig. 2; col. 2, l. 10, “a method”; col. 9, l. 1, “The augmented reality projection system 252 enables the virtual avatar to be presented during interactions between a customer and support agent by rendering virtual content for display via the display 214.”; col. 7, l. 23, “the system 200 (AR/MR system) includes a client device (second user) and AR system (“client device”) 210 and an agent (first user) computing device 250 that are configured to communicate with one another (in their respective physical environments)”; col. 7, l. 33, “the client device 210 includes a device display (“display”) 214 that may comprise an HMD … head mounted display may be semitransparent, thereby enabling the user to view the real-world scene beyond the display, with projected images (such as a virtual avatar) appearing superimposed or overlaid upon the background scene”), the method comprising:

establishing one or both of gestures data and facial expressions data of the user of the first communication device (Post col. 10, l. 25, “the augmented reality projection system 252 may be configured to recognize user inputs made through head and hand gestures or other body movements”);

generating a virtual representation of the user of the first communication device; and providing the virtual representation of the user of the first communication device to an adapting entity for adapting the virtual representation of the user of the first communication device based on spatial and semantic characteristics data of the physical environment and user of the second communication device (Post col. 12, l. 23, “in FIG. 7, … the experience of the second user 300 while wearing smartglasses 420 (displaying in the physical environment of the second communication device using AR/MR technology) … the second user 300 is still seated on sofa 304 at a third time subsequent to the second time, but a virtual avatar 700 is also being projected in a first area 760 of the living room 302 at a first relative location (“first location”) 710 a few feet away from and directly in front of the second user 300, as well as ‘behind’ and near the table 330 (adapted virtual representation to the second communication device for displaying).”).
Post does not disclose obtaining a model representing a user of the first communication device, or generating a virtual representation of the user of the first communication device based on the model of the user of the first communication device.

However, Kim discloses obtaining a model representing a user of the first communication device (Kim [0031], “a computerized 3D model (a model representing the user) is created as a hologram after multi-angle video data are extracted … The computer-generated 3D model can then be utilized as a mixed-reality object (MRO)”; [0052], “3D holographic avatar (e.g. 308B) to mirror and mimic the movements and the expressions of the human subject”); and generating a virtual representation of the user of the first communication device based on the model of the user of the first communication device (Kim [0052], “the subject-to-avatar pose matching and real-time movement mirroring and retargeting engine (e.g. 603 in FIG. 6) of the physical target movement-mirroring avatar superimposition and visualization creation system executes motion retargeting by correlating the real-time motion tracking points of the human subject … 3D holographic avatar”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Post with Kim to generate a model of a user for generating a corresponding avatar. This would have been done to generate accurate and easily customizable user representations.

Regarding claim 7, Post in view of Kim discloses the method according to claim 6, further comprising:

detecting one or more objects with which the user of the first communication device is interacting in its physical environment based on any one or a combination of gesture tracking, image object detection, the user's and other's speech (Post col. 10, l. 25, “the augmented reality projection system 252 may be configured to recognize user inputs made through head and hand gestures or other body movements (gesture data of the first user obtained from the first communication device) … user input can comprise a button press, a specific gesture performed in view of the camera, a gaze-direction or other eye tracking movement by the support agent, a voice activation, or other recognizable input made in proximity to the presented virtual objects and digital assets, received via cameras of the HMD system”); and

providing information on the detected one or more objects to the adapting entity for generating a virtual representation of the one or more objects and adapting the virtual representation of the object based on spatial and semantic characteristics data of the physical environment and user of the second communication device (Post col. 10, l. 47, “The gesture or other input to the HMD system by the support agent 240 can be sent to an HMD command processor for execution of the command”; col. 11, l. 9, “in response, the virtual avatar can appear to interact with a nearby real object, add a label to the real object, or access additional information that may be stored in relation to that real object”).

Claim 15 recites an adapting entity which corresponds to the function performed by the method of claim 6. As such, the mapping and rejection of claim 6 above is considered applicable to the adapting entity of claim 15. Claim 21 recites an adapting entity which corresponds to the function performed by the method of claim 7. As such, the mapping and rejection of claim 7 above is considered applicable to the adapting entity of claim 21.

Claims 8-9 and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Post in view of Kim et al (US 20190370544 A1).
Regarding claim 8, Post in view of Kim discloses the method according to claim 6, but does not disclose: detecting one or more objects with which the user of the first communication device is interacting in its physical environment based on any one or a combination of gesture tracking, image object detection, the user's and other's speech; generating a virtual representation for the object with which the user of the first communication device is interacting; and providing the virtual representation of the object to the adapting entity for adapting the virtual representation of the object based on spatial and semantic characteristics data of the physical environment and user of the second communication device.

However, Short discloses detecting one or more objects with which the user of the first communication device is interacting in its physical environment based on any one or a combination of gesture tracking, image object detection, the user's and other's speech (Short [0045], “A virtual representation 160 of the wrench 110 can then be rendered within the view 150 for the expert user at a location relative to the 3D furnace model 154 that corresponds to the location at which the field user 102 physically applied (user of the first communication device is interacting in its physical environment) the wrench 110 (detected object) to the actual furnace system 104”; [0058], “mechanism for detecting user inputs, e.g., … gesture recognizer, a microphone and speech recognizer, … or a combination of two or more of these (based on a combination of gesture tracking, … user's … speech). A user may provide input to the field system 202 for various purposes such as to select a target object”); generating a virtual representation for the object with which the user of the first communication device is interacting (Short [0045], “A virtual representation 160 of the wrench 110 can then be rendered within the view 150 for the expert user at a location relative to the 3D furnace model 154 that corresponds to the location at which the field user 102 physically applied”); and providing the virtual representation of the object to the adapting entity for adapting the virtual representation of the object based on spatial and semantic characteristics data of the physical environment and user of the second communication device (Short fig. 1F; [0045], “the virtual representation 160 of the wrench 110 can be a 3D model of the wrench 110 that visually resembles the tool used by the field user 102.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Post further with Short to generate virtual representations of additional objects for clients. This would have enabled Post to provide a variety of additional services to customers.

Regarding claim 9, Post in view of Kim discloses the method according to claim 6, but does not disclose that generating a virtual representation for one or more objects with which the user of the first communication device is interacting comprises any one of: using available models for the recognized objects; generating 3D or 2D models for the objects using point cloud reconstructions; or combining available models with point cloud reconstructions.

However, Short discloses using available models for the recognized objects (Short [0045], “the virtual representation 160 of the wrench 110 can be a 3D model of the wrench 110 that visually resembles the tool (a recognized object) used by the field user 102”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Post further with Short to utilize models of objects while processing mixed reality experiences. This would have been done to quickly generate a wide variety of virtual representations of objects.

Claim 22 recites an adapting entity which corresponds to the function performed by the method of claim 8. As such, the mapping and rejection of claim 8 above is considered applicable to the adapting entity of claim 22. Claim 23 recites an adapting entity which corresponds to the function performed by the method of claim 9. As such, the mapping and rejection of claim 9 above is considered applicable to the adapting entity of claim 23.

Conclusion

See the notice of references cited (PTO-892) for prior art made of record, including art that is not relied upon but considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JITESH PATEL whose telephone number is (571) 270-3313. The examiner can normally be reached 8am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JITESH PATEL/
Primary Examiner, Art Unit 2612

Prosecution Timeline

Nov 15, 2023: Application Filed
Jan 09, 2026: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602866: DIGITAL TWIN AUTHORING AND EDITING ENVIRONMENT FOR CREATION OF AR/VR AND VIDEO INSTRUCTIONS FROM A SINGLE DEMONSTRATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597245: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586313: DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579739: 2D CONTROL OVER 3D VIRTUAL ENVIRONMENTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579765: DEFINING AND MODIFYING CONTEXT AWARE POLICIES WITH AN EDITING TOOL IN EXTENDED REALITY SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 91% (+12.4%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 398 resolved cases by this examiner. Grant probability derived from career allow rate.
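The headline projections follow directly from the examiner's career statistics quoted earlier (312 granted of 398 resolved, plus the +12.4-point interview lift). A minimal arithmetic sketch; the tool's exact rounding rules are an assumption:

```python
# Reproduce the headline figures from the underlying career stats.
granted, resolved = 312, 398
allow_rate = granted / resolved      # career allow rate, ~0.784
interview_lift = 12.4                # percentage points, from "Interview Lift"

grant_probability = round(allow_rate * 100)                 # baseline grant probability
with_interview = round(allow_rate * 100 + interview_lift)   # baseline plus interview lift
print(grant_probability, with_interview)  # 78 91
```

This matches the dashboard: 78% baseline and 91% with an examiner interview.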
