Prosecution Insights
Last updated: April 19, 2026
Application No. 18/769,325

MULTI-USER CONTENT SHARING IN IMMERSIVE VIRTUAL REALITY ENVIRONMENTS

Non-Final OA: §101, §DP
Filed
Jul 10, 2024
Examiner
MCDOWELL, JR, MAURICE L
Art Unit
2612
Tech Center
2600 — Communications
Assignee
Sim Ip Hxr LLC
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (790 granted / 913 resolved; +24.5% vs TC avg), above average
Interview Lift: +12.9% (moderate), comparing allow rates in resolved cases with and without an interview
Typical Timeline: 3y 0m average prosecution; 23 applications currently pending
Career History: 936 total applications across all art units
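The headline figures can be reproduced from the underlying counts. Below is a minimal sketch in Python, assuming hypothetical per-case records with `granted` and `interview` flags; the field names and record structure are illustrative, not the dashboard's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool     # application was allowed/issued
    interview: bool   # at least one examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.interview]
    without_iv = [c for c in cases if not c.interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# With this examiner's career record, 790 grants out of 913 resolved cases
# gives allow_rate(...) ≈ 0.865, displayed as the 86% Career Allow Rate;
# the +12.9% Interview Lift is the corresponding allow-rate difference.
```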

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 913 resolved cases
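Read each row as the examiner's own rate next to the Tech Center baseline. The quoted deltas all work out against a 40% baseline (for example, 16.1% + 23.9% = 40.0%); the sketch below shows that arithmetic, with the 40% figures treated as an inferred estimate rather than a published USPTO number.

```python
# Per-statute rates from the panel above; the 40% baselines are only what the
# quoted "vs TC avg" deltas imply, not an official Tech Center statistic.
examiner_rates = {"101": 0.161, "103": 0.477, "102": 0.128, "112": 0.077}
tc_averages   = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

for statute, rate in examiner_rates.items():
    delta = rate - tc_averages[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
# §101: 16.1% (-23.9% vs TC avg), §103: 47.7% (+7.7% vs TC avg), ...
```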

Office Action

§101 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

All of the NPL references in the IDS submitted on 7/15/24 have been lined through and not considered because they are missing dates (at least the year is required); note that the date a reference was retrieved does not count toward determining the effective filing date of a submitted NPL document.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because claim 1 is directed to a method of sharing content between wearable sensor systems with the steps of generating, sending and generating that amount to nothing more than software instructions. Software instructions are non-statutory under 35 U.S.C. 101. Claims 2-12 depend from claim 1 and contain additional instructions; for example, claim 2 contains the steps of detecting, detecting, generating and generating. Therefore, claims 2-12 are rejected under the same rationale as claim 1.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,050,757 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they are a broader version of the patent claims. The examined application claims and the patent claims are reproduced below.

Application No. 18/769,325 (claims):

1. A method of sharing content between wearable sensor systems, the method including: generating, for display towards a first user, a first presentation output based on information from a first video stream captured using at least one camera coupled to a first wearable sensor system, the first video stream corresponding to a first portion of a real world space; sending, to a second wearable sensor system and in association with a communication channel established between the first wearable sensor system and the second wearable sensor system, a second video stream captured using at least one camera coupled to the first wearable sensor system, the second video stream corresponding to a second portion of the real world space, wherein the second video stream corresponds to a concurrent timespan with the first video stream; and generating, for display towards a second user, a second presentation output based on information from the second video stream.

2. The method of claim 1, wherein a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, the first set of features being processed to generate the first presentation output displayed towards the first user and the second set of features being processed to generate the second presentation output displayed towards the second user.

3. The method of claim 2, wherein the second set of features further comprises one or more overlapping features in relation to the first set of features, and one or more nonoverlapping features in relation to the second set of features, the one or more nonoverlapping features relating to at least one of: (i) a field of view corresponding to the second video stream, (ii) a resolution of the second video stream, (iii) a luminance measure of the second video stream, (iv) an additional virtual graphic overlay to be combined with the second video stream, and (v) an additional annotation overlay to be combined with the second video stream.

4.
The method of claim 3, wherein the first presentation output comprises a first virtual interface associated with a virtual reality system, and wherein the second presentation output comprises a second virtual interface associated with the virtual reality system. 5. The method of claim 4, wherein at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and wherein the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment. 6. The method of claim 5, wherein the first virtual interface, responsive to a selection of an operation by the first user, enables the first user to interact with content within the virtual reality environment, and wherein the second virtual interface, responsive to a selection of an operation by the second user, enables the second user to interact with content within the virtual reality environment. 7. The method of claim 6, wherein the virtual reality environment enables collaboration between the first user and the second user participating in a collaborative operation, the collaborative operation facilitated by the first virtual interface and the second virtual interface, respective to the first user and the second user. 8. The method of claim 5, wherein the virtual reality environment is configured to be updated to display one or more of new content and altered content, the updating of the display further comprising: receiving, from the first wearable sensor system, an updated video stream of at least one of the first portion of the real world space and second portion of the real world space; detecting, from the updated video stream, an updated set of features corresponding to the updated video stream; processing, as input, at least one of the updated feature set and a previous three-dimensional map to generate, as output, an updated three-dimensional map; and presenting, to the virtual reality system, the updated three-dimensional map to be applied as a constraint on the virtual reality environment. 9. The method of claim 1, further comprising: capturing a third video stream of a third portion of the real world space using at least one camera electronically coupled to the second wearable sensor system, wherein the first video stream and the third video stream correspond to a concurrent timespan; generating, for display towards a second user, a third presentation output based on information from the third video stream and corresponding to the third portion of the real world space; and sharing, in association with the communication channel established between the first wearable sensor system and the second wearable sensor system, data associated with the first presentation output and data associated with the third presentation output to at least one of the first wearable sensor system and the second wearable sensor system. 10. The method of claim 1, wherein the first portion of the real world space and the second portion of the real world space overlap one another. 11. The method of claim 1, wherein the first portion of the real world space and the second portion of the real world space do not overlap one another. 12. 
The method of claim 1, wherein the first portion of the real world space is completely overlapped by the second portion of the real world space. 13. A non-transitory computer-readable recording medium having computer instructions recorded thereon for sharing content between wearable sensor systems, the computer instructions, when executed on one or more processors, causing the one or more processors to implement operations comprising: generating, for display towards a first user, a first presentation output based on information from a first video stream captured using at least one camera coupled to a first wearable sensor system, the first video stream corresponding to a first portion of a real world space; sending, to a second wearable sensor system and in association with a communication channel established between the first wearable sensor system and the second wearable sensor system, a second video stream captured using at least one camera coupled to the first wearable sensor system, the second video stream corresponding to a second portion of the real world space, wherein the second video stream corresponds to a concurrent timespan with the first video stream; and generating, for display towards a second user, a second presentation output based on information from the second video stream. 14. The non-transitory computer-readable recording medium of claim 13, wherein a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, the first set of features being processed to generate the first presentation output displayed towards the first user and the second set of features being processed to generate the second presentation output displayed towards the second user. 15. The non-transitory computer-readable recording medium of claim 14, wherein at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and wherein the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment. 16. The non-transitory computer-readable recording medium of claim 15, wherein the virtual reality environment enables collaboration between the first user and the second user participating in a collaborative operation, the collaborative operation facilitated by a first virtual interface and a second virtual interface, respective to the first user and the second user. 17. 
A system including one or more processors coupled to memory, the memory being loaded with computer instructions to share content between wearable sensor systems, the computer instructions, when executed on the one or more processors, causing the one or more processors to implement actions comprising: generating, for display towards a first user, a first presentation output based on information from a first video stream captured using at least one camera coupled to a first wearable sensor system, the first video stream corresponding to a first portion of a real world space; sending, to a second wearable sensor system and in association with a communication channel established between the first wearable sensor system and the second wearable sensor system, a second video stream captured using at least one camera coupled to the first wearable sensor system, the second video stream corresponding to a second portion of the real world space, wherein the second video stream corresponds to a concurrent timespan with the first video stream; and generating, for display towards a second user, a second presentation output based on information from the second video stream.

18. The system of claim 17, wherein: a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment.

19. The system of claim 18, wherein the virtual reality environment is configured to be updated to display one or more of new content and altered content, the updating of the display further comprising: receiving, from the first wearable sensor system, an updated video stream of at least one of the first portion of the real world space and second portion of the real world space; detecting, from the updated video stream, an updated set of features corresponding to the updated video stream; processing, as input, at least one of the updated feature set and a previous three-dimensional map to generate, as output, an updated three-dimensional map; and presenting, to the virtual reality system, the updated three-dimensional map to be applied as a constraint on the virtual reality environment.

20. The system of claim 19, further comprising: capturing a third video stream of a third portion of the real world space using at least one camera electronically coupled to the second wearable sensor system, wherein the first video stream and the third video stream correspond to a concurrent timespan; generating, for display towards a second user, a third presentation output based on information from the third video stream and corresponding to the third portion of the real world space; and sharing, in association with the communication channel established between the first wearable sensor system and the second wearable sensor system, data associated with the first presentation output and data associated with the third presentation output to at least one of the first wearable sensor system and the second wearable sensor system.

U.S. Patent No. 12,050,757 B2 (claims):

1.
A method of sharing content between wearable sensor systems, the method including: capturing a first video stream of a first portion of a real world space using at least one camera electronically coupled to a first wearable sensor system; generating, for display towards a first user, a first presentation output based on information from the first video stream and corresponding to the first portion of the real world space; capturing a second video stream of a second portion of the real world space using at least one camera electronically coupled to the first wearable sensor system, wherein the first video stream and the second video stream correspond to a concurrent timespan; sending, in association with a communication channel established between the first wearable sensor system and a second wearable sensor system, the second video stream to the second wearable sensor system; and generating, for display towards a second user, a second presentation output based on information from the second video stream and corresponding to the second portion of the real world space. 2. The method of claim 1, wherein a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, the first set of features being processed to generate the first presentation output displayed towards the first user and the second set of features being processed to generate the second presentation output displayed towards the second user. 3. The method of claim 2, wherein the second set of features further comprises one or more overlapping features in relation to the first set of features, and one or more nonoverlapping features in relation to the second set of features, the one or more nonoverlapping features relating to at least one of: (i) a field of view corresponding to the second video stream, (ii) a resolution of the second video stream, (iii) a luminance measure of the second video stream, (iv) an additional virtual graphic overlay to be combined with the second video stream, and (v) an additional annotation overlay to be combined with the second video stream. 4. The method of claim 3, wherein the first presentation output comprises a first virtual interface associated with a virtual reality system, and wherein the second presentation output comprises a second virtual interface associated with the virtual reality system. 5. The method of claim 4, wherein at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and wherein the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment. 6. The method of claim 5, wherein the first virtual interface, responsive to a selection of an operation by the first user, enables the first user to interact with content within the virtual reality environment, and wherein the second virtual interface, responsive to a selection of an operation by the second user, enables the second user to interact with content within the virtual reality environment. 7. 
The method of claim 6, wherein the virtual reality environment enables collaboration between the first user and the second user participating in a collaborative operation, the collaborative operation facilitated by the first virtual interface and the second virtual interface, respective to the first user and the second user. 8. The method of claim 5, wherein the virtual reality environment is configured to be updated to display one or more of new content and altered content, the updating of the display further comprising: receiving, from the first wearable sensor system, an updated video stream of at least one of the first portion of the real world space and second portion of the real world space; detecting, from the updated video stream, an updated set of features corresponding to the updated video stream; processing, as input, at least one of the updated feature set and a previous three-dimensional map to generate, as output, an updated three-dimensional map; and presenting, to the virtual reality system, the updated three-dimensional map to be applied as a constraint on the virtual reality environment. 9. The method of claim 1, further comprising: capturing a third video stream of a third portion of the real world space using at least one camera electronically coupled to the second wearable sensor system, wherein the first video stream and the third video stream correspond to a concurrent timespan; generating, for display towards a second user, a third presentation output based on information from the third video stream and corresponding to the third portion of the real world space; and sharing, in association with the communication channel established between the first wearable sensor system and the second wearable sensor system, data associated with the first presentation output and data associated with the third presentation output to at least one of the first wearable sensor system and the second wearable sensor system. 10. The method of claim 1, wherein the first portion of the real world space and the second portion of the real world space overlap one another. 11. The method of claim 1, wherein the first portion of the real world space and the second portion of the real world space do not overlap one another. 12. The method of claim 1, wherein the first portion of the real world space is completely overlapped by the second portion of the real world space. 13. 
A non-transitory computer-readable recording medium having computer instructions recorded thereon for sharing content between wearable sensor systems, the computer instructions, when executed on one or more processors, causing the one or more processors to implement operations comprising: capturing a first video stream of a first portion of a real world space using at least one camera electronically coupled to a first wearable sensor system; generating, for display towards a first user, a first presentation output based on information from the first video stream and corresponding to the first portion of the real world space; capturing a second video stream of a second portion of the real world space using at least one camera electronically coupled to the first wearable sensor system, wherein the first video stream and the second video stream correspond to a concurrent timespan; sending, in association with a communication channel established between the first wearable sensor system and a second wearable sensor system, the second video stream to the second wearable sensor system; and generating, for display towards a second user, a second presentation output based on information from the second video stream and corresponding to the second portion of the real world space. 14. The non-transitory computer-readable recording medium of claim 13, wherein a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, the first set of features being processed to generate the first presentation output displayed towards the first user and the second set of features being processed to generate the second presentation output displayed towards the second user. 15. The non-transitory computer-readable recording medium of claim 14, wherein at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and wherein the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment. 16. The non-transitory computer-readable recording medium of claim 15, wherein the virtual reality environment enables collaboration between the first user and the second user participating in a collaborative operation, the collaborative operation facilitated by a first virtual interface and a second virtual interface, respective to the first user and the second user. 17. 
A system including one or more processors coupled to memory, the memory being loaded with computer instructions to share content between wearable sensor systems, the computer instructions, when executed on the one or more processors, causing the one or more processors to implement actions comprising: capturing a first video stream of a first portion of a real world space using at least one camera electronically coupled to a first wearable sensor system; generating, for display towards a first user, a first presentation output based on information from the first video stream and corresponding to the first portion of the real world space; capturing a second video stream of a second portion of the real world space using at least one camera electronically coupled to the first wearable sensor system, wherein the first video stream and the second video stream correspond to a concurrent timespan; sending, in association with a communication channel established between the first wearable sensor system and a second wearable sensor system, the second video stream to the second wearable sensor system; and generating, for display towards a second user, a second presentation output based on information from the second video stream and corresponding to the second portion of the real world space. 18. The system of claim 17, wherein: a first set of features is detected corresponding to the first video stream and a second set of features is detected corresponding to the second video stream, at least one of the first feature set and the second feature set is processed, as input, to generate, as output, a three-dimensional map of at least one of the first portion and the second portion of the real world space, and the generated three-dimensional map is presented to a virtual reality system and the virtual reality system applies the generated three-dimensional map as a constraint upon which to construct a virtual reality environment. 19. The system of claim 18, wherein the virtual reality environment is configured to be updated to display one or more of new content and altered content, the updating of the display further comprising: receiving, from the first wearable sensor system, an updated video stream of at least one of the first portion of the real world space and second portion of the real world space; detecting, from the updated video stream, an updated set of features corresponding to the updated video stream; processing, as input, at least one of the updated feature set and a previous three-dimensional map to generate, as output, an updated three-dimensional map; and presenting, to the virtual reality system, the updated three-dimensional map to be applied as a constraint on the virtual reality environment. 20. 
The system of claim 19, further comprising: capturing a third video stream of a third portion of the real world space using at least one camera electronically coupled to the second wearable sensor system, wherein the first video stream and the third video stream correspond to a concurrent timespan; generating, for display towards a second user, a third presentation output based on information from the third video stream and corresponding to the third portion of the real world space; and sharing, in association with the communication channel established between the first wearable sensor system and the second wearable sensor system, data associated with the first presentation output and data associated with the third presentation output to at least one of the first wearable sensor system and the second wearable sensor system. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR whose telephone number is (571)270-3707. The examiner can normally be reached Mon-Fri: 2pm-10pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MAURICE L. MCDOWELL, JR/Primary Examiner, Art Unit 2612

Prosecution Timeline

Jul 10, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §101, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602875
TECHNIQUE FOR THREE DIMENSIONAL (3D) HUMAN MODEL PARSING
2y 5m to grant • Granted Apr 14, 2026
Patent 12602887
AUGMENTED REALITY CONTROL SURFACE
2y 5m to grant • Granted Apr 14, 2026
Patent 12598281
CONTROL APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETERMINING A CAMERA PATH INDICATING A MOVEMENT PATH OF A VIRTUAL VIEWPOINT IN A THREE-DIMENSIONAL SPACE
2y 5m to grant • Granted Apr 07, 2026
Patent 12579741
DETECTING THREE DIMENSIONAL (3D) CHANGES BASED ON MULTI-VIEWPOINT IMAGES
2y 5m to grant • Granted Mar 17, 2026
Patent 12561905
Optimizing Generative Machine-Learned Models for Subject-Driven Text-to-3D Generation
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+12.9%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
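As a quick check on these projections, the numbers combine as shown below. This is a hypothetical reconstruction that simply adds the interview lift to the career allow rate, which matches the displayed values; the dashboard's actual model may weight things differently.

```python
# Values as displayed in the panels above; the combination rule is an assumption.
career_allow_rate = 0.86   # Grant Probability
interview_lift = 0.129     # Interview Lift

with_interview = career_allow_rate + interview_lift   # 0.989

print(f"Grant probability:       {career_allow_rate:.0%}")   # 86%
print(f"With interview (+12.9%): {with_interview:.0%}")      # 99%
```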
