Prosecution Insights
Last updated: April 19, 2026
Application No. 18/058,302

VIRTUAL BOUNDARY ALLOCATION AND USER INTERACTION IN MULTI-USER ENVIRONMENT

Non-Final OA: §103, §112
Filed
Nov 23, 2022
Examiner
COCHRAN, BRIANNA RENAE
Art Unit
2615
Tech Center
2600 — Communications
Assignee
International Business Machines Corporation
OA Round
1 (Non-Final)
Grant Probability: 40% (Moderate)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 0%

Examiner Intelligence

Grants 40% of resolved cases
Career Allow Rate: 40% (2 granted / 5 resolved; -22.0% vs TC avg)
Interview Lift: -40.0% (minimal; based on resolved cases with interview)
Typical timeline: 2y 3m avg prosecution; 29 currently pending
Career history: 34 total applications across all art units
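The headline figures above are simple ratios over this examiner's resolved cases. As an illustration only, here is a minimal Python sketch of how such metrics could be derived; the function names are hypothetical, and the 62% Tech Center average is back-solved from the -22.0% delta shown above, not sourced data:

```python
# Hypothetical sketch of the examiner metrics shown above; not a real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases divided by resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift: allow rate with an interview minus the rate without."""
    return rate_with - rate_without

career = allow_rate(granted=2, resolved=5)   # 0.40, i.e. the 40% shown
tc_delta = career - 0.62                     # 0.62 is back-solved, not sourced
lift = interview_lift(rate_with=0.0, rate_without=career)

print(f"Career allow rate: {career:.0%}")    # Career allow rate: 40%
print(f"vs TC avg: {tc_delta:+.1%}")         # vs TC avg: -22.0%
print(f"Interview lift: {lift:+.1%}")        # Interview lift: -40.0%
```

With only 5 resolved cases, these percentages carry wide uncertainty, which the dashboard's own footnotes acknowledge.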

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 5 resolved cases
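Each per-statute delta above is the same subtraction, examiner rate minus the Tech Center average. A small illustrative sketch follows; back-solving every displayed delta yields the same 40.0% baseline, which suggests a single flat TC-average estimate, though that inference is an assumption, not something the source states:

```python
# Illustrative only: each delta above is (examiner rate - TC average).
# Back-solving the displayed deltas gives a 40.0% baseline for every statute,
# which suggests (assumption) the dashboard uses one flat TC-average estimate.

TC_AVG = 0.40  # back-solved from the deltas shown above

examiner_rates = {"101": 0.032, "103": 0.627, "102": 0.133, "112": 0.209}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    # prints lines like: §101: 3.2% (-36.8% vs TC avg)
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```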

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on November 23, 2022 and January 9, 2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 8, and 15 state the following limitation: “determining mobility boundaries required for performance of one or more activities;”. Examiner is unsure what applicant means by “mobility boundaries”.
Therefore, examiner determines by broadest reasonable interpretation that “mobility boundaries” means any boundary associated with mobility. Thus, the claim will be examined as best understood by the Examiner.

Claims 3, 10, and 17 depend from claims 2, 9, and 16, which have two limitations separated by an and/or. If the second limitation of claims 2, 9, and 16, which states “detecting that the one or more workers have finished the one or more activities”, is selected under the OR alternative, then the limitation of claims 3, 10, and 17, “displaying the modified personalized virtual boundaries to the one or more workers depending on the one or more activities the worker is performing”, contradicts claims 2, 9, and 16: the workers in the claim 2, 9, and 16 limitation have finished the one or more activities, while in the claim 3, 10, and 17 limitation the workers are performing the one or more activities. Thus, claims 3, 10, and 17 are indefinite and will be examined as best understood by the Examiner.

Claims 2-7, 9-14, and 16-20 inherit their indefiniteness from independent claims 1, 8, and 15, from which they depend. Claims 2, 4-7, 9, 11-14, and 16, 18-20 will also be examined as best understood by the Examiner.

Claims 2, 9, and 16 recite the limitation "the space" in their second lines. There is insufficient antecedent basis for this limitation in the claims: claims 1, 8, and 15 recite “physical space”, and Examiner is unsure which space claims 2, 9, and 16 are referring to. Thus, claims 2, 9, and 16 will be examined as best understood by the Examiner.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed.
A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend. Claims 3, 10, and 17 depend from claims 2, 9, and 16, which have two limitations separated by an and/or. If the second limitation of claims 2, 9, and 16, which states “detecting that the one or more workers have finished the one or more activities”, is selected under the OR alternative, then the limitation of claims 3, 10, and 17, “displaying the modified personalized virtual boundaries to the one or more workers depending on the one or more activities the worker is performing”, contradicts claims 2, 9, and 16: the workers in the claim 2, 9, and 16 limitation have finished the one or more activities, while in the claim 3, 10, and 17 limitation the workers are performing the one or more activities. Thus, claims 3, 10, and 17 fail to include all the limitations of the claims upon which they depend, and will be examined as best understood by the Examiner.

Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4, 6, 8, 11, 13, 15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zavesky et al., U.S. Patent Application Publication 20230306689 A1 (hereinafter Zavesky), in view of Berliner et al., U.S. Patent 11574451 B2 (hereinafter Berliner).

Regarding claim 1, Zavesky teaches a processor-implemented method for creating virtual boundaries (Boundaries of an XR Environment, Para. 0022), the method comprising: creating a virtual simulation (XR Environment, Para. 0012) of a physical space (Physical Environments) and of one or more workers (Workers, People, Users, Para. 0068) and/or one or more machines and/or objects (Real World Objects, Para. 0002 and 0012) in the physical space (Physical Environments); determining mobility boundaries (“Off Limits” Areas, Para. 0048) required for performance of one or more activities (Metaverse Activities - Games, Collaboration Between Workers, Fitness Class, Virtual Classroom, etc., Para. 0068-0070); personalizing virtual boundaries (XR Environment Boundaries, Para. 0022) for the one or more workers (Workers, People, Users, Para. 0068) to perform the one or more activities (Metaverse Activities, Para. 0012) based on the determined mobility boundaries (“Off Limits” Areas, Para. 0048); and displaying the virtual boundaries (XR Environment Boundaries, Para. 0022) to the one or more workers (Workers, People, Users, Para. 0068) depending on the one or more activities (Metaverse Activities, Para. 0012) the worker is performing. Off-limits areas are areas where the users are not allowed to be; they are used to define the XR Environment Boundaries that prevent collisions with real-world objects and allow multiple users to safely interact (Para. 0022). XR Environment Boundaries are personalized to the physical spaces of the users and can be further adjusted to the user’s height (Para. 0022).

However, Zavesky fails to explicitly teach displaying the virtual boundaries. Zavesky and Berliner are analogous to the claimed invention because both are in the same field of creating virtual environment boundaries based on physical environments. Berliner teaches displaying the virtual boundaries (Col. 65, Lines 18-46; Col. 121, Lines 50-67; and Col. 122, Lines 1-8). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s virtual boundaries to incorporate Berliner’s displaying of virtual boundaries, since doing so would provide the benefit of showing the virtual boundaries to the users: without them, a user would not know where the virtual space ends and could go outside of the intended virtual environment, resulting in collisions with the physical objects in the real world and potentially hurting themselves.

Regarding claim 4, Zavesky teaches the method of claim 1, further comprising: tracking movements of the one or more workers (Workers, People, Users, Para. 0068) continuously using at least one IoT device (Para. 0034, 0044, and 0072) in addition to a mixed reality device (Endpoint Devices – VR Headsets, Head Mounted Displays, Smart Glasses, etc., Para. 0029).
The IoT devices are used to collect information about the physical environments, which can include the users in said environments.

Regarding claim 6, Zavesky teaches the method of claim 1, wherein the personalization of virtual boundaries (XR Environment Boundaries, Para. 0022) for the one or more workers (Workers, People, Users, Para. 0068) to perform the one or more activities (Metaverse Activities, Para. 0012) based on the determined mobility boundaries (“Off Limits” Areas, Para. 0048) is performed by one or more machine learning models (Para. 0063) trained on a knowledge corpus. A machine learning model can be used to track/update the XR Environment attributes to predict the least disruptive XR Environment to a user; the attributes can include the virtual boundaries, as the entire space of the XR Environment is tracked. However, Zavesky doesn’t explicitly teach that the machine learning model is trained on a knowledge corpus. Berliner teaches the machine learning model (NLP and various other machine learning models, Col. 29, Lines 26-67 and Col. 30, Lines 1-25) is trained on a knowledge corpus (a large collection of data). NLP (Natural Language Processing) models utilize a knowledge corpus, and various other machine learning models can be used to implement any of the methods disclosed in Berliner, which would include virtual boundaries. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s XR Environments to incorporate Berliner’s machine learning models, since doing so would provide the benefit of utilizing machine learning to implement portions of the XR Environment, enhancing its personalization as the machine learning model could learn the user’s habits/tasks, ensuring tailored content.

Regarding claim 8, Zavesky teaches a computer system (Processing System, Para. 0004) for creating virtual boundaries, the computer system comprising: one or more processors (Processor 402, Para. 0039), one or more computer-readable memories (Memory 404, Para. 0076), one or more computer-readable tangible storage media (Storage Devices, Para. 0023), and program (Programs, Para. 0021) instructions stored on at least one of the one or more tangible storage media (Storage Devices, Para. 0023) for execution by at least one of the one or more processors (Processor 402, Para. 0039) via at least one of the one or more memories, wherein the computer system is capable of performing the method of claim 1; therefore, it is rejected under the same rationale as claim 1.

Claim 11 has similar limitations as claim 4 and is rejected under the same rationale as claim 4. Claim 13 has similar limitations as claim 6 and is rejected under the same rationale as claim 6.

Regarding claim 15, Zavesky teaches a computer program product (Programs, Para. 0021) for creating virtual boundaries, the computer program product comprising: one or more computer-readable tangible storage media (Storage Devices, Para. 0023) and program instructions stored on at least one of the one or more tangible storage media (Storage Devices, Para. 0023), the program instructions executable by a processor (Processor 402, Para. 0039) to cause the processor to perform the method of claim 1; therefore, it is rejected under the same rationale as claim 1.

Claim 18 has similar limitations as claims 4 and 11 and is rejected under the same rationale as claims 4 and 11. Claim 20 has similar limitations as claims 6 and 13 and is rejected under the same rationale as claims 6 and 13.

Claim(s) 2-3, 5, 7, 9-10, 12, 14, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zavesky et al., U.S. Patent Application Publication 20230306689 A1 (hereinafter Zavesky), in view of Berliner et al., U.S. Patent 11574451 B2 (hereinafter Berliner), in further view of Madden et al., U.S. Patent Application Publication 20230196681 A1 (hereinafter Madden).

Regarding claim 2, Zavesky teaches the method of claim 1, further comprising: detecting that one or more new workers (Workers, People, Users, Para. 0068) have entered the space, physically and/or virtually (Para. 0057). The XR Environment tracks when a person or animal enters the physical space to simulate them into the XR Environment to prevent collisions. However, Zavesky and Berliner fail to teach: and/or detecting that the one or more workers have finished the one or more activities. Zavesky, Berliner, and Madden are analogous to the claimed invention because all of them are in the same field of creating virtual environment boundaries based on physical environments. Madden teaches: and/or detecting that the one or more workers (User/Person) have finished the one or more activities (Tasks) (Para. 0154-0157). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s XR Environment, as altered by Berliner, to incorporate Madden’s XR Environment activity tracking, since doing so would provide the benefit of efficiently creating new XR spaces for specific activities when the previous activity has been completed (Madden, Para. 0158).

Regarding claim 3, Zavesky teaches the method of claim 2, further comprising: determining if the physical space (Physical Environment) has any availability changes (Changes in the Amount of People) (Para. 0057), as the XR Environment tracks if people or pets enter or leave the physical environment the XR Environment is based on; modifying the personalized virtual boundaries (XR Environment Boundaries, Para. 0022) based on the availability changes (Changes in the Amount of People) of the physical space (Para. 0013 and 0057), as the XR Environment Boundaries are modified by adding transient moving objects representing people who have entered the physical space to prevent collisions; and displaying the modified personalized virtual boundaries (XR Environment Boundaries, Para. 0022) to the one or more workers (Workers, People, Users, Para. 0068) depending on the one or more activities (Metaverse Activities, Para. 0012) the worker is performing. The appearance of the XR Environment is modified based on potential collisions (Para. 0013), and the XR Environment is shown to the user as long as they are engaging with it. However, Zavesky fails to explicitly teach displaying the modified personalized virtual boundaries. Berliner teaches displaying the modified personalized virtual boundaries (Col. 65, Lines 18-46; Col. 121, Lines 50-67; and Col. 122, Lines 1-8). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s virtual boundaries to incorporate Berliner’s displaying of virtual boundaries, since doing so would provide the benefit of showing the virtual boundaries to the users: without them, a user would not know where the virtual space ends and could go outside of the intended virtual environment, resulting in collisions with the physical objects in the real world and potentially hurting themselves.

Regarding claim 5, Zavesky and Berliner fail to explicitly teach the method of claim 1, further comprising: notifying the one or more workers of a potential collision between the one or more workers with each other and/or a physical object. However, Madden teaches: notifying the one or more workers (User/Person) of a potential collision between the one or more workers (User/Person) with each other and/or a physical object (Para. 0089, 0097, and 0102-0103), alerting the user to approaching animals, people, and objects in the 3D space (the 3D space can encompass both virtual and physical spaces, Para. 0063). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s XR Environment, as altered by Berliner, to incorporate Madden’s alerts/notifications system, since doing so would provide the benefit of alerting/notifying the user of incoming people/objects that the user failed to see, which increases the user’s ability to avoid collisions.

Regarding claim 7, Zavesky and Berliner fail to explicitly teach the method of claim 1, wherein the one or more activities the worker is performing is determined using object recognition and/or predetermined data. However, Madden teaches the method of claim 1, wherein the one or more activities (Tasks) the worker (User/Person) is performing is determined using object recognition and/or predetermined data (Predetermined Tasks, Para. 0142). Tasks are assigned to the user based on predetermined data, such as the XR Environment around them or the XR Environment itself. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zavesky’s XR Environment, as altered by Berliner, to incorporate Madden’s task tracking, since doing so would provide the benefit of tracking tasks the user needs to complete and assigning said tasks to users in an efficient/flexible manner, as different users can be required to perform different actions in the same XR Environment.

Claim 9 has similar limitations as claim 2 and is rejected under the same rationale as claim 2. Claim 10 has similar limitations as claim 3 and is rejected under the same rationale as claim 3.
Claim 12 has similar limitations as claim 5 and is rejected under the same rationale as claim 5. Claim 14 has similar limitations as claim 7 and is rejected under the same rationale as claim 7. Claim 16 has similar limitations as claims 2 and 9 and is rejected under the same rationale as claims 2 and 9. Claim 17 has similar limitations as claims 3 and 10 and is rejected under the same rationale as claims 3 and 10. Claim 19 has similar limitations as claims 5 and 12 and is rejected under the same rationale as claims 5 and 12.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANNA R COCHRAN, whose telephone number is (571) 272-4671. The examiner can normally be reached Mon-Fri, 7:30am-5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIANNA RENAE COCHRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Nov 23, 2022: Application Filed
Oct 17, 2023: Response after Non-Final Action
Jan 07, 2026: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by the same examiner with similar technology

Patent 12541922: METHOD FOR GENERATING A MODEL FOR REPRESENTING RELIEF BY PHOTOGRAMMETRY (granted Feb 03, 2026; 2y 5m to grant)
Patent 12482144: METHOD AND APPARATUS OF ENCODING/DECODING POINT CLOUD GEOMETRY DATA USING AZIMUTHAL CODING MODE (granted Nov 25, 2025; 2y 5m to grant)
Patent 12417567: METHOD FOR GENERATING SIGNED DISTANCE FIELD IMAGE, METHOD FOR GENERATING TEXT EFFECT IMAGE, DEVICE AND MEDIUM (granted Sep 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 40%
With Interview: 0% (-40.0%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
