Prosecution Insights
Last updated: April 19, 2026
Application No. 18/315,575

VALIDATING AND FILTERING MIXED REALITY CONTENT

Non-Final OA §103
Filed: May 11, 2023
Examiner: CLOTHIER, MATTHEW MORRIS
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 1y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (3 granted / 3 resolved), +38.0% vs TC avg; grants above average
Interview Lift: +0.0% (minimal lift, with vs. without interview, over resolved cases with interview)
Avg Prosecution: 1y 11m (fast prosecutor)
Total Applications: 32 across all art units (career history; 29 currently pending)
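The tiles above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming a Tech Center average allowance rate back-derived from the displayed +38.0% delta (the 62% baseline is an inference, not a figure from the source):

```python
# Recompute the examiner-intelligence metrics shown above.
# ASSUMPTION: the TC-average allowance rate (0.62) is inferred from
# the displayed "+38.0% vs TC avg" delta; it is not stated in the source.

granted = 3
resolved = 3

career_allow_rate = granted / resolved  # 3/3 = 1.0

tc_avg_allow_rate = 0.62                # assumed baseline
lift_vs_tc = career_allow_rate - tc_avg_allow_rate

print(f"Career Allow Rate: {career_allow_rate:.0%}")  # Career Allow Rate: 100%
print(f"vs TC avg: {lift_vs_tc:+.1%}")                # vs TC avg: +38.0%
```

With only three resolved cases, the 100% figure carries a very wide confidence interval; the same arithmetic applies as the examiner's docket resolves further.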

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Tech Center average values are estimates. Based on career data from 3 resolved cases.
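The four deltas above are mutually consistent with a single flat Tech Center baseline: in each row, the displayed share minus its delta comes to 40.0% (for example, 65.2% - 25.2%). A short sketch reproducing the table under that assumption (the flat 40.0% baseline is inferred from the deltas, not stated in the source):

```python
# Reproduce the statute-specific deltas shown above.
# ASSUMPTION: TC_AVG is a flat Tech Center baseline back-derived from
# the displayed deltas (e.g. 65.2% - 25.2% = 40.0%); it is an estimate.

examiner_share = {"§101": 6.1, "§103": 65.2, "§102": 21.2, "§112": 6.1}
TC_AVG = 40.0

for statute, share in examiner_share.items():
    delta = share - TC_AVG
    print(f"{statute}: {share:.1f}% ({delta:+.1f}% vs TC avg)")
```

Running this prints the same four rows as the table, which is a quick way to sanity-check dashboard exports where the baseline itself is not shown.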

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/6/2026 has been entered.

Response to Amendment

2. This action is in response to the amendment filed on 2/6/2026. Claims 1, 6, 8, and 15 have been amended. Claims 1-20 remain pending in the application.

Response to Arguments

3. Applicant's arguments filed on 2/6/2026 with respect to claim 1, and similarly claims 8 and 15, regarding the rejection under 35 U.S.C. 103, namely that the prior art does not teach the limitation(s) "filtering, by the one or more computer processors of the first edge device, the mixed reality content to comply with the at least one pre-defined policy by replacing the mixed reality content with generic mixed reality content that complies with the at least one pre-defined policy associated with the physical location boundary of the first edge device," have been fully considered, but are moot because of the new grounds of rejection. Claim 1, and similarly claims 8 and 15, are now disclosed by Kahan, Yang, and Oetting.

4. Regarding the arguments with respect to claims 2-7, 9-14, and 16-20: these claims depend on independent claims 1, 8, and 15, respectively. Applicant does not argue anything other than independent claim 1, and similarly claims 8 and 15. The limitations in those claims, in conjunction with their combination, have previously been established and explained.

Claim Rejections - 35 USC § 103

5.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-5 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kahan et al. (US-2023/0237192-A1, hereinafter "Kahan") in view of Yang (CN-115830274-A), and further in view of Oetting et al. (US-2022/0028122-A1, hereinafter "Oetting").

7. As per claim 1, Kahan discloses: A computer-implemented method comprising: receiving, by one or more computer processors [[of a first edge device,]] a request for mixed reality content from a user, [[wherein the first edge device is included in an array of edge devices that provide mixed reality content to users, each edge device having an associated physical location boundary]] and one or more pre-defined policies that determine mixed reality content to provide to users located within the physical location boundary [[of the edge device]]; retrieving, by the one or more computer processors [[of the first edge device,]] the mixed reality content; (Kahan, ¶ 0065, lines 1-6, “According to some embodiments, an extended reality appliance may include a digital communication device configured to at least one of: receiving virtual content data configured to enable a presentation of the virtual content, transmitting virtual content for sharing with at least one external device, …” and ¶ 0077, lines 6-14, “Consistent with the present disclosure, XR unit 204 may include a wearable
extended reality appliance configured to present virtual content to user 100. ... Additional examples of wearable extended reality appliance may include ... a Mixed Reality (MR) device, or any other device capable of generating extended reality content.” and ¶ 0218, lines 48-51, “For example, the first rule may specify that while user 1004 is at initial location 1002, content associated with initial location 1002 may be permitted for display …” and ¶ 0236, lines 10-13, “For instance, the second rule may specify that while user 1004 is at subsequent location 1102, content associated with subsequent location 1102 may be permitted for display displayed …”) determining, by the one or more computer processors [[of the first edge device,]] a mobility of the user in a physical environment [[that corresponds to a physical location boundary of the first edge device]]; (Kahan, ¶ 0089, line 14-¶ 0090, line 3, “Thereafter, the physical orientation of the input device may be used by virtual content determination module 315 to modify display parameters of the virtual content to match the state of the user (e.g., attention, sleepy, active, sitting, standing, leaning backwards, leaning forward, walking, moving, riding, etc.). In some embodiments, virtual content determination module 315 may determine the virtual content to be displayed by the wearable extended reality appliance.”) determining, by the one or more computer processors [[of the first edge device]] the mixed reality content does not comply with at least one pre-defined policy associated with the physical location boundary [[of the first edge device]]; (Kahan, ¶ 0258, lines 10-18, “In some embodiments, one or more predetermined extended reality display rules may be associated with an account associated with a user, e.g., as default settings. For example, a user may define in advance a rule to prevent displaying promotional content in selected locations, contexts, and/or times. 
As another example, a user may define in advance a rule to prevent displaying content in selected regions of an FOV of the user, e.g., while driving or crossing a street.”) filtering, by the one or more computer processors [[of the first edge device,]] the mixed reality content to comply with the at least one pre-defined policy [[by replacing the mixed reality content with generic mixed reality content that complies with the at least one pre-defined policy associated with the physical location boundary of the first edge device;]] (Kahan, ¶ 0007, lines 3-12, “These embodiments may involve receiving an indication of an initial location of a particular wearable extended reality appliance; performing a first lookup in a repository for a match between the initial location and a first extended reality display rule associating the particular wearable extended reality appliance with the initial location, wherein the first extended reality display rule permits a first type of content display in the initial location and prevents a second type of content display in the initial location; ...”) based on the mobility of the user in the physical environment, processing, by the one or more computer processors [[of the first edge device,]] the filtered mixed reality content; and (Kahan, ¶ 0090, lines 1-6, “In some embodiments, virtual content determination module 315 may determine the virtual content to be displayed by the wearable extended reality appliance. The virtual content may be determined based on data from input determination module 312, sensors communication module 314, and other sources (e.g., database 380).” and ¶ 0089, lines 8-19, “In one embodiment, the data received from sensors communication module 314 may be used to determine the physical orientation of the input device. The physical orientation of the input device may be indicative of a state of the user and may be determined based on combination of a tilt movement, a roll movement, and a lateral movement. 
Thereafter, the physical orientation of the input device may be used by virtual content determination module 315 to modify display parameters of the virtual content to match the state of the user (e.g., attention, sleepy, active, sitting, standing, leaning backwards, leaning forward, walking, moving, riding, etc.).”) providing, by the one or more computer processors [[of the first edge device,]] the filtered mixed reality content to the user. (Kahan, ¶ 0007, lines 3-18, “These embodiments may involve receiving an indication of an initial location of a particular wearable extended reality appliance; … implementing the first extended reality display rule to thereby enable first instances of the first type of content to be displayed at the initial location via the particular wearable extended reality appliance while preventing second instances of the second type of content from being displayed at the initial location via the particular wearable extended reality appliance; ...” and ¶ 0077, lines 6-14, “Consistent with the present disclosure, XR unit 204 may include a wearable extended reality appliance configured to present virtual content to user 100. ... Additional examples of wearable extended reality appliance may include ... a Mixed Reality (MR) device, or any other device capable of generating extended reality content.”) 8. 
Kahan doesn't explicitly disclose but Yang discloses: [[receiving, by one or more computer processors]] of a first edge device, [[a request for mixed reality content from a user,]] wherein the first edge device is included in an array of edge devices that provide mixed reality content to users, each edge device having an associated physical location boundary [[and one or more pre-defined policies that determine mixed reality content to provide to users located within the physical location boundary]] of the edge device; [[retrieving, by the one or more computer processors]] of the first edge device, [[the mixed reality content;]] [[determining, by the one or more computer processors]] of the first edge device, [[a mobility of the user in a physical environment]] that corresponds to a physical location boundary of the first edge device; [[determining, by the one or more computer processors]] of the first edge device, [[the mixed reality content does not comply with at least one pre-defined policy associated with the physical location boundary]] of the first edge device; [[filtering, by the one or more computer processors]] of the first edge device, [[the mixed reality content to comply with the at least one pre-defined policy by replacing the mixed reality content with generic mixed reality content that complies with the at least one pre-defined policy associated with the physical location boundary]] of the first edge device; [[based on the mobility of the user in the physical environment, processing, by the one or more computer processors]] of the first edge device, [[the filtered mixed reality content; and]] [[providing, by the one or more computer processors]] of the first edge device, [[the filtered mixed reality content to the user.]] (Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. 
The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.” and page 8, ¶ 0074, “The central cloud server 103 is deployed on the central cloud to permanently store all information and user data of the mixed reality application throughout its lifecycle. It can be divided into different sub-modules and numbered according to the different aggregation zones covered by the aggregation edge cloud. For example, a large city has 30 aggregation zones. These aggregation zones can be numbered according to their identity, such as from 1 to 30. The edge cloud server 102 of each aggregation zone obtains the virtual business scenario information of the corresponding aggregation zone from the central cloud server 103. At the same time, in order to avoid boundary effects, the edge cloud server 102 of each aggregation zone can contain information about the boundary areas. That is, the virtual business scenario information stored on the edge cloud servers 102 of adjacent aggregation zones overlaps appropriately.” and page 10, ¶ 0087, “Here, when a mixed reality client moves, the edge cloud server in the area where the mixed reality client is located provides services to it. That is, when the mixed reality client is in the aggregation area A of edge cloud server A, it is provided by A, and when the mixed reality client moves to the aggregation area B of edge cloud server B, it is provided by B. Therefore, after receiving a user request from a mixed reality client, the aforementioned edge cloud server first determines whether the user is in the target convergence area where the edge cloud server is located based on the location information carried in the user request. 
If the user is in the target convergence area where the edge cloud server is located, the aforementioned step of obtaining the 3D map information of the target convergence area where the edge cloud server is located is executed; otherwise, the operation is stopped.”) 9. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the computer-implemented method of Kahan to include the disclosure of an array of edge devices, each having an associated physical boundary location for content distribution, of Yang. The motivation for this modification could have been to provide edge devices closer in proximity to a mixed reality user so that the content can be quickly distributed, instead of through a central server. In addition, an edge device can also process content so that lower end devices can view the content. These advantages are also available as a mixed reality user moves throughout a physical environment and is “switched” between edge servers. 10. Kahan in view of Yang doesn't explicitly disclose but Oetting discloses: [[filtering, by the one or more computer processors of the first edge device, the mixed reality content to comply with the at least one pre-defined policy]] by replacing the mixed reality content with generic mixed reality content that complies with the at least one pre-defined policy associated with the physical location boundary [[of the first edge device;]] (Oetting, [0025], “In one example, the AS 104 may render a digital object based on a user request and on at least one policy that governs user behavior in the XR environment. For instance, a user of the XR environment may contribute some content to the XR environment, such as a personal avatar, an image or video or himself or herself, an emoji, or another type of content. 
However, a policy associated with the XR environment (which may be defined by a host, an administrator, or the like) may impose certain restrictions on the types of content that the users of the XR environment may contribute to the XR environment. For instance, if the XR environment is part of a virtual meeting application, a host of the XR environment may define a policy that requires that all user avatars contributed by users of the XR environment comprise images of the users in professional attire. Thus, the AS 104 may analyze the avatars of the users of the XR environment and may, as part of rendering the avatars for the XR environment, modify (potentially with the permission of the corresponding user(s)) any avatars that do not comply with the policy. For instance, the AS 104 may blur an avatar, cover an avatar with a blank box, replace the avatar with a generic avatar or placeholder, or ask the corresponding user to submit a new avatar. Thus, the AS 104 may render an XR environment for presentation to all users, where the XR environment complies with any policies that apply globally to the XR environment.” and [0039], “In one example, each user of the XR environment may be validated by checking a profile associated with the user. For instance, each user may have an associated profile that specifies information including at least one of: a physical context of the user (e.g., the location and/or technological capabilities of the location from which the user is participating in the XR environment, such as a public or private location, access to Internet of Things devices, etc.), ... and/or additional policies requested by the user that relate to the rendering of the XR environment.” and [0052], “Thus, if an objectionable word is printed on the shirt of an avatar contributed by another user of the virtual meeting, the processing system may blur, block, or otherwise obscure the offensive word in the second user's local rendering of the XR environment only. 
However, the objectionable word may still be visible to other users of the XR environment who have not requested removal of offensive language.” and [0018], “In one example, the core network 102 may include at least one application server (AS) 104, at least one database (DB) 106, and a plurality of edge routers 128-130.”; Examiner’s note: As disclosed by Oetting in [0025], a virtual meeting application in a mixed reality environment could have predefined policies in regard to a user’s avatar and replace an avatar with a generic version if it does not comply with a given policy (such as presenting an avatar in professional attire). In addition, as illustrated in [0039], a user could have a profile that defines parameters of the user’s physical location and desired policies. It is further disclosed in [0052] that user-defined policies could block an objectionable word on the shirt of an avatar in a user’s local rendering at a virtual meeting. In these instances, policies could be defined for specific location boundaries, such as a meeting room, as defined by a user.)

11. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the computer-implemented method of Kahan in view of Yang to include the disclosure of replacing mixed reality content with generic mixed reality content that complies with the at least one pre-defined policy associated with a physical location boundary, of Oetting. The motivation for this modification could have been to allow edge devices to define appropriate content for the mixed reality environment based on defined policies. This could help prevent unwanted information (such as personal information) from being displayed and filter potentially offensive content for users.

12.
As per claim 2, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 1, further comprising: analyzing, by the one or more computer processors of the first edge device, a context of the physical environment; and adapting, by the one or more computer processors of the first edge device, the filtered mixed reality content based on the context of the physical environment. (Kahan, ¶ 0211, lines 1-24, “An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. … An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings). For instance, information may be associated with a specific location based on a particular context, use case, user preference, default setting, and/or relevance. To prevent unwanted distractions, a display rule for a specific location may limit the display of content via a wearable extended reality appliance, e.g., to only display content that is relevant to the particular context or use case.” and Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. 
Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.”) The motivation for this modification is the same as claim 1.

13. As per claim 3, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 2, wherein the context of the physical environment includes at least one of a contextual attribute of the environment and a context of the user in the environment. (Kahan, ¶ 0215, lines 1-9, “For instance, a rule permitting to display a type of content via a wearable extended reality appliance while positioned at a location may be based on one or more default settings, user preferences, safety considerations, lighting conditions, context, preferences of an establishment associated with the location, other content currently displayed via the wearable extended reality appliance, and/or any other factor that may be used to decide whether to display content at a location.”)

14. As per claim 4, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 1, wherein the mobility of the user in the physical environment includes at least one of a location coordinate, a relative position of the user, and whether the user is moving from one physical location to another. (Kahan, ¶ 0007, lines 1-9, “Some disclosed embodiments may include systems, methods and non-transitory computer readable media for enabling location-based virtual content. These embodiments may involve receiving an indication of an initial location of a particular wearable extended reality appliance; performing a first lookup in a repository for a match between the initial location and a first extended reality display rule associating the particular wearable extended reality appliance with the initial location, ...”)

15.
As per claim 5, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 1, wherein filtering the mixed reality content to comply with the at least one pre-defined policy further comprises: removing, by the one or more computer processors of the first edge device, at least one of: malicious content, misleading content, personal information, financial information, and information the user is not authorized to view. (Kahan, ¶ 0279, lines 1-23, “In some embodiments, operations may be performed for managing privacy in an extended reality environment. Data may be received from an image sensor associated with a wearable extended reality appliance. The image data may be reflective of a physical environment. Data may be accessed, the data characterizing a plurality of virtual objects for association with locations in the physical environment. The data may represent a first virtual object and a second virtual object. Privacy settings may be accessed, the privacy settings classifying at least one of the first virtual object and a location of the first virtual object as private. A first extended reality appliance may be classified as approved for presentation of private information. A second extended reality appliance may be classified as not-approved for presentation of the private information. 
A simultaneous presentation of an augmented viewing of the physical environment may be enabled, such that during the simultaneous presentation, the first extended reality appliance may present the first virtual object and the second virtual object in the physical environment, and the second extended reality appliance may present the second virtual object, omitting presentation of the first virtual object in compliance with the privacy settings.” and Kahan, ¶ 0281, lines 27-32, “Some examples of private information (e.g., sensitive data) may include personal identifying information, location information, genetic data, information related to health, financial, business, personal, family, education, political, religious, and/or legal matters, and/or sexual orientation or gender identification.” and Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.”) The motivation for this modification is the same as claim 1. 16. As per claim 7, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 1, wherein processing the filtered mixed reality content further comprises: identifying, by the one or more computer processors of the first edge device, an intent of the user with respect to an interaction with the mixed reality content. 
(Kahan, ¶ 0483, lines 9-16, “In some examples, the hand gestures 3526, 3612 of the user 3510 may indicate a user intention to move the virtual representation of the first participant 3518 to the first environment 3514 (e.g., by drag-and-drop hand gestures, by hold-and-move hand gestures, by selections of the first participant 3518 and its placement location in the first environment 3514, or other suitable indications).” and Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.”) The motivation for this modification is the same as claim 1.

17. Claim 8 is similar in scope to claim 1 except for additional limitations that Kahan in view of Yang, and further in view of Oetting discloses: A computer program product comprising: one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media, (Kahan, ¶ 0012, lines 1-3, “Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, …”) Claim 8 is also rejected under the same rationale as claim 1, described above. The motivation for this modification is the same as claim 1.

18. Claim 9, which is similar in scope to dependent claim 2 and independent claim 8, is thus rejected under the same rationale as described above.

19. Claim 10, which is similar in scope to dependent claim 3 and independent claim 8, is thus rejected under the same rationale as described above.

20. Claim 11, which is similar in scope to dependent claim 4 and independent claim 8, is thus rejected under the same rationale as described above.

21.
Claim 12, which is similar in scope to dependent claim 5 and independent claim 8, is thus rejected under the same rationale as described above.

22. As per claim 13, Kahan in view of Yang, and further in view of Oetting discloses: The computer program product of claim 8, wherein the at least one pre-defined policy of the first edge device is different from a pre-defined policy of a second edge device in the array of edge devices, wherein moving from the physical location boundary of the first edge device to a physical location boundary of the second edge device causes the second edge device to provide mixed reality content to the user that is determined by the pre-defined policy of the second edge device. (Kahan, Fig. 10-11; ¶ 0239, lines 1-16, “By way of a non-limiting example, in FIG. 10, while at location 1002, user 1004 may view via wearable extended reality appliance 1006 multiple virtual objects associated with initial location 1002, such as virtual menu 1010 and corresponding virtual food items 1012. In FIG. 11, at subsequent location 1102, user 1004 may view via wearable extended reality appliance 1006 multiple virtual objects associated with subsequent location 1102, such as promotional coupon 1110, a virtual guide 1112, and a virtual checkout 1114. At initial location 1002, at least one processor (e.g., processing device 460 and/or server 210) may block the display of promotional coupon 1110, virtual guide 1112, and virtual checkout 1114 via wearable extended reality appliance 1106, and at subsequent location 1102, at least one processor may block the display of virtual menu 1010 and virtual food items 1012.” and Kahan, ¶ 0218, lines 48-57, “For example, the first rule may specify that while user 1004 is at initial location 1002, content associated with initial location 1002 may be permitted for display, whereas content for other establishments (e.g., unrelated to initial location 1002) may be blocked. The at least one processor may receive a request (e.g., from a computing device associated with initial location 1002) to display menu 1010 and may determine that menu 1010 corresponds to a first type of content permitted for display at initial location 1002 according to the first rule.” and Kahan, ¶ 0236, lines 10-21, “For instance, the second rule may specify that while user 1004 is at subsequent location 1102, content associated with subsequent location 1102 may be permitted for display, whereas content promoting other establishments (e.g., unassociated with and/or competing with subsequent location 1102) may be blocked. The at least one processor may receive a request (e.g., from a computing device associated with subsequent location 1102) to display promotional coupon 1110 and may determine that promotional coupon 1110 corresponds to a second type of content permitted for display at subsequent location 1102 according to the second rule.” and Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.” and Yang, page 8, ¶ 0074, “The central cloud server 103 is deployed on the central cloud to permanently store all information and user data of the mixed reality application throughout its lifecycle. It can be divided into different sub-modules and numbered according to the different aggregation zones covered by the aggregation edge cloud. For example, a large city has 30 aggregation zones. These aggregation zones can be numbered according to their identity, such as from 1 to 30.” and Yang, page 10, ¶ 0087, “Here, when a mixed reality client moves, the edge cloud server in the area where the mixed reality client is located provides services to it. That is, when the mixed reality client is in the aggregation area A of edge cloud server A, it is provided by A, and when the mixed reality client moves to the aggregation area B of edge cloud server B, it is provided by B.”)

23. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the computer program product of claim 8 of Kahan in view of Oetting to include the disclosure of two edge devices, where content distribution changes from the first edge device to the second edge device as a mixed reality user moves between the edge devices' boundaries, of Yang. The motivation for this modification could have been to provide a mixed reality user with a seamless content experience, so that the user may not even realize they have crossed an edge device's boundary. As a result, a mixed reality user will remain in close proximity to an edge device so that the content can be quickly distributed, instead of through a central server. In addition, an edge device can also process content so that lower-end devices can view the content.

24. Claim 14, which is similar in scope to dependent claim 7 and independent claim 8, is thus rejected under the same rationale as described above.

25.
Claim 15 is similar in scope to claim 1, except for additional limitations that Kahan in view of Yang, and further in view of Oetting discloses: A computer system comprising: one or more computer processors; one or more computer-readable memories; and one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media for execution by at least one of the one or more computer processors via at least one of the one or more memories, (Kahan, ¶ 0012, lines 1-5, “Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executed by at least one processing device and perform any of the methods described herein.” and ¶ 0065, lines 11-14, “In additional embodiments, the extended reality appliance may include memory for storing at least one of virtual data configured to enable a presentation of virtual content, …”) Claim 15 is also rejected under the same rationale as claim 1, described above. The motivation for this modification is the same as for claim 1.

26. Claim 16, which is similar in scope to dependent claim 2 and independent claim 15, is thus rejected under the same rationale as described above.

27. Claim 17, which is similar in scope to dependent claim 3 and independent claim 15, is thus rejected under the same rationale as described above.

28. Claim 18, which is similar in scope to dependent claim 5 and independent claim 15, is thus rejected under the same rationale as described above.

29. Claim 19, which is similar in scope to dependent claim 13 and independent claim 15, is thus rejected under the same rationale as described above. The motivation for this modification is the same as for claim 13.

30. Claim 20, which is similar in scope to dependent claim 7 and independent claim 15, is thus rejected under the same rationale as described above.

31. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kahan et al. (US-2023/0237192-A1, hereinafter "Kahan") in view of Yang (CN-115830274-A), and further in view of Oetting et al. (US-2022/0028122-A1, hereinafter "Oetting"), and further in view of Guim Bernat et al. (US-2021/0144517-A1, hereinafter "Guim Bernat").

32. As per claim 6, Kahan in view of Yang, and further in view of Oetting discloses: The computer-implemented method of claim 1, wherein the first edge device in the array of edge devices [[is a static computing device that hosts an instance of a mixed reality (MR) content control system]] and one or more pre-defined policies for the physical location boundary of the first edge device, and a second edge device in the array of edge devices [[is a mobile computing device that hosts an instance of the MR content control system]] and one or more pre-defined policies for a physical location boundary of the second edge device. (Yang, page 8, ¶ 0072, “Optionally, the mixed reality implementation ... may include a mixed reality client 101, an edge cloud server 102, and a central cloud server 103. The number of mixed reality clients and edge cloud servers can be determined according to the actual situation. Figure 1 only uses one mixed reality client and one edge cloud server as an example for illustration.” and Kahan, ¶ 0218, lines 48-51, “For example, the first rule may specify that while user 1004 is at initial location 1002, content associated with initial location 1002 may be permitted for display …” and ¶ 0236, lines 10-13, “For instance, the second rule may specify that while user 1004 is at subsequent location 1102, content associated with subsequent location 1102 may be permitted for display …”)

33.
Kahan in view of Yang, and further in view of Oetting does not explicitly disclose, but Guim Bernat discloses: [[The computer-implemented method of claim 1, wherein the first edge device in the array of edge devices]] is a static computing device that hosts an instance of a mixed reality (MR) content control system [[and one or more pre-defined policies for the physical location boundary of the first edge device, and a second edge device in the array of edge devices]] is a mobile computing device that hosts an instance of the MR content control system [[and one or more pre-defined policies for a physical location boundary of the second edge device.]] (Guim Bernat, Fig. 12, [0164], “In further examples, FIG. 12 may utilize various types of mobile edge nodes, such as an edge node hosted in a vehicle (car/truck/tram/train) or other mobile unit, as the edge node will move to other geographic locations along the platform hosting it. With vehicle-to-vehicle communications, individual vehicles may even act as network edge nodes for other cars, (e.g., to perform caching, reporting, data aggregation, etc.). Thus, it will be understood that the application components provided in various edge nodes may be distributed in static or mobile settings, including coordination between some functions or operations at individual endpoint devices or the edge gateway nodes 1220, some others at the edge resource node 1240, and others in the core data center 1250 or global network cloud 1260.” and [0610], “Edge computing often involves physically locating modular computing pools, as positioned at the edge of a network. ... Edge computing installations are expanding to support a variety of use cases, such as smart cities, augmented or virtual reality, assisted or autonomous driving, factory automation, and threat detection, among others.” and [0309], “Image recognition services such as Vuforia and Catchoom have several AR apps using their service. This can only be expected to rise in the future as more such AR apps are created and users of these AR apps increase. Since their services are currently hosted in the cloud, the latency and turnaround time of image recognition is pretty high. ... Therefore, moving the image recognition services from the cloud to the edge can improve the total turnaround time giving users seamless experience.” and [0312], “Third-party service providers (TSPs) can monetize an AR service at the edge. The edge can host a central image recognition and object identification service which can be used by the AR apps. The AR apps specify the targets through an API and this service can respond with objects as desired when an input request is sent. The apps would pay for their service in the edge.” and [0559], “Each region may have a single orchestrator (e.g., 3411, 3421), rather than each edge location having its own orchestrator. Clusters may be created based on multiple factors, such as edge location locality (zoning) or edge cloud type (for example all street cabinets may be in one cluster while cell towers in another). Regional security policies can be supplied dynamically to a domain context where an acceptable multi-region policy can be determined for a domain context.”; Guim Bernat discloses in [0309] that augmented reality content services are often hosted in the cloud (a static location), but also suggests in [0309] and [0312] that hosting AR content services at the edge (such as a mobile edge location) would help reduce latency and improve a user's AR experience.)

34. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the computer-implemented method of claim 1 of Kahan in view of Yang, and further in view of Oetting to include the disclosure of a static edge computing device and a mobile edge computing device that each host an instance of a mixed reality (MR) content control system, of Guim Bernat. The motivation for this modification could have been to set up edge devices throughout a physical environment so that those devices can dynamically adapt to the demands of a mixed reality ecosystem. For instance, static edge devices could ensure that mixed reality services are supplied over a wide area, even if a local edge device is not nearby (providing a base level of coverage in an area). In addition, mobile edge devices could be deployed to help host mixed reality content where there is more demand in a particular area. By doing so, this can help reduce the latency of mixed reality applications, as edge devices can service nearby MR users.

Conclusion

35. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jiang et al. (TW-I823146-B) discloses a mixed reality system with mobile edge computing devices.

36. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW CLOTHIER/
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614
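The mechanism at the center of this rejection combines Kahan's location-scoped content rules, Yang's handoff between edge servers as a user moves between areas, and the claimed replacement of non-compliant content with generic content. A minimal sketch of that combination is below; the rectangular-boundary model and all names (`EdgeDevice`, `serving_device`, `GENERIC_CONTENT`) are hypothetical illustrations, not taken from any of the cited references.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the claimed "generic mixed reality content"
# that replaces anything a local policy does not permit.
GENERIC_CONTENT = "generic MR placeholder"

@dataclass
class EdgeDevice:
    """One edge device with its own location boundary and pre-defined policy."""
    name: str
    boundary: tuple        # (x_min, y_min, x_max, y_max): toy rectangular boundary
    allowed_tags: frozenset  # content categories this device's policy permits

    def contains(self, pos):
        """True if a user position falls inside this device's boundary."""
        x, y = pos
        x_min, y_min, x_max, y_max = self.boundary
        return x_min <= x <= x_max and y_min <= y <= y_max

    def filter_content(self, tag, content):
        """Pass policy-compliant content through; swap anything else for generic content."""
        return content if tag in self.allowed_tags else GENERIC_CONTENT

def serving_device(devices, pos):
    """Hand the user off to whichever edge device's boundary contains them."""
    for device in devices:
        if device.contains(pos):
            return device
    return None  # outside every boundary; no edge service available
```

Moving from a position inside device A's boundary to one inside device B's boundary changes which device serves the user, and therefore which policy filters the content: a "coupon" item that device A replaces with generic content would pass through device B unchanged.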

Prosecution Timeline

May 11, 2023
Application Filed
May 22, 2025
Non-Final Rejection — §103
Jul 28, 2025
Interview Requested
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 07, 2025
Examiner Interview Summary
Aug 21, 2025
Response Filed
Nov 08, 2025
Final Rejection — §103
Dec 15, 2025
Interview Requested
Jan 07, 2026
Applicant Interview (Telephonic)
Jan 10, 2026
Examiner Interview Summary
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530842
AIRBORNE LiDAR POINT CLOUD FILTERING METHOD AND DEVICE BASED ON SUPER-VOXEL GROUND SALIENCY
2y 5m to grant Granted Jan 20, 2026
Patent 12499800
IN-VEHICLE DISPLAY DEVICE
2y 5m to grant Granted Dec 16, 2025

Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
1y 11m
Median Time to Grant
High
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
