Prosecution Insights
Last updated: April 19, 2026
Application No. 18/816,085

APPARATUSES, SYSTEM AND METHOD FOR AUGMENTED REALITY DISPLAY

Non-Final OA (§102, §103)

Filed: Aug 27, 2024
Examiner: DANG, PHILIP
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (363 granted / 470 resolved; +19.2% vs TC avg)
Interview Lift: strong, +33.2% across resolved cases with interview
Avg Prosecution: 2y 10m (49 currently pending)
Total Applications: 519 across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
Percentages vs Tech Center average estimate • Based on career data from 470 resolved cases

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Objections

Claims 21, 33, and 38 are objected to. The claim limitation “AR display” should read “augmented reality (AR) display”. An appropriate correction is required.

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, “a data packet” and “a see-through AR display” must be shown or the feature(s) must be canceled from claims 21-39. No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21-35 and 38-39 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dolev (US Patent 11,846,981 B2) (“Dolev”).

Regarding claim 21, Dolev meets the claim limitations as follows: An apparatus (apparatus) [Dolev: col. 25, line 63; Fig. 5] comprising: at least one processor (at least one processor) [Dolev: col. 27, line 3; Fig. 5]; and at least one memory (memory devices) [Dolev: col. 26, line 60; Fig. 5] storing instructions that, when executed by the at least one processor, cause the apparatus at least to (Modules 512-517 may contain software instructions for execution by at least one processor) [Dolev: col. 27, line 1-2; Fig. 5]: enable display of at least one visual object on a first AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Figs. 36-40], wherein the display of the at least one visual object on the first AR display (The object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs. 6A-7, 10-13] is defined by a first position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance.
A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and a first transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information ( e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8]; obtain (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to.
For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] at least one display constraint of a second AR display (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col. 54, line 38-45], wherein the at least one display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23] of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 
32, line 63-67]; (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 44, line 4-32]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. 
As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations. An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52]; transform ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter to produce a second position parameter and a second transparency parameter respectively ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 
3]); determine if a combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is a permitted combination ((In some embodiments, information in virtual may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. 
For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. 
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that his invention has a control system that will determine several aspects of an object such as moveable or stationary, and at least partially opaque, translucent, and/or transparent in order to determine whether and where to display the object); and in dependence upon a determination that the combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is not a permitted combination (When a rule associating the particular wearable extended reality appliance with the new location is not found in the data store, the at least one processor may retrieve a default rule instead (e.g., corresponding to a location type for the new location)) [Dolev: col.
77, line 16-20], adapt at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] such that a combination of the second position parameter and the second transparency parameter is a permitted combination ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]; (For example, assume there are two extended reality objects: a first extended reality object that is viewable by the wearer and a second extended reality object that is not viewable by the wearer (e.g., the second extended reality object is outside the wearer's field of view). It is noted that there may be multiple extended reality objects that the wearer can see based on the wearer's point of view and multiple extended reality objects that the wearer cannot see based on the wearer's point of view. For purposes of explanation only, it is assumed that there are only two extended reality objects, and that the wearer can see the first extended reality object and cannot see the second extended reality object. 
Because the wearer can only see the first extended reality object, only changes in the first extended reality object are displayed to the wearer. A change in the first extended reality object may include any type of visible change to the first extended reality object, such as a change in viewing angle (e.g., caused by the wearer manipulating the first extended reality object or by the first extended reality object moving by itself) or a change in a property of the first extended reality object (e.g., a change in shape, size, color, opacity, object orientation, or the like). Even though changes in the second extended reality object (e.g., visible changes similar in scope to changes in the first extended reality object) may be occurring at the same time as changes in the first extended reality object, the changes to the second extended reality object would not be visible to the wearer. However, changes to the second extended reality object may be visible to a viewer (e.g., a non-wearer) either from a different perspective or at a different point in time, as will be explained below) [Dolev: col. 114, line 60 - col. 115, line 23] – Note: Dolev discloses that, in case the new location, size, and/or opacity of the object is not fit for display, there is another option by which the object can be seen).

Regarding claims 22 and 39, Dolev meets the claim limitations as set forth in claims 21 and 38. Dolev further meets the claim limitations as follows: create ((In some examples, the image data may be read from memory, may be received from an external device, may be generated (for example, using a generative model), and so forth) [Dolev: col. 10, line 9-12]; (image data captured by at least one image sensor associated with a wearable extended reality appliance) [Dolev: col. 3, line 65-66]) a data packet (The data may be received as individual packets) [Dolev: col.
60, line 59-61] comprising at least the second position parameter and the second transparency parameter ((In some embodiments, information in virtual may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. 
For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. 
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that his invention has a control system that will determine several aspects of an object such as moveable or stationary, and at least partially opaque, translucent, and/or transparent in order to determine whether and where to display the object); and transmit the created data packet towards a second AR apparatus (Such operations may additionally include communicating at least one initial location signal ( e.g., an indication of an initial location) between a particular wearable extended reality appliance and a location sensor (e.g., including transmitting an initial location signal from a particular wearable extended reality appliance to a location sensor and/or receiving an initial location signal by at least one processor associated with a particular wearable extended reality appliance from a location sensor). Such operations may further include using at least one location signal associated with a wearable to determine an initial location for a particular wearable extended reality appliance.) [Dolev: col. 61, line 12-27] such that the display of the at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col.
72, line 54-55] on the second AR display is enabled (In some embodiments, the first instances of the first type of content include a first plurality of virtual objects, and wherein the second instances of the second type of content include a second plurality of virtual objects. An object may include an item, element, structure, building, thing, device, document, message, article, person, animal, or vehicle. A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display ( e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer. A virtual object may be displayed in two or three dimensions.) [Dolev: col. 71, line 40-55].

Regarding claim 23, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: receive an indication that the display of the at least one visual object is to be moved from the first AR display to the second AR display (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element.
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 44, line 4-32]; and based on receiving the indication (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed) [Dolev: col. 44, line 4-7], transform ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. 
In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 3]).

Regarding claim 24, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein at least one of the first AR display or the second AR display is a see-through AR display (In some embodiments, the wearable extended reality appliance may include a see-through lens such that the wearer can directly view the physical environment and the extended reality objects may be projected onto the lens as described herein.) [Dolev: col. 117, line 24-28].

Regarding claim 25, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein transforming the first position parameter and the first transparency parameter (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] comprises a content-independent transform of the first position parameter and the first transparency parameter (A virtual screen may have any desired shape, color, contour, form, texture, pattern, or other feature or characteristic.) [Dolev: col. 134, line 42-44].

Regarding claim 26, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows:
wherein adapting at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] comprises a content-dependent transform of at least one of the second position parameter or the second transparency parameter (A virtual screen may have any desired shape, color, contour, form, texture, pattern, or other feature or characteristic.) [Dolev: col. 134, line 42-44];

Regarding claim 27, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows. wherein adapting at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application.
For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] comprises either: adapting the second position parameter and not adapting the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22]; or adapting the second transparency parameter and not adapting the second position parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. 
For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22].

Regarding claim 28, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows. wherein the first position parameter indicates a position of the at least one visual object as being in one of a plurality of first display regions of the first AR display (In some embodiments, the first region of the physical space includes a physical object, and moving the virtual representation of the first participant includes overlying the virtual representation of the first participant on the physical object. A physical object may include any tangible thing, item, or entity, that exists in the physical world. Overlying refers to a condition where something is positioned at least partially on top of or at least partially covering or blocking something else. For example, the physical object may include a floor of the physical space, and the virtual representation of the first participant may be overlying on the floor (e.g., to simulate the first participant standing on the floor). In some examples, the physical object may include, for example, a chair, seat, or sofa in the physical space, and the virtual representation of the first participant may be overlying on the chair, seat, or sofa (e.g., to simulate the first participant sitting on the chair, seat, or sofa). The physical object may include any other type of physical item that may be located in the physical space as desired.) [Dolev: col. 142, line 7-25; Figs. 36-40]; and wherein the second position parameter indicates a position of the at least one visual object as being in one of a plurality of second display regions of the second AR display. (With reference to FIG.
36, in response to the first selection 3524 and the first environmental placement location 3610, at least one processor associated with the wearable extended reality appliance 3512 may move a virtual representation of the first participant 3518 to the first environment 3514 in a manner simulating the first participant 3614 physically located in the first region of the physical space while the second participant 3520 remains in the second peripheral environment 3516. In some examples, the hand gestures 3526, 3612 of the user 3510 may indicate a user intention to move the virtual representation of the first participant 3518 to the first environment 3514 (e.g., by drag-and-drop hand gestures, by hold-and-move hand gestures, by selections of the first participant 3518 and its placement location in the first environment 3514, or other suitable indications). With reference to FIG. 37, after moving the virtual representation of the first participant 3518 to the first environment 3514, the virtual representation of the first participant 3518 may, for example, not be displayed in the second peripheral environment 3516, and the virtual representation of the first participant 3614 may, for example, be displayed in the first environment 3514) [Dolev: col. 142, line 26-47; Figs. 36-40].

Regarding claim 29, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows. wherein the at least one display constraint of the second AR display is at least partially pre-defined ((User interfaces may be indispensable for interacting with computing devices but may occupy significant space on an electronic display, leaving less room for displaying documents, images, or other information.
Interfacing with a computing device while wearing a wearable extended reality appliance may alleviate some of these constraints by allowing a user to move a user interface to an area in the extended reality space (e.g., virtual space), beyond predefined boundaries of an electronic screen) [Dolev: col. 33, line 34-42]; (presented in the second display region outside the predefined boundaries of the first display region) [Dolev: col. 2, line 10-11]).

Regarding claim 30, Dolev meets the claim limitations as set forth in claim 29. Dolev further meets the claim limitations as follows. wherein the at least one display constraint of the second AR display is further defined by a user input ((the user interface is presented in the second display region outside the predefined boundaries of the first display region) [Dolev: col. 2, line 10-11]; (User interfaces may be indispensable for interacting with computing devices but may occupy significant space on an electronic display, leaving less room for displaying documents, images, or other information. Interfacing with a computing device while wearing a wearable extended reality appliance may alleviate some of these constraints by allowing a user to move a user interface to an area in the extended reality space (e.g., virtual space), beyond predefined boundaries of an electronic screen) [Dolev: col. 33, line 34-42]).

Regarding claim 31, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows. wherein enabling display of at least one visual object on a first AR display comprises displaying a plurality of visual objects on the first AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs. 36-40], and wherein the display of the plurality of visual objects on the first AR display is defined (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs.
36-40] by a plurality of respective first position parameters ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and first transparency parameters. ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information (e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8]).

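The independent claims mapped above all turn on the same mechanism: a first position parameter and a first transparency parameter are transformed into second parameters, and the resulting pair must be a "permitted combination" under the second AR display's display constraint. The following is a minimal illustrative sketch of that claim structure only; every name, region, and value here is hypothetical and comes neither from Dolev nor from the application as filed.

```python
# Hypothetical sketch of the claimed flow: transform a visual object's
# (position, transparency) parameters for a second AR display, then check
# the result against that display's constraint (its permitted combinations).
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayParams:
    region: str      # a named display region, e.g. "top-left" (illustrative)
    alpha: float     # transparency, 0.0 (invisible) .. 1.0 (opaque)

# A display constraint modeled as the set of permitted
# (region, transparency-band) combinations for the second AR display.
PERMITTED = {
    ("top-left", "translucent"),
    ("top-right", "translucent"),
    ("center", "opaque"),
}

def alpha_band(alpha: float) -> str:
    # Coarsen a numeric transparency into a band used by the constraint.
    return "opaque" if alpha >= 0.8 else "translucent"

def transform(first: DisplayParams) -> DisplayParams:
    # Content-independent transform: remaps region names and rescales alpha
    # without ever inspecting the visual object itself.
    region_map = {"main": "center", "sidebar": "top-left", "overlay": "top-right"}
    return DisplayParams(region_map.get(first.region, first.region),
                         min(1.0, first.alpha * 0.9))

def is_permitted(p: DisplayParams) -> bool:
    return (p.region, alpha_band(p.alpha)) in PERMITTED

second = transform(DisplayParams("sidebar", 0.5))
print(second.region, round(second.alpha, 2), is_permitted(second))
```

The sketch only makes the claim language concrete: the constraint is data describing permitted position/transparency combinations, and the transform is "content-independent" in the sense of claim 25 because it operates on the parameters alone, not on the displayed content.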
Regarding claim 32, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows. comprising the first AR display (data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52]; (A display may refer to, for example, any device configured to permit exterior viewing. A display may include, for example, a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a liquid-crystal display (LCD), a dot-matrix display, a screen, a touch screen, a light indicator, a light source, or any other device configured to provide visual or optical output) [Dolev: col. 158, line 8-14].

Regarding claim 33, Dolev meets the claim limitations as follows: An apparatus (apparatus) [Dolev: col. 25, line 63; Fig. 5] comprising: at least one processor (at least one processor) [Dolev: col. 27, line 3; Fig. 5]; and at least one memory (memory devices) [Dolev: col. 26, line 60; Fig. 5] storing instructions that, when executed by the at least one processor, cause the apparatus at least to (Modules 512-517 may contain software instructions for execution by at least one processor) [Dolev: col. 27, line 1-2; Fig. 5]: transmit (transmit data) [Dolev: col. 179, line 34-35; Figs. 1-7] at least one display constraint of a second AR display to a first apparatus (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col. 54, line 38-45; Figs.
1-7]; (system 200 may include an input unit 202, an XR unit 204, a mobile communications device 206, and a remote processing unit 208. Remote processing unit 208 may include a server 210 coupled to one or more physical or virtual storage devices, such as a data structure 212) [Dolev: col. 13, line 33-38; Fig. 2] – Please see the AR display and the first apparatus in Figures 1-7), wherein a display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23; Figs. 1-7] of the second AR display indicates a permitted combination of a position parameter and a transparency parameter ((A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element (e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance).
Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location) [Dolev: col. 44, line 4-32; Figs. 1-7]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations. An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52; Figs. 1-7]; receive (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. 
A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] a data packet from the first apparatus (Receiving may refer to accepting delivery of, acquiring, retrieving, generating, obtaining or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor. The data may be received via a communications channel, such as a wired channel (e.g., cable, fiber) and/or wireless channel (e.g., radio, cellular, optical, IR). The data may be received as individual packets or as a continuous stream of data.) [Dolev: col. 60, line 53-61] comprising at least a second position parameter and a second transparency parameter ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display (e.g., including a display region defined by one or more pixels) may correspond to a physical electronic screen, and the display region may be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig.
3]) wherein the combination of the second position parameter and the second transparency parameter is a permitted combination (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10]; and in response to receiving the data packet, enable display of at least one visual object on the second AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Figs. 36-40]; In some examples, the data may be received from a memory unit, may be received from an external device, may be generated based on other information (for example, generated using a rendering algorithm based on at least one of geometrical information, texture information or textual information), and so forth. Receiving an indication of an initial location of a particular wearable extended reality appliance may include performing one or more operations. Such operations may include, for example, identifying a particular wearable extended reality appliance, identifying at least one location sensor, and/or establishing a communications link between a particular wearable extended reality appliance and at least one sensor. Such operations may additionally include communicating at least one initial location signal (e.g., an indication of an initial location) between a particular wearable extended reality appliance and a location sensor (e.g., including transmitting an initial location signal from a particular wearable extended reality appliance to a location sensor and/or receiving an initial location signal by at least one processor associated with a particular wearable extended reality appliance from a location sensor). 
Such operations may further include using at least one location signal associated with a wearable to determine an initial location for a particular wearable extended reality appliance) [Dolev: col. 61, line 3-27], wherein the display of the at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55] on the second AR display is defined by the second position parameter and the second transparency parameter ((In some embodiments, information in virtual form may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent.
Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. 
Providing a control may include displaying a graphic element (e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses a control system that determines several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object).

Regarding claim 34, Dolev meets the claim limitations as set forth in claim 33. Dolev further meets the claim limitations as follows. wherein the display of the at least one visual object on the second AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs.
36-40] is further defined by an importance parameter ((data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52].

Regarding claim 35, Dolev meets the claim limitations as set forth in claim 34. Dolev further meets the claim limitations as follows. wherein the importance parameter is at least one of: user-defined, or predefined by the apparatus ((data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52].

Regarding claim 38, Dolev meets the claim limitations as follows: A method comprising (a method) [Dolev: col. 109, line 62]: enabling display of at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55] on a first AR display, wherein the display of the at least one visual object on the first AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55]; (The object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs. 6A-7, 10-13] is defined by a first position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance.
A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and a first transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information (e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8]; identifying (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to.
For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] at least one display constraint of a second AR display (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col. 54, line 38-45], wherein the at least one display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23] of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 
32, line 63-67]; (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 44, line 4-32]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. 
As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations. An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52]; transforming ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter to produce a second position parameter and a second transparency parameter respectively ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 
3]); determining if a combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is a permitted combination ((In some embodiments, information in virtual may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. 
For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. 
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that his invention has a control system that will determine several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object); and in dependence upon a determination that the combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is not a permitted combination (When a rule associating the particular wearable extended reality appliance with the new location is not found in the data store, the at least one processor may retrieve a default rule instead (e.g., corresponding to a location type for the new location)) [Dolev: col.
77, line 16-20], adapt at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] such that a combination of the second position parameter and the second transparency parameter is a permitted combination ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]; (For example, assume there are two extended reality objects: a first extended reality object that is viewable by the wearer and a second extended reality object that is not viewable by the wearer (e.g., the second extended reality object is outside the wearer's field of view). It is noted that there may be multiple extended reality objects that the wearer can see based on the wearer's point of view and multiple extended reality objects that the wearer cannot see based on the wearer's point of view. For purposes of explanation only, it is assumed that there are only two extended reality objects, and that the wearer can see the first extended reality object and cannot see the second extended reality object. 
Because the wearer can only see the first extended reality object, only changes in the first extended reality object are displayed to the wearer. A change in the first extended reality object may include any type of visible change to the first extended reality object, such as a change in viewing angle (e.g., caused by the wearer manipulating the first extended reality object or by the first extended reality object moving by itself) or a change in a property of the first extended reality object (e.g., a change in shape, size, color, opacity, object orientation, or the like). Even though changes in the second extended reality object (e.g., visible changes similar in scope to changes in the first extended reality object) may be occurring at the same time as changes in the first extended reality object, the changes to the second extended reality object would not be visible to the wearer. However, changes to the second extended reality object may be visible to a viewer (e.g., a non-wearer) either from a different perspective or at a different point in time, as will be explained below) [Dolev: col. 114, line 60 - col. 115, line 23] – Note: Dolev discloses that, in case the new location, size, and/or opacity of the object is not fit for display, there is another option by which the object can still be seen); and enabling display of the at least one visual object on the second AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55]; (The object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs.
6A-7, 10-13], wherein the display of the at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55] on the second AR display is defined by the second position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and the second transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col.
42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information (e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3.
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 21-35 and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Dolev (US Patent 11,846,981 B2), (“Dolev”), in view of Shimizu et al. (US Patent Application Publication US 2023/0385011 A1), (“Shimizu”).

Regarding claim 21, Dolev meets the claim limitations as follows. An apparatus (apparatus) [Dolev: col. 25, line 63; Fig. 5] comprising: at least one processor (at least one processor) [Dolev: col. 27, line 3; Fig. 5]; and at least one memory (memory devices) [Dolev: col. 26, line 60; Fig. 5] storing instructions that, when executed by the at least one processor, cause the apparatus at least to (Modules 512-517 may contain software instructions for execution by at least one processor) [Dolev: col. 27, line 1-2; Fig. 5]: enable display of at least one visual object on a first AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Figs. 36-40], wherein the display of the at least one visual object on the first AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55]; (The object may be presented electronically or digitally.
Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs. 6A-7, 10-13] is defined by a first position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and a first transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 
42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information (e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8]; obtain (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] at least one display constraint of a second AR display (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col.
54, line 38-45], wherein the at least one display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23] of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter ((A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 
44, line 4-32]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations. An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52]; transform ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter to produce a second position parameter and a second transparency parameter respectively ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 
44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 3]); determine if a combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is a permitted combination ((In some embodiments, information in virtual may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 
43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). 
Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 
44, line 32] – Note: Dolev discloses that his invention has a control system that will determine several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object); and in dependence upon a determination that the combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is not a permitted combination (When a rule associating the particular wearable extended reality appliance with the new location is not found in the data store, the at least one processor may retrieve a default rule instead (e.g., corresponding to a location type for the new location)) [Dolev: col. 77, line 16-20], adapt at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col.
44, line 50-22] such that a combination of the second position parameter and the second transparency parameter is a permitted combination (For example, assume there are two extended reality objects: a first extended reality object that is viewable by the wearer and a second extended reality object that is not viewable by the wearer (e.g., the second extended reality object is outside the wearer's field of view). It is noted that there may be multiple extended reality objects that the wearer can see based on the wearer's point of view and multiple extended reality objects that the wearer cannot see based on the wearer's point of view. For purposes of explanation only, it is assumed that there are only two extended reality objects, and that the wearer can see the first extended reality object and cannot see the second extended reality object. Because the wearer can only see the first extended reality object, only changes in the first extended reality object are displayed to the wearer. A change in the first extended reality object may include any type of visible change to the first extended reality object, such as a change in viewing angle (e.g., caused by the wearer manipulating the first extended reality object or by the first extended reality object moving by itself) or a change in a property of the first extended reality object (e.g., a change in shape, size, color, opacity, object orientation, or the like). Even though changes in the second extended reality object (e.g., visible changes similar in scope to changes in the first extended reality object) may be occurring at the same time as changes in the first extended reality object, the changes to the second extended reality object would not be visible to the wearer. However, changes to the second extended reality object may be visible to a viewer (e.g., a non-wearer) either from a different perspective or at a different point in time, as will be explained below) [Dolev: col. 114, line 60 - col. 
115, line 23] – Note: Dolev discloses that, where the new location, size, and/or opacity of the object would not be fit for display, there is another option by which the object can still be seen). In the same field of endeavor, Shimizu further discloses the claim limitations as follows: wherein the at least one display constraint of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter (In each virtual object, many parameters such as polygon mesh information, vertex information, material information, rendering information of gloss and shadow, physical calculation information such as collision, friction, and light, three-dimensional spatial coordinate position, animation, color information, transparency, effects of video and sound, and control script are set, and when all parameters are combined, an enormous amount of setting data is obtained) [Shimizu: para. 0041]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dolev with those of Shimizu to implement Shimizu’s method. Therefore, the combination of Dolev with Shimizu would enable the system to increase a virtual reality feeling (i.e., a sense of immersion in the virtual space) at the time of viewing [Shimizu: para. 0036]. Regarding claims 22 and 39, Dolev meets the claim limitations as set forth in claims 21 and 38. Dolev further meets the claim limitations as follows: create ((In some examples, the image data may be read from memory, may be received from an external device, may be generated (for example, using a generative model), and so forth) [Dolev: col. 10, line 9-12]; (image data captured by at least one image sensor associated with a wearable extended reality appliance) [Dolev: col. 3, line 65-66]) a data packet (The data may be received as individual packets) [Dolev: col.
60, line 59-61] comprising at least the second position parameter and the second transparency parameter ((In some embodiments, information in virtual form may be viewable only by a wearer of a wearable extendible reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extendible reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extendible reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object.
For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extendible reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. 
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that his invention has a control system that determines several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object); and transmit the created data packet towards a second AR apparatus (Such operations may additionally include communicating at least one initial location signal ( e.g., an indication of an initial location) between a particular wearable extended reality appliance and a location sensor (e.g., including transmitting an initial location signal from a particular wearable extended reality appliance to a location sensor and/or receiving an initial location signal by at least one processor associated with a particular wearable extended reality appliance from a location sensor). Such operations may further include using at least one location signal associated with a wearable to determine an initial location for a particular wearable extended reality appliance.) [Dolev: col. 61, line 12-27] such that the display of the at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col.
72, line 54-55] on the second AR display is enabled (In some embodiments, the first instances of the first type of content include a first plurality of virtual objects, and wherein the second instances of the second type of content include a second plurality of virtual objects. An object may include an item, element, structure, building, thing, device, document, message, article, person, animal, or vehicle. A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display ( e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer. A virtual object may be displayed in two or three dimensions.) [Dolev: col. 71, line 40-55]. Regarding claim 23, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: receive an indication that the display of the at least one visual object is to be moved from the first AR display to the second AR display (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element.
Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 44, line 4-32]; and based on receiving the indication (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed) [Dolev: col. 44, line 4-7], transform ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. 
In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 3]). Regarding claim 24, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein at least one of the first AR display or the second AR display is a see-through AR display (In some embodiments, the wearable extended reality appliance may include a see-through lens such that the wearer can directly view the physical environment and the extended reality objects may be projected onto the lens as described herein.) [Dolev: col. 117, line 24-28]; Regarding claim 25, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein transforming the first position parameter and the first transparency parameter (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] comprises a content-independent transform of the first position parameter and the first transparency parameter (A virtual screen may have any desired shape, color, contour, form, texture, pattern, or other feature or characteristic.) [Dolev: col. 134, line 42-44]. Regarding claim 26, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows:
wherein adapting at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] comprises a content-dependent transform of at least one of the second position parameter or the second transparency parameter (A virtual screen may have any desired shape, color, contour, form, texture, pattern, or other feature or characteristic.) [Dolev: col. 134, line 42-44]; Regarding claim 27, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein adapting at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application.
For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] comprises either: adapting the second position parameter and not adapting the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22]; or adapting the second transparency parameter and not adapting the second position parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. 
For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22]. Regarding claim 28, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein the first position parameter indicates a position of the at least one visual object as being in one of a plurality of first display regions of the first AR display (In some embodiments, the first region of the physical space includes a physical object, and moving the virtual representation of the first participant includes overlying the virtual representation of the first participant on the physical object. A physical object may include any tangible thing, item, or entity, that exists in the physical world. Overlying refers to a condition where something is positioned at least partially on top of or at least partially covering or blocking something else. For example, the physical object may include a floor of the physical space, and the virtual representation of the first participant may be overlying on the floor (e.g., to simulate the first participant standing on the floor). In some examples, the physical object may include, for example, a chair, seat, or sofa in the physical space, and the virtual representation of the first participant may be overlying on the chair, seat, or sofa ( e.g., to simulate the first participant sitting on the chair, seat, or sofa). The physical object may include any other type of physical item that may be located in the physical space as desired.) [Dolev: col. 142, line 7-25; Figs. 36-40]; and wherein the second position parameter indicates a position of the at least one visual object as being in one of a plurality of second display regions of the second AR display. (With reference to FIG.
36, in response to the first selection 3524 and the first environmental placement location 3610, at least one processor associated with the wearable extended reality appliance 3512 may move a virtual representation of the first participant 3518 to the first environment 3514 in a manner simulating the first participant 3614 physically located in the first region of the physical space while the second participant 3520 remains in the second peripheral environment 3516. In some examples, the hand gestures 3526, 3612 of the user 3510 may indicate a user intention to move the virtual representation of the first participant 3518 to the first environment 3514 (e.g., by drag-and-drop hand gestures, by hold-and-move hand gestures, by selections of the first participant 3518 and its placement location in the first environment 3514, or other suitable indications). With reference to FIG. 37, after moving the virtual representation of the first participant 3518 to the first environment 3514, the virtual representation of the first participant 3518 may, for example, not be displayed in the second peripheral environment 3516, and the virtual representation of the first participant 3614 may, for example, be displayed in the first environment 3514) [Dolev: col. 142, line 26-47; Figs. 36-40]. Regarding claim 29, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein the at least one display constraint of the second AR display is at least partially pre-defined ((User interfaces may be indispensable for interacting with computing devices but may occupy significant space on an electronic display, leaving less room for displaying documents, images, or other information.
Interfacing with a computing device while wearing a wearable extended reality appliance may alleviate some of these constraints by allowing a user to move a user interface to an area in the extended reality space (e.g., virtual space), beyond predefined boundaries of an electronic screen) [Dolev: col. 33, line 34-42]; (presented in the second display region outside the predefined boundaries of the first display region) [Dolev: col. 2, line 10-11]). Regarding claim 30, Dolev meets the claim limitations as set forth in claim 29. Dolev further meets the claim limitations as follows: wherein the at least one display constraint of the second AR display is further defined by a user input ((the user interface is presented in the second display region outside the predefined boundaries of the first display region) [Dolev: col. 2, line 10-11]; (User interfaces may be indispensable for interacting with computing devices but may occupy significant space on an electronic display, leaving less room for displaying documents, images, or other information. Interfacing with a computing device while wearing a wearable extended reality appliance may alleviate some of these constraints by allowing a user to move a user interface to an area in the extended reality space (e.g., virtual space), beyond predefined boundaries of an electronic screen) [Dolev: col. 33, line 34-42]). Regarding claim 31, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: wherein enabling display of at least one visual object on a first AR display comprises displaying a plurality of visual objects on the first AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs. 36-40], and wherein the display of the plurality of visual objects on the first AR display is defined (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs.
36-40] by a plurality of respective first position parameters ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 41-43]) and first transparency parameters. ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information ( e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8].
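For readers tracing the mapped limitations of claims 21-31, the recited flow — determine whether a (position, transparency) pair is a permitted combination for the second AR display, adapt at least one of the two parameters when it is not, and packetize the result for transmission — can be sketched as follows. This is a minimal, hypothetical illustration by the editors: the region names, transparency ranges, and every function and class name are assumptions for exposition only, not drawn from Dolev, Shimizu, or the application.

```python
# Hypothetical sketch of the claimed constraint check and adaptation
# (claims 21-31): all names and rules here are illustrative only.
from dataclasses import dataclass


@dataclass
class DisplayConstraint:
    """Permitted combinations: display region -> (min, max) transparency."""
    permitted: dict


def is_permitted(c: DisplayConstraint, region: str, alpha: float) -> bool:
    bounds = c.permitted.get(region)
    return bounds is not None and bounds[0] <= alpha <= bounds[1]


def adapt(c: DisplayConstraint, region: str, alpha: float):
    """Return a permitted (region, alpha), adapting one parameter if possible."""
    if is_permitted(c, region, alpha):
        return region, alpha
    if region in c.permitted:  # keep position, clamp transparency into range
        lo, hi = c.permitted[region]
        return region, min(max(alpha, lo), hi)
    for r, (lo, hi) in c.permitted.items():  # keep transparency, move region
        if lo <= alpha <= hi:
            return r, alpha
    r, (lo, _) = next(iter(c.permitted.items()))  # last resort: adapt both
    return r, lo


constraint = DisplayConstraint({"top": (0.0, 0.5), "bottom": (0.2, 1.0)})
position, transparency = adapt(constraint, "top", 0.9)
packet = {"position": position, "transparency": transparency}  # then transmit
```

On this toy constraint set, an impermissible pair such as ("top", 0.9) would have its transparency clamped before the packet is built and sent toward the second AR apparatus, mirroring the sequence the rejection maps onto Dolev's mode and rule disclosures.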
Regarding claim 32, Dolev meets the claim limitations as set forth in claim 21. Dolev further meets the claim limitations as follows: comprising the first AR display (data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52]; (A display may refer to, for example, any device configured to permit exterior viewing. A display may include, for example, a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a liquid-crystal display (LCD), a dot-matrix display, a screen, a touch screen, a light indicator, a light source, or any other device configured to provide visual or optical output) [Dolev: col. 158, line 8-14]. Regarding claim 33, Dolev meets the claim limitations as follows: An apparatus (apparatus) [Dolev: col. 25, line 63; Fig. 5] comprising: at least one processor (at least one processor) [Dolev: col. 27, line 3; Fig. 5]; and at least one memory (memory devices) [Dolev: col. 26, line 60; Fig. 5] storing instructions that, when executed by the at least one processor, cause the apparatus at least to (Modules 512-517 may contain software instructions for execution by at least one processor) [Dolev: col. 27, line 1-2; Fig. 5]: transmit (transmit data) [Dolev: col. 179, line 34-35; Figs. 1-7] at least one display constraint of a second AR display to a first apparatus (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col. 54, line 38-45; Figs.
1-7]; (system 200 may include an input unit 202, an XR unit 204, a mobile communications device 206, and a remote processing unit 208. Remote processing unit 208 may include a server 210 coupled to one or more physical or virtual storage devices, such as a data structure 212) [Dolev: col. 13, line 33-38; Fig. 2] – Please see the AR display and the first apparatus in Figures 1-7), wherein a display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23; Figs. 1-7] of the second AR display indicates a permitted combination of a position parameter and a transparency parameter ((A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). … Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extendible reality appliance).
Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location) [Dolev: col. 44, line 4-32; Figs. 1-7]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations. An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52; Figs. 1-7]; receive (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. 
A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] a data packet from the first apparatus (Receiving may refer to accepting delivery of, acquiring, retrieving, generating, obtaining or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor. The data may be received via a communications channel, such as a wired channel (e.g., cable, fiber) and/or wireless channel (e.g., radio, cellular, optical, IR). The data may be received as individual packets or as a continuous stream of data.) [Dolev: col. 60, line 53-61] comprising at least a second position parameter and a second transparency parameter ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 
3]) wherein the combination of the second position parameter and the second transparency parameter is a permitted combination (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10]; and in response to receiving the data packet, enable display of at least one visual object on the second AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Figs. 36-40]; (An object may include an item, element, structure, building, thing, device, document, message, article, person, animal, or vehicle. A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display ( e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer. A virtual object may be displayed in two or three dimensions.) [Dolev: col. 71, line 40-55]; (In some examples, the data may be received from a memory unit, may be received from an external device, may be generated based on other information (for example, generated using a rendering algorithm based on at least one of geometrical information, texture information or textual information), and so forth. Receiving an indication of an initial location of a particular wearable extended reality appliance may include performing one or more operations. 
Such operations may include, for example, identifying a particular wearable extended reality appliance, identifying at least one location sensor, and/or establishing a communications link between a particular wearable extended reality appliance and at least one sensor. Such operations may additionally include communicating at least one initial location signal (e.g., an indication of an initial location) between a particular wearable extended reality appliance and a location sensor (e.g., including transmitting an initial location signal from a particular wearable extended reality appliance to a location sensor and/or receiving an initial location signal by at least one processor associated with a particular wearable extended reality appliance from a location sensor). Such operations may further include using at least one location signal associated with a wearable to determine an initial location for a particular wearable extended reality appliance) [Dolev: col. 61, line 3-27], wherein the display of the at least one visual object (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55] on the second AR display is defined by the second position parameter and the second transparency parameter ((In some embodiments, information in virtual form may be viewable only by a wearer of a wearable extended reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extended reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extended reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extended reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extended reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that the invention has a control system that will determine several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object). In the same field of endeavor, Shimizu further discloses the claim limitations as follows: wherein the combination of the second position parameter and the second transparency parameter is a permitted combination (In each virtual object, many parameters such as polygon mesh information, vertex information, material information, rendering information of gloss and shadow, physical calculation information such as collision, friction, and light, three-dimensional spatial coordinate position, animation, color information, transparency, effects of video and sound, and control script are set, and when all parameters are combined, an enormous amount of setting data is obtained) [Shimizu: para. 0041]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dolev with Shimizu to program the system to implement Shimizu's method. Therefore, the combination of Dolev with Shimizu will enable the system to increase a virtual reality feeling (i.e., a sense of immersion in the virtual space) at the time of viewing [Shimizu: para. 0036]. Regarding claim 34, Dolev meets the claim limitations as set forth in claim 33.
Dolev further meets the claim limitations as follows: wherein the display of the at least one visual object on the second AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55; Please see more details in Figs. 36-40] is further defined by an importance parameter ((data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52]. Regarding claim 35, Dolev meets the claim limitations as set forth in claim 34. Dolev further meets the claim limitations as follows: wherein the importance parameter is at least one of: user-defined, or predefined by the apparatus ((data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 49-52]. Regarding claim 38, Dolev meets the claim limitations as follows: A method comprising (a method) [Dolev: col. 109, line 62]: enabling display of at least one visual object on a first AR display (multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55], wherein the display of the at least one visual object on the first AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55]; (In some embodiments, the first instances of the first type of content include a first plurality of virtual objects, and wherein the second instances of the second type of content include a second plurality of virtual objects. An object may include an item, element, structure, building, thing, device, document, message, article, person, animal, or vehicle. A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally.
Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display ( e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer. A virtual object may be displayed in two or three dimensions.) [Dolev: col. 71, line 40-55]; (A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs. 6A-7, 10-13] is defined by a first position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 6, line 14-16; Figs. 
41-43]) and a first transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information ( e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8; col. 41, line 1-8]; identifying (Receiving may refer to, for example, taking delivery of, accepting, acquiring, retrieving, generating, obtaining, detecting, or otherwise gaining access to. For example, information or data may be received in a manner that is detectable by or understandable to a processor, as described elsewhere in this disclosure. Receiving may involve obtaining data via wired and/or wireless communications links. A request may include, for example, an appeal, petition, demand, asking, call, and/or instruction (e.g., to a computing device to provide information or perform an action or function). 
A request to initiate a video conference between a plurality of participants may refer to, for example, a request to commence, institute, launch, establish, set up, or start a video conference between a plurality of participants, or to cause a video conference between a plurality of participants to begin) [Dolev: col. 115, line 12-27] at least one display constraint of a second AR display (In some instances, a size for presenting information may be constrained by other information displayed concurrently (e.g., in a nonoverlapping manner), such as a user interface. In the first mode, displaying the user interface concurrently with the information in the same display region may limit a number of pixels that may be devoted to present other information, e.g., an editable document) [Dolev: col. 54, line 38-45], wherein the at least one display constraint (Implementing a rule may refer to enforcing one or more conditions or constraints associated with a rule to cause conformance and/or compliance with the rule) [Dolev: col. 65, line 20-23] of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]; (A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). 
… Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extended reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 44, line 4-32]; (An extended reality display rule may refer to one or more guidelines and/or criteria for displaying content via an extended reality appliance, e.g., specifying a type of content that may be displayed, when content may be displayed, and/or how content may be displayed. For instance, one or more extended reality display rules may specify a context for displaying certain types of content and/or for blocking a display of certain types of content display. As another example, one or more extended reality display rules may define display characteristics (e.g., color, format, size, transparency, opacity, style) for displaying content in different types of situations.
An extended reality display rule associating a particular wearable extended reality appliance with a location may include one or more criteria specifying what, when, and how data may be displayed based on a location of a wearable extended reality appliance (e.g., based on one or more user-defined, device-specific, and/or default settings)) [Dolev: col. 63, line 35-52]; transforming ((using a transformation function to obtain a transformed image data) [Dolev: col. 30, line 66-67]; (Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]) the first position parameter and the first transparency parameter to produce a second position parameter and a second transparency parameter respectively ((In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world) [Dolev: col. 8, line 32-36; Please also see step 4418 in Fig. 44]; (The pixels or voxels may be selected, activated, deactivated and/or set (e.g., by defining a color, hue, shade, saturation, transparency, opacity, or any other display characteristic) to present information. In some instances, an electronic display ( e.g., including a display region defined by one or more pixels) may correspond to physical electronic screen, and the display region be viewable by anyone (e.g., multiple users) within a viewing range of the physical display screen (e.g., display 352 of FIG. 3)) [Dolev: col. 36, line 1-9; Fig. 3]); determining if a combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 
33, line 6-10] the second position parameter and the second transparency parameter is a permitted combination ((In some embodiments, information in virtual form may be viewable only by a wearer of a wearable extended reality appliance. Overlaying may refer to superimposing, positioning, or displaying on top of an object. Overlaying information in virtual form on a physical object may include one or more of detecting a physical object within viewing range of a wearable extended reality appliance, determining a boundary of a physical object, determining a layout and/or format for displaying information within a boundary of a physical object, mapping a layout and/or format for displaying information onto a pattern of pixels of a wearable extended reality appliance, and activating a pattern of pixels to cause an image corresponding to the information to be projected for viewing by a user such that the image may appear as though displayed over (e.g., overlayed or superimposed) on a physical object) [Dolev: col. 43, line 41-56]; (In some embodiments, the predefined boundaries are associated with a physical object and the display of the information is performed by the wearable extended reality appliance by overlaying the information in virtual form, on the physical object. A physical object may refer to matter (e.g., tangible matter) contained within an identifiable volume or area that may be moved as a unit. A physical object may be moveable or stationary, at least partially opaque, translucent, and/or transparent. Predefined boundaries associated with a physical object may include dimensions (e.g., length, width, height) of at least a part of a physical object. For instance, predefined boundaries associated with a physical object may correspond to at least a portion of a physical object contained within an FOV of a user ( e.g., wearing a wearable extended reality appliance), within a viewing range of a wearable extended reality appliance, and/or within a projection range of a projector device. Information in virtual form may refer to information mapped to a pattern of pixels ( e.g., of a wearable extended reality appliance), such that activating the pattern of pixels causes an image corresponding to the information (e.g., via the mapping) to be projected onto a retina of a user, allowing the user to receive the information as an image) [Dolev: col. 43, line 10-37]; (Some embodiments involve providing a control for altering a location of the user interface. A control may refer to an element (e.g., an interactive element) associated with one or more managing, governing, commanding, adjusting, maneuvering, and/or manipulating functionalities (e.g., control functionalities). A control may allow a user to decide one or more operational aspects for a software application (e.g., whether, how, where, and when information may be displayed and/or processed). Examples of control elements may include buttons, tabs, switches, check boxes, input fields, clickable icons or images, links, and/or any other text and/or graphical element configured to receive an input and invoke a corresponding action in response. Providing a control may include displaying a graphic element ( e.g., a graphic control element), associating a graphic control element with one or more control functionalities, enabling a graphic control element to receive an input (e.g., using an event listener), associating a user input received via a graphic control element with a control functionality, and invoking an action corresponding to a control functionality upon receiving an input via a graphic control element. Altering may refer to changing, moving, modifying, and/or adjusting. A location may refer to a position (e.g., defined in 2D or 3D space). A location may be absolute (e.g., relative to a fixed point on the Earth) or relative (e.g., with respect to a user and/or a wearable extended reality appliance). Altering a location of a user interface may involve one or more of determining a new location for displaying a user interface, determining a layout and/or format for displaying a user interface at a new location, selecting pixels for displaying a user interface at a new location, activating selected pixels for displaying a user interface at a new location, or deactivating pixels displaying a user interface at a prior location.) [Dolev: col. 43, line 66 – col. 44, line 32] – Note: Dolev discloses that the invention has a control system that will determine several aspects of an object, such as whether it is moveable or stationary and at least partially opaque, translucent, and/or transparent, in order to determine whether and where to display the object); and in dependence upon a determination that the combination of (Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value) [Dolev: col. 33, line 6-10] the second position parameter and the second transparency parameter is not a permitted combination (When a rule associating the particular wearable extended reality appliance with the new location is not found in the data store, the at least one processor may retrieve a default rule instead (e.g., corresponding to a location type for the new location)) [Dolev: col.
77, line 16-20], adapt at least one of the second position parameter or the second transparency parameter (a software application may include multiple modes (e.g., use modes) each associated with a set of parameter settings and definitions allowing to tailor, adapt, and/or adjust one or more functionalities of the software application for one or more contexts, use cases, users, accounts, and/or devices. Parameter settings and definitions of a mode may affect a location, style, size, and/or device for displaying content, and/or functionalities of a software application. For example, a first mode may include settings allowing a user to interact with a software application via a single electronic display, and a second mode may include settings allowing a user to interact with a software application via multiple electronic displays) [Dolev: col. 44, line 50-22] such that a combination of the second position parameter and the second transparency parameter is a permitted combination ((Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value) [Dolev: col. 32, line 63-67]; (For example, assume there are two extended reality objects: a first extended reality object that is viewable by the wearer and a second extended reality object that is not viewable by the wearer (e.g., the second extended reality object is outside the wearer's field of view). It is noted that there may be multiple extended reality objects that the wearer can see based on the wearer's point of view and multiple extended reality objects that the wearer cannot see based on the wearer's point of view. For purposes of explanation only, it is assumed that there are only two extended reality objects, and that the wearer can see the first extended reality object and cannot see the second extended reality object. 
Because the wearer can only see the first extended reality object, only changes in the first extended reality object are displayed to the wearer. A change in the first extended reality object may include any type of visible change to the first extended reality object, such as a change in viewing angle (e.g., caused by the wearer manipulating the first extended reality object or by the first extended reality object moving by itself) or a change in a property of the first extended reality object (e.g., a change in shape, size, color, opacity, object orientation, or the like). Even though changes in the second extended reality object (e.g., visible changes similar in scope to changes in the first extended reality object) may be occurring at the same time as changes in the first extended reality object, the changes to the second extended reality object would not be visible to the wearer. However, changes to the second extended reality object may be visible to a viewer (e.g., a non-wearer) either from a different perspective or at a different point in time, as will be explained below) [Dolev: col. 114, line 60 - col. 115, line 23] – Note: Dolev discloses that, in case the new location, size, and/or opacity of the object is not suitable for display, there is another option by which the object can be seen); and enabling display of the at least one visual object on the second AR display ((multiple elements (e.g., visually displayed objects)) [Dolev: col. 72, line 54-55]; (In some embodiments, the first instances of the first type of content include a first plurality of virtual objects, and wherein the second instances of the second type of content include a second plurality of virtual objects. An object may include an item, element, structure, building, thing, device, document, message, article, person, animal, or vehicle. A virtual object may include any one of the forgoing presented as a simulation or synthetization.
The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display ( e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer. A virtual object may be displayed in two or three dimensions.) [Dolev: col. 71, line 40-55]; (A virtual object may include any one of the forgoing presented as a simulation or synthetization. The virtual object may be presented electronically or digitally. Such electronic or digital presentations may occur in extended reality, virtual reality, augmented reality, or any other format in which objects may be presented digitally or electronically. The presentation may occur via an electronic display (e.g., a wearable extended reality appliance), and/or as a visual presentation of information rendered by a computer) [Dolev: col. 71, line 45-54; Figs. 6A-7, 10-13], wherein the display of the at least one visual object on the second AR display is defined by the second position parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (FIGS. 41, 42, and 43 are exemplary use snapshots of perspective views of a physical environment associated with the positioning of participants) [Dolev: col. 
6, line 14-16; Figs. 41-43]) and the second transparency parameter ((In some embodiments, the predefined boundaries are associated with a virtual screen and the display of the information occurs via the wearable extended reality appliance. A virtual screen (e.g., a virtual display screen) may refer to simulation of a physical screen (e.g., using a wearable extended reality appliance) that may not be confined to a location and/or dimensions of a physical screen (e.g., the size, position, orientation, color, transparency, opacity, and/or other visual characteristic of a virtual screen may be defined by software)) [Dolev: col. 41, line 39-48; col. 42, line 51-68]; (a display region visible via a wearable extended reality appliance may include a portion of a field of view of a user wearing a wearable extended reality appliance aligned with at least a partially transparent section of the wearable extended reality appliance allowing the user to see information ( e.g., displayed on a physical screen or projected on a wall) through the wearable extended reality appliance) [Dolev: col. 41, line 1-17; col. 66, line 58-68; col. 41, line 1-8; col. 41, line 1-8]. In the same field of endeavor, Shimizu further discloses the claim limitations as follows: wherein the at least one display constraint of the second AR display indicates one or more permitted combinations of a position parameter and a transparency parameter (In each virtual object, many parameters such as polygon mesh information, vertex information, material information, rendering information of gloss and shadow, physical calculation information such as collision, friction, and light, three-dimensional spatial coordinate position, animation, color information, transparency, effects of video and sound, and control script are set, and when all parameters are combined, an enormous amount of setting data is obtained) [Shimizu: para. 0041]. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dolev with Shimizu to program the system to implement Shimizu's method. Therefore, the combination of Dolev with Shimizu will enable the system to increase a virtual reality feeling (i.e., a sense of immersion in the virtual space) at the time of viewing [Shimizu: para. 0036].

Allowable Subject Matter

6. Claims 36-37 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. This objection is given on the condition that other objections and rejections of related claims are addressed.

7. The above-identified claims recite multiple operations performed on specific data to compute unique parameters in order to determine specific parameters for AR display. The cited prior art fails to teach or render obvious this set of operations.

Reference Notice

Additional prior art, included in the Notice of References Cited, made of record and not relied upon, is considered pertinent to applicant's disclosure.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Dang, whose telephone number is (408) 918-7529. The examiner can normally be reached Monday-Thursday between 8:30 am and 5:00 pm (PST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Philip P. Dang/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Aug 27, 2024
Application Filed
Feb 12, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602837
ON SUB-DIVISION OF MESH SEQUENCES
2y 5m to grant Granted Apr 14, 2026
Patent 12593116
IMAGING MEASUREMENT DEVICE USING GAS ABSORPTION IN THE MID-INFRARED BAND AND OPERATING METHOD OF IMAGING MEASUREMENT DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12581069
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12581106
IMAGE DECODING METHOD AND DEVICE THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12574557
SCALABLE VIDEO CODING USING BASE-LAYER HINTS FOR ENHANCEMENT LAYER MOTION PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 470 resolved cases by this examiner. Grant probability derived from career allow rate.
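The headline figures above follow from simple arithmetic over the examiner's career counts. A minimal sketch of that derivation (the counts of 363 granted out of 470 resolved come from the report; treating the interview lift as an additive adjustment to the base allow rate, capped at the reported 99%, is an assumption about the tool's method, not something it documents):

```python
# Reproduce the dashboard's headline figures from the examiner's career counts.
# GRANTED and RESOLVED are reported in the examiner-intelligence section; the
# additive-with-cap rule for the interview-adjusted probability is assumed.

GRANTED = 363           # applications allowed by this examiner
RESOLVED = 470          # career resolved cases (allowed + abandoned)
INTERVIEW_LIFT = 0.332  # reported allow-rate lift when an interview is held
CAP = 0.99              # reported ceiling for the adjusted probability

base_rate = GRANTED / RESOLVED                 # career allow rate
with_interview = min(base_rate + INTERVIEW_LIFT, CAP)

print(f"Career allow rate: {base_rate:.0%}")       # -> 77%
print(f"With interview:    {with_interview:.0%}")  # -> 99%
```

Note that 363/470 is about 77.2%, matching the 77% grant probability shown, and 77.2% + 33.2% exceeds the cap, which is consistent with the 99% "with interview" figure.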
