Prosecution Insights
Last updated: April 19, 2026
Application No. 18/189,409

Recommended Avatar Placement in an Environmental Representation of a Multi-User Communication Session

Status: Non-Final OA (§102)
Filed: Mar 24, 2023
Examiner: ROBINSON, TERRELL M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Apple Inc.
Current OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 3m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 83% (403 granted / 486 resolved); above average, +20.9% vs TC avg
Interview Lift: +7.5% among resolved cases with an interview (a moderate, roughly +8%, lift)
Typical Timeline: 2y 3m average prosecution; 27 applications currently pending
Career History: 513 total applications across all art units
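
A quick arithmetic check on the figures above (assuming, as the dashboard layout suggests but does not state, that the interview lift adds directly to the base allow rate and that "vs TC avg" is a simple difference):

    403 granted / 486 resolved = 82.9%, rounded to the 83% career allow rate
    83% + 7.5% interview lift = 90.5%, consistent with the 90% "With Interview" headline figure
    83% - 20.9% = 62.1%, the implied Tech Center average allow rate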

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      7.0%     -33.0%
§103      54.5%    +14.5%
§102      11.7%    -28.3%
§112      17.2%    -22.8%

Tech Center averages are estimates; figures are based on career data from 486 resolved cases.
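
Reading the delta column (again assuming "vs TC avg" is the examiner's rate minus the Tech Center estimate), every row implies the same baseline:

    §101: 7.0% - (-33.0%) = 40.0%
    §103: 54.5% - (+14.5%) = 40.0%
    §102: 11.7% - (-28.3%) = 40.0%
    §112: 17.2% - (-22.8%) = 40.0%

That is, the Tech Center average estimate appears to be a flat 40.0% across all four statutes.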

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Applicant's arguments, see pages 1-5, filed October 10, 2025, with respect to the rejections of claims 1-20 under 35 USC § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Gorur Sheshagiri (US 2019/0026936 A1).

In regards to independent claim 1, the Gorur Sheshagiri reference has now been cited as it discloses methods, devices, and apparatuses to facilitate a positioning of an item of virtual content in an extended reality environment (see abstract). The reference further details that a virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…. Next, a mobile device 200 or XR computing system 130 can identify one or more objects disposed within a visible portion of the augmented reality environment (e.g., in block 312). Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment, interpreted as obtaining geometric information of the physical environment.

Gorur Sheshagiri then discloses that, based on the generated depth map data 222 and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). In addition, at block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content.
Finally, the reference details that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location, interpreted as determining a recommended avatar placement based on the geometric information and activity type for subsequent display within the environmental representation of the multi-user communication session (see para [0048], [0081], [0084], [0105], and [0129]), as further detailed in the rejections of the office action below.

In regards to independent claims 9 and 20, these claims recite limitations similar in scope to those of claim 1, and therefore remain rejected under the same rationale as provided above and further detailed in the rejections of the office action below.

In regards to dependent claims 2-8, 10, 11, 13-17, and 19, these claims depend from the rejected base claims 1 and 9, and therefore remain rejected under the same rationale as provided above and further detailed in the rejections of the office action below.

Allowable Subject Matter

Claims 12 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if the claims are rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

In regards to dependent claim 12, none of the cited prior art, alone or in combination, provides motivation to teach "wherein the computer readable code to adjust the first spatial position or the second spatial position comprises computer readable code to adjust a height of the first selected avatar placement or a height of the second selected avatar placement in the environmental representation based on the geometric information associated with the physical environment and the geometric information associated with the second physical environment." The references only teach a multi-user virtual experience where avatar and object rendering and placements are determined based on parameters such as user orientation or activity, and where objects can be resized; however, the references fail to explicitly disclose the specific adjustment of an avatar's height within the virtual representation based on the semantic analysis of the real-world environment, in conjunction with the features of claim 11, from which claim 12 depends, for adjusting spatial positions of the selected avatar placement within the environmental representation.
In regards to dependent claim 18, none of the cited prior art, alone or in combination, provides motivation to teach "wherein: the activity type for the multi-user communication session comprises a board game type; the shared content item comprises at least one of a game board and corresponding game pieces; the computer readable code to determine the orientation of the shared content item comprises computer readable code executable by the one or more processors to determine a horizontal orientation for the at least one of the game board and corresponding game pieces; and the computer readable code to determine the recommended content placement comprises computer readable code executable by the one or more processors to determine an appropriate horizontal surface in the physical environment based on the horizontal orientation of the at least one of the game board and corresponding game pieces, the recommended avatar placement, and a characteristic of the at least one of the game board and corresponding game pieces, wherein the characteristic of the at least one of the game board and corresponding game pieces comprises at least one of: a size of the at least one of the game board and corresponding game pieces and expected manipulations of the at least one of the game board and corresponding game pieces based on the board game type." The references only teach a multi-user virtual experience where avatar and object rendering and placements are determined based on parameters such as user orientation or activity; however, the references fail to explicitly disclose the specific object of a game board with corresponding pieces and the functions for determining horizontal orientation, avatar placement, and specific characteristics of the game and its pieces, in conjunction with the features of claim 17, from which claim 18 depends, for using semantic and orientation information to determine recommended content placement. In addition, there is no teaching, suggestion, or motivation found in the current references, and none that can be inferred from the examiner's own knowledge, with respect to the current limitation.

As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-11, 13-17, 19, and 20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Gorur Sheshagiri (US 2019/0026936 A1, hereinafter referenced "Shesh").

In regards to claim 1 (Original).
Shesh discloses a method (Shesh, Abstract), comprising:

- obtaining geometric information associated with a physical environment of a communication device participating in a multi-user communication session (Shesh, para [0076], [0079], and [0081]; Reference at [0076] discloses that, in response to the detected request, mobile device 200 or XR computing system 130 can determine positions and orientations of one or more users that access the visible portion of the augmented reality environment (e.g., in block 306). The one or more users may include a first user that operates mobile device 200 (e.g., the user wearing the HMD or the augmented reality eyewear), and a second user that operates one or more of mobile device 102, mobile device 104, or other mobile devices within network environment 100 (i.e. communication device participating in a multi-user communication session). Para [0079] discloses that mobile device 200 or XR computing system 130 can also identify a portion of the augmented reality environment that is visible to the user of mobile device 200 (e.g., in block 308), and obtain depth map data that characterizes the visible portion of the augmented reality environment (e.g., in block 310). Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. obtaining geometric information associated with a physical environment));

- determining an activity type for the multi-user communication session (Shesh, para [0038]; Reference discloses that the established extended reality environment can include graphical or audio content that enables users of mobile device 102 or 104 to visit and explore various historical sites and locations within the extended reality environment, or to participate in a virtual meeting attended by various, geographically dispersed participants (i.e. determined activity type for the multi-user communication session, namely visiting historical sites or attending a virtual meeting));

- determining a recommended avatar placement based on the geometric information and the activity type (Shesh, para [0048], [0081], [0084], [0105], and [0129]; Reference at para [0048] discloses that virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…. Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses that, based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0105] discloses that, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location (i.e. determining a recommended avatar placement based on the geometric information and the activity type));

- and displaying an indication of the recommended avatar placement in an environmental representation of the multi-user communication session (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses that, for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position, interpreted as the recommended avatar placement). Para [0110] discloses that virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined avatar placement within the environmental representation of the multi-user communication session, as reflected in the example of para [0129], which discloses that the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location).

In regards to claim 2 (Original). Shesh discloses the method of claim 1.
Shesh further discloses:

- further comprising identifying candidate avatar placements in the environmental representation based on the activity type and the geometric information, wherein determining the recommended avatar placement comprises selecting the recommended avatar placement from the candidate avatar placements (Shesh, para [0046]; Reference discloses that position determination module 170 can perform operations that establish a plurality of candidate positions of an item of virtual content, such as a virtual assistant, within the extended reality environment established by mobile devices 102 and 104. Position determination module 170 provides a means for determining a position for placement of the item of virtual content in the extended reality environment at least partially based on the determined position and orientation of the user. Position determination module 170 can compute placement scores that characterize a viability of each of the candidate positions of the item of virtual content within the augmented or other reality environment. As described below, the placement scores can be computed based on the generated depth map data, the data specifying the objects within the visible portion of the extended reality environment, data characterizing a position and an orientation of each user within the extended reality environment, or data characterizing a level of interaction between users within the extended reality environment).

In regards to claim 3 (Original). Shesh discloses the method of claim 1. Shesh further discloses:

- wherein the environmental representation comprises a virtual environment or a mixed reality environment based on a view of the physical environment (Shesh, para [0002] and [0019]; Reference at para [0002] discloses that mobile devices enable users to explore and immerse themselves in extended reality environments, such as augmented reality environments that provide a real-time view of a physical real-world environment that is merged with or augmented by computer generated graphical content. Para [0019] discloses that the mobile device can include augmented reality eyewear (e.g., glasses, goggles, or any device that covers a user's eyes) having one or more lenses or displays for displaying graphical elements of the deployed digital content. For example, the eyewear can display the graphical elements as augmented reality layers superimposed over real-world objects that are viewable through the lenses).

In regards to claim 4 (Original). Shesh discloses the method of claim 3. Shesh further discloses:

- wherein the mixed reality environment comprises at least one virtual content item overlaid with the view of the physical environment (Shesh, para [0132]; Reference discloses that these tools can deploy the elements of digital content for presentation to a user through display unit 208 incorporated into mobile device 200, such as augmented reality eyewear (e.g., glasses or goggles) having one or more lenses or displays for presenting graphical elements of the deployed digital content. The augmented reality eyewear can, for example, display the graphical elements as augmented reality layers superimposed over real-world objects that are viewable through the lenses, thus enhancing an ability of the user to interact with and explore the augmented reality environment (i.e. a mixed reality environment with content overlaid with the view of the physical environment)).

In regards to claim 5 (Original). Shesh discloses the method of claim 1.
Shesh further discloses:

- wherein the geometric information comprises at least one of a blueprint for the physical environment (Shesh, Fig. 5 and para [0081]; Reference discloses that semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. a blueprint of the physical environment)), a three-dimensional representation of the physical environment (Shesh, para [0039]; Reference discloses that image processing module 164 can include, among other things, a depth mapping module 166 that generates a depth map for each of the visible portions of the extended reality environment (i.e. the depth map is a 3D representation of the physical environment), and a semantic analysis module 168 that identifies and characterizes objects disposed within the visible portions of the extended reality environment), and semantic information regarding vertical and horizontal surfaces in the physical environment (Shesh, para [0081]; Reference discloses that semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. locations and dimensions interpreted as semantic information regarding vertical and horizontal surfaces in the physical environment)).

In regards to claim 6 (Original). Shesh discloses the method of claim 5. Shesh further discloses:

- wherein determining the recommended avatar placement is further based on the semantic information regarding horizontal surfaces in the physical environment (Shesh, para [0082] and [0105]; Reference at [0082] discloses that, as illustrated in FIG. 5, the applied semantic analysis techniques can identify, within the visible portion of the augmented reality environment, several pieces of furniture, such as sofa 512, end tables 514A and 514B, chair 516, and table 518, along with an additional user 520 (e.g., the user operating mobile device 102 or mobile device 104) disposed between sofa 512 and chair 516. Mobile device 200 or XR computing system 130 can perform operations that store semantic data characterizing the identified physical objects, along with the locations and dimensions of the identified objects within the visible portion of the augmented reality environment…FIG. 5 also shows (in phantom) a plurality of candidate positions 522, 524, and 526 for the virtual assistant, described below. Para [0105] discloses that, for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. recommended avatar placement based on the minimum placement score)).

In regards to claim 7 (Original). Shesh discloses the method of claim 1.
Shesh further discloses:

- wherein the indication of the recommended avatar placement comprises at least one selected from an avatar outline at the recommended avatar placement in the environmental representation and a marker at the recommended avatar placement in the environmental representation (Shesh, Fig. 5 and para [0082]; Reference discloses that FIG. 5 illustrates a plan view of the visible portion of the augmented reality environment (e.g., as derived from stereoscopic images 404 and 406 and the generated depth map data)…Mobile device 200 or XR computing system 130 can perform operations that store semantic data characterizing the identified physical objects (e.g., an object type, such as furniture), along with the locations and dimensions of the identified objects within the visible portion of the augmented reality environment, within respective portions of database 214 or 134, e.g., within respective ones of object data 224 or 184. FIG. 5 also shows (in phantom) a plurality of candidate positions 522, 524, and 526 for the virtual assistant, described below (i.e. the indication of the recommended avatar placement comprises at least one selected from an avatar outline at the recommended avatar placement in the environmental representation)).

In regards to claim 8 (Original). Shesh discloses the method of claim 1. Shesh further discloses:

- further comprising orienting the environmental representation based on the geometric information, wherein the recommended avatar placement is further based on the orientation of the environmental representation (Shesh, para [0046] and [0104]; Reference at [0046] discloses that position determination module 170 can compute placement scores that characterize a viability of each of the candidate positions of the item of virtual content within the augmented or other reality environment. As described below, the placement scores can be computed based on the generated depth map data, the data specifying the objects within the visible portion of the extended reality environment, or data characterizing a position and an orientation of each user within the extended reality environment. Para [0104] discloses that, referring back to FIG. 3, position determination module 170, when executed by mobile device 200 or XR computing system 130, can establish one of the candidate positions as a placement position of the item of virtual content within the visible portion of the augmented reality environment (e.g., in block 318). The depth map and orientation data used as the basis for determining the placement position of virtual content are interpreted as orienting the environmental representation based on the geometric information, wherein the recommended avatar placement is further based on the orientation of the environmental representation).

In regards to claim 9 (Original). Shesh discloses a non-transitory computer readable medium comprising computer code (Shesh, para [0004]), executable by one or more processors to:

- obtain geometric information associated with a physical environment of a communication device participating in a multi-user communication session (Shesh, para [0076], [0079], and [0081]; Reference at [0076] discloses that, in response to the detected request, mobile device 200 or XR computing system 130 can determine positions and orientations of one or more users that access the visible portion of the augmented reality environment (e.g., in block 306).
The one or more users may include a first user that operates mobile device 200 (e.g., the user wearing the HMD or the augmented reality eyewear), and a second user that operates one or more of mobile device 102, mobile device 104, or other mobile devices within network environment 100 (i.e. communication device participating in a multi-user communication session). Para [0079] discloses that mobile device 200 or XR computing system 130 can also identify a portion of the augmented reality environment that is visible to the user of mobile device 200 (e.g., in block 308), and obtain depth map data that characterizes the visible portion of the augmented reality environment (e.g., in block 310). Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. obtaining geometric information associated with a physical environment));

- determine an activity type for the multi-user communication session (Shesh, para [0038]; Reference discloses that the established extended reality environment can include graphical or audio content that enables users of mobile device 102 or 104 to visit and explore various historical sites and locations within the extended reality environment, or to participate in a virtual meeting attended by various, geographically dispersed participants (i.e. determined activity type for the multi-user communication session, namely visiting historical sites or attending a virtual meeting));

- determine a recommended avatar placement based on the geometric information and the activity type (Shesh, para [0048], [0081], [0084], [0105] and [0129]; Reference at para [0048] discloses that virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…. Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses that, based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314).
Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0105] discloses that, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location (i.e. determining a recommended avatar placement based on the geometric information and the activity type));

- and display an indication of the recommended avatar placement in an environmental representation of the multi-user communication session (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses that, for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position, interpreted as the recommended avatar placement). Para [0110] discloses that virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined recommended avatar placement within the environmental representation of the multi-user communication session, as reflected in the example of para [0129], which discloses that the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location).

In regards to claim 10 (Original). Shesh discloses the non-transitory computer readable medium of claim 9. Shesh further discloses:

- further comprising computer readable code to: obtain geometric information associated with a second physical environment of a second communication device participating in the multi-user communication session (Shesh, para [0076], [0079], and [0081]; Reference at [0076] discloses that, in response to the detected request, mobile device 200 or XR computing system 130 can determine positions and orientations of one or more users that access the visible portion of the augmented reality environment (e.g., in block 306).
The one or more users may include a first user that operates mobile device 200 (e.g., the user wearing the HMD or the augmented reality eyewear), and a second user that operates one or more of mobile device 102, mobile device 104, or other mobile devices within network environment 100 (i.e. communication device participating in a multi-user communication session). Para [0079] discloses that mobile device 200 or XR computing system 130 can also identify a portion of the augmented reality environment that is visible to the user of mobile device 200 (e.g., in block 308), and obtain depth map data that characterizes the visible portion of the augmented reality environment (e.g., in block 310). Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. obtaining geometric information associated with a physical environment));

- determine a second recommended avatar placement based on the geometric information associated with the second physical environment, the activity type (Shesh, para [0048], [0081], [0084]-[0085], [0105], and [0129]; Reference at para [0048] discloses that virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…. Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses that, based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0085] discloses that, as illustrated schematically in FIG. 6A, a visible portion 600 of the augmented reality environment may include a user 601, which corresponds to the user of mobile device 200, and a user 602, which corresponds to the user of mobile devices 102 or 104.
Para [0105] discloses that, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location (i.e. determining a recommended avatar placement based on the geometric information and the activity type). The reference discloses multiple users and thus provides for the scenario where the second user causes the secondary information to be provided for the same method steps regarding the second physical environment, second communication device, and second avatar placement),

- and the recommended avatar placement (Shesh, para [0129]; Reference at para [0048] discloses that virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…. Para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses that, based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0085] discloses that, as illustrated schematically in FIG. 6A, a visible portion 600 of the augmented reality environment may include a user 601, which corresponds to the user of mobile device 200, and a user 602, which corresponds to the user of mobile devices 102 or 104. Para [0105] discloses that, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location (i.e. determining a recommended avatar placement based on the geometric information and the activity type));

- and display an indication of the second recommended avatar placement in the environmental representation (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses that, for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position, interpreted as the recommended avatar placement). Para [0110] discloses that virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined recommended avatar placement within the environmental representation of the multi-user communication session, as reflected in the example of para [0129], which discloses that the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location).

In regards to claim 11 (Original). Shesh discloses the non-transitory computer readable medium of claim 10. Shesh further discloses:

- further comprising computer readable code to: receive a first selected avatar placement and a second selected avatar placement; adjust a first spatial position of the first selected avatar placement or a second spatial position of the second selected avatar placement in the environmental representation; and display a first avatar at the first selected avatar placement and a second avatar at the second selected avatar placement in the environmental representation (Shesh, para [0033] and [0068]; Reference at [0068] discloses that the operations module 244 can cause mobile device 200 to perform operations that modify the placement position of the virtual assistant (i.e. avatar) within the extended reality environment in accordance with the detected gestural input. Moving the virtual assistant from one position to another is interpreted as display of a first avatar in a first placement, adjusting the spatial position, and displaying the first avatar at the first placement in the environmental representation.
Since the reference indicates multiple users, this would account for the instance of a second selected avatar placement and adjusting a second spatial position for subsequent display of the second avatar at a second selected placement in the environmental representation, as para [0033] discloses that mobile devices 102 and 104 may be operated by corresponding users, each of whom may access the extended reality environment using any of the processes described below, and be disposed at corresponding positions within the accessed extended reality environment).

In regards to claim 13 (Previously Presented). Shesh discloses the non-transitory computer readable medium of claim 9. Shesh further discloses:

- further comprising computer readable code to: determine a recommended content placement for a shared content item based on the geometric information and the activity type (Shesh, para [0081], [0084], [0105], and [0129]; Reference at para [0081] discloses that mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses that, based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0105] discloses that, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses that, when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location (i.e. determining a recommended content placement for a shared content item based on the geometric information and the activity type));

- and display an indication of the recommended content placement for the shared content item in the environmental representation (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses that, for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position, interpreted as the recommended content placement). Para [0110] discloses that virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined content placement for the shared content item within the environmental representation, as reflected in the example of para [0129], which discloses that the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a "talking head" or similar miniature object disposed in a middle of a "virtual" conference table within the virtual meeting location).

In regards to claim 14 (Original). Shesh discloses the non-transitory computer readable medium of claim 13. Shesh further discloses:

- further comprising computer readable code to identify candidate content placements in the environmental representation based on the geometric information and the activity type, wherein the recommended content placement is selected from the candidate content placement (Shesh, para [0046]; Reference discloses that position determination module 170 can perform operations that establish a plurality of candidate positions of an item of virtual content, such as a virtual assistant, within the extended reality environment established by mobile devices 102 and 104. Position determination module 170 provides a means for determining a position for placement of the item of virtual content in the extended reality environment at least partially based on the determined position and orientation of the user. Position determination module 170 can compute placement scores that characterize a viability of each of the candidate positions of the item of virtual content within the augmented or other reality environment. As described below, the placement scores can be computed based on the generated depth map data, the data specifying the objects within the visible portion of the extended reality environment, data characterizing a position and an orientation of each user within the extended reality environment, or data characterizing a level of interaction between users within the extended reality environment).

In regards to claim 15 (Original). Shesh discloses the non-transitory computer readable medium of claim 14.
Shesh further discloses:

- wherein the computer readable code to determine the recommended content placement for the shared content item further comprises computer readable code to determine spatial relationships between the recommended avatar placement and each of the candidate content placements in the environmental representation (Shesh, Fig. 5 and para [0082]; Reference discloses that FIG. 5 illustrates a plan view of the visible portion of the augmented reality environment (e.g., as derived from stereoscopic images 404 and 406 and the generated depth map data)…Mobile device 200 or XR computing system 130 can perform operations that store semantic data characterizing the identified physical objects (e.g., an object type, such as furniture), along with the locations and dimensions of the identified objects within the visible portion of the augmented reality environment, within respective portions of database 214 or 134, e.g., within respective ones of object data 224 or 184. FIG. 5 also shows (in phantom) a plurality of candidate positions 522, 524, and 526 for the virtual assistant, described below (i.e. determined spatial relationships between the recommended avatar placement and each of the candidate content placements in the environmental representation)).

In regards to claim 16 (Original). Shesh discloses the non-transitory computer readable medium of claim 13. Shesh further discloses:

- wherein the recommended content placement for the shared content item is further based on a characteristic of the shared content item (Shesh, para [0047]-[0048]; Reference at para [0047] discloses that the computed placement score for a particular candidate position can reflect physical constraints imposed by the extended reality environment. As described above, the presence of a hazard at the particular candidate position (e.g., a lake, a cliff, etc.) may result in a low placement score whereas, if an object disposed at the particular candidate position is not suitable to support the item of virtual content (e.g., a bookshelf or table disposed at the candidate position of the virtual assistant), the extended reality computing system can compute a low placement score for that candidate position (i.e. placement determined by scores in relation to characteristics of the virtual content and its suitability). Para [0048] discloses that virtual content generation module 172 can perform operations that generate and instantiate the item of virtual content, such as the virtual assistant, at the established position within the extended reality environment…virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…that specifies visual characteristics of the virtual assistant, such as visual characteristics of an avatar selected by the user of mobile device 102 or the user of mobile device 104).

In regards to claim 17 (Original). Shesh discloses the non-transitory computer readable medium of claim 13.
Shesh further discloses:

- wherein the geometric information comprises semantic information regarding vertical and horizontal surfaces in the physical environment (Shesh, para [0081]; Reference discloses that semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. locations and dimensions interpreted as semantic information regarding vertical and horizontal surfaces in the physical environment)),

- and wherein the computer readable code to determine the recommended content placement further comprises computer readable code executable by the one or more processors to: determine an orientation of the shared content item; and determine the recommended content placement based on the semantic information and the determined orientation of the shared content item (Shesh, para [0046] and [0104]; Reference at [0046] discloses that position determination module 170 can compute placement scores that characterize a viability of each of the candidate positions of the item of virtual content within the augmented or other reality environment. As described below, the placement scores can be computed based on the generated depth map data, the data specifying the objects within the visible portion of the extended reality environment, and data characterizing a position and an orientation of each user within the extended reality environment. Para [0104] discloses that, referring back to FIG. 3, position determination module 170, when executed by mobile device 200 or XR computing system 130, can establish one of the candidate positions as a placement position of the item of virtual content within the visible portion of the augmented reality environment (e.g., in block 318). The depth map and orientation data used as the basis for determining the placement position of virtual content are interpreted as determining the recommended content placement based on the semantic information regarding the depth map and the determined orientation of the shared content item regarding the item of virtual content).

In regards to claim 19 (Original). Shesh discloses the non-transitory computer readable medium of claim 13. Shesh further discloses:

- further comprising computer readable code executable by the one or more processors to: receive a selected avatar placement; determine, based on the selected avatar placement, an updated recommended content placement for the shared content item (Shesh, para [0068]; Reference at [0068] discloses that the operations module 244 can cause mobile device 200 to perform operations that modify the placement position of the virtual assistant within the extended reality environment in accordance with the detected gestural input.
In regards to claim 19 (Original): Shesh discloses the non-transitory computer readable medium of claim 13.

Shesh further discloses:

- further comprising computer readable code executable by the one or more processors to: receive a selected avatar placement; determine, based on the selected avatar placement, an updated recommended content placement for the shared content item (Shesh, para [0068]; Reference at [0068] discloses the operations module 244 can cause mobile device 200 to perform operations that modify the placement position of the virtual assistant within the extended reality environment in accordance with the detected gestural input. Moving the virtual assistant from one position to another is interpreted as receiving a selected avatar placement and, based on the placement, determining an updated recommended content placement for the shared content item);

- and display an indication of the updated recommended content placement for the shared content item in the environmental representation (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position is interpreted as the recommended content placement). Para [0110] discloses virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined avatar placement within the environmental representation, as reflected in the example of para [0129], which discloses for example, the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a “talking head” or similar miniature object disposed in a middle of a “virtual” conference table within the virtual meeting location).
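The update behavior the examiner reads into claim 19 (a gesture moves the avatar, the system refreshes the content recommendation) reduces to a re-ranking step. The sketch below is hypothetical: proximity re-ranking is one plausible criterion, not one the quoted passages specify, and both function names are invented.

# Hypothetical update loop for claim 19's mapping: when gestural input moves
# the avatar, re-rank the candidate content positions against the avatar's
# new position and surface the best one. Proximity is an assumed criterion.

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def on_avatar_moved(new_avatar_position, candidate_content_positions):
    """Return the updated recommended content placement to display."""
    return min(candidate_content_positions,
               key=lambda pos: distance(pos, new_avatar_position))

# Example: the avatar is dragged to (2, 0, 2); the nearest candidate wins.
print(on_avatar_moved((2, 0, 2), [(0, 0, 0), (2, 0, 3), (5, 0, 5)]))
# -> (2, 0, 3)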
In regards to claim 20 (Previously Presented): Shesh discloses a system (Shesh, Abstract), comprising:

- one or more processors (Shesh, para [0004]);

- and one or more non-transitory computer readable media comprising computer readable code executable by the one or more processors (Shesh, para [0004]) to:

- obtain geometric information associated with a physical environment of a communication device participating in a multi-user communication session (Shesh, para [0076], [0079], and [0081]; Reference at [0076] discloses in response to the detected request, mobile device 200 or XR computing system 130 can determine positions and orientations of one or more users that access the visible portion of the augmented reality environment (e.g., in block 306). The one or more users may include a first user that operates mobile device 200 (e.g., the user wearing the HMD or the augmented reality eyewear), and a second user that operates one or more of mobile device 102, mobile device 104, or other mobile devices within network environment 100 (i.e. communication device participating in a multi-user communication session). Para [0079] discloses mobile device 200 or XR computing system 130 can also identify a portion of the augmented reality environment that is visible to the user of mobile device 200 (e.g., in block 308), and obtain depth map data that characterizes the visible portion of the augmented reality environment (e.g., in block 310). Para [0081] discloses mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. obtaining geometric information associated with a physical environment));

- determine an activity type for the multi-user communication session (Shesh, para [0038]; Reference discloses the established extended reality environment can include graphical or audio content that enables users of mobile device 102 or 104 to visit and explore various historical sites and locations within the extended reality environment, or to participate in a virtual meeting attended by various, geographically dispersed participants (i.e. visiting historical sites or attending a virtual meeting interpreted as a determined activity type for the multi-user communication session));

- determine a recommended avatar placement based on the geometric information and the activity type (Shesh, para [0048], [0081], [0084], [0105], and [0129]; Reference at para [0048] discloses virtual content generation module 172 can include a graphics module 176 that generates an animated representation of the virtual assistant…such as visual characteristics of an avatar…Para [0081] discloses mobile device 200 or XR computing system 130 can identify one or more objects disposed within the visible portion of the augmented reality environment (e.g., in block 312)…Semantic analysis module 168 can also apply one or more of the semantic analysis techniques described above to the accessed images, to identify physical objects within the accessed images, identify locations, types and dimensions of the identified objects within the accessed images, and thus, identify the locations and dimensions of the identified physical objects within the visible portion of the augmented reality environment (i.e. geometric info). Para [0084] discloses based on the generated depth map data 222, and on the identified physical objects (and their positions and dimensions), position determination module 170, when executed by processor 138 or processor 218, can establish a plurality of candidate positions for the item of virtual content (e.g., candidate positions 522, 524, and 526 of the virtual assistant in FIG. 5) within the visible portion of the augmented reality environment (e.g., in block 314). Position determination module 170 can also compute placement scores that characterize a viability of each of the candidate positions 522, 524, and 526 of the virtual assistant within the augmented reality environment (e.g., in block 316). Para [0105] discloses in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content. Para [0129] discloses when computing the modified placement score for each candidate position within the virtual meeting location, mobile device 200 or XR computing system 130 can apply an additional or alternate weighting factor…the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a “talking head” or similar miniature object disposed in a middle of a “virtual” conference table within the virtual meeting location (i.e. determining a recommended avatar placement based on the geometric information and the activity type));

- and display an indication of the recommended avatar placement in an environmental representation of the multi-user communication session (Shesh, para [0105], [0110], and [0129]; Reference at para [0105] discloses for example, in block 318, position determination module 170 can determine that the placement cost computed for the candidate position associated with candidate virtual assistant 604A (e.g., as illustrated in FIG. 6A) represents a minimum of the placement scores, and can establish that candidate position as the placement position for the virtual assistant or other item of virtual content (i.e. determining the minimum of the placement scores as the placement position is interpreted as the recommended avatar placement). Para [0110] discloses virtual content generation module 172 provides a means for inserting the virtual assistant into the augmented reality environment at the determined placement position. Placement of the avatar in the determined position out of the candidate positions is interpreted as displaying an indication of the determined avatar placement within the environmental representation of the multi-user communication session, as reflected in the example of para [0129], which discloses for example, the mobile device 200 or XR computing system 130 can perform operations that present the virtual assistant as a “talking head” or similar miniature object disposed in a middle of a “virtual” conference table within the virtual meeting location).
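Claim 20 stitches the pieces together: geometric information plus an activity type drive a weighted score, and the best-scoring candidate becomes the recommended avatar placement. The weighting-by-activity idea echoes para [0129]'s "additional or alternate weighting factor" for the virtual-meeting case; the dictionary of weights, the feature names, and the function below are all invented for illustration.

# Hypothetical end-to-end sketch of the claim 20 mapping. The per-activity
# weights echo para [0129]'s weighting factor for virtual meetings (e.g., the
# "talking head" centered on a conference table); every name and number here
# is invented for illustration.
ACTIVITY_WEIGHTS = {
    "virtual_meeting":  {"near_table": 2.0, "visibility": 1.0},
    "site_exploration": {"near_table": 0.5, "visibility": 2.0},
}

def recommend_avatar_placement(candidates, activity_type):
    """candidates: dicts with a 'position' plus per-feature scores, e.g.
    {'position': (1, 0, 2), 'near_table': 0.9, 'visibility': 0.7}."""
    weights = ACTIVITY_WEIGHTS[activity_type]

    def weighted_score(c):
        return sum(w * c.get(feature, 0.0) for feature, w in weights.items())

    best = max(candidates, key=weighted_score)
    return best["position"]   # an indication is displayed at this position

# Example: in a virtual meeting, the table-adjacent candidate wins.
candidates = [
    {"position": (1, 0, 2), "near_table": 0.9, "visibility": 0.7},
    {"position": (4, 0, 4), "near_table": 0.1, "visibility": 0.9},
]
print(recommend_avatar_placement(candidates, "virtual_meeting"))  # -> (1, 0, 2)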
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see the Notice of References Cited (PTO-892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERRELL M ROBINSON, whose telephone number is (571) 270-3526. The examiner can normally be reached 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KENT CHANG, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TERRELL M ROBINSON/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Mar 24, 2023: Application Filed
Jan 10, 2025: Non-Final Rejection — §102
Apr 14, 2025: Response Filed
Jul 09, 2025: Final Rejection — §102
Oct 10, 2025: Notice of Allowance
Oct 10, 2025: Response after Non-Final Action
Nov 03, 2025: Response after Non-Final Action
Jan 27, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602852: DYNAMIC GRAPHIC EDITING METHOD AND DEVICE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12572196: MANAGING AN INDUSTRIAL ENVIRONMENT HAVING MACHINERY OPERATED BY REMOTE WORKERS AND PHYSICALLY PRESENT WORKERS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573124: PROGRESSIVE REAL-TIME DIFFUSION OF LAYERED CONTENT FILES WITH ANIMATED FEATURES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573111: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD FOR APPROPRIATE DISPLAY OF PRESENTER AND PRESENTATION ITEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12561904: IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD FOR CORRECTING COMPUTER GRAPHICS IMAGE IN MIXED REALITY
Granted Feb 24, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 90% (+7.5%)
Median Time to Grant: 2y 3m
PTA (Patent Term Adjustment) Risk: High
Based on 486 resolved cases by this examiner. Grant probability derived from career allow rate.
