Prosecution Insights
Last updated: April 19, 2026
Application No. 18/642,148

USER INTERFACE FOR INTERACTING WITH HUMAN USERS

Status: Non-Final OA under §103
Filed: Apr 22, 2024
Examiner: AMIN, JWALANT B
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: Wells Fargo Bank N.A.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (above average; 500 granted of 631 resolved; +17.2% vs TC avg)
Interview Lift: +15.3% (allow rate among resolved cases with an interview vs. without; a strong lift)
Typical Timeline: 2y 9m average prosecution; 14 applications currently pending
Career History: 645 total applications across all art units

A short sketch of how these headline figures fit together follows below.
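The figures above are simple ratios, and the following minimal Python sketch shows one way they could be reproduced. Only the 500/631 counts, the reported +15.3% lift, and the 94% with-interview figure come from this page; the interviewed vs. non-interviewed split in the code is a hypothetical illustration, not real data for this examiner.

```python
# Sketch of the arithmetic behind the "Examiner Intelligence" card.
# Real inputs: 500 granted / 631 resolved, reported lift of +15.3%.
# The interviewed / not-interviewed split below is made up for illustration.

granted, resolved = 500, 631
career_allow_rate = granted / resolved                  # ~79.2%, shown as "79%"

interviewed     = {"granted": 205, "resolved": 230}     # hypothetical split
not_interviewed = {"granted": granted - 205, "resolved": resolved - 230}

rate_with    = interviewed["granted"] / interviewed["resolved"]
rate_without = not_interviewed["granted"] / not_interviewed["resolved"]
interview_lift = rate_with - rate_without               # dashboard reports +15.3%

# "94% With Interview" is consistent with career rate plus the reported lift.
with_interview_probability = career_allow_rate + 0.153  # ~94.5%, shown as "94%"

print(f"Career allow rate:   {career_allow_rate:.1%}")
print(f"Interview lift:      {interview_lift:+.1%}")
print(f"With-interview rate: {with_interview_probability:.1%}")
```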

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Chart note: black line = Tech Center average estimate. Based on career data from 631 resolved cases (see the sketch below).
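The deltas above are each consistent with a single Tech Center baseline of roughly 40%. The dashboard does not define exactly what the per-statute percentages measure, so the interpretation in the comments below (share of this examiner's office actions raising each rejection basis) is an assumption; only the rates and the baseline implied by the deltas are taken from the page.

```python
# Statute-specific rates from the card above, compared against the Tech
# Center baseline implied by the reported deltas (~40% for every statute).
# What each rate measures is not documented by the dashboard; assumed here
# to be the share of this examiner's office actions raising that basis.

examiner_rate = {"§101": 0.134, "§103": 0.568, "§102": 0.075, "§112": 0.108}
tc_average_estimate = 0.40

for statute, rate in examiner_rate.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```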

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 4-8, 11, 14-18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bradley et al. (US 2024/0062430, hereinafter Bradley), and further in view of Sandler et al. (US 2023/0385386, hereinafter Sandler). Regarding claim 1, Bradley teaches a virtual reality host computing system (virtual reality viewing devices such as virtual reality headsets 117-119 and communication devices 120-122, fig. 1 and [0032]) comprising: at least one hardware processor (processor 1402, fig. 14/processing unit 1504, fig. 15) programmed to perform operations ([0049]: a system, including a processor, and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations) comprising: receiving, from a user device, an indication (detecting encounter) that a user avatar for a user (user 2) is to encounter a third-party avatar associated with a third party (user 1) in a virtual environment (fig. 12 step 1202: determining, by a system comprising a processor, a virtual reality encounter between a first user, a second user and a third user; [0034]: In the example shown, user 2's (115) view of user 1 (114) is shown when user 2 and user 1 encounter each other within the virtual reality environment 106. An encounter may be detected in a number of different ways. In one embodiment, an encounter may be detected by the virtual reality server detecting the field of view direction of user 1 and user 2 and determine that they are looking in each other's direction at the same time, for a threshold period of time. In another embodiment, an encounter may be detected when user 1 and user 2 are both looking at each other in their field of view and are also within a proximate distance to each other. 
In yet another embodiment, an encounter may be detected when user 1 and user 2 are talking to each other, e.g., independent of each user's field of view; (for example one user may be sitting in a virtual event in a row behind another user, and the two users can conversing without necessarily looking at each other). In one instance, the determination that user 1 and user 2 are talking to each other may be made by the virtual reality server analyzing the virtual reality content and avatar activity within the content to recognize that user 1 and user 2 are the only avatars at which each other is looking, within their field of view, or within a specified range within their field of view. In this manner, it may be inferred that user 2 is conversing with user 1; [0049]: Example operation 1106 represents detecting a second virtual reality encounter between the first user and the second user that is later in time relative to the first virtual reality encounter); accessing trust data describing a level of trust (level of trust) between the user and the third party ([0049]: Example operation 1108 represents accessing the relationship data; [0050]: The relationship data can include at least one: of trust level data, or familiarity level data; [0065]: The level of familiarity with other users may also relate to a level of trust with those other users. A user is thus able to perceive the avatars of other users at a level of visual and aural clarity depending on their level of familiarity and/or trust with those other users); determining a first visual indicator (indicator such as visual resolution, opacity, color, dashed outline, surrounding border, etc.) based at least in part on the level of trust between the user and the third party ([0022]: the appearance of a user's avatar can vary in association with the relationship data, including familiarity levels and/or trust levels of that user with each of the other user(s) encountered. For example, resolution, opacity, color and/or the like can be varied to alter the avatar's appearance, such as to present a very low resolution avatar to others where there is little familiarity and/or trust, a medium resolution avatar for some familiarity and/or trust, and a high resolution avatar for substantial familiarity and/or trust; [0037]: If user 2 (115) has a subsequent encounter with user 1 (116) in the same virtual reality environment 106 or a different virtual reality environment, the virtual reality server 102 may retrieve from the user profile data store 110 the level of familiarity of user 1 (114) to user 2 (115). If the level of familiarity is sufficiently high, the avatar 112 for user 1 (114) may be presented in full visual resolution to user 2 (115), as shown by the modified appearance of the avatar, now labeled 113. Furthermore, the level of audio volume and/or the clarity of audio spoken by user 1 (114) to user 2 (114) may be at full resolution as well; [0038]: In a similar manner, as represented in FIG. 5, if the user 3 (116) encounters user 1 (114) and user 1 (114) has a lower level of familiarity to user 3 (116), the avatar (e.g., 112) for user 1 (114) may be altered in its rendering by the VR server before it is presented to user 3 (116). Therefore, the presentation of the avatar may look different to user 2 (115), and user 3 (116), even at the same time. 
In this manner, any user may have an easy visual means to best see other avatars of users with whom they have a high level of familiarity; [0041]: Moreover, instead of or in addition to modifying resolution, other ways to vary the appearance of an avatar can be used. For example, FIG. 7A shows the user as having a dashed outline and features. FIG. 7B shows the user as presented with a surrounding border, which, for example, may be colored differently, e.g., ranging from red for low familiarity to yellow for medium to green for high familiarity; there may be only one color for each of low, medium or high, for example, or there may be a color gradient that ranges with a more granular score. Trust, as described herein, can also be presented differently. FIG. 8A shows modified opacity as one type of modified representation; [0052]: Outputting the altered visual representation of the avatar can include outputting an indicator in association with the avatar based on at least part of the relationship data); generating a rendering of the third-party avatar (avatar 112 of user 1), the rendering of the third-party avatar comprising the first visual indicator (indicator such as visual resolution, opacity, color, dashed outline, surrounding border, etc.; avatar 112 of user 1 is presented to user 2 in full resolution; [0037]: If user 2 (115) has a subsequent encounter with user 1 (116) in the same virtual reality environment 106 or a different virtual reality environment, the virtual reality server 102 may retrieve from the user profile data store 110 the level of familiarity of user 1 (114) to user 2 (115). If the level of familiarity is sufficiently high, the avatar 112 for user 1 (114) may be presented in full visual resolution to user 2 (115), as shown by the modified appearance of the avatar, now labeled 113. Furthermore, the level of audio volume and/or the clarity of audio spoken by user 1 (114) to user 2 (114) may be at full resolution as well; [0038]: In a similar manner, as represented in FIG. 5, if the user 3 (116) encounters user 1 (114) and user 1 (114) has a lower level of familiarity to user 3 (116), the avatar (e.g., 112) for user 1 (114) may be altered in its rendering by the VR server before it is presented to user 3 (116). Therefore, the presentation of the avatar may look different to user 2 (115), and user 3 (116), even at the same time. In this manner, any user may have an easy visual means to best see other avatars of users with whom they have a high level of familiarity; [0052]: Altering the presentation can include outputting an altered visual representation of the avatar. Outputting the altered visual representation of the avatar can include modifying at least one of: display resolution of the avatar, or opacity of the avatar. 
Outputting the altered visual representation of the avatar can include outputting an indicator in association with the avatar based on at least part of the relationship data; [0054]: presenting, to the first user, a representation of the avatar as presented to the second user during the second virtual reality encounter); serving the rendering of the third-party avatar to the user device for display by the user device to the user (presenting the avatar 112 of user 1 to user 2 is functionally analogous to rendering the avatar to the user device for displaying to the user; [0037]: the avatar 112 for user 1 (114) may be presented in full visual resolution to user 2 (115), as shown by the modified appearance of the avatar, now labeled 113; [0058]: Example operation 1206 represents presenting, by the system during the virtual reality encounter, an avatar of the first user to the second user via a first presentation based on the relationship data. Example operation 1208 represents presenting, by the system during the virtual reality encounter, the avatar of the first user to the second user via a second presentation). Bradley does not explicitly teach determining a portion of user data associated with the user that is shareable with the third party, the portion of the user data being determined based at least in part on the trust data; and sending the portion of the user data to a third-party computing device associated with the third party. Sandler teaches determining a portion of user data (content or data or portions thereof) associated with the user that is shareable with the third party, the portion of the user data being determined based at least in part on the trust data (content/data or portions thereof may be identified and shared by a first user with a second user based at least one the level of trust; [0042]: content/data may be allowed to be provided to the second user device via the distribution server or via a direct network connection. This content or data may be shared with the second user device based on a sharing criterion that may be associated with the levels of trust, the levels of intimacy; [0049]: a series of steps that may be performed at a user device when a user wishes to share data with other user devices based on trust levels and intimacy levels. Particular sets of data or portions thereof may only be sent to user devices that share a common interest, permission, or access level associated with a set of rules. FIG. 3 begins with step 310 where levels of trust are received. This may include a first user of a first device providing inputs to a user interface that associates licensees (i.e. other users) with levels of trust; [0057]: the level of trust may identify a title of a book, a level of intimacy may correspond to an age or demographic of the second user, and the level of interest may identify portions of content in the book that should not be shared with individuals of a certain age group or demographic. This means that certain content (e.g. sexually explicit content) may be removed from the content shared with individuals that are less than 18 years old. In this way a book with a restricted (R) level of content rating or an explicit (x) level rating may be removed from the shared content according a set of sharing rules. This may allow particular portions of content to be shared with only devices of users according to the set of sharing rules in step 360 of FIG. 
3; [0081]: Methods and apparatus of the present disclosure may use levels of trust, intimacy, and possibly interest when identifying portions of content that cart be shared with certain individuals. For example, content owned by the University discussed above may be shared with a scientist, students, and laboratory personnel based on levels of trust, intimacy, and interest. Here again the scientist and the University, for example, may be aware of certain information, where the scientist's assistants (students or laboratory personnel) may only may access information based on matching levels of trust, intimacy, and possibly interest); and sending the portion of the user data to a third-party computing device (Bob’s user device) associated with the third party (Alex may identify data to be send to Bob based on the trust levels between Alex and Bob; [0045]: the first user device and the second user device may share data directly via a private network or via a transfer or signal server such as transfer/signal server 150 of FIG. 1; [0049]: Each of these user devices may have a respective unique identifier that allows data to be sent to them by any of the ways discussed in respect to FIGS. 1-2. Exemplary unique device identifiers may include a phone number, a unique code, an avatar or other means that link a device such as a smartphone to a particular user. Such identifiers may be used to direct data sent from a computing device to one or more other computing devices. This may include sending data through a server (such as distribution server 140 or transfer/signal server 150 of FIG. 1) configured to securely transfer data; [0055]: the first user may enter an interest of material science collaboration. This may also include entering a collaboration level for contacts that like to are part of the collaboration. A first user, Alex may identify that his contact Bob is a researcher that tests materials, that his contact Joe is a student that performs tests under Bob's supervision, and that his contact Adam is assigned the tasks of tracking materials used as part of the research. When Alex drafts a message regarding a test or that includes test data, that message may be filtered based on trust levels, intimacy levels, interest types, and possibly interest levels. When Alex wishes to plan a new test, he may draft a message in step 340 and he may then configure that message with levels of trust, intimacy, and possibly a type of interest. This may also include configuring the message or other data to be sent only to contacts that have a minimum interest level of researcher. This may allow only Bob to receive this message or and related data; [0056]: Alex may stipulate that only contacts that have at least a medium trust levels and medium intimacy levels should be sent this message or data. The message or data may then be sent to user devices that belong to such a filtered set of contacts in step 360 of FIG. 3. This filtered set of contacts may be referred to as a group of contacts that “compatible” with parameters, information, or ratings associated with the message or data; [0072]: Rules could be used to allow a processor of Sam's user device to identify which messages or data that Sam drafts that will be sent to other user devices. Any messages or other shared data sent from Sam's user device to Nancy's user device must conform to the rules set by Sam. 
Such a rule could identify that message or data A can only be sent to devices of users that have been assigned a trust/intimacy rating of B, A (medium trust, high intimacy)). Bradley contains a “base” device of rendering and displaying an avatar of a second avatar on a first’s user device based on a level of trust between the first user and the second user which the claimed invention can be seen as an “improvement” in that enabling a first user to identify and share data with the second user based on the level of trust. Sandler contains a known technique of a first user identifying data to be shared with a second user based on the level of trust ([0042], [0045], [0049], [0055-[0057], [0072] and [0081]) that is applicable to the “base” device. Sandler’s known technique of a first user identifying data to be shared with a second user based on the level of trust ([0042], [0045], [0049], [0055-[0057], [0072] and [0081]) would have been recognized by one skilled in the art as applicable to the “base” device of Bradley and the results would have been predictable and resulted in enabling a first user to share data with a second user based on the trust level which results in an improved device. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Claims 11 and 20 are similar in scope to claim 1, and therefore the examiner provides similar rationale to reject these claims. Moreover, Bradley also teaches a non-transitory computer -readable medium ([0109]-[0112]). Regarding claim 4, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 1, the first visual indicator comprising a depiction of an item (indicator such as visual resolution, opacity, color, dashed outline, surrounding border, etc., [0022] and [0041] - Bradley; surrounding border is functionally analogous to the item), and the generating of the rendering of the third-party avatar comprising positioning the depiction of the item at least one of on, over, or beside the third-party avatar (Bradley - as shown in 7B, the surrounding border is displayed over and around the avatar). Regarding claim 5, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 1, the first visual indicator comprising a modification to a perimeter of the third-party avatar (indicator such as visual resolution, opacity, color, dashed outline, surrounding border, etc., [0022] and [0041] - Bradley; dashed outline is functionally analogous to a perimeter of the avatar), and the generating of the rendering of the third-party avatar comprising: accessing a third-party avatar template (in order to present an avatar representing an actual user, the avatar is inherently accessed from a user profiled data store 110; Bradley - [0031]: FIG. 1 shows a virtual reality server 102, which based on virtual reality content 104, outputs a virtual environment 106 in which one or more virtual reality users can exist, e.g., as avatars representing actual users. The virtual reality server 102 is also coupled to a virtual reality encounters data store 108 and user profile data store 110 for avatar presentation, including the avatar 112; Bradley – [0033]: In the example of FIG. 
1, the virtual reality content is presented to the users 114-116 via the virtual reality server 102, which is in communication with the user profile data store 110, e.g., with profile data for each user); and modifying a perimeter (outline of the avatar is modified into dashed outline) of the third-party avatar template (Bradley – [0041]: IG. 7A shows the user as having a dashed outline and features). Regarding claim 6, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 5, the modifying of the perimeter of the third-party avatar comprising at least one of blurring at least a portion of the perimeter of the third-party avatar (very low resolution avatar is functionally analogous to a blurry avatar; Bradley – [0022]: present a very low resolution avatar; Bradley – [0031]: avatar presentation, including the avatar 112, intentionally presented in a low resolution representation; Bradley –the avatar as shown in fig. 6B in low resolution appears blurry) or modifying a color of at least a portion of the perimeter of the third-party avatar (the dashed outline of the avatar can be colored based on the level of trust; Bradley – [0022]: For example, resolution, opacity, color and/or the like can be varied to alter the avatar's appearance; Bradley – [0041]: FIG. 7B shows the user as presented with a surrounding border, which, for example, may be colored differently, e.g., ranging from red for low familiarity to yellow for medium to green for high familiarity; there may be only one color for each of low, medium or high, for example, or there may be a color gradient that ranges with a more granular score; Bradley – [0042]: Virtually any way to modify the appearance of a user's avatar based on relationship data, e.g., familiarity and/or trust levels are feasible. Further, any combinations of the above concepts can be used, e.g., resolution for familiarity, a colored border for trust, and so on). Regarding claim 7, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 1, the first visual indicator comprising a color (Bradley – [0022] and [0041]: indicator such as visual resolution, opacity, color, dashed outline, surrounding border, etc.), and the generating of the rendering of the third-party avatar comprising: accessing a third-party avatar template (in order to present an avatar representing an actual user, the avatar is inherently accessed from a user profiled data store 110; Bradley - [0031]: FIG. 1 shows a virtual reality server 102, which based on virtual reality content 104, outputs a virtual environment 106 in which one or more virtual reality users can exist, e.g., as avatars representing actual users. The virtual reality server 102 is also coupled to a virtual reality encounters data store 108 and user profile data store 110 for avatar presentation, including the avatar 112; Bradley – [0033]: In the example of FIG. 1, the virtual reality content is presented to the users 114-116 via the virtual reality server 102, which is in communication with the user profile data store 110, e.g., with profile data for each user); and modifying the third-party avatar template to include the color (color of the avatar can be modified based on the trust level; Bradley – [0022]: For example, resolution, opacity, color and/or the like can be varied to alter the avatar's appearance; Bradley – [0041]: FIG. 
7B shows the user as presented with a surrounding border, which, for example, may be colored differently, e.g., ranging from red for low familiarity to yellow for medium to green for high familiarity; there may be only one color for each of low, medium or high, for example, or there may be a color gradient that ranges with a more granular score; Bradley – [0042]: Virtually any way to modify the appearance of a user's avatar based on relationship data, e.g., familiarity and/or trust levels are feasible. Further, any combinations of the above concepts can be used, e.g., resolution for familiarity, a colored border for trust, and so on). Regarding claim 8, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 1, the operations further comprising generating a rendering of a user data interface element (virtual reality environment 106, fig. 4 - Bradley), the user data interface element depicting the portion of the user data (message such as shown in the dialogue box in the virtual reality environment 106 representing a message of the user, fig. 4 - Bradley), the sending of the portion of the user data to the third-party computing device comprising sending the rendering of the user data interface element to the third-party computing device for display in proximity to the user avatar (a dialogue message from the user is shared with another user and displayed next to the user’s avatar in the virtual reality environment of another user; Bradley – fig. 4 shows the dialogue box representing a dialogue of user 1 with respect to user 2 is displayed in the virtual reality environment of user 2; Sandler - Alex may identify data to be send to Bob based on the trust levels between Alex and Bob; Sandler - [0045]: the first user device and the second user device may share data directly via a private network or via a transfer or signal server such as transfer/signal server 150 of FIG. 1; Sandler - [0049]: Each of these user devices may have a respective unique identifier that allows data to be sent to them by any of the ways discussed in respect to FIGS. 1-2. Exemplary unique device identifiers may include a phone number, a unique code, an avatar or other means that link a device such as a smartphone to a particular user. Such identifiers may be used to direct data sent from a computing device to one or more other computing devices. This may include sending data through a server (such as distribution server 140 or transfer/signal server 150 of FIG. 1) configured to securely transfer data; Sandler - [0055]: the first user may enter an interest of material science collaboration. This may also include entering a collaboration level for contacts that like to are part of the collaboration. A first user, Alex may identify that his contact Bob is a researcher that tests materials, that his contact Joe is a student that performs tests under Bob's supervision, and that his contact Adam is assigned the tasks of tracking materials used as part of the research. When Alex drafts a message regarding a test or that includes test data, that message may be filtered based on trust levels, intimacy levels, interest types, and possibly interest levels. When Alex wishes to plan a new test, he may draft a message in step 340 and he may then configure that message with levels of trust, intimacy, and possibly a type of interest. This may also include configuring the message or other data to be sent only to contacts that have a minimum interest level of researcher. 
This may allow only Bob to receive this message or and related data; Sandler - [0056]: Alex may stipulate that only contacts that have at least a medium trust levels and medium intimacy levels should be sent this message or data. The message or data may then be sent to user devices that belong to such a filtered set of contacts in step 360 of FIG. 3. This filtered set of contacts may be referred to as a group of contacts that “compatible” with parameters, information, or ratings associated with the message or data; Sandler - [0072]: Rules could be used to allow a processor of Sam's user device to identify which messages or data that Sam drafts that will be sent to other user devices. Any messages or other shared data sent from Sam's user device to Nancy's user device must conform to the rules set by Sam. Such a rule could identify that message or data A can only be sent to devices of users that have been assigned a trust/intimacy rating of B, A (medium trust, high intimacy); Sandler – [0086]: The computing device of FIG. 4 may be a device such as a desktop computer, notebook computer, tablet, or cell phone computing device A network interface or wireless communication interface may communicate with a remote computing device. Computing devices consistent with the present disclosure may also include a display that displays a user interface that allows users to set levels of trust and intimacy. This display may also be used to prepare messages or data sets to send or to display received messages or data). Claims 14-18 are similar in scope to claims 4-8, respectively, and therefore the examiner provides similar rationale to reject these claims. Claim(s) 2-3 and 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bradley, in view of Sandler, and further in view Bar-Zeev et al. (US 2021/0339143, hereinafter Bar-Zeev). Regarding claim 2, the combination of Bradley and Sandler teaches the virtual reality host computing system of claim 1, the operations further comprising: serving the rendering of the user avatar (avatar of use 2 as shown in fig. 10 is displayed in the larger display area than the avatar of user 1) to the third-party computing device (user device of user 1 shows a mirror view representation of the their own avatar while displaying the avatar of user 2; fig. 10; [0047]: Turning to another aspect, as represented in FIG. 10, a user may also be presented with a mirror view representation 1000(a) of themselves as they are being presented by the virtual reality server to another user. In this manner, a user may preview what their avatar looks like to the other user and may alter it to improve (to 1000(b)) or degrade (back to 1000(a)), (possibly in smaller improvement and/or degradation steps at a time), the presentation of their avatar the other user. Although not explicitly shown, it is understood that audio can be improved or degraded while in an encounter, and it is feasible for a user to experience his or her own voice to decide whether to adjust the avatar's aural output during the encounter). The combination of Bradley and Sandler does not explicitly teach determining a second visual indicator (degree of attenuation) based at least in part on the level of trust (trust metric or trust level) between the user (user X) and the third party (user N; fig. 2A, fig. B; [0036]: FIGS. 2A-2B illustrate a spectrum of attenuation 200 and an exemplary trust metric 250 for determining the spectrum of attenuation 200 in accordance with some implementations. 
In some implementations, an avatar in the SR setting can be invisible, partially attenuated, fully visible, or anywhere in between along the spectrum of attenuation 200; [0040]: As shown in FIG. 2B, in some implementations, the trust metric 250 specifies the SR setting and/or the context where the avatars reside, a respective trust level among the avatars in such context, corresponding avatar social interaction criteria, and/or the nature of attenuation in case of breaching the avatar social interaction criteria, among others); generating a rendering of the user avatar (avatar X of user X), the rendering comprising the second visual indicator (as shown in fig. 4, from user N’s perspective as displayed in his device, the avatar X of user X is attenuated (faded, blinked, shrank, or other animation) based on the trust metric of user X and user N; [0042]: The spectrum of attenuation 200 and the exemplary trust metric 250 shown in FIGS. 2A and 2B are from various perspectives. For example, in an SR scene where avatar A represents user A and avatar B represents user B, avatar A's trust level of avatar B is high. As such, when avatar B moves within a threshold distance from avatar A, no attenuation is displayed in the virtual scene from user A's perspective. On the other hand, avatar B's trust level of avatar A may not be high. As such, from user B's perspective, certain degree of attenuation is displayed in the SR scene. Further, even when two avatars share the same trust metric and the same set of avatar social interaction criteria, the attenuation can be displayed differently for different users as will be described below with reference to FIG. 4; [0043]: FIG. 4 is a block diagram 400 illustrating attenuation of co-user interactions from various perspectives in accordance with some implementations. In one exemplary scenario, avatar N representing user N is too close to avatar X representing user X in the SR setting. Based on the trust metric of both N and X and the corresponding avatar social interaction criteria, certain degree of attenuation is generated. From user N's perspective (e.g., display of the scene 106 on the SR device 104-N of the user 10-N as shown in FIG. 1), avatar X is attenuated, e.g., faded, blinked, shrank, or other animation indicating the breaching of the avatar social interaction criteria. From user X's perspective, however, avatar N and avatar X are fully visible and are at a socially acceptable distance in some implementations, as if avatar N did not move into avatar X's space. As such, user X would not be distracted by avatar N who is too close, yet avatar N be led into believing that she has succeeded in disturbing avatar X). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Bar-Zeev’s knowledge of determining degree of attenuation based on the trust metric between two users and displaying the avatar based on the degree of attenuation as taught and modify the system of Bradley and Sandler because such a system enhances the user experience by attenuating co-user interactions in a simulated reality setting ([0001]). Regarding claim 3, the combination of Bradley, Sandler and Bar-Zeev does not explicitly teach the virtual reality host computing system of claim 2, the second visual indicator being a depiction of a face of the user. 
The combination of Bradley, Sandler and Bar-Zeev teaches the second visual indicator depicts a fading, blinking or shrinking of an avatar (Bar-Zeev - [0043]). However, it would have been prima facie obvious for the visual indicator to depict a user of the user rather than shrinking his avatar. Whether the visual indicator depicts the face of the user or shrinks his avatar is solely a matter of aesthetic design choice, and would not be sufficient to distinguish over the prior art. See MPEP 2144.04. Claims 12-13 are similar in scope to claims 2-3, and therefore the examiner provides similar rationale to reject these claims. Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bradley, in view of Sandler, and further in view Moeglein et al. (US 2023/0025323, hereinafter Moeglein). Regarding claim 9, the combination of Bradley and Sandler does not explicitly teach the virtual reality host computing system of claim 1, the operations further comprising, before sending the portion of the user data to the third-party computing device, receiving, from the user device, a request to provide the portion of the user data to the third party. Moeglein teaches before sending the portion of the user data to the third-party computing device (selectively sharing user’s information with an ally (using a second device) in response to a request), receiving, from the user device, a request to provide the portion of the user data to the third party (first an ally (using their device) sends a request to the user to share the user’s information and the user, using his/her device, will selectively share the information with the ally based on trust level; fig. 5 step 550; [0046]: FIG. 2 is a flow chart illustrating steps of forming an ally relationship between a first and a second user (STEP 200); [0176]: Next a request may be made to share information, or any communication may be requested, and the user's information can be selectively shared with the ally, dependent upon the trust level. (STEP 550) The user's information that may be shared may include user profile information, one or more ally lists, favorites, shared interests, and a calendar; [0264]: Upon sending a message, the sender will receive confirmation that it is sent once it makes it to a store-and-forward server, whether that server is on the recipient's device, in the cloud, or in a mutual ally's device. The recipient may send and the sender may also receive confirmation when the message is read and when it is responded to, including a date and time stamp. Further, the recipient may send and the sender may receive scheduling information, for when the recipient plans to read or respond to the message. The sender's device may notify the sender of each of these types of information in a number of ways, including changing colors of the message in the sender's message history, providing an iconic representation or animation of the message status, and making this icon responsive to display message status information, including a list of status changes and associated times of those status changes). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Moeglein’s knowledge of sending data to an ally upon receiving a request to provide data as taught and modify the system of Bradley and Sandler because such a system prevents unwanted communications and avoids unnecessary distractions that could otherwise sap energy and waste time ([0022]). 
Claim 19 is similar in scope to claim 9, and therefore the examiner provides similar rationale to reject claim 19. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bradley, in view of Sandler, and further in view Carbune et al. (US 2024/0202265, hereinafter Carbune). Regarding claim 10, the combination of Bradley and Sandler does not explicitly teach the virtual reality host computing system of claim 1, the portion of the user data comprising payment element data describing a payment element used for a purchase by the user from the third party. Carbune teaches the portion of the user data comprising payment element data describing a payment element (credit card information) used for a purchase by the user from the third party (location automated assistant located at the coffee shop is functionally analogous to the third party; a user purchasing a coffee from a coffee shop provides his credit card information to the location automated assistant located at the coffee shop based on the trust measure between the location automated assistant and the user; [0019]: the devices 105 and 115 (as shown in fig. 1) can be wearable apparatus such as glasses of the user having a computing device, a virtual or augmented computing device, etc. and the ecosystem of the devices 105 and 115 can be manually and/or automatically associated with each other in a device topology representation of the ecosystem; [0046]: a coffee shop can have a location device 115 executing a location automated assistant 118. The user, once paired, can provide a request, via the headphone 100, that includes a coffee order and can then sit at a table and wait for the order to be prepared. In this instance, the location automated assistant 118 may be provided with credit card information of the user but no other additional information, based on the trust measure between the location automated assistant 118 and the user, and with the explicit intent to utilize the credit card information to purchase a coffee). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Carbune’s knowledge of providing the user’s credit card information to the location automated assistant at the coffee shop based on the measure of trust as taught and modify the system of Bradley and Sandler because such a system using the location-based automated assistant causes rendering of the information at the headphones without requiring any installation of the location-based automated assistant at the user device and thereby enhancing an experience of the user ([0003]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cox et al. (US 2009/0271714) describes to identify a plurality of joined users that joined each virtual area and computes a joined user weighting for each of the plurality of joined users. The joined user weightings include friend of a friend level weightings and commonality weightings. Next, the invention described herein computes a virtual area friendliness level for each of the virtual areas by aggregating each of the joined user weightings for each of the virtual areas, and provides the virtual area friendliness levels to the external user in order for the external user to select the appropriate virtual area. Stoll et al. 
(US 2015/0278917) describes a method for obtaining product recommendations include a method comprising receiving from a requestor a recommendation request and determining a first level category associated with the recommendation request. The method further comprises sending instructions to display the recommendation request to one or more first users within a trust network of the requester, wherein the first users are identified as trusted by the requester with respect to the first level category, and if one or more first conditions are satisfied, send instructions to display the recommendation request to one or more second users, wherein each second user is within a respective trust network of a respective first user and is identified as trusted by the respective first user with respect to the first level category. Finn et al. (US 8677254) describes a method of discerning and displaying information regarding a relationship between at least two avatars in a virtual universe environment, the method comprising: determining whether a first avatar and a second avatar have at least one relationship with one or more common avatars in response to an indication of the first avatar initiating an interaction within the virtual universe with the second avatar, wherein the at least one relationship comprises: a common acquaintance comprising at least a third avatar distinct from the first avatar and the second avatar, and a level of trust corresponding to the third avatar based on at least one prior interaction within the virtual world between the third avatar and one of the first or the second avatar, wherein the level of trust is set by the one of the first or the second avatar; in the case that the first and second avatars have at least one relationship with at least the third avatar, displaying, within the virtual universe, information regarding the at least one relationship with the third avatar to at least one of the first and second avatar, the displaying including displaying, within the virtual universe, an indication of a third avatar and a level of trust corresponding to the third avatar, wherein the indication of a level of trust includes an indicia affirming that the third avatar is not trusted or an indicia confirming that the third avatar is trusted; and determining if the third avatar is currently online in the virtual universe. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday 10am - 630pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JWALANT AMIN/Primary Examiner, Art Unit 2612

Prosecution Timeline

Apr 22, 2024: Application Filed
Mar 20, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597091
COMPUTER-IMPLEMENTED METHOD, APPARATUS, SYSTEM AND COMPUTER PROGRAM FOR CONTROLLING A SIGHTEDNESS IMPAIRMENT OF A SUBJECT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592020
TRACKING SYSTEM, TRACKING METHOD, AND SELF-TRACKING TRACKER
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585324
PROCESSOR, IMAGE PROCESSING DEVICE, GLASSES-TYPE INFORMATION DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585130
LUMINANCE-AWARE UNINTRUSIVE RECTIFICATION OF DEPTH PERCEPTION IN EXTENDED REALITY FOR REDUCING EYE STRAIN
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579571
METHOD FOR IMPROVING AESTHETIC APPEARANCE OF RETAILER GRAPHICAL USER INTERFACE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 94% (+15.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 631 resolved cases by this examiner; grant probability is derived from the career allow rate (see the Examiner Intelligence sketch above). One plausible reading of the PTA figure is sketched below.
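The dashboard does not say how "PTA Risk" is scored. One plausible input is the patent term adjustment arithmetic of 35 U.S.C. § 154(b), sketched below using the filing and office-action dates from the timeline above. The 14-month and 3-year thresholds are statutory; treating the 2y 9m median as this application's projected pendency, and the idea that this feeds the risk call, are assumptions.

```python
from datetime import date

# Statutory PTA thresholds under 35 U.S.C. 154(b):
#   "A" delay accrues if the first office action issues >14 months after filing.
#   "B" delay accrues if the patent issues >3 years after filing.
# Dates come from the prosecution timeline above; projected pendency reuses
# the 2y 9m median, which is an assumption about this particular application.

filing_date   = date(2024, 4, 22)
first_oa_date = date(2026, 3, 20)

months_to_first_oa = (first_oa_date - filing_date).days / 30.44
a_delay_months = max(0.0, months_to_first_oa - 14)          # ~9 months accrued

projected_pendency_months = 33                               # 2y 9m median
b_delay_months = max(0.0, projected_pendency_months - 36)    # 0 if under 3 years

print(f"Accrued A-delay:   ~{a_delay_months:.0f} months")
print(f"Projected B-delay: ~{b_delay_months:.0f} months")
```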
