Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,543

COMPUTER-BASED PRIVACY FOR A CHAT GROUP IN A VIRTUAL ENVIRONMENT

Status: Final Rejection (§103)
Filed: Oct 13, 2023
Examiner: SHAIKH, ZEESHAN MAHMOOD
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 16 granted / 31 resolved; -10.4% vs TC avg)
Interview Lift: +55.0% (strong; resolved cases with interview vs. without)
Typical Timeline: 3y 2m average prosecution (32 currently pending)
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 31 resolved cases.
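Each delta on this card is consistent with a single implied Tech Center average of 40%. A minimal Python sketch of that subtraction (the rates come from the card; the 0.40 average, the dictionary layout, and the helper name are inferred for illustration, not taken from any real API):

```python
# Per-statute rates from the card above; the 0.40 TC average is implied
# by the deltas (every shown delta equals rate - 0.40).
TC_AVG = 0.40
statute_rates = {"101": 0.257, "103": 0.458, "102": 0.173, "112": 0.058}

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed gap between this examiner's rate and the TC average."""
    return round(rate - tc_avg, 3)

for statute, rate in statute_rates.items():
    # e.g. "§101: 25.7% (-14.3% vs TC avg)"
    print(f"§{statute}: {rate:.1%} ({delta_vs_tc(rate):+.1%} vs TC avg)")
```

Running this reproduces all four deltas shown on the card, which suggests the dashboard applies one flat TC-average estimate per statute rather than per-statute averages.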

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is responsive to the applicant's amendment dated 1/2/2026. The applicant amended claims 1, 8, and 15.

Response to Arguments

Applicant's arguments, see Remarks (pg. 10, line 1 – pg. 19, line 11), filed 1/2/2026, with respect to claims 1-20 have been fully considered and are persuasive. The 35 U.S.C. 101 rejection of claims 1-20 has been withdrawn.

Applicant's arguments with respect to the 35 U.S.C. 103 rejection of claims 1-5, 8-12, and 15-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. In view of the amendments, a new ground of rejection is provided below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-12, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Moyers (US 20160197862 A1) in view of Punwani et al. (US 20220124286 A1) (hereinafter Punwani), and further in view of Zhang et al. (CN 112785671 A) (hereinafter Zhang).

Regarding independent claims 1, 8, and 15, Moyers teaches a computer-implemented method / a computer system / a computer program product comprising: one or more computer processors (FIG. 4, 24); one or more computer readable storage devices (FIG. 4, 22, 30); and program instructions stored on the one or more computer readable storage devices for execution by at least one of the one or more computer processors, the stored program instructions comprising ([0079]):

responsive to identifying an external user is interested in the chat group, generating and setting a current topic representing a conversation in the chat group as an externally perceivable topic that is perceivable by the external user ([0071] "Some examples provide a quick and easy way for users to wrap virtual areas around contexts of interest, which may be defined, for example, in terms of one or more of content, people, and real-world location"; [0132] "the request includes a topic label, and the virtual area platform associates the topic label with the virtual area");

generating a faux multi-person conversation associated with the externally perceivable topic that corresponds to a real conversation made by members in the chat group, wherein the faux multi-person conversation is a collection of generated faux utterances that correspond with real utterances from the chat group (FIG. 16, 509; [0167] "the chat log area 509 shows the persistent virtual chat history for text chat interactions occurring in connection with Area 1"; [0078] "The network 20 typically includes a number of different computing platforms and transport facilities that support the transmission of a wide variety of different media types (e.g., text, voice, audio, video, and other data) between network nodes");

assigning the faux utterances to chat group members based on a corresponding speaker index ([0082] "Each social network profile 50 typically includes: identity characteristics (e.g., name, age, gender, and geographic location information such as postal mailing address) that describe a respective communicant or a persona that is assumed by the communicant"); and

utilizing one or more corresponding avatars of the chat group members to output the faux utterances to the external user ([0085] "In visual graphical user interfaces, communicants typically are represented in the virtual areas 44 by respective avatars").

Moyers fails to teach expanding a chat area of a chat group to form an experience annulus according to a predetermined distance that a voice volume of a private-chat-group member can propagate; or wherein utilizing the one or more corresponding avatars of the chat group members to present the faux utterances comprises: manipulating mouth movement of the one or more corresponding avatars so that facial and mouth movement of the one or more corresponding avatars mimics and matches the output faux utterances so the conversation in the chat group within the chat area remains private from the external user.

However, Punwani teaches expanding a chat area of a chat group to form an experience annulus according to a predetermined distance that a voice volume of a private-chat-group member can propagate (FIG. 6B; [0113], the system may iteratively increase the size of a join area based on a number of avatars that have joined a conversation).

Moyers and Punwani are considered analogous to the claimed invention because both are in the same field of digital communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the context-based virtual area creation techniques of Moyers with the chat-area expansion technique taught by Punwani in order to facilitate virtual interactions between users that more closely resemble their in-person counterparts (see Punwani [Abstract]).

Moyers in view of Punwani fails to teach wherein utilizing the one or more corresponding avatars of the chat group members to present the faux utterances comprises: manipulating mouth movement of the one or more corresponding avatars so that facial and mouth movement of the one or more corresponding avatars mimics and matches the output faux utterances so the conversation in the chat group within the chat area remains private from the external user.

However, Zhang teaches this limitation ([pg. 8, lines 11-14] "The embodiment of the invention claims a false face animation synthesis fusion with the complementarity of multi-modality information, the adopted method ensures the lip motion and voice/text information of the synchronization and the synchronization of the chin movement and voice/text, so as to ensure the consistency of the lip area and the chin movement;").

Moyers in view of Punwani and Zhang are considered analogous to the claimed invention because all are in the same field of digital communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of Moyers in view of Punwani with the false-face animation generation taught by Zhang in order to improve false face animation synthesis (see Zhang [pg. 1, lines 14-15]).

Regarding claims 2, 9, and 16, Moyers in view of Punwani and Zhang teaches all of the limitations of claims 1, 8, and 15 respectively, upon which claims 2, 9, and 16 depend. Additionally, Punwani teaches wherein the utilized one or more corresponding avatars are physically manipulated to present the faux utterances ([0037] "the system may alter the appearance of an avatar based on actions of the avatar").

Regarding claims 3 and 10, Moyers in view of Punwani and Zhang teaches all of the limitations of claims 1 and 8 respectively, upon which claims 3 and 10 depend.
Additionally, Moyers teaches monitoring and tracking, in real time, at least a portion of the conversation being held within the chat group ([0120] "In addition to identifying the communicants involved, the place of interaction, and start and end times of the interaction, an interaction record also may include links to other information relating to the interaction, including any shared content 444, chat logs 446, and recordings 448"); and identifying and extracting a predetermined number of utterances, the order of the utterances, and a speaker index of each utterance from the conversation, wherein identified verbal or expressed utterances are converted to text via a speech-to-text system or image recognition system, respectively ([0071] "The virtual areas support realtime communications between communicants (e.g., text chat, voice, video, application sharing, and file sharing) and provide a persistent historical repository for interactions in the virtual area", where the examiner interprets the historical repository to include text; [0222] "The speakerphone includes a microphone that converts human voice sounds projected into the physical space 804 by the communicants 808, 810 into output voice data that is transmitted to the client network nodes of the communicants who are present in a particular virtual area, and a speaker that projects human voice sounds received from the client network nodes into the physical space 804", where the examiner interprets voice data as text; [0225] "The voice records typically correspond to voiceprints (also referred to as voice templates or voice models) that are created from features that are extracted from the recorded speech of known communicants in accordance with a speaker recognition enrollment process.", where the examiner interprets a voiceprint as a speaker index).

Regarding claims 4 and 11, Moyers in view of Punwani and Zhang teaches all of the limitations of claims 3 and 10 respectively, upon which claims 4 and 11 depend. Additionally, Punwani teaches identifying the external user is interested in the current topic being discussed in the chat group based on an analysis of historic activities, the user profile, and designated liked topics ([0055] "In some embodiments, a user profile may also include other information that is not related (directly or indirectly) to the virtual environment. For example, the system may determine social networks and/or other sources of content that may be relevant to a user and provide data feeds associated with those content sources…. The system may use this information to determine whether a user joins or leaves a conversation"; [0078] "This offers contextual information that is helpful in evaluating whether to initiate or join a conversation, and potentially suggest initial topics of conversation"); and, responsive to identifying that the external user is interested in the current topic of the chat group, generating and outputting an utterance that is perceivable to the external user ([0037] "In some embodiments, the system may further provide visual, textual, graphical, and/or audio cues to indicate different scenarios as the avatar navigates the virtual environment (e.g., indicating that an avatar is within a specific join distance from another avatar).").

Regarding claims 5, 12, and 18, Moyers in view of Punwani and Zhang teaches all of the limitations of claims 3, 10, and 17 respectively, upon which claims 5, 12, and 18 depend.
Additionally, Punwani teaches, responsive to identifying that the external user is not interested in the conversation or that the chat group is no longer accepting new group members, randomly generating an alternative topic unrelated to the conversation being had by the chat group based on user data associated with the external user to identify a topic that the external user is uninterested in to facilitate the alternative topic; and outputting the randomly generated alternative topic to the external user as an externally hearable topic ([0078] "This offers contextual information that is helpful in evaluating whether to initiate or join a conversation, and potentially suggest initial topics of conversation or an overall context in which the conversation is being initiated").

Regarding claim 17, Moyers in view of Punwani and Zhang teaches all of the limitations of claim 15, upon which claim 17 depends. Additionally, Moyers teaches program instructions to monitor and track, in real time, at least a portion of the conversation being held within the chat group ([0120], cited above); and program instructions to identify and extract a predetermined number of utterances, the order of the utterances, and a speaker index of each utterance from the conversation, wherein identified verbal or expressed utterances are converted to text via a speech-to-text system or image recognition system ([0071], [0222], and [0225], cited above, where the examiner interprets the historical repository to include text, voice data as text, and a voiceprint as a speaker index). Additionally, Punwani teaches program instructions to identify the external user is interested in the current topic being discussed in the chat group based on an analysis of historic activities, the user profile, and designated liked topics ([0055] and [0078], cited above); and, responsive to identifying that the external user is interested in the current topic of the chat group, program instructions to generate and output an utterance that is perceivable to the external user ([0037], cited above).

Allowable Subject Matter

Claims 6-7, 13-14, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: after further search and consideration, the examiner deems that the prior art of record, whether taken alone or in combination, fails to teach, inter alia, "filling a prompt template based on the recorded order of each utterance in the conversation, wherein the recorded speaker index, the word count, the identified language, and the externally hearable topic are utilized to fill the prompt template" in combination with the other claimed features.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Singh et al. (US 20240086142 A1) teaches systems and methods for dynamically adjusting a personal boundary of an avatar in an XR environment. The system identifies a first avatar in an extended reality (XR) environment based on rule data stored in a storage. In response to detecting that the first avatar has entered a portion of the XR environment at a communicable distance from a second avatar, the system determines an offensiveness rating of the first avatar, retrieves from the storage an offensiveness tolerance of the second avatar, and compares the two. If the offensiveness rating of the first avatar exceeds the offensiveness tolerance of the second avatar, the system automatically censors one or more messages from the first avatar to the second avatar.

Xie et al. (CN 110610534 A) teaches an Actor-Critic reinforcement-learning method for automatically generating character mouth animation, addressing the prior art's need for large amounts of sample data and its tendency to produce artifacts. The method comprises: a. collecting voice data and character head images; b. analyzing the voice data to obtain acoustic features; c. performing facial recognition and action-unit identification on the character head images to obtain facial features; d. matching the acoustic features and facial features based on the Actor-Critic algorithm; e. restoring facial expressions and blinking actions, and automatically generating the mouth animation. The invention is suitable for scenes that require fast and realistic generation of mouth animation.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZEESHAN SHAIKH, whose telephone number is (703) 756-1730. The examiner can normally be reached Monday-Friday, 7:30 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZEESHAN MAHMOOD SHAIKH/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Oct 13, 2023
Application Filed
Sep 26, 2025
Non-Final Rejection — §103
Oct 29, 2025
Interview Requested
Nov 17, 2025
Examiner Interview Summary
Nov 17, 2025
Applicant Interview (Telephonic)
Jan 02, 2026
Response Filed
Mar 12, 2026
Final Rejection — §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579373: SYSTEM AND METHOD FOR SYNTHETIC TEXT GENERATION TO SOLVE CLASS IMBALANCE IN COMPLAINT IDENTIFICATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12555575: Wakeup Indicator Monitoring Method, Apparatus and Electronic Device (granted Feb 17, 2026; 2y 5m to grant)
Patent 12518090: LOGICAL ROLE DETERMINATION OF CLAUSES IN CONDITIONAL CONSTRUCTIONS OF NATURAL LANGUAGE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12511318: MULTI-SYSTEM-BASED INTELLIGENT QUESTION ANSWERING METHOD AND APPARATUS, AND DEVICE (granted Dec 30, 2025; 2y 5m to grant)
Patent 12512088: METHOD AND SYSTEM FOR USER-INTERFACE ADAPTATION OF TEXT-TO-SPEECH SYNTHESIS (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 99% (+55.0%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate

Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
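The note above says the headline grant probability is derived from the career allow rate. A minimal Python sketch of how the displayed figures could be reproduced (the 16 granted / 31 resolved counts and the +55.0-point lift come from the examiner card; treating the lift as additive percentage points capped at 99% is an assumption for illustration, not something the page states):

```python
# Hypothetical reconstruction of the projection figures above.
granted, resolved = 16, 31
interview_lift = 0.55  # +55.0 percentage points, per the card

base = granted / resolved                      # ~0.516, displayed as 52%
with_interview = min(base + interview_lift, 0.99)  # assumed 99% cap

print(f"Grant probability: {base:.0%}")        # matches the 52% shown
print(f"With interview: {with_interview:.0%}") # matches the 99% shown
```

Under these assumptions the rounded output matches both headline numbers, though the dashboard may well compute the interview-adjusted figure differently (e.g., from the examiner's with-interview allow rate directly).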
