Prosecution Insights
Last updated: April 19, 2026
Application No. 18/588,120

SYSTEM AND METHOD TO MEDIATE SOCIAL MEDIA PLATFORMS AUTOMATICALLY FOR USER SAFETY

Status: Non-Final OA (§101, §103)
Filed: Feb 27, 2024
Examiner: PADUA, NICO LAUREN
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: UNIVERSITY OF SOUTH CAROLINA
OA Round: 3 (Non-Final)

Grant Probability: 10% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability with Interview: 27%

Examiner Intelligence

Career Allow Rate: 10% (3 granted / 31 resolved; -42.3% vs TC avg) — grants only 10% of cases
Interview Lift: +17.2% among resolved cases with interview — a strong lift
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 82 across all art units (51 currently pending)

Statute-Specific Performance

§101: 40.0% (+0.0% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Based on career data from 31 resolved cases; Tech Center averages are estimates.
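The interview-lift figure above is simply the difference between two cohort allowance rates. As a minimal sketch, with hypothetical cohort counts (the dashboard reports only the derived percentages, so the inputs below are invented placeholders), the arithmetic looks like:

```python
# Hedged sketch of how an "interview lift" style statistic is typically
# derived: allowance rate among resolved cases with an examiner interview
# minus the rate among resolved cases without one. The case counts here
# are illustrative placeholders, not the examiner's actual data.

def allowance_rate(allowed: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * allowed / resolved

def interview_lift(allowed_with: int, resolved_with: int,
                   allowed_without: int, resolved_without: int) -> float:
    """Percentage-point gap between the interviewed and non-interviewed cohorts."""
    return (allowance_rate(allowed_with, resolved_with)
            - allowance_rate(allowed_without, resolved_without))

# Illustrative cohort: 2 of 8 interviewed cases allowed (25.0%),
# 1 of 23 non-interviewed cases allowed (~4.3%).
lift = interview_lift(2, 8, 1, 23)
print(f"{lift:.1f}")  # 20.7
```

Note the lift is a percentage-point difference, not a relative increase, which is why a 10% baseline plus a +17.2% lift yields the 27% with-interview figure shown above.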

Office Action (§101, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a non-final rejection in response to remarks/amendments filed on 02/03/2026. Claims 1, 7, 9, 12, 18, 19, and 25 are amended. Claims 3-6, 8, 14-17, and 24 stand cancelled without prejudice. Claims 1, 2, 7, 9-13, 18-23, 25, and 26 remain pending and are examined herein.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/2026 has been entered.

Priority

The claims are accorded priority to U.S. Provisional Application No. 63/487,318, filed 02/28/2023.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 7, 9-13, 18-23, 25, and 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a Process, Machine, Manufacture, or Composition of Matter?
Claims 1, 2, 7, and 9-11: A method for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet, the method comprising:

Claims 12, 13, and 18: A system, comprising: a memory comprising instructions for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet; and a processor configured to execute the instructions to:

Claims 19-23 and 25-26: A method for operation of an automated moderator which can connect support-seeking users with support giver users of an online group of support seeker and support provider users in an online social media platform operating on the Internet, the method comprising:

Claims 1, 2, 7, 9-11, 19-23, and 25-26 recite methods, which fall under the potentially eligible subject matter category "process." Claims 12, 13, and 18 recite a system with a memory and a processor, which is an apparatus claim falling under the potentially eligible subject matter category "machine or manufacture." Therefore, all of the claims are directed to at least one potentially eligible subject matter category, and the claims are further analyzed under Step 2.

Step 2A Prong 1: Does the claim recite a judicial exception (a law of nature, a natural phenomenon (product of nature), or an abstract idea)?

The claims, under the broadest reasonable interpretation in light of the specification, are analyzed herein.
Representative claims 1, 12, and 19 are marked up, isolating the abstract idea from the additional elements, wherein the abstract idea is in bold and the additional elements have been italicized, as follows:

Claim 1: A method for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet, the method comprising:

utilizing one or more hardware processors for automatically classifying users based on interaction content analysis and an expert-labeled dataset by executing a classifier comprising a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify the class of supportive users and class of non-supportive users of an online group;

identifying the class of users comprising support providers of the online group selected by support seekers of the online group for interaction;

monitoring interactions between support seekers and the selected support providers, and automatically storing explicit user-provided votes thereafter given by support seekers on the class of supportive users within the selected class of support providers;

automatically receiving user-generated suggestions for formal moderator position of the online group from both the support seekers and from the class of support providers selected by the support seekers for interaction;

filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors;

programmatically combining the received user-generated suggestions with the stored votes and results of the filtering to identify at least one potential moderator for the online group; and

automatically contacting the potential moderator and technologically establishing the potential moderator in the formal moderator position for the online group.
Claim 12: A system, comprising: a memory comprising instructions for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet; and one or more processors configured to execute the instructions to:

automatically classify users based on interaction content analysis and an expert-labeled dataset by executing a classifier comprising a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify the class of supportive users and class of non-supportive users of an online group;

identify the class of users comprising support providers of the online group selected by support seekers of the online group for interaction;

monitor interactions between support seekers and the selected support providers, and automatically store explicit user-provided votes thereafter given by support seekers on the class of supportive users within the selected class of support providers;

automatically receive user-generated suggestions for formal moderator position of the online group from both the support seekers and from the class of support providers selected by the support seekers for interaction;

filter the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors;

programmatically combine the received user-generated suggestions with the stored explicit user-provided votes and results of the filtering to objectively identify at least one potential moderator for the online group; and

automatically contact the potential moderator and technologically establish the potential moderator in the formal moderator position for the online group.
Claim 19: A method for operation of an automated moderator which can connect support-seeking users with support giver users of an online group of support seeker and support provider users in an online social media platform operating on the Internet, the method comprising:

in the context of the discussion subject matter of the online group, automatically classifying users based on interaction content analysis and an expert-labeled dataset by executing a classifier comprising a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify the class of supportive users and class of non-supportive users of the online group;

identifying the class of users comprising support providers of the online group selected by support seekers of the online group for interaction;

monitoring interactions between support seeker users and support provider users to collect automatically data comprising explicit user-provided feedback from support seeker users about support provider users who have helped them, and data comprising agreements and disagreements with others, and volume of participation in discussion;

filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors;

applying rules to the collected explicit user-provided data and results of the filtering to objectively recommend a user as moderator if feedback and participation satisfy predetermined criteria for recommendation, and to not recommend a user as moderator if disagreement satisfies predetermined criteria for non-recommendation; and

automatically contacting the potential moderator and technologically establishing the potential moderator as the moderator in the formal moderator position for the online group.
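As a reading aid only, the overall flow common to the independent claims — classify users as supportive or non-supportive, gather seeker votes and moderator suggestions, filter out harmful users, and combine the signals to pick a moderator candidate — can be sketched as below. Every name and heuristic here is invented for illustration; in particular, a trivial keyword scorer stands in for the claimed Universal Sentence Encoder / Logistic Regression stack and the LIWC-based second classifier.

```python
# Non-authoritative sketch of the claimed moderator-detection flow.
# The cue sets below are illustrative stand-ins for trained classifiers.
import re
from collections import Counter

SUPPORTIVE_CUES = {"help", "hope", "listen"}   # stand-in for USE + LogReg
HARASSMENT_CUES = {"idiot", "shut", "hate"}    # stand-in for LIWC classifier

def words(posts):
    """All lowercase word tokens across a user's posts."""
    return {w for p in posts for w in re.findall(r"[a-z']+", p.lower())}

def is_supportive(posts):
    return bool(words(posts) & SUPPORTIVE_CUES)

def is_harmful(posts):
    return bool(words(posts) & HARASSMENT_CUES)

def pick_moderator(posts_by_user, seeker_votes, suggestions):
    """Combine votes and suggestions over supportive, non-harmful users."""
    eligible = {u for u, p in posts_by_user.items()
                if is_supportive(p) and not is_harmful(p)}
    score = Counter(v for v in seeker_votes if v in eligible)
    score.update(s for s in suggestions if s in eligible)
    return score.most_common(1)[0][0] if score else None

posts = {"ana": ["Happy to help and listen."],
         "bob": ["Shut up."],
         "cal": ["There is hope, I can help."]}
print(pick_moderator(posts, ["ana", "cal", "cal"], ["cal"]))  # cal
```

The point of the sketch is structural: the claimed method is a pipeline of two classifiers plus a vote/suggestion aggregation step, which is the combination the eligibility analysis below dissects.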
When evaluating the bolded limitations of the claims under the broadest reasonable interpretation in light of the specification, it is clear that representative claims 1, 12, and 19 recite an abstract idea under the category "certain methods of organizing human activity." More specifically, the present claims fall under the sub-grouping "managing personal behavior or relationships or interactions between people," including social activities, teaching, and following rules or instructions, as outlined in MPEP 2106.04(a)(2)(II)(C). The bolded limitations recite systems and methods for identifying, selecting, and connecting potential moderators to connect support seekers with support providers in a social environment. Therefore, the claims primarily recite social activities and the facilitation of relationships and interactions between people.

This notion is supported in the specification, at least in paragraphs [0007]-[0008], which state: "[0007] The presently disclosed subject matter generally deals with system and methodology subject matter for mediating social media platforms, and in particular for automatically or semi-automatically mediating social media platforms for user safety. [0008] More specifically, presently disclosed subject matter relates providing a conversation agent and/or a 'chatbot' that moderates a platform. For users, such technology would suggest groups, contents, and moderators. For the platform (i.e., the platform operators), the technology can detect moderators. For the moderator perspectives, the technology can help detect users who are either helpful and harmful, to be appropriately managed."

As described, moderating or mediating a social media platform, whether done automatically or manually, is a "certain method of organizing human activity." Therefore, claims 1, 12, and 19 recite an abstract idea.
Even when considering the amended limitation of "executing a classifier comprising a stacked sequence of an Encoder and Logistic Regression," it is still part of the abstract idea because it merely claims the idea of executing a classifier with a stacked sequence of an encoder and logistic regression to categorize users as supportive or non-supportive. When considering that the classifier is a broadly recited "black box" claiming any use of an encoder and logistic regression to perform the abstract idea, it is clear that the claims above are still part of the abstract idea. The examiner notes that the additional element of the encoder being specifically a "universal sentence encoder" is addressed in Step 2A Prong 2. The main point regarding this limitation in Prong 1, however, is that the step itself falls within the "certain methods of organizing human activity" grouping because it is recited with such generality that the steps are merely instructions to an individual to manage their personal behavior; "encoding" and "logistic regression" are data analysis steps recited at a high level of generality such that they can be performed in the human mind.

Furthermore, even when considering the amended limitation "filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors," this limitation still falls within "managing personal behavior, or interactions or relationships between people" because it performs filtering to exclude certain users. The fact that this is done using a "second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count categories" does not preclude the step from being mere rules or instructions to an individual to manage personal behavior.
The claim limitation is not necessarily limited to technical implementations, as the classification steps are recited broadly such that any use of a labeled dataset and LIWC categories would fall within the scope. Therefore, this limitation is still part of the abstract idea, and thus the claims, even as amended, still recite an abstract idea under "certain methods of organizing human activity."

The examiner also notes that the preamble of method claims 1 and 19 merely states the purpose or intended use of the invention, rather than any distinct definition of any of the claimed invention's limitations; therefore, the preamble is not considered a limitation. For example, elements such as "automated moderator" and "online social media platform operating on the Internet" are not given patentable weight (particularly in claims 1 and 19) because they are not mentioned in the body of the claims, as outlined in MPEP 2111.02(II). For purposes of compact prosecution, however, such elements are still listed and analyzed as additional elements, since they hold patentable weight in claim 12.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

Claims 1, 12, and 19 recite the following additional elements:

- online social media platform operating on the Internet (claims 1, 12, 19)
- online group (claims 1, 12, 19)
- a system, comprising: a memory comprising instructions (claim 12)
- a processor configured to execute the instructions to: (claim 12)
- automated moderator (claim 19)
- "automatically..." (claims 1, 12, 19)
- "technologically..." (claims 1, 12, 19)
- "programmatically..." (claims 1, 12)
- universal sentence encoder (claims 1, 12, 19)

The additional elements listed above, when considered individually and in combination with the claim as a whole, amount to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on generic computing components, as outlined in MPEP 2106.05(f). In this case, the abstract idea of "moderating/mediating a social media platform" is performed on generic computing components such as a memory and a processor. Furthermore, limiting the "moderator" to be automated is also an example of "apply it," or performing the abstract idea on a generic computing device, as it merely instructs performing the steps of moderation on a generic computing device. This also applies to each of the functions recited to be performed "automatically," "programmatically," or "technologically," such as "automatically contacting the potential moderator and technologically establishing the potential moderator as the moderator." This is no more than an "apply it" level element because requiring the limitations to be performed "automatically" or "technologically" merely indicates that the steps are performed by a generic computer, potentially as instructions to be executed by a computer. Furthermore, limiting the social media platform or "group" to be online or operating on the Internet is a general link to a particular technological environment or field of use, as outlined in MPEP 2106.05(h). Merely indicating that the social media platform on which the abstract idea is performed is broadly limited to being online does not meaningfully limit the claims beyond generally linking the abstract idea to a particular technological environment such as the Internet.
Similarly, limiting the encoder to be a "universal sentence encoder" is still a general link to the technological environment of Google's "universal sentence encoder," because it merely instructs the classifying step to be performed using a stacked sequence of a universal sentence encoder and logistic regression. Using the "universal sentence encoder" merely limits the data analysis step to a particular data source (such as the existing trained "universal sentence encoder"). Furthermore, this additional element also falls within MPEP 2106.05(f), "Mere Instructions To Apply an Exception," because it merely recites the idea of the solution or outcome without reciting details of how the solution is accomplished. By limiting the classifier to be a stacked sequence of a universal sentence encoder and logistic regression, the claim attempts to cover any solution to the identified problem with no restriction on how the result of "classifying non-supportive and supportive users" is accomplished. Merely reciting the "universal sentence encoder" and "logistic regression" in a stacked sequence, without a description of the mechanism for accomplishing the result, does not integrate the abstract idea into a practical application because this type of recitation is equivalent to the words "apply it."

Therefore, whether analyzed individually or as an ordered combination, none of the additional elements integrate the abstract idea into a practical application. Claims 1, 12, and 19 are directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claims 1, 12, and 19 recite the following additional elements:

- online social media platform operating on the Internet (claims 1, 12, 19)
- online group (claims 1, 12, 19)
- a system, comprising: a memory comprising instructions (claim 12)
- a processor configured to execute the instructions to: (claim 12)
- automated moderator (claim 19)
- "automatically..." (claims 1, 12, 19)
- "technologically..." (claims 1, 12, 19)
- "programmatically..." (claims 1, 12)
- universal sentence encoder (claims 1, 12, 19)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a system comprising a memory comprising instructions and a processor configured to execute the instructions to perform the abstract idea of "moderating/mediating a social media platform" amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept; the same is true of limiting the "moderator" to be automated. Furthermore, limiting the abstract idea to be performed on online social media platforms and groups, "on the Internet," or with a "universal sentence encoder" does not meaningfully limit the claims beyond generally linking the abstract idea to a particular technological environment or field of use. Accordingly, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1, 12, and 19 are not patent eligible because the claims are directed to an abstract idea without significantly more.
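Because the eligibility dispute turns on what a "stacked sequence of an encoder and logistic regression" actually is, a miniature structural illustration may help: a fixed-length sentence-embedding step feeding a linear probabilistic classifier. The bag-of-cues encoder and hand-set weights below are invented stand-ins for the pretrained Universal Sentence Encoder and a model fitted on an expert-labeled dataset; this is a sketch of the architecture only, not the applicant's implementation.

```python
# Toy "encoder -> logistic regression" stack. The vocabulary, weights,
# and bias are hand-set for illustration; a real system would use a
# pretrained sentence encoder and weights fitted on labeled data.
import math

CUES = ["help", "hope", "support", "hate", "stupid"]  # toy vocabulary

def encode(sentence: str) -> list:
    """Stand-in encoder: sentence -> fixed-length vector (cue counts)."""
    toks = sentence.lower().split()
    return [toks.count(c) for c in CUES]

WEIGHTS = [1.5, 1.5, 1.5, -2.0, -2.0]  # positive weights favor "supportive"
BIAS = -0.5

def p_supportive(sentence: str) -> float:
    """Logistic-regression head: sigmoid of a weighted sum of features."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, encode(sentence)))
    return 1.0 / (1.0 + math.exp(-z))

print(p_supportive("i can help and offer support") > 0.5)  # True
print(p_supportive("i hate this stupid group") > 0.5)      # False
```

The two stages — an embedding function and a linear classifier over its output — are exactly the "stacked sequence" the claims recite, which is why the analysis above treats the recitation as naming a known architecture rather than a mechanism.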
Dependent claims 2, 7, 9-11, 13, 18, 20-23, and 25-26 are also given the full two-part analysis, both individually and in combination with the claims from which they depend, herein:

Claims 2, 13, 20, and 22 merely further limit the abstract idea, particularly the step of "classifying users," by using weak supervision (claims 2, 13) or expert-labeled datasets (claims 20, 22). Since the claims are still more of the same abstract idea of "moderating/mediating a social media platform" by labeling users as supportive or non-supportive, they still recite "certain methods of organizing human activity." Performing this analysis using "weak supervision" is merely a general link to a particular technological environment or field of use, as outlined in MPEP 2106.05(h); weak supervision is a generic form of supervised learning in the field of machine learning. Therefore, whether individually or as an ordered combination, none of the additional elements provide an integration into a practical application or significantly more. The claims are still directed to an abstract idea without integration into a practical application or significantly more. Claims 2, 13, 20, and 22 are patent ineligible.

Claims 7, 9, 18, 21, 23, and 25 merely further limit the abstract idea, since every step recites informing the moderator of the identified supportive, non-supportive, or harmful users. Presenting data to an individual, particularly regarding the status of other users, is more of the same abstract idea of "moderating social media platforms." Since there are no new additional elements to consider, the claims are still directed to an abstract idea without integration into a practical application or significantly more. Claims 7, 9, 18, 21, 23, and 25 are patent ineligible.

Claim 10 recites more of the same abstract idea as the independent claim, since it is merely a collection of feedback and a rule-based data processing procedure to recommend a user as a moderator.
Whether analyzed individually or in combination with the claims depended upon, it is still a certain method of organizing human activity. The additional element of requiring the function to be performed "programmatically" is no more than mere instructions to perform the abstract idea on a generic computer; therefore, the claims are still directed to an abstract idea without integration into a practical application or significantly more. Claim 10 is patent ineligible.

Claims 11 and 26 merely further limit the abstract idea by limiting the "online group" to be focused on a discussion topic in the area of health, crisis management, economic activity, sports, or education. This is more of the same abstract idea because, even when considering the online group to be focused on any of these fields, the claims still recite the same abstract idea of "moderating a social media platform." Since there are no new additional elements to consider, the claims are still directed to an abstract idea without integration into a practical application or significantly more. Claims 11 and 26 are patent ineligible.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 7, 9-13, 18-23, 25, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Savage et al.
(US 9948689 B2), hereinafter Savage, in view of Dean Franklin Grove II (US 20150172227 A1), hereinafter Grove, further in view of Lyu et al. (US 20210201891 A1), hereinafter Lyu, further in view of Mossoba et al. (US 20210209651 A1), hereinafter Mossoba, further in view of Provost et al. (US 20200075040 A1), hereinafter Provost.

Regarding Claims 1, 12: Savage discloses methods and systems for modeling and classifying users' social roles in online social network platforms using semantic modeling, including the roles of moderator, expert, newbie, maven, etc. Savage teaches:

Claim 1 preamble - A method for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet, the method comprising: (Savage [Col. 2 Lines 48-51] An embodiment is a system, method and one or more computer readable media relating to managing user social personas, profiles and projected image within one or more online communities or social media systems. [Col. 3 Line 65 - Col. 4 Line 5] An embodiment may take the textual information from conversations of an online community and the profiles of participants, to infer the typical social roles users assume within the conversation or community at large (such as Moderator, Maven, Troll, Newbie, or other use/administrator defined roles); the topical conversations in which users prefer to participate; and the social roles that are lacking in the discussions.)

Claim 12 preamble and structure: A system, comprising: a memory comprising instructions for detecting users who can act as potential moderators of an online group of support seekers and support providers in an online social media platform operating on the Internet; and a processor configured to execute the instructions to: (Savage [Col.
28 Lines 19-46] Program code, or instructions, may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory... and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices.)

Body of Claim 1 (as a representative claim, also applicable to claim 12):

- Utilizing one or more hardware processors for... (Savage [Col. 28 Lines 50-56] One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device.)

- automatically classifying users based on interaction content analysis and an expert-labeled dataset by executing a classifier comprising a stacked sequence to identify the class of supportive users and class of non-supportive users of an online group; (Savage [Col. 5 Lines 3-5] The Social Inference Module 110 may be configured to detect and classify the social roles of users from the topics of conversation of the community. [Col. 6 Lines 13-19] The Social Role Inference Block 111 may be configured to discover the social roles found in the discussion via topic modeling techniques and sentiment analysis. A module or subcomponent 112 is configured to detect users' preferred topics of conversation. A module or subcomponent 114 is configured to detect users' roles, e.g., topical Mavens, Newbies, Moderators, Trolls, etc. [Col. 6 Lines 51-57] Users creating the most content for a topic, for instance, are labeled as "Experts" of that topic.
Users creating the least content on a topic, or asking the most questions on a topic, for instance, may be labeled as "Newbies" of a topic, especially when the question has been explored in detail by the community previously. [Col. 6 Lines 9-12] Social Inference Module: The social inference module 110 is comprised of two parts: Social Role Inference block 111 (aka social conversation block) and Social Identity Inference Block 113 (aka stated identity inference block). [Col. 25 Lines 12-18] the social inference module configured to automatically detect and classify social roles of the plurality of users of the online community, wherein the social inference module utilizes information collected by the crawling module; and a social recommendation module coupled to the persona manager, and configured to receive classification information from the social inference module,)

In view of the specification, the broadest reasonable interpretation (BRI) of supportive and non-supportive users is those who are helpful/informative with regard to a particular topic, in view of at least [0030] of the instant specification. Therefore, Savage's classifications of "moderator," "expert," and "maven" fall within the scope of supportive users; likewise, "newbie," "troll," and "non-expert users" fall within non-supportive users. Furthermore, Savage's social inference module 110 is the "classifier comprising a stacked sequence" because it is comprised of two blocks. Furthermore, "based on interaction content analysis" is given the BRI of any analysis performed on interactions, which is also satisfied by Savage. Furthermore, given the BRI of "expert-labeled dataset," Savage does teach a dataset that labels experts. The examiner notes that the alternative interpretation (a dataset labeled by experts) is shown to also be taught by Lyu below for purposes of compact prosecution.
- identifying the class of users comprising support providers of the online group and support seekers of the online group for interaction; (Savage [Col. 8 Lines 58-65] Referring now to FIGS. 4-6, there is illustrated a flow diagram for an example Social Recommendation Module 130 (FIG. 1B). This module 130 receives the classifications and inferences from the Social Inference Module 110, such as the list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interest and preferred topics of conversation of each user, and social roles present in each discussion 510. [Col. 6 Lines 51-57] Users creating the least content on a topic, or asking the most questions on a topic, for instance, may be labeled as "Newbies" of a topic, especially when the question has been explored in detail by the community previously.) Support providers are mapped to Mavens and moderators, since they are deemed the informative and helpful members; support seekers are mapped to "Newbies," since they ask the most questions.

- monitoring interactions between support seekers and the selected support providers, (Savage [Col. 5 Lines 40-43] Crawling Module: The crawling module 105 collects (crawls 107) the K latest conversations (posts and comments) of an online community, along with the profile page of users participating in the discussion.)

- automatically receiving user-generated suggestions for formal moderator position of the online group from both the support seekers and from the class of support providers selected by the support seekers for interaction; (Savage [Col. 10 Lines 17-34] FIG. 5 illustrates a flow diagram for an example method for the Online Conversation Recommendation Block 133: This block identifies social roles that are lacking in discussions that are alive (meaning being contributed to and discussed) 511 (lacking Mavens), 521 (lacking Moderators) and finds users that could fulfill those needed social roles.
Data 510 may be retrieved as identified in the Social Role Inference module...(61) To identify when a conversation is lacking certain social roles, the block analyzes whether there are greater than K Trolls or K Newbies in the discussion and no Moderator or Maven is participating in the discussion. Depending on the case, the system may then search the list of Moderators or the list of Mavens, and send alerts 515, 525 to the top K users of these lists. If after a period of time these K users do not respond, the system may alert the next top K users.) -applying a second classifier utilizing a Linguistic Inquiry and Word Count. (Savage [Col. 7 Line 67 – Col. 8 Line 27] Trolls are a group of users whose comments may shift the initial topic of conversation to another topic, for instance, as identified in block 321, using Linguistic Inquiry and Word Count (LIWC 2007). LIWC is a text analysis software program and calculates the degree to which people use different categories of words across a wide array of texts, including emails, speeches, poems, or transcribed daily speech. More information on LIWC may be found at www*liwc*net, where periods in URLs are replaced with asterisk in this document to avoid inadvertent hyperlinks. The first initial topic of conversation may be identified by using LDA over the text in the original post and calculating its topic vector. Comments generated for the post may be gathered, and then LDA may be used to obtain their own topic vector. A similarity metric, such as L2 norm, may then be used to measure how similar or dissimilar the comments are to the main post. Comments, whose similarity to the main post is below threshold T, are labeled as dissimilar. The M first dissimilar comments may be gathered and their authors labeled as possible Trolls, in block 323. Trolls may also be identified as those users posting aggressive comments. 
A determination may be made, in block 325, whether there have been K comments made by the user after the identification of the user as a possible Troll, where those comments are aggressive or off-topic. If the user continues to be aggressive or off-topic, the user is labeled as a Troll in block 327.) Savage’s use of the LIWC also satisfies this limitation, since aggressiveness is mapped to categories of negative behaviors. -automatically contacting the potential moderator and (Savage [Col. 10 Lines 30-34] Depending on the case, the system may then search the list of Moderators or the list of Mavens, and send alerts 515, 525 to the top K users of these lists. If after a period of time these K users do not respond, the system may alert the next top K users. [Col. 10 Lines 42-44] In such cases the system may send alerts to these users to notify them of the opening 535 (Moderator alert) and 536 (Maven alert).) -technologically establishing the potential moderator in the formal moderator position for the online group (Savage [Col. 10 Lines 56-67] FIG. 6 is a flow diagram illustrating an example method for the Social Goal Alert Block 135. This block receives the latest topic of discussion of users 612, and their latest social role taken 610. The block then analyzes whether the social role taken by the user antagonizes with his/her social goal 614, in which case the system alerts the user of the potential danger. For each user with a stated goal, an example method first determines whether a goal is to be a Moderator, in block 601. If so, then it is determined whether the user is in the list of Trolls for the conversation, in block 603. If so, then a warning message/alert may be sent to the user as a reminder that the goal is to be a Moderator, in block 605. [Col. 11 Lines 1-4] If the user is not listed as a Troll, then it is determined whether the user is in the current list of Moderators for the conversation, in block 607.
If so, then a congratulatory message/alert may be sent to the user, in block 609.) In Savage, the social role being “taken by the user,” and one of the roles being moderator, indicates that the potential moderators in the previous citation are established in the moderator position for the group. Since Savage refers to the “system” as performing the functions, the limitation of “technologically establishing the potential moderator in the formal moderator position for the online group” has been met: “formal moderator” is not specifically defined in the specification, and the broadest reasonable interpretation of “technologically establishing” covers any computing system that registers the user as the moderator in a database. However, Savage fails to teach: -that the automatic classification of users is also based on an expert-labeled dataset (Savage teaches a dataset that labels experts but not a dataset that is labeled by an expert. The latter interpretation is covered again for purposes of compact prosecution.) -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify...
-that the identified support providers of the online group are selected by support seekers of the online group for interaction; - and automatically storing explicit user-provided votes thereafter given by support seekers on the class of supportive users within the selected class of support providers; -that the automatically received user-generated suggestions for formal moderator position of the online group are from both the support seekers and from the class of support providers selected by the support seekers for interaction; -filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and (LIWC) categories on negative behaviors (Savage teaches a second classifier which uses LIWC but does not teach the labeled dataset on harassment and categories on negative behaviors, and Savage does not teach the filtering step to exclude the trolls.) - programmatically combining the received user-generated suggestions with the stored votes and results of the filtering to objectively identify at least one potential moderator for the online group; and Alternatively, Grove discloses a system of identifying helpful persons or moderators in a chat session. Grove teaches: -identifying the class of users comprising support providers of the online group selected by support seekers of the online group for interaction; (Grove [0016] In one example, a helpful and productive participant may be approved by others to increase their overall use score and receive points. [0023] When users join the chat session they are prompted with discussion topics and moderators can be selected from reputable members of the community in the future. The chat will continue until everyone has left the chat session.
[0031] The user accounts that are identified as frequenting support groups may be asked to become a moderator if they desire, which may be a user who can help guide the conversation and answer direct questions from other users or a user who dictates who can/cannot attend a session, sets a threshold for attendance (e.g., users with at least 10 points, etc.).) Support providers are mapped to Grove’s moderator; support seekers are mapped to other members of the community who ask questions. Therefore, since members of the community select moderators, the limitation has been taught. - and automatically storing explicit user-provided votes thereafter given by support seekers on the class of supportive users within the selected class of support providers; (Grove [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, then the aggregate votes may be used to determine the relevant worth of the user input and the corresponding action taken. This could then initiate a reward badge (i.e., +1 added to score), a warning (i.e., -1 taken from score), etc., and if the user continues to disrupt the session based on the other users experiences, they can be reported/removed depending on the importance of the users reporting. For example, a high rated user or session moderator may have complete authority to remove participants. Or, a user with many positive points, such as 10, 20, 100 or more, may be able to report a user and have them removed without delay from the chat session.) Therefore, since members of the community can vote on individuals based on how helpful they are, the limitation has been taught.
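The score-and-report mechanism quoted from Grove [0030] can be sketched as follows. The removal threshold and the class/method names are illustrative assumptions, since Grove leaves the specific point values and thresholds open:

```python
# Minimal sketch of Grove's described mechanism: helpful input earns a
# reward (+1 to score), counterproductive input earns a warning (-1), and
# a sufficiently negative aggregate triggers removal. The threshold below
# is assumed for illustration only.
from collections import defaultdict

REMOVAL_THRESHOLD = -3  # assumed; Grove does not fix a value

class ChatSession:
    def __init__(self):
        self.scores = defaultdict(int)
        self.removed = set()

    def vote(self, target: str, helpful: bool) -> None:
        """Record one explicit user-provided vote on a participant."""
        self.scores[target] += 1 if helpful else -1
        if self.scores[target] <= REMOVAL_THRESHOLD:
            self.removed.add(target)
```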
Since the limitation “explicit user-provided votes” is not specifically defined in the disclosure, it has the broadest reasonable interpretation of any vote provided by a user; therefore, Grove satisfies the limitation. -that the automatically received user-generated suggestions for formal moderator position of the online group are from both the support seekers and from the class of support providers selected by the support seekers for interaction; (Grove [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, then the aggregate votes may be used to determine the relevant worth of the user input and the corresponding action taken. This could then initiate a reward badge (i.e., +1 added to score), a warning (i.e., -1 taken from score), etc., and if the user continues to disrupt the session based on the other users experiences, they can be reported/removed depending on the importance of the users reporting. [0032] For example, if the number of users is identified with a largely dissatisfied audience, the algorithm for future chats related to the same topic may increase or decrease depending on the number of users present in that session. If user profiles receive enough positive votes from others, then they may be asked to moderate future chat sessions. An administrator may still verify and authorize moderators suggested by the agent application.) Since Grove teaches that the selection of suggested moderators is from both the voting process and selections from administrators, the limitation has been taught. Since the specification does not specifically limit “user-generated” suggestions, the limitation has the broadest reasonable interpretation of any nomination or vote for a user to be a moderator of the application.
Therefore, by “receiving positive votes” from others, Grove satisfies the limitation. -filtering the identified class of users to exclude harmful users (Grove [0020] The scoring for the session 212 may be performed by the session agent 112, the scoring may provide a way to rank users, include/exclude users in current/subsequent chat sessions and offer future recommendations to those users. [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, then the aggregate votes may be used to determine the relevant worth of the user input and the corresponding action taken. This could then initiate a reward badge (i.e., +1 added to score), a warning (i.e., -1 taken from score), etc., and if the user continues to disrupt the session based on the other users experiences, they can be reported/removed depending on the importance of the users reporting. For example, a high rated user or session moderator may have complete authority to remove participants. Or, a user with many positive points, such as 10, 20, 100 or more, may be able to report a user and have them removed without delay from the chat session.)
Removing the harmful participants in Grove falls within the scope of “filtering the identified class of users to exclude harmful users.” - programmatically combining the received user-generated suggestions with the stored votes and results of the filtering to identify at least one potential moderator for the online group; and (Grove [0031] According to example embodiments, the user inputted text may be stored, retrieved and processed after the chat session is complete, and the automated agent processing the information may rely on other users’ voting input rather than simply natural language processing capabilities or a combination of both type of processing. The result of the processing may be identifying which users should have points rewarded/deducted so a score can be updated to illustrate the accumulated points, which are used to invite the user to more sessions or promote the user to have other privileges, such as moderator to create their own chat session and invite others, administrator to have the right to remove or add people depending on the chat session. Etc. The user accounts that are identified as frequenting support groups may be asked to become a moderator if they desire, which may be a user who can help guide the conversation and answer direct questions from other users or a user who dictates who can/cannot attend a session... [0032] If user profiles receive enough positive votes from others, then they may be asked to moderate future chat sessions. [0036] The operations of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a computer program executed by a processor, or in a combination of the two.) Grove’s combination of both types of processing (natural language capabilities and user’s voting) satisfies combining the received suggestions with the stored votes to identify the moderator. 
Furthermore, at this point, the harmful participant has already been removed (filtered); therefore, the “results of the filtering” limitation is satisfied. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to modify Savage by adding Grove’s voting system to select moderators, which integrates voting from support seekers and outright suggestions from support providers, to Savage’s system, which merely identifies potential moderators through other techniques such as natural language processing. By integrating Grove’s voting system into Savage’s classification of moderators, one would expect the predictable outcome of selecting moderators by identifying the support providers selected by support seekers, storing votes..., receiving suggestions... combining the received suggestions with the stored votes to identify the at least one potential moderator. One would be motivated to perform the combination by the benefit of increasing the quality of chat sessions by incentivizing helpful users and eliminating unhelpful users. (Grove [0016] Example embodiments also include ways to increase the quality of the chat sessions by rewarding participants or incentivizing participants for participating, sharing certain information, etc. Also, certain participants may be eliminated who are offensive or who produce spam and otherwise not contributing to the goal of the session...) However, neither Savage nor Grove teaches: -that the automatic classification of users is also based on an expert-labeled dataset (Neither Savage nor Grove teaches a dataset labeled by experts) -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify...
-that the filtering the identified class of users to exclude harmful users is performed by applying a second classifier utilizing a labeled dataset on harassment and (LIWC) categories on negative behaviors (Since the combination of Savage and Grove yields filtering the identified class of users to exclude harmful users by applying a second classifier utilizing LIWC, the remaining deficiency is the classifier utilizing a labeled dataset on harassment and LIWC categories on negative behaviors.) Alternatively, Lyu teaches: -that the automatic classification of users is also based on an expert-labeled dataset (Lyu [0018] In some cases, one or more heuristics can be automatically generated using a small dataset of segments previously labeled by one or more users (such as, by one or more domain experts). The generated one or more heuristics along with one or more patterns can be used to assign training labels to a large unlabeled dataset of segments. A subset of segments representing occurrence of safety incident (such as, occurrence of verbal harassment) can be selected using the assigned training labels.) When interpreting the claim as a dataset labeled by an expert, Lyu satisfies this limitation. -applying a second classifier utilizing a labeled dataset on harassment (Lyu [0062] In block 218, the selected subset 214 can be labeled by a domain expert or the like. In block 218, one or more labels can be selected and assigned. For example, a domain expert can label a text segment with the occurrence of a particular type of verbal harassment, such as sexual harassment, aggressive behavior, extortion, or the like, or non-occurrence of verbal harassment. To accelerate the labeling in block 218, the subset 214 can be selected to include text segments that riders (and/or drivers) have identified as having one or more occurrences of verbal harassment.)
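The expert-seeded labeling pipeline described in Lyu [0018], where a small expert-labeled set yields heuristics that assign noisy training labels to a larger unlabeled set, can be sketched as follows. The keyword heuristic here is a hypothetical stand-in of my own; Lyu’s system generates heuristics automatically (e.g., per Snuba):

```python
# Sketch of expert-seeded weak labeling per Lyu [0018]: derive a heuristic
# from a small expert-labeled seed set, then apply it to assign noisy
# training labels to unlabeled segments. The keyword heuristic is an
# illustrative assumption, not Lyu's actual heuristic generator.

def derive_keyword_heuristic(expert_labeled: list[tuple[str, int]]) -> set[str]:
    """Collect words appearing only in segments experts labeled 1 (harassment)."""
    pos = {w for text, y in expert_labeled if y == 1 for w in text.lower().split()}
    neg = {w for text, y in expert_labeled if y == 0 for w in text.lower().split()}
    return pos - neg

def weak_label(segments: list[str], keywords: set[str]) -> list[int]:
    """Assign noisy training labels to unlabeled segments via the heuristic."""
    return [1 if keywords & set(s.lower().split()) else 0 for s in segments]
```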
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by substituting the labeled dataset of moderators, mavens, newbies, and trolls with an expert-labeled dataset as taught by Lyu. By using the expert labels as taught by Lyu to instead label supportive and non-supportive users, one would arrive at the predictable outcome of using an expert-labeled dataset of supportive and non-supportive users. Furthermore, using a labeled dataset on harassment as taught by Lyu in the process of filtering harmful users in Grove would yield the predictable outcome of filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment. One of ordinary skill would have been motivated to use expert-labeled data of verbal harassment instead, as it would provide the benefit of increased accuracy and of allowing the system to work with a human in the loop. (Lyu [0064] The labeler 240 can use one or more patterns 244 in addition to the one or more heuristics 242 in order to improve accuracy and/or speed of the system 200. A pattern can define matching of words, phrases, or the like and may be designed by a domain expert. The labeler 240 can apply the one or more heuristics 242 and the one or more patterns 244 to determine labels for the unlabeled data 224.) However, the combination of Savage, Grove, and Lyu still fails to teach: -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify... -that the filtering the identified class of users to exclude harmful users is performed by applying a second classifier that also uses (LIWC) categories on negative behaviors Alternatively, Mossoba discloses a content blocking algorithm for use on social media platforms to automatically moderate content.
Mossoba teaches: - the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify and block unwanted social media content items (Mossoba [0032] The NLP module can use intent classification techniques. Intent classification can be a natural language understanding (“NLU”) task that can understand at a high level what the user's intent is in written text, and thus, what message the user is intending to send with the writing...The NLP module can also determine the intent by training a supervised machine learning classification model on labeled data. Many machine-learning models can be used for this purpose, e.g., a neural network (or deep learning), ...logistic regression, etc. The NLP module can also include some preprocessing modules to convert text into character, word, or sentence embeddings that can be fed into the model... include stemming or lemmatization, sentence or word tokenization, stopword removal, etc. This can include a term frequency based approach, including TF-IDF, or Word2Vec, Universal Sentence Encoder, etc. Part of the NLU can also include dependency parsing to try to capture negation, or sentiment analysis. [0033] In one example embodiment, the de-targeting algorithm can block a social media post for a user of the social media platform. For example, based on the profile of the user and the friends that the user interacts with on the platform, the targeting algorithm can recommend a set of posts to the user, e.g., posts regarding boycotting an election (because the friends of the user with similar profiles have been reading and commenting on these posts). However, on the website for a retailer of electronic books, the user has been actively purchasing books touting the benefits of participation in elections in liberal democracies. The user can provide this data to the social media platform through the login mechanism discussed in FIG. 3.
The de-targeting algorithm can use this data to block posts encouraging boycotting elections. [0007] The de-targeting algorithm can also block unwanted content items in the future.) The broadest reasonable interpretation of a stacked sequence is the use of both the encoder and logistic regression in series. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding the use of a stacked sequence of a Universal Sentence Encoder and Logistic Regression on the expert-labeled dataset taught by the hypothetical combination of Savage, Grove, and Lyu. By using the Universal Sentence Encoder and Logistic Regression to preprocess the data, one would predictably arrive at the claims because the present claims also do not necessarily limit how a universal sentence encoder and logistic regression are used to classify supportive users from non-supportive users. Therefore, combining Savage’s classifier, which has the intention of classifying supportive users from non-supportive users, with Mossoba’s use of a universal sentence encoder and logistic regression to block unwanted content would yield the limitation. One of ordinary skill in the art would have been motivated to combine as it would provide the benefit of more accurately capturing the sentiment of the text. (Mossoba [0032]) However, even the combination of Savage, Grove, Lyu, and Mossoba fails to teach or suggest: -that the filtering the identified class of users to exclude harmful users is performed by applying a second classifier that also uses (LIWC) categories on negative behaviors However, Provost discloses a method of predicting the emotional state of a user using verbal behavior and identifying the semantic content using LIWC.
Provost teaches: -applying a second classifier that also uses (LIWC) categories on negative behaviors (Provost [0102] In some embodiments, one or more feature sets may be generated that capture linguistic style. First, syntax may be generated. For example, as discussed above, the LIWC dictionary may be used to compute normalized counts of: (1) Part of Speech (POS) categories (e.g. first person pronouns, adverbs) (2) verb tenses (e.g. past, present), (3) swear words, (4) non-fluencies (e.g. “hmm”, “um”), and (5) fillers (e.g. “you know”). The 18 POS measures included in LIWC may be combined with 5 additional POS categories derived using the Natural Language Toolkit (NLTK) POS tagger and with 13 POS ratio features (e.g. adjective:verbs). [0103] In some embodiments, semantic content may be identified. For example, LIWC may be used to measure the presence of psychologically meaningful categories, such as emotion (e.g. anger, anxiety), biological processes (e.g. body, health), and personal concerns (e.g. work, death). [0032] The proposed methods improve generalizability by extracting temporal descriptions 104 of emotional behavior in terms of valence and activation, rather than contextualized categorical labels (e.g., fear).) Provost’s LIWC categories of “swear words,” or psychologically meaningful categories such as “anger” and “anxiety,” fall within the scope of LIWC categories on negative behaviors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding Provost’s negative behavior LIWC categories to Savage’s classifier, which performs LIWC analysis. This is merely a simple substitution, as it would have been obvious to use LIWC negative behavior categories when trying to identify supportive and non-supportive users. Therefore, the combination would yield the predictable result of applying the second classifier also using the LIWC categories on negative behaviors as taught by Provost.
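The normalized category counting that Provost [0102]-[0103] describes amounts to counting dictionary hits per category and dividing by text length. A minimal sketch, with illustrative stand-in word lists rather than LIWC’s actual dictionaries:

```python
# Sketch of normalized LIWC-style category counts per Provost [0102]-[0103]:
# for each category dictionary, count matching tokens and normalize by the
# token count. The category word lists below are illustrative stand-ins,
# not LIWC's real (proprietary) dictionaries.
CATEGORIES = {
    "swear": {"damn", "hell"},
    "anger": {"hate", "furious", "angry"},
    "anxiety": {"worried", "nervous", "afraid"},
}

def liwc_style_counts(text: str) -> dict[str, float]:
    """Return per-category token proportions for a text segment."""
    tokens = text.lower().split()
    n = len(tokens) or 1  # avoid division by zero on empty input
    return {cat: sum(t in words for t in tokens) / n
            for cat, words in CATEGORIES.items()}
```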
One of ordinary skill in the art would have been motivated to perform the combination as it would yield the benefit of improving robustness, generalizability, and performance when determining the behaviors of an individual. (Provost [0032] The present techniques represent significant advancements in both the fields of engineering and psychiatry. For example, with respect to engineering, the proposed techniques address current limitations in the design of robust and generalizable data collection and behavior extraction algorithms. In psychiatry, the present techniques lead to major advances in the field by creating the first dataset that provides a longitudinal, objective linking between an individual's social interactions and his/her mental health. In both fields, the present techniques result in the first robust method for evaluating expressed emotion in the personal environment of the individual 108. The present techniques include solutions for addressing current challenges in speech emotion recognition systems and in assistive technology that include generalizability, robustness, and performance. The proposed methods improve generalizability by extracting temporal descriptions 104 of emotional behavior in terms of valence and activation, rather than contextualized categorical labels (e.g., fear). The present techniques improve robustness by controlling for nuisance modulations. Finally, the present techniques focus on extracting secondary features whose variation is more directly in line with the slowly varying behavior of interest (mood), creating a level of performance not possible in current approaches.) Regarding Claims 2, 13: The combination of Savage, Grove, Lyu, Mossoba and Provost teach or suggest The method according to claim 1,/ The system according to claim 12, Furthermore, Savage teaches -wherein automatically classifying users comprises use of algorithms to identify the supportive users and non-supportive users. (Savage Col. 6 Line 66 – Col.
7 Line 27] Other social roles that may be detected with the social role inference module 111 are “Trolls” and “Moderators.” Trolls are a group of users whose comments may shift the initial topic of conversation to another topic, for instance, as identified in block 321, using Linguistic Inquiry and Word Count (LIWC 2007). LIWC is a text analysis software program and calculates the degree to which people use different categories of words across a wide array of texts, including emails, speeches, poems, or transcribed daily speech...The first initial topic of conversation may be identified by using LDA over the text in the original post and calculating its topic vector. Comments generated for the post may be gathered, and then LDA may be used to obtain their own topic vector. A similarity metric, such as L2 norm, may then be used to measure how similar or dissimilar the comments are to the main post. Comments, whose similarity to the main post is below threshold T, are labeled as dissimilar. The M first dissimilar comments may be gathered and their authors labeled as possible Trolls, in block 323. Trolls may also be identified as those users posting aggressive comments. A determination may be made, in block 325, whether there have been K comments made by the user after the identification of the user as a possible Troll, where those comments are aggressive or off-topic. If the user continues to be aggressive or off-topic, the user is labeled as a Troll in block 327.) Savage’s classifications of “moderator,” “expert,” and “maven” fall within the scope of supportive. Likewise, “newbie,” “troll,” and “non-expert users” fall within non-supportive users. However, neither Savage nor Grove teaches or suggests: -use of weak-supervision to identify the supportive users and non-supportive users. Alternatively, Lyu discloses generating training data to detect verbal harassment using machine learning models.
Lyu teaches: -use of weak-supervision to identify the occurrence and non-occurrence of verbal harassment. (Lyu [0017] In some cases, weak supervision techniques can be utilized so that noisy training data can be used for training a machine learning model. [0063] The system 230 can determine the one or more heuristics 242 using labeled data 222. A heuristic can be configured to analyze content of a conversation in order to identify occurrence or non-occurrence of verbal harassment. In some cases, heuristics can be one or more of decision trees, logic regression, nearest neighbor, or the like. The system 230 can utilize a labeling generation system, such as for example one or more features of the labeling system described in Varma et al., “Snuba: Automating Weak Supervision to Label Training Data,” Proceedings of the VLDB Endowment, Vol. 12, No. 3, November 2018 (“Snuba”), which is hereby incorporated by reference herein in its entirety.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by substituting its algorithm to identify supportive users and non-supportive users with Lyu’s weak supervision techniques that identify verbal harassers and non-harassers. By using the weak supervision machine learning algorithm taught by Lyu to instead classify supportive and non-supportive users, one would arrive at the predictable outcome of using weak supervision to identify supportive and non-supportive users. One of ordinary skill would have been motivated to use weak supervision instead as it would provide the benefit of increased accuracy and of allowing the system to work with a human in the loop. (Lyu [0064] The labeler 240 can use one or more patterns 244 in addition to the one or more heuristics 242 in order to improve accuracy and/or speed of the system 200. A pattern can define matching of words, phrases, or the like and may be designed by a domain expert.
The labeler 240 can apply the one or more heuristics 242 and the one or more patterns 244 to determine labels for the unlabeled data 224.) Regarding Claim 7: The combination of Savage, Grove, Lyu, Mossoba and Provost teach or suggest the method according to claim 1, Furthermore, Savage teaches: further comprising (instructions to): -informing the moderator of identified harmful users. (Savage [Col. 11 Lines 43-54] Social Persona Visualization Module 150: FIG. 7 illustrates an example visualization for online personas. This module receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user. The module uses these initial classifications to present to users a visualization of the social personas the system considers they have created in the online community. The initial social personas considered are that a user is a: Maven, Moderator, Troll, or Newbie. The interface may present users how much of a Maven, Moderator, Troll, Newbie they are with respect to the rest of the community members.) Regarding Claims 9, 18: The combination of Savage, Grove, and Lyu teach the method according to claim 1/ The system according to claim 12, Furthermore, Savage teaches further comprising: -informing the moderator of all identified harmful users as distinguished from those only identified as non-supportive users. (Savage [Col. 11 Lines 43-54] Social Persona Visualization Module 150: FIG. 7 illustrates an example visualization for online personas. This module receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user.
The module uses these initial classifications to present to users a visualization of the social personas the system considers they have created in the online community. The initial social personas considered are that a user is a: Maven, Moderator, Troll, or Newbie. The interface may present users how much of a Maven, Moderator, Troll, Newbie they are with respect to the rest of the community members.) Since Savage identifies both Newbies, which are users who are non-supportive but not harmful, and Trolls, who are both non-supportive and harmful, the limitation has been satisfied by Savage. Regarding Claim 10: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 1. However, Savage fails to teach: wherein programmatically combining user-generated suggestions with explicit user-provided votes includes: -collecting data comprising feedback from users about users who have helped them, - and data comprising agreements and disagreements with others, and volume of participation in discussion, -and applying rules to collected data to recommend a user if feedback and participation satisfy predetermined criteria for recommendation, - and to not recommend a user if participation and disagreement satisfy predetermined criteria for non-recommendation. Alternatively, Grove teaches wherein programmatically combining user-generated suggestions with explicit user-provided votes includes: -collecting data comprising feedback from users about users who have helped them, (Grove [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, [0032] The agent attempts to cluster users into groups of 7-10.
This will be adjusted based on the voting received from the satisfaction surveys, which will specifically ask about user satisfaction on the number of users.) The voting mechanism based on helpfulness is an example of data comprising feedback about users who have helped them. - and data comprising agreements and disagreements with others, and volume of participation in discussion, (Grove [0016] Example embodiments provide an application that creates chat sessions, social networking affiliations, topics of interest and other online tools to engage users with certain interests to communicate and share information without the user having to create a new chat session, invite participants, setup a time, propose a topic, etc. Example embodiments also include ways to increase the quality of the chat sessions by rewarding participants or incentivizing participants for participating, sharing certain information, etc. Also, certain participants may be eliminated who are offensive or who produce spam and otherwise not contributing to the goal of the session. In one example, a helpful and productive participant may be approved by others to increase their overall use score and receive points. The total points may be applied to a rewards system, or the points may be displayed to other users to increase comradery among users. [0020] In this example, the user 102 is receiving points for sharing topic related information, and the user 104 is also receiving points, although not at the same rate as user 102 who has shared more at this point in the chat session. The user 106 has lost a point (-1) for adding spam to a website that is not contributing to the topic or the discussion. The users 108 and 100 are neutral and have yet to offer any information.)
Data considering users who have been deemed offensive by others is an example of "disagreements." Data considering a "helpful and productive participant" falls within the scope of agreements. The amount of points accumulated by the user is a measure of volume of participation. -and applying rules to collected data to recommend a user if feedback and participation satisfy predetermined criteria for recommendation, (Grove [0030] For example, a high rated user or session moderator may have complete authority to remove participants. Or, a user with many positive points, such as 10, 20, 100 or more, may be able to report a user and have them removed without delay from the chat session. [0031] The result of the processing may be identifying which users should have points rewarded/deducted so a score can be updated to illustrate the accumulated points, which are used to invite the user to more sessions or promote the user to have other privileges, such as moderator to create their own chat session and invite others, administrator to have the right to remove or add people depending on the chat session, etc. The user accounts that are identified as frequenting support groups may be asked to become a moderator if they desire, which may be a user who can help guide the conversation and answer direct questions from other users or a user who dictates who can/cannot attend a session, sets a threshold for attendance (e.g., users with at least 10 points, etc.). [0032] If user profiles receive enough positive votes from others, then they may be asked to moderate future chat sessions.) The processing that results in promoting users to moderator status is based on high ratings. As best understood by the examiner, in view of the indefiniteness of the claim, "many positive points" is an example of the feedback being high and participation being high, since a user would need high participation to accumulate many points.
Grove's "if user profiles receive enough positive votes from others" satisfies the amended limitation "predetermined criteria for recommendation." - and to not recommend a user if participation and disagreement satisfy predetermined criteria for non-recommendation. (Grove [0035] The user's profile is updated to receive positive points if the input is related to a chat session topic or negative points if the input is unrelated to a chat session topic. [0027] The users who reject the invitation or decline to answer may have their score decreased associated with chat recommendations 422, this may lead to fewer offers and a lower rank.) High disagreement with a high volume of participation results in a negative score in Grove, which leads to fewer offers to be recommended. The users not recommended for having many negative votes in Grove satisfy the limitation "satisfy predetermined criteria for non-recommendation." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding Grove's teachings of measuring feedback and volume of feedback as a scoring system in order to help determine the potential moderators. By implementing such a scoring system into Savage's determination of "Mavens" and "Newbies," one would arrive at the predictable outcome of collecting data comprising feedback from users about users who have helped them, and data comprising agreements and disagreements with others, and volume of participation in discussion, and applying rules to the collected data to recommend a user if feedback and participation are high and to not recommend a user if participation and disagreement are high. One of ordinary skill in the art would have been motivated by the benefit of using the aggregate feedback and democratized voting system, which would more accurately measure the helpfulness/harmfulness of a user.
(Grove [0030]) Regarding Claim 11: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 1. Furthermore, Savage teaches wherein the online group is focused on at least one discussion topic including at least one of the areas of health, and education. (Savage [Col. 15 Lines 13-17] Given an online community, the goal of the framework is threefold: (1) discover the different theme-based interests present in the community's discussions (posts and comments created); (2) categorize posts, comments, and interest tags into these discovered theme-based interests; [Col. 19 Lines 35-44] Each theme was labeled based on representative words from each of its topics. The labels given for the themes present in the community's posts were: Racism, Women's Rights, LGBT, Politics, Health care, Sexual Identity, Terrorism, and Church. The labels given for the themes covering community members' stated interest were: Outdoor Activities, TV Shows, Computer Science Themes, School Subjects, Fiction, Musical Artists, Gender Inequality, Asian Culture, Felines, and the Holocaust.) Regarding Claim 19: Savage teaches: - A method for operation of an automated moderator which can connect support-seeking users with support-giver users of an online group of support seeker and support provider users in an online social media platform operating on the Internet, the method comprising: (Savage [Col. 2 Lines 48-51] An embodiment is a system, method and one or more computer readable media relating to managing user social personas, profiles and projected image within one or more online communities or social media systems. [Col. 3 Line 65- Col.
4 Line 5] An embodiment may take the textual information from conversations of an online community and the profiles of participants, to infer the typical social roles users assume within the conversation or community at large (such as Moderator, Maven, Troll, Newbie, or other user/administrator defined roles); the topical conversations in which users prefer to participate; and the social roles that are lacking in the discussions.) -in the context of the discussion subject matter of the online group, automatically classifying users based on interaction content analysis and an expert-labelled dataset by executing a classifier comprising a stacked sequence to identify the class of supportive users and class of non-supportive users of the online group; (Savage [Col. 5 Lines 3-5] The Social Inference Module 110 may be configured to detect and classify the social roles of users from the topics of conversation of the community. [Col. 6 Lines 13-19] The Social Role Inference Block 111 may be configured to discover the social roles found in the discussion via topic modeling techniques and sentiment analysis. A module or subcomponent 112 is configured to detect users' preferred topics of conversation. A module or subcomponent 114 is configured to detect users' roles, e.g., topical Mavens, Newbies, Moderators, Trolls, etc. [Col. 6 Lines 51-57] Users creating the most content for a topic, for instance, are labeled as "Experts" of that topic. Users creating the least content on a topic, or asking the most questions on a topic, for instance, may be labeled as "Newbies" of a topic, especially when the question has been explored in detail by the community previously. [Col. 6 Lines 9-12] Social Inference Module: The social inference module 110 is comprised of two parts: Social Role Inference block 111 (aka social conversation block) and Social Identity Inference Block 113 (aka stated identity inference block). [Col.
25 Lines 12-18] the social inference module configured to automatically detect and classify social roles of the plurality of users of the online community, wherein the social inference module utilizes information collected by the crawling module; and a social recommendation module coupled to the persona manager, and configured to receive classification information from the social inference module,) In view of the specification, the broadest reasonable interpretation (BRI) of supportive and non-supportive users is those who are helpful/informative with regard to a particular topic, in view of at least [0030] of the instant specification. Therefore, Savage's classifications of "moderator," "expert," and "maven" fall within the scope of supportive users. Likewise, "newbie," "troll," and "non-expert users" fall within non-supportive users. Furthermore, Savage's social inference module 110 is the "classifier comprising a stacked sequence" because it is comprised of two blocks. Furthermore, "based on interaction content analysis" is given the BRI of any analysis performed on interactions, which is also satisfied by Savage. Furthermore, given the BRI of "expert-labeled dataset," Savage does teach a dataset that labels experts. The examiner notes that the alternative interpretation (a dataset labelled by experts) is shown below to also be taught by Lyu for purposes of compact prosecution. -identifying the class of users comprising support providers of the online group selected by support seekers of the online group for interaction; (Savage [Col. 8 Lines 58-65] Referring now to FIGS. 4-6, there is illustrated a flow diagram for an example Social Recommendation Module 130 (FIG. 1B). This module 130 receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user, and social roles present in each discussion 510. [Col.
6 Lines 51-57] Users creating the least content on a topic, or asking the most questions on a topic, for instance, may be labeled as "Newbies" of a topic, especially when the question has been explored in detail by the community previously.) Support providers are mapped to Mavens and Moderators, since they are deemed the informative and helpful members; support seekers are mapped to "Newbies," since they ask the most questions. -monitoring interactions between support seeker users and support provider users (Savage [Col. 5 Lines 40-43] Crawling Module: The crawling module 105 collects (crawls 107) the K latest conversations (posts and comments) of an online community, along with the profile page of users participating in the discussion.) -automatically contacting the potential moderator and technologically establishing the potential moderator as the moderator in the formal moderator position for the online group. (Savage [Col. 10 Lines 17-34] FIG. 5 illustrates a flow diagram for an example method for the Online Conversation Recommendation Block 133: This block identifies social roles that are lacking in discussions that are alive (meaning being contributed to and discussed) 511 (lacking Mavens), 521 (lacking Moderators) and finds users that could fulfill those needed social roles. Data 510 may be retrieved as identified in the Social Role Inference module... To identify when a conversation is lacking certain social roles, the block analyzes whether there are greater than K Trolls or K Newbies in the discussion and no Moderator or Maven is participating in the discussion. Depending on the case, the system may then search the list of Moderators or the list of Mavens, and send alerts 515, 525 to the top K users of these lists. If after a period of time these K users do not respond, the system may alert the next top K users.)
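For illustration, Savage's escalation scheme (alert the top K candidates, and if none respond after a period of time, alert the next K) can be sketched as follows. This is a minimal sketch, not Savage's actual implementation; the function name, the candidate list, and the `respond` callback are hypothetical stand-ins for the alert/response channel.

```python
from collections import deque

def contact_candidates(ranked_candidates, k, respond, max_rounds=3):
    """Alert the top-k ranked candidates; if none respond in a round,
    escalate to the next k (the tiered alerting idea in Savage).
    `respond` is a hypothetical callback standing in for the alert
    channel; it returns a truthy value if the user accepts."""
    queue = deque(ranked_candidates)
    for _ in range(max_rounds):
        batch = [queue.popleft() for _ in range(min(k, len(queue)))]
        if not batch:
            break
        responders = [u for u in batch if respond(u)]
        if responders:
            return responders[0]  # first responder is established as moderator
    return None  # no candidate accepted the formal moderator position

# toy usage: only "carol" accepts the moderator invitation
accepts = {"alice": False, "bob": False, "carol": True}
chosen = contact_candidates(["alice", "bob", "carol", "dan"],
                            k=2, respond=accepts.get)
# round 1 alerts alice and bob (no response); round 2 reaches carol
```
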
However, Savage fails to teach: -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify... - to automatically collect data comprising explicit user-provided feedback from support seeker users about support provider users who have helped them, -and data comprising agreements and disagreements with others, and volume of participation in discussion; -filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment and (LIWC) categories on negative behaviors (Savage teaches a second classifier which uses LIWC but does not teach the labeled dataset on harassment and categories on negative behaviors, and Savage does not teach the filtering step to exclude the trolls.) -applying rules to the collected explicit user-provided data and results of the filtering to objectively recommend a user as moderator if feedback and participation satisfy predetermined criteria for recommendation, -and to not recommend a user as moderator if disagreement satisfies predetermined criteria for non-recommendation; and -that the automatic classification is also based on an expert-labeled dataset (Savage teaches a dataset that labels experts but not a dataset that is labeled by an expert. The latter interpretation is covered again for purposes of compact prosecution.) Alternatively, Grove teaches wherein combining suggestions with votes includes: - automatically collect data comprising explicit user-provided feedback from support seeker users about support provider users who have helped them, (Grove [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, [0032] The agent attempts to cluster users into groups of 7-10.
This will be adjusted based on the voting received from the satisfaction surveys, which will specifically ask about user satisfaction on the number of users.) The voting mechanism based on helpfulness is an example of data comprising feedback about users who have helped them. - and data comprising agreements and disagreements with others, and volume of participation in discussion, (Grove [0016] Example embodiments provide an application that creates chat sessions, social networking affiliations, topics of interest and other online tools to engage users with certain interests to communicate and share information without the user having to create a new chat session, invite participants, setup a time, propose a topic, etc. Example embodiments also include ways to increase the quality of the chat sessions by rewarding participants or incentivizing participants for participating, sharing certain information, etc. Also, certain participants may be eliminated who are offensive or who produce spam and otherwise not contributing to the goal of the session. In one example, a helpful and productive participant may be approved by others to increase their overall use score and receive points. The total points may be applied to a rewards system, or the points may be displayed to other users to increase comradery among users. [0020] In this example, the user 102 is receiving points for sharing topic related information, and the user 104 is also receiving points, although not at the same rate as user 102 who has shared more at this point in the chat session. The user 106 has lost a point (-1) for adding spam to a website that is not contributing to the topic or the discussion. The users 108 and 100 are neutral and have yet to offer any information.)
Data considering users who have been deemed offensive by others is an example of "disagreements." Data considering a "helpful and productive participant" falls within the scope of agreements. The amount of points accumulated by the user is a measure of volume of participation. -filtering the identified class of users to exclude harmful users (Grove [0020] The scoring for the session 212 may be performed by the session agent 112, the scoring may provide a way to rank users, include/exclude users in current/subsequent chat sessions and offer future recommendations to those users. [0030] In one example, in order to determine if the user is offering helpful input to the chat session or is being counterproductive (e.g., submitting jokes, generating spam, annoying the participants, overtaking the chat session, etc.), the users may have access to a voting mechanism and/or report button, then the aggregate votes may be used to determine the relevant worth of the user input and the corresponding action taken. This could then initiate a reward badge (i.e., +1 added to score), a warning (i.e., -1 taken from score), etc., and if the user continues to disrupt the session based on the other users' experiences, they can be reported/removed depending on the importance of the users reporting. For example, a high rated user or session moderator may have complete authority to remove participants. Or, a user with many positive points, such as 10, 20, 100 or more, may be able to report a user and have them removed without delay from the chat session.)
Removing the harmful participants in Grove falls within the scope of "filtering the identified class of users to exclude harmful users." -applying rules to the collected explicit user-provided data and results of the filtering to objectively recommend a user as moderator if feedback and participation satisfy predetermined criteria for recommendation, (Grove [0030] For example, a high rated user or session moderator may have complete authority to remove participants. Or, a user with many positive points, such as 10, 20, 100 or more, may be able to report a user and have them removed without delay from the chat session. [0031] The result of the processing may be identifying which users should have points rewarded/deducted so a score can be updated to illustrate the accumulated points, which are used to invite the user to more sessions or promote the user to have other privileges, such as moderator to create their own chat session and invite others, administrator to have the right to remove or add people depending on the chat session, etc. The user accounts that are identified as frequenting support groups may be asked to become a moderator if they desire, which may be a user who can help guide the conversation and answer direct questions from other users or a user who dictates who can/cannot attend a session, sets a threshold for attendance (e.g., users with at least 10 points, etc.). [0032] If user profiles receive enough positive votes from others, then they may be asked to moderate future chat sessions.) The processing that results in promoting users to moderator status is based on high ratings. As best understood by the examiner, in view of the indefiniteness of the claim, "many positive points" is an example of the feedback being high and participation being high, since a user would need high participation to accumulate many points.
Since the filtering of harmful users has already been performed, the limitation has been satisfied, because only the individuals remaining in the group can be recommended as moderators. -and to not recommend a user as moderator if disagreement satisfies predetermined criteria for non-recommendation; and (Grove [0035] The user's profile is updated to receive positive points if the input is related to a chat session topic or negative points if the input is unrelated to a chat session topic. [0027] The users who reject the invitation or decline to answer may have their score decreased associated with chat recommendations 422, this may lead to fewer offers and a lower rank.) High disagreement with a high volume of participation results in a negative score in Grove, which leads to fewer offers to be recommended. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding Grove's teachings of measuring feedback and volume of feedback as a scoring system in order to help determine the potential moderators. By implementing such a scoring system into Savage's determination of "Mavens" and "Newbies," one would arrive at the predictable outcome of collecting data comprising feedback from users about users who have helped them, and data comprising agreements and disagreements with others, and volume of participation in discussion, and applying rules to the collected data to recommend a user if feedback and participation are high and to not recommend a user if participation and disagreement are high. One of ordinary skill in the art would have been motivated by the benefit of using the aggregate feedback and democratized voting system, which would more accurately measure the helpfulness/harmfulness of a user.
(Grove [0030]) However, neither Savage nor Grove teaches: -that the automatic classification of users is also based on an expert-labeled dataset (neither Savage nor Grove teaches a dataset labeled by experts, though Savage teaches a dataset that labels experts) -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify... -that the filtering of the identified class of users to exclude harmful users is performed by applying a second classifier utilizing a labeled dataset on harassment and (LIWC) categories on negative behaviors (since the combination of Savage and Grove yields filtering the identified class of users to exclude harmful users by applying a second classifier utilizing LIWC, the remaining deficiency is that the classifier utilizes a labeled dataset on harassment and LIWC categories on negative behaviors.) Alternatively, Lyu teaches: -that the automatic classification of users is also based on an expert-labeled dataset (Lyu [0018] In some cases, one or more heuristics can be automatically generated using a small dataset of segments previously labeled by one or more users (such as, by one or more domain experts). The generated one or more heuristics along with one or more patterns can be used to assign training labels to a large unlabeled dataset of segments. A subset of segments representing occurrence of safety incident (such as, occurrence of verbal harassment) can be selected using the assigned training labels.) When interpreting the "expert-labeled dataset" as a dataset labelled by experts, Lyu satisfies this limitation. -applying a second classifier utilizing a labeled dataset on harassment (Lyu [0062] In block 218, the selected subset 214 can be labeled by a domain expert or the like. In block 218, one or more labels can be selected and assigned.
For example, a domain expert can label a text segment with the occurrence of a particular type of verbal harassment, such as sexual harassment, aggressive behavior, extortion, or the like, or non-occurrence of verbal harassment. To accelerate the labeling in block 218, the subset 214 can be selected to include text segments that riders (and/or drivers) have identified as having one or more occurrences of verbal harassment.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by substituting an expert-labeled dataset, as taught by Lyu, for Savage's labeled dataset of moderators, mavens, newbies, and trolls. By using the expert labels as taught by Lyu to instead label supportive and non-supportive users, one would arrive at the predictable outcome of using an expert-labeled dataset of supportive and non-supportive users. Furthermore, using a labeled dataset on harassment as taught by Lyu in the process of filtering harmful users in Grove would yield the predictable outcome of filtering the identified class of users to exclude harmful users by applying a second classifier utilizing a labeled dataset on harassment. One of ordinary skill would have been motivated to use expert-labelled data of verbal harassment instead because it would provide the benefit of increased accuracy and allow the system to work with a human in the loop. (Lyu [0064] The labeler 240 can use one or more patterns 244 in addition to the one or more heuristics 242 in order to improve accuracy and/or speed of the system 200. A pattern can define matching of words, phrases, or the like and may be designed by a domain expert. The labeler 240 can apply the one or more heuristics 242 and the one or more patterns 244 to determine labels for the unlabeled data 224.)
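For illustration, Lyu's weak-supervision scheme (expert-designed heuristics and patterns assigning noisy labels to unlabeled text, per Lyu [0063]-[0064]) can be sketched roughly as follows. The keyword lists and function names here are hypothetical illustrations, not taken from Lyu's disclosure or from Snuba.

```python
import re

# Hypothetical expert-designed labeling functions ("heuristics" and
# "patterns" in Lyu's terms). Each returns 1 (harassment), 0 (none),
# or None (abstain).
def heuristic_insult(text):
    return 1 if re.search(r"\b(idiot|stupid)\b", text, re.I) else None

def heuristic_supportive(text):
    return 0 if re.search(r"\b(thank|helped|welcome)\b", text, re.I) else None

LABELERS = [heuristic_insult, heuristic_supportive]

def weak_label(text):
    """Majority vote over non-abstaining labelers; None if all abstain.
    The resulting noisy labels could then train a downstream
    machine learning model, with a domain expert in the loop."""
    votes = [lf(text) for lf in LABELERS if lf(text) is not None]
    if not votes:
        return None
    return int(sum(votes) >= len(votes) / 2)  # ties break toward 1

labels = [weak_label(t) for t in [
    "you are an idiot",        # insult heuristic fires -> 1
    "thank you, that helped",  # supportive heuristic fires -> 0
    "see you tomorrow",        # no labeler fires -> None
]]
```
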
However, the combination of Savage, Grove, and Lyu still fails to teach: -the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to identify... -that the filtering of the identified class of users to exclude harmful users is performed by applying a second classifier that also uses (LIWC) categories on negative behaviors. Alternatively, Mossoba discloses a content-blocking algorithm for use on social media platforms to automatically moderate content. Mossoba teaches: - the classifier comprises a stacked sequence of a Universal Sentence Encoder and Logistic Regression to block unwanted social media content items (Mossoba [0032] The NLP module can use intent classification techniques. Intent classification can be a natural language understanding ("NLU") task that can understand at a high level what the user's intent is in written text, and thus, what message the user is intending to send with the writing...The NLP module can also determine the intent by training a supervised machine learning classification model on labeled data. Many machine-learning models can be used for this purpose, e.g., a neural network (or deep learning), ...logistic regression, etc. The NLP module can also include some preprocessing modules to convert text into character, word, or sentence embeddings that can be fed into the model... include stemming or lemmatization, sentence or word tokenization, stopword removal, etc. This can include a term frequency based approach, including TF-IDF, or Word2Vec, Universal Sentence Encoder, etc. Part of the NLU can also include dependency parsing to try to capture negation, or sentiment analysis. [0033] In one example embodiment, the de-targeting algorithm can block a social media post for a user of the social media platform.
For example, based on the profile of the user and the friends that the user interacts with on the platform, the targeting algorithm can recommend a set of posts to the user, e.g., posts regarding boycotting an election (because the friends of the user with similar profiles have been reading and commenting on these posts). However, on the website for a retailer of electronic books, the user has been actively purchasing books touting the benefits of participation in elections in liberal democracies. The user can provide this data to the social media platform through the login mechanism discussed in FIG. 3. The de-targeting algorithm can use this data to block posts encouraging boycotting elections. [0007] The de-targeting algorithm can also block unwanted content items in the future.) The broadest reasonable interpretation of a stacked sequence is indicating the use of both the encoder and logistic regression in series. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding the use of a stacked sequence of a Universal Sentence Encoder and Logistic Regression on the expert-labeled dataset taught by the hypothetical combination of Savage, Grove, and Lyu. By using the Universal Sentence Encoder and Logistic Regression to preprocess the data, one would predictably arrive at the claims because the present claims do not necessarily limit how a Universal Sentence Encoder and Logistic Regression are used to classify supportive users from non-supportive users. Therefore, combining Savage's classifier, which has the intention of classifying supportive users from non-supportive users, with Mossoba's use of a Universal Sentence Encoder and Logistic Regression to block unwanted content would yield the limitation. One of ordinary skill in the art would have been motivated to combine as it would provide the benefit of more accurately capturing the sentiment of the text.
(Mossoba [0032]) However, even the combination of Savage, Grove, Lyu, and Mossoba fails to teach or suggest: -that the filtering of the identified class of users to exclude harmful users is performed by applying a second classifier that also uses (LIWC) categories on negative behaviors. However, Provost discloses a method of predicting the emotional state of a user using verbal behavior and identifying the semantic content using LIWC. Provost teaches: -applying a second classifier that also uses (LIWC) categories on negative behaviors (Provost [0102] In some embodiments, one or more feature sets may be generated that capture linguistic style. First, syntax may be generated. For example, as discussed above, the LIWC dictionary may be used to compute normalized counts of: (1) Part of Speech (POS) categories (e.g. first person pronouns, adverbs) (2) verb tenses (e.g. past, present), (3) swear words, (4) non-fluencies (e.g "hmm", "um"), and (5) fillers (e.g. "you know"). The 18 POS measures included in LIWC may be combined with 5 additional POS categories derived using the Natural Language Toolkit (NLTK) POS tagger and with 13 POS ratio features (e.g. adjective:verbs). [0103] In some embodiments, semantic content may be identified. For example, LIWC may be used to measure the presence of psychologically meaningful categories, such as emotion (e.g. anger, anxiety), biological processes (e.g. body, health), and personal concerns (e.g. work, death). [0032] The proposed methods improve generalizability by extracting temporal descriptions 104 of emotional behavior in terms of valence and activation, rather than contextualized categorical labels (e.g., fear).) Provost's LIWC categories of "swear words" or psychologically meaningful categories such as "anger, anxiety" fall within the scope of LIWC categories on negative behaviors.
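For illustration, the LIWC-style feature extraction described in Provost [0102]-[0103] amounts to normalized counts of tokens falling into dictionary categories. A minimal sketch follows, assuming a toy two-category dictionary; the word lists and names here are hypothetical stand-ins, not the actual LIWC dictionary (which is proprietary) or Provost's implementation.

```python
# Hypothetical stand-in for the LIWC dictionary: each negative-behavior
# category maps to a small word set (real LIWC has many more categories
# and far larger word lists).
NEG_CATEGORIES = {
    "swear": {"damn", "hell"},
    "anger": {"hate", "furious", "angry"},
}

def liwc_features(text):
    """Normalized count per category: the fraction of tokens in the
    text that belong to each category's word list (cf. Provost [0102]).
    These features could feed the second (harmful-user) classifier
    alongside a harassment-labeled dataset."""
    tokens = text.lower().split()
    n = len(tokens) or 1  # avoid division by zero on empty input
    return {cat: sum(t in words for t in tokens) / n
            for cat, words in NEG_CATEGORIES.items()}

feats = liwc_features("I hate this and I am furious")
# 2 of the 7 tokens ("hate", "furious") fall in the "anger" category
```
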
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by adding Provost's negative-behavior LIWC categories to Savage's classifier, which performs LIWC. This is merely a simple substitution, as it would have been obvious to use a LIWC negative-behaviors category when trying to identify supportive and non-supportive users. Therefore, the combination would yield the predictable result of applying the second classifier also using the LIWC categories on negative behaviors as taught by Provost. One of ordinary skill in the art would have been motivated to perform the combination as it would yield the benefit of improving robustness, generalizability, and performance when determining the behaviors of an individual. (Provost [0032] The present techniques represent significant advancements in both the fields of engineering and psychiatry. For example, with respect to engineering, the proposed techniques address current limitations in the design of robust and generalizable data collection and behavior extraction algorithms. In psychiatry, the present techniques lead to major advances in the field by creating the first dataset that provides a longitudinal, objective linking between an individual's social interactions and his/her mental health. In both fields, the present techniques result in the first robust method for evaluating expressed emotion in the personal environment of the individual 108. The present techniques include solutions for addressing current challenges in speech emotion recognition systems and in assistive technology that include generalizability, robustness, and performance. The proposed methods improve generalizability by extracting temporal descriptions 104 of emotional behavior in terms of valence and activation, rather than contextualized categorical labels (e.g., fear).
The present techniques improve robustness by controlling for nuisance modulations. Finally, the present techniques focus on extracting secondary features whose variation is more directly in line with the slowly varying behavior of interest (mood), creating a level of performance not possible in current approaches.) Regarding Claim 20: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 19. Furthermore, Savage teaches: -wherein automatically classifying users comprises use of a labeled dataset of the supportive and non-supportive users on online communities in the context of the discussion subject matter of the online group. (Savage [Col. 7 Lines 15-27] A similarity metric, such as L2 norm, may then be used to measure how similar or dissimilar the comments are to the main post. Comments, whose similarity to the main post is below threshold T, are labeled as dissimilar. The M first dissimilar comments may be gathered and their authors labeled as possible Trolls, in block 323. Trolls may also be identified as those users posting aggressive comments. A determination may be made, in block 325, whether there have been K comments made by the user after the identification of the user as a possible Troll, where those comments are aggressive or off-topic. If the user continues to be aggressive or off-topic, the user is labeled as a Troll in block 327. [Col. 7 Lines 58 - Col. 8 Line 4] From the list of conversations that are alive, the system then identifies a list of current Trolls, Moderators, Mavens, list of latest topic discussed by each user, list of the social roles present in each of these conversations, as well as a list of relevant dead conversations, in block 315... For alive conversations having no Mavens, or knowledgeable, frequent on-topic posters, the conversation may be labeled as lacking a Maven, in block 317.
) The broadest reasonable interpretation of the claim, in view of the specification, is any use of a labeled dataset with the supportive and non-supportive users on the online communities. Since Savage labels the possible trolls and possible moderators based on their similarity to the main post (context of the discussion subject matter...), the resulting list is the labeled dataset. Therefore, the limitation has been satisfied. However, neither Savage nor Grove teaches: -that the dataset is expert-labelled. Alternatively, Lyu teaches: -expert labelled dataset of verbal harassment (Lyu [0018] In some cases, one or more heuristics can be automatically generated using a small dataset of segments previously labeled by one or more users (such as, by one or more domain experts). The generated one or more heuristics along with one or more patterns can be used to assign training labels to a large unlabeled dataset of segments. A subset of segments representing occurrence of safety incident (such as, occurrence of verbal harassment) can be selected using the assigned training labels. [0062] In block 218, the selected subset 214 can be labeled by a domain expert or the like. In block 218, one or more labels can be selected and assigned. For example, a domain expert can label a text segment with the occurrence of a particular type of verbal harassment, such as sexual harassment, aggressive behavior, extortion, or the like, or non-occurrence of verbal harassment. To accelerate the labeling in block 218, the subset 214 can be selected to include text segments that riders (and/or drivers) have identified as having one or more occurrences of verbal harassment.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to further modify Savage by substituting the labeled dataset of moderators, mavens, newbies, and trolls with an expert-labeled dataset as taught by Lyu.
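The expert-in-the-loop labeling workflow Lyu describes in [0018] and [0062] (a domain expert labels a small seed set, and heuristics generated from that seed assign training labels to a larger unlabeled set) can be sketched as follows. The seed texts, label names, and keyword-overlap heuristic are hypothetical illustrations, not Lyu's actual implementation.

```python
# Sketch of the labeling workflow in Lyu [0018]/[0062]: a domain expert
# labels a small seed set, and heuristics generated from that seed assign
# training labels to a larger unlabeled set. Seed data, label names, and
# the keyword-overlap heuristic are all hypothetical.

expert_labeled = [                      # small expert-labeled seed set
    ("you are worthless", "harassment"),
    ("great point thanks for sharing", "benign"),
]

def build_heuristics(seed):
    """Generate one keyword-set heuristic per label from the seed."""
    keywords = {}
    for text, label in seed:
        keywords.setdefault(label, set()).update(text.split())
    return keywords

def propagate_labels(unlabeled, keywords):
    """Assign each segment the label whose keyword set overlaps it most."""
    labeled = []
    for text in unlabeled:
        tokens = set(text.split())
        best = max(keywords, key=lambda label: len(tokens & keywords[label]))
        labeled.append((text, best))
    return labeled

heuristics = build_heuristics(expert_labeled)
training_set = propagate_labels(
    ["you are so worthless", "thanks for the great point"], heuristics)
```

Substituting expert-seeded labels of this kind for Savage's automatically derived troll/moderator labels is the modification the rejection proposes.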
By using the expert labels as taught by Lyu to instead label supportive and non-supportive users, one would arrive at the predictable outcome of using an expert-labeled dataset of supportive and non-supportive users. One of ordinary skill would have been motivated to use expert-labeled data instead, as it would provide the benefit of increased accuracy and allow the system to work with a human in the loop. (Lyu [0064] The labeler 240 can use one or more patterns 244 in addition to the one or more heuristics 242 in order to improve accuracy and/or speed of the system 200. A pattern can define matching of words, phrases, or the like and may be designed by a domain expert. The labeler 240 can apply the one or more heuristics 242 and the one or more patterns 244 to determine labels for the unlabeled data 224.) Regarding Claim 21: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 19. Furthermore, Savage teaches: -further comprising informing the moderator of identified supportive users and non-supportive users. (Savage [Col. 11 Lines 43-54] Social Persona Visualization Module 150: FIG. 7 illustrates an example visualization for online personas. This module receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user. The module uses these initial classifications to present to users a visualization of the social personas the system considers they have created in the online community. The initial social personas considered are that a user is a: Maven, Moderator, Troll, or Newbie. The interface may present users how much of a Maven, Moderator, Troll, or Newbie they are with respect to the rest of the community members.)
Since the list of social roles (including who is identified as a Maven or Newbie) is provided to each of the members in a group, including the moderators, the limitation above has been satisfied. Regarding Claim 22: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 21. Furthermore, Savage teaches: -further comprising further classifying previously identified non-supportive users using a labeled dataset on harassment and hate speech to identify harmful users. (Savage [Col. 7 Line 67 - Col. 8 Line 27] Trolls are a group of users whose comments may shift the initial topic of conversation to another topic, for instance, as identified in block 321, using Linguistic Inquiry and Word Count (LIWC 2007). LIWC is a text analysis software program and calculates the degree to which people use different categories of words across a wide array of texts, including emails, speeches, poems, or transcribed daily speech. More information on LIWC may be found at www*liwc*net, where periods in URLs are replaced with asterisks in this document to avoid inadvertent hyperlinks. The first initial topic of conversation may be identified by using LDA over the text in the original post and calculating its topic vector. Comments generated for the post may be gathered, and then LDA may be used to obtain their own topic vector. A similarity metric, such as L2 norm, may then be used to measure how similar or dissimilar the comments are to the main post. Comments, whose similarity to the main post is below threshold T, are labeled as dissimilar. The M first dissimilar comments may be gathered and their authors labeled as possible Trolls, in block 323. Trolls may also be identified as those users posting aggressive comments. A determination may be made, in block 325, whether there have been K comments made by the user after the identification of the user as a possible Troll, where those comments are aggressive or off-topic.
If the user continues to be aggressive or off-topic, the user is labeled as a Troll in block 327.) The LIWC identifying aggressive/off-topic comments is mapped to the labeled dataset on harassment and hate speech. Savage's "trolls" are mapped to harmful users. Regarding Claim 23: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 22. Furthermore, Savage teaches: -further comprising informing the moderator of identified harmful users. (Savage [Col. 11 Lines 43-54] Social Persona Visualization Module 150: FIG. 7 illustrates an example visualization for online personas. This module receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user. The module uses these initial classifications to present to users a visualization of the social personas the system considers they have created in the online community. The initial social personas considered are that a user is a: Maven, Moderator, Troll, or Newbie. The interface may present users how much of a Maven, Moderator, Troll, or Newbie they are with respect to the rest of the community members.) Regarding Claim 25: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 24. Furthermore, Savage teaches: -further comprising informing the moderator of all identified harmful users as distinguished from those only identified as non-supportive users. (Savage [Col. 11 Lines 43-54] Social Persona Visualization Module 150: FIG. 7 illustrates an example visualization for online personas. This module receives the classifications and inferences from the Social Inference Module 110, such as list of Mavens, Newbies on X topic, list of Moderators, Trolls, topics of interests and preferred topics of conversation of each user.
The module uses these initial classifications to present to users a visualization of the social personas the system considers they have created in the online community. The initial social personas considered are that a user is a: Maven, Moderator, Troll, or Newbie. The interface may present users how much of a Maven, Moderator, Troll, or Newbie they are with respect to the rest of the community members.) Since Savage identifies both Newbies, who are users that are non-supportive but not harmful, and Trolls, who are both non-supportive and harmful, the limitation has been satisfied by Savage. Regarding Claim 26: The combination of Savage, Grove, Lyu, Mossoba, and Provost teaches or suggests the method according to claim 19. Furthermore, Savage teaches: -wherein the online group is focused on at least one discussion topic including at least one of the areas of health and education. ([Col. 15 Lines 13-17] Given an online community, the goal of the framework is threefold: (1) discover the different theme-based interests present in the community's discussions (posts and comments created); (2) categorize posts, comments, and interest tags into these discovered theme-based interests; [Col. 19 Lines 35-44] Each theme was labeled based on representative words from each of its topics. The labels given for the themes present in the community's posts were: Racism, Women's Rights, LGBT, Politics, Health care, Sexual Identity, Terrorism, and Church. The labels given for the themes covering community members' stated interests were: Outdoor Activities, TV Shows, Computer Science Themes, School Subjects, Fiction, Musical Artists, Gender Inequality, Asian Culture, Felines, and the Holocaust.) Response to Arguments: Applicant's arguments filed 02/03/2026 have been fully considered but they are not persuasive. In response to arguments over rejections under 35 U.S.C.
101, the applicant traverses Step 2A Prong 1 and alleges that the amended claims recite such "specific" subject matter (a stacked sequence of a Universal Sentence Encoder and Logistic Regression, and filtering the identified class of users by applying a second classifier utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors) that they allegedly amount to more than merely an abstract idea. However, the examiner respectfully disagrees. In the updated rejection in view of the amended limitations, the examiner specifically addresses the encoder, logistic regression, and Linguistic Inquiry and Word Count categories and explains why such steps still fall under "certain methods of organizing human activity" when considering the applications in which they are being used and the generality with which they are being claimed. The scope of the limitation covers any classifier comprising the stacked sequence of a universal sentence encoder and that uses logistic regression in any manner; therefore, the claims lack specificity in how the outcome of classifying such users is arrived at. This also applies to the LIWC categories, which merely indicate the sources of data collection without meaningfully limiting how the claims arrive at the categorization feature. Furthermore, Step 2A Prong 1 determines whether the claims at least recite an abstract idea, not whether the claim as a whole, including the additional elements, is directed to an abstract idea. It is clear that the claims, which by definition "manage personal behavior, interactions, or relationships between people," at least recite "certain methods of organizing human activity." Therefore, the applicant's arguments over Step 2A Prong 1 are not persuasive.
Regarding arguments over integration into a practical application, the applicant argues that the amended claim recasts the abstract idea into a specific, tangible, and inventive solution by the "highly specific classifying of users followed by specific filtering, ultimately affirmatively leading to detecting users..." However, the examiner respectfully disagrees. Specificity alone does not determine integration into a practical application. When considering the particularity (specificity) or generality in Step 2A Prong 2, the relevant guidelines are found in MPEP 2106.05(f), because even a specific/narrow abstract idea is still a recitation of an abstract idea. The specificity of how the abstract idea is applied by the additional elements is the main consideration in MPEP 2106.05(f)(1-3): "(1) Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished, (2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process, (3) The particularity or generality of the application of the judicial exception." Therefore, while the claims do recite additional elements such as a "universal sentence encoder," since the claims only recite the idea of a solution or outcome along with merely invoking the universal sentence encoder, the claims amount to mere instructions to "apply it." Therefore, the applicant's argument on page 10 that the "claimed subject matter does not improperly attempt to wholly preempt any abstract idea" is not persuasive, because MPEP 2106.04 states, "While preemption is the concern underlying the judicial exceptions, it is not a standalone test for determining eligibility. Rapid Litig. Mgmt. v. CellzDirect, Inc., 827 F.3d 1042, 1052, 119 USPQ2d 1370, 1376 (Fed. Cir. 2016). Instead, questions of preemption are inherent in and resolved by the two-part framework from Alice Corp.
and Mayo (the Alice/Mayo test referred to by the Office as Steps 2A and 2B). Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1150, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016); Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379, 115 USPQ2d 1152, 1158 (Fed. Cir. 2015). It is necessary to evaluate eligibility using the Alice/Mayo test, because while a preemptive claim may be ineligible, the absence of complete preemption does not demonstrate that a claim is eligible." Therefore, the argument is not persuasive because the arguments under 101 are not persuasive, and the questions of preemption are inherent in and resolved by the two-part framework. Furthermore, the applicant's argument that the "combination of the specialized data generation and the novel arrangement... provides a specific inventive technical solution that practically applies any asserted abstract concept to improved online platform operation" is not persuasive because no improvement to computer functionality, technology, or a technical field has been shown. MPEP 2106.05(a) requires that an improvement be apparent to one of ordinary skill in the art and be reflected in the scope of the claims. While there may be an improvement to the abstract idea, MPEP 2106.05(a) states, "However, it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." Therefore, the applicant's argument is not persuasive. Furthermore, the applicant relates the present claims to Examples 48 and 47 of the PEG, alleging that they are "instructive as to how presently amended claim 1 is patent eligible under 35 U.S.C. 101." However, the examiner respectfully disagrees.
Example 48 (Speech separation) is distinguishable from the present claims because it recites specific steps (partitioning vectors, binary masks) rather than just using "a neural network." In contrast, the present claims merely invoke the use of a universal sentence encoder and logistic regression to perform the classifying, without providing the mechanisms to arrive at the intended outcome. Therefore, the applicant's argument that the "stacked sequence of a Universal Sentence Encoder and Logistic Regression" moves the claim from a generic result to a specific means of achieving that result is not persuasive, because it is not equivalent to the level of specificity in Example 48's specific steps, which reflect an improvement to the technology of speech separation (which is not an abstract idea). (Example 48: Step (f) recites "synthesizing speech waveforms from the masked clusters, wherein each speech waveform corresponds to a different source sn," and step (g) recites "combining the speech waveforms to generate a mixed speech signal x' by stitching together the speech waveforms corresponding to the different sources sn, excluding the speech waveform from a target source ss such that the mixed speech signal x' includes speech signals from the different sources sn, where n ∈ {1, . . . N}, and excludes the speech signal from the target source ss.") Similarly, the arguments over Example 47 are also not persuasive because the "Harmful User" filter is not equivalent to the technological feature of "dropping malicious packets." Example 47 reflects an improvement to the technical field of network intrusion detection because the detection of a source address associated with the one or more malicious network packets in real time cannot practically be performed in the human mind. In contrast, the present claims are directed to a "certain method of organizing human activity," and not a mental process alone.
The claimed features of the present claim do not transform the claim from "managing relationships" to a network security tool in the same way that Example 47 is directed to an improvement to network intrusion detection. Social media networks are not within the same technical realm as network intrusion detection. Therefore, none of the applicant's arguments are persuasive, including those on page 11 of the applicant's remarks regarding Step 2B and claim 19. The examiner does not find "classifying users based on interaction content analysis and an expert-labeled dataset" to provide "significantly more" than the abstract idea based on the two-part approach performed above. Furthermore, the arguments regarding claims 1 and 12 also apply to claim 19, and since none of the applicant's arguments are persuasive, independent claims 1, 12, and 19 remain rejected under 35 U.S.C. 101. Since no arguments have been provided based on the eligibility of the dependent claims, claims 2, 7, 9-11, 13, 18-23, 25, and 26 also remain patent ineligible. In view of the applicant's arguments over rejections under 35 U.S.C. 103, the applicant's arguments have been fully considered but are either moot or unpersuasive in view of the updated rejections, which now rely on a combination of Savage, Grove, Lyu, Mossoba, and Provost, where the applicant's arguments were based on the rejection in view of Savage, Grove, and Lyu alone. Furthermore, the applicant argues that the stacked sequence of a "universal sentence encoder and logistic regression" is not taught by Savage, Mossoba, and Lyu; however, the applicant's arguments regarding the failure of the references to teach "stacked sequences" are not persuasive. This is because the scope of stacked sequences, given the broadest reasonable interpretation in view of the specification, is any classifier that can use a universal sentence encoder and logistic regression in any manner whatsoever.
The claims do not require a specific set of steps to implement such techniques, and furthermore the claims do not recite "deep learning models." Therefore, the applicant's argument that Savage never teaches "deep learning models" is not persuasive because the scope of the claims does not include deep learning models. The applicant's argument that Mossoba merely recites a laundry list of potential algorithms for ad targeting and does not suggest a specific combination of stacked sequence features is not persuasive because the laundry list of features is recited at the same level of specificity as the amended claims: merely listing additional elements as a black box without reciting the specific steps or mechanisms to arrive at the outcome. Furthermore, in view of the applicant's argument that Savage uses the LIWC to identify "possible trolls," instead of the amended claims which require "utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors," the applicant's argument is persuasive. However, this argument is moot, as it would have been obvious to arrive at "utilizing a labeled dataset on harassment and Linguistic Inquiry and Word Count (LIWC) categories on negative behaviors" when considering the combination including Provost. Therefore, the applicant's argument that Lyu focuses on audio data and speech recognition (ASR) heuristics is not persuasive, because audio-based recognition is not a field of endeavor separate from text-based classification, as the audio data is ultimately converted to text. Turning to the applicant's argument on page 16 of the remarks that the "lengthy four-way combination of references" is a fairly convoluted pathway based on improper hindsight reconstruction guided by the applicant's claimed subject matter, the examiner responds as follows.
In response to applicant's argument that the examiner has combined an excessive number of references, reliance on a large number of references in a rejection does not, without more, weigh against the obviousness of the claimed invention. See In re Gorman, 933 F.2d 982, 18 USPQ2d 1885 (Fed. Cir. 1991). Especially when considering that the claims merely name a well-known type of encoder or dataset to carry out the claimed invention, the number of references required to reject the claims does not suggest that the claims are novel over the prior art. The applicant then goes on to list the sequence of specific modifications required to arrive at the limitations. However, the examiner notes that such an analysis is not in line with how obviousness is determined. In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Furthermore, the scope of the actual amended claims merely invokes the "universal sentence encoder," "logistic regression," and "Linguistic Inquiry and Word Count" as part of the classifiers performing the function. It does not require a specific set of steps that specifically limit how such data is used to arrive at the claim.
Therefore, the "stacked sequence" that the applicant argues for does not carry the patentable weight that the remarks attribute to "stacked sequence." In fact, any teaching of a classifier comprising a universal sentence encoder and logistic regression would satisfy the limitation because the claims do not specifically restrict "stacked sequence" to a particular arrangement of steps using "universal sentence encoding" and "logistic regression." Therefore, the applicant's arguments, which attempt to outline how one must modify Savage, Lyu, and Mossoba to arrive at the claims, are not persuasive because the test for obviousness is not whether the features of the secondary references may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). Furthermore, the applicant's argument that the "prior art does not suggest such specific technical architecture" is not persuasive because it is noted that the features upon which applicant relies (i.e., a specific technical architecture) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In view of the section titled "Other Claims," the applicant's arguments have been fully considered but are not persuasive because the arguments also apply to the other independent claims 12 and 19, which are substantially similar. Furthermore, the applicant has not provided any additional arguments regarding the dependent claims; therefore, the dependent claims also remain rejected under 35 U.S.C. 103.
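To make the breadth of this interpretation concrete, a minimal sketch of a classifier that chains a sentence encoder into logistic regression follows. The toy bag-of-words encoder is a hypothetical stand-in for the Universal Sentence Encoder (which actually produces a 512-dimensional embedding), and the training sentences, vocabulary, and labels are illustrative assumptions rather than anything in the record.

```python
# Sketch of a "stacked sequence": a sentence encoder feeding logistic
# regression in series. The bag-of-words encoder stands in for the
# Universal Sentence Encoder; all data here is hypothetical.
import math

VOCAB = ["you", "can", "do", "this", "we", "believe", "in",
         "give", "up", "already", "nobody", "wants", "here"]

def encode(sentence):
    """Stage 1 (stand-in encoder): map a sentence to a fixed-size vector."""
    words = sentence.lower().split()
    return [float(words.count(v)) for v in VOCAB]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, epochs=200, lr=0.5):
    """Stage 2: logistic regression trained by per-sample gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def classify(w, b, sentence):
    """Run the two stages in series: encode, then apply the regression."""
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, encode(sentence))) + b)
    return "supportive" if p >= 0.5 else "non-supportive"

texts = ["you can do this", "we believe in you",
         "give up already", "nobody wants you here"]
labels = [1, 1, 0, 0]  # 1 = supportive, 0 = non-supportive
w, b = train_logreg([encode(t) for t in texts], labels)
```

Under the broadest reasonable interpretation described above, nothing beyond this chaining of the two stages would be needed to meet the claimed "stacked sequence."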
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICO LAUREN PADUA whose telephone number is (703)756-1978. The examiner can normally be reached Mon to Fri: 8:30 to 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NICO L PADUA/Junior Patent Examiner, Art Unit 3626 /JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Feb 27, 2024
Application Filed
Jul 29, 2025
Non-Final Rejection — §101, §103
Oct 15, 2025
Response Filed
Nov 03, 2025
Final Rejection — §101, §103
Feb 03, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 16, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586035
INTERACTIVE USER INTERFACE FOR SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12523701
METHOD FOR MANAGING BATTERY RECORD AND APPARATUS FOR PERFORMING THE METHOD
2y 5m to grant Granted Jan 13, 2026
Patent 11881521
SEMICONDUCTOR DEVICE
2y 5m to grant Granted Jan 23, 2024
Based on the 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
10%
Grant Probability
27%
With Interview (+17.2%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
