Prosecution Insights
Last updated: April 19, 2026
Application No. 18/738,183

HARASSMENT INFORMATION PROVIDING APPARATUS, HARASSMENT INFORMATION PROVIDING METHOD, AND PROGRAM STORAGE MEDIUM

Non-Final OA (§101, §103)
Filed: Jun 10, 2024
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% of resolved cases (4 granted / 7 resolved; -4.9% vs TC avg)
Interview Lift: +100.0% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 40 across all art units (33 currently pending)
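The headline allowance figure is just the grant fraction of resolved cases; a quick check against the panel's own numbers (nothing here beyond the figures shown above):

```python
# Career allow rate = granted / resolved, as shown in the panel above.
granted, resolved = 4, 7
rate = granted / resolved
print(f"Career allow rate: {rate:.1%}")  # → Career allow rate: 57.1%
```

The panel rounds 57.1% down to the displayed 57%.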

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 7 resolved cases.
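Each delta is stated relative to the Tech Center average, so the implied baseline is the examiner's rate minus the delta. A quick check of the arithmetic using the figures above (the underlying methodology is the tool's, not reconstructed here):

```python
# Implied TC average = examiner's per-statute rate minus the shown delta.
rates = {"§101": (35.1, -4.9), "§103": (43.8, +3.8),
         "§102": (8.7, -31.3), "§112": (10.6, -29.4)}
for statute, (examiner_pct, delta_pct) in rates.items():
    print(f"{statute}: TC avg ≈ {examiner_pct - delta_pct:.1f}%")  # 40.0% each
```

All four implied baselines come out to exactly 40.0%, which suggests the tool applies a single Tech Center estimate across statutes rather than per-statute baselines.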

Office Action

§101, §103
DETAILED ACTION

This communication is in response to the Application filed on 06/10/2024. Claims 1-7 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application claims priority to foreign application with application number JP 2023-104272 dated 06/26/2023. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 06/10/2024 has been considered and placed in the application file.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. The following terms in the claims have been given the following interpretations in light of the specification:

Attribute information: pg. 18, lines 21-23, “it is also conceivable to correct the negative level and the positive level by using attribute information such as the age, nationality, sex, and personality of the speaker in the conversation.” Thus, attribute information is any information regarding the background or profile of a user. This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims. Should Applicant wish different definitions, Applicant should point to the portions of the specification that clearly show a different definition.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. All of the claims are method claims (6), apparatus/machine claims (1-5, 7), or manufacture claims under Step 1, but under Step 2A all of these claims recite abstract ideas, and specifically mental processes. These mental processes are more particularly recited in claims 1, 6, and 7 as: converting an utterance of a speaker into text using speech data of a conversation… calculating a negative level and a positive level of the conversation… determining a level of harassment of the conversation… output harassment information…

Under Step 2A Prong One, claims 1, 6, and 7 are directed to an abstract idea, and specifically a mental process. As detailed above, the steps of converting, calculating, determining, etc. may be practically performed in the human mind with the use of a physical aid such as a pen and paper. For example, a human could transcribe a conversation between two friends into text, count the number of positive and negative words said by each friend in the conversation, determine if harassment is present in the conversation by comparing the number of positive words to the number of negative words, and then write down using pen and paper whether harassment occurred or not.

Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because claims 1-7 do not recite additional elements that integrate the exception into a practical application. In particular, claims 1, 6, and 7 recite the additional elements of a processor (pg. 6, lines 20-25) and memory storing instructions (pg. 6, lines 12-15).
These additional elements are recited at a high level of generality and merely equate to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Under Step 2B, the claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer is noted as a generic computer {processor (pg. 6, lines 20-25); memory storing instructions (pg. 6, lines 12-15)}. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activities. The claims are not patent eligible.

With respect to claim 2, the claim relates to extracting negative and positive words in order to calculate negative and positive levels, then determining a level of harassment using the calculated negative and positive levels. This relates to a human counting the number of times each positive and negative word appears in a transcript and then determining if harassment occurred based on comparing the number of negative words to the number of positive words. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 3, the claim relates to using information from speech data to calculate negative and positive levels. This relates to a human listening to the tone of a phrase in order to determine its negativity or positivity. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 4, the claim relates to using relationship and attribute information relevant to a user to determine negative and positive levels. This relates to a human understanding the level of friendship between the two friends in order to determine whether a conversation is negative or positive. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 5, the claim relates to determining a level of harassment by using a user-set degree of tolerance. This relates to a human understanding each of the friends’ sensitivities to harassment before interrupting the conversation to prevent harassment. The additional element of “acquiring…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). No further limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

For all of the above reasons, taken alone or in combination, claims 1-7 recite a non-statutory mental process.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, and 7 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 20200234243 A1 (Miron et al.) in view of US Patent Publication 20210321000 A1 (Gopalan).

Claim 1

Regarding claim 1, Miron et al. disclose a harassment information providing apparatus comprising: a memory configured to store instructions (Miron et al. ¶ [0110], "Memory unit 74 may comprise volatile computer-readable media (e.g. dynamic random-access memory—DRAM) storing data/signals/instruction encodings accessed or generated by processor(s) 72 in the course of carrying out operations."); and at least one processor (Miron et al. ¶ [0110], "Memory unit 74 may comprise volatile computer-readable media (e.g. dynamic random-access memory—DRAM) storing data/signals/instruction encodings accessed or generated by processor(s) 72 in the course of carrying out operations.") configured to execute the instructions to: calculate a negative level (Miron et al. ¶ [0048], "An exemplary aggressiveness assessor computes a score for each message of a conversation, the score indicative of a level of aggression indicated by the language of the respective message." An aggressiveness score is considered analogous to a negative level) and a positive level of the conversation (Miron et al. ¶ [0050], "An exemplary friendliness assessor aims to detect phrases that display affection and a friendly attitude towards one or the other of the interlocutors. Since friends often tease each other using offensive language, a friendliness indicator/score may help distinguish true abuse from behaviors that could appear aggressive, but are in fact playful and benign." A friendliness score is considered analogous to a positive level) by analyzing text data indicating the utterance converted into text (Miron et al. ¶ [0047], "Each text processor 56 may output a set of scores, labels, etc. Such scores/labels may be determined for each individual message of the conversation, or may be determined for the respective conversation as a whole."); determine a level of harassment of the conversation by using the calculated negative level and positive level (Miron et al. ¶ [0061], "decision unit 53 (FIG. 9) inputs individual assessment indicators 26-28 received from text and/or image processors 56-58, respectively, and outputs an aggregated risk assessment indicator 22 determined according to the individual risk assessment indicators." An aggregated risk assessment indicator is considered analogous to a level of harassment); and output harassment information including information indicating the determined level of harassment (Miron et al. ¶ [0061], "An exemplary aggregated risk assessment indicator 22 is determined for the conversation as a whole and comprises a set of scores wherein each score indicates a likelihood of a distinct type of threat or scenario (e.g., fighting, bullying, depression, sexual exposure, grooming, loss of confidential data, etc.).").

Miron et al. do not explicitly disclose converting an utterance into text using speech data.
However, Gopalan discloses converting an utterance of a speaker into text using speech data of a conversation (Gopalan ¶ [0016]-[0017], "According to some embodiments, the audio 120 is processed by the SD module 122 to diarize the audio 120 according to each speaker. ... The diarized audio segments from the pre-processed audio 124 are then transcribed for example by the ASR engine 104, which yields text transcripts 126 corresponding to the pre-processed audio 124."); and calculating a negative level and a positive level of the conversation by analyzing text data indicating the utterance converted into text (Gopalan ¶ [0019], "The SAM 128 organizes the received text transcripts 126 into 12 different text sets.... The SAM 128 performs sentiment analysis on each of the 12 text sets, and for each set, determines a sentiment score, a count or percentage of positive words and a count or percentage of negative words." A count of negative words and a count of positive words are considered analogous to a negative level and a positive level, respectively).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Miron et al.’s harassment detection method to incorporate Gopalan’s speech-to-text conversion, because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Miron et al.’s harassment detection method as modified by Gopalan’s speech-to-text conversion can yield the predictable result of increasing flexibility, since being able to detect harassment from speech data in addition to text data would allow the invention to be utilized with more flexibility.
Thus, a person of ordinary skill would have appreciated including in Miron et al.’s harassment detection method the ability to do Gopalan’s speech-to-text conversion since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Miron et al. in view of Gopalan disclose all the elements of the claimed invention as stated above. Miron et al. further disclose wherein the at least one processor is configured to execute the instructions to: calculate the negative level (Miron et al. ¶ [0048], "An exemplary aggressiveness assessor computes a score for each message of a conversation, the score indicative of a level of aggression indicated by the language of the respective message." An aggressiveness score is considered analogous to a negative level) [by using a number of negative words included in the conversation]; calculate the positive level (Miron et al. ¶ [0050], "An exemplary friendliness assessor aims to detect phrases that display affection and a friendly attitude towards one or the other of the interlocutors. Since friends often tease each other using offensive language, a friendliness indicator/score may help distinguish true abuse from behaviors that could appear aggressive, but are in fact playful and benign." A friendliness score is considered analogous to a positive level) [by using a number of positive words included in the conversation]; and determine the level of harassment by using the negative level and the positive level (Miron et al. ¶ [0061], "decision unit 53 (FIG. 9) inputs individual assessment indicators 26-28 received from text and/or image processors 56-58, respectively, and outputs an aggregated risk assessment indicator 22 determined according to the individual risk assessment indicators." An aggregated risk assessment indicator is considered analogous to a level of harassment).

Gopalan further discloses extracting a negative word and a positive word from the text data of the conversation, the negative word being a negative utterance, the positive word being a positive utterance (Gopalan ¶ [0018], "The SAM 128 identifies sentiments from text and classifies them into three sentiments, namely, positive, negative or neutral. For example, in a sentence “I liked the movie,” the SAM 128 ignores the article “the” and determines that there are two neutral sentiment words (I, movie), one positive word (liked) and no negative words."); calculating the negative level by using a number of negative words included in the conversation (Gopalan ¶ [0018], "The SAM 128 uses a lexical feature (n-gram feature), a syntactic feature (Parts of Speech (PoS)), a combination of lexical and syntactic features, and lexicon-based features to determine the sentiment scores and count of positive and negative words." A count of negative words is considered analogous to a negative level); and calculating the positive level by using a number of positive words included in the conversation (Gopalan ¶ [0018], "The SAM 128 uses a lexical feature (n-gram feature), a syntactic feature (Parts of Speech (PoS)), a combination of lexical and syntactic features, and lexicon-based features to determine the sentiment scores and count of positive and negative words." A count of positive words is considered analogous to a positive level).

Claim 3

Regarding claim 3, the rejection of claim 2 is incorporated. Miron et al. in view of Gopalan disclose all the elements of the claimed invention as stated above.
Gopalan further discloses calculating the negative level and the positive level by using information acquired from the speech data (Gopalan ¶ [0018], "The sentiment analysis module (SAM) 128 is configured to determine sentiment and/or sentiment scores from the text, and count and/or percentage of all words corresponding to each sentiment, based on text transcript(s) of a conversation. The SAM 128 identifies sentiments from text and classifies them into three sentiments, namely, positive, negative or neutral." Determining sentiment scores from text transcripts is considered analogous to using information acquired from speech data) in addition to information regarding the number of negative words and the number of positive words included in the conversation (Gopalan ¶ [0018], see claim 2).

Claim 6

Regarding claim 6, the limitations of claim 6 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claim 7

Regarding claim 7, Miron et al. disclose a non-transitory computer readable medium storing a computer program for causing a computer to execute processing (Miron et al. ¶ [0110], "Memory unit 74 may comprise volatile computer-readable media (e.g. dynamic random-access memory—DRAM) storing data/signals/instruction encodings accessed or generated by processor(s) 72 in the course of carrying out operations."). The remaining limitations of claim 7 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as obvious over Miron et al. in view of Gopalan as applied to claims 1 and 2 above, and further in view of US Patent 12452212 A1 (Kats et al.).

Claim 4

Regarding claim 4, the rejection of claim 2 is incorporated. Miron et al. in view of Gopalan disclose all the elements of the claimed invention as stated above. Gopalan further discloses calculating the negative level and the positive level by using ... information regarding the number of negative words and the number of positive words included in the conversation (Gopalan ¶ [0018], "The sentiment analysis module (SAM) 128 is configured to determine sentiment and/or sentiment scores from the text, and count and/or percentage of all words corresponding to each sentiment, based on text transcript(s) of a conversation. The SAM 128 identifies sentiments from text and classifies them into three sentiments, namely, positive, negative or neutral.").

Miron et al. in view of Gopalan do not explicitly disclose customizing harassment detection using user information. However, Kats et al. disclose calculating the [negative level and the positive] level by using at least one of information indicating a relationship between speakers of the conversation (Kats et al. ¶ (32)-(34), "calculation module 108 may calculate the probability that the message is harassment targeted at the user by analyzing, on the social media service, a social graph of the sender of the message that includes at least one additional user (i.e., a social graph that includes more than just the sender and the user). … For example, as illustrated in FIG. 4, a social graph 400 may include a variety of clusters of users who are connected with one another. In some examples, a user 408 may receive a message from a sender 404. In one example, the systems described herein may determine that sender 404 is part of the same cluster 402 and/or is directly connected to a sender 406 who has previously sent a harassing message to user 408. In this example, the systems described herein may determine that the message from sender 404 is more likely to be harassment based on the social graph connection between sender 404 and sender 406." Calculating a probability of harassment based on the relationships to a user within a social graph is considered analogous to calculating a level by using information indicating a relationship between speakers) and attribute information of each speaker (Kats et al. ¶ (26)-(29), "modeling module 106 may perform the topic modeling that is customized based on at least one harassment topic relevant to the user by analyzing, via machine learning and/or natural language processing, harassing messages previously sent to the user via the social media service. ... calculation module 108 may, as part of server 206 in FIG. 2, calculate, based at least in part on the topic modeling, probability 210 that message 208 is harassment targeted at the user.").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Miron et al. in view of Gopalan to incorporate Kats et al.’s user customization. The suggestion/motivation for doing so would have been that, “By using topic modeling to create a custom filter, the systems described herein may identify harassing messages targeted at a particular user with a much higher degree of accuracy than systems that look for universally negative keywords and phrases,” as noted by the Kats et al. disclosure in paragraph (28).

Claim 5

Regarding claim 5, the rejection of claim 1 is incorporated. Miron et al. in view of Gopalan disclose all the elements of the claimed invention as stated above. Miron et al. in view of Gopalan do not explicitly disclose customizing harassment detection using user preferences. However, Kats et al. disclose acquiring harassment resistance information indicating a degree of tolerance of each speaker to a harassment utterance that raises a concern about harassment (Kats et al. ¶ (36), "the systems described herein may determine that the probability that the message is harassment meets the threshold for harassment probability by identifying the threshold for harassment probability set by the user (e.g., 70%, 80%, 95%, etc.)"); and determining the level of harassment by using the harassment resistance information (Kats et al. ¶ (36), "blocking module 110 may block or not block messages based at least in part on user preferences. For example, the systems described herein may determine that the probability that the message is harassment meets the threshold for harassment probability by identifying the threshold for harassment probability set by the user (e.g., 70%, 80%, 95%, etc.)" A user-set threshold for harassment probability is considered analogous to harassment resistance information. Therefore, using a user-customized threshold value for harassment detection is considered analogous to determining a level of harassment using harassment resistance information).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Miron et al. in view of Gopalan to incorporate Kats et al.’s user customization. The suggestion/motivation for doing so would have been that, “Some users may have a higher tolerance for harassing messages and a lower tolerance for accidentally missing benign messages, while other users may feel differently. By detecting user preferences, blocking module 110 may err on the side the user is most comfortable with,” as noted by the Kats et al. disclosure in paragraph (36).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday, 9:30am - 7pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

01/14/2026
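Taken together, the rejection reads the claims as a short pipeline: transcribe speech (Gopalan), count negative and positive words (Gopalan), compare the counts to get a harassment level (Miron), and gate the result on a user-set tolerance threshold (Kats). A minimal sketch of that reading; the word lists, the stubbed transcriber, and the default threshold are all hypothetical illustrations, not taken from any cited reference:

```python
# Illustrative sketch of the pipeline described in the rejection. Every
# name, word list, and threshold here is a hypothetical stand-in.
NEGATIVE = {"hate", "stupid", "useless"}   # hypothetical lexicon
POSITIVE = {"thanks", "great", "friend"}   # hypothetical lexicon

def transcribe(speech_data: bytes) -> str:
    """Stand-in for an ASR engine (the Gopalan speech-to-text step)."""
    return speech_data.decode("utf-8")

def levels(text: str) -> tuple[int, int]:
    """Negative/positive word counts (the claim 2 mapping to Gopalan)."""
    words = text.lower().split()
    return (sum(w in NEGATIVE for w in words),
            sum(w in POSITIVE for w in words))

def harassment_info(speech_data: bytes, tolerance: float = 0.5) -> dict:
    """Compare the two levels (Miron) under a user-set tolerance (Kats)."""
    neg, pos = levels(transcribe(speech_data))
    score = neg / (neg + pos) if neg + pos else 0.0
    return {"negative_level": neg, "positive_level": pos,
            "harassment": score > tolerance}

print(harassment_info(b"you are stupid and useless"))
# → {'negative_level': 2, 'positive_level': 0, 'harassment': True}
```

Raising `tolerance` toward 1.0 flags only strongly negative conversations, mirroring Kats's user-set probability thresholds (70%, 80%, 95%).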

Prosecution Timeline

Jun 10, 2024
Application Filed
Jan 14, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
