Prosecution Insights
Last updated: April 17, 2026
Application No. 18/780,380

SYSTEMS AND METHODS FOR MITIGATING THE SPREAD OF OFFENSIVE CONTENT AND/OR BEHAVIOR

Non-Final OA — §103, §DP
Filed: Jul 22, 2024
Examiner: KHAN, AFTAB N
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (364 granted / 454 resolved; +22.2% vs TC avg)
Interview Lift: +50.2% — strong (based on resolved cases with interview)
Avg Prosecution: 3y 2m typical timeline; 15 applications currently pending
Total Applications: 469 across all art units (career history)

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 454 resolved cases.
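The per-statute deltas above all point back to a single implied Tech Center baseline: subtracting each delta from the examiner's rate recovers it. A minimal sketch checking that consistency (the 40% baseline is inferred from the numbers shown, not stated in the source):

```python
# Examiner allowance-after-rejection rate and delta vs. TC average, per statute,
# taken from the table above.
rates = {"101": (13.1, -26.9), "103": (47.0, +7.0),
         "102": (15.7, -24.3), "112": (17.9, -22.1)}

# Implied Tech Center average for each statute: examiner rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same ~40.0% baseline
```

In other words, the dashboard appears to compare every statute against one ~40% Tech Center estimate rather than per-statute averages.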

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-19 are presented for examination. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 01/16/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Provisional Priority

The present application (18/779640) appears to claim status as a continuation of U.S. Patent Application No. 17/900,866, filed August 31, 2022, which is a continuation-in-part of U.S. Patent Application No. 17/393,540, filed August 4, 2021, now U.S. Patent No. 11,706,176, issued July 18, 2023, which is a continuation of U.S. Patent Application No. 16/372,140, filed April 1, 2019, now U.S. Patent No. 11,095,585, issued August 17, 2021, which is a continuation of U.S. Patent Application No. 15/187,674, filed June 20, 2016, now U.S. Patent No. 10,250,538, issued April 2, 2019, which is a continuation-in-part of U.S. Patent Application No. 14/738,874, filed June 13, 2015, now U.S. Patent No. 9,686,217, issued June 20, 2017, which claims the benefit of U.S. Provisional Patent Application No. 62/012,296, filed June 14, 2014. Applicant's claim under 35 U.S.C. 119(e) for the benefit of the originally filed provisional application 62/012,296 is not granted because the subject matter is broader, completely different in scope, and of totally distinct utility.
For example, the first U.S. patent in the chain, 9,686,217, is directed to a messaging application that stops cyberbullying by employing re-thinking techniques that generate an alert message on a smartphone prior to the sending of a hurtful or bullying message to another user, whereas the current application is directed to machine learning techniques rather than the original ReThink concepts. Applicant is requested to perfect the effective filing date and properly reflect it in paragraph [001] of the specification. Currently, as best understood, the effective filing date of the current application can reasonably be extended only to its parent application, 17/900,866, filed on 08/31/2022.

Internet Communication Authorization

The examiner recommends filing a written authorization for internet communication in response to the present action. Doing so permits the USPTO to communicate with applicant using internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO cannot respond to internet correspondence received from Applicant. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s).
See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b). The USPTO internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine which form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,341,739 B2. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-19 are anticipated by claims 1-18 of the issued U.S. patent.
Instant Application, Claim 1:

A method for identifying offensive message content comprising: for each particular responsive message of a plurality of responsive messages received in response to an initial message: providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message; processing, by the one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and storing, by the one or more computers, the generated output data; determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and based on a determination, by the one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content.

U.S. Patent No. 12,341,739 B2 (App. No. 17/900,866), Claim 1:

A method for identifying offensive message content comprising: for each particular responsive message of a plurality of responsive messages received in response to an initial message: providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message; processing, by one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and storing, by one or more computers, the generated output data; determining, by one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and based on a determination, by one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, one or more remedial operations to mitigate exposure to the offensive content, wherein performing the one or more remedial operations comprises: adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content score causes the initial message content to be demoted in a list of content items.

Claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,341,739. Claim 1 of the instant application is analogous to claim 1 of the issued U.S. patent, as can clearly be seen above. Although the conflicting claims are not identical, they are not patentably distinct from each other because the subject matter claimed in the instant application is substantially similar in nature to that of U.S. patent application No. 17/900,866. This is a non-provisional obviousness-type double patenting rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Owens et al. (US Pub. 2016/0321260 A1) in view of Epstein et al. (US 2015/0309987 A1) and further in view of Almerekhi et al. (PROVOKE, 2022).

Referring to claims 1, 18, and 19, Owens teaches a method, system, and computer-readable storage media for identifying offensive message content comprising: for each particular responsive message of a plurality of responsive messages received in response to an initial message (Owens teaches analyzing multiple responses/comments to a content item; see ¶[0006], ¶[0029], "signals indicative of objectionable material can include a keyword signal relating to a number of responses ... to a content item"); storing, by the one or more computers, the generated output data
(Owens ¶[0034]: "data store 118 ... counters indicating a number of comments provided in response", i.e., Owens teaches storing counters and response-related metrics); determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content (Owens ¶[0004], ¶[0026] teaches threshold-based determination using response signals, "compare ... signal values with a threshold"; see also [0032], [0033]); and based on a determination, by the one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content (Owens ¶[0008]-[0009] expressly teaches remedial operations applied to the initial content item; see "adjusting a rank value ... lowering the rank value", [0027], demotion prior to proliferation; claims 5, 6).

Owens teaches the invention but lacks providing, by one or more computers, content of the particular responsive message as an input to a machine learning model. However, Epstein teaches providing, by one or more computers, content of the particular responsive message as an input to a machine learning model (Epstein teaches providing text samples as inputs to a machine-learning classifier; ¶[0005], ¶[0021], "providing, to the classifier, a text sample ... and obtaining ... a label"). Epstein also teaches ML models producing likelihood labels for offensive usage (see ¶[0011], "obtaining ... a label confidence score that indicates a confidence"), but lacks a model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message.
Epstein further teaches processing, by the one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content (Epstein ¶[0011], ¶[0021]: the classifier outputs labels and confidence scores). However, PROVOKE teaches a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message (PROVOKE teaches machine learning models that detect toxicity from Reddit comments (see abstract; page 2, right column, lines 1-60), in which properties of a parent message (a toxicity trigger) are inferred from toxic child replies; see the definition of toxicity triggers and parent-inference using reply toxicity predictions, Fig. 1, and the Methodology section, page 3, left column, and page 4. The model predicts a negative/offensive characterization of an initial message based on processing of response messages from a variety of users (i.e., user A from a first device and user B from a second user device)).

It would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify the response-based moderation framework of Owens to include the machine-learning offensiveness classifier as taught by Epstein, and to further include the teaching that toxic replies reveal properties of parent messages as taught by PROVOKE, in order to improve the accuracy and automation of offensive-content detection, especially on large-scale social platforms where manual moderation is impractical. The combination merely substitutes known ML techniques (Epstein) into known moderation pipelines (Owens), guided by known conversational toxicity dynamics (PROVOKE), yielding predictable results.
Referring to claim 2, Owens and Epstein teach the method of claim 1, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content comprises: for each particular instance of output data generated for a responsive message: determining, by the one or more computers, whether the particular instance of output data satisfies a predetermined threshold (Owens ¶[0032], [0033], [0026], "signal values ... compared with a threshold", i.e., comparing objectionable signals to the threshold; Epstein ¶[0011], classifier outputs likelihood/confidence scores usable with thresholds); and incrementing one of a plurality of counters based on the determination as to whether the particular instance of output data satisfies a predetermined threshold, wherein incrementing one of the plurality of counters based on the determination comprises: incrementing a first counter corresponding to a first determination that the particular responsive message indicates that the initial message is likely offensive (Owens ¶[0034]: explicit counters tracking the number of responses meeting criteria, see "counters indicating a number of comments provided in response"), or incrementing a second counter corresponding to a second determination that the particular responsive message indicates that the initial message is not likely offensive (Owens ¶[0034]: separate counters for objectionable vs. non-objectionable responses).
Referring to claim 3, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely includes offensive content if the first counter is greater than the second counter (Owens ¶[0034]: majority-type determinations using comparative counters relative to the "number of objectionable responses"). Referring to claim 4, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely includes offensive content if the first counter is greater than the second counter after evaluation of a threshold number of responsive messages (Owens ¶[0027], [0029]: avoiding premature action until sufficient response data is collected). Referring to claim 5, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely includes offensive content if the first counter satisfies a predetermined threshold number of occurrences (see Owens ¶[0026]: threshold-based triggering of moderation actions; [0032], [0033], [0034]). Referring to claim 6,
Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter is greater than the first counter (see ¶[0034] for the logical inverse of the objectionable-content determination; also see [0025], [0027], [0029], [0040]). Referring to claim 7, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter is greater than the first counter after evaluation of a threshold number of responsive messages (see Owens ¶[0027], [0029], [0032], [0041]-[0043]: the same delayed-decision logic applied to moderation outcomes). Referring to claim 8, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter satisfies a predetermined threshold number of occurrences (see ¶[0026], [0034]: counters compared to thresholds for moderation decisions; see ¶[0041]-[0043]).
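The counter-based logic mapped to claims 2-8 can be sketched in a few lines. This is a hypothetical illustration of the claim language only, not code from Owens, Epstein, or PROVOKE; the threshold values and the per-message scores are assumptions:

```python
# Hypothetical sketch of claims 2-8: each responsive message's model-output
# likelihood is thresholded, one of two counters is incremented, and the
# initial message is classified once enough responses have been evaluated.
OFFENSIVE_THRESHOLD = 0.5   # assumed per-message likelihood threshold (claim 2)
MIN_RESPONSES = 3           # assumed threshold number of responsive messages (claims 4, 7)

def likely_offensive(response_scores):
    """response_scores: model-output likelihoods, one per responsive message."""
    if len(response_scores) < MIN_RESPONSES:
        return None  # withhold the decision until enough responses are evaluated
    # Claim 2: first counter (likely offensive) vs. second counter (not likely offensive)
    first = sum(1 for s in response_scores if s >= OFFENSIVE_THRESHOLD)
    second = sum(1 for s in response_scores if s < OFFENSIVE_THRESHOLD)
    if first > second:
        return True    # claims 3-5: initial message likely includes offensive content
    if second > first:
        return False   # claims 6-8: initial message likely does not
    return None        # claim 9: tie broken by engagement signals (not modeled here)

print(likely_offensive([0.9, 0.8, 0.2]))  # → True
```

Under this reading, the dispute in the §103 rejection is whether Owens's response counters plus Epstein's classifier scores render this loop obvious, with PROVOKE supplying the parent-inference step.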
Referring to claim 9, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the first counter and the second counter are equal (see ¶[0068], [0006], [0019]: engagement and interaction metrics used as ranking signals); and based on a determination, by the one or more computers, that the first counter and the second counter are equal, determining, by the one or more computers, whether the initial message likely includes offensive content based on one or more of a number of likes associated with the initial message, a number of different types of emojis associated with the initial message, or a number of comments associated with the initial message (emojis would be a known and obvious substitution of known engagement signals). Referring to claim 10, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: generating, using one or more computers, notification data that, when processed by the first user device, causes the first user device to prompt a user of the first user device to indicate whether the user wants to delete the initial message content or not delete the initial message content (see Owens ¶[0027]: user-facing moderation actions before content proliferation).
Referring to claim 11, Owens and Epstein teach the method of claim 10, further comprising: receiving, using one or more computers, data corresponding to an indication that the user of the first user device wants to delete the initial message content (Owens ¶[0027]: user-facing moderation actions before content proliferation); and in response to receiving data corresponding to an indication that the user of the first user device wants to delete the initial message content, deleting, using one or more computers, the initial message, wherein deletion of the initial message content prohibits any other user from viewing the first message content after its deletion (Epstein ¶[0002], [0032]: redaction/removal of offensive content). Referring to claim 12, Epstein teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: deleting, using one or more computers, the initial message, wherein deletion of the initial message content prohibits any other user from viewing the initial message content after its deletion (see Epstein ¶[0002]-[0003], [0032]: automatic redaction/removal of offensive text; [0066], [0074]). Referring to claim 13, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: flagging, using one or more computers, the initial message for deletion (see ¶[0025]-[0027]: flagging content items for moderation action).
Referring to claim 14, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content score causes the initial message content to be demoted in a list of content items (see ¶[0008]-[0009]: Owens teaches lowering rank values to demote content). Referring to claim 15, Epstein teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: storing, using one or more computers, the content of the initial message in a database of offensive content used to screen messages or other content for offensive content (see ¶[0024], [0032]: repositories of labeled offensive text samples). Referring to claim 16, Epstein teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: training, using one or more computers, a machine learning model to detect subsequent messages that have the initial message content as offensive content (see ¶[0004]-[0007], [0023]: iterative training/retraining of the classifier using labeled offensive samples). Referring to claim 17, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: causing, using one or more computers, a warning message to be displayed in proximity to the initial message content within a messaging application (see ¶[0050], notice to be displayed "... or similar warnings"; also see [0027], mitigation actions to prevent exposure before dissemination).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Additional relevant prior art can be found in the included form PTO-892 (Notice of Cited References).

Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFTAB N. KHAN, whose telephone number is (571) 270-5172. The examiner can normally be reached Monday-Friday, 8AM-5PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Glenton Burgess, can be reached at 571-272-3949. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AFTAB N. KHAN/ Primary Examiner, Art Unit 2454

Prosecution Timeline

Jul 22, 2024
Application Filed
Jan 05, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587579: SELECTING PROXY DEVICE BASED ON HARDWARE RESOURCE UTILIZATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12574440: MEDIA CONTENT MANAGEMENT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12574356: BANDWIDTH CONTROLLED MULTI-PARTY JOINT DATA PROCESSING METHODS AND APPARATUSES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12574426: IDENTIFYING INSERTION POINTS FOR INSERTING LIVE CONTENT INTO A CONTINUOUS CONTENT STREAM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12557234: SERVER INFORMATION HANDLING SYSTEM SECURITY BEZEL WITH INTEGRATED FILTER COMPARTMENT
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+50.2%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
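The headline grant probability follows directly from the examiner's career counts (364 granted of 454 resolved). A minimal check of that arithmetic:

```python
# Career counts reported above for this examiner.
granted, resolved = 364, 454

# Grant probability is the career allow rate: granted / resolved.
allow_rate = granted / resolved
print(round(allow_rate * 100, 1))  # → 80.2 (shown on the page rounded to 80%)
```

The 99% "with interview" figure is stated as derived from the +50.2% interview lift, but the exact lift formula is not given on the page, so only the base rate is verified here.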
