DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-19 are presented for examination.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/16/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Internet Communication Authorization
The examiner recommends filing a written authorization for internet communication in response to the present action. Doing so permits the USPTO to communicate with applicant using internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO cannot respond to Internet correspondence received from Applicant. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,341,739. Although the conflicting claims are not identical, they are not patentably distinct from each other because claim 1 is anticipated by claims 1-18 of the issued U.S. patent.
Instant Application 18/780,459, claim 1 (reproduced first below); Issued U.S. Patent No. 12,341,739, claim 1 (reproduced second below):
1. A method for identifying offensive message content comprising:
for each particular responsive message of a plurality of responsive messages received in response to an initial message:
providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message;
processing, by the one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and
storing, by the one or more computers, the generated output data;
determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and
based on a determination, by the one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content.
Issued U.S. Patent No. 12,341,739, claim 1: A method for identifying offensive message content comprising:
for each particular responsive message of a plurality of responsive messages received in response to an initial message:
providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message;
processing, by one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and
storing, by one or more computers, the generated output data;
determining, by one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and
based on a determination, by one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, one or more remedial operations to mitigate exposure to the offensive content, wherein performing the one or more remedial operations comprises: adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content scores causes the initial message content to be demoted in list of content items.
Claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent Application No. 17/900,866. Claim 1 of the instant application is analogous to claim 1 of the issued application. As identified above, claim 1 of Application 17/900,866 (or U.S. Patent No. 12,341,739) contains most of the elements of claim 1 of the instant application and thus anticipates claim 1 and the other independent claims of the instant application. This is a nonprovisional obviousness-type double patenting rejection.
Although the conflicting claims are not identical, they are not patentably distinct from each other because the subject matter claimed in the instant application is substantially similar in nature to that of U.S. Patent No. 12,341,739. This is a nonstatutory obviousness-type double patenting rejection.
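For illustration of the subject matter that the conflicting claim sets share, the common method steps could be sketched as follows. This is a minimal sketch only; the model, threshold value, and helper names are hypothetical stand-ins and do not appear in either claim set.

```python
# Illustrative sketch only; reply_model, the threshold, and the
# remedial-operation placeholder are hypothetical, not either party's code.

def evaluate_initial_message(responses, reply_model, threshold=0.5):
    """For each responsive message, run its content through a model
    trained to predict (from the reply alone) the likelihood that the
    initial message is offensive; store the outputs, then determine
    whether the initial message likely includes offensive content."""
    stored_outputs = [reply_model(r) for r in responses]
    likely_offensive = all(p > threshold for p in stored_outputs)
    if likely_offensive:
        perform_remedial_operation()   # e.g., demote, flag, or warn
    return stored_outputs, likely_offensive

def perform_remedial_operation():
    """Placeholder for a remedial operation mitigating exposure."""
    pass
```

In this sketch every stored output must indicate offensiveness, tracking the "output data generated for each of the plurality of responsive messages" language; the issued claim additionally narrows the remedial operation to score-based demotion.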
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Owens et al. (US Pub. 2016/0321260 A1) in view of Epstein et al. (US 2015/0309987 A1), and further in view of Almerekhi et al. (PROVOKE, 2022).
Referring to claims 1, 18, and 19, Owens teaches a method/system/CRM for identifying offensive message content (see ¶¶ [0004], [0025]-[0026], [0041]-[0045], identifying objectionable material, i.e., offensive content, using response-based signals and messages in social networking system 600; ¶ [0079]) comprising:
for each particular responsive message of a plurality of responsive messages received in response to an initial message (Owens ¶ [034], [038], [053], [055], [085], Owens analyzes multiple responses/comments to a content item, see text such as responses to a content item or comments provided in response):
storing, by the one or more computers, the generated output data (Owens teaches storing counters and response-related metrics; see ¶ [0034]: “data store 118… counters indicating a number of comments provided in response”);
determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content (Owens ¶[0004], ¶[0026] teaches threshold-based determination using response signals… “compare… signal values with a threshold” – [032], [033]); and
based on a determination, by the one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content (Owens ¶¶ [0008]-[0009] expressly teaches remedial operations applied to the initial content item; see “adjusting a rank value… lowering the rank value,” ¶ [0027]; demotion prior to proliferation; claims 5-6).
Owens uses rule-based signals to determine objectionable material but does not expressly teach providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message.
However, Epstein teaches providing, by one or more computers, content of the particular responsive message as an input to a machine learning model (Epstein teaches providing text samples as inputs to a machine-learning classifier; see ¶ [0005], ¶ [0021], “providing, to the classifier, a text sample … and obtaining … a label”; see also ¶ [0011]); and
processing, by the one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content (Epstein ¶[0011], ¶[0021]: classifier outputs labels and confidence scores);
Epstein teaches determining whether a given piece of text is offensive using a machine learning model that takes a single text sample as input and outputs a confidence that the text is offensive, but does not teach inferring properties of a different message based on replies.
Neither Owens nor Epstein expressly teaches a model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message.
However, PROVOKE teaches that properties of a parent message can be inferred from how users reply to it. Specifically, PROVOKE teaches a model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message (PROVOKE infers properties of a parent message, i.e., a toxicity trigger, from toxic child replies; see the definition of toxicity triggers and parent inference using reply toxicity predictions; see Figs. 1-2; Methodology, pp. 3-4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the response-based moderation framework of Owens to include the machine-learning offensiveness classifier taught by Epstein, and to further include the teaching that toxic replies reveal properties of parent messages as taught by PROVOKE, in order to improve the accuracy and automation of offensive-content detection, especially on large-scale social platforms where manual moderation is impractical. The combination merely substitutes known ML techniques (Epstein) into known moderation pipelines (Owens), guided by known conversational toxicity dynamics (PROVOKE), yielding predictable results to arrive at the present invention.
Referring to claim 2, Owens and Epstein teach the method of claim 1, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content comprises: for each particular instance of output data generated for a responsive message:
determining, by the one or more computers, whether the particular instance of output data satisfies a predetermined threshold (Owens: [032], [033], [026] “Signal Values…compared with a threshold” i.e. comparing objectionable signals to the threshold & Epstein [011], classifier outputs likelihood/confidence scores usable with thresholds); and
incrementing one of a plurality of counters based on the determination as to whether the particular instance of output data satisfies a predetermined threshold, wherein incrementing one of the plurality of counters based on the determination comprises: incrementing a first counter corresponding to a first determination that the particular responsive message indicates that the initial message is likely offensive (Owens: ¶[034], Explicit counters tracking number of responses meeting criteria, see “Counters indicating a number of comments provided in response”), or
incrementing a second counter corresponding to a second determination that the particular responsive message indicates that the initial message is not likely offensive (Owens: ¶ [034], Separate counters for objectionable vs. Non-Objectionable responses).
Referring to claim 3, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the initial message likely includes offensive content if the first counter is greater than the second counter (Owens ¶ [0034], majority-type determination using comparative counters relative to the “number of objectionable responses”).
Referring to claim 4, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the initial message likely includes offensive content if the first counter is greater than the second counter after evaluation of a threshold number of responsive messages (Owens ¶ [027],[029] Avoiding premature action until sufficient response data collected).
Referring to claim 5, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the initial message likely includes offensive content if the first counter satisfies a predetermined threshold number of occurrences (see Owens ¶ [026], Threshold-based triggering of moderation actions [032], [033], [034]).
Referring to claim 6, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises: determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter is greater than the first counter (see ¶[034], for Logical Inverse of Objectionable-Content Determination…also see [025], [027],[029],[040]).
Referring to claim 7, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter is greater than the first counter after evaluation of a threshold number of responsive messages (see Owens; ¶ [027][029],[032], [041]-[043] Same delayed-decision logic applied to moderation outcomes).
Referring to claim 8, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the initial message likely does not include offensive content if the second counter satisfies a predetermined threshold number of occurrences (see ¶ [026], [034], Counters compared to thresholds for moderation decisions see ¶ [041]-[043]).
Referring to claim 9, Owens teaches the method of claim 2, wherein determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content further comprises:
determining, by the one or more computers, that the first counter and the second counter are equal (see ¶ [068] [006], [019], Engagement and interaction metrics used as ranking signals); and
based on a determination, by the one or more computers, that the first counter and the second counter are equal, determining, by the one or more computers, whether the initial message likely includes offensive content based on one or more of a number of likes associated with the initial message, a number of different types of emojis associated with the initial message, or a number of comments associated with the initial message (emojis would be a known and obvious substitution of other known engagement signals).
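The counter-and-threshold logic recited across claims 2-9 can be sketched compactly. The function, threshold, minimum-reply count, and tie-break callback below are hypothetical illustrations, not language from Owens, Epstein, or the claims.

```python
# Hypothetical sketch of the counter logic of claims 2-9.
def determine_offensive(likelihoods, threshold=0.5, min_replies=3,
                        engagement_tiebreak=None):
    """Aggregate per-reply likelihoods into a parent-message decision."""
    offensive_count = 0   # first counter: reply indicates offensive parent
    benign_count = 0      # second counter: reply indicates benign parent
    for p in likelihoods:
        if p >= threshold:
            offensive_count += 1
        else:
            benign_count += 1
    if len(likelihoods) < min_replies:
        return None                          # delayed decision (claims 4, 7)
    if offensive_count > benign_count:
        return True                          # claim 3
    if benign_count > offensive_count:
        return False                         # claim 6
    # counters equal: fall back to engagement signals such as likes,
    # emojis, or comment counts (claim 9)
    return engagement_tiebreak() if engagement_tiebreak else None
```

The delayed-decision branch mirrors the "after evaluation of a threshold number of responsive messages" limitation, and the final branch mirrors the equal-counter tie-breaking of claim 9.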
Referring to claim 10, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: generating, using one or more computers, notification data that, when processed by the first user device, causes the first user device to prompt a user of the first user device to indicate whether the user wants to delete the initial message content or not delete the initial message content (see Owens ¶ [027], User facing moderation actions before content proliferation).
Referring to claim 11, Owens and Epstein teach the method of claim 10, further comprising: receiving, using one or more computers, data corresponding to an indication that the user of the first user device wants to delete the initial message content (Owens ¶ [0027], user-facing moderation actions before content proliferation); and
in response to receiving data corresponding to an indication that the user of the first user device wants to delete the initial message content, deleting, using one or more computers, the initial message, wherein deletion of the initial message content prohibits any other user from viewing the first message content after its deletion (Epstein: [002], [032], Redaction/removal of offensive content).
Referring to claim 12, Epstein teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises:
deleting, using one or more computers, the initial message, wherein deletion of the initial message content prohibits any other user from viewing the initial message content after its deletion (see Epstein ¶[002]-[003], [032] -Automatic redaction/removal of offensive text [066] [074]).
Referring to claim 13, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: flagging, using one or more computers, the initial message for deletion (see ¶¶ [0025]-[0027], flagging content items for moderation action).
Referring to claim 14, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises:
adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content scores causes the initial message content to be demoted in list of content items (see ¶ [008]-[009], Owens teaches lowering rank values to demote content).
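The score-adjustment remedial operation of claim 14 (and of Owens's rank lowering) can be illustrated with a short sketch; the item structure, penalty factor, and function name are hypothetical, not taken from Owens.

```python
# Hypothetical sketch of rank demotion via content-score adjustment.
def demote_in_ranking(items, message_id, penalty=0.5):
    """Lower the content score of the flagged initial message so it
    sinks in a score-ordered list of content items."""
    for item in items:
        if item["id"] == message_id:
            item["score"] *= penalty   # adjusted score causes demotion
    # re-rank: higher score appears earlier in the list of content items
    return sorted(items, key=lambda i: i["score"], reverse=True)
```

A multiplicative penalty is only one possible adjustment; any reduction that moves the item below its neighbors in the ranked list would satisfy the demotion language.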
Referring to claim 15, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises: storing, using one or more computers, the content of the initial message in a database of offensive content used to screen messages or other content for offensive content (see ¶¶ [0024], [0032], repositories of labeled offensive text samples).
Referring to claim 16, Epstein teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises:
training, using one or more computers, a machine learning model to detect subsequent messages that have the initial message content as offensive content (see ¶ [004]-[007], [023], Iterative Training/retraining of the classifier using labeled offensive samples).
Referring to claim 17, Owens teaches the method of claim 1, wherein performing, by one or more computers, a remedial operation to mitigate exposure to the offensive content comprises:
causing, using one or more computers, a warning message to be displayed in proximity to the initial message content within a messaging application (see ¶ [050], notice to be displayed … “or similar warnings” also see [027], Mitigation actions to prevent exposure before dissemination ).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Additional relevant prior art can be found in the included form PTO-892 (Notice of Cited References).
Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFTAB N. KHAN whose telephone number is (571)270-5172. The examiner can normally be reached on Monday-Friday 8AM-5PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Glenton Burgess can be reached on 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AFTAB N. KHAN/
Primary Examiner, Art Unit 2454