DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 2-22 are presented for examination.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/16/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Internet Communication Authorization
The examiner recommends filing a written authorization for internet communication in response to the present action. Doing so permits the USPTO to communicate with applicant using internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO cannot respond to Internet correspondence received from Applicant. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
The USPTO Internet website contains terminal disclaimer forms that may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 2-22 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,341,739 B2. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 2-22 are anticipated by claims 1-18 of the issued patent.
Instant Application: 18/638,212 (claims 2 and 9 reproduced below)
Issued Patent: U.S. Patent No. 12,341,739 B2, issued from Application 17/900,866 (claims 1 and 17 reproduced below)

Instant Application, Claim 2:
A method for obfuscating offensive content, the method comprising:
receiving, using one or more computers, data corresponding to message content entered into a first user device by a first user;
processing, using the one or more computers, the received data corresponding to the message content entered into the first user device to generate output data indicating whether the message content entered into the first user device by the first user includes offensive content;
determining, using the one or more computers and based on the generated output data whether the message content entered into the first user device by the first user includes offensive content; and
based on a determination, by the user device, that the message content entered into the first user device by the first user includes offensive content:
obfuscating, using the one or more computers the offensive content;
providing, using the one or more computers, the obfuscated offensive content to a second user device;
generating, using the one or more computers, a notification that indicates that the message content includes offensive content; and
providing, using the one or more computers, the generated notification to the first user device.
U.S. Patent No. 12,341,739 B2, Claim 1:
A method for identifying offensive message content comprising:
for each particular responsive message of a plurality of responsive messages received in response to an initial message:
providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message;
processing, by one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and
storing, by one or more computers, the generated output data;
determining, by one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and
based on a determination, by one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, one or more remedial operations to mitigate exposure to the offensive content, wherein performing the one or more remedial operations comprises: adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content scores causes the initial message content to be demoted in list of content items.
9. (New) A system for obfuscating offensive content, the system comprising:
one or more computers; and one or more memory devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations, the operations comprising:
receiving, using the one or more computers, data corresponding to message content entered into a first user device by a first user;
processing, using the one or more computers, the received data corresponding to the message content entered into the first user device to generate output data indicating whether the message content entered into the first user device by the first user includes offensive content; determining, using the one or more computers and based on the generated output data whether the message content entered into the first user device by the first user includes offensive content; and
based on a determination, by the user device, that the message content entered into the first user device by the first user includes offensive content: obfuscating, using the one or more computers the offensive content;
providing, using the one or more computers, the obfuscated offensive content to a second user device;
generating, using the one or more computers, a notification that indicates that the message content includes offensive content; and providing, using the one or more computers, the generated notification to the first user device.
U.S. Patent No. 12,341,739 B2, Claim 17:
17. A system for identifying offensive message content comprising:
one or more computers; and one or more computer-readable storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations, the operations comprising: for each particular responsive message of a plurality of responsive messages received in response to an initial message:
providing, by the one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message;
processing, by the one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and storing, by the one or more computers, the generated output data; determining, by the one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and
based on a determination, by the one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by the one or more computers, one or more remedial operations to mitigate exposure to the offensive content, wherein performing the one or more remedial operations comprises: adjusting, using the one or more computers, a content score associated with the initial message content, wherein the adjusted content scores causes the initial message content to be demoted in list of content items.
Claims 2-22 are narrower than the earlier concept of inferring offensiveness from replies. These claims detect offensiveness directly from the message content itself, obfuscate the content for recipients, and notify the sender. These claims are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,341,739. Claims 2, 9, and 16 of the instant application are analogous to the claims of the issued patent. Except for the elements identified above, the claims of U.S. Patent No. 12,341,739 contain most of the elements of the independent claims of the instant application and thus anticipate claim 2 and the other similar independent claims. This is a non-provisional double patenting rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-22 are rejected under 35 U.S.C. 103 as being unpatentable over Laks (US 2016/0350675 A1) in view of Publicover (US 2016/0253710 A1).
Referring to claims 2 and 16, Laks teaches a method for obfuscating offensive content (Laks ¶¶ [0003], [0010], [0030], teaches methods for detecting objectionable/offensive content and initiating moderation actions), the method comprising:
receiving, using one or more computers, data corresponding to message content entered into a first user device by a first user (Laks ¶¶[0016], [0023] receives user-generated content items (posts/messages) submitted from user devices for analysis);
processing, using the one or more computers, the received data corresponding to the message content entered into the first user device to generate output data indicating whether the message content entered into the first user device by the first user includes offensive content (Laks ¶¶ [0028]-[0031], [0042], applies machine-learning models to message content to generate scores/probabilities that the content includes objectionable material, i.e., offensive content);
determining, using the one or more computers and based on the generated output data whether the message content entered into the first user device by the first user includes offensive content (Laks ¶¶ [0030], [0034], the flagged content identification module 104 identifies content items and compares ML output scores to thresholds to classify content as objectionable); and
based on a determination, by the user device, that the message content entered into the first user device by the first user includes offensive content (Laks ¶¶ [0033], [0035], explicitly triggers moderation or other appropriate actions when content is determined to be objectionable):
generating, using the one or more computers, a notification that indicates that the message content includes offensive content (Laks ¶¶[062], discloses notifications and alerts associated with moderation determinations “Collection of flagged content”);
providing, using the one or more computers, the generated notification to the first user device (Laks ¶ [0063], sends notifications for further review, element 312, to user devices associated with content originators when content is flagged or moderated);
Laks teaches detecting offensive/objectionable content, performing machine-learning scoring, and triggering moderation workflows, but Laks does not expressly teach obfuscating, using the one or more computers, the offensive content.
However, Publicover teaches modifying or substituting content prior to delivery to mitigate exposure. Publicover, a relevant art in the same field of endeavor, teaches obfuscating, using the one or more computers, the offensive content (Publicover ¶¶ [302], [303], teaches modifying, substituting, or filtering content prior to delivery, such that the user receives altered content instead of the original);
Furthermore, Publicover teaches providing, using the one or more computers, the obfuscated offensive content to a second user device (Publicover ¶¶ [304]-[306] discloses delivering modified/substitute content to user devices instead of original content);
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the objectionable-content detection of Laks to include obfuscation of content, such as modifying or substituting content before it is delivered, as taught by Publicover, and to provide the obfuscated content to a recipient device while notifying the sender, in order to protect recipients from offensive content while maintaining user awareness.
Referring to claim 3, Publicover teaches the method of claim 1, wherein obfuscating the offensive content prohibits a second user of the second user device from viewing the offensive content (see Publicover ¶¶ [150], [303], teaches delivering modified/substitute content in place of original content, thereby preventing viewing of the original).
Referring to claim 4, Publicover teaches the method of claim 1, wherein generating the notification that indicates that the message content includes offensive content comprises: generating, using the one or more computers, a notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Publicover ¶¶ [224], "content to replace the green screen," [450], [633], [699], discloses substitution/replacement of content prior to delivery).
Referring to claim 5, Publicover teaches the method of claim 4, wherein deleting the obfuscated offensive content prohibits a second user of the second user device from viewing the obfuscated offensive content (Publicover ¶¶ [240], [242], [252], teaches filtering/altering content before presentation).
Referring to claim 6, Laks-Publicover teaches the method of claim 4, wherein providing, using the one or more computers, the generated notification to the first user device comprises: providing, using the one or more computers, the generated notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Laks ¶¶ [024], [029], [032], graphical interface for disapproving content during manual review, which may include deleting the content; prompting deletion is an obvious UI action within that workflow).
Referring to claim 7, Laks-Publicover teaches the method of claim 1, wherein the message content includes image content or video content (Laks ¶¶ [056], [098], various types of content, e.g., audio/video; see also Publicover ¶ [246]).
Referring to claim 8, Publicover teaches the method of claim 1, wherein the obfuscated offensive content comprises obfuscated image content or obfuscated video content (Publicover ¶¶ [225], video editing, [227], [304], overlaid with checkered pattern substitutions).
Referring to claim 9, Laks teaches a system for obfuscating offensive content (Laks ¶¶[003], [010], [030] teaches methods for detecting objectionable/offensive content and initiating moderation actions), the system comprising:
one or more computers (Fig. 6 shows one or more computers); and
one or more memory devices (see ¶ [098], [100], [102], [104], memory) storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations, the operations comprising:
receiving, using the one or more computers, data corresponding to message content entered into a first user device by a first user (Laks ¶¶[0016], [0023] receives user-generated content items (posts/messages) submitted from user devices for analysis);
processing, using the one or more computers, the received data corresponding to the message content entered into the first user device to generate output data indicating whether the message content entered into the first user device by the first user includes offensive content (Laks ¶¶[028]–[031], [0042], applies machine-learning models to message content to generate scores/probabilities that the content includes objectionable material);
determining, using the one or more computers and based on the generated output data whether the message content entered into the first user device by the first user includes offensive content (Laks ¶¶ [030], [034], compares ML output scores to thresholds to classify content as objectionable); and
based on a determination, by the user device, that the message content entered into the first user device by the first user includes offensive content (Laks ¶¶ [035], [040], explicitly triggers moderation actions when content is determined to be objectionable):
generating, using the one or more computers, a notification that indicates that the message content includes offensive content (Laks ¶¶[062], discloses notifications and alerts associated with moderation determinations “Collection of flagged content”); and
providing, using the one or more computers, the generated notification to the first user device (Laks ¶¶[063], sends notifications to user device for further review 312 associated with content originators when content is flagged or moderated);
Laks teaches detecting offensive/objectionable content, performing machine-learning scoring, and triggering moderation workflows, but Laks does not expressly teach obfuscating, using the one or more computers, the offensive content.
However, Publicover teaches modifying or substituting content prior to delivery to mitigate exposure. Furthermore, Publicover teaches obfuscating, using the one or more computers, the offensive content (Publicover ¶¶ [302], [303], teaches modifying, substituting, or filtering content prior to delivery, such that the user receives altered content instead of the original);
providing, using the one or more computers, the obfuscated offensive content to a second user device (Publicover ¶¶ [304]-[306], discloses delivering modified/substitute content to user devices instead of original content);
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the objectionable-content detection of Laks to include obfuscation of content, such as modifying or substituting content before it is delivered, as taught by Publicover, and to provide the obfuscated content to a recipient device while notifying the sender, in order to protect recipients from offensive content while maintaining user awareness.
Referring to claim 10, Publicover teaches the system of claim 8, wherein obfuscating the offensive content prohibits a second user of the second user device from viewing the offensive content (see Publicover ¶¶ [150], [303], teaches delivering modified/substitute content in place of original content, thereby preventing viewing of the original).
Referring to claim 11, Publicover teaches the system of claim 8, wherein generating the notification that indicates that the message content includes offensive content comprises: generating, using the one or more computers, a notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Publicover ¶¶ [224], "content to replace the green screen," [450], [633], [699], discloses substitution/replacement of content prior to delivery).
Referring to claim 12, Publicover teaches the system of claim 11, wherein deleting the obfuscated offensive content prohibits a second user of the second user device from viewing the obfuscated offensive content (Publicover ¶¶ [240], [242], [252], teaches filtering/altering content before presentation).
Referring to claim 13, Laks-Publicover teaches the system of claim 11, wherein providing, using the one or more computers, the generated notification to the first user device comprises:
providing, using the one or more computers, the generated notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Laks ¶¶ [024], [029], [032], graphical interface for disapproving content during manual review, which may include deleting the content; prompting deletion is an obvious UI action within that workflow).
Referring to claim 14, Laks-Publicover teaches the system of claim 8, wherein the message content includes image content or video content (Laks ¶¶ [056], [098], various types of content, e.g., audio/video; see also Publicover ¶ [246]).
Referring to claim 15, Publicover teaches the system of claim 8, wherein the obfuscated offensive content comprises obfuscated image content or obfuscated video content (Publicover ¶¶ [225], video editing, [227], [304], overlaid with checkered pattern substitutions).
Referring to claim 17, Publicover teaches the one or more computer-readable storage media of claim 16, wherein obfuscating the offensive content prohibits a second user of the second user device from viewing the offensive content (see Publicover ¶¶ [150], [303], teaches delivering modified/substitute content in place of original content, thereby preventing viewing of the original).
Referring to claim 18, Publicover teaches the one or more computer-readable storage media of claim 16, wherein generating the notification that indicates that the message content includes offensive content comprises:
generating, using the one or more computers, a notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Publicover ¶¶ [224], "content to replace the green screen," [450], [633], [699], discloses substitution/replacement of content prior to delivery).
Referring to claim 19, Publicover teaches the one or more computer-readable storage media of claim 18, wherein deleting the obfuscated offensive content prohibits a second user of the second user device from viewing the obfuscated offensive content (Publicover ¶¶ [240], [242], [252], teaches filtering/altering content before presentation).
Referring to claim 20, Laks teaches the one or more computer-readable storage media of claim 18, wherein providing, using the one or more computers, the generated notification to the first user device comprises:
providing, using the one or more computers, the generated notification that includes data that, when rendered on a graphical user interface of the first user device asks the first user whether the first user wants to delete the obfuscated offensive content provided to the second user device (Laks ¶¶ [024], [029], [032], graphical interface for disapproving content during manual review, which may include deleting the content; prompting deletion is an obvious UI action within that workflow).
Referring to claim 21, Laks-Publicover teaches the one or more computer-readable storage media of claim 16, wherein the message content includes image content or video content (Laks ¶¶ [056], [098], various types of content, e.g., audio/video; see also Publicover ¶ [246]).
Referring to claim 22, Publicover teaches the one or more computer-readable storage media of claim 16, wherein the obfuscated offensive content comprises obfuscated image content or obfuscated video content (Publicover ¶¶ [225], video editing, [227], [304], overlaid with checkered pattern substitutions).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Additional relevant prior art can be found in the included form PTO-892 (Notice of Cited References).
Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFTAB N. KHAN whose telephone number is (571)270-5172. The examiner can normally be reached on Monday-Friday 8AM-5PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Glenton Burgess can be reached on 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AFTAB N. KHAN/
Primary Examiner, Art Unit 2454