DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 11/26/2025.
Claims 8 and 20 have been canceled by the Applicant.
Claims 1-7, 9-19, and 21-25 are pending and have been examined. This action has been made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments and Amendments
Amendments to the claims by the Applicant have been considered and addressed below.
With respect to the 35 U.S.C. §§ 101, 102, and 103 rejections, the Applicant provides several arguments, to which the Examiner responds below.
35 USC § 102/103 rejection(s)
Arguments on pages 9-13 of the Remarks filed on 11/26/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to claims 1, 13, and 25 under 35 U.S.C. §§ 102 and 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Kumar et al. (US 20150186787 A1), further in view of Hailpern et al. (US 20080021922 A1) and Castleberry et al. (US 20190163695 A1).
For more details, please refer to updated 35 U.S.C. § 103 rejections for claims 1, 13, and 25, below.
35 USC § 101 rejection(s)
Arguments on pages 13-16 of the Remarks filed on 11/26/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to the 35 U.S.C. § 101 rejection of independent claims 1, 13, and 25 have been considered, but they are not persuasive.
The Examiner respectfully disagrees with the following arguments:
“1. The independent claims are not directed to a mathematical concept… Although representative claim 1 has been amended to include, for example, a feature relating to calculating an originality score that may be performed using math, the claims do not fall within the "mathematical concept" grouping of abstract ideas.”
“2. The independent claims are not directed to a mental process… Even if (arguendo) it were possible with the aid of pen and paper to perform the claimed monitoring, generate the claimed messages, and detect plagiarism in the same manner as described by claim 1, the mental process rejections would still be proper under §101 because the human mind is not a practical means for doing so…”
and “3. The claims recite significantly more than an abstract idea because they are integrated into a practical application… present claims are linked to a particular technological field of use, e.g., combating plagiarism of AI-sourced content (and/or other sources that may not already be literally recorded anywhere public). This may be accomplished by analyzing "modifications... and corresponding user interactions with the content editor" as claimed. ”
In response, the Examiner notes (using the numbering of the arguments above):
1. The claims are directed to the abstract-idea groupings of a mental process and a mathematical concept (i.e., the calculation of score(s)).
2. Please refer to the comment above and the analysis provided below.
3. The claims as drafted do not limit the invention to combating plagiarism of AI-sourced content or of any other source not literally recorded anywhere public. Hence, these arguments are considered moot.
Please see the detailed analysis below for more details on how the Examiner has determined that the independent claims do not recite additional elements that integrate the judicial exception into a practical application, and therefore do not qualify as patent-eligible subject matter under 35 U.S.C. § 101.
Please refer to MPEP 2106.04(1): Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception: Prong One.
“Prong One asks does the claim recite an abstract idea, law of nature, or natural phenomenon? In Prong One examiners evaluate whether the claim recites a judicial exception, i.e. whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. While the terms "set forth" and "described" are thus both equated with "recite", their different language is intended to indicate that there are two ways in which an exception can be recited in a claim. For instance, the claims in Diehr, 450 U.S. at 178 n. 2, 179 n.5, 191-92, 209 USPQ at 4-5 (1981), clearly stated a mathematical equation in the repetitively calculating step, and the claims in Mayo, 566 U.S. 66, 75-77, 101 USPQ2d 1961, 1967-68 (2012), clearly stated laws of nature in the wherein clause, such that the claims "set forth" an identifiable judicial exception. Alternatively, the claims in Alice Corp., 573 U.S. at 218, 110 USPQ2d at 1982, described the concept of intermediated settlement without ever explicitly using the words "intermediated" or "settlement."”
“An example of a claim that recites a judicial exception is "A machine comprising elements that operate in accordance with F=ma." This claim sets forth the principle that force equals mass times acceleration (F=ma) and therefore recites a law of nature exception. Because F=ma represents a mathematical formula, the claim could alternatively be considered as reciting an abstract idea. Because this claim recites a judicial exception, it requires further analysis in Prong Two in order to answer the Step 2A inquiry. An example of a claim that merely involves, or is based on, an exception is a claim to "A teeter-totter comprising an elongated member pivotably attached to a base member, having seats and handles attached at opposing sides of the elongated member." This claim is based on the concept of a lever pivoting on a fulcrum, which involves the natural principles of mechanical advantage and the law of the lever. However, this claim does not recite these natural principles and therefore is not directed to a judicial exception (Step 2A: NO). Thus, the claim is eligible at Pathway B without further analysis.”
From this analysis, in Step 2A, Prong One, the Examiner has evaluated the independent claims accordingly and determined that the amended independent claims as drafted indeed describe a judicial exception (i.e., an abstract idea), namely a mental process that can be performed by a human with pen and paper.
More specifically, similar to what was discussed in the Non-Final Rejection mailed on 08/25/2025:
The limitations of independent claims 1, 13, and 25, as drafted, cover a human mental process:
monitoring, by a revision handler, a content editor for modifications to an individual document performed via user interaction with the content editor;
sending, from the revision handler to a plagiarism analysis engine and in response to each of the modifications, a respective inter-process communication comprising a revision event describing the modification to the individual document and the corresponding user interaction with the content editor;
detecting, by the plagiarism analysis engine, plagiarism within the individual document based on the modifications to the individual document and the corresponding user interactions with the content editor received in the inter-process communications;
wherein detecting the plagiarism comprises calculating an originality score based on the revision events and determining that the originality score reflects less than a threshold amount of originality; and
wherein the originality score reflects increasing amounts of originality with greater numbers of revision events and/or greater amounts of time spent performing the modifications described by the revision events.
More specifically, a human, either mentally and/or using pen and paper, could perform the steps of:
Monitoring/reading what another human (i.e., user) writes down/edits on a document;
Identifying and/or writing down revision/edit events of a document (e.g., letter); and
Determining if there is plagiarism in said document based on the revision/edit events;
wherein determining if there is plagiarism in said document based on the revision/edit events includes calculating a score based on a predetermined equation(s) and determining that the score is less than a predefined number (i.e., a mathematical concept); and
wherein the score increases proportionally with the number of revision/edit events.
Please also refer to MPEP 2106.05(f)(2): Whether the claim invokes computers or other machinery merely as a tool to perform an existing process, and MPEP 2106.06(b): Clear Improvement to a Technology or to Computer Functionality.
Please refer to MPEP 2106.04(2): Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception: Prong Two.
“Prong Two asks does the claim recite additional elements that integrate the judicial exception into a practical application? In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. If the additional elements in the claim integrate the recited exception into a practical application of the exception, then the claim is not directed to the judicial exception (Step 2A: NO) and thus is eligible at Pathway B. This concludes the eligibility analysis. If, however, the additional elements do not integrate the exception into a practical application, then the claim is directed to the recited judicial exception (Step 2A: YES), and requires further analysis under Step 2B (where it may still be eligible if it amounts to an ‘‘inventive concept’’). For more information on how to evaluate whether a judicial exception is integrated into a practical application, see MPEP § 2106.04(d)(2).”
From this analysis, in Step 2A, Prong Two, the Examiner has evaluated the independent claims accordingly and determined that the amended independent claims, as drafted and considered as a whole, do not include additional elements that integrate the exception (i.e., the abstract idea) into a practical application of that exception. Similar to what was discussed in the Non-Final Rejection mailed on 08/25/2025:
This judicial exception is not integrated into a practical application because, for example: claim 1 recites “a computing system”; claim 13 recites “a computing system”, “processing circuitry”, and “memory circuitry”; and claim 25 recites “a non-transitory computer readable medium” and “processing circuitry”. As an example, ¶ [0076] of the as-filed specification discloses: “Any or all of the processing described above may, for example, be performed by a centralized or distributed computing system of one or more computing devices. Such a computing system 600 may be implemented according to the example illustrated in Figure 5. The computing system 600 of Figure 5 comprises processing circuitry 610, memory circuitry 620, and interface circuitry 630. The processing circuitry 610 is communicatively coupled to the memory circuitry 620 and the interface circuitry 630, e.g., via a bus 604. The processing circuitry 610 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof. For example, the processing circuitry 610 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 640 in the memory circuitry 620.” Therefore, a general-purpose computer or computing device is described and is merely invoked as a tool to apply the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Please also refer to MPEP 2106.05(f)(2): Whether the claim invokes computers or other machinery merely as a tool to perform an existing process.
Finally, please refer to MPEP 2106.05(I): Relevant Considerations For Evaluating Whether Additional Elements Amount To An Inventive Concept.
“Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include:
i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));”
From this analysis, in Step 2B, the Examiner has evaluated the independent claims accordingly and determined that the independent claims as drafted have limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception. Similar to what was discussed in the Non-Final Rejection mailed on 08/25/2025:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a general-purpose computing device, as noted. The claims are not patent eligible.
In summary, the Examiner respectfully disagrees with the arguments above. Please refer to the analysis above.
For more details, please refer to the updated 35 U.S.C. § 101 rejections for claims 1, 13, and 25, below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 9-19, and 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more; more specifically, to the abstract-idea groupings of a mental process and/or a mathematical concept.
The independent claim(s) 1, 13, and 25 recite(s):
1. A method of detecting plagiarism, implemented by a computing system, the method comprising:
monitoring, by a revision handler, a content editor for modifications to an individual document performed via user interaction with the content editor;
sending, from the revision handler to a plagiarism analysis engine and in response to each of the modifications, a respective inter-process communication comprising a revision event describing the modification to the individual document and the corresponding user interaction with the content editor;
detecting, by the plagiarism analysis engine, plagiarism within the individual document based on the modifications to the individual document and the corresponding user interactions with the content editor received in the inter-process communications;
wherein detecting the plagiarism comprises calculating an originality score based on the revision events and determining that the originality score reflects less than a threshold amount of originality; and
wherein the originality score reflects increasing amounts of originality with greater numbers of revision events and/or greater amounts of time spent performing the modifications described by the revision events.
13. A computing system for detecting plagiarism, the computing system comprising:
processing circuitry and memory circuitry, the memory circuitry storing instructions executable by the processing circuitry whereby the computing system is configured to:
[perform the limitations as in claim 1, above]
25. A non-transitory computer readable medium storing software instructions for controlling a computing system to detect plagiarism, wherein running the software instructions on processing circuitry of the computing system, causes the computing system to:
[perform the limitations as in claim 1, above]
This reads on a human (e.g., mentally and/or using pen and paper):
Monitoring/reading what another human (i.e., user) writes down/edits on a document;
Identifying and/or writing down revision/edit events of a document (e.g., letter); and
Determining if there is plagiarism in said document based on the revision/edit events;
wherein determining if there is plagiarism in said document based on the revision/edit events includes calculating a score based on a predetermined equation(s) and determining that the score is less than a predefined number; and
wherein the score increases proportionally with the number of revision/edit events.
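The score-and-threshold characterization above (a score that grows with the number of revision/edit events and/or the time spent performing them, compared against a threshold) can be sketched as follows. This is purely an illustrative sketch: the weights, event structure, function names, and threshold value are hypothetical and are not drawn from the claims or the cited references.

```python
# Illustrative sketch only: weights, event structure, and threshold are
# hypothetical, not taken from the claims or the cited references.

def originality_score(revision_events, w_count=1.0, w_time=0.5):
    """Score grows with the number of revision events and with the total
    time (in minutes) spent performing the described modifications."""
    total_time = sum(e["minutes"] for e in revision_events)
    return w_count * len(revision_events) + w_time * total_time

def flags_plagiarism(revision_events, threshold=10.0):
    """Plagiarism is indicated when the score reflects less than a
    threshold amount of originality."""
    return originality_score(revision_events) < threshold

events = [{"minutes": 2.0}, {"minutes": 1.0}]  # few edits, little time spent
print(flags_plagiarism(events))  # prints True: low score falls below threshold
```

As the sketch shows, the characterized determination reduces to counting events, summing times, and comparing a weighted sum to a fixed number, which is the basis for the mental-process and mathematical-concept analysis above.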
This judicial exception is not integrated into a practical application because, for example: claim 1 recites “a computing system”; claim 13 recites “a computing system”, “processing circuitry”, and “memory circuitry”; and claim 25 recites “a non-transitory computer readable medium” and “processing circuitry”. As an example, ¶ [0076] of the as-filed specification discloses: “Any or all of the processing described above may, for example, be performed by a centralized or distributed computing system of one or more computing devices. Such a computing system 600 may be implemented according to the example illustrated in Figure 5. The computing system 600 of Figure 5 comprises processing circuitry 610, memory circuitry 620, and interface circuitry 630. The processing circuitry 610 is communicatively coupled to the memory circuitry 620 and the interface circuitry 630, e.g., via a bus 604. The processing circuitry 610 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof. For example, the processing circuitry 610 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 640 in the memory circuitry 620.” Therefore, a general-purpose computer or computing device is described and is merely invoked as a tool to apply the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a general-purpose computing device, as noted. The claims are not patent eligible.
With respect to claims 2 and 14, the claim(s) recite:
2 and 14. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises determining that less than a threshold revision time was spent performing the modifications described by the revision events.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein determining if there is plagiarism in said document based on the revision/edit events includes determining that less than a predefined amount of time was spent editing the document.
No additional limitations are present.
With respect to claims 3 and 15, the claim(s) recite:
3 and 15. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises determining that one or more of the modifications described by the revision events were performed faster than a threshold.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein determining if there is plagiarism in said document based on the revision/edit events includes determining that the modifications were performed in less than a predefined amount of time.
No additional limitations are present.
With respect to claims 4 and 16, the claim(s) recite:
4 and 16. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises determining that the revision events are fewer in number than a threshold number of revisions.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein determining if there is plagiarism in said document based on the revision/edit events includes determining that the number of edits is less than a predefined number.
No additional limitations are present.
With respect to claims 5 and 17, the claim(s) recite:
5 and 17. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises determining, for a given modification type, that fewer than a threshold number of the revision events describe modifications having the given modification type.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein determining if there is plagiarism in said document based on the revision/edit events includes determining that, for a specific type of edit (e.g., addition of a character), there are fewer than a predefined number of edits.
No additional limitations are present.
With respect to claims 6 and 18, the claim(s) recite:
6 and 18. The method/computing system of claims 5 and 17, further comprising classifying the revision events according to modification type, wherein detecting the plagiarism within the individual document further comprises weighing the revision events of the given modification type differently from revision events of a different modification type.
This reads on a human (e.g., mentally and/or using pen and paper):
Classifying the types of modifications, wherein determining if there is plagiarism in said document based on the revision/edit events includes adding a particular value or weight to specific revision/edit events.
No additional limitations are present.
With respect to claims 7 and 19, the claim(s) recite:
7 and 19. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises determining that the modification described by more than a threshold number of the revision events was performed by pasting content from outside the media content.
This reads on a human (e.g., mentally and/or using pen and paper):
wherein determining if there is plagiarism in said document based on the revision/edit events includes determining that the edits were performed by copying exact portions of text from another document.
No additional limitations are present.
With respect to claims 9 and 21, the claim(s) recite:
9 and 21. The method/computing system of claims 1 and 13, further comprising generating each revision event upon detecting the modification as the modification is performed via the user interaction with the content editor.
This reads on a human (e.g., mentally and/or using pen and paper):
identifying and/or writing down revision/edit events of a document (e.g., letter)
No additional limitations are present.
With respect to claims 10 and 22, the claim(s) recite:
10 and 22. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document comprises using a media originality machine learning model to determine that a likelihood that the media content has been plagiarized exceeds a threshold.
This reads on a human (e.g., mentally and/or using pen and paper):
wherein determining if there is plagiarism in said document based on the revision/edit events includes using a predetermined set of rules/steps to determine that the probability that the document has been plagiarized exceeds a predefined value.
No additional limitations are present.
With respect to claims 11 and 23, the claim(s) recite:
11 and 23. The method/computing system of claims 10 and 22, further comprising training the media originality machine learning model on a plurality of content training samples and corresponding training revision events, each training revision event being labeled as describing either an original modification or a plagiarized modification.
This reads on a human (e.g., mentally and/or using pen and paper):
defining the predetermined set of rules/steps for determining the probability that the document has been plagiarized, using data known to be plagiarized.
No additional limitations are present.
With respect to claims 12 and 24, the claim(s) recite:
12 and 24. The method/computing system of claims 10 and 22, further comprising training the media originality machine learning model on a plurality of content training samples and corresponding training revision events, each content training sample being labeled as either original content or plagiarized content.
This reads on a human (e.g., mentally and/or using pen and paper):
defining the predetermined set of rules/steps for determining the probability that the document has been plagiarized, using data known to be plagiarized.
No additional limitations are present.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 9-13, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US 20150186787 A1) in view of Hailpern et al. (US 20080021922 A1) and further in view of Castleberry et al. (US 20190163695 A1).
As to independent claim 1, Kumar et al. teaches:
1. A method of detecting plagiarism, implemented by a computing system (see ¶ [0002]: “According to an implementation of the disclosed subject matter, a system is provided that includes a database and a feature extraction module…” and ¶ [0007]: “The disclosed implementations may be useful to detect plagiarism in a MOOC…”), the method comprising:
detecting, by the plagiarism analysis engine, plagiarism within the individual document based on the modifications to the individual document (see ¶ [0015]: “Three components are disclosed that in combination provide detection of plagiarized content. The first component is a cloud-based platform for document writing or computer program development that allows restricted sharing and maintains records of changes. The records of changes or edits made to a document on the cloud-platform may include a time that a change was made. A second component is a feature extraction module which is based on a writing pattern for a user and incremental content addition as will be described below. The third component is a machine-learning based scheme that predicts which pairs or groups of documents have similar contents indicating plagiarism.”);
However, Kumar et al. does not explicitly teach, but Hailpern et al. does teach:
monitoring, by a revision handler, a content editor for modifications to an individual document performed via user interaction with the content editor (see
Fig. 1B (102: editor, 103: editing event monitor, and 104: originality-related info collector)
¶ [0076]: “FIG. 1B illustrates a system 100 for maintaining the originality-related information about elements in an editable object according to still further embodiments of the present invention. As shown, in further embodiments of the present invention, apart from the editing event monitor 103, the originality-related information collector 104, and the originality-related information recorder 105, the system 100 may further comprise an editor 102 for editing an editable object, …”
¶ [0077]: “Among the above components of the system 100, the editor 102 may be a standard editor, such as Eclipse Java Editor, used in software development environment or multimedia file authoring and editing environment or other environment, or an editor specially adapted to implement the present invention. The editor 102 has a user interface (not shown) for interacting with the user, including receiving the editing actions from the user and displaying the contents of the object being edited and the originality-related information about the elements of the object and other relevant information generated during the editing process.”
Fig. 2 (102: editing event monitor, 104: originality-related information collector),
and ¶ [0107]: “Returning to FIG.2, after determining in step 202 that a line has been added to the program code, and determining the method by which the line has been added to the program code, the editing event monitor 102 triggers the originality-related information collector 104, which, in step 203, identifies the originality-related information about the line…”);
sending, from the revision handler to a plagiarism analysis engine and in response to each of the modifications, a respective inter-process communication comprising a revision event describing the modification to the individual document and the corresponding user interaction with the content editor (see Fig. 1B and 2 and ¶ [0076-0077 and 0107] citations as in limitation(s) above. More specifically: ¶ [0107]: “Returning to FIG.2, after determining in step 202 that a line has been added to the program code, and determining the method by which the line has been added to the program code, the editing event monitor 102 triggers the originality-related information collector 104, which, in step 203, identifies the originality-related information about the line…” );
detecting, by the plagiarism analysis engine, plagiarism within the individual document based on the modifications to the individual document and the corresponding user interactions with the content editor received in the inter-process communications (see Fig. 1B and 2 and ¶ [0076-0077 and 0107] citations as in limitation(s) above. More specifically: ¶ [0107]: “Returning to FIG.2, after determining in step 202 that a line has been added to the program code, and determining the method by which the line has been added to the program code, the editing event monitor 102 triggers the originality-related information collector 104, which, in step 203, identifies the originality-related information about the line…” and further ¶ [0004]: “…. Moreover, in today's software development projects, developers are often required at the end of the production phase to sign a "Certificate of Originality" (COO) stating which parts of the code of the software are their own creation, and which parts are from the Open Source or some other sources; …”);
Kumar et al. and Hailpern et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document processing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Hailpern et al. of monitoring, by a revision handler, a content editor for modifications to an individual document performed via user interaction with the content editor; sending, from the revision handler to a plagiarism analysis engine and in response to each of the modifications, a respective inter-process communication comprising a revision event describing the modification to the individual document and the corresponding user interaction with the content editor; and detecting, by the plagiarism analysis engine, plagiarism within the individual document based on the modifications to the individual document and the corresponding user interactions with the content editor received in the inter-process communications which provides the benefit of properly tracking and maintaining the originality-related information about elements throughout their whole lifecycle ([0028] of Hailpern et al.).
However, Kumar et al. does not explicitly teach, but Castleberry et al. does teach:
wherein detecting the plagiarism comprises calculating an originality score based on the revision events (see ¶ [0076]: “…The document originality estimation data 160 can include a graphical presentation output via the GUI of the research assistance application indicating to the user the current originality score.”) and determining that the originality score reflects less than a threshold amount of originality (see ¶ [0076 and 0078]: “[0076] In one embodiment, the originality detection module 120 generates document originality estimation data 160 estimating a percentage or portion of the research document that includes original material generated by the user. The document originality estimation data 160 can include a graphical presentation output via the GUI of the research assistance application indicating to the user the current originality score. The graphical presentation can include a percentage, a chart, a bar, a graph, or any other type of indicator that can inform the user of the current originality level of the research document. The graphical presentation can include a first color, for example red, if the originality score falls below a selected threshold. The graphical presentation can include a second color, for example green, if the originality score surpasses a selected direction. In this way the user can easily see whether or not the research document includes a satisfactory portion of original material generated by the user. [0078] In one embodiment, a professor or faculty member can stipulate what percentage of a research document must correspond to originally generated material. The user will not be able to submit the research document until the research document as an originality score that satisfies the originality requirement.”); and
wherein the originality score reflects increasing amounts of originality with greater numbers of revision events [Examiner notes: only one required “and/or”] (see ¶ [0075]: “…The user can add a revised material in order to increase an originality of the research document to a satisfactory level as determined by one or more of: the user, a teacher or professor of the user, a professional body to which the research document will be submitted, or other individuals or organizations.”).
Kumar et al., Hailpern et al. and Castleberry et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document processing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. in combination with Hailpern et al. to incorporate the teachings of Castleberry et al. of calculating an originality score based on the revision events and determining that the originality score reflects less than a threshold amount of originality; and the originality score reflecting increasing amounts of originality with greater numbers of revision events and/or greater amounts of time spent performing the modifications described by the revision events, which provides the benefit of greatly improving the efficiency with which individuals can perform research and generate research documents ([0009] of Castleberry et al.).
As to independent claim 13, Kumar et al. teaches:
13. A computing system for detecting plagiarism (see ¶ [0002]: “According to an implementation of the disclosed subject matter, a system is provided that includes a database and a feature extraction module…” and ¶ [0007]: “The disclosed implementations may be useful to detect plagiarism in a MOOC…”), the computing system comprising:
processing circuitry and memory circuitry (see ¶ [0029]: “Implementations of the presently disclosed subject matter may be implemented in and used with a variety of component and network architectures. FIG. 1 is an example computer system 20 suitable for implementing implementations of the presently disclosed subject matter. The computer 20 includes a bus 21 which interconnects major components of the computer 20, such as one or more processors 24, memory 27 such as RAM, ROM, flash RAM, or the like, an input/output controller 28, and fixed storage 23 such as a hard drive, flash storage, SAN device, or the like...”), the memory circuitry storing instructions executable by the processing circuitry (see ¶ [0029] citation as in limitation above and further ¶ [0034]: “…The processor may be coupled to memory, such as RAM, ROM, flash memory, a hard disk or any other device capable of storing electronic information. The memory may store instructions adapted to be executed by the processor to perform the techniques according to implementations of the disclosed subject matter.”) whereby the computing system is configured to:
[perform the limitations as in claim 1, above]
As to independent claim 25, Kumar et al. teaches:
25. A non-transitory computer readable medium storing software instructions for controlling a computing system to detect plagiarism (see ¶ [0018 and 0034] citations as in claim 13, above and further ¶ [0034]: “…Implementations also may be implemented in the form of a computer program product having computer program code containing instructions implemented in non-transitory and/or tangible media, such as floppy diskettes, CD-ROMs, hard drives, USB (universal serial bus) drives, or any other machine readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing implementations of the disclosed subject matter…”), wherein running the software instructions on processing circuitry of the computing system (see ¶ [0018 and 0034] citations as in claim 13 and the limitation, above.), causes the computing system to:
[perform the limitations as in claim 1, above]
Regarding claims 9 and 21, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
9 and 21. The method/computing system of claims 1 and 13,
further comprising generating each revision event upon detecting the modification as the modification is performed via the user interaction with the content editor (see ¶ [0018]: “… For example, a user may access the system and be presented with a user interface that allows the user to create a document. Upon the user doing so, the database may create a record or document history 316 for the document. The record 316 may include changes that are visible to the user such as new lines of code, edits, new or edited words, etc. The record 316 may contain information that is not visible to the user. For example, the record 316 may be associated with or contain the time at which any changes or additions to the document are made. ...”).
Regarding claims 10 and 22, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
10 and 22. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the media content based on the revision events (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
comprises using a media originality machine learning model to determine that a likelihood that the media content has been plagiarized exceeds a threshold (see ¶ [0025]: “…For example, a user may open the document 412 and paste in text or other content, type a new paragraph, or the like. A time reference 424 may be associated with the edit 422 to the document 412. The edit 422, the time reference 424 and/or the type of edit may be stored. In an implementation, a hash of the edit, time reference, and/or type of edit may be generated and stored in addition to, or instead of, separately storing the edit, time reference, and/or type of edit. A type of edit may refer to, for example, copy, paste, delete, add, move, search-and-replace, etc. The edit 422 and the time reference 424 may be stored to the database as a document history 414. A feature vector 426 based on the document history 414 may be generated. In some configurations, the feature vector may be stored to the database. A probability that the document is plagiarized may be determined based on a classification of the feature vector by a machine learning technique. As described earlier, the machine learning model, such as a logistical regression model or a support vector machine ("SVM") may be trained on a set of documents known to be plagiarized. Classification of the feature vector by the machine learning technique may result in a ranked list of documents according to the probability that they have been plagiarized. An instructor may set a threshold value of probability of plagiarism above which the instructor may elect to manually review the documents.”).
Regarding claims 11 and 23, Kumar et al. teaches the limitations as in claims 10 and 22, above.
Kumar et al. further teaches:
11 and 23. The method/computing system of claims 10 and 22,
further comprising training the media originality machine learning model on a plurality of content training samples and corresponding training revision events (see ¶ [0025] citation as in claims 10 and 22, above. More specifically: “…As described earlier, the machine learning model, such as a logistical regression model or a support vector machine ("SVM") may be trained on a set of documents known to be plagiarized…” and further ¶ [0003]: “…The extraction module may provide an indication of the similarity score. In some configurations, a machine learning technique may be trained on a first set of documents that are known to be plagiarized. The trained machine learning algorithm may be used to classify the feature vector.”, ¶ [0016]: “Each time a change is made to a document on the cloud-based platform, the change and time may be incrementally recorded. Once an assignment is submitted for grading, features are extracted based on the stored history of the document. For example, a sequence or distribution of word n-grams over time may be an indicator of document content. Each person typically is associated with a set of words, phrases, or style of writing that may be utilized to uniquely identify the individual. Similarly, for computer programs, a distribution over programming language-dependent keywords and their relative orders may be computed. A variety of features in addition to, or instead of, n-grams or programming dependent keywords may be extracted from a document. For example, hashes may be generated from the text content and the hashes may be classified by a machine trained on hashes derived from works known to be plagiarized…”, and ¶ [0024]: “… The machine learning algorithm may be trained on the feature vectors for both plagiarized and non-plagiarized documents. The trained classifier may be applied to the feature vector extracted from the group of documents described earlier.”),
each training revision event being labeled as describing either an original modification or a plagiarized modification (see ¶ [0025] citation as in claims 10 and 22, above and ¶ [0003, 0016, and 0024] citations as in limitation above. More specifically see ¶ [0024]: “… The machine learning algorithm may be trained on the feature vectors for both plagiarized and non-plagiarized documents…”).
Regarding claims 12 and 24, Kumar et al. teaches the limitations as in claims 10 and 22, above.
Kumar et al. further teaches:
12 and 24. The method/computing system of claims 10 and 22,
further comprising training the media originality machine learning model on a plurality of content training samples and corresponding training revision events (see ¶ [0025] citation as in claims 10 and 22, above. More specifically: “…As described earlier, the machine learning model, such as a logistical regression model or a support vector machine ("SVM") may be trained on a set of documents known to be plagiarized…” and further ¶ [0003,0016, and 0024] citations as in claims 11 and 23, above.),
each content training sample being labeled as either original content or plagiarized content (see ¶ [0025] citation as in claims 10 and 22, above and ¶ [0003, 0016, and 0024] citations as in limitation above. More specifically see ¶ [0003]: “…The extraction module may provide an indication of the similarity score. In some configurations, a machine learning technique may be trained on a first set of documents that are known to be plagiarized…”).
Claims 2-5, 7, 14-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US 20150186787 A1) in view of Hailpern et al. (US 20080021922 A1) and Castleberry et al. (US 20190163695 A1) as applied to claims 1 and 13 above, and further in view of Granit et al. (US 20230359659 A1).
Regarding claims 2 and 14, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
2 and 14. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. does not explicitly teach, but Granit et al. does teach:
comprises determining that less than a threshold revision time was spent performing the modifications described by the revision events (see ¶ [0052]: “A template-detection-worker function may then receive the action data frame prepared by the caller function and return a list of sequences, which may be a list of action IDs including the copying and pasting actions by the user or agent, together with template candidate texts or strings. In some embodiments, the caller or worker functions may calculate the difference, or delta (e.g., in seconds) between copy and paste actions as part of filtering candidate template candidates. In such a manner, candidates in which the difference exceeds a predetermined threshold (e.g., 100 seconds) may be discarded as non-templates, while those where the difference is below the threshold may be kept in the template candidate bank.”).
Kumar et al. and Granit et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document analysis/editing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Granit et al. of determining that less than a threshold revision time was spent performing the modifications described by the revision events which provides the benefit of improving the technologies of computer automation, big data analysis, and computer use and automation analysis. ([0061] of Granit et al.).
Regarding claims 3 and 15, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
3 and 15. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. does not explicitly teach, but Granit et al. does teach:
comprises determining that one or more of the modifications described by the revision events were performed faster than a threshold (see ¶ [0052]: “A template-detection-worker function may then receive the action data frame prepared by the caller function and return a list of sequences, which may be a list of action IDs including the copying and pasting actions by the user or agent, together with template candidate texts or strings. In some embodiments, the caller or worker functions may calculate the difference, or delta (e.g., in seconds) between copy and paste actions as part of filtering candidate template candidates. In such a manner, candidates in which the difference exceeds a predetermined threshold (e.g., 100 seconds) may be discarded as non-templates, while those where the difference is below the threshold may be kept in the template candidate bank.”).
Kumar et al. and Granit et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document analysis/editing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Granit et al. of determining that one or more of the modifications described by the revision events were performed faster than a threshold which provides the benefit of improving the technologies of computer automation, big data analysis, and computer use and automation analysis. ([0061] of Granit et al.).
Regarding claims 4 and 16, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
4 and 16. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. does not explicitly teach, but Granit et al. does teach:
comprises determining that the revision events are fewer in number than a threshold number of revisions (see ¶ [0052] citation as in claims 3 and 15 above and further ¶ [0052]: “…Similarly, the caller or worker functions may count the number or appearances of a plurality of routines associated with a given template candidate in the low-level action data or dataset, and discard or remove candidates for which corresponding routines do not exceed a predetermined number of appearances in the low-level action data (e.g., a threshold of at least two appearances). Candidates for which strings were not copied from one app and/or window to another, different app and/or window, or where a number of pasting actions to the same target app or application and/or window does not exceed a predetermined threshold (e.g., pasting has to occur twice) may be discarded as well. Additional or alternative conditions and/or constraints for finding, keeping or discarding text template candidates may be employed or included, e.g., in caller and/or worker functions as part of other embodiments of the invention. Caller or worker functions may be applied to template candidate instances in an iterative manner, e.g., calculate the time difference between copying and pasting actions for a first instance of a given routine for a given template candidate, then performing the same calculation for a second instance, a third instance, and so forth—and then move on to the next template candidate and calculate time differences in instances of a routine for that candidate, etc.”).
Kumar et al. and Granit et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document analysis/editing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Granit et al. of determining that the revision events are fewer in number than a threshold number of revisions which provides the benefit of improving the technologies of computer automation, big data analysis, and computer use and automation analysis. ([0061] of Granit et al.).
Regarding claims 5 and 17, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
5 and 17. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. does not explicitly teach, but Granit et al. does teach:
comprises determining, for a given modification type, that fewer than a threshold number of the revision events describe modifications having the given modification type (see ¶ [0052] citation as in claims 3-4 and 15-16 above. More specifically: “…Similarly, the caller or worker functions may count the number or appearances of a plurality of routines associated with a given template candidate in the low-level action data or dataset, and discard or remove candidates for which corresponding routines do not exceed a predetermined number of appearances in the low-level action data (e.g., a threshold of at least two appearances). Candidates for which strings were not copied from one app and/or window to another, different app and/or window, or where a number of pasting actions to the same target app or application and/or window does not exceed a predetermined threshold (e.g., pasting has to occur twice) may be discarded as well...”).
Kumar et al. and Granit et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document analysis/editing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Granit et al. of determining, for a given modification type, that fewer than a threshold number of the revision events describe modifications having the given modification type which provides the benefit of improving the technologies of computer automation, big data analysis, and computer use and automation analysis. ([0061] of Granit et al.).
Regarding claims 7 and 19, Kumar et al. teaches the limitations as in claims 1 and 13, above.
Kumar et al. further teaches:
7 and 19. The method/computing system of claims 1 and 13, wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. does not explicitly teach, but Granit et al. does teach:
comprises determining that the modification described by more than a threshold number of the revision events was performed by pasting content from outside the media content (see ¶ [0052] citation as in claims 3-5 and 15-17 above and further ¶ [0006]: “A system and method may identify computer-based processes involving the use of text templates which may be candidates for automation. Using one or more computers and/or computer processors, embodiments of the invention may sort low-level user action information for a given process which may be received as input (e.g., as a dataset of computer actions); search for a plurality of strings pasted multiple times (e.g., from a first app to another, different second app) in the sorted information;…”).
Kumar et al. and Granit et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in document analysis/editing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. to incorporate the teachings of Granit et al. of determining that the modification described by more than a threshold number of the revision events was performed by pasting content from outside the media content which provides the benefit of improving the technologies of computer automation, big data analysis, and computer use and automation analysis. ([0061] of Granit et al.).
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US 20150186787 A1) in view of Hailpern et al. (US 20080021922 A1), Castleberry et al. (US 20190163695 A1) and Granit et al. (US 20230359659 A1) as applied to claims 5 and 17 above, and further in view of Pathak (US 20160034427 A1).
Regarding claims 6 and 18, Kumar et al. teaches the limitations as in claims 5 and 17, above.
Kumar et al. further teaches:
6 and 18. The method/computing system of claims 5 and 17,
further comprising classifying the revision events according to modification type (see ¶ [0025]: “…A type of edit may refer to, for example, copy, paste, delete, add, move, search-and-replace, etc. The edit 422 and the time reference 424 may be stored to the database as a document history 414…”), wherein detecting the plagiarism within the individual document (see ¶ [0002, 0004-0005, 0007, 0015, and 0018] citations as in claims 1 and 13 above.)
However, Kumar et al. in combination with Granit et al. does not explicitly teach, but Pathak does teach:
further comprises weighing the revision events of the given modification type differently from revision events of a different modification type (see ¶ [0036]: “In the calculation process (step S2), the personalization service 2 analyzes the user interaction history information in the personalization database 21 to generate the content importance scores which can be used to select contents for aggregation. The importance score for each unit of content (a paragraph, a page, etc.) may be calculated as a weighted sum of multiple types of user interactions, such as the ones listed above in connection with step S15. The weighted sum can be expressed as: Content Importance Score S(u)=Σ.sub.iW.sub.i*N.sub.i(u), where u is an index or other identifying information that identifies a unit of content (e.g. paragraph number, page number, section number, page number plus line number, etc.), i is an index of the type of user interactions, W.sub.i are the weights given to the types of user interactions, and N.sub.i(u) are the cumulative number of times the user performed that type of interaction for that unit of content. Any suitable weights may be given to the various types of user interactions. In one particular example, the following types of user interactions are used to calculate the content importance score and they are given declining weights in the this order: annotation, print, copy or cut and paste, edit, share, and view. The content importance score represents the level of interest the user has in a given content. The content importance score for each unit of content of a document may be stored in the personalization database 21 along with the user interaction history data, as shown in the example of FIG. 5.”).
Kumar et al., Granit et al. and Pathak are considered to be analogous to the claimed invention because they are in the same field of endeavor in document processing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kumar et al. in combination with Granit et al. to incorporate the teachings of Pathak of weighing the revision events of the given modification type differently from revision events of a different modification type which provides the benefit of allowing quick review of materials ([0007] of Pathak).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Keisha Y Castillo-Torres whose telephone number is (571)272-3975. The examiner can normally be reached Monday - Friday, 9:00 am - 4:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Keisha Y. Castillo-Torres
Examiner
Art Unit 2659
/Keisha Y. Castillo-Torres/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659