DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination (RCE) under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 8, 2025, has been entered.
Status of Claims
This action is in response to the RCE and amendment filed on December 8, 2025. Claims 1-15 are pending, of which claims 1-4, 6, 8, and 12-15 have been amended.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-15 are rejected under 35 U.S.C. 112(a), as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Specifically, the limitations:
1) “configured to evaluate whether the answer and the model answer satisfy a semantic equivalence” and
2) “wherein the grading logic is configured to adjust the comprehensive evaluation score based on the determined degree of similarity between the answer and the model answer and based on the evaluation with respect to the one or more evaluation items,” found in claims 1, 14, and 15, recite NEW MATTER.
With regard to these limitations, a review of the specification finds no disclosure of the term “degree of similarity” or of how it is determined. Moreover, the specification lacks any description of the grading logic being configured to adjust the comprehensive evaluation score based on a determined degree of similarity between the answer and the model answer. The specification describes only that a score may be adjusted based on the grading logic and the evaluation items. Therefore, the specification does not provide a written description supporting this limitation.
As a result, the amended claims 1, 14, and 15 contain new matter that lacks adequate written description support for the amendments to these claims, and, for at least these reasons, claims 1, 14, and 15 fail the written description requirement.
Claims 2-13 depend from claim 1 and are rejected for at least the reasons given for claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
A patent may be obtained for “any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof.” 35 U.S.C. § 101. The Supreme Court has held that this provision contains an important implicit exception: laws of nature, natural phenomena, and abstract ideas are not patentable. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2354 (2014); Gottschalk v. Benson, 409 U.S. 63, 67 (1972) (“Phenomena of nature, though just discovered, mental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work.”). Notwithstanding that a law of nature or an abstract idea, by itself, is not patentable, the application of these concepts may be deserving of patent protection. Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1293-94 (2012). In Mayo, the Court stated that “to transform an unpatentable law of nature into a patent-eligible application of such a law, one must do more than simply state the law of nature while adding the words ‘apply it.’” Mayo, 132 S. Ct. at 1294 (citation omitted).
In Alice, the Supreme Court reaffirmed the framework set forth previously in Mayo “for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of these concepts.” Alice, 134 S. Ct. at 2355. The first step in the analysis is to “determine whether the claims at issue are directed to one of those patent-ineligible concepts.” Id. If the claims are directed to a patent-ineligible concept, then the second step in the analysis is to consider the elements of the claims “individually and ‘as an ordered combination’” to determine whether there are additional elements that “‘transform the nature of the claim’ into a patent-eligible application.” Id. (quoting Mayo, 132 S. Ct. at 1298, 1297). In other words, the second step is to “search for an ‘inventive concept’—i.e., an element or combination of elements that is ‘sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.’” Id. (brackets in original) (quoting Mayo, 132 S. Ct. at 1294). The prohibition against patenting an abstract idea “cannot be circumvented by attempting to limit the use of the formula to a particular technological environment or adding insignificant post-solution activity.” Bilski v. Kappos, 561 U.S. 593, 610-11 (2010) (citation and internal quotation marks omitted). The Court in Alice noted that “[s]imply appending conventional steps, specified at a high level of generality,” was not “enough” in Mayo to supply an “inventive concept.” Alice, 134 S. Ct. at 2357 (quoting Mayo, 132 S. Ct. at 1300, 1297, 1294).
Examiners must perform a Two-Part Analysis for Judicial Exceptions. In Step 1, it must be determined whether the claimed invention is directed to a process, machine, manufacture, or composition of matter.
Claims 1-15 are directed to a method, a system, and a non-transitory computer-readable medium. As such, the claimed invention falls within the statutory categories of invention. However, even claims that fall within one of the four subject matter categories may nevertheless be ineligible if they encompass laws of nature, physical phenomena, or abstract ideas. See Diamond v. Chakrabarty, 447 U.S. 303, 309 (1980).
In Step 2A, it must be determined whether the claimed invention is “directed to” a judicially recognized exception. According to the specification, the invention is directed to an automated answer evaluation method, for example, to reduce burdens on teachers. See, e.g., ¶¶ 1-2.
Independent claim 1 recites the following (with emphasis):
An answer evaluation method comprising:
receiving, by a processor, a request from a user for displaying an input screen;
receiving, by the processor via the input screen, an input information corresponding to:
an answer provided in response to a question,
a model answer corresponding to the question,
one or more evaluation items associated with evaluation functions for determining a degree of similarity between the answer and the model answer, and
a comprehensive evaluation method including a grading logic, the grading logic being configured to assign a comprehensive evaluation score based on the degree of similarity between the answer and the model answer;
evaluating, by the processor, the answer with respect to the one or more evaluation items and the model answer;
based on the evaluation of the answer, determining, by the processor, the degree of similarity between the answer and the model answer, the degree of similarity indicating whether the answer satisfies at least one of a mathematical equivalence to the model answer, a semantic similarity to the model answer, or a partial match to the one or more evaluation items;
computing the comprehensive evaluation score for the answer using the grading logic, wherein the grading logic is configured to adjust the comprehensive evaluation score based on the determined degree of similarity between the answer and the model answer and based on the evaluation with respect to the one or more evaluation items; and
generating, by the processor, a grading result file including the computed comprehensive evaluation score and evaluation information identifying an evaluation item used in computing the comprehensive evaluation score, and storing the grading result file in a database.
The underlined portions of claim 1 generally encompass the abstract idea, with substantially identical features in claims 14 and 15. Claims 2-13 further define the abstract idea, such as by defining the elements for providing the automated evaluation of answers. Under Prong One of Step 2A, the claimed invention encompasses an abstract idea in the form of certain methods of organizing human activity and/or mental processes. The claims recite a method of evaluating answers to questions based on evaluation items and a grading logic (i.e., a rubric). This is a method of organizing human activity because it is drawn to managing the personal behavior of teachers and students with regard to how to evaluate answers to questions (i.e., teaching). Furthermore, the method and operations represent mental processes that can be performed in the human mind and/or with the aid of pencil and paper.
The evaluation of student responses to questions as a means of assessing student ability, knowledge, and comprehension is basic to the learning process and has been performed for centuries. The method, CRM, and processing apparatus in the instant application simply seek to automate this well-known activity using generic computers recited at a high level of generality. The claims are therefore directed to the abstract-idea sub-grouping of “managing personal behavior or relationships or interactions between people,” including teaching and following rules or instructions; for example, a teacher receives answers to questions from a student, administers criteria for how to evaluate them (e.g., similarity to a model answer), and assesses the student’s answers relative to an educational/grading rubric to determine a score for the student.
The invention also encompasses making judgments about answers of a user, such as evaluating the answer according to evaluation criteria and a grading rubric, as performed by a teacher. Such judgments about the evaluation of answers relate to mental processes. These judgments of the claimed evaluation method and processes are mental activities, which could be made in the human mind or with pen and paper. But for the recitation of a system, a processing apparatus, a storage unit, and a computer-readable recording medium having a program recorded thereon that can be executed by at least one processor, nothing in the claimed method or computer-implemented operations precludes the recitations from practically being performed in the mind. For example: receiving information corresponding to an answer provided in response to a question, a model answer corresponding to the question, an evaluation item associated with evaluation functions for determining a degree of similarity between the answer and the model answer, and a grading logic configured to assign a comprehensive evaluation score based on the degree of similarity between the answer and the model answer; evaluating the answer with respect to the evaluation item and the model answer; based on the evaluation of the answer, determining the degree of similarity between the answer and the model answer, the degree of similarity indicating whether the answer satisfies at least one of a mathematical equivalence to the model answer, a semantic similarity to the model answer, or a partial match to the one or more evaluation items; computing the comprehensive evaluation score for the answer using the grading logic, wherein the grading logic is configured to adjust the comprehensive evaluation score based on the determined degree of similarity between the answer and the model answer and based on the evaluation with respect to the one or more evaluation items; and generating a grading result
file including the computed comprehensive evaluation score and evaluation information identifying an evaluation item used in computing the comprehensive evaluation score, and storing the grading result file in a database may all be performed or formulated in the mind of a teacher, tutor, or instructor by reading an answer to a question and a model answer and thinking about the criteria of a correct/incorrect answer. The teacher may evaluate the answer with respect to the one or more evaluation items and the model answer to compute a comprehensive evaluation score for the answer using the grading logic and the evaluation with respect to the one or more evaluation items (e.g., thinking about how the answer meets the criteria of the evaluation items as applied according to the grading logic or rules), and may output information representing the computed comprehensive evaluation score (e.g., by speaking or writing the evaluation) and place the result in a log. If a claim, under its broadest reasonable interpretation, covers performance of recitations in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.
Therefore, under Prong One of Step 2A, the claimed invention encompasses an abstract idea in the form of mental processes and/or certain methods of organizing human activity.
Under Prong Two of Step 2A, the instant claims do not integrate the abstract idea into a practical application. In other words, the claims do not (1) improve the functioning of a computer or other technology, (2) effect a particular treatment or prophylaxis for a disease or medical condition, (3) apply the judicial exception with any particular machine, (4) effect a transformation of a particular article to a different state, or (5) apply the judicial exception in any meaningful way beyond generally linking its use to a particular technological environment, such that the claim, as a whole, is more than a drafting effort designed to monopolize the exception. Accordingly, the claims are directed to the judicially recognized exception of an abstract idea. See MPEP §§ 2106.05(a)-(c), (e)-(h).
While certain physical elements (i.e., elements that are not an abstract idea) are present in the claims, such features do not effect an improvement in any technology or technical field and are recited in generic (i.e., not particular) ways. Similarly, the abstract idea does not improve the functioning of these physical elements. In recent cases, the CAFC has made clear that a “practical application” requires providing a technical solution to a technical problem in computers or networks per se. To be patent-eligible, the claimed invention must improve the functioning of the computer as a computer or of the network as a network. Applicant’s invention does not meet these requirements. Applicant’s invention uses computers to process data according to specified input information and evaluation criteria and to output a result, for example, evaluating a user’s response to a question based on evaluation criteria. This does not improve the computer qua computer. Instead, Applicant’s invention uses a generic computer as a tool to implement the abstract idea. As such, the claims are not eligible under Section 101.
Step 2B requires that, if the claim encompasses a judicially recognized exception, it must be determined whether the claimed invention recites additional elements that amount to significantly more than the judicial exception. The additional elements, or combination of elements other than the abstract idea per se, amount to no more than a system having a processor, a memory, and a display configured to perform the abstract idea. Applicant’s specification, for example at ¶¶ 15-20 and 23-26, describes off-the-shelf, general-purpose computing components such as a display, a processor, a database, and a memory for storing instructions that can be executed by a computer. As a result, nothing in Applicant’s specification indicates that the computer system performs anything other than well-understood, routine, and conventional functions, such as receiving, storing, processing, and displaying. See Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1355 (Fed. Cir. 2016) (“Nothing in the claims, understood in light of the [S]pecification, requires anything other than off-the-shelf, conventional computer, network, and display technology for gathering, sending, and presenting the desired information.”); see also Alice, 573 U.S. at 224-26 (receiving, storing, and sending information over networks is insufficient to add an inventive concept); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (“That a computer receives and sends the information over a network—with no further specification—is not even arguably inventive.”). At best, Applicant’s claimed subject matter simply uses generic processing circuitry to perform the abstract idea of converting input data from one form to another (e.g., student answers to grades). As noted above, the use of a generic computer system does not alone transform an otherwise abstract idea into patent-eligible subject matter.
As our reviewing court has observed, “after Alice, there can remain no doubt: recitation of generic computer limitations does not make an otherwise ineligible claim patent-eligible.” DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1256 (Fed. Cir. 2014) (citing Alice, 573 U.S. at 223).
As a result, these additional elements amount to generic, well-understood, and conventional computer components. As demonstrated by Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018), such computer functions cannot save an otherwise ineligible claim under § 101. In short, each step or operation does no more than require a generic computer to perform generic computer functions.
Considered as an ordered combination, only generic computer components are present. Viewed as a whole, the claims simply recite the concept of making judgments by a generic computer. The claims do not, for example, purport to improve the functioning of the computer itself. Nor do they effect an improvement in any other technology or technical field. Instead, the claims at issue amount to nothing significantly more than an instruction to apply the abstract idea using some unspecified, generic computer. Under relevant court precedents, that is not enough to transform an abstract idea into a patent-eligible invention.
As a result, claims 1-15 are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 10, and 12-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent No. 11,410,569 to Ferreira (“Ferreira”), or, in the alternative, under 35 U.S.C. 103 as being unpatentable over Ferreira in view of U.S. Publication No. 2008/0126319 by Bukai et al. (“Bukai”).
In re claims 1, 14, and 15, Ferreira discloses an answer evaluation method that is executed by an answer evaluating system, a non-transitory computer-readable recording medium having a program recorded thereon that can be executed by at least one processor of an information processing apparatus, and a storage unit [Abstract, Figs. 1, 4, and 5, and col. 1, l. 45-col. 2, l. 16], the method and operations comprising: receiving, by a processor, a request from a user for displaying an input screen [Figs. 1, 4, 5, ##100, 500, col. 6, ll. 1-14, 51-65, col. 13, ll. 4-20, among others, describe first and second user devices with user interfaces to allow communication and input/display of data via connection with a server device]; receiving, by the processor via the input screen, input information corresponding to: an answer provided in response to a question [col. 1, ll. 45-57, among others, describes receiving a student's answers to the questions], a model answer corresponding to the question [col. 1, ll. 45-57, col. 3, l. 31, among others, describe teacher-provided and/or correct answers to the questions], one or more evaluation items associated with evaluation functions for determining a degree of similarity between the answer and the model answer [col. 4, l. 51-col. 5, l. 3, col. 8-col. 9, ll. 21, 35, and col. 10, l. 57 to col. 11, l. 6, among others, describe information, such as information specified in the definition of the assignment, which can include evaluation items such as exact match, equivalent match, words, numbers, partial match, whether matching of intermediate steps is allowed, suitable generally acceptable equivalents (such as an indication that full-sentence answers are not required or that spelling out numbers is acceptable), and/or any other suitable generalized rules], and a comprehensive evaluation method including a grading logic, the grading logic being configured to assign a comprehensive evaluation score based on the degree of similarity between the answer and the model answer [col. 10, l. 57 to col. 11, ll. 8-27, among others, describe a grading logic including assigning a score to an individual answer]; evaluating, by the processor, the answer with respect to the one or more evaluation items and the model answer [col. 10, l. 5 to col. 11, l. 27, among others, describe comparing the student answer with the teacher answer based on the assignment information]; based on the evaluation of the answer, determining, by the processor, the degree of similarity between the answer and the model answer, the degree of similarity indicating whether the answer satisfies at least one of a mathematical equivalence to the model answer, a semantic similarity to the model answer, or a partial match to the one or more evaluation items [col. 9, l. 1 to col. 11, l. 27, among others, describe determining whether the answer is a partial match or an equivalent, including a mathematically equivalent expression and semantic similarity]; computing the comprehensive evaluation score for the answer using the grading logic, wherein the grading logic is configured to adjust the comprehensive evaluation score based on the determined degree of similarity between the answer and the model answer and based on the evaluation with respect to the one or more evaluation items [col. 10, l. 57 to col. 11, ll. 8-27, among others, describe a grading logic including assigning a score to an individual answer]; and generating, by the processor, a grading result file including the computed comprehensive evaluation score and evaluation information identifying an evaluation item used in computing the comprehensive evaluation score, and storing the grading result file in a database [col. 3, ll. 11-41, col. 11, ll. 8-27 and 28-39 describe a server for grading and storing a grading result of an assignment].
Ferreira discloses grading logic for assigning a grade/score to a student's answers and storing the information at a server. Servers typically include a database for storing files; however, to the extent that Ferreira lacks storing files in a database, Bukai, in a system for automatic evaluation and scoring of student answers to questions, includes a server with a database for storing student assessment/grade files [Fig. 1, ¶72 describes a MySQL DB].
Ferreira and Bukai are both considered to be analogous to the claimed invention because they are in the same field of automated answer analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ferreira to include a database, as taught by Bukai, in order to improve user experience, e.g., by providing an efficient searchable data structure.
In re claim 2, Ferreira discloses providing, by the processor, an interface on a terminal for editing the grading logic included in the comprehensive evaluation method [Fig. 1, #102, col. 5, ll. 4-24, col. 6, ll. 1-14, 51-65, col. 13, ll. 4-20, among others, describe a first user device (teacher) with a user interface to allow communication and input/display of data via connection with a server device, which may be used to enter the assignment definition and other information].
In re claim 3, Ferreira discloses collectively receiving the answer, the model answer, the grading logic, and the one or more evaluation items in a single request [Fig. 4, col. 11, l. 21-col. 12, l. 54, among others, describe that a server may receive, store, and process all information from the student device and teacher devices].
In re claim 4, Ferreira discloses: individually receiving at least two of the answer, the model answer, the grading logic, and the one or more evaluation items; and receiving association information corresponding to the one or more evaluation items [col. 1, ll. 45-57, col. 3, l. 31, col. 4, l. 51-col. 5, l. 3, col. 8-col. 9, ll. 21, 35, and col. 10, l. 57 to col. 11, l. 6, among others, describe receiving a model answer from the teacher device and the answer from the student device].
In re claim 5, Ferreira discloses that the evaluation of the answer is performed based on the answer and point allocation information included in the grading logic for each of the one or more evaluation items [col. 10, l. 5 to col. 11, l. 27, among others, describe grading/scoring by point allocation, including full credit for a match and 50% of the credit for a partial match, based on the evaluation items].
In re claim 6, Ferreira discloses providing, by the processor, an interface for editing information corresponding to the one or more evaluation items, and receiving, via the interface, an input defining the one or more evaluation items [Fig. 1, #102, col. 5, ll. 4-24, col. 6, ll. 1-14, 51-65, col. 13, ll. 4-20, among others, describe a first user device (teacher) with a user interface to allow communication and input/display of data via connection with a server device, which may be used to enter the assignment definition and other information (i.e., edit an assignment and its information)].
In re claim 7, Ferreira discloses that the evaluation of the answer is performed based on the answer, the model answer corresponding to the question, and the one or more evaluation items [col. 10, l. 5 to col. 11, l. 27, among others, describe comparing the student answer with the teacher answer based on the assignment information].
In re claim 8, Ferreira discloses that the computed comprehensive evaluation score further comprises the evaluation of the answer with respect to each of the one or more evaluation items based on the point allocation information included in the grading logic [col. 10, l. 5 to col. 11, l. 27, among others, describe comparing the student answer with the teacher answer based on the assignment information, including point allocation based on the grading logic, e.g., full credit for a match and partial credit (50%) for a match of an intermediate step].
In re claim 10, Ferreira discloses wherein the one or more evaluation items comprise: a first item configured to evaluate whether the answer and the model answer satisfy a mathematical equivalence; and one or more second items configured to evaluate one or more factors that indicate a deviation from the mathematical equivalence between the answer and the model answer, wherein the grading logic includes a logic for adjusting the comprehensive evaluation according to an evaluation of the one or more second items when an evaluation of the first item satisfies the mathematical equivalence [col. 4, l. 51-col. 5, l. 3, col. 8-col. 9, ll. 21, 35, and col. 10, l. 57 to col. 11, l. 6, among others, describe information, such as information specified in the definition of the assignment, which can include evaluation items such as exact match, equivalent match, words, numbers, whether spelling of words is acceptable, and partial match, including mathematical equivalence and whether matching of intermediate steps is allowed as suitable generally acceptable equivalents. In addition, grading may be adjusted (from full credit to partial credit) when an interim step is mathematically equivalent to a step in the model/teacher answer].
In re claim 12, Ferreira discloses an automated answer evaluation system including providing an interface to indicate whether to include/use different factors for determining correctness of a student response, including semantics and similarity. Ferreira does not explicitly disclose that the one or more evaluation items comprise: a fourth item configured to evaluate whether the answer and the model answer satisfy a semantic equivalence; and one or more fifth items configured to evaluate one or more factors that indicate a deviation from the semantic equivalence between the answer and the model answer, wherein the grading logic includes a logic for adjusting the comprehensive evaluation according to the evaluation of the one or more fifth items when an evaluation of the fourth item satisfies the semantic equivalence.
However, Bukai teaches wherein the one or more evaluation items comprise: a fourth item configured to evaluate whether the answer and the model answer satisfy a semantic equivalence; and one or more fifth items configured to evaluate one or more factors that indicate a deviation from the semantic equivalence between the answer and the model answer, wherein the grading logic includes a logic for adjusting the comprehensive evaluation according to the evaluation of the one or more fifth items when an evaluation of the fourth item satisfies the semantic equivalence [¶¶14-16, 39, 40, 109-128, describe determining semantic equivalence between a model answer and a user answer, including determining whether a word satisfies a first factor (i.e., the word is the same as the model) or a second factor (i.e., the word is equivalent), which determines a positive match, e.g., the similarity is above a first threshold].
Ferreira and Bukai are both considered to be analogous to the claimed invention because they are in the same field of automated answer analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ferreira to include determining semantic similarity between the answer and a model answer, as taught by Bukai, in order to improve automatic grading of students’ answers, for example, by not restricting student answers to long form answers, e.g., ¶13.
In re claim 13, Ferreira lacks, but Bukai teaches, wherein the one or more evaluation items further comprise one or more sixth items configured to evaluate one or more factors indicating the semantic equivalence between the answer and the model answer, wherein the grading logic includes a logic for adjusting the comprehensive evaluation according to an evaluation result of the one or more sixth items when an evaluation of the fourth item fails to satisfy the semantic equivalence [¶¶14-16, 39, 40, 109-128, describe determining equivalence between a model answer and user answer, including determining third factor when not a match (i.e., first item is negative) but above a second threshold, the scoring logic provides partial credit].
Ferreira and Bukai are both considered to be analogous to the claimed invention because they are in the same field of automated answer analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ferreira to include the determination of partial credit for responses that are similar but not the same as the model answer, as taught by Bukai, in order to improve automatic grading of students’ answers, for example, by not restricting student answers to long form answers, e.g., ¶13.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ferreira in view of Bukai and further in view of US Publication No. 2015/0199598 by Iams (“Iams”).
In re claim 9, Ferreira discloses the comprehensive evaluation method comprises evaluation items and point allocation information allotted to the one or more evaluation items [col. 10, l. 57 to col. 11, ll. 8-27, among others, describe a grading logic including assigning a score to an individual answer based on evaluation items (e.g., a partial match)]. Ferreira lacks, but Iams discloses providing, by the processor, an interface configured to display the comprehensive evaluation of the answer, wherein the interface is further configured to display evaluation results [Fig. 2, ¶¶26-28].
Ferreira and Iams are both considered to be analogous to the claimed invention because they are in the same field of automated answer analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ferreira to include an interface configured to display evaluation results for each of the one or more evaluation items, as taught by Iams, in order to improve teacher and student experience in learning, for example, by improving the efficiency of grading student assignments and providing feedback to the students, e.g., ¶¶3-5.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ferreira in view of Bukai and further in view of US Publication No. 2012/0189991 by Smith et al. (“Smith”).
In re claim 11, Ferreira lacks, but Smith teaches wherein the one or more evaluation items further comprise one or more third items configured to evaluate one or more factors indicating the mathematical equivalence between the answer and the model answer, wherein the grading logic includes a logic for adjusting the comprehensive evaluation according to an evaluation result of the one or more third items when the evaluation of the first item fails to satisfy the mathematical equivalence [¶¶67, 177-187 describe a third item for evaluating when the first item is negative (i.e., the answer does not match the model answer and is not mathematically equivalent) by determining whether any elements of the student's response matched the model answer and providing partial credit based on how close the student's response was].
Ferreira and Smith are both considered to be analogous to the claimed invention because they are in the same field of automated answer analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ferreira to include the determination of partial credit for responses that are similar to the model answer, as taught by Smith, in order to improve automatic grading of students' answers, for example, by not restricting student answers to a predetermined form and/or stating the student answer is incorrect when it is not; see, e.g., ¶6.
Response to Arguments
Applicant’s arguments filed December 8, 2025, have been fully considered.
The previous rejection of claims 12 and 13 for lack of written description under 35 U.S.C. 112(a) is withdrawn in view of Applicant’s amendments; however, a new rejection of all the claims has been made in view of the amendments to claims 1, 14, and 15.
The previous rejection of claims 6 and 8 as indefinite under 35 U.S.C. 112(b) is withdrawn in view of Applicant’s amendments.
With regard to the rejection under 35 U.S.C. 101, Applicant’s arguments have been considered but are not persuasive. The rejection has been updated in view of Applicant’s amended claims.
Applicant argues the claimed elements integrate the abstract idea into a practical application. One method of showing a practical application is for the claimed invention to provide a technical improvement. Applicant argues Claim 1 is directed to an answer evaluation method that "improves" the functioning of a computer-based evaluation system using dynamically configurable grading logic. Applicant argues the method "improves the functioning of a computer-based evaluation system by enabling the processor to perform operations that go beyond conventional static logic execution, including dynamically adjusting scores based on a determined degree of similarity between an answer and a model answer." In addition, Applicant argues that the improvement is "directly tied to enabling a flexible and customizable evaluation system." Applicant, however, is attempting to substantiate the alleged technological improvement by relying on the approach that the claimed/disclosed system uses to automate the grading process. Even assuming arguendo that the claimed/disclosed approach (e.g., dynamically adjusting scores based on a determined degree of similarity between an answer and a model answer, etc.) is unique/new, this does not necessarily imply a technological improvement over the relevant existing technology. This is because the claimed/disclosed system is still utilizing the existing computer/network technology—merely as a tool—to facilitate an abstract idea. Particularly, the claimed/disclosed system is utilizing the existing computer/network technology to facilitate the process of evaluating the answer that a user provides in response to a question, wherein the evaluation is conducted using evaluation items and a grading/scoring rubric, etc. (e.g., see claim 1). Thus, such use of the existing technology—merely as a tool—to facilitate an abstract idea does not constitute a technological improvement over the relevant existing technology.
Applicant states “Thus, the method of claim 1 allows users to define and edit grading logic through an interface, and dynamically apply conditional evaluation logic, such as adjusting scores based on whether a mathematical equivalence is satisfied or expression-level deviations are present. The method further determines a degree of similarity using semantic similarity, mathematical match, or partial scoring based on specific evaluation items, which involve data comparisons and conditional logic executed by the processor that are not practical to perform in the human mind or with pen and paper.” Applicant continues “This transforms a static, manual evaluation process into a structured system that supports per-item scoring, mathematical and semantic evaluation, and partial point allocation that provides a comprehensive evaluation score. The method also generates a grading result file that includes both the computed score and information identifying which evaluation item(s) were used, and stores the file in a database to enable later reference by the user (see paragraph [0046] of the as-filed specification). The claimed method does not merely automate an abstract process using conventional components, but defines specific data processing steps (e.g., applying grading logic based on semantic or mathematical equivalence, generating structured output with evaluation metadata, storing that output for future user access) that are not part of the abstract mental process itself.” The examiner respectfully disagrees.
First, automated (as opposed to manual) grading using configurable logic is far from "unconventional," as demonstrated by the art of record. In fact, automated grading using computers has been provided in various forms for decades. Second, the purported improvements are directed to the abstract idea of how to evaluate and grade/score answers. This does not improve how the computer functions or performs (i.e., a technical improvement), but rather is directed to the abstract process of what elements are used to evaluate similarity and a scoring rubric (e.g., a mental process). Applicant's assertion that the claimed method defines specific data processing steps (e.g., applying grading logic based on semantic or mathematical equivalence, generating structured output with evaluation metadata, storing that output for future user access) that are not part of the abstract mental process itself is incorrect. These are mental steps that are merely carried out by a computer.
Applicant's assertion that the method's determination of a degree of similarity using semantic similarity, mathematical match, or partial scoring based on specific evaluation items involves data comparisons and conditional logic executed by the processor that are not practical to perform in the human mind or with pen and paper is not supported by any evidence of record. Moreover, the examiner asserts that teachers are very adept at evaluations based on logic and data comparisons, including semantic similarity and mathematical equivalence. A statement that the human mind cannot perform the method is not evidence. Moreover, the implementation of the method by a computer may speed up the analysis, but the computer is then being used as a tool for its computational ability.
Therefore, Applicant's claimed invention does not constitute a technical improvement that would result in a practical application. Instead, the claims are directed to automation of an abstract idea using conventional computer components at a high level of abstraction and as such are not patent eligible. In other words, the claims do no more than recite an abstract idea and then apply it with a computer.
Applicant’s arguments with respect to the anticipation and obviousness of claims 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the attached Notice of References Cited.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew Bodendorf whose telephone number is (571) 272-6152. The examiner can normally be reached M-F 9AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached on (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW BODENDORF/Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715