Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to the preliminary amendment, claims 1-6 have been amended, claim 8 is cancelled, and new claims 9-17 are added. Claims 1-7 and 9-17 are pending and under examination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 9-13, and 16-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites the limitation "the certain category" in line 7. This limitation is ambiguous, which renders the claim indefinite, because it is unclear whether it refers to the "certain category" recited in lines 3-4 or the "certain category" recited in line 5. For the same reason, the claims depending therefrom are rejected as well.
Claim 9 recites “A program for making a computer function as the academic ability estimation model generation device according to Claim 1.” Claim 9 recites only a preamble and fails to define any body. For example, the claim appears to be directed to a computer function, but no specific step is defined in the body of the claim. For similar reasons, claim 10 is rejected as well.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 9-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because they are directed to a “program,” which is considered software per se. See MPEP 2106.03(I).
Claims 1-7 and 9-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to judicial exception(s) without significantly more.
[STEP 1] The claim recites at least one step or structure. Thus, the claim is to a process or product, which is one of the statutory categories of invention (Step 1: YES).
[STEP 2A PRONG I] Claims 1, 6, and 7 recite:
processing circuitry configured to generate a decision tree by using correct/incorrect-answer information as teacher data, the correct/incorrect-answer information indicating that a plurality of answerers who have answered a question group consisting of a plurality of predetermined questions have answered each question correctly or incorrectly;
delete a leaf node when an entropy of a classification result indicated by the leaf node being a terminal end of the decision tree which is generated is equal to or lower than a predetermined value; and
set each new terminal end of the decision tree after deleting the leaf node as a category to which any of the answerers belongs;
acquire the correct/incorrect-answer information of a target for academic ability estimation and estimates academic ability of the target based on the academic ability estimation model [claim 6].
The claim recites a mathematical formula or calculation that is used to generate a decision tree using an entropy and information-gain algorithm. That is, other than reciting “processing circuitry,” nothing in the claim element precludes the step from practically being performed as a mathematical calculation.
If a claim limitation, under its broadest reasonable interpretation, covers a mathematical formula or calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concept” grouping of abstract ideas.
Alternatively, the aforementioned limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “processing circuitry,” nothing in the claim element precludes the steps from practically being performed in the mind. For example, but for the “processing circuitry” language, the steps in the context of this claim encompass mentally generating a decision tree using an entropy and information-gain algorithm.
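For reference, the entropy and information-gain calculations underlying the algorithm referenced above are conventionally defined (in the ID3 literature generally, not in the claim itself) as:

```latex
% Shannon entropy of a sample set S with class proportions p_i
H(S) = -\sum_{i} p_i \log_2 p_i
% Information gain of splitting S on attribute A into subsets S_v
IG(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|}\, H(S_v)
```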
Accordingly, the claim recites a judicial exception, and the analysis must therefore proceed to Step 2A Prong Two.
[STEP 2A PRONG II] This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element – “processing circuitry.”
The “processing circuitry” in the aforementioned steps is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim is therefore directed to the judicial exception. (Step 2A: YES).
[STEP 2B] The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the aforementioned steps amounts to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept.
As noted previously, the claim as a whole merely describes how to generally “apply” the aforementioned concept in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.
The claim is not patent eligible. (Step 2B: NO).
Claims 2-5 and 9-17 depend from the claims addressed above and include all the limitations thereof. Therefore, the dependent claims recite the same abstract idea. The claims recite the additional limitations of “storing [data]” [claims 3-5 and 11-17] and a “program” [claims 9 and 10], which are no more than mere instructions to apply the exception using a generic computer component, generally link the use of the judicial exception to a particular technological environment or field of use, constitute insignificant extra-solution activity, or are well-understood, routine, and conventional activities previously known to the industry. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claims are therefore directed to the judicial exception.
The additional element of using a “program” to perform the aforementioned steps amounts to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept. Alternatively, the additional element of a “program” amounts to no more than generally linking the use of the judicial exception to a particular technological environment or field of use, which cannot provide an inventive concept.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity or well-understood, routine, and conventional activity in Step 2A should be reevaluated in Step 2B. Here, the step of “storing [data]” was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine whether it is more than what is well-understood, routine, and conventional activity in the field.
The background of the specification does not provide any indication that the additional elements are anything other than generic, off-the-shelf computer components. The Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here), and the Electric Power Group, LLC v. Alstom S.A. and Ameranth court decisions cited in MPEP 2106.05(g) indicate that displaying data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Further, the examiner takes OFFICIAL NOTICE that the aforementioned additional elements are well-known, routine, and conventional activity. Accordingly, a conclusion that the aforementioned steps are well-understood, routine, and conventional activity is supported under Berkheimer Option 2.
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-7 and 9-17 are rejected under 35 U.S.C. 102(a)(1) as anticipated by, or in the alternative, under 35 U.S.C. 103 as obvious over, Chang et al. (U.S. Patent Application Publication 2012/0034581), hereinafter Chang, as evidenced by, or alternatively in view of, Huang et al. (U.S. Patent 10,778,702), hereinafter Huang.
Regarding claims 1, 6, 7, 9, and 10, Chang discloses an academic ability estimation model generation device/method (Abstract) comprising:
processing circuitry configured to generate a decision tree (S905 in FIG. 9; ¶57: “Next, in step S905, the decision tree generation module 106 generates the corresponding assessment decision tree according to the pronunciation features and the grade marks of the training data. To be specific, the decision tree generation module 106 generates an assessment decision tree corresponding to each training data group.”; ¶39 further discloses a specific step of how to generate the decision tree: “The decision tree generation module 106 generates an assessment decision tree. To be specific, the decision tree generation module 106 generates the corresponding assessment decision tree according to the training data groups categorized by the feature extraction module 104 and the pronunciation features and grade marks of the training data in the training data groups.”) by using correct/incorrect-answer information as teacher data, the correct/incorrect-answer information indicating that a plurality of answerers who have answered a question group consisting of a plurality of predetermined questions have answered each question correctly or incorrectly (¶29: “Namely, each training data is assigned a mark based on the correctness of the pronunciation thereof. In the present exemplary embodiment, each training data is marked as "good" or "bad".”);
acquire the correct/incorrect-answer information of a target for academic ability estimation and estimates academic ability of the target based on the academic ability estimation model [claim 6] (S919 and S921 in FIG. 9; ¶60: “In step S919, the assessment and diagnosis module 110 determines a diagnosis path corresponding to the pronunciations among the decision paths in the corresponding assessment decision tree. Finally, in step S921, the assessment and diagnosis module 110 outputs the feedback information and the grade mark corresponding to the decision nodes on the diagnosis path.”);
a program for making a computer function [claims 9 and 10] (FIG 1B illustrates software modules executing the claimed functions).
delete a leaf node when an entropy of a classification result indicated by the leaf node being a terminal end of the decision tree which is generated is equal to or lower than a predetermined value; and set each new terminal end of the decision tree after deleting the leaf node as a category to which any of the answerers belongs (Chang, ¶39 discloses that the decision tree is generated via the ID3 algorithm: “in another exemplary embodiment of the present disclosure, the decision tree generation module 106 may also generate each assessment decision tree by using the ID3 algorithm”. The ID3 (Iterative Dichotomiser 3) algorithm is a classic machine-learning algorithm for constructing decision trees from a dataset; it was invented by Ross Quinlan and is a precursor to the C4.5 algorithm. Huang is cited as evidence of how the ID3 algorithm deletes a leaf node, i.e., pruning, based on entropy, at col. 17, ll. 3-23: “The model generator 322 starts at the root node with a specified set of samples and a specified set of features to work with... Consequently, the first decision will decrease the entropy (or increase information gain) the most of any decision in the tree. As noted above, a given node is designated as a leaf node when the node is pure or there is no feature that can reduce the entropy... [I]f the best possible feature for a node could only reduce the entropy by a trivial amount, the node may be designated as a leaf node. In some implementations a maximum threshold is set on the number of decisions or number of leaves that may be added to the decision tree causing a premature exit and yielding a smaller decision tree. In some implementations, portions of a decision tree are pruned after the decision tree is constructed.”);
Alternatively, Chang discloses the ID3 algorithm (¶39) but may not explicitly disclose deleting the leaf node based on entropy as claimed.
Huang teaches predictive modeling (Abstract) comprising the claimed deleting of a leaf node (see the mapping above at col. 17, ll. 3-23).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention in Chang by adding the predictive modeling features as taught in Huang in order to “accurately classify unknown [training data]” (col. 1, ll. 42-43 of Huang).
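For context only, the entropy-threshold test described in the Huang passage mapped above (designating a node as a leaf when no feature can reduce its entropy by more than a trivial amount) can be sketched as follows. This is a minimal illustration of the conventional technique, not an implementation from either reference; the function names and the example threshold value are the examiner's illustrative assumptions.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels,
    e.g. the "good"/"bad" grade marks described in Chang ¶29."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def should_be_leaf(labels, threshold=0.1):
    """Designate a node as a leaf (i.e., stop splitting / prune)
    when its entropy is at or below a predetermined threshold.
    The threshold value 0.1 is illustrative only."""
    return entropy(labels) <= threshold
```

A pure node (all samples in one class) has entropy 0 and is kept as a leaf, while an evenly mixed two-class node has entropy 1.0 bit and would be split further under the example threshold.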
Regarding claim 2. Chang/Huang further discloses that the processing circuitry is configured to, when a value of the entropy in a certain category of said categories is greater than a predetermined value or when the number of the answerers belonging to a certain category of said categories is smaller than a predetermined value, connect a subtree, which is located on a terminal end side of any of nodes passed through before reaching the certain category and does not reach the certain category, to the certain category as an auxiliary decision tree so as to lead the answerers belonging to the certain category to a category located at a terminal end of the auxiliary decision tree (Huang, col. 17, ll. 3-10: “The model generator 322 starts at the root node with a specified set of samples and a specified set of features to work with. After finding a feature that reduces the entropy the most, the model generator recursively creates additional nodes and identifies features to use at the additional nodes in the same way. Consequently, the first decision will decrease the entropy (or increase information gain) the most of any decision in the tree.”).
Regarding claims 3 and 11. Chang/Huang further discloses that the processing circuitry is configured to store a parameter which associates correct/incorrect-answer information of each question with a comprehension level in each learning field; and generate comprehension levels in each learning field of the answerers belonging to each category by using the parameter (Chang discloses the comprehension levels at ¶29: “the grade marks may also be numbers. For example, each training data is marked between 0-10 based on the correctness thereof”; Chang also discloses the supplemental parameters at ¶23: “To be specific, the language learning system in the exemplary embodiment of the present disclosure constructs the corresponding assessment decision trees by collecting a plurality of pronunciations of a language learner as training sentences and analyzing the pronunciation features, such as the tones (for example, the 1.sup.st, 2.sup.nd, 3.sup.rd, and 4.sup.th tones in Chinese) and intonations (for example, the accents, non-accents, unvoice phonemes, and silence in English) in these training sentences”; Huang also shows that the ID3 algorithm handles the supplemental parameters in generating the decision tree at col. 16, ll. 47-58: “The same process can be extended to numeric features with ranges of values. A first way to utilize numeric features is to consider each feature ƒ as a parameterized set of features, where the parameter is a threshold value θ. For each value of θ, there is a Boolean feature ƒ.sub.θ, which is true or false for a domain depending on whether the feature value is less than the threshold value. Some implementations extend this to using two or more threshold values, which subdivides the feature values into more ranges. Of course, the greater number of possible parameterized features, the greater the processing time that is needed to build the decision trees.”).
Regarding claims 4 and 12. Chang/Huang further discloses that the processing circuitry is configured to store supplementary information which is information about learning progress in each learning field of the answerers or a subjective comprehension level in each learning field of the answerers, wherein revise a generated comprehension level in each learning field based on the supplementary information (Chang, ¶29: “the grade marks may also be numbers. For example, each training data is marked between 0-10 based on the correctness thereof”;).
Regarding claims 5 and 13-17. Chang/Huang further discloses that the processing circuitry is configured to store a pass/fail result which is a result of the answerers for an examination of a predetermined school; and generate, for each category, a pass rate of an answerer belonging to a corresponding category with respect to the predetermined school and outputs the pass rate for each category (Chang, ¶29: “In the present exemplary embodiment, each training data is marked as "good" or "bad".”; Huang, col. 10, ll. 38-41: “In other implementations the vote is averaged across the estimators or fed into an additional model generator as features (e.g., using stacked machine learning).”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J HONG whose telephone number is (571)272-0993. The examiner can normally be reached 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Vasat can be reached at (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
THOMAS J. HONG
Primary Examiner
Art Unit 3715
/THOMAS J HONG/ Primary Examiner, Art Unit 3715