Prosecution Insights
Last updated: April 18, 2026
Application No. 18/227,443

SYSTEMS AND METHODS FOR KNOWLEDGE DISCOVERY FROM DATA AND PRIOR KNOWLEDGE

Non-Final OA: §101, §103
Filed: Jul 28, 2023
Examiner: TRIEU, EM N
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: George Mason University
OA Round: 1 (Non-Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
Grant Probability with Interview: 53%

Examiner Intelligence

Career Allow Rate: 48% (30 granted / 63 resolved; -7.4% vs TC avg)
Interview Lift: +5.0% (minimal)
Avg Prosecution: 3y 10m
Currently Pending: 29
Total Applications: 92 (across all art units)

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 63 resolved cases.
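The per-statute deltas above can be reproduced from the displayed figures. A minimal sketch follows; note that the Tech Center baselines are inferred here from the displayed deltas (rate minus delta), not taken from any source data:

```python
# Examiner's statute-specific allowance rates (from the card above), in percent,
# and the Tech Center baselines implied by the displayed deltas.
rates = {"101": 29.1, "103": 48.5, "102": 6.0, "112": 13.5}
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}  # implied: 29.1 - (-10.9) = 40.0, etc.

for statute, rate in rates.items():
    delta = rate - tc_avg[statute]
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
# → §101: 29.1% (-10.9% vs TC avg), §103: 48.5% (+8.5% vs TC avg), ...
```

Notably, every displayed delta is consistent with a single 40.0% Tech Center baseline, which matches the report's one "Tech Center average estimate" reference line.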

Office Action

Rejections: §101, §103
DETAILED ACTION

This Office action is in response to the claims filed on 07/28/2023. Claims 1-9, 12-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Restriction to one of the following inventions is required under 35 U.S.C. 121:

I. Claims 1-9, 13-20 are drawn to a product and process of generating argumentation for similar cases based on search and classification knowledge learned from the reference cases. The method claims were classified in G06N 5/022.

II. Claims 10-11 are also classified in G06N 5/022; however, those claims are directed to a different invention. Claims 10-11 are drawn to another product, determining cover crop biomass based on the reference farm cases.

The inventions are independent or distinct, each from the other, because Inventions I and II are related product inventions. Related inventions are distinct if: (1) the inventions as claimed are either not capable of use together or can have a materially different design, mode of operation, function, or effect; (2) the inventions do not overlap in scope, i.e., are mutually exclusive; and (3) the inventions as claimed are not obvious variants. See MPEP § 806.05(j). In the instant case, the claims of Invention I (1-9, 12-20) are drawn to a product and process of generating argumentation for similar cases based on search and classification knowledge learned from the reference cases, while the claims of Invention II (10-11) are drawn to another product, determining cover crop biomass based on the reference farm cases. Furthermore, the inventions as claimed do not encompass overlapping subject matter, and there is nothing of record to show them to be obvious variants.
Restriction for examination purposes as indicated is proper because all the inventions listed in this action are independent or distinct for the reasons given above, and there would be a serious search and/or examination burden if restriction were not required because one or more of the following reasons apply: (a) the inventions have acquired a separate status in the art in view of their different classification; (b) the inventions require a different field of search (for example, searching different classes/subclasses or electronic resources, or employing different search queries).

Subsequent to a telephone conversation with Attorney for Applicant, GEORGE LIKOUREZOS, Registration No. 40067, on 03/25/2026, a provisional election was made without traverse to prosecute the invention of Group I, claims 1-9, 12-20. Affirmation of this election must be made by applicant in replying to this Office action. Claims 10-11 are withdrawn from further consideration by the examiner, 37 CFR 1.142(b), as being drawn to a non-elected invention.

Applicant is reminded that upon the cancellation of claims to a non-elected invention, the inventorship must be corrected in compliance with 37 CFR 1.48(a) if one or more of the currently named inventors is no longer an inventor of at least one claim remaining in the application. A request to correct inventorship under 37 CFR 1.48(a) must be accompanied by an application data sheet in accordance with 37 CFR 1.76 that identifies each inventor by his or her legal name, and by the processing fee required under 37 CFR 1.17(i).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9, 12-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 analysis: In the instant case, the claims are directed to a system (claims 1-9) and a computer (claims 12-20). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Step 2A analysis: Having determined that the claims fall within one of the four categories (Step 1), it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea). In this case, the claims fall within the judicial exception of an abstract idea, specifically "mental processes/concepts performed in the human mind (including an observation, evaluation, judgment, opinion)".

Claim 1 recites:

Step 2A, Prong 1 analysis:

"generate argumentation that explains a phenomenon of the reference case": this is a mental process; the human mind can provide the argumentation. For example, an attorney can provide argumentation that includes the fact or situation observed to happen for a particular case (observation).

"generate a knowledge-based generalization of the argumentation by learning a lower bound generalization and an upper bound generalization": this is a mental process; the human mind can generalize the argumentation at different levels. For example, an attorney can select a potentially good argumentation for a particular case, e.g., determine that argumentation A is a better fit for the particular case than argumentation B, because argumentation A includes evidence more related to the particular case while argumentation B includes evidence not related to the particular case (observation/evaluation).
"apply the argumentation to a plurality of cases similar to the reference case based on knowledge-based search and classification": this is a mental process; the human mind can apply a particular argumentation to a particular similar case based on the knowledge base and the classification. For example, an attorney can apply a particular argument of the reference case to the current case if the two cases recite similar issues/problems; likewise, an attorney can apply the argumentation of a particular (criminal) case to the current case if the current case is classified as a criminal case (observation/evaluation).

"split the plurality of similar cases into a plurality of favoring cases and a plurality of disfavoring cases": this is a mental process; the human mind can divide the similar cases into two groups (favoring cases and disfavoring cases) (observation/evaluation).

"select a disfavoring case of the plurality of disfavoring cases that is most similar to the reference case based on a similarity of factors": this is a mental process; the human mind can select a particular case from a particular group (disfavoring cases) (observation/evaluation).

"determine what factors were not taken into account in generating the argumentation": this is a mental process; a human can see what factor was not taken into account in generating the argumentation, e.g., an examiner can see which factor/issue was not included in a response (observation/evaluation).

"generate a hypothesis-driven explanation theory based on comparing one or more features of the reference case to one or more features of the most disfavoring case":
This is a mental process; the human mind can generate the explanation theory based on comparing a feature of a particular case to the most disfavoring case. For example, an attorney can provide a hypothesis-driven explanation theory, such as a strategy to speed up the prosecution of a particular case, based on comparing a feature of the particular case to the disfavoring case (observation/evaluation).

(a) Step 2A, Prong 2 analysis: "A system for knowledge discovery comprising: a processor; and a memory coupled to the processor and storing instructions which, when executed by the processor, cause the system to", "developing a predictive model": these additional limitations are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)). "access a reference case of a plurality of cases": this additional limitation is recited at a high level of generality such that it amounts to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception and cannot integrate a judicial exception into a practical application.

(b) Step 2B analysis: "A system for knowledge discovery comprising: a processor; and a memory coupled to the processor and storing instructions which, when executed by the processor, cause the system to", "developing a predictive model": these additional limitations are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)). "access a reference case of a plurality of cases": this limitation is recited at a high level of generality such that it amounts to necessary data gathering.
As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception itself. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network", "electronic record keeping", and "storing and retrieving information in memory").

Claim 2 recites:

Step 2A, Prong 1 analysis: "learn an evidence collection rule for each argument that reduces the hypothesis to an evidence item": this is a mental process; a human can learn how to collect evidence (which evidence is good to collect) for a particular argument so as to reduce the hypothesis to an evidence item (observation/evaluation). "search the plurality of reference cases for the evidence item": this is a mental process; the human mind can search for the evidence item for the particular case (observation/evaluation).

Step 2A, Prong 2 analysis: "by a collection agent": this additional element is recited at a high level of generality such that it amounts to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception and cannot integrate a judicial exception into a practical application.

Step 2B analysis: "by a collection agent": this element is recited at a high level of generality such that it amounts to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception itself.
The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network", "electronic record keeping", and "storing and retrieving information in memory").

Claim 3 recites:

(a) Step 2A, Prong 2 analysis: "the predictive model includes a probabilistic inference network": the additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).

(b) Step 2B analysis: "the predictive model includes a probabilistic inference network": the additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).

Claim 4 recites:

(a) Step 2A, Prong 2 analysis: "the predictive model includes a Wigmorean probabilistic inference network": the additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).

(b) Step 2B analysis: "the predictive model includes a Wigmorean probabilistic inference network": the additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).

Claim 5 recites:

(a) Step 2A, Prong 1 analysis: "wherein the argumentation includes at least one of a hypothesis or a conjunction of sub hypothesis": this is a mental process; a human can generate argumentation that includes the hypothesis (observation).
(b) Step 2A, Prong 2 analysis and Step 2B analysis: no additional element provides a practical application or amounts to significantly more than the abstract idea.

Claim 6 recites:

(a) Step 2A, Prong 1 analysis: "wherein the hypothesis to be assessed is decomposed into simpler hypotheses by considering both favoring arguments and disfavoring arguments": this is a mental process; the human mind can decompose the hypothesis into simpler hypotheses based on the favoring and disfavoring argumentation (observation/evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: no additional element provides a practical application or amounts to significantly more than the abstract idea.

Claim 7 recites:

(a) Step 2A, Prong 1 analysis: "wherein the lower bound employs a cautious learner strategy and wherein the upper bound employs an aggressive learning strategy": this is a mental process; a human can apply a cautious learning strategy and an aggressive learning strategy at the corresponding level (lower bound and upper bound) (observation/evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: no additional element provides a practical application or amounts to significantly more than the abstract idea.

Claim 8 recites:

Step 2A, Prong 1 analysis: "wherein the disfavoring case provides an indication that the generated argumentation is incomplete and/or partially incorrect": this is a mental process; the human mind can provide an indication that the disfavoring case shows the argumentation is incomplete (observation/evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: no additional element provides a practical application or amounts to significantly more than the abstract idea.

Claim 9 recites:

Step 2A, Prong 1 analysis: "refine the generated hypothesis-driven explanation theory based on selecting a new case from the plurality of disfavoring cases that is most similar to the reference case":
This is a mental process; the human mind can refine the hypothesis-driven explanation theory based on selecting the new case from the plurality of disfavoring cases that is most similar to the reference case. For example, a human can modify the hypothesis explanation theory based on particular new information selected from the disfavoring cases (observation/evaluation).

Step 2A, Prong 2 analysis and Step 2B analysis: no additional element provides a practical application or amounts to significantly more than the abstract idea.

Claim 12 is rejected for the same reasons as claim 1, since these claims recite the same limitations. Claim 13 is rejected for the same reasons as claim 2, since these claims recite the same limitations. Claim 14 is rejected for the same reasons as claim 4, since these claims recite the same limitations. Claim 15 is rejected for the same reasons as claim 5, since these claims recite the same limitations. Claim 16 is rejected for the same reasons as claim 6, since these claims recite the same limitations. Claim 17 is rejected for the same reasons as claim 7, since these claims recite the same limitations. Claim 18 is rejected for the same reasons as claim 8, since these claims recite the same limitations. Claim 19 is rejected for the same reasons as claim 9, since these claims recite the same limitations. Claim 20 is rejected for the same reasons as claim 3, since these claims recite the same limitations.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Yanosy et al. (Pub. No. 2020/0234148, hereinafter "Yanosy") in view of Katsuda et al. (Pub. No. 2017/0206315, hereinafter "Katsuda"), and further in view of Coman et al. (Pub. No. 2021/0125190, hereinafter "Coman").

Regarding claim 1, Yanosy teaches a system for knowledge discovery comprising: a processor; and a memory coupled to the processor and storing instructions which, when executed by the processor, cause the system to (Yanosy, [Par. 0054], "FIG. 1 is a system functional diagram of a generic ontology knowledge and reasoning system and its ontology and functional components according to an exemplary embodiment of the invention.
According to an embodiment, the generic ontology knowledge system has one or more computer processor(s) that provides the computer processing power and memory to host and execute the following kinds of system software identified as the web server 20, knowledge repository server 21, OWL 2 Web Ontology Language ('OWL2') reasoning engine 22, semantic mapping service 24a, and RDF"): access a reference case of a plurality of cases (Yanosy, [Par. 0012], "Further, according to an embodiment, a CBR method includes: receiving, at a user interface, selection of a current case and at least one past case; comparing the current case with the at least one past case based on the CBR ontology, wherein the ontology integrates information associated with the at least one current case and the at least one past case with at least one reasoning system." Examiner's note: selecting the current case and a past case from a plurality of cases, wherein the past case is considered the reference case); generate argumentation that explains a phenomenon of the reference case by developing a predictive model (Yanosy, [Par. 0006, 0056], "[0006] ... According to an embodiment, a CBR system compares the population of past situations and compares factors for each past situation with the factors for a current situation using a hierarchical multi-layered reasoning model, wherein the model is implemented in a unique CBR ontology where the different kinds of reasoning associated with each layer's model filters the population of past situations, layer-by-layer, until a sorted list of past situations satisfying each layer's model is provided with CBR outcome argument strength classifications." And [0056], "FIG. 2A is a diagram illustrating a CBR system according to an exemplary embodiment of the invention.
According to an embodiment, a CBR system 100 is configured to: (i) reason about the domain factors hierarchy ontology model 101 when analyzing and comparing the sets of factor occurrences for each comparison of the current case 120 and each of the past cases 110; (ii) perform a pairwise case comparison between the current case 120 and each of the past cases 110 based on the knowledge about factor occurrences 122 and 112, respectively, in each case; and (iii) generate a plurality of CBR reasoning outcomes 131 to 136. As depicted in the figure, the domain factors 101 correspond to the factors used for decisions in a domain and which some subsets of them, e.g., factors 112, were used in decisions for the past cases 110. Further, the case 110 includes a plurality of past situations/cases 111 and their corresponding case factors 112 and decisions 113. Further, the current case 120 includes a current situation/case 121 and the corresponding case factors 122. As further depicted in the figure, the CBR reasoning outcomes includes strongest argument 131, strong argument 132, relevant argument—exception arguments undermined 133, relevant argument—no exception arguments 134, relevant argument failed—has exception arguments 135, and relevant argument failed—missing common factor arguments 136. 
The outcomes represent an argument strength preference for use of a particular past case, from the strongest, e.g., 131, to the minimally relevant, e.g., 134." Examiner's note: generating the plurality of argumentation levels for the past cases (situations) based on the classification, which corresponds to explaining the phenomenon of the past/reference cases.); generate a knowledge-based generalization of the argumentation by learning a lower bound generalization and an upper bound generalization (Yanosy, [Par. 0056], "... As further depicted in the figure, the CBR reasoning outcomes include strongest argument 131, strong argument 132, relevant argument—exception arguments undermined 133, relevant argument—no exception arguments 134, relevant argument failed—has exception arguments 135, and relevant argument failed—missing common factor arguments 136. The outcomes represent an argument strength preference for use of a particular past case, from the strongest, e.g., 131, to the minimally relevant, e.g., 134. Further, 135 and 136 are excepted from use for a similar decision argument due to failing the arguments or not having sufficient common factors. The CBR reasoning outcomes will be described in more detail below." Examiner's note: the CBR reasoning outcomes include different levels of argument, such as strongest argument, no exception arguments, ..., missing common factor arguments, which correspond to the lower bound generalization and the upper bound generalization.); apply the argumentation to a plurality of cases similar to the reference case based on knowledge-based search and classification (Yanosy, [Par. 0008], "Further, the CBR system is also designed to enable comparative reasoning between a current situation and past situations with a capability to select different combinations or subsets of factor occurrences for a current situation.
This flexibility to select and assert subsets of factors for a current situation provides additional insights about which factor combinations for a current situation provide the best set of past cases with stronger arguments for arguing a similar decision for the current situation. Each execution of the CBR reasoning system will result in a sorted list of past situations organized by their strength of argument for a similar decision and, in this way, subsequent reasoning by the same system can automatically determine the best combination of factors." Examiner's note: providing the best set of past cases with stronger arguments for arguing a similar decision for the current situation/similar case); and generate a hypothesis-driven explanation theory based on comparing one or more features of the reference case to one or more features of the most disfavoring case (Yanosy, [Par. 0068-0070], "FIG. 4A is a diagram illustrating case comparison factor partitions from a 'pro' perspective according to an exemplary embodiment of the invention. The CBR ontology represents the concepts in this diagram and enables reasoning to infer the appropriate partition for the factors of both cases in the case comparisons. See FIGS. 6F1 and 6F2 for the factor partition ontology patterns. P1 identifies the common factors that are biased for the 'pro' perspective or 'P' (the 'pro' factors being associated with a solid filling). P2 identifies the common factors that are biased for the 'con' perspective or 'C' (the 'con' factors being associated with a hashed filling). P3 identifies the unique factors in the current case, not in the past case, biased for P. P4 identifies unique factors in the past case, not in the current case, that are biased for C. P5 identifies the unique factors in the current case, not in the past case, that are biased for C. P6 identifies the unique factors in the past case, not in the current case, that are biased for P. FIG. 4B is a diagram illustrating case comparison factor partitions from a 'con' perspective according to an exemplary embodiment of the invention." ... [0070], "If the past case has no P6 partition, then the past case is passed on for further CBR reasoning. If P6 occurs and is undermined by either the AS3-P3 or AS3-P4 partitions, then the past case is also passed on for further CBR reasoning.

AS3-P3 (P3, current case stronger partition): If P3 occurs with the current case having a unique differentiating factor for P, and it has the same factor parent as in P6, then the AS3 claim is undermined and the past case instance is not excepted and, therefore, is passed along for further CBR argument reasoning.

AS3-P4 (P4, past case weaker partition): If P4 occurs with a past case having a unique differentiating factor for C and it has the same factor parent as in P6, then the AS3 claim is undermined and the past case instance is not excepted and, therefore, is passed along for further CBR argument reasoning.

AS2 (P6, preference for factors supporting P over factors supporting C): This argument simply passes all of the cases that were not excepted by AS3, i.e., that either don't have a P6 partition or whose P6 partition was undermined by either AS3-P3 or AS3-P4.

AS4 (P5, current case weaker exception): If P5 occurs with the current case having a unique differentiating factor for C, without AS4-P3 or AS4-P4 being entailed, then this past case instance is excepted and does not pass this argument filter for further CBR reasoning. If the past case has no P5 partition, then the past case is passed on for further CBR reasoning. If P5 occurs and is undermined by either the AS4-P3 or AS4-P4 partitions, then the past case is also passed on for further CBR reasoning.
AS4-P3 (P3, current case stronger partition): If P3 occurs with the current case having a unique differentiating factor for P and it has the same factor parent as in P5, then the AS4 claim is undermined and the past case instance is not excepted and, therefore, is passed along for further CBR argument reasoning." Examiner's note: the current case is split into 'pro' and 'con' factor partitions, and comparing the 'pro' and 'con' features of both cases (the current/similar case and the past/reference case) to determine whether the argument of the reference case still holds corresponds to generating a hypothesis-driven explanation theory.).

However, Yanosy does not teach: split the plurality of similar cases into a plurality of favoring cases and a plurality of disfavoring cases; select a disfavoring case of the plurality of disfavoring cases that is most similar to the reference case based on a similarity of factors; determine what factors were not taken into account in generating the argumentation.

On the other hand, Katsuda teaches split the plurality of similar cases into a plurality of favoring cases and a plurality of disfavoring cases (Katsuda, [Par. 0070], "When the similarity calculation result 132 is output from the similarity calculating unit 142 to the storing unit 130, the case dividing unit 143 refers to the similarity calculation result 132 and executes case division processing of dividing the other patients into two groups by classifying the other patients into either a similar case group or a dissimilar case group based on the similarity threshold 7th (step S203). The similar case group table 133a in which the patients who belong to the similar case group are indicated as a list of the patient IDs and the dissimilar case group table 133b in which the patients who belong to the dissimilar case group are indicated as a list of the patient IDs are created."); and select a disfavoring case of the plurality of disfavoring cases that is most similar to the reference case based on a similarity of factors (Katsuda, [Par. 0072], "The test unit 144 sequentially selects one patient ID-a listed in the similar case group table 133a and acquires the gene information 4g from the patient database 131 by using the selected patient ID-a. Next, the test unit 144 sequentially selects one patient ID-b listed in the dissimilar case group table 133b and acquires the gene information 4g from the patient database 131 by using the selected patient ID-b. The test unit 144 compares the two sets of gene information 4g of the patient ID-a and the patient ID-b and determines whether change in the expression is present or absent regarding each gene." Examiner's note: selecting the patient ID (case) in the dissimilar group).

Yanosy and Katsuda are analogous art because they share the same field of endeavor of generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the generate a knowledge-based generalization of the argumentation by learning a lower bound generalization and an upper bound generalization, and apply the argumentation to a plurality of cases similar to the reference case based on knowledge-based search and classification, as taught by Yanosy, to include the split the plurality of similar cases into a plurality of favoring cases and a plurality of disfavoring cases, and select a disfavoring case of the plurality of disfavoring cases that is most similar to the reference case based on a similarity of factors, as taught by Katsuda.
The modification would have been obvious because one of ordinary skill in the art would have been motivated to divide the case data into two groups (Katsuda, [Par.0070], "When the similarity calculation result 132 is output from the similarity calculating unit 142 to the storing unit 130, the case dividing unit 143 refers to the similarity calculation result 132 and executes case division processing of dividing the other patients into two groups by classifying the other patients into either a similar case group or a dissimilar case group based on the similarity threshold Tth (step S203). The similar case group table 133a in which the patients who belong to the similar case group are indicated as a list of the patient IDs and the dissimilar case group table 133b in which the patients who belong to the dissimilar case group are indicated as a list of the patient IDs are created."). However, neither Yanosy nor Katsuda teaches determining what factors were not taken into account in generating the argumentation. On the other hand, Coman teaches determining what factors were not taken into account in generating the argumentation (Coman, [Par.0093], "In accordance with certain embodiments, if a discrepancy and/or assertion cannot be verified by the system, a fact pattern response may be generated (for example, in block 420) that can include any information known about the transaction related to the product or service. For example, a customer communication may include the following utterance: "Why was I charged $25 for shipping? I was only charged $7 on all my previous orders!" In this example case, there may be two assertions: (1) the customer was charged $25 for shipping; and (2) the customer's previous orders were only charged $7 for shipping. Based on the utterance, the system may determine that there is an $18 discrepancy between the expected and perceived state.
The system may then attempt to verify the assertions/discrepancy by retrieving the customer's order history from a database. If, for example, the order history indicates that there were no $25 shipping charges on any previous orders associated with the customer, and that all previous orders were charged $7, the generated fact pattern response may reference the customer order history and provide a listing of recent orders, dates, shipping charges, etc. In this respect, the fact pattern response may provide information that will help the customer realize that they may have confused vendors." Examiner's note: identifying the factor (shipping cost) that was not taken into account in the previous conversation to generate the argument.) Yanosy, Katsuda and Coman are analogous art because they are from the same field of endeavor, generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teaching of Yanosy and Katsuda of accessing a reference case of a plurality of cases; generating argumentation that explains a phenomenon of the reference case by developing a predictive model; generating a knowledge-based generalization of the argumentation by learning a lower bound generalization and an upper bound generalization; applying the argumentation to a plurality of cases similar to the reference case based on knowledge-based search and classification; splitting the plurality of similar cases into a plurality of favoring cases and a plurality of disfavoring cases; and selecting a disfavoring case of the plurality of disfavoring cases that is most similar to the reference case based on a similarity of factors, as set forth above, to include determining what factors were not taken into account in generating the argumentation, as taught by Coman.
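Coman's shipping example reduces to two simple checks: the size of the discrepancy between an asserted value and the recorded one, and which factors the original argumentation never used. A minimal sketch under assumed names (none of this code is from Coman's disclosure):

```python
def assertion_discrepancy(perceived: float, expected: float) -> float:
    """Gap between the asserted value and the recorded one, e.g. a $25
    perceived shipping charge against a $7 history gives an $18 gap."""
    return abs(perceived - expected)

def unaccounted_factors(case_factors: set, used_factors: set) -> set:
    """Factors present in the case that were not taken into account
    when the argumentation was generated."""
    return case_factors - used_factors
```

For instance, `assertion_discrepancy(25.0, 7.0)` returns `18.0`, and `unaccounted_factors({"shipping_cost", "vendor"}, {"vendor"})` flags `shipping_cost` as the overlooked factor.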
The modification would have been obvious because one of ordinary skill in the art would have been motivated to resolve differences in terminology (Coman, [Par.0139], "The systems and methods disclosed herein may engage with the customer to establish common ground (e.g., resolve differences in terminology). For example, a customer utterance may take the form: "I enrolled in premium rewards yesterday! Why am I being charged shipping on the order I'm trying to place?" The systems and methods as disclosed here may be used to autonomously generate a response, such as: "Are you referring to the Gold Rewards tier?"."). Claim 12 is rejected for the same reasons as claim 1, since these claims recite the same limitations. Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Yanosy et al. (Pub. No. 20200234148, hereinafter Yanosy) in view of Katsuda et al. (Pub. No. 20170206315, hereinafter Katsuda), further in view of Coman et al. (Pub. No. 20210125190, hereinafter Coman), and further in view of Gerken et al. (Pub. No. 20170032262, hereinafter Gerken). Regarding claim 2, Yanosy teaches the system of claim 1, wherein when generating a knowledge-based generalization of the argumentation, the instructions (Yanosy, [Par.0056], "…As further depicted in the figure, the CBR reasoning outcomes includes strongest argument 131, strong argument 132, relevant argument—exception arguments undermined 133, relevant argument—no exception arguments 134, relevant argument failed—has exception arguments 135, and relevant argument failed—missing common factor arguments 136. The outcomes represent an argument strength preference for use of a particular past case, from the strongest, e.g., 131 to the minimally relevant, e.g., 134. Further, 135 and 136 are excepted from use for a similar decision argument due to failing the arguments or not having sufficient common factors.
The CBR reasoning outcomes will be described in more detail below." Examiner's note: the CBR reasoning outcomes are based on different levels of the argument, such as strongest argument, no exception arguments, … missing common factor arguments.) However, Yanosy does not teach, when executed by the processor, further causing the system to: learn an evidence collection rule for each argument that reduces the hypothesis to an evidence item; and search the plurality of reference cases, by a collection agent, for the evidence item. On the other hand, Gerken teaches, when executed by the processor, further causing the system to: learn an evidence collection rule for each argument that reduces the hypothesis to an evidence item (Gerken, [Par.0013], "The instructions then evaluate each hypothesis based on the set of collected pieces of evidence to determine a confidence value for the respective hypothesis. For each hypothesis failing to exceed a predefined, threshold confidence value, the instructions identify one or more missing pieces of additional evidence needed to help resolve the hypothesis, and isolate the common missing pieces of evidence from missing pieces of evidence relevant to only a single hypothesis. A weighted value—referred to herein as a "Value-Of-Information" (VOI) metric—is then formed by the instructions for each missing piece of evidence, and the instructions again search the data repository to find missing pieces of evidence based on the VOI and/or the applicability of the missing evidence across the hypothesis space, to reduce the uncertainty associated with those hypotheses which had earlier failed to meet the predefined threshold confidence values. Finally, the executed instructions refine any remaining hypotheses, as described above in this paragraph." Examiner's note: collecting more evidence to help resolve the hypothesis.)
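Gerken's evidence-collection loop, as quoted, weights each missing piece of evidence by its applicability across the weak hypotheses. A crude sketch of that ranking follows; the 0.7 threshold, the dict layout, and the counting heuristic are assumptions for illustration, not Gerken's actual VOI metric.

```python
from collections import Counter

def rank_missing_evidence(hypotheses: list, threshold: float = 0.7) -> list:
    """Collect the evidence items still missing from hypotheses below the
    confidence threshold, ranked by how many weak hypotheses each item
    could help resolve (a rough Value-Of-Information proxy)."""
    weak = [h for h in hypotheses if h["confidence"] < threshold]
    counts = Counter(e for h in weak for e in h["missing_evidence"])
    # most_common() orders by descending count, i.e. by applicability
    # across the hypothesis space.
    return [evidence for evidence, _ in counts.most_common()]
```

Evidence needed only by already-confident hypotheses never enters the ranking, which matches the quoted goal of reducing uncertainty for the hypotheses that failed the threshold.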
and search the plurality of reference cases, by a collection agent, for the evidence item (Gerken, [Par.0011], "One embodiment of this Hypothesis Orchestration System is a method involving receiving an inquiry or set of inquiries ("inquiry" for convenience), and performing an initial search of a data repository (or repositories—again, "repository", for convenience) to identify evidence relevant to that inquiry. With this initial-search evidence, a plurality of hypotheses is formed based on observation data and information derived from observation data identified as relevant to the inquiry—each hypothesis is a proposed solution to the inquiry. Based on these newly-generated hypotheses, observation data and information derived from observation data relevant to one or more of these hypotheses are formed into a set of collected pieces of evidence."). Yanosy, Katsuda, Coman and Gerken are analogous art because they are from the same field of endeavor, generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yanosy's generating of a knowledge-based generalization of the argumentation to include learning an evidence collection rule for each argument that reduces the hypothesis to an evidence item, and searching the plurality of reference cases, by a collection agent, for the evidence item, as taught by Gerken. The modification would have been obvious because one of ordinary skill in the art would have been motivated to improve the confidence associated with one or more of the plurality of hypotheses. Claim 13 is rejected for the same reasons as claim 2, since these claims recite the same limitations. Claims 3 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yanosy et al. (Pub. No. 20200234148, hereinafter Yanosy) in view of Katsuda et al. (Pub.
No. 20170206315, hereinafter Katsuda), further in view of Coman et al. (Pub. No. 20210125190, hereinafter Coman), and further in view of Kim et al. (Pub. No. 20180150609, hereinafter Kim). Regarding claim 3, Yanosy teaches the method of claim 1, but it does not teach wherein the predictive model includes a probabilistic inference network. On the other hand, Kim teaches wherein the predictive model includes a probabilistic inference network (Kim, [Par.0128], "In addition, the class prediction model generation unit 211 learns similar case clusters for each associated feature for the similar case cluster of a target feature, and generates a class prediction model for predicting the probability for the future value class of a target feature for each class based on the linear or nonlinear distribution of the data for each class. The class prediction model may predict the probability of each class based on a machine learning algorithm such as Deep Belief Network (DBN) or Convolutional Neural Network (CNN)."). Yanosy, Katsuda, Coman and Kim are analogous art because they are from the same field of endeavor, generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of claim 1, as taught by Yanosy, to include wherein the predictive model includes a probabilistic inference network, as taught by Kim.
The modification would have been obvious because one of ordinary skill in the art would have been motivated to predict the probability for each class (Kim, [Par.0127], "In addition, the prediction model learning unit 210 includes a class (probability) prediction model generation unit 211 for learning a similar case cluster for each associated feature associated with the target feature to predict the probability for each class and a future value prediction model generation unit 212 for predicting a future value for each class."). Claim 20 is rejected for the same reasons as claim 3, since these claims recite the same limitations. Claims 4-8 and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yanosy et al. (Pub. No. 20200234148, hereinafter Yanosy) in view of Katsuda et al. (Pub. No. 20170206315, hereinafter Katsuda), further in view of Coman et al. (Pub. No. 20210125190, hereinafter Coman), and further in view of Schum et al. (NPL: "Substance-Blind Classification of Evidence for Intelligence Analysis", hereinafter Schum). Regarding claim 4, Yanosy teaches the system of claim 1, but it does not teach wherein the predictive model includes a Wigmorean probabilistic inference network. On the other hand, Schum teaches wherein the predictive model includes a Wigmorean probabilistic inference network (Schum, [Abstract], "This paper presents several substance-blind classifications of evidence which are based on these inferential characteristics and facilitate the clarification of many uncertainties lurking in intelligence analysis. It also shows how the Disciple-LTA cognitive assistant uses these classifications to develop Wigmorean probabilistic inference networks for assessing the likelihood of hypotheses."). Yanosy, Katsuda, Coman and Schum are analogous art because they are from the same field of endeavor, generating case data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of claim 1, as taught by Yanosy, to include wherein the predictive model includes a Wigmorean probabilistic inference network, as taught by Schum. The modification would have been obvious because one of ordinary skill in the art would have been motivated to develop the arguments that link evidence to hypotheses by establishing and fusing the relevance, believability and inferential force or weight of a wide variety of items of evidence of different types (Schum, [Page 1, Fig. 1], "In this paper we present a foundation for such an evidence categorization scheme that will tell us what kinds and combinations of evidence we have in any intelligence analysis regardless of the substance or content of the evidence and the objectives of the analysis. First, we present a general approach to evidence-based hypothesis analysis which consists in developing a Wigmorean probabilistic inference network that shows how evidence is linked to a hypothesis through a potentially very complex argument that establishes and fuses the relevance, the believability and the inferential force or weight of a wide variety of items of evidence of different types."). Regarding claim 5, Yanosy teaches the system of claim 1, but it does not teach wherein the argumentation includes at least one of a hypothesis or a conjunction of sub-hypotheses. On the other hand, Schum teaches wherein the argumentation includes at least one of a hypothesis or a conjunction of sub-hypotheses (Schum, [Page 1, Section I, Fig. 1], "'Evidence' is a word of relation used in the context of argumentation: e.g. "A is evidence of B". In that context information has a potential role as relevant evidence if it tends to support or tends to negate, directly or indirectly, some hypothesis about a contested matter.
One draws inferences from evidence in order to prove or disprove a hypothesis. The framework is argument, the process is proof, and the engine is inferential reasoning from information [1]. Thus evidence differs from the words data or items of information, since data or items of information only become evidence when their relevance is established regarding some hypothesis at issue."). Yanosy, Katsuda, Coman and Schum are analogous art because they are from the same field of endeavor, generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of claim 1, as taught by Yanosy, to include wherein the argumentation includes at least one of a hypothesis or a conjunction of sub-hypotheses, as taught by Schum. The modification would have been obvious because one of ordinary skill in the art would have been motivated to develop the arguments that link evidence to hypotheses by establishing and fusing the relevance, believability and inferential force or weight of a wide variety of items of evidence of different types (Schum, [Page 1], "In this paper we present a foundation for such an evidence categorization scheme that will tell us what kinds and combinations of evidence we have in any intelligence analysis regardless of the substance or content of the evidence and the objectives of the analysis. First, we present a general approach to evidence-based hypothesis analysis which consists in developing a Wigmorean probabilistic inference network that shows how evidence is linked to a hypothesis through a potentially very complex argument that establishes and fuses the relevance, the believability and the inferential force or weight of a wide variety of items of evidence of different types.").
Regarding claim 6, Yanosy teaches the system of claim 1, but it does not teach wherein the hypothesis to be assessed is decomposed into simpler hypotheses by considering both favoring arguments and disfavoring arguments. On the other hand, Schum teaches wherein the hypothesis to be assessed is decomposed into simpler hypotheses by considering both favoring arguments and disfavoring arguments (Schum, [Page 2, Section 2, Fig. 1], "A complex hypothesis is first reduced to simpler and simpler hypotheses and the simplest hypotheses are assessed through evidence analysis. For example, in Fig. 1, the hypothesis H1 (or problem [P1]) is reduced to three simpler hypotheses, H11, H12, and H13 (problems [P2], [P3] and [P4]). Each of these hypotheses is assessed by considering both favoring evidence and disfavoring evidence (i.e., problems [P5] and [P6]). Let us assume that there are two items of favoring evidence for H11: E1 and E2. For each of them (e.g., E1) Disciple-LTA assesses the extent to which it favors the hypothesis H11 (i.e., [P7]). This requires assessing both the relevance of E1 to H11 (problem [P9]) and the believability of E1 (problem [P10]). Let us assume that Disciple-LTA has obtained the following solutions for these two last problems:"). Yanosy, Katsuda, Coman and Schum are analogous art because they are from the same field of endeavor, generating case data. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of claim 1, as taught by Yanosy, to include wherein the hypothesis to be assessed is decomposed into simpler hypotheses by considering both favoring arguments and disfavoring arguments, as taught by Schum.
The modification would have been obvious because one of ordinary skill in the art would have been motivated to develop the arguments that link evidence to hypotheses by establishing and fusing the relevance, believability and inferential force or weight of a wide variety of items of evidence of different types.
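The Wigmorean-style decomposition Schum describes (H1 reduced to H11-H13, each assessed from favoring and disfavoring evidence via relevance and believability) can be caricatured numerically. The product fusion and the min-over-sub-hypotheses rule below are illustrative choices made for this sketch, not Schum's actual calculus.

```python
def inferential_force(relevance: float, believability: float) -> float:
    """Force of one evidence item: its relevance discounted by its
    believability (product fusion, chosen here for illustration)."""
    return relevance * believability

def assess_sub_hypothesis(favoring: list, disfavoring: list) -> float:
    """Net support for a sub-hypothesis: strongest favoring force minus
    strongest disfavoring force, floored at zero."""
    pro = max((inferential_force(r, b) for r, b in favoring), default=0.0)
    con = max((inferential_force(r, b) for r, b in disfavoring), default=0.0)
    return max(0.0, pro - con)

def assess_hypothesis(sub_assessments: list) -> float:
    """A hypothesis reduced to a conjunction of simpler hypotheses is
    only as strong as its weakest sub-hypothesis."""
    return min(sub_assessments)
```

With two favoring items (relevance, believability) of (0.9, 0.8) and (0.5, 0.9) and one disfavoring item of (0.4, 0.5), the sub-hypothesis nets roughly 0.52, and that value caps the parent hypothesis under the min rule.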

Prosecution Timeline

Jul 28, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572779
INTERFACE NEURAL NETWORK
2y 5m to grant Granted Mar 10, 2026
Patent 12541705
SYSTEM AND METHOD FOR FACILITATING A MACHINE LEARNING MODEL REBUILD
2y 5m to grant Granted Feb 03, 2026
Patent 12511531
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Patent 12493804
METHOD OF BUILDING AND OPERATING DECODING STATUS AND PREDICTION SYSTEM
2y 5m to grant Granted Dec 09, 2025
Patent 12493774
NEURAL NETWORK OPERATION MODULE AND METHOD
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
53%
With Interview (+5.0%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 63 resolved cases by this examiner. Grant probability derived from career allow rate.
