Prosecution Insights
Last updated: April 19, 2026
Application No. 19/219,939

REASONING ENGINE SERVICES

Non-Final OA — §101, §102, §103, Double Patenting
Filed: May 27, 2025
Examiner: WONG, LUT
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nant Holdings IP, LLC
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 92%
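As a quick sanity check on the headline figures, the "+15.0% Interview Lift" reported below appears to be the simple percentage-point difference between the with-interview and baseline grant probabilities. This is a hypothetical reconstruction; the dashboard's actual model is not shown.

```python
# Hypothetical reconstruction of how the dashboard's headline figures relate.
# Assumption: the interview lift is the percentage-point difference between
# the with-interview probability (92%) and the baseline probability (77%).
base_grant_prob = 0.77   # baseline grant probability from the dashboard
with_interview = 0.92    # grant probability with examiner interview

lift_points = round((with_interview - base_grant_prob) * 100, 1)
print(f"Interview lift: +{lift_points} points")
```

Under that assumption the arithmetic reproduces the quoted +15.0% figure exactly.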

Examiner Intelligence

Career Allow Rate: 77% (463 granted / 598 resolved), above average, +22.4% vs TC avg
Interview Lift: +15.0% (moderate) among resolved cases with interview
Typical Timeline: 3y 6m avg prosecution; 23 applications currently pending
Career History: 621 total applications across all art units

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§102: 28.6% (-11.4% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 598 resolved cases
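The career allow rate can be recomputed from the granted/resolved counts above, and each statute-specific figure can be checked against the Tech Center average it implies. This is a sketch under the assumption that each "vs TC avg" figure is a simple percentage-point difference; the dashboard's actual methodology is not shown.

```python
# Recompute the examiner's career allow rate from the quoted counts, and
# recover the implied Tech Center averages from the quoted deltas.
# Assumption (not stated by the dashboard): each "vs TC avg" value is a
# percentage-point difference from a Tech Center baseline.
granted, resolved = 463, 598
career_allow_rate = round(100 * granted / resolved, 1)  # percentage

statute_rates = {"101": 18.7, "102": 28.6, "103": 32.6, "112": 11.3}
deltas = {"101": -21.3, "102": -11.4, "103": -7.4, "112": -28.7}
implied_tc_avg = {s: round(statute_rates[s] - deltas[s], 1) for s in statute_rates}

print(career_allow_rate)   # career allow rate in percent
print(implied_tc_avg)      # implied TC baseline per statute
```

Notably, all four statutes imply the same 40.0% baseline, which is consistent with the chart note's description of a single "Tech Center average estimate" rather than per-statute averages.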

Office Action

Rejections: §101, §102, §103, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

At least Claims 53, 73, and 74 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 31, 41, 51, and 52 of copending Application No. 18/984,946 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation.

Instant application 19/219,939, claim 53:

53. (New) A computer-based reasoning system comprising: a data interface configured to acquire environment data representing aspects of an environment; and at least one inference engine having at least one processor and at least one computer readable non-transitory memory storing software instructions, the at least one inference engine coupled with the data interface and that performs the following operations upon execution of the software instructions: receive an inquiry relating to the aspects of the environment; recognize target data objects from the aspects of the environment, each target data object having object attributes; generate at least one hypothesis according to at least one reasoning rule set operating based on the environment data or object attributes of the target data objects, the at least one hypothesis representing a suspected correlation among the recognized target data objects; and cause a computing device to render an output related to the at least one hypothesis and reasoning steps taken according to the at least one reasoning rule set that generated the output related to the at least one hypothesis.

Reference application 18/984,946, claim 31:

31. (New) A computer-based reasoning system, comprising: at least one computer-readable non-transitory memory storing software instructions and available reasoning rule sets; a data interface configured to acquire environment data from at least a news outlet; and at least one computer-based inference engine coupled with the data interface and the at least one memory, and that performs the following operations upon execution of the software instructions: recognizing aspects in the environment data as target objects, the target objects having object attributes (Examiner Note: in order to recognize an aspect, it must first be received); selecting at least one reasoning rule set from the available reasoning rule sets as a function of the environment data and object attributes of the target objects; establishing at least one hypothesis according to the selected at least one reasoning rule set, the hypothesis representing a suspected correlation among the target objects; deriving at least one merit score associated with the at least one hypothesis based at least in part on the environment data; and rendering, via a presentation module, the at least one hypothesis according to the at least one merit score.

Reference application 18/984,946, claim 41:

41. (New) The system of claim 31, wherein the presentation module is further configured to present reasoning steps taken to generate the hypothesis.

Claims 73-74 correspond to claims 51-52 and are rejected under the same rationale. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

At least Claims 53, 73, and 74 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 31, 50, and 51 of copending Application No. 18/220,754 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation.

Instant application 19/219,939, claim 53: as reproduced above.

Reference application 18/220,754, claim 31:

31. (Currently Amended) A computer-based remote patient data monitoring engine, comprising: at least one computer-readable memory storing rules engine software instructions; a data interface coupled with at least one remote biometric sensor associated with a patient; and at least one processor coupled with the data interface and the at least one computer-readable memory, and upon execution of the rules engine software instructions performs operations to: acquire, over a network, environmental data associated with the patient and at least partially including sensor data from the at least one remote biometric sensor acquired via the data interface, wherein the sensor data records real-time or near real-time changes in a body of the patient; recognize aspects from the environmental data as target objects including real-time changes in the patient's body recorded by the at least one remote biometric sensor; select, from the at least one computer-readable memory, at least one reasoning rule set as a function of the recognized target objects based on a concept mapping of the recognized target objects to one or more types of reasoning, wherein the concept mapping comprises traversing algorithmic structures or extrapolation paths selected using pointers to the one or more types of reasoning, wherein selection of the pointers is weighted according to attributes of the recognized target objects including the real-time recorded changes in the patient's body; generate, according to the selected at least one reasoning rule set, at least one hypothesis regarding a correlation among the target objects with respect to an outcome; and cause an output device to present in real-time or near real-time the at least one hypothesis and at least some environmental data associated with the recognized target objects as relating to the outcome, wherein the at least one hypothesis and the selection of pointers relate to a positive or a negative outcome of a treatment protocol applied to the patient; and recommend or alter an action with respect to the treatment protocol based on the at least one hypothesis and the selection of pointers.

Claims 73-74 correspond to claims 50-51 and are rejected under the same rationale. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

At least Claims 53, 73, and 74 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 31, 50, and 51 of copending Application No. 17/964,015 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/220,754 as shown above. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 11,900,276. Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/984,946 as shown above.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 10,762,433. Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/984,946 as shown above.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 10,255,552. Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/984,946 as shown above.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 9,530,100. Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/984,946 as shown above.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 9,262,719. Although the claims at issue are not identical, they are not patentably distinct from each other because of anticipation for the same reasons as 18/984,946 as shown above.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 12,340,320. Although the claims at issue are not identical, they are not patentably distinct from each other because of obviousness.

At least Claims 53, 73, and 74 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 9,576,242. Although the claims at issue are not identical, they are not patentably distinct from each other because of obviousness.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 53-74 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 53:

Step 1: The claim is directed to a statutory category.
Step 2A Prong 1: The claim recites the following limitations: recognize target data objects from the aspects of the environment, each target data object having object attributes (recognizing objects at a high level is an observation, evaluation, judgment, or opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper); and generate at least one hypothesis according to at least one reasoning rule set operating based on the environment data or object attributes of the target data objects, the at least one hypothesis representing a suspected correlation among the recognized target data objects (hypothesis generation/establishment at a high level is an observation, evaluation, judgment, or opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper). The claim recites an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites the following additional elements:

53. (New) A computer-based reasoning system comprising: a data interface configured to acquire environment data representing aspects of an environment (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g)); and at least one inference engine having at least one processor and at least one computer readable non-transitory memory storing software instructions, the at least one inference engine coupled with the data interface and that performs the following operations upon execution of the software instructions (applying it as discussed in MPEP 2106.05(f)): receive an inquiry relating to the aspects of the environment (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g)); cause a computing device to render an output related to the at least one hypothesis and reasoning steps taken according to the at least one reasoning rule set that generated the output related to the at least one hypothesis (amounts to mere insignificant application, an insignificant extra-solution activity as discussed in MPEP 2106.05(g)).

Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:

53. (New) A computer-based reasoning system comprising: a data interface configured to acquire environment data representing aspects of an environment (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is well-understood, routine, and conventional activity of receiving or gathering data as identified by the courts in MPEP 2106.05(d)); and at least one inference engine having at least one processor and at least one computer readable non-transitory memory storing software instructions, the at least one inference engine coupled with the data interface and that performs the following operations upon execution of the software instructions (applying it as discussed in MPEP 2106.05(f)): receive an inquiry relating to the aspects of the environment (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is well-understood, routine, and conventional activity of receiving or gathering data as identified by the courts in MPEP 2106.05(d)); cause a computing device to render an output related to the at least one hypothesis and reasoning steps taken according to the at least one reasoning rule set that generated the output related to the at least one hypothesis (amounts to mere insignificant application, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is extra-solution activity of well-understood, routine, and conventional operation of presentation of offers or statistics under MPEP 2106.05(d)).

The claim is not patent eligible.

Claim 54:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim and further recites: 54. (New) The system of claim 53, wherein the reasoning steps include mapping to a known concept (concept mapping at a high level is an observation, evaluation, judgment, or opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites no additional elements.

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 55:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 55. (New) The system of claim 53, wherein the reasoning steps include a type of reasoning (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 56:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim and further recites: 56. (New) The system of claim 53, wherein the reasoning steps include a correlation made among target data objects (correlation making at a high level is an observation, evaluation, judgment, or opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
The claim recites no additional elements.

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 57:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim and further recites: 57. (New) The system of claim 53, wherein the reasoning steps include inferences among target data objects (inference at a high level is an observation, evaluation, judgment, or opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites no additional elements.

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 58:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 58. (New) The system of claim 53, wherein the output comprises ranked hypotheses (amounts to mere insignificant application, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is extra-solution activity of well-understood, routine, and conventional operation of presentation of offers or statistics under MPEP 2106.05(d)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Claim 59:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 59. (New) The system of claim 58, wherein the ranked hypotheses are based on merit scores assigned to individual hypotheses (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 60:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 60. (New) The system of claim 59, wherein the merit scores are multi-dimensional, representing different aspects of relevance (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 61:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 61. (New) The system of claim 53, wherein the output includes the reasoning steps presented in a browser on the computing device (amounts to mere insignificant application, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is extra-solution activity of well-understood, routine, and conventional operation of presentation of offers or statistics under MPEP 2106.05(d)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 62:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 62. (New) The system of claim 53, wherein the reasoning steps comprise a mapping to a namespace (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 63:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 63. (New) The system of claim 53, wherein the at least one inference engine interacts with the environment via the data interface (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).
Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 64:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 64. (New) The system of claim 63, wherein the data interface includes at least one of the following: a mobile device, a sensor, a router, a switch, an appliance, a non-mobile device, or a garment (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 65:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 65. (New) The system of claim 63, wherein the data interface comprises an application program interface (API) (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 66:

Step 1: The claim is directed to a statutory category.
Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 66. (New) The system of claim 53, wherein the at least one reasoning rule set includes deductive reasoning (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 67:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 67. (New) The system of claim 53, wherein the at least one reasoning rule set includes abductive reasoning (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 68:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 68. (New) The system of claim 53, wherein the operations further include conducting reflexive reasoning and the reasoning steps include the reflexive reasoning (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 69:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 69. (New) The system of claim 53, wherein at least one reasoning rule set comprises inductive reasoning (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)).

Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim 70:

Step 1: The claim is directed to a statutory category.

Step 2A Prong 1: The claim recites the abstract idea of the parent claim and further recites:
construct a validation plan for the at least one hypothesis (validation plan construction in high level is an observation, evaluation, judgment, opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper); and derive a merit score for the at least one hypothesis based on the validation data (merit score derivation/calculation in high level is an observation, evaluation, judgment, opinion mental process which can reasonably be performed in one’s mind with the aid of pencil and paper); Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 70. (New) The system of claim 53, further comprising a validation module coupled with the data interface and that performs the following validation operations upon execution of the software instructions (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is well understood, routine and convention activity of receiving or gathering data as identified by the court in MPEP 2106.05(d): acquire validation data according to the validation plan (amounts to mere data gathering, an insignificant extra-solution activity as discussed in MPEP 2106.05(g), which is well understood, routine and convention activity of receiving or gathering data as identified by the court in MPEP 2106.05(d)). Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The judicial exceptions are not integrated into a practical application. The claim is not patent eligible. Claim 71: Step 1: the claim is directed to statuary category. Step 2A Prong 1: The claim recites the abstract idea of parent claim. Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 71. 
(New) The system of claim 70, wherein the output further includes the validation plan and the merit score (amounts to an insignificant extra-solution activity as discussed in MPEP 2106.05(g), namely the well-understood, routine, and conventional operation of presenting offers or statistics under MPEP 2106.05(d)). Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The judicial exceptions are not integrated into a practical application. The claim is not patent eligible. Claim 72: Step 1: the claim is directed to a statutory category. Step 2A Prong 1: The claim recites the abstract idea of the parent claim. Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites additional element(s): 72. (New) The system of claim 53, wherein the inference engine is a member of a multi-layered set of inference engines (amounts to generally linking the abstract ideas to the technological environment or field of use as discussed in MPEP 2106.05(h)). Step 2B: As shown above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The judicial exceptions are not integrated into a practical application. The claim is not patent eligible. Claim 73 is a method claim having similar limitations as claim 53 and is rejected under the same rationale. Claim 74 is a non-transitory computer-readable medium claim having similar limitations as claim 53 and is rejected under the same rationale. The additional element in claim 74 is "A non-transitory computer-readable medium comprising computer-readable instructions thereon, which, when executed by a processor, configure the processor as a computer-based reasoning system operable to" (amounts to performing the generic function of executing stored instructions (MPEP 2106.05(f))).
Accordingly, the additional elements do not integrate the abstract idea into a practical application and are not sufficient to amount to significantly more than the abstract idea. Therefore, the claims are directed to an abstract idea. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 53-61, 63-65, 67, and 69-74 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Talbot et al. (US 2006/0112048 A1). 53. (New) Talbot anticipates a computer-based reasoning system comprising: a data interface configured to acquire environment data representing aspects of an environment (See Fig. 4-118 on information extraction. [0024] In the illustrated example, evidence and stories can be input into the knowledge base 114 in a number of ways. For example, an information extraction component 118 can be used to reduce an evidence source, such as a text document or a transcripted conversation, into a desired evidence format. This evidence can be linked with existing stories in the knowledge base or new stories can be assembled in response to the evidence.
The information extraction component 118 breaks down an input text segment into individual words or phrases, interprets the context and meaning of the various words or phrases, and uses the extracted information to generate a template representing the text segment. For example, the information extraction component 118 can look for details relating to an event described in the document, such as the nature of the event, the cause or motivation for the event, the mechanism of the event, the identity of an actor, the location of the event, the time or date of the event, and the magnitude of the event. Each of these details can be added to a template related to the text segment. In accordance with one aspect of the invention, the information extraction component 118 can look for hedge words (e.g., maybe, probably, certainly, never) within the text segment. The information extraction component 118 can use a co-referencing routine to determine what nouns relate to a given hedge word, and use this information to determine the weight of the evidence associated with the template, in the form of belief values and disbelief values. Examiner Note: event information (e.g. evidence and stories) that is inputted/acquired/extracted is environment data representing aspects of an environment. The details of an event read on environment data representing aspects of an environment); and at least one inference engine having at least one processor and at least one computer readable non-transitory memory storing software instructions, the at least one inference engine coupled with the data interface and that performs the following operations upon execution of the software instructions (See Fig. 4-102-107. [0027] The plurality of inferencing algorithms 102-107 can include any of a variety of appropriate algorithms for evaluating stored data to determine significant patterns and trends.
Specifically, the inferencing algorithms 102-107 search the knowledge base for unanticipated story fragments, comprising at least a single new hypothesis, and more typically of story fragments comprising linked hypotheses and any associated evidence. In the illustrated example, the plurality of inferencing algorithms include an inductive reasoner 102 that computes changes in rules based on new evidence within the knowledge base. The change is based on rule induction, using the old evidence supporting the old structure along with exceptions to the old rules in the existing evidence to induce new rules that better account for the entire body of old and the new evidence. In essence, new rules defining a revised network structure are computed from an aggregate of old and new evidence. In one implementation, the Weka algorithm can be utilized within the inductive reasoner 102. [0046] The computer system 300 can include a hard disk drive 314, a magnetic disk drive 316, e.g., to read from or write to a removable disk 318, and an optical disk drive 320, e.g., for reading a CD-ROM or DVD disk 322 or to read from or write to other optical media. The hard disk drive 314, magnetic disk drive 316, and optical disk drive 320 are connected to the system bus 306 by a hard disk drive interface 324, a magnetic disk drive interface 326, and an optical drive interface 334, respectively. The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, and computer-executable instructions for the computer system 300. Although the description of computer-readable media above refers to a hard disk, a removable magnetic disk and a CD, other types of media which are readable by a computer, may also be used. For example, computer executable instructions for implementing systems and methods described herein may also be stored in magnetic cassettes, flash memory cards, digital video disks and the like. Examiner Note: See Fig. 
4 on module couplings): receive an inquiry relating to the aspects of the environment ([0017] FIG. 2 illustrates a functional block diagram of an artificial intelligence system 20 comprising an assisted decision making system 22 utilizing automated discovery of unknown unknowns. The assisted decision making system 22 includes at least one associated story of interest 24. In accordance with an aspect of the invention, a story of interest comprises an executable belief network augmented by one or more characteristics of the hypotheses comprising the belief network, the evidence, and the content from which the evidence was extracted. For example, the characteristics of a given hypothesis can include the answers to the so-called "reporter's questions" for items of evidence supporting the hypothesis (e.g., the source of the evidence, an associated location, an associated time of occurrence, an associated actor, etc.). It will be appreciated that each story of interest will relate to a question of interest to a decision maker utilizing the assisted decision making system 22. [0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. 
Examiner Note: each story fragment is an inquiry related to a story of interest, and the inquiry is the question); recognize target data objects from the aspects of the environment, each target data object having object attributes ([0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment; [0023] The plurality of inferencing systems 102-107 utilize data from an associated knowledge base 114 and, optionally, external sources 115. It will be appreciated that the stored data within the knowledge base can include current instantiations of stories of interest 116 and 117 associated with the assisted decision making systems 110 and 112. The external sources can include general knowledge bases such as Cyc or WordNET.
In accordance with an aspect of the present invention, the knowledge base 114 comprises a plurality of stories, where each story comprises an executable belief network comprising at least one hypothesis, evidence supporting the at least one hypothesis, and a reference (e.g., a pointer) to the context from which the evidence was gathered. Each story is executable, such that it can produce mathematically consistent results in response to any change in its associated evidence, belief values, or weights. Accordingly, the stories can be updated and propagated to multiple decision algorithms in real time, allowing for a flexible exchange between a large number of decision algorithms or analysts. Examiner Note: each story is a target object; the evidence and reference are attributes); generate at least one hypothesis according to at least one reasoning rule set operating based on the environment data or object attributes of the target data objects, the at least one hypothesis representing a suspected correlation among the recognized target data objects ([0017] FIG. 2 illustrates a functional block diagram of an artificial intelligence system 20 comprising an assisted decision making system 22 utilizing automated discovery of unknown unknowns. The assisted decision making system 22 includes at least one associated story of interest 24. In accordance with an aspect of the invention, a story of interest comprises an executable belief network augmented by one or more characteristics of the hypotheses comprising the belief network, the evidence, and the content from which the evidence was extracted. For example, the characteristics of a given hypothesis can include the answers to the so-called "reporter's questions" for items of evidence supporting the hypothesis (e.g., the source of the evidence, an associated location, an associated time of occurrence, an associated actor, etc.). 
It will be appreciated that each story of interest will relate to a question of interest to a decision maker utilizing the assisted decision making system 22. [0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment. Examiner Note: degree of resemblance represents a suspected correlation among the recognized target data objects); and cause a computing device to render an output related to the at least one hypothesis and reasoning steps taken according to the at least one reasoning rule set that generated the output related to the at least one hypothesis (Fig. 4-136, 138. [0032] The arbitrators 126 and 128 evaluate the story fragments provided by the plurality of decision algorithms 102-107 to determine if a given story fragment is relevant to respective sets of one or more stories of interest 116 and 117. 
Specifically, an arbitrator (e.g., 126) examines each story fragment and determines if it is sufficiently related to any of its associated stories of interest (e.g., 116) to warrant inclusion of the story fragment in the related story of interest. For example, the arbitrators 126 and 128 can compare the hypotheses, evidence, and links within a story fragment and its associated characteristics with the hypotheses and associated characteristics of a given story of interest. It will be appreciated that the arbitrators 126 and 128 can evaluate the story fragments individually or combine related story fragments from multiple inferencing algorithms to provide a more complete story fragment for evaluation. [0033] Where a threshold level of similarity is found, the story fragment can be provided to human analysts through associated user interfaces 136 and 138. A user interface (e.g., 136) can include a graphical user interface that allows the analyst to quickly review the pertinent portions of the story fragment and determine its relationship to the story of interest. If the human analyst agrees that the story fragment is relevant to the story of interest, he or she can add information to the story fragment and incorporate the story fragment into the story of interest. A fusion engine 140 can mathematically reconcile the story of interest in light of the added story fragments to allow the analyst to see the impact of the story fragment. [0034] FIG. 5 illustrates a functional block diagram of an exemplary graphic user interface (GUI) 150 that can be utilized in an assisted decision making system in accordance with the present invention. The GUI 150 includes an editor 152 that allows a user to view and edit a structure argument. The editor 152 has four associated high-level functions, an editing function 154, a visualization function 156, a belief propagation function 158, and an administrative function 160. 
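The threshold-based arbitration quoted in paragraphs [0032]-[0033] above can be sketched as follows. This is a hypothetical illustration only (Talbot discloses no source code); the Jaccard overlap of hypothesis sets stands in for whatever similarity measure the arbitrators 126 and 128 actually apply, and all names and values are illustrative.

```python
# Hypothetical sketch of the arbitrator logic in Talbot paras.
# [0032]-[0033]: each story fragment is compared against the stories
# of interest and forwarded onward only when a similarity threshold
# is met. Hypotheses are modeled as plain string sets.

def similarity(fragment_hypotheses: set, story_hypotheses: set) -> float:
    """Degree of resemblance as hypothesis overlap (Jaccard index)."""
    if not fragment_hypotheses and not story_hypotheses:
        return 0.0
    overlap = fragment_hypotheses & story_hypotheses
    union = fragment_hypotheses | story_hypotheses
    return len(overlap) / len(union)

def arbitrate(fragment: set, stories: list, threshold: float = 0.5) -> list:
    """Return the stories of interest the fragment is relevant to;
    fragments below the threshold are rejected (step 258 in Fig. 6)."""
    return [s for s in stories if similarity(fragment, s) >= threshold]

# A fragment sharing two of three hypotheses with a story of interest
# scores 2/3 and passes at threshold 0.5; a disjoint fragment scores 0.
story_of_interest = {"h1", "h2", "h3"}
passing = arbitrate({"h1", "h2"}, [story_of_interest])
rejected = arbitrate({"h4"}, [story_of_interest])
```

Combining related fragments before evaluation, as the reference describes, would simply union their hypothesis sets before calling `arbitrate`.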
[0027] The plurality of inferencing algorithms 102-107 can include any of a variety of appropriate algorithms for evaluating stored data to determine significant patterns and trends. Specifically, the inferencing algorithms 102-107 search the knowledge base for unanticipated story fragments, comprising at least a single new hypothesis, and more typically of story fragments comprising linked hypotheses and any associated evidence. In the illustrated example, the plurality of inferencing algorithms include an inductive reasoner 102 that computes changes in rules based on new evidence within the knowledge base. The change is based on rule induction, using the old evidence supporting the old structure along with exceptions to the old rules in the existing evidence to induce new rules that better account for the entire body of old and the new evidence. In essence, new rules defining a revised network structure are computed from an aggregate of old and new evidence. In one implementation, the Weka algorithm can be utilized within the inductive reasoner 102. [0019] Each of the plurality of inferencing algorithms 28 and 30 utilize data, including formatted evidence, executable stories, and story fragments from an associated knowledge base 32 to produce story fragments. It will be appreciated that the inferencing algorithms 28 and 30 are not limited to retrieving data from the knowledge base, but can also access data from one or more external sources (e.g., Cyc, WordNET, and similar knowledge bases). In one implementation, the plurality of inferencing algorithms 28 and 30 can include an abductive reasoning algorithm that compiles the best explanation for a body of data in the form of a rule tree. Similarly, the plurality of decision algorithms 24 and 26 can comprise an unsupervised clustering algorithm that attempts to form clusters from the data to determine new hypotheses for the decision making system 22. 
Examiner Note: "reasoning rule set" is not further defined and reads on any inferencing algorithm, such as the inductive reasoner 102, which uses induction rules, or the abductive reasoner 105, which forms a rule tree). 54. (New) The system of claim 53, wherein the reasoning steps include mapping to a known concept ([0015] FIG. 1 illustrates a number of categories of knowledge considered by decision makers in the form of a knowledge pyramid 10. At the peak 12 of the knowledge pyramid 10 is the knowledge that is both known to the decision maker and effectively incorporated into a story of interest, referred to as "known knowns." Put simply, known knowns are information that an organization is aware of and is utilizing effectively. On the next level 14 of the pyramid are "unknown knowns". An unknown known is an available item of information that has not been incorporated into a story of interest because its relationship to the story has not yet been realized or because the information has not been communicated to the decision maker from associated decision makers. Unknown knowns are information that an organization knows but has not yet effectively utilized. [0016] On a third level 16 of the pyramid is the information that a decision maker understands would be pertinent to a story of interest, but the information is not available. Such an item of information is referred to as a "known unknown." Known unknowns represent the information that an organization is aware that it lacks. Accordingly, the decision making process can be adapted to account for the missing data, for example, by assuming the worst possible instantiation of the missing data. The final level 18 of the pyramid represents information that a decision maker does not realize is missing from a story of interest. This information is referred to as "unknown unknowns," and represents the facts that an organization does not know it doesn't know.
It will be appreciated that unknown unknowns can have a significant impact on the effectiveness of a decision maker, as it is impossible to account for their effect on the decision making process with any degree of precision). 55. (New) The system of claim 53, wherein the reasoning steps include a type of reasoning ([0027] The plurality of inferencing algorithms 102-107 can include any of a variety of appropriate algorithms for evaluating stored data to determine significant patterns and trends. Specifically, the inferencing algorithms 102-107 search the knowledge base for unanticipated story fragments, comprising at least a single new hypothesis, and more typically of story fragments comprising linked hypotheses and any associated evidence. In the illustrated example, the plurality of inferencing algorithms include an inductive reasoner 102 that computes changes in rules based on new evidence within the knowledge base. The change is based on rule induction, using the old evidence supporting the old structure along with exceptions to the old rules in the existing evidence to induce new rules that better account for the entire body of old and the new evidence. In essence, new rules defining a revised network structure are computed from an aggregate of old and new evidence. In one implementation, the Weka algorithm can be utilized within the inductive reasoner 102. [0019] Each of the plurality of inferencing algorithms 28 and 30 utilize data, including formatted evidence, executable stories, and story fragments from an associated knowledge base 32 to produce story fragments. It will be appreciated that the inferencing algorithms 28 and 30 are not limited to retrieving data from the knowledge base, but can also access data from one or more external sources (e.g., Cyc, WordNET, and similar knowledge bases). 
In one implementation, the plurality of inferencing algorithms 28 and 30 can include an abductive reasoning algorithm that compiles the best explanation for a body of data in the form of a rule tree. Similarly, the plurality of decision algorithms 24 and 26 can comprise an unsupervised clustering algorithm that attempts to form clusters from the data to determine new hypotheses for the decision making system 22. Examiner Note: the plurality of inference algorithms are of various types). 56. (New) The system of claim 53, wherein the reasoning steps include a correlation made among target data objects ([0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment. Examiner Note: degree of resemblance represents a correlation among the recognized target data objects). 57. 
(New) The system of claim 53, wherein the reasoning steps include inferences among target data objects ([0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment. Examiner Note: story fragments and stories of interest are among the target data objects). 58. (New) The system of claim 53, wherein the output comprises ranked hypotheses ([0030] An analogical reasoner 106 can examine the similarity between the present state of a story and past successful decision networks. The analogical reasoner 106 finds successful cases in the knowledge base that are most similar to the present state of a story and suggests differences in hypotheses based on the successful cases. A link analysis component 107 can be used to compute link values between hypotheses based on the characteristics of the hypotheses.
When new evidence creates a drastic change in the strength of a link or provides the basis for a new link, the new link data can be provided as an unknown unknown. Examiner Note: "most similar" indicates a ranked result). 59. (New) The system of claim 58, wherein the ranked hypotheses are based on merit scores assigned to individual hypotheses ([0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment. [0042] If the story fragment meets the internal threshold of its associated inferencing algorithm (Y), the story fragment is provided to an arbitrator associated with a decision making system at 260. At 262, it is determined if the story fragment is sufficiently related to one or more stories of interest associated with the arbitrator.
For example, the hypotheses, links, and evidence in a story fragment and its associated characteristics can be compared with the hypotheses and associated characteristics of a given story of interest to determine the relatedness of the story fragment and the story. If the story fragment is not sufficiently related to the story of interest (N), the story fragment is rejected at 258. Examiner Note: the degree/relatedness reads on merit score. [0020] FIG. 3 illustrates a representation of a belief network 50 in accordance with an aspect of the present invention. The belief network 50 of FIG. 2 is illustrated as a Dempster-Shafer belief network, but it will be appreciated that other belief networks, such as Bayesian belief networks, can be utilized as stories of interest in accordance with an aspect of the present invention. The decision network 50 includes a top layer 52, a first intermediate layer 54, a second intermediate layer 56, and a bottom layer 58. The top layer 52 includes nodes N1-N6 linked to the first intermediate or hypothesis layer 54 by links or multipliers L1-L10. The first intermediate layer 54 includes nodes N7-N11 linked to the second intermediate layer 54 by links or multipliers L11-L17. The second intermediate layer 56 includes nodes N11-N13 linked to the bottom layer 58 by links or multipliers L18-L21. Each node represents a given variable and hypothesis associated with that variable that can affect the variable and hypothesis of other nodes in lower layers mathematically. Associated with each of the nodes N1-N15 are three parameters, which are a belief parameter B, a disbelief parameter D, and an unknown parameter U. The parameters B, D, and U conform to the Dempster-Shafer evidential interval such that the parameter B, D and U add up to one for each node N1-N15. [0021] The links represent multipliers or weights of a given parameter on a lower node. Link values can be constant, or computed by an algorithm. 
For example, the belief of node N7 of the first intermediate layer 54 depends on the belief of nodes N1, N2, and N3, each multiplied by its respective link value L1, L2, and L3. Additionally, the disbelief of node N7 of the first intermediate layer 54 depends on the disbelief of nodes N1, N2, and N3, each multiplied by its respective link value L1, L2, and L3. The unknown is computed based on the Dempster-Shafer combination rule. The belief and disbelief of node N7 then propagate to N11 through link L11, which is combined with the belief and disbelief of N18 multiplied by link L12 and the belief and disbelief of node N9 multiplied by link L14. The belief and disbelief of node N11 then propagate to node N14 through link L18 which is combined with the belief and disbelief of N13 multiplied by link L20. The ignorance, or unknowns, of each row can be evaluated using the Dempster-Shafer combination rule. Similar propagation occurs to provide the beliefs, the disbeliefs, and unknowns of the node N15. Examiner Note: the link value also reads on hypothesis score). 60. (New) The system of claim 59, wherein the merit scores are multi- dimensional, representing different aspects of relevance ([0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. 
In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment.

[0042] If the story fragment meets the internal threshold of its associated inferencing algorithm (Y), the story fragment is provided to an arbitrator associated with a decision making system at 260. At 262, it is determined if the story fragment is sufficiently related to one or more stories of interest associated with the arbitrator. For example, the hypotheses, links, and evidence in a story fragment and its associated characteristics can be compared with the hypotheses and associated characteristics of a given story of interest to determine the relatedness of the story fragment and the story. If the story fragment is not sufficiently related to the story of interest (N), the story fragment is rejected at 258.

Examiner Note: the degree/relatedness reads on validity.

[0020] FIG. 3 illustrates a representation of a belief network 50 in accordance with an aspect of the present invention. The belief network 50 of FIG. 3 is illustrated as a Dempster-Shafer belief network, but it will be appreciated that other belief networks, such as Bayesian belief networks, can be utilized as stories of interest in accordance with an aspect of the present invention. The decision network 50 includes a top layer 52, a first intermediate layer 54, a second intermediate layer 56, and a bottom layer 58. The top layer 52 includes nodes N1-N6 linked to the first intermediate or hypothesis layer 54 by links or multipliers L1-L10. The first intermediate layer 54 includes nodes N7-N11 linked to the second intermediate layer 56 by links or multipliers L11-L17. The second intermediate layer 56 includes nodes N11-N13 linked to the bottom layer 58 by links or multipliers L18-L21. Each node represents a given variable and hypothesis associated with that variable that can affect the variable and hypothesis of other nodes in lower layers mathematically. Associated with each of the nodes N1-N15 are three parameters: a belief parameter B, a disbelief parameter D, and an unknown parameter U. The parameters B, D, and U conform to the Dempster-Shafer evidential interval such that the parameters B, D, and U add up to one for each node N1-N15.

[0021] The links represent multipliers or weights of a given parameter on a lower node. Link values can be constant, or computed by an algorithm. For example, the belief of node N7 of the first intermediate layer 54 depends on the belief of nodes N1, N2, and N3, each multiplied by its respective link value L1, L2, and L3. Additionally, the disbelief of node N7 of the first intermediate layer 54 depends on the disbelief of nodes N1, N2, and N3, each multiplied by its respective link value L1, L2, and L3. The unknown is computed based on the Dempster-Shafer combination rule. The belief and disbelief of node N7 then propagate to N11 through link L11, which is combined with the belief and disbelief of N8 multiplied by link L12 and the belief and disbelief of node N9 multiplied by link L14. The belief and disbelief of node N11 then propagate to node N14 through link L18, which is combined with the belief and disbelief of N13 multiplied by link L20. The ignorance, or unknowns, of each row can be evaluated using the Dempster-Shafer combination rule. Similar propagation occurs to provide the beliefs, the disbeliefs, and unknowns of the node N15.

Examiner Note: each layer of link value representing a different dimension of relevance). 61.
(New) The system of claim 53, wherein the output includes the reasoning steps presented in a browser on the computing device ([0037] The visualization function 156 controls the display of the information within the argument model to the user. A graphic configuration sub-function 180 allows the user to specify display preferences for the editor interface 152. For example, the user can specify the colors associated with the nodes and connectors within the argument model. A change view sub-function 182 allows the user to select a zoom level or a portion of the argument model for viewing. A collapse node sub-function 184 allows the user to hide one or more of nodes and connectors downstream of a particular node to simplify the display of the argument model. The sub-function 184 also allows the argument to be reexpanded upon a command from the user. Finally, a confidence slider sub-function 186 allows a user to display a confidence value or an influence value as a line graph scale, ranging from a minimum value to a maximum value. This allows the user to more easily visualize the range of available values when editing these parameters. [0044] FIG. 7 illustrates a computer system 300 that can be employed to implement systems and methods described herein, such as based on computer executable instructions running on the computer system. The computer system 300 can be implemented on one or more general purpose networked computer systems, embedded computer systems, routers, switches, server devices, client devices, various intermediate devices/nodes and/or stand alone computer systems. Additionally, the computer system 300 can be implemented as part of the computer-aided engineering (CAE) tool running computer executable instructions to perform a method as described herein. Examiner Note: display of output on a networked computer indicates the use of a browser). 63. 
(New) The system of claim 53, wherein the at least one inference engine interacts with the environment via the data interface (See Fig. 4.

[0033] Where a threshold level of similarity is found, the story fragment can be provided to human analysts through associated user interfaces 136 and 138. A user interface (e.g., 136) can include a graphical user interface that allows the analyst to quickly review the pertinent portions of the story fragment and determine its relationship to the story of interest. If the human analyst agrees that the story fragment is relevant to the story of interest, he or she can add information to the story fragment and incorporate the story fragment into the story of interest. A fusion engine 140 can mathematically reconcile the story of interest in light of the added story fragments to allow the analyst to see the impact of the story fragment.

[0034] FIG. 5 illustrates a functional block diagram of an exemplary graphic user interface (GUI) 150 that can be utilized in an assisted decision making system in accordance with the present invention. The GUI 150 includes an editor 152 that allows a user to view and edit a structured argument. The editor 152 has four associated high-level functions: an editing function 154, a visualization function 156, a belief propagation function 158, and an administrative function 160).

[media_image1.png: greyscale figure]

64. (New) The system of claim 63, wherein the data interface includes at least one of the following: a mobile device, a sensor, a router, a switch, an appliance, a non-mobile device, or a garment ([0044] FIG. 7 illustrates a computer system 300 that can be employed to implement systems and methods described herein, such as based on computer executable instructions running on the computer system.
The computer system 300 can be implemented on one or more general purpose networked computer systems, embedded computer systems, routers, switches, server devices, client devices, various intermediate devices/nodes and/or stand alone computer systems. Additionally, the computer system 300 can be implemented as part of the computer-aided engineering (CAE) tool running computer executable instructions to perform a method as described herein.). 65. (New) The system of claim 63, wherein the data interface comprises an application program interface (API) (See Fig. 4-136 and 138. [0047] A number of program modules may also be stored in one or more of the drives as well as in the RAM 310, including an operating system 330, one or more application programs 332, other program modules 334, and program data 336. [0048] A user may enter commands and information into the computer system 300 through user input device 340, such as a keyboard, a pointing device (e.g., a mouse). Other input devices may include a microphone, a joystick, a game pad, a scanner, a touch screen, or the like. These and other input devices are often connected to the processor 302 through a corresponding interface or bus 342 that is coupled to the system bus 306. Such input devices can alternatively be connected to the system bus 306 by other interfaces, such as a parallel port, a serial port or a universal serial bus (USB). One or more output device(s) 344, such as a visual display device or printer, can also be connected to the system bus 306 via an interface or adapter 346). 67. (New) The system of claim 53, wherein the at least one reasoning rule set includes abductive reasoning ([0027] The plurality of inferencing algorithms 102-107 can include any of a variety of appropriate algorithms for evaluating stored data to determine significant patterns and trends. 
Specifically, the inferencing algorithms 102-107 search the knowledge base for unanticipated story fragments, comprising at least a single new hypothesis, and more typically of story fragments comprising linked hypotheses and any associated evidence. In the illustrated example, the plurality of inferencing algorithms include an inductive reasoner 102 that computes changes in rules based on new evidence within the knowledge base. The change is based on rule induction, using the old evidence supporting the old structure along with exceptions to the old rules in the existing evidence to induce new rules that better account for the entire body of old and the new evidence. In essence, new rules defining a revised network structure are computed from an aggregate of old and new evidence. In one implementation, the Weka algorithm can be utilized within the inductive reasoner 102. [0019] Each of the plurality of inferencing algorithms 28 and 30 utilize data, including formatted evidence, executable stories, and story fragments from an associated knowledge base 32 to produce story fragments. It will be appreciated that the inferencing algorithms 28 and 30 are not limited to retrieving data from the knowledge base, but can also access data from one or more external sources (e.g., Cyc, WordNET, and similar knowledge bases). In one implementation, the plurality of inferencing algorithms 28 and 30 can include an abductive reasoning algorithm that compiles the best explanation for a body of data in the form of a rule tree. Similarly, the plurality of decision algorithms 24 and 26 can comprise an unsupervised clustering algorithm that attempts to form clusters from the data to determine new hypotheses for the decision making system 22). 69. 
(New) The system of claim 53, wherein at least one reasoning rule set comprises inductive reasoning ([0027] The plurality of inferencing algorithms 102-107 can include any of a variety of appropriate algorithms for evaluating stored data to determine significant patterns and trends. Specifically, the inferencing algorithms 102-107 search the knowledge base for unanticipated story fragments, comprising at least a single new hypothesis, and more typically of story fragments comprising linked hypotheses and any associated evidence. In the illustrated example, the plurality of inferencing algorithms include an inductive reasoner 102 that computes changes in rules based on new evidence within the knowledge base. The change is based on rule induction, using the old evidence supporting the old structure along with exceptions to the old rules in the existing evidence to induce new rules that better account for the entire body of old and the new evidence. In essence, new rules defining a revised network structure are computed from an aggregate of old and new evidence. In one implementation, the Weka algorithm can be utilized within the inductive reasoner 102. [0019] Each of the plurality of inferencing algorithms 28 and 30 utilize data, including formatted evidence, executable stories, and story fragments from an associated knowledge base 32 to produce story fragments. It will be appreciated that the inferencing algorithms 28 and 30 are not limited to retrieving data from the knowledge base, but can also access data from one or more external sources (e.g., Cyc, WordNET, and similar knowledge bases). In one implementation, the plurality of inferencing algorithms 28 and 30 can include an abductive reasoning algorithm that compiles the best explanation for a body of data in the form of a rule tree. 
Similarly, the plurality of decision algorithms 24 and 26 can comprise an unsupervised clustering algorithm that attempts to form clusters from the data to determine new hypotheses for the decision making system 22). 70. (New) The system of claim 53, further comprising a validation module coupled with the data interface and that performs the following validation operations upon execution of the software instructions: construct a validation plan for the at least one hypothesis; acquire validation data according to the validation plan; and derive a merit score for the at least one hypothesis based on the validation data ([0026] During operation, the extracted evidence templates can be provided to one or more evidence classifiers 120. The evidence classifiers 120 can assign the evidence to associated hypotheses according to the evidence content. It will be appreciated that the evidence classifiers 120 can assign the templates to one or more existing hypotheses in the knowledge base 114 or generate a new suggested hypothesis. In an exemplary embodiment, the evidence classifiers 120 can include a rule-based classifier that classifies the templates according to a set of user defined rules. For example, rules can be defined relating to the fields within the template or the source of the data. Other classifiers can include, for example, supervised and unsupervised neural network classifiers, semantic network classifiers, statistical classifiers, and other classifier models. These classifiers can be orchestrated to increase the efficiency of the classification. For example, the rule-based classifier can be applied first, and if a rule is not actuated, a statistical classifier can be used. If a pre-specified probability threshold is not reached at the statistical classifier, a semantic distance classifier can be applied and the results shown to the user for validation. 
[0018] The assisted decision making system 22 further comprises an arbitrator 26 that controls the flow of new data to the assisted decision making system. Specifically, the arbitrator 26 reviews story fragments provided by a plurality of inferencing algorithms 28 and 30 to determine if any of the fragments are sufficiently relevant to the at least one story of interest 22 as to warrant its consideration at the decision making system 22. For example, a given story fragment can be compared to a story of interest to determine to what degree the hypotheses within the story fragment and their associated characteristics resemble those of the hypotheses comprising the story of interest. In an exemplary embodiment, the arbitrator 26 can also evaluate the relatedness of multiple story fragments and combine related fragments prior to applying them to a story of interest. For example, items of evidence and hypotheses provided from a first inferencing algorithm (e.g., 28) that support a hypothesis provided from a second inferencing algorithm (e.g., 30) can be linked with that hypothesis to provide a larger, more complete story fragment. [0042] If the story fragment meets the internal threshold of its associated inferencing algorithm (Y), the story fragment is provided to an arbitrator associated with a decision making system at 260. At 262, it is determined if the story fragment is sufficiently related to one or more stories of interest associated with the arbitrator. For example, the hypotheses, links, and evidence in a story fragment and its associated characteristics can be compared with the hypotheses and associated characteristics of a given story of interest to determine the relatedness of the story fragment and the story. If the story fragment is not sufficiently related to the story of interest (N), the story fragment is rejected at 258. Examiner Note: the degree/relatedness reads on validity/merit score). 71. 
(New) The system of claim 70, wherein the output further includes the validation plan and the merit score (See Fig. 4.

[0033] Where a threshold level of similarity is found, the story fragment can be provided to human analysts through associated user interfaces 136 and 138. A user interface (e.g., 136) can include a graphical user interface that allows the analyst to quickly review the pertinent portions of the story fragment and determine its relationship to the story of interest. If the human analyst agrees that the story fragment is relevant to the story of interest, he or she can add information to the story fragment and incorporate the story fragment into the story of interest. A fusion engine 140 can mathematically reconcile the story of interest in light of the added story fragments to allow the analyst to see the impact of the story fragment.

[0034] FIG. 5 illustrates a functional block diagram of an exemplary graphic user interface (GUI) 150 that can be utilized in an assisted decision making system in accordance with the present invention. The GUI 150 includes an editor 152 that allows a user to view and edit a structured argument. The editor 152 has four associated high-level functions: an editing function 154, a visualization function 156, a belief propagation function 158, and an administrative function 160).

72. (New) The system of claim 53, wherein the inference engine is a member of a multi-layered set of inference engines (See Fig. 2 on lower layer 28 and 30 and higher layer 22).

Claim 73 is a method claim having similar limitations as claim 53 and is rejected under the same rationale. Claim 74 is a non-transitory computer readable medium claim having similar limitations as claim 53 and is rejected under the same rationale. See [0046] and claim 22 for computer readable medium.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 62, 66, 68 is/are rejected under 35 U.S.C. 103 as being unpatentable over Talbot et al. (US 2006/0112048 A1) in view of OMOIGUI (US 2010/0070448 A1).

62. (New) Talbot fails to disclose the system of claim 53, wherein the reasoning steps comprise a mapping to a namespace. However, OMOIGUI discloses knowledge representation and inference (thereby in the same field of endeavor) and explicitly discloses reasoning steps that comprise a mapping to a namespace ([0313] Dynamic Linking.TM..
Trademarked name for the ability of the Information Nervous System of the present invention to allow users to link information dynamically, semantically, and at the speed of thought, even if those information items do not contain links themselves. By virtue of employing smart objects that have intrinsic behavior and using recursive intelligence embedded in the Information Agency's XML Web Service, each node in the Semantic Network is much smarter than a regular link or node on Today's Web or the conceptual Semantic Web. In other words, each node in the Smart Virtual Network or Web of the present invention can link to other nodes, independent of authoring. Each node has behavior that can dynamically link to Agencies and Smart Agents via drag and drop and smart copy and paste, create links to Agencies in the Semantic Environment, respond to lens requests from Smart Agents to create new links, include intrinsic alerts that will dynamically create links to context and time-sensitive information on its Agency, include presentation hints for breaking news (wherein the node can automatically link to breaking news Agents in the namespace), form the basis for deep info that can allow the user to find new links, etc. A user of the present invention is therefore not at the mercy of the author of the metadata. Once the user reaches a node in the network, the user has many semantic means of navigating dynamically and automatically--using context, time, relatedness to Smart Agencies and Agents, etc.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the knowledge acquisition of Talbot to incorporate namespace mapping of OMOIGUI. Given the fact that namespace mapping/linking form the basis for deep info, one having ordinary skill in the art would have been motivated to make this obvious modification. 66. 
(New) OMOIGUI discloses the system of claim 53, wherein the at least one reasoning rule set includes deductive reasoning ([1789] To appreciate some of the potential power of this feature, it is useful to note that while the system or Entities "know" who is posing the query, the Entities do not depend for that knowledge on the user informing them and keeping them constantly updated and informed (although user information can be supplied and considered at any time). If that were the case, the system could be too labor intensive to be efficient and useful in many situations; it would just be too much work. Instead, the Entities "know" who the requester is by inference and from semantics from characteristics sometimes supplied by others, sometimes derived or deduced, sometimes collected from other requests and the like, as explained throughout this application and its parent application.

Examiner Note: ¶ 1 applies. The kind of reasoning rule sets is nonfunctional descriptive material).

68. (New) OMOIGUI discloses the system of claim 53, wherein the operations further include conducting reflexive reasoning and the reasoning steps include the reflexive reasoning ([3025] 6. Semantic Cross-Referencing: A user interface allows the user to cross-reference context across ontologies. For instance, it is possible to use one perspective to view results that were generated via another perspective. Such "cross-fertilization of perspectives" accurately reflects how knowledge is acquired and/or how research evolves in the real-world. Furthermore, a user interface allows the user to cross-reference context in order to dynamically create new semantic views.

Examiner Note: ¶ 1 applies. The kind of reasoning rule sets is nonfunctional descriptive material).
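The Dempster-Shafer bookkeeping quoted for claims 59-60 ([0020]-[0021]) can be sketched in a few lines of Python. This is a minimal illustration, not the applicant's implementation: the function names are invented, and the discounting step (moving the mass removed by a link weight into the unknown parameter U so that B + D + U still sums to one) is an assumption, since the specification says only that link values multiply the parent parameters and that unknowns follow the Dempster-Shafer combination rule.

```python
def ds_combine(m1, m2):
    """Dempster's rule for two (belief, disbelief, unknown) masses over
    a two-element frame {true, false}."""
    b1, d1, u1 = m1
    b2, d2, u2 = m2
    k = b1 * d2 + d1 * b2  # conflicting mass
    if k >= 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - k
    b = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    d = (d1 * d2 + d1 * u2 + u1 * d2) / norm
    u = (u1 * u2) / norm
    return (b, d, u)

def discount(m, w):
    """Weight a parent's belief and disbelief by link value w; the mass
    removed by the weight is shifted into 'unknown' (an assumption)."""
    b, d, u = m
    return (w * b, w * d, 1.0 - w * (b + d))

def propagate(parents, links):
    """Fuse the link-weighted (B, D, U) masses of the parent nodes into
    a child node, e.g. N1-N3 through L1-L3 into N7."""
    masses = [discount(m, w) for m, w in zip(parents, links)]
    combined = masses[0]
    for m in masses[1:]:
        combined = ds_combine(combined, m)
    return combined
```

For example, parent masses (0.6, 0.2, 0.2) and (0.5, 0.3, 0.2) with unit link weights yield a conflict of k = 0.28, so the fused belief is 0.52/0.72, about 0.72, and the three parameters still sum to one.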
Examiner Note (EN) ¶ 1: In Ex parte Curry, the Board held that in a computer-implemented method of providing "wellness-related services," the "wellness-related data in the databases ... does not functionally change either the data storage system or communication system used in the method of claim 81. Nonfunctional descriptive material cannot render nonobvious an invention that would have otherwise been obvious." See Ex parte Curry, 84 USPQ2d 1272 (BPAI 2005), aff'd (Fed. Cir. Appeal No. 2006-1003, aff'd Rule 36 June 12, 2006).

In Ex parte Bisceglia, the Board held that the descriptive material (i.e., "control information" and a "request" comprising a description of a development environment) recited in claim 1 is non-functional descriptive material because neither the "control information" nor the "request" functionally affects the process of managing a development environment. Rather, the control information is merely information that is used for "managing said first request" by a computer program, and the request is data that is received ("receiving a first request") and processed ("processing said first request") by the system. In each case, the data (i.e., "control information" and "request") do not affect how the method of the prior art is performed on a computer system. In other words, the method of receiving and processing the request and reviewing the request "in accordance with control information" is carried out in the same way regardless of the nature of the request or control information. See Ex parte John F. Bisceglia, Appeal 2007-3447.

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Omoigui (US 2007/0260580 A1) discloses knowledge retrieval, management, delivery and presentation. See abstract. See also [0145], [0312] on news providers/outlets; [0580]-[0583] on reasoning and hypothesis validation. Horvitz et al. (US 2008/0004926 A1) disclose automated reasoning based on user preference. See [0083].
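Several of the mappings above (claims 59, 60, and 70) rest on the arbitrator's relatedness test between a story fragment and a story of interest ([0018], [0042]). The specification does not name a comparison metric, so the Jaccard overlap and the 0.3 threshold below are purely illustrative assumptions, not the applicant's method:

```python
def relatedness(fragment_hypotheses, story_hypotheses):
    """Degree to which a fragment's hypotheses resemble those of a story
    of interest. Jaccard overlap of hypothesis sets is an illustrative
    stand-in; the specification does not specify a metric."""
    combined = fragment_hypotheses | story_hypotheses
    if not combined:
        return 0.0
    shared = fragment_hypotheses & story_hypotheses
    return len(shared) / len(combined)

def arbitrate(fragment_hypotheses, story_hypotheses, threshold=0.3):
    """Admit the fragment to the decision-making system only if it is
    sufficiently related (steps 262/258); the threshold is hypothetical."""
    return relatedness(fragment_hypotheses, story_hypotheses) >= threshold
```

With these assumptions, a fragment sharing two of four total hypotheses with the story scores 0.5 and is admitted, while a fragment with no overlap scores 0.0 and is rejected.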
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUT WONG, whose telephone number is (571) 270-1123. The examiner can normally be reached M-F 10am-6pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUT WONG/
Primary Examiner, Art Unit 2127

Prosecution Timeline

May 27, 2025
Application Filed
Dec 30, 2025
Non-Final Rejection — §101, §102, §103
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602451
UNSUPERVISED ANOMALY DETECTION MACHINE LEARNING FRAMEWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12591782
INTELLIGENT SCALING FACTORS FOR USE WITH EVOLUTIONARY STRATEGIES-BASED ARTIFICIAL INTELLIGENCE (AI)
2y 5m to grant Granted Mar 31, 2026
Patent 12591786
INTELLIGENT AMMUNITION CO-EVOLUTION TASK ASSIGNMENT METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12585956
SYSTEMS, METHODS, AND GRAPHICAL USER INTERFACES FOR MITIGATING BIAS IN A MACHINE LEARNING-BASED DECISIONING MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12566977
OPTIMIZING COGBOT RETRAINING
2y 5m to grant Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
92%
With Interview (+15.0%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
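The projection figures combine by simple addition of percentage points: the 92% with-interview estimate appears to be the 77% baseline grant probability plus the 15-point interview lift. Treating the lift as additive is an assumption the displayed numbers support; a minimal sketch:

```python
# Baseline comes from the examiner's career allow rate (463/598 ≈ 77%);
# the lift is the observed interview effect in resolved cases.
base_grant_prob = 0.77
interview_lift = 0.15  # additive percentage points (assumed)

with_interview = min(base_grant_prob + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")
```

The `min(..., 1.0)` cap simply keeps the estimate from exceeding 100% for examiners with very high baseline rates.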
