Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This action is in response to amendments filed October 20, 2025, in which Claims 1-2, 10-11, & 19-20 have been amended. No claims have been added or cancelled. The amendments have been entered, and Claims 1-20 are currently pending.
Response to Arguments
Regarding the claim objections of the previous Office action, applicant’s amendments have successfully overcome all objections. The claim objections are therefore withdrawn.
Regarding the applicant’s traversal of the 35 U.S.C. 101 rejections of the previous Office action, the applicant’s arguments filed October 20, 2025 have been fully considered but are unpersuasive.
Applicant asserts that claim 1, as amended, is “not directed to an abstract idea… without significantly more”. No argument is provided explaining how the claim amounts to significantly more, aside from the recitation of the amended limitation, “generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project and using the machine learning model to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the engine emissions calibration project is feasible, wherein the machine learning model is trained using salient attribute-based triples.”
The examiner respectfully asserts that “in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project… to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the engine emissions calibration project is feasible”, aside from the newly amended language, was already cited as an abstract idea in the previous Office action. Further, basing the recited mental judgement on further evaluations of data still keeps the limitation within the mental process grouping of abstract ideas (MPEP 2106).
Further, the additional amended limitation cited above, “…wherein the machine learning model is trained using salient attribute-based triples”, merely recites using a particular data type to train the model, which amounts to mere instructions to apply the judicial exception and is not indicative of integration into a practical application or of significantly more (MPEP 2106.05(f)). Therefore, the rejection of claim 1 under 35 U.S.C. 101 is maintained.
Claims 10 & 19 recite similar amended limitations, and are thereby also still rejected under 35 U.S.C. 101. Further, dependent claims 2-9, 11-18, and 20 depend from these claims and are thereby also still rejected under 35 U.S.C. 101 under the same rationale, in addition to their own rationales as cited in the previous action and below.
Regarding the applicant’s traversal of the 35 U.S.C. 103 rejections of the previous Office action, the applicant’s arguments filed October 20, 2025 have been fully considered but are unpersuasive.
Regarding claim 1, the applicant asserts that none of LECUE, MOHAMMADHASSANI, and ANGLIN, alone or in any combination, teaches the following amended limitation: “generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project and using the machine learning model to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the engine emissions calibration project is feasible, wherein the machine learning model is trained using salient attribute-based triples” (emphasis added in the filed remarks).
The examiner respectfully asserts that the newly amended limitation, “…based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project…”, is already taught by LECUE in view of MOHAMMADHASSANI & ANGLIN at the previously cited passages, as noted in the updated rejections below and reproduced here for convenience:
When LECUE in view of MOHAMMADHASSANI is combined with the teachings of ANGLIN, the combination results in the generation of the feasibility prediction values, as taught by ANGLIN (and cited in the previous action), “based on at least one inefficiency rule that incorporates at least one observation” [cited previously, LECUE, 0046-0051] “of an engine emissions calibration project” [cited previously, MOHAMMADHASSANI, pages 1-2, introduction].
Further, “at least one expert derived rule” is taught by ANGLIN, as cited previously ([0004] “…A machine learning classification model, trained on a training data set, is provided in a computer that models a probabilistic relationship between observed values and discrete outcomes (An expert derived rule (a probabilistic relationship model) being applied in response to the corresponding sets of characteristics (observed values and discrete outcomes)).”) The applicant appears to dispute this mapping, but no explanation of the alleged deficiency has been filed.
Further, “…wherein the machine learning model is trained using salient attribute-based triples” is taught by LECUE, which cites the use of knowledge graphs extensively. As explained in section “2.1 Knowledge Graphs are More Than Graphs” of “Knowledge Graphs 2021: A Data Odyssey”, available at http://www.vldb.org/pvldb/vol14/p3233-weikum.pdf: “The term knowledge graphs is actually a misnomer and oversimplifies the structure and value of KBs (knowledge bases). Graphs are binary relations, but KBs are not limited to such instances, called subject-predicate object triples, or SPO triples for short. Hence, KB and not KG would be the appropriate terminology, but the term KG became widely established through press releases of big Internet stakeholders...” This shows that attribute-based triples are standard in the technology of knowledge graphs/knowledge bases. The term “salient” here just means that the most important features are selected, as is the goal of any machine learning model.
Claims 10 & 19 recite similar amended limitations, and are thereby also still rejected under 35 U.S.C. 103 under this same rationale. Further, dependent claims 2-9, 11-18, and 20 depend from these claims and are thereby also still rejected under 35 U.S.C. 103 under the same rationale, in addition to their own rationales as cited in the previous action and below. Therefore, the rejection of claim 1 under 35 U.S.C. 103 is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Regarding claim 1, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method for assessing engine emissions calibration”. A method is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
“identifying, in a knowledge graph corresponding to engine emissions calibration, a second set of characteristics that corresponds to the first set of characteristics” (A person can mentally evaluate sets of characteristics within a knowledge graph and make a judgement to identify sets that correspond with each other (MPEP 2106).)
“determining whether each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics” (A person can mentally evaluate the characteristics within each set and make a judgement to determine whether each characteristic in one set corresponds to at least one characteristic in the other set (MPEP 2106).)
“in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project… to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the engine emissions calibration project is feasible” (A person can mentally evaluate the sets of characteristics and make a judgement to generate a prediction that indicates whether the calibration is feasible, basing it on at least one inefficiency rule that incorporates observation of an engine associated with the project (MPEP 2106).)
“determining… a certainty value associated with the feasibility prediction based on the application of the at least one expert derived rule to the second set of characteristics” (A person can mentally evaluate the generated prediction via at least one “expert derived rule” and make a judgement to determine a certainty value associated with it (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2A, Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“receiving a first set of characteristics associated with an engine emissions calibration project” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“in response to a determination that one or more characteristics of the first set of characteristics do not correspond to any characteristic in the second set of characteristics, using a machine learning model to update the knowledge graph to include the one or more characteristics of the first set of characteristics” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…using the machine learning model…” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…wherein the machine learning model is trained using salient attribute-based triples” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (v) recites an insignificant extra-solution activity. Further, element (v) recites steps of receiving/transmitting data via a network, which has been determined by the courts to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (vi), (vii), & (viii) recite mere instructions to apply the judicial exception, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 2 recites “wherein the machine learning model is trained further using data associated with other engine emissions calibration projects” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 3 recites “wherein the at least one expert derived rule includes at least one deterministic expert derived rule” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 4 recites “wherein the at least one expert derived rule includes at least one probabilistic expert derived rule” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 5, it is dependent upon claim 1, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 5 recites “wherein the feasibility prediction indicating whether the engine emissions calibration project is feasible includes indicating whether the engine emissions calibration project corresponds to an engine emission output that is within a desired range” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 6, it is dependent upon claim 1, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 6 recites “generating an output including the feasibility prediction and the certainty value” (In Step 2A, Prong 2, this recites insignificant extra-solution activity (mere data output) added to the judicial exception (MPEP 2106.05(g)). In Step 2B, the courts have found steps that present output of data to not be indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 7, it is dependent upon claim 6, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 6. Further, claim 7 recites “providing the output at a display” (In Step 2A, Prong 2, this recites insignificant extra-solution activity (mere data output) added to the judicial exception (MPEP 2106.05(g)). In Step 2B, the courts have found steps that present output of data to not be indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 8, it is dependent upon claim 6, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 6. Further, claim 8 recites “receiving, responsive to the output, feedback indicating whether a user accepted the feasibility prediction” (In Step 2A, Prong 2, this recites insignificant extra-solution activity (mere data gathering) added to the judicial exception (MPEP 2106.05(g)). In Step 2B, this recites steps of receiving/transmitting data via a network, which has been determined by the courts to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 9, it is dependent upon claim 8, and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 8. Further, claim 9 recites “subsequently training the machine learning model using the feedback” (In Step 2A, Prong 2, this recites mere application of the judicial exception (MPEP 2106.05(f)). In Step 2B, mere application of the judicial exception is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 10, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A system for assessing engine emissions calibration”. A system falls within one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
“identify, in a knowledge graph corresponding to engine emissions calibration, a second set of characteristics that corresponds to the first set of characteristics” (A person can mentally evaluate sets of characteristics within a knowledge graph and make a judgement to identify sets that correspond with each other (MPEP 2106).)
“determine whether each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics” (A person can mentally evaluate the characteristics within each set and make a judgement to determine whether each characteristic in one set corresponds to at least one characteristic in the other set (MPEP 2106).)
“in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generate, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project… to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction, including a certainty value, indicating whether the engine emissions calibration project is feasible, the certainty value corresponding to a probability associated with the feasibility prediction” (A person can mentally evaluate the sets of characteristics and at least one “expert derived rule” and make a judgement to generate a prediction that indicates whether the calibration is feasible alongside a certainty value that corresponds to the probability of the feasibility prediction (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2A, Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“A system for assessing engine emissions calibration, the system comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to:” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
“receive a first set of characteristics associated with an engine emissions calibration project” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“in response to a determination that one or more characteristics of the first set of characteristics do not correspond to any characteristic in the second set of characteristics, use a machine learning model to update the knowledge graph to include the one or more characteristics of the first set of characteristics” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…using the machine learning model…” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…wherein the machine learning model is trained using salient attribute-based triples” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (iv) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more. Additional element (v) recites an insignificant extra-solution activity. Further, element (v) recites steps of receiving/transmitting data via a network, which has been determined by the courts to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (vi), (vii), & (viii) recite mere instructions to apply the judicial exception, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claims 11-18, they are dependent upon claim 10, and thereby incorporate the limitations of, and corresponding analysis applied to claim 10. Further, claims 11-18 recite similar additional limitations as claims 2-9, respectively, and are rejected under the same rationale.
Regarding claim 19, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “An apparatus for project feasibility assessment”. An apparatus is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
“identify, in a knowledge graph, a second set of characteristics that corresponds to the first set of characteristics” (A person can mentally evaluate sets of characteristics within a knowledge graph and make a judgement to identify sets that correspond with each other (MPEP 2106).)
“determine whether each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics” (A person can mentally evaluate the characteristics within each set and make a judgement to determine whether each characteristic in one set corresponds to at least one characteristic in the other set (MPEP 2106).)
“in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generate, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project… to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction, including a certainty value, indicating whether the engine emissions calibration project is feasible, the certainty value corresponding to a probability associated with the feasibility prediction” (A person can mentally evaluate the sets of characteristics and at least one “expert derived rule” and make a judgement to generate a prediction that indicates whether the calibration is feasible alongside a certainty value that corresponds to the probability of the feasibility prediction (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2A, Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“An apparatus for project feasibility assessment, the apparatus comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to:” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
“receive a first set of characteristics associated with a project” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“in response to a determination that one or more characteristics of the first set of characteristics do not correspond to any characteristic in the second set of characteristics, use a machine learning model to update the knowledge graph to include the one or more characteristics of the first set of characteristics” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…using the machine learning model…” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“…wherein the machine learning model is trained using salient attribute-based triples” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
“generate an output including the feasibility prediction and the certainty value” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
“receive, responsive to the output, feedback indicating whether a user accepted the feasibility prediction” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“subsequently train the machine learning model using the feedback” (Mere instructions to apply the judicial exception (MPEP 2106.05(f)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (iv) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more. Additional elements (v), (ix), & (x) recite insignificant extra-solution activities. Further, elements (v), (ix), & (x) recite steps of receiving/transmitting data via a network, which has been determined by the courts to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (vi), (vii), (viii), & (xi), recite mere instructions to apply the judicial exception, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 20, it is dependent upon claim 19, and thereby incorporates the limitations of, and corresponding analysis applied to claim 19. Further, claim 20 recites similar additional limitations as claim 2, and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lecue, F. et al., US PGPUB No. US 2019/0065987 A1 (hereafter, LECUE), in view of Mohammadhassani, J., et al., “Prediction of NOx Emissions from a Direct Injection Diesel Engine Using Artificial Neural Network” (hereafter, MOHAMMADHASSANI), and further in view of Anglin, Z. et al., US PGPUB No. US 2023/0132739 A1 (hereafter, ANGLIN).
Regarding claim 1, LECUE teaches “A method… the method comprising: receiving a first set of characteristics” ([0016] “Implementations include actions of receiving a first plurality of data sets associated with one or more of a process and a device, data values in the plurality of data sets being recorded by sensors in a set of sensors, receiving a first predictive model for the first plurality of data sets, for each data value in the first plurality of data sets, determining a knowledge score for the predictive model based on weights assigned to a plurality of concepts associated with a domain ontology for a domain of the one or more of the process and the device, comparing the knowledge score for each data value in the first plurality of data sets to a threshold knowledge score to provide a comparison, and in response to the comparison, selectively amending concepts in the first predictive model to provide a second predictive model.”) This citation shows that the cited method does comprise receiving a set of data.
And further, ([0021] “Implementations of the present disclosure are described in further detail herein with reference to an example context. The example context includes food manufacturing, in which a predictive model is used to determine quality control (QC) levels based on data received from one or more devices that monitor food manufacturing processes (e.g., Internet-of-Things (IoT) devices). It is contemplated, however, that implementations of the present disclosure can be realized in any appropriate context. Other example contexts include maintenance of machines (e.g., robots), and retail. In the example context, food manufacturing is the domain, and an objective can include predicting the number of quality control checks to be performed. More specifically, an objective can include selecting a set of sensors for quality control checks, which reduce noise, and energy consumption. Example sensors can include IoT devices, which monitor a food manufacturing process, and provide data responsive to an environment, in which food is manufactured (e.g., temperature, humidity, pressure), and/or characteristics of the food being manufactured (e.g., temperature, sugar level).”) This citation shows that the data may comprise “characteristics” in “any appropriate context”.
Further, LECUE teaches “identifying, in a knowledge graph… a second set of characteristics that corresponds to the first set of characteristics” ([0016] “Implementations include actions of receiving a first plurality of data sets associated with one or more of a process and a device (a first set of characteristics), data values in the plurality of data sets being recorded by sensors in a set of sensors, receiving a first predictive model for the first plurality of data sets, for each data value in the first plurality of data sets, determining a knowledge score for the predictive model based on weights assigned to a plurality of concepts associated with a domain ontology for a domain of the one or more of the process and the device (the knowledge score being the second set of characteristics), comparing the knowledge score for each data value in the first plurality of data sets to a threshold knowledge score to provide a comparison, and in response to the comparison, selectively amending concepts in the first predictive model to provide a second predictive model.”) This citation shows that the cited method identifies “for each data value/characteristic” a “knowledge score” that corresponds to it.
And further, ([0022] “In accordance with implementations of the present disclosure, a domain ontology is provided that is specific to the context. In the example context, the domain ontology includes information, such as key performance indicators (KPIs), for quality control in food manufacturing processes. In some implementations, the domain ontology is provided as a knowledge graph, or a portion of a knowledge graph. In some examples, a knowledge graph is a collection of data and related based on a schema representing entities and relationships between entities (comparison between first and second set of characteristics).”) This citation shows that the knowledge score is from a knowledge graph (second characteristics associated with the first characteristics).
Further, LECUE teaches “determining whether each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics” ([0046-0047] “In some implementations, a data score is provided, which indicates the amount of training data that is included in the representative data (e.g., the degree to which the representative data sets overlap the overall training data). In some examples, the data score as a fraction of data sets used. For example, and using the example described herein, if there are three data sets E1, E2, and E3, then for each the data score is 1/3=0.33. In a subsequent iteration, described in further detail herein, data sets might be combined, resulting in a revised data score. For example, if E1 and E2 are combined, the data score would be 2/3=0.66.
Below are example data scores, and semantic scores for the example representative data provided above:
[table of example data scores and semantic scores, rendered as an image in LECUE]”) This citation explains the method for determining how much of the training data (first set of characteristics) is included in the representative data (second set of characteristics), providing a score determining whether each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics.
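For illustration only, the data score described in the citation above can be expressed in the following Python sketch; it is an illustrative reading of the quoted text (with set names taken from LECUE’s own E1/E2/E3 example), not code from the reference.

```python
# Minimal sketch of the data score described in LECUE [0046]: the fraction of
# the underlying training data sets covered by a representative data set.
# Set names E1-E3 follow the quoted example; the function itself is illustrative.
def data_score(covered_sets, all_sets):
    return len(covered_sets) / len(all_sets)

all_sets = ["E1", "E2", "E3"]
print(round(data_score(["E1"], all_sets), 2))        # 0.33, as in the quoted example
print(round(data_score(["E1", "E2"], all_sets), 2))  # 0.67 after combining E1 and E2
```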
Further, LECUE teaches “in response to a determination that one or more characteristics of the first set of characteristics do not correspond to any characteristic in the second set of characteristics, using a machine learning model to update the knowledge graph to include the one or more characteristics of the first set of characteristics” ([0048-0051] “In accordance with implementations of the present disclosure, a knowledge coverage (also referred to as knowledge score, and/or knowledge coverage score) is determined for each data set in the representative data based on the semantic scores and the data scores. In some examples, each knowledge score (KS) can be determined based on the following example relationship:
KS = ω1·(semantic score) + ω2·(data score) [equation rendered as an image in LECUE], where ω1 and ω2 are respective weights (e.g., ω1 = 0.5, ω2 = 0.5). Using the examples from above, example knowledge scores are provided as: [table of example knowledge scores, rendered as an image in LECUE]
In accordance with implementations of the present disclosure, the knowledge score for each data set can be compared to a threshold knowledge score. If a knowledge score exceeds the knowledge score threshold, the knowledge score, and the predictive model associated with the knowledge score are provided as output. That is, the predictive model corresponding to the data set having the sufficiently high knowledge score is determined to sufficiently capture the knowledge of the domain-specific ontology. Consequently, that predictive model is provided as output (e.g., to be used in production).
In accordance with implementations of the present disclosure, a predictive model can be provided for each data set. Consequently, a plurality of predictive models, and respective knowledge scores can be provided. In some examples, the knowledge scores of multiple predictive models can exceed the threshold knowledge score. In such instances, of the predictive models having knowledge scores exceeding the threshold knowledge score, the predictive model having the highest knowledge score is selected as the output. If none of the knowledge scores exceeds the knowledge score threshold, implementations of the present disclosure execute another iteration. More particularly, the training data that has been used in the previous iteration is recomposed to provide modified training data for training a subsequent predictive model (meaning characteristics from the training data (first set of characteristics) will be used to train and update the knowledge graph, and thereby included therein.). In some implementations, and as described in further detail herein, the data sets are recomposed based on additional metadata.
In some implementations, data composition (recomposition) is based on metadata of sensors (physical devices) used to record the training data (e.g., the data sets). In some examples, the metadata captures technical characteristics of the respective sensors. Example technical characteristics can include location, manufacturer, unique identifier, energy consumption, cost, defect rate, and the like. In some examples, the metadata can be provided as a table, each row representing a respective sensor, and each column representing characteristics. In some examples, the metadata of the table can be decomposed into multiple metadata vectors (MVs), each MV representing a particular sensor (e.g., thermometer, hygrometer). In some examples, an explanation vector (EV) can be provided, which includes a vector of the KSs of the respective explanations.”) The metadata, which provides various characteristics from the first set of characteristics, is trained into, and thereby included in, the knowledge graph.
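For clarity, the selection loop described in the preceding citation can be summarized in the following Python sketch. It assumes the image-rendered equation is a weighted sum of the semantic and data scores (consistent with the weights quoted above); all model names, score values, and the threshold are hypothetical.

```python
# Illustrative sketch of the selection loop in LECUE [0048]-[0050], assuming
# KS = w1*semantic_score + w2*data_score with the quoted weights 0.5/0.5.
W1, W2 = 0.5, 0.5
THRESHOLD = 0.6  # hypothetical threshold knowledge score

candidates = {
    # model_id: (semantic_score, data_score) -- hypothetical values
    "model_E1": (0.40, 0.33),
    "model_E1E2": (0.70, 0.66),
}

def knowledge_score(semantic_score, data_score):
    return W1 * semantic_score + W2 * data_score

scored = {m: knowledge_score(ss, ds) for m, (ss, ds) in candidates.items()}
passing = {m: ks for m, ks in scored.items() if ks > THRESHOLD}

if passing:
    # Of the models exceeding the threshold, output the highest-scoring one.
    best = max(passing, key=passing.get)
    print("selected:", best, passing[best])
else:
    # Otherwise LECUE recomposes the training data (e.g., using sensor
    # metadata) and executes another iteration.
    print("no model passes; recompose training data and iterate")
```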
Further, LECUE teaches “…wherein the machine learning model is trained using salient attribute-based triples”:
LECUE cites the use of knowledge graphs extensively. As explained in section “2.1 Knowledge Graphs are More Than Graphs” of “Knowledge Graphs 2021: A Data Odyssey”, available at http://www.vldb.org/pvldb/vol14/p3233-weikum.pdf: “The term knowledge graphs is actually a misnomer and oversimplifies the structure and value of KBs (knowledge bases). Graphs are binary relations, but KBs are not limited to such instances, called subject-predicate object triples, or SPO triples for short. Hence, KB and not KG would be the appropriate terminology, but the term KG became widely established through press releases of big Internet stakeholders...” This shows that attribute-based triples are standard in the technology of knowledge graphs/knowledge bases. The term “salient” here just means that the most important features are selected, as is the goal of any machine learning model.
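As a purely illustrative sketch of the triple structure discussed above (the entities, predicates, and salience filter below are hypothetical and are not drawn from LECUE), a knowledge base of attribute-based SPO triples and a “salient” subset may be represented as:

```python
# Hypothetical illustration of subject-predicate-object (SPO) triples, the
# standard unit of a knowledge base/knowledge graph per the Weikum excerpt.
triples = [
    ("engine_A", "has_characteristic", "turbocharged"),
    ("engine_A", "rated_speed_rpm", 2300),
    ("calibration_project_1", "targets", "engine_A"),
    ("calibration_project_1", "emission_standard", "EPA_Tier_4"),
]

# A "salient" subset keeps only the triples whose attributes matter most for
# the prediction task, e.g., by filtering on a predicate whitelist.
salient_predicates = {"has_characteristic", "emission_standard"}
salient_triples = [t for t in triples if t[1] in salient_predicates]
print(salient_triples)
```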
LECUE fails to explicitly teach the methods in relation to “engine emissions calibration”, and it further fails to teach “in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project and using the machine learning model to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the… project is feasible, wherein the machine learning model is trained using salient attribute-based triples; and determining, using the machine learning model, a certainty value associated with the feasibility prediction based on the application of the at least one expert derived rule to the second set of characteristics.”
However, analogous art, MOHAMMADHASSANI, does teach using machine learning methods in correlation with “engine emissions calibration” ([Pages 1-2, Introduction] “Direct injection diesel engines are used as propulsion systems with low fuel consumption and very high efficiency for automotive applications. Any attempts to use their privileges require considering emissions stringent disciplines which enforce engine manufacturers to tender their productions with lower emissions [1]. Recently, Lenz and Cozzarini [2] have presented statistics showing that the worldwide passenger car and commercial vehicle traffic contribute 20% of the total anthropogenic emissions of nitrogen oxides (NOx). These emissions have damaging effects upon the environment and people. Therefore, how to control the exhaust emissions especially NOx from diesel engines has become an essential subject for researchers of the automotive field in the world.
The diesel engine industry has undergone a great technical development in the last few years, creating a number of new strategies such as electronic control units (ECUs) and/or engineering management systems (EMSs) as well as new injection systems [3–7]. They all use some kinds of artificial intelligence (AI) techniques such as artificial neural network (ANN) (a machine learning model) to process the engine operating conditions and prognosticate the fairly best values of the controlling parameters with the aim of optimizing the engine characteristics (engine emissions calibration using characteristics).
Digital computers have provided a rapid means of performing many calculations involving the ANN methods. Along with the development of high-speed digital computers, the application of the ANN approach could be outspread in a very impressive rate in several fields. One of the major applications of ANN is industrial pollutants control. Kalogirou [8] presented an elaborated review on the recent applications of AI in environmental pollutants control.
Neural networks are powerful modeling techniques with the ability of identifying cryptic nonlinear highly complex relationships between their input and output data [9]. ANN describes such relations by updating network weights using a trial-and-error-based arithmetic method and a training algorithm such as Levenberg-Marquardt (LM).
A number of studies have been conducted to predict the characteristics of internal combustion engines (ICE) by using ANN approach. This approach has been used by Xu et al. [10] to predict engine systems reliability. The injection characteristics of direct injection (DI) diesel engines have been investigated by Yang et al. [11]. In [12], the effects of NOx and soot level in the case of high-pressure fuel injection have been investigated in a single-cylinder DI diesel engine. ANN has been used to predict the exhaust emissions and performance of a diesel engine taking into account several operating conditions such as the percentage of throttle opening, injection time, engine speed, and fuel compositions as the network inputs [13–15].However, there is no literature that reports the application of neural network to predict and model NOx emissions in terms of engine speed, intake air temperature, and mass fuel injection (MFI) rate. In this study, NOx emissions from a diesel engine are investigated using ANN. For this purpose, experimental tests have been conducted for 144 engine speeds ranging from 591 to 2308 rpm.”)
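As a minimal sketch of the modeling approach MOHAMMADHASSANI describes (the data below are synthetic, the input ranges beyond engine speed are hypothetical, and scikit-learn’s generic solver stands in for the paper’s Levenberg-Marquardt training, which scikit-learn does not provide), an ANN mapping engine speed, intake air temperature, and mass fuel injection rate to a NOx value might look like:

```python
# Illustrative sketch only: an ANN regressor in the spirit of MOHAMMADHASSANI,
# mapping (engine speed, intake air temperature, mass fuel injection rate) to
# NOx. The paper's 144 engine speeds (591-2308 rpm) set the sampling range.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 144
X = np.column_stack([
    rng.uniform(591, 2308, n),  # engine speed (rpm), range per the paper
    rng.uniform(20, 60, n),     # intake air temperature (deg C), hypothetical range
    rng.uniform(5, 50, n),      # mass fuel injection rate, hypothetical range
])
# Synthetic NOx values for demonstration; not measured data.
y = 0.2 * X[:, 0] + 3.0 * X[:, 1] + 8.0 * X[:, 2] + rng.normal(0, 10, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))  # predicted NOx for three operating points
```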
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of LECUE with the teachings of MOHAMMADHASSANI, because both references relate to machine learning techniques used to optimize real-world applications, and LECUE even teaches ([0021] “…The example context includes food manufacturing… It is contemplated, however, that implementations of the present disclosure can be realized in any appropriate context.”)
One of ordinary skill in the art would have been motivated to do so because, as MOHAMMADHASSANI points out ([Introduction]), “how to control the exhaust emissions… has become an essential subject for researchers of the automotive field in the world” and “They all use some kinds of artificial intelligence (AI) techniques such as artificial neural network (ANN) to process the engine operating conditions and prognosticate the fairly best values of the controlling parameters with the aim of optimizing the engine characteristics.”
LECUE in view of MOHAMMADHASSANI still fails to explicitly teach “in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generating, based on at least one inefficiency rule that incorporates at least one observation of an engine associated with the engine emissions calibration project and using the machine learning model to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the… project is feasible, wherein the machine learning model is trained using salient attribute-based triples; and determining, using the machine learning model, a certainty value associated with the feasibility prediction based on the application of the at least one expert derived rule to the second set of characteristics.”
However, analogous art, ANGLIN, does teach “in response to a determination that each characteristic of the first set of characteristics corresponds to at least one characteristic in the second set of characteristics: generating… using the machine learning model to apply at least one expert derived rule to the second set of characteristics, a feasibility prediction indicating whether the… project is feasible… and determining, using the machine learning model, a certainty value associated with the feasibility prediction based on the application of the at least one expert derived rule to the second set of characteristics.” ([0004] “According to one embodiment of the present invention, a method in a computer provides for calibrating a machine learning classification model with uncertainty interval (a certainty value). A machine learning classification model, trained on a training data set, is provided in a computer that models a probabilistic relationship between observed values and discrete outcomes (An expert derived rule (a probabilistic relationship model) being applied in response to the corresponding sets of characteristics (observed values and discrete outcomes)). The computer generates a validation of the machine learning classification model from a validation data set. The validation includes a model confidence at the observed value. For each validation, the computer receives a correctness indication (a feasibility prediction) of a discrete outcome. Using a calibration service, the computer generates an uncertainty interval (certainty value) over the validation. The uncertainty interval is generated from the model confidence and the correctness indication (The certainty value is associated with the feasibility prediction). The computer calibrates the model confidence to probabilities of the discrete outcomes based on the uncertainty interval.”)
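To make the cited mechanism concrete, the following Python sketch pairs model confidences with correctness indications over a validation set and derives a per-bin calibrated probability with an uncertainty interval. The data are synthetic, and the binning and normal-approximation interval are illustrative choices, not ANGLIN’s disclosed calibration service.

```python
# Illustrative sketch of confidence calibration with an uncertainty interval,
# in the spirit of ANGLIN [0004]: for each confidence bin, estimate the
# empirical probability of a correct outcome and a 95% binomial
# (normal-approximation) interval around it.
import numpy as np

rng = np.random.default_rng(1)
confidence = rng.uniform(0.5, 1.0, 1000)         # model confidence per validation case
correct = rng.random(1000) < confidence ** 1.5   # correctness indication (synthetic miscalibration)

bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    n = mask.sum()
    p = correct[mask].mean()                     # calibrated probability for this bin
    half = 1.96 * np.sqrt(p * (1 - p) / n)       # half-width of the uncertainty interval
    print(f"confidence [{lo:.1f}, {hi:.1f}): p(correct)={p:.2f} +/- {half:.2f} (n={n})")
```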
Further, when LECUE in view of MOHAMMADHASSANI is combined with the teachings of ANGLIN, the combination results in the generation of the feasibility prediction values, as taught by ANGLIN, “based on at least one inefficiency rule that incorporates at least one observation” [cited above, LECUE, 0046-0051] “of an engine emissions calibration project” [cited above, MOHAMMADHASSANI, pages 1-2, introduction].
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of LECUE in view of MOHAMMADHASSANI with the teachings of ANGLIN because both references explore methods of employing machine learning models and techniques to optimize real-world systems.
One of ordinary skill in the art would have been motivated to do so because doing so would result in a more thorough validation of predictive models, maintaining and ensuring accurate results.
Regarding claim 2, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 1. Further, LECUE teaches “wherein the machine learning model is trained further using data associated with other… calibration projects” ([0015] “Implementations of the present disclosure are generally directed to capturing a metric, referred to herein as knowledge coverage, of machine learning models. More particularly implementations of the present disclosure are directed to using knowledge coverage to drive selection of the machine learning model to provide an optimal knowledge coverage in a given domain ontology. In some examples, and as introduced above, a predictive model provides at least one result (e.g., a value), which can be described as a class of the input provided to the predictive model. Implementations of the present disclosure provide for iterative improvement of predictive models to provide a predictive model having an optimized knowledge coverage. (iterative improvement shows that it is trained further using more projects.)”) Essentially, the knowledge coverage (knowledge graph) was initially trained using other relevant characteristics to associate with input characteristics. Furthermore…
([0048-0051] “In accordance with implementations of the present disclosure, a knowledge coverage (also referred to as knowledge score, and/or knowledge coverage score) is determined for each data set in the representative data based on the semantic scores and the data scores. In some examples, each knowledge score (KS) can be determined based on the following example relationship:
KS = ω1·(semantic score) + ω2·(data score) [equation rendered as an image in LECUE], where ω1 and ω2 are respective weights (e.g., ω1 = 0.5, ω2 = 0.5). Using the examples from above, example knowledge scores are provided as: [table of example knowledge scores, rendered as an image in LECUE]
In accordance with implementations of the present disclosure, the knowledge score for each data set can be compared to a threshold knowledge score. If a knowledge score exceeds the knowledge score threshold, the knowledge score, and the predictive model associated with the knowledge score are provided as output. That is, the predictive model corresponding to the data set having the sufficiently high knowledge score is determined to sufficiently capture the knowledge of the domain-specific ontology. Consequently, that predictive model is provided as output (e.g., to be used in production).
In accordance with implementations of the present disclosure, a predictive model can be provided for each data set. Consequently, a plurality of predictive models, and respective knowledge scores can be provided. In some examples, the knowledge scores of multiple predictive models can exceed the threshold knowledge score. In such instances, of the predictive models having knowledge scores exceeding the threshold knowledge score, the predictive model having the highest knowledge score is selected as the output. If none of the knowledge scores exceeds the knowledge score threshold, implementations of the present disclosure execute another iteration. More particularly, the training data that has been used in the previous iteration is recomposed to provide modified training data for training a subsequent predictive model (meaning characteristics from the training data (first set of characteristics) will be used to train and update the knowledge graph). In some implementations, and as described in further detail herein, the data sets are recomposed based on additional metadata.
In some implementations, data composition (recomposition) is based on metadata of sensor (physical devices) used to record the training data (e.g., the data sets). In some examples, the metadata captures technical characteristics of the respective sensors. Example technical characteristics can include location, manufacturer, unique identifier, energy consumption, cost, defect rate, and the like. In some examples, the metadata can be provided as a table, each row representing a respective sensor, and each column representing characteristics. In some examples, the metadata of the table can be decomposed into multiple metadata vectors (MVs), each MV representing a particular sensor (e.g., thermometer, hygrometer). In some examples, an explanation vector (EV) can be provided, which includes a vector of the KSs of the respective explanations.”) This citation shows that the knowledge graph, being updated with any new features found in the input, retains those features for the next project. Accordingly, in the context of each subsequent project, the model has effectively been trained using data from previous projects.
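For illustration only, the knowledge-coverage-driven selection loop described in the citations above can be sketched as follows in Python. The weighted-sum form of the knowledge score, the threshold value, the toy data sets, and the recompose stand-in are all hypothetical assumptions drawn from paragraphs [0048]-[0051]; this is not LECUE's actual implementation.

    # Hypothetical sketch of knowledge-coverage-driven model selection.
    W1, W2 = 0.5, 0.5        # example weights (cf. LECUE [0048]: w1 = 0.5, w2 = 0.5)
    KS_THRESHOLD = 0.80      # hypothetical knowledge-score threshold

    def knowledge_score(semantic_score, data_score):
        # Assumed form: a weighted combination of the semantic and data scores.
        return W1 * semantic_score + W2 * data_score

    # Toy "data sets" carrying precomputed semantic/data scores.
    data_sets = [
        {"name": "ds1", "semantic": 0.60, "data": 0.70},
        {"name": "ds2", "semantic": 0.65, "data": 0.75},
    ]

    def recompose(ds):
        # Stand-in for metadata-based recomposition of the training data ([0051]).
        return {**ds, "semantic": min(1.0, ds["semantic"] + 0.10),
                "data": min(1.0, ds["data"] + 0.10)}

    while True:
        scored = [(knowledge_score(ds["semantic"], ds["data"]), ds) for ds in data_sets]
        best_ks, best_ds = max(scored, key=lambda t: t[0])
        if best_ks > KS_THRESHOLD:  # sufficient knowledge coverage: output this model ([0049])
            print("selected", best_ds["name"], "with KS =", round(best_ks, 2))
            break
        data_sets = [recompose(ds) for ds in data_sets]  # otherwise, iterate ([0050])

Run as written, the sketch recomposes the toy data twice and then selects ds2 with KS = 0.9, mirroring the cited iterate-until-coverage behavior.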
Regarding claim 3, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 1. Further, ANGLIN teaches “wherein the at least one expert derived rule includes at least one deterministic expert derived rule” ([0045] “Classification algorithms are used to divide a dataset into classes based on different parameters. The task of the classification algorithm is to find a mapping function to map an input (x) to a discrete output (y). In other words, classification algorithms are used to predict the discrete values for the classifications, such as Male or Female, True or False, Spam or Not Spam, etc. Types of Classification Algorithms include Logistic Regression, K-Nearest Neighbors, Support Vector Machines (SVM), Kernel SVM, Naïve Bayes, Decision Tree Classification, and Random Forest Classification.”) Unlike prediction, classification is known in the art to be a deterministic process: a trained classifier maps a given input (x) to a single discrete output (y).
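As a concrete illustration of the deterministic character of classification noted above, the following minimal sketch (the rule and threshold are hypothetical, not drawn from ANGLIN) maps each input to exactly one discrete class:

    def classify(x):
        # A fixed decision rule: one input, exactly one discrete label.
        return "Spam" if x >= 0.5 else "Not Spam"

    assert classify(0.7) == "Spam"
    assert classify(0.7) == classify(0.7)  # deterministic: same input, same class
    assert classify(0.2) == "Not Spam"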
Regarding claim 4, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 1. Further, ANGLIN teaches “wherein the at least one expert derived rule includes at least one probabilistic expert derived rule” ([0004] “According to one embodiment of the present invention, a method in a computer provides for calibrating a machine learning classification model with uncertainty interval (a certainty value). A machine learning classification model, trained on a training data set, is provided in a computer that models a probabilistic relationship between observed values and discrete outcomes (An expert derived rule (a probabilistic relationship model) being applied in response to the corresponding sets of characteristics (observed values and discrete outcomes)). The computer generates a validation of the machine learning classification model from a validation data set. The validation includes a model confidence at the observed value. For each validation, the computer receives a correctness indication (a feasibility prediction) of a discrete outcome. Using a calibration service, the computer generates an uncertainty interval (certainty value) over the validation. The uncertainty interval is generated from the model confidence and the correctness indication (The certainty value is associated with the feasibility prediction). The computer calibrates the model confidence to probabilities of the discrete outcomes based on the uncertainty interval.”) Here, the cited expert derived rule, the probabilistic relationship model, is probabilistic.
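By contrast with the deterministic rule above, the probabilistic rule cited here can be illustrated by a minimal sketch of a logistic mapping from an observed value to the probability of a discrete outcome; the coefficients are hypothetical, and the sketch is not ANGLIN's actual model:

    import math

    def p_outcome(observed, a=4.0, b=-2.0):
        # Logistic mapping from an observed value to the probability of a
        # discrete outcome; coefficients a and b are hypothetical.
        return 1.0 / (1.0 + math.exp(-(a * observed + b)))

    print(round(p_outcome(0.5), 2))  # 0.5  -- a probability, not a fixed class
    print(round(p_outcome(0.9), 2))  # 0.83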
Regarding claim 5, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 1. Further, ANGLIN teaches “wherein the feasibility prediction indicating whether the engine emissions calibration project is feasible includes indicating whether the… calibration project corresponds to an… output that is within a desired range” (Figure 6.)
Further, ([0080] “In this illustrative example, calibration curve 600 maps model confidence, such as model confidence 234 of FIG. 2, to a probability estimate of correctness (feasibility prediction), such as probabilities 236 of FIG. 2. As depicted, calibration curve 600 is a logistic curve, including best fit 610, bounded over the uncertainty interval 620.”) This shows that the “probability estimate of correctness/correctness indication/feasibility prediction” is provided as a curve spanning a range of possible values about a “best fit.”
Regarding claim 6, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 1. Further, ANGLIN teaches “generating an output including the feasibility prediction and the certainty value” ([0051-0052] “Calibration service 206 generates an uncertainty interval (certainty value) 220 over the validation. uncertainty interval 220 is an estimate computed from validation data set 238. Uncertainty interval 220 provides a range of expected values for an unknown parameter, for example, a population mean. Uncertainty interval 220 is generated from the model confidence and the correctness indication (feasibility prediction).
Calibration service 206 calibrates model confidence 234 to probabilities 236 of the discrete outcomes 228 based on the uncertainty interval 220. In this illustrative example, calibration curve 230 is a logistic curve of best fit 231. Calibration service 206 generating the logistic curve bounded over the uncertainty interval 220. Calibration service 206 then displays the logistic curve with the uncertainty interval 220 on a graphical user interface 214 (output of the feasibility prediction and certainty value).”)
Further, an example curve is shown in Figure 6, which shows both the uncertainty interval (certainty value) and the “probability estimate of correctness/correctness indication/feasibility prediction.”
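For illustration, the cited output pairing a feasibility prediction with a certainty value can be sketched as a calibrated probability bounded by an uncertainty interval. The best-fit parameters and the interval half-width below are hypothetical placeholders, not ANGLIN's actual calibration:

    import math

    def calibrated(conf, a=6.0, b=-3.0):
        # Logistic curve of best fit mapping model confidence to a probability.
        return 1.0 / (1.0 + math.exp(-(a * conf + b)))

    def output(conf, half_width=0.05):
        # Pair the calibrated probability ("feasibility prediction") with an
        # interval ("certainty value"); the half-width is a hypothetical placeholder.
        p = calibrated(conf)
        return {"feasibility_prediction": round(p, 3),
                "uncertainty_interval": (round(max(0.0, p - half_width), 3),
                                         round(min(1.0, p + half_width), 3))}

    print(output(0.8))
    # {'feasibility_prediction': 0.858, 'uncertainty_interval': (0.808, 0.908)}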
Regarding claim 7, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 6. Further, ANGLIN teaches “providing the output at a display” ([0051-0052] “Calibration service 206 generates an uncertainty interval (certainty value) 220 over the validation. uncertainty interval 220 is an estimate computed from validation data set 238. Uncertainty interval 220 provides a range of expected values for an unknown parameter, for example, a population mean. Uncertainty interval 220 is generated from the model confidence and the correctness indication (feasibility prediction).
Calibration service 206 calibrates model confidence 234 to probabilities 236 of the discrete outcomes 228 based on the uncertainty interval 220. In this illustrative example, calibration curve 230 is a logistic curve of best fit 231. Calibration service 206 generating the logistic curve bounded over the uncertainty interval 220. Calibration service 206 then displays the logistic curve with the uncertainty interval 220 on a graphical user interface 214 (output of the feasibility prediction and certainty value on a graphical user interface/display).”)
Regarding claim 8, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 6. Further, ANGLIN teaches “receiving, responsive to the output, feedback indicating whether a user accepted the feasibility prediction” ([0073-0074] “In response to determining that the probability of the prediction is not less than the confidence threshold, the prediction is automatically applied to the record linkage, or to another corresponding business application for other use cases. In other words, model predictions having a model confidence greater than the threshold, that is, predicted classifications where the model has very low probability of being incorrect, are recorded in linked records 322 based solely on the model prediction, without intervention by user 320.
However, in response to determining that the probability of the prediction is less than the confidence threshold, the prediction flagged for review. In other words, model predictions having a model confidence less than the threshold, that is, predicted classifications where there is a high probability that the model is incorrect, are instead flagged, and forwarded to the user 216 for manual determination of a match or mismatch between the record pairs. In one illustrative example, these manual determinations by user 216 can be used to provide additional validations 232 to calibration service 206 of FIG. 2.”) Here, we see that results with low confidence are flagged for a user to manually review, causing the system to receive feedback indicating whether the user accepted the prediction.
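The cited flag-for-review behavior can be illustrated by a minimal confidence-gate sketch; the threshold, record structure, and ask_user callback are hypothetical, not ANGLIN's implementation:

    CONFIDENCE_THRESHOLD = 0.9  # hypothetical threshold

    def route(prediction, probability, ask_user):
        # High-confidence predictions are applied automatically; low-confidence
        # predictions are flagged and sent to the user for manual review, and the
        # user's accept/reject decision is returned as feedback ([0073]-[0074]).
        if probability >= CONFIDENCE_THRESHOLD:
            return {"prediction": prediction, "accepted": True, "by": "auto"}
        return {"prediction": prediction, "accepted": ask_user(prediction), "by": "user"}

    print(route("feasible", 0.95, ask_user=lambda p: True))   # applied automatically
    print(route("feasible", 0.60, ask_user=lambda p: False))  # flagged; user rejects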
Regarding claim 9, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 8. Further, ANGLIN teaches “subsequently training the machine learning model using the feedback” ([0046-0049] “In this illustrative example, calibration service 206 provides a classification model 222, trained on a training data set 224. Classification model 222 models a probabilistic relationship between observed values 226 and discrete outcomes 228 based validation data set 238.
Calibration service 206 provides model calibration in a Bayesian framework with support for uncertainty. Calibration service 206 replaces other commonly used calibration approaches that merely append a Bayesian network or Bayesian models on top of existing classification models. Rather than continuously refining a best fit calibration to match the training data set 224, calibration service 206 assumes that individual data points are random, and then fits and adapts uncertainty interval 220 around the mutable calibration curve 230 as more validations 232 are received (This shows that the model continuously fits and adapts/trains the uncertainty interval (certainty value) as more validations are received, such as previously cited when the user provides feedback in 0074).
In other words, calibration service 206 ingests model confidence 234 generated by machine learning models 252 and maps those confidences to the probabilities 236 of a correct positive classification. Based on those probabilities 236, calibration service 206 builds an uncertainty interval 220, and mutates the calibration curve 230 according to uncertainty interval 220. As more validations 232 are received, thereby building greater epistemic confidence, uncertainty interval 220 shrinks.
In this illustrative example, calibration service 206 generates one or more validations 232 of the classification model 222 from a validation data set 238. Validations 232 includes a model confidence 234 at the observed value, as well as a correctness indication (feasibility prediction) 240 submitted from a user 216 (subsequent training using user feedback). Calibration service 206 operates over validations 232, generated from validation data set 238.”)
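The cited shrink-with-validations behavior can be illustrated by a sketch in which the interval's half-width decays as user feedback accumulates; the 1/sqrt(n) decay and the initial width are assumptions for illustration only, not ANGLIN's stated update rule:

    import math

    validations = []  # accumulated user feedback (True = prediction accepted)

    def half_width(n, initial=0.20):
        # Hypothetical decay: the interval narrows as validations accumulate.
        return initial / math.sqrt(n + 1)

    for accepted in (True, True, False, True):
        validations.append(accepted)  # "subsequently train" on the feedback
        print(len(validations), round(half_width(len(validations)), 3))
    # prints shrinking half-widths: 0.141, 0.115, 0.1, 0.089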
Regarding claim 10, LECUE teaches “A system… the system comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to:” ([0008] “The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.”)
Further, claim 10 recites similar additional limitations as claim 1, and is rejected under the same rationale.
Regarding claims 11-18, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 10. Further, claims 11-18 comprise similar additional limitations as claims 2-9, respectively, and are rejected under the same rationale.
Regarding claim 19, LECUE teaches “An apparatus for project feasibility assessment, the apparatus comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to:” ([0008] “The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.”)
Further, ANGLIN teaches “generate an output including the feasibility prediction and the certainty value” ([0051-0052] “Calibration service 206 generates an uncertainty interval (certainty value) 220 over the validation. uncertainty interval 220 is an estimate computed from validation data set 238. Uncertainty interval 220 provides a range of expected values for an unknown parameter, for example, a population mean. Uncertainty interval 220 is generated from the model confidence and the correctness indication (feasibility prediction).
Calibration service 206 calibrates model confidence 234 to probabilities 236 of the discrete outcomes 228 based on the uncertainty interval 220. In this illustrative example, calibration curve 230 is a logistic curve of best fit 231. Calibration service 206 generating the logistic curve bounded over the uncertainty interval 220. Calibration service 206 then displays the logistic curve with the uncertainty interval 220 on a graphical user interface 214 (output of the feasibility prediction and certainty value).”)
Further, an example curve is shown in Figure 6, which shows both the uncertainty interval (certainty value) and the “probability estimate of correctness/correctness indication/feasibility prediction.”
Further, ANGLIN teaches “receive, responsive to the output, feedback indicating whether a user accepted the feasibility prediction” ([0073-0074] “In response to determining that the probability of the prediction is not less than the confidence threshold, the prediction is automatically applied to the record linkage, or to another corresponding business application for other use cases. In other words, model predictions having a model confidence greater than the threshold, that is, predicted classifications where the model has very low probability of being incorrect, are recorded in linked records 322 based solely on the model prediction, without intervention by user 320.
However, in response to determining that the probability of the prediction is less than the confidence threshold, the prediction flagged for review. In other words, model predictions having a model confidence less than the threshold, that is, predicted classifications where there is a high probability that the model is incorrect, are instead flagged, and forwarded to the user 216 for manual determination of a match or mismatch between the record pairs. In one illustrative example, these manual determinations by user 216 can be used to provide additional validations 232 to calibration service 206 of FIG. 2.”) Here, we see that results with low confidence are flagged for a user to manually review, causing the system to receive feedback indicating whether the user accepted the prediction.
Further, ANGLIN teaches “subsequently train the machine learning model using the feedback” ([0046-0049] “In this illustrative example, calibration service 206 provides a classification model 222, trained on a training data set 224. Classification model 222 models a probabilistic relationship between observed values 226 and discrete outcomes 228 based validation data set 238.
Calibration service 206 provides model calibration in a Bayesian framework with support for uncertainty. Calibration service 206 replaces other commonly used calibration approaches that merely append a Bayesian network or Bayesian models on top of existing classification models. Rather than continuously refining a best fit calibration to match the training data set 224, calibration service 206 assumes that individual data points are random, and then fits and adapts uncertainty interval 220 around the mutable calibration curve 230 as more validations 232 are received (This shows that the model continuously fits and adapts/trains the uncertainty interval (certainty value) as more validations are received, such as previously cited when the user provides feedback in 0074).
In other words, calibration service 206 ingests model confidence 234 generated by machine learning models 252 and maps those confidences to the probabilities 236 of a correct positive classification. Based on those probabilities 236, calibration service 206 builds an uncertainty interval 220, and mutates the calibration curve 230 according to uncertainty interval 220. As more validations 232 are received, thereby building greater epistemic confidence, uncertainty interval 220 shrinks.
In this illustrative example, calibration service 206 generates one or more validations 232 of the classification model 222 from a validation data set 238. Validations 232 includes a model confidence 234 at the observed value, as well as a correctness indication (feasibility prediction) 240 submitted from a user 216 (subsequent training using user feedback). Calibration service 206 operates over validations 232, generated from validation data set 238.”)
Further, claim 19 recites similar additional limitations as claim 1, and is rejected under the same rationale.
Regarding claim 20, LECUE in view of MOHAMMADHASSANI & ANGLIN teaches the limitations of claim 19. Further, claim 20 recites similar additional limitations as claim 2 and is rejected under the same rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW LEE LEWIS whose telephone number is (571)272-1906. The examiner can normally be reached Monday: 12:00 PM - 4:00 PM and Tuesday - Friday: 12:00 PM - 9:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Matthew Lee Lewis/Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144