Prosecution Insights
Last updated: April 19, 2026
Application No. 17/879,178

TRAINING A MACHINE LEARNING MODEL USING AN EXPERT SYSTEM

Final Rejection §103
Filed: Aug 02, 2022
Examiner: SCHALLHORN, TYLER J
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Siemens Aktiengesellschaft
OA Round: 2 (Final)
Grant Probability: 34% (At Risk)
Expected OA Rounds: 3-4
Median Time to Grant: 5y 1m
Grant Probability With Interview: 48%

Examiner Intelligence

Career Allow Rate: 34% (89 granted / 262 resolved; -21.0% vs TC avg)
Interview Lift: +13.8% among resolved cases with interview (moderate)
Avg Prosecution: 5y 1m
Career History: 282 total applications across all art units; 20 currently pending

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 262 resolved cases.

Office Action

§103
DETAILED ACTION

This action is in response to the amendment filed 28 October 2025. Claims 1–11 are pending. Claims 1 and 9 are independent. Claims 1–11 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

The objection to the title of the invention is withdrawn in light of the new title. The rejection of claim 11 under § 112 is withdrawn in light of the amendment and accompanying arguments (remarks, p. 8). The rejections of claims 1–11 under § 101 are withdrawn in light of the amendment and accompanying arguments (remarks, pp. 9–11). Applicant's arguments with respect to the rejections under § 103 have been fully considered and are persuasive. Therefore, the rejections are withdrawn. However, upon further search and consideration, new grounds of rejection are made in view of Daly et al.

Claim Rejections—35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1 and 3–11 are rejected under 35 U.S.C. § 103 as being unpatentable over Costabello et al. (US 2020/0356874 A1) [hereinafter Costabello] in view of Brennan et al. (US 2018/0075359 A1) [hereinafter Brennan] and Daly et al. (US 9,390,112 B1) [hereinafter Daly].

Regarding independent claim 1, Costabello teaches [a] computer implemented method for training a machine learning model, comprising the following operations, wherein the operations are performed by components, and wherein the components are software components executed by one or more processors and/or hardware components: System circuitry including, e.g., processors and memory (Costabello, ¶ 55).

training, by a training module, a machine learning model based on known facts; A knowledge graph is used as a training dataset for an embeddings generator [machine learning model] (Costabello, ¶¶ 17–20, 48).

generating, by an active learning module using the machine learning model, candidate triples; The embeddings generator selects a set of candidate facts (Costabello, ¶¶ 25–27). The facts in the knowledge graph and the candidate facts are triples (Costabello, ¶¶ 18, 33).
[…] outputting, by the expert system, novel facts, with the novel facts representing the result of the verification; and The candidate facts are analyzed to detect their level of truthfulness and surprise, and a new surprising fact is selected and added [output] to the knowledge graph (Costabello, ¶¶ 49–51).

[…] Costabello teaches determining a truthfulness of the candidate facts, but does not expressly teach using an expert system (Costabello, ¶ 49). However, Brennan teaches:

reasoning, by an expert system, in order to verify the candidate triples; A candidate missing edge [novel fact] is evaluated for addition to a knowledge graph (Brennan, ¶¶ 5, 21). The candidate edges are evaluated by generating a query and answering using reasoning algorithms (Brennan, ¶ 113). The reasoning algorithms are part of a cognitive system [expert system] (Brennan, ¶ 41).

retraining, by the training module, the machine learning model based on the novel facts. The system may repeatedly extend the knowledge graph by generating new questions and evaluating candidate questions, iteratively extending the knowledge graph, using the extended knowledge graph from each previous step (Brennan, ¶¶ 22, 121, 124, 130).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Costabello with those of Brennan. One would have been motivated to do so in order to improve the probability of the knowledge graph data being correct, and improve the accuracy of systems relying on the knowledge graph data (Brennan, ¶¶ 22, 62).

Costabello/Brennan teaches augmenting a knowledge graph, but does not expressly teach an expert system acting as an oracle for an ML model.
However, Daly teaches:

[reasoning, by an expert system, in order to verify the candidate triples,] wherein the expert system acts as an oracle for the machine learning model A data analysis system includes a predictive machine learning model and at least one oracle for assessing the quality of a data sample (Daly, col. 8 ll. 1–35). The new data samples are added to a data reservoir based on the quality verification from the oracle (Daly, col. 1 ll. 40–60).

[outputting, by the expert system, novel facts, with the novel facts representing the result of the verification], wherein the novel facts provide labels for the candidate triples The oracle is used to assign labels to the new data samples added to the training data (Daly, col. 9 ll. 1–10).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Costabello/Brennan with those of Daly. One would have been motivated to do so in order to improve the consistency of the quality of the data (Daly, col. 1 ll. 20–35).

Regarding dependent claim 3, the rejection of claim 1 is incorporated and Costabello/Brennan/Daly further teaches:

wherein the generating operation includes at least: calculating, by the machine learning model, a calibrated score for each unknown fact of a set of unknown facts, wherein the calibrated score reflects a predicted truth value of the respective unknown fact; and A level of truthfulness is determined for the candidate facts (Costabello, ¶ 49).

determining, by a query strategy module, the candidate triples by processing the set of unknown facts, parameters of the machine learning model and the calibrated scores, and the known facts. The system determines a confidence score for a candidate answer [unknown fact] to a query based on evidence from corpora [from which the knowledge graph is generated] (Brennan, ¶¶ 69, 89).
The confidence scores may be based on weights that are learned through machine learning processes [parameters of a machine learning model] (Brennan, ¶ 117).

Regarding dependent claim 4, the rejection of claim 3 is incorporated and Costabello/Brennan/Daly further teaches:

wherein a calibration of the calibrated scores is implemented with: a probabilistic treatment with Bayesian neural networks, or a post processing step. The confidence scores undergo a final confidence merging and ranking stage [post processing step] (Brennan, ¶ 119).

Regarding dependent claim 5, the rejection of claim 1 is incorporated and Costabello/Brennan/Daly further teaches:

wherein the expert system: contains an inference engine and logical rules for processing the candidate triples, or The hypotheses [candidate edges/facts] are evaluated using rules (Brennan, ¶¶ 73–74, 79, 129). is an engineering configurator configured for checking a consistency of the candidate triples within a configuration.

Regarding dependent claim 6, the rejection of claim 1 is incorporated and Costabello/Brennan/Daly further teaches:

wherein the training and retraining operations include optimizing, by the training module, parameters of the machine learning model with respect to a loss function, with the loss function describing an accuracy of the calibrated scores computed by the machine learning model. A loss function is used to optimize, e.g., the surprise score, which is used when determining which candidate triples to add to the knowledge graph (Costabello, ¶¶ 23–24).

Regarding dependent claim 7, the rejection of claim 1 is incorporated and Costabello/Brennan/Daly further teaches:

wherein the machine learning model is implemented as a graph neural network, or as a knowledge graph embedding algorithm capable of producing the calibrated scores. The knowledge graph is input to a knowledge graph embedding circuitry [embedding algorithm] (Costabello, ¶¶ 10–12, 19–22).
Regarding dependent claim 8, the rejection of claim 1 is incorporated and Costabello/Brennan/Daly further teaches:

wherein the steps of the method are iterated in an active learning loop. The process of expanding the knowledge graph is repeated [looped] until a termination condition (Brennan, ¶ 22).

Regarding independent claim 9, Costabello teaches [a] system for training a machine learning model, comprising:

a memory storing known facts; and A computer device including data sources [memory] storing the knowledge graph (Costabello, ¶ 57).

a training module, configured for training a machine learning model based on the known facts; A knowledge graph is used as a training dataset for an embeddings generator [machine learning model] (Costabello, ¶¶ 17–20, 48).

an active learning module, configured for generating candidate triples using the machine learning model; The embeddings generator selects a set of candidate facts (Costabello, ¶¶ 25–27). The facts in the knowledge graph and the candidate facts are triples (Costabello, ¶¶ 18, 33).

[…] and outputting novel facts, with the novel facts representing the result of the verification; and The candidate facts are analyzed to detect their level of truthfulness and surprise, and a new surprising fact is selected and added [output] to the knowledge graph (Costabello, ¶¶ 49–51).

[…] Costabello teaches determining a truthfulness of the candidate facts, but does not expressly teach using an expert system (Costabello, ¶ 49). However, Brennan teaches:

an expert system, configured for verifying the candidate triples using reasoning A candidate missing edge [novel fact] is evaluated for addition to a knowledge graph (Brennan, ¶¶ 5, 21). The candidate edges are evaluated by generating a query and answering using reasoning algorithms (Brennan, ¶ 113). The reasoning algorithms are part of a cognitive system [expert system] (Brennan, ¶ 41).
wherein the training module is also configured for retraining the machine learning model based on the novel facts. The system may repeatedly extend the knowledge graph by generating new questions and evaluating candidate questions, iteratively extending the knowledge graph, using the extended knowledge graph from each previous step (Brennan, ¶¶ 22, 121, 124, 130).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Costabello with those of Brennan. One would have been motivated to do so in order to improve the probability of the knowledge graph data being correct, and improve the accuracy of systems relying on the knowledge graph data (Brennan, ¶¶ 22, 62).

Regarding dependent claim 10, Costabello/Brennan teaches [a] computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 1. A storage medium that stores instructions [for performing the invention] (Costabello, ¶ 12).

Regarding dependent claim 11, Costabello/Brennan/Daly teaches [a] provision device for the computer program product according to claim 1, wherein the provision device stores and/or provides the computer program product. Computer readable program instructions for carrying out operations of the invention, which may be located on a computer remote from the user's computer [a provision device] (Brennan, ¶ 31).

Claim 2 is rejected under 35 U.S.C. § 103 as being unpatentable over Costabello et al. (US 2020/0356874 A1) [hereinafter Costabello] in view of Brennan et al. (US 2018/0075359 A1) [hereinafter Brennan] and Daly et al. (US 9,390,112 B1) [hereinafter Daly], further in view of Song et al. (US 2020/0175406 A1) [hereinafter Song].

Regarding dependent claim 2, the rejection of claim 1 is incorporated.
Costabello/Brennan/Daly teaches generating candidate triples, but does not expressly teach doing so using uncertainty sampling, Bayesian optimization, or reinforcement learning. However, Song teaches:

wherein the active learning module generates candidate triples that lie close to a decision boundary of the machine learning model, by: using uncertainty sampling; using Bayesian optimization; or A method including identifying facts based on a knowledge graph data structure, and generating hypotheses from the graph, using a Bayesian learning model (Song, ¶ 7). The model is improved [optimized] based on the new fact/hypothesis (Song, ¶¶ 17, 59). learning a selection policy using reinforcement learning.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Costabello/Brennan with those of Song. Doing so would have been a matter of simple substitution of one known element [the random generation in Costabello] for another [the Bayesian optimization in Song] to obtain predictable results [expanding a knowledge graph wherein the candidate triples are generated using a Bayesian model].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 C.F.R. § 1.17(a)) pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler Schallhorn, whose telephone number is 571-270-3178. The examiner can normally be reached Monday through Friday, 8:30 a.m. to 6 p.m. (ET). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.

/Tyler Schallhorn/
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144
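For readers mapping the claim-1 language onto something concrete, the claimed method reads as a standard active learning loop in which an expert system, rather than a human annotator, acts as the labeling oracle. A minimal sketch follows; every function name, the toy rule-based oracle, and the toy triples are illustrative assumptions, not taken from the application or the cited art:

```python
# Sketch of the claim-1 loop: train on known triples, generate candidate
# triples, verify them with an expert-system oracle, retrain on the verified
# novel facts. All names and data here are hypothetical.

def train(known_facts):
    # Stand-in for fitting a KG-embedding model; here just the set of seen triples.
    return {"seen": set(known_facts)}

def generate_candidates(model, entities, relations):
    # Active learning module: propose (head, relation, tail) triples not yet known.
    return [(h, r, t) for h in entities for r in relations for t in entities
            if h != t and (h, r, t) not in model["seen"]]

def expert_system_oracle(triple):
    # Stand-in for rule-based verification; returns a truth label per candidate.
    h, r, t = triple
    return r == "part_of" and h.startswith("wheel")

def active_learning_round(known_facts, entities, relations):
    model = train(known_facts)                                        # training module
    candidates = generate_candidates(model, entities, relations)      # candidate triples
    novel_facts = [c for c in candidates if expert_system_oracle(c)]  # oracle labels
    return train(known_facts + novel_facts), novel_facts              # retraining

model, novel = active_learning_round(
    known_facts=[("wheel_a", "part_of", "car")],
    entities=["wheel_a", "wheel_b", "car"],
    relations=["part_of"],
)
print(novel)  # verified candidate triples added as novel facts
```

The point of contention in the rejection is exactly the last two steps: Costabello scores candidates for truthfulness, Brennan verifies them with reasoning algorithms, and Daly supplies the oracle-labels-the-training-data framing.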
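Claim 2's alternatives (uncertainty sampling, Bayesian optimization, a reinforcement-learned selection policy) all aim to pick candidate triples near the model's decision boundary. A minimal sketch of the uncertainty sampling variant, assuming calibrated scores in [0, 1] with 0.5 as the boundary; the triples and scores are invented for illustration:

```python
# Uncertainty sampling sketch: keep the k candidate triples whose calibrated
# truth score lies closest to the decision boundary (0.5).

def select_uncertain(scored_triples, k=2, boundary=0.5):
    """scored_triples: list of (triple, calibrated_score in [0, 1])."""
    return sorted(scored_triples, key=lambda ts: abs(ts[1] - boundary))[:k]

scores = [
    (("berlin", "capital_of", "germany"), 0.97),  # confidently true -> skip
    (("paris", "capital_of", "spain"), 0.04),     # confidently false -> skip
    (("lyon", "capital_of", "france"), 0.48),     # near boundary -> query oracle
    (("turbine", "part_of", "plant"), 0.55),      # near boundary -> query oracle
]
picked = select_uncertain(scores)
print([t for t, s in picked])
```

This is the substitution the examiner relies on Song for: replacing random candidate generation with a model-informed selection strategy.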

Prosecution Timeline

Aug 02, 2022: Application Filed
Jul 26, 2025: Non-Final Rejection — §103
Oct 28, 2025: Response Filed
Feb 23, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572403: AUTOMATICALLY CONVERTING ERROR LOGS HAVING DIFFERENT FORMAT TYPES INTO A STANDARDIZED AND LABELED FORMAT HAVING RELEVANT NATURAL LANGUAGE INFORMATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12554987: COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR DNN WEIGHT PRUNING FOR REAL-TIME EXECUTION ON MOBILE DEVICES (granted Feb 17, 2026; 2y 5m to grant)
Patent 12481824: CONTENT ASSOCIATION IN FILE EDITING (granted Nov 25, 2025; 2y 5m to grant)
Patent 12475176: AUTOMATED SYSTEM AND METHOD FOR CREATING STRUCTURED DATA OBJECTS FOR A MEDIA-BASED ELECTRONIC DOCUMENT (granted Nov 18, 2025; 2y 5m to grant)
Patent 12450420: GENERATION AND OPTIMIZATION OF OUTPUT REPRESENTATION (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 34%
With Interview: 48% (+13.8%)
Median Time to Grant: 5y 1m
PTA Risk: Moderate
Based on 262 resolved cases by this examiner. Grant probability derived from career allow rate.
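The headline figures appear to be simple functions of the examiner's career statistics. Assuming the stated derivation (grant probability equals the career allow rate of 89 granted out of 262 resolved, and the with-interview figure adds the +13.8% interview lift), the arithmetic checks out:

```python
# Reproduce the dashboard figures from the underlying counts (assumed derivation).
career_allow_rate = 89 / 262                     # ~0.34 career allow rate
with_interview = career_allow_rate + 0.138       # add the +13.8% interview lift
print(round(career_allow_rate * 100), round(with_interview * 100))  # 34 48
```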
