Prosecution Insights
Last updated: April 19, 2026
Application No. 18/193,796

SYSTEM AND METHOD FOR MANAGEMENT OF INFERENCE MODELS BASED ON FEATURE CONTRIBUTION

Non-Final Office Action: §103, §112

Filed: Mar 31, 2023
Examiner: BREEN, JAKE TIMOTHY
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (7 granted / 10 resolved), +15.0% vs TC avg (above average)
Interview Lift: +75.0% among resolved cases with an interview (strong)
Avg Prosecution: 3y 11m (typical timeline)
Currently Pending: 24
Total Applications: 34 (career history, across all art units)

Statute-Specific Performance

§101: 30.5% (-9.5% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 10 resolved cases.
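For orientation, the headline numbers in these cards are simple ratios. The sketch below reproduces them from the stated counts; note that the relative-lift formula and the 99% cap are my assumptions, not the dashboard's published methodology:

```python
# Hypothetical reconstruction of the headline examiner metrics.
# The lift formula and the 99% cap are assumptions; the dashboard
# does not publish its exact methodology.
granted, resolved = 7, 10
career_allow_rate = granted / resolved            # 0.70 -> "70%"

delta_vs_tc = 0.15                                # "+15.0% vs TC avg"
tc_average = career_allow_rate - delta_vs_tc      # implied TC average, ~0.55

interview_lift = 0.75                             # "+75.0% interview lift", read as relative
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)

print(f"allow {career_allow_rate:.0%}, TC avg {tc_average:.0%}, "
      f"with interview {with_interview:.0%}")
```

Under these assumptions the capped interview-adjusted figure lands on the displayed 99%, which is consistent with, but does not confirm, this reading of the numbers.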

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

This action is in response to the filing on 03/31/2023. Claims 1-20 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 6 and 15 are objected to because of the following informalities:

Claim 6, line 3, recites "the remediate updated inference model"; it should recite -- the remediated updated inference model --.

Claim 15, line 3, recites "the remediate updated inference model"; it should recite -- the remediated updated inference model --.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-9, 11-18, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 2 recites the limitation "establishing, based on the position of the first limiter, a user defined range of user defined ranges" on lines 6-7; it is unclear what establishing a user defined range of user defined ranges entails. For the purpose of examination, it will be interpreted as "establishing, based on the position of the first limiter, a user defined range".
Claim 6 recites the limitation "providing the computer implemented services using the remediate updated inference model using the inference model to obtain an inference used in the computer implemented services" on lines 2-4; it is grammatically uninterpretable. For the purpose of examination, it will be interpreted as "providing the computer implemented services using the remediated updated inference model to obtain an inference".

Claim 11 recites the limitation "establishing, based on the position of the first limiter, a user defined range of user defined ranges" on lines 6-7; it is unclear what establishing a user defined range of user defined ranges entails. For the purpose of examination, it will be interpreted as "establishing, based on the position of the first limiter, a user defined range".

Claim 15 recites the limitation "providing the computer implemented services using the remediate updated inference model using the inference model to obtain an inference used in the computer implemented services" on lines 2-4; it is grammatically uninterpretable. For the purpose of examination, it will be interpreted as "providing the computer implemented services using the remediated updated inference model to obtain an inference".

Claim 20 recites the limitation "establishing, based on the position of the first limiter, a user defined range of user defined ranges" on lines 6-7; it is unclear what establishing a user defined range of user defined ranges entails. For the purpose of examination, it will be interpreted as "establishing, based on the position of the first limiter, a user defined range".

Claims 3-5, 7-9, 12-14, and 16-18 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending upon an indefinite parent claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 10-12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2021/0319333 A1), hereinafter Lee.

Regarding claim 1, Lee teaches a method for managing inference models, the method comprising (Lee discloses a method for detecting and removing bias in predictive models [see Lee, Abstract]):

obtaining training data usable to update an inference model of the inference models, the inference model being trained using a first training data set; updating operation of the inference model using the training data to obtain an updated inference model (Lee discloses that a training dataset was used to train the predictive model, and receiving the training dataset used from the user [see Lee, para. 18-19].
Lee further discloses that the model bias identification and correction process may be repeated more than once, such that the model is trained on an original first dataset, the dataset is then modified to a second dataset and the model retrained, then repeating the bias identification process again with the model trained on the second dataset [see Lee, para. 64]);

identifying a level of contribution of each feature of the updated inference model on output of the updated inference model (Lee discloses testing the predictive model to see if it is biased towards one or more feature groups by calculating bias metrics, and discloses a plurality of methods of calculating bias [see Lee, para. 54]);

making a determination regarding whether the level of contribution of each feature is within a user defined range (Lee discloses determining whether the p-value between the bias metric and baseline metric is below a threshold value that can be set by a user [see Lee, para. 58]);

in a first instance of the determination where the level of contribution of each feature is not within the user defined range (Lee discloses determining whether the p-value between the bias metric and baseline metric is below a threshold value that can be set by a user [see Lee, para. 48]. Thus, if the model has a bias that results in a difference beyond the threshold, it is not within the user defined range (i.e., below the threshold)):

remediating the updated inference model to reduce an undesired level of feature bias presented by the updated inference model (Lee discloses identifying a biased feature group and retraining the model to remove the bias [see Lee, para. 24]);

in a second instance of the determination where the level of contribution of each feature is within the user defined range (Lee discloses determining whether the p-value between the bias metric and baseline metric is below a threshold value that can be set by a user [see Lee, para. 48]. Thus, if the model has a bias that does not result in a difference beyond the threshold, it is within the user defined range (i.e., below the threshold)):

treating the updated inference model as exhibiting a desired level of feature bias (Lee discloses determining whether the p-value between the bias metric and baseline metric is below a threshold value that can be set by a user [see Lee, para. 48]. Thus, if the p-value is below the threshold, the model has a bias that is desired as it does not exceed the threshold).

However, Lee fails to teach providing the computer implemented services using the remediated inference model and providing the computer implemented services using the updated inference model. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate providing the computer implemented services using the remediated inference model and providing the computer implemented services using the updated inference model because Lee discloses a computing system to test bias in predictive models [Lee, para. 32 and FIG. 1], and that the models can perform prediction services [para. 21]. Thus, it would be obvious to use the models to perform computer implemented services.

Regarding claim 2, Lee as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:

presenting, to a user, a range bar associated with a feature of features of the updated inference model (Lee discloses obtaining input from the user such as a feature list and a slider indicating a position on the range bar to select the significance level threshold for the bias metric of features [see Lee, para. 58 and 92, and FIG. 7]);

obtaining, from the user, user input indicating a first position of a first limiter with respect to the range bar (Lee discloses obtaining input from the user indicating a position on the range bar to select the significance level threshold for the bias metric [see Lee, para. 58 and 92, and FIG. 7]. Thus, the selected significance level acts as a limiter such that only features with significant bias are identified);

establishing, based on the position of the first limiter, a user defined range of user defined ranges (interpreted as establishing, based on the position of the first limiter, a user defined range per the 35 U.S.C. 112(b) rejection above) (Lee discloses obtaining input from the user indicating a position on the range bar to select the significance level threshold for the bias metric [see Lee, para. 58 and 92, and FIG. 7]. Thus, the selected significance level acts as a limiter such that only features exceeding the range (i.e., between 0 and the threshold value) are identified as significantly biased).

Regarding claim 3, Lee as applied in claim 2 above teaches all the limitations of claim 2 and further teaches:

prior to obtaining the user input: presenting, to the user, a level of contribution of the feature on output of the inference model (Lee discloses a UI which presents feature contribution without user input [see Lee, para. 88 and FIG. 6]. Thus, it is possible to present feature contribution prior to obtaining user input).

Regarding claim 10, claim 10 contains substantially similar limitations to those found in claim 1. Therefore, it is rejected for the same reasons as claim 1 above. Additionally, Lee further teaches:

a non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations for managing data collection for managing inference models, the operations comprising (Lee discloses a computing system to perform the model bias identification and correction process disclosed [see Lee, para. 123] which can store program code and data executable by a processor on a non-transitory computer-readable medium [see Lee, para. 125]).

Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 1.
Therefore, it is rejected for the same reasons as claim 1 above. Additionally, Lee further teaches:

a data processing system, comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations for managing data collection for managed devices and unmanaged devices, the operations comprising (Lee discloses a computing system to perform the model bias identification and correction process disclosed [see Lee, para. 123] including a processor coupled to memory [see Lee, para. 124]).

Regarding claims 11 and 20, claims 11 and 20 contain substantially similar limitations to those found in claim 2 above. Consequently, claims 11 and 20 are rejected for the same reasons.

Regarding claim 12, claim 12 contains substantially similar limitations to those found in claim 3 above. Consequently, claim 12 is rejected for the same reasons.

Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2021/0319333 A1), hereinafter Lee, as applied in claim 1 above, in view of Das et al. (US 2022/0172004 A1), hereinafter Das.

Regarding claim 8, Lee as applied in claim 1 above teaches all the limitations of claim 1. However, Lee fails to teach wherein identifying the level of contribution of each feature of the updated inference model on the output of the updated inference model comprises calculating an average marginal contribution of each feature among all possible groups of the features. In the same field of endeavor, Das teaches:

wherein identifying the level of contribution of each feature of the updated inference model on the output of the updated inference model comprises calculating an average marginal contribution of each feature (Das discloses calculating feature attributions for each feature using Shapley values with different ways of aggregation, such as the average of SHAP values for all features [see Das, para. 52]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate wherein identifying the level of contribution of each feature of the updated inference model on the output of the updated inference model comprises calculating an average marginal contribution of each feature as suggested in Das into Lee, to teach wherein identifying the level of contribution of each feature of the updated inference model on the output of the updated inference model comprises calculating an average marginal contribution of each feature among all possible groups of the features, because the Shapley calculation of each feature as disclosed by Das [see Das, para. 52] could be implemented to calculate the bias of feature groups as disclosed by Lee [see Lee, para. 54], such that the combination would identify the level of contribution for each feature by calculating an average contribution of each feature among all possible groups of the features. It would have been obvious to one of ordinary skill in the art to do so because both systems monitor machine learning models and provide GUIs for presenting feature contribution to users (see Lee, Abstract and FIGS. 6-7; see Das, Abstract and FIGS. 7-8). Incorporating the teaching of Das into Lee would implement such scalable techniques in order to improve performance of feature attribution calculations over large input data sets (see Das, para. 104).

Regarding claim 9, the combination of Lee and Das as applied in claim 8 above teaches all the limitations of claim 8 and further teaches:

wherein the average marginal contribution of each feature among all possible groups of the features is obtained using the Kernel Shap method ("In various embodiments, a scalable and efficient implementation of the Kernel SHAP algorithm through additional optimizations, as discussed in detail below with regard to FIGS. 5 and 14, may be implemented." [see Das, para. 51]).

Regarding claim 17, claim 17 contains substantially similar limitations to those found in claim 8 above. Consequently, claim 17 is rejected for the same reasons.

Regarding claim 18, claim 18 contains substantially similar limitations to those found in claim 9 above. Consequently, claim 18 is rejected for the same reasons.

Claims 4-7 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2021/0319333 A1), hereinafter Lee, as applied in claim 1 above, in view of Das et al. (US 2022/0172004 A1), hereinafter Das, and further in view of Okamoto (US 2007/0120871 A1), hereinafter Okamoto.

Regarding claim 4, Lee as applied in claim 3 above teaches all the limitations of claim 3 and further teaches:

wherein the range bar and level of contribution of the feature on the output of the inference model are presented using a graphical user interface, the graphical user interface representing the range bar as a line and the level of contribution of the feature on the output of the inference model as a graphical element (Lee presents a range bar to select the significance level threshold for the bias metric [see Lee, para. 58 and 92, and FIG. 7], and presents the feature bias information corresponding to the feature and significance level [see Lee, para. 94 and FIG. 7]).

However, Lee fails to teach the graphical user interface representing the range bar as a line and the level of contribution of the feature on the output of the inference model as a graphical element positioned at a reference point on the line. In the same field of endeavor, Das teaches:

the graphical user interface representing the level of contribution of the feature on the output of the inference model as a graphical element positioned at a reference point on the line (Das presents feature importance on a bar graph with each line representing a feature's importance value [see Das, para. 115 and FIG. 7]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the graphical user interface representing the level of contribution of the feature on the output of the inference model as a graphical element positioned at a reference point on the line as suggested in Das into Lee because both systems monitor machine learning models and provide GUIs for presenting feature contribution to users (see Lee, Abstract and FIGS. 6-7; see Das, Abstract and FIGS. 7-8). Incorporating the teaching of Das into Lee would implement such scalable techniques in order to improve performance of feature attribution calculations over large input data sets (see Das, para. 104).

However, the combination of Lee and Das fails to teach the graphical user interface representing the range bar as a line and the level of contribution of the feature on the output of the inference model as a graphical element positioned at a reference point on the line. In the same field of endeavor, Okamoto teaches:

the graphical user interface representing the range bar as a line and the level of contribution of the feature on the output as a graphical element positioned at a reference point on the line (Okamoto presents the feature importance and threshold calculated in step S204 [see Okamoto, para. 60 and FIG. 7] on a GUI with the threshold placed on the line of the feature importance [see Okamoto, para. 62 and FIG. 10]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the graphical user interface representing the range bar as a line and the level of contribution of the feature on the output as a graphical element positioned at a reference point on the line as suggested in Okamoto into the combination of Lee and Das, modifying the GUI of Lee to present the significance level [see Lee, FIG. 7] integrated with the feature value line bar graph as presented in Das [see Das, para. 115], similar to how the GUI of Okamoto displays the feature importance with the threshold value [see Okamoto, FIG. 10], such that the range bar for the significance level is on the same line as the feature bias, because both methods calculate and present feature importance information (see Lee, Abstract and FIGS. 6-7; see Okamoto, para. 60-62 and FIGS. 7 and 10). Incorporating the teaching of Okamoto into the combination of Lee and Das would shorten the work time required for information presentation and search (see Okamoto, para. 87).

Regarding claim 5, the combination of Lee, Das, and Okamoto as applied in claim 4 above teaches all the limitations of claim 4 and further teaches:

wherein the reference point is positioned a distance from one end of the line proportionately based on a ratio of a value of the level of contribution of the feature on the output of the inference model to a maximum value of the level of contribution of the feature (It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the GUI of Lee to present the significance level [see Lee, FIG. 7] integrated with the feature value line bar graph as presented in Das [see Das, para. 115], similar to how the GUI of Okamoto displays the feature importance with the threshold value [see Okamoto, FIG. 10], such that the range bar for the significance level is on the same line as the feature bias).
Regarding claim 6, the combination of Lee, Das, and Okamoto as applied in claim 4 above teaches all the limitations of claim 4 and further teaches:

wherein remediating the updated inference model comprises discarding the updated inference model; and providing the computer implemented services using the remediate updated inference model using the inference model to obtain an inference used in the computer implemented services (interpreted as providing the computer implemented services using the remediated updated inference model to obtain an inference per the 35 U.S.C. 112(b) rejection above) (Lee discloses a computing system to test bias in predictive models [Lee, para. 32 and FIG. 1], and that the model can perform prediction services [para. 21]. Lee further discloses identifying a biased feature group and retraining the model to remove the bias [see Lee, para. 24]. Thus, the model is effectively discarded because its features are retrained such that the retrained model is different and the old model is no longer available. It would have been further obvious to use the retrained model to perform predictions as computer implemented services, as Lee has disclosed the models are capable of doing so).

Regarding claim 7, the combination of Lee, Das, and Okamoto as applied in claim 6 above teaches all the limitations of claim 6 and further teaches:

wherein providing the computer implemented services using the updated inference model comprises using the updated inference model to obtain an inference used in the computer implemented services (Lee discloses a computing system to test bias in predictive models [Lee, para. 32 and FIG. 1], and that the model can perform prediction services [para. 21]. Thus, it would be obvious to use the model to perform predictions as computer implemented services, as Lee has disclosed the models are capable of doing so).

Regarding claim 13, claim 13 contains substantially similar limitations to those found in claim 4 above. Consequently, claim 13 is rejected for the same reasons.

Regarding claim 14, claim 14 contains substantially similar limitations to those found in claim 5 above. Consequently, claim 14 is rejected for the same reasons.

Regarding claim 15, claim 15 contains substantially similar limitations to those found in claim 6 above. Consequently, claim 15 is rejected for the same reasons.

Regarding claim 16, claim 16 contains substantially similar limitations to those found in claim 7 above. Consequently, claim 16 is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ikeda (US 2018/0330193 A1) teaches an image processing system using machine learning models that presents slider bars for users to set parameters of the system in a GUI.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKE BREEN, whose telephone number is (571) 272-0456. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.B./
Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143
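The claim 8 dispute turns on "an average marginal contribution of each feature among all possible groups of the features," which is, in plain terms, the Shapley value; the Kernel SHAP method cited for claim 9 is an approximation of it. As background for reading the rejection, here is a brute-force sketch of the exact definition; the feature names and the additive value function are illustrative toys of my own, not anything taken from Lee or Das:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over every possible coalition of the other features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for coalition in combinations(others, k):
                s = set(coalition)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy additive model: a coalition's value is the sum of fixed per-feature
# weights, so each Shapley value recovers exactly that feature's weight.
weights = {"age": 2.0, "income": 1.0, "tenure": -0.5}
v = lambda s: sum(weights[f] for f in s)
print(shapley_values(list(weights), v))
```

This exact computation enumerates all 2^(n-1) coalitions per feature; Kernel SHAP instead fits a weighted linear model over sampled coalitions, which is why Das emphasizes scalability over large input data sets.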

Prosecution Timeline

Mar 31, 2023
Application Filed
Jan 29, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602577
NEURON CORE WITH TIME-EMBEDDED FLOATING POINT ARITHMETIC
2y 5m to grant Granted Apr 14, 2026
Patent 12555650
SYSTEM AND METHOD FOR MOLECULAR PROPERTY PREDICTION USING EDGE-CONDITIONED GRAPH ATTENTION NEURAL NETWORK
2y 5m to grant Granted Feb 17, 2026
Patent 12518136
INFERENCE EXECUTION METHOD FOR CANDIDATE NEURAL NETWORKS AND SWITCHING NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+75.0%)
Median Time to Grant: 3y 11m
PTA Risk: Low

Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
