Prosecution Insights
Last updated: April 19, 2026
Application No. 17/645,744

UNIFIED EXPLAINABLE MACHINE LEARNING FOR SEGMENTED RISK ASSESSMENT

Non-Final OA — §103, §112

Filed: Dec 22, 2021
Examiner: ROHD, BENJAMIN MATTHEW
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Equifax Inc.
OA Round: 3 (Non-Final)

Grant Probability: 0% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, with vs. without interview, based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 30 applications currently pending
Career History: 31 total applications across all art units
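
The headline numbers above are simple ratios over this examiner's record. A minimal sketch of the arithmetic (Python; the 55% Tech Center baseline is inferred from the reported -55.0% delta, and the field names are illustrative, not the analytics platform's schema):

```python
# Illustrative arithmetic only; the 55% Tech Center baseline is an
# assumption implied by the reported -55.0% vs TC avg delta.
granted, resolved, pending = 0, 1, 30

career_allow_rate = granted / resolved        # 0/1 = 0.0 -> "0% Career Allow Rate"
tc_average_allow_rate = 0.55                  # assumed from the -55.0% delta
delta_vs_tc = career_allow_rate - tc_average_allow_rate

total_applications = resolved + pending       # 1 + 30 = 31 "Total Applications"

print(f"{career_allow_rate:.1%}, {delta_vs_tc:+.1%}, {total_applications}")
# -> 0.0%, -55.0%, 31
```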

Statute-Specific Performance

§101: 23.5% (-16.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Tech Center average shown as the baseline (the chart's black line). Based on career data from 1 resolved case.

Office Action

§103, §112
DETAILED ACTION

This office action is in response to amendments filed on 03/03/2026. Claims 1, 9, and 16 have been amended. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/03/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/03/2026 has been entered.

Response to Arguments

35 U.S.C. 112 rejections: In light of applicant's after final amendments filed on 01/12/2026, the previous rejections under 35 USC § 112(b) have been withdrawn. However, a new rejection under 35 USC § 112(b) has been introduced in light of the claim amendments filed on 03/03/2026.

Prior Art Rejections: Applicant's arguments regarding the prior art rejections (pg. 11-13) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant argues that the cited references do not explicitly disclose or suggest training the ensemble model using a combination of the training data previously used to train the individual segment models, and thus the references fail to teach the amended independent claim limitations "in a second training stage that is separate from the first training stage: constructing the unified risk prediction model by stacking the trained first segment model and the trained second segment model together after a first connection and a second connection are removed," and "selecting, based on the training samples, a combination of training data to use to train the unified risk prediction model, the combination of training data comprising the first training sample and the second training sample; and training the unified risk prediction model using the combination of training data". Examiner notes that the He reference has been brought in to explicitly teach training an ensemble model using a combination of training data previously used to train the individual base models. The prior art rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1, 9, and 16 each recite the limitations "in a first training stage… training a second segment model" and "removing the second connection… based on training the second segment model in the second training stage". It is unclear whether applicant intends for the second segment model to be trained twice, once in the first training stage and again in the second training stage, or whether this is a mistake and the second segment model is only trained once in the first training stage. For examination purposes, the claim will be interpreted such that the second segment model is trained only once in the first training stage, in light of examiner's understanding of the invention based on specification paragraph 0016, which specifies that individual segment models (i.e. the second segment model) are trained in the first training stage, and the ensemble model is trained in the second training stage.

Claims 2-8, 10-15, and 17-20 are additionally rejected due to their dependence on rejected claims 1, 9, and 16 for the reasons outlined above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 7-9, 11-13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (hereinafter Wang), U.S. Patent US 11367142 B1, in view of Han et al. (hereinafter Han), "Learning both Weights and Connections for Efficient Neural Networks", Akhlaghi et al. (hereinafter Akhlaghi), "Knowledge Fusion in Feedforward Artificial Neural Networks", He et al. (hereinafter He), "Multi-Task Zipping via Layer-wise Neuron Sharing", and Haile et al. (hereinafter Haile), U.S. Patent Application Publication US 20200020038 A1.

Regarding claim 1, Wang teaches A method performed by one or more processing devices, the method comprising: (Wang teaches "the methods described herein are intended for operation as software programs running on a computer processor" (col. 30, line 67 – col. 31, line 2).)

Wang teaches determining, using a unified risk prediction model built from a plurality of segment models, a risk indicator for a target entity from predictor variables associated with the target entity (Wang teaches creating a model for insurers to "forecast customer risks," (col. 10, lines 56-57) where the customer is the target entity, "'Current data' or 'current input data' refers to data input into the trained model to generate a prediction" (i.e., predictor variables) (col. 3, lines 59-60), and the prediction is a "loss metric prediction" (i.e., risk indicator) (col. 3, line 37). Wang further teaches that "an ensemble model can generate better predictions than any single individual model," (col. 27, lines 8-9) where said ensemble model (i.e., unified model) is created by stacking segment models, as shown in figure 26.)

Wang teaches wherein the target entity belongs to one of a plurality of entity segments each associated with a segment model of the plurality of segment models (Wang teaches "In creating segments, groups of states (or other geographic regions) with similar features may be determined, and the data may be segmented accordingly" (col. 19, lines 38-47). "In embodiments where data is segmented, separate models are created for each segment" (col. 19, lines 33-35). The customer (i.e., target entity) will necessarily originate from a state, and thus will belong to the segment and associated model for that state.)

Wang teaches wherein the unified risk prediction model is configured to be generated by performing operations comprising: in a first training stage: accessing training samples for the plurality of entity segments, each training sample comprising values for training predictor variables and a corresponding training output; (Wang teaches "receiving historical data comprising historical policyholder data, historical policy data, historical claims data, historical external data, and historical loss metric data, the historical data further comprising at least one input variable; segmenting the historical data into a plurality of segments" (col. 2, lines 31-36). Wang clarifies, "As used herein, 'historical data' refers to a data set used to train or otherwise create a model, and generally includes multiple training instances, each instance comprising one or more feature inputs and a target output" (col. 3, lines 54-57). Historical data instances are training samples, feature inputs are predictor variables, and the target output is a training output.)

training a first segment model of the plurality of segment models using a first training sample of a plurality of training samples; (Wang teaches "the historical data is used to train the models…In embodiments where data is segmented, separate models are created for each segment, using the historical data from the respective segment." (col. 19, lines 24-36). For a model of the plurality of separate models (i.e. first segment model), training is performed using historical data from the corresponding segment (i.e. first training sample).)

training a second segment model of the plurality of segment models using a second training sample of the plurality of training samples; and (Wang teaches "the historical data is used to train the models…In embodiments where data is segmented, separate models are created for each segment, using the historical data from the respective segment." (col. 19, lines 24-36). For another model of the plurality of separate models (i.e. second segment model), training is performed using historical data from the corresponding segment (i.e. second training sample).)

in a second training stage that is separate from the first training stage: constructing the unified risk prediction model by stacking the trained first segment model and the trained second segment model together [after a first connection and a second connection are removed] (Wang teaches "In any of the embodiments described herein, a preferred model may be created by model stacking, as illustrated in FIG. 26. In general, model stacking creates a single model that is composed of an ensemble of other models" (col. 27, lines 4-7).)

training the unified risk prediction model [using the combination of training data]; (Wang teaches that after the base models are trained and stacked to create an ensemble model, "the parameters of the ensemble model may be separately tuned" (col. 27, lines 39-40). Training the base models is a first training stage, and stacking and tuning the ensemble model is a second training stage.)

Wang does not appear to explicitly disclose: constructing and training the model after a first connection and a second connection are removed; removing the first connection included in the first segment model based on a weight of the first connection being adjusted to zero or below a threshold value based on training the first segment model in the first training stage; removing the second connection included in the second segment model based on a weight of the second connection being adjusted to zero or below the threshold value based on training the second segment model in the second training stage.

However, Han teaches constructing and training the model after a first connection and a second connection are removed (Pg. 2, section 1: "After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the networks — learning which connections are important and removing the unimportant connections. We then retrain the sparse network so the remaining connections can compensate for the connections that have been removed.")

removing the first connection included in the first segment model based on a weight of the first connection being adjusted to zero or below a threshold value based on training the first segment model in the first training stage; (Pg. 2, section 1: "After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the networks — learning which connections are important and removing the unimportant connections." Connections with weight below a threshold value based on an initial training phase (i.e. first training stage) are removed.)

removing the second connection included in the second segment model based on a weight of the second connection being adjusted to zero or below the threshold value based on training the second segment model in the second training stage; (Pg. 2, section 1: "After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the networks — learning which connections are important and removing the unimportant connections." Connections with weight below a threshold value based on an initial training phase (i.e. first training stage) are removed – see interpretation in light of rejection under 35 USC § 112(b).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang and Han. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. One of ordinary skill would have motivation to combine Wang and Han in order to "reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy" (Han, pg. 1, Abstract).
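
Han's train-prune-retrain procedure quoted in the rejection reduces to a thresholding step over trained weights. A minimal NumPy sketch of that magnitude-based pruning (illustrative only, not the reference's implementation):

```python
import numpy as np

def prune_low_weight_connections(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Remove (zero out) connections whose trained weight magnitude is below
    the threshold, per Han's pruning step; the surviving sparse network is
    then retrained so remaining connections compensate."""
    keep = np.abs(weights) >= threshold   # "important" connections survive
    return weights * keep                 # pruned connections become 0

# Example: a trained 3x3 weight matrix from one segment model
w = np.array([[ 0.90, 0.02, -0.40],
              [ 0.01, 0.75,  0.03],
              [-0.05, 0.60,  0.00]])
print(prune_low_weight_connections(w, threshold=0.1))
```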

Wang and Han do not appear to explicitly disclose wherein constructing the unified risk prediction model comprises: merging input nodes of the first segment model and the second segment model; including nodes in corresponding hidden layers of the first segment model and the second segment model; and merging output nodes of the first segment model and the second segment model.

However, Akhlaghi teaches wherein constructing the unified risk prediction model comprises: merging input nodes of the first segment model and the second segment model; (Akhlaghi teaches "a method to fuse knowledge contained in separate trained networks" (pg. 257, abstract) by which "knowledge contained in several ANNs [artificial neural networks] is fused into a single ANN, which we call a fused artificial neural network (fANN). After fusion, fANN becomes able to perform classification for all the tasks of initial ANNs" (pg. 259, section 1). Figure 1 shows the fusion of two ANNs (a) and (b) to construct fANN (c) (pg. 260, section 2). The orange nodes represent the input layer of the first ANN (a), the yellow nodes represent the input layer of the second ANN (b), and the orange and yellow nodes of fANN (c) represent the merged input layer.)

including nodes in corresponding hidden layers of the first segment model and the second segment model; and (Figure 1 shows the fusion of two ANNs (a) and (b) to construct fANN (c) (pg. 260, section 2). The gray nodes of ANNs (a) and (b) represent the hidden layers of each model, and the gray nodes of fANN (c) represent the inclusion of those nodes in the fused model.)

merging output nodes of the first segment model and the second segment model; and (Figure 1 shows the fusion of two ANNs (a) and (b) to construct fANN (c) (pg. 260, section 2). The white nodes of ANNs (a) and (b) represent the output layers of each model, and the white nodes of fANN (c) represent the merged output layer.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang, Han, and Akhlaghi. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. Akhlaghi teaches a method for fusing multiple neural networks into a single fused neural network. One of ordinary skill would have motivation to combine Wang, Han, and Akhlaghi in order to "fuse knowledge contained in separate trained networks" (Akhlaghi, pg. 257, abstract) into a single network which can "perform classification for all the tasks of initial ANNs" (Akhlaghi, pg. 259, section 1). Akhlaghi's model fusion method enables "learning new capabilities while also maintaining performance on the existing ones" (Akhlaghi, pg. 258, section 1) to avoid problems such as catastrophic forgetting and interference.
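
The merge-inputs / include-hidden-nodes / merge-outputs construction attributed to Akhlaghi's Figure 1 can be pictured as block-wise weight placement. A hedged NumPy sketch, assuming both segment models share input and output dimensionality; the block placement is an illustrative simplification, not Akhlaghi's exact linear-combination operation:

```python
import numpy as np

def fuse_segment_models(W1_in, W1_out, W2_in, W2_out):
    """Build a fused single-hidden-layer network from two trained networks:
    merged input layer, both models' hidden nodes included side by side,
    merged output layer. Illustrative simplification only."""
    W_in = np.concatenate([W1_in, W2_in], axis=1)     # (n_in, h1 + h2)
    W_out = np.concatenate([W1_out, W2_out], axis=0)  # (h1 + h2, n_out)
    return W_in, W_out

rng = np.random.default_rng(0)
# Two segment models: 4 shared inputs, 3 and 2 hidden units, 1 shared output
W_in, W_out = fuse_segment_models(rng.normal(size=(4, 3)), rng.normal(size=(3, 1)),
                                  rng.normal(size=(4, 2)), rng.normal(size=(2, 1)))
print(W_in.shape, W_out.shape)  # (4, 5) (5, 1)
```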

Wang, Han, and Akhlaghi do not appear to explicitly disclose selecting, based on the training samples, a combination of training data to use to train the unified risk prediction model, the combination of training data comprising the first training sample and the second training sample; and training the unified risk prediction model using the combination of training data.

However, He teaches selecting, based on the training samples, a combination of training data to use to train the unified risk prediction model, the combination of training data comprising the first training sample and the second training sample; and (Pg. 2, section 3.1: "Consider two inference tasks A and B with the corresponding two well-trained models M_A and M_B, i.e., trained to a local minimum in error. Our goal is to construct a combined model M_C…" Pg. 6, algorithm 1 outlines the steps of combining the two trained neural networks, including an input of "training datum of task A and B (including labels)" and a final step of "Conduct a light retraining on task A and B to re-boost accuracy of the joint model". The individual well-trained models (i.e. first and second segment models) are trained on inference tasks A and B. Training data associated with tasks A and B (i.e. a combination of training data comprising the first training sample and the second training sample) is selected to train the joint model (i.e. unified risk prediction model).)

training the unified risk prediction model using the combination of training data; (See the portions of section 3.1 and algorithm 1 cited above. The joint model (i.e. unified risk prediction model) is trained on tasks A and B (i.e. the combination of training data).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang, Han, Akhlaghi, and He. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. Akhlaghi teaches a method for fusing multiple neural networks into a single fused neural network. He teaches a framework for fusing multiple neural networks, including retraining of the fused network on the training data used to train the individual models. One of ordinary skill would have motivation to combine Wang, Han, Akhlaghi, and He in order to "re-boost the accuracy of the combined model" (He, pg. 2, section 1).
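
The second-stage step He is cited for (selecting a combination of the individual models' training data, then lightly retraining the joint model on it) looks roughly like this sketch; the `fit` interface and sample structure are hypothetical stand-ins, not He's code:

```python
import numpy as np

def retrain_unified(unified_model, first_sample, second_sample):
    """Select a combination of the data used to train the individual segment
    models, then lightly retrain the unified model on it ("re-boost accuracy").
    `unified_model.fit` and the sample dicts are hypothetical stand-ins."""
    X = np.concatenate([first_sample["X"], second_sample["X"]])  # combined inputs
    y = np.concatenate([first_sample["y"], second_sample["y"]])  # combined labels
    unified_model.fit(X, y)
    return unified_model
```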

Wang, Han, Akhlaghi, and He do not appear to explicitly disclose transmitting, to a remote computing device, a responsive message including at least the risk indicator for use in controlling access of the target entity to one or more interactive computing environments.

However, Haile teaches transmitting, to a remote computing device, a responsive message including at least the risk indicator for use in controlling access of the target entity to one or more interactive computing environments. (Haile teaches "The server may be configured to transmit the generated risk score to an insurer or insurance broker, and receive from the insurer or insurance broker a transmitted insurance cost or insurance policy offer" (0003). Figure 1 shows a system environment for obtaining insurance online, where the user (i.e., target entity) may have access to insurance policy offers depending on their risk score.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang, Han, Akhlaghi, He, and Haile. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. Akhlaghi teaches a method for fusing multiple neural networks into a single fused neural network. He teaches a framework for fusing multiple neural networks, including retraining of the fused network on the training data used to train the individual models. Haile teaches an efficient system for determining insurance risk and offering insurance. One of ordinary skill would have motivation to combine Wang, Han, Akhlaghi, He, and Haile in order to improve the "time-consuming, complicated and tedious" (Haile, 0009) process of obtaining insurance by "offering insurance coverage and policies specific to the determined risk profiles, and enabling online purchasing of the offered insurance policies" (Haile, 0001).

Regarding Claim 3, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 1, as shown above. Wang also teaches wherein: each of the plurality of segment models comprises a neural network model comprising at least an input layer, one or more hidden layers, and an output layer; and (Wang teaches that the model creation system "can generate one or more classifications models and/or regression models, including, but not limited to, perceptrons, logistic regression, feedforward neural networks (i.e., multilayer perceptrons), recurrent neural networks, deep neural networks…" (col. 13, lines 13-17). A deep neural network, by definition, includes an input layer, multiple hidden layers, and an output layer.)

training a segment model comprises performing adjustments of weights of connections among the input layer, the one or more hidden layers, and the output layer of the neural network model to minimize a loss function calculated based on the training samples for the entity segment associated with the segment model. (Figure 9 illustrates the process of training a neural network. Wang teaches that "backpropagation through time may be used to calculate gradients for the connections between the neurons, as illustrated at step 906. In step 908, weights within the neural network can be updated based on the calculated gradients" (col. 13, lines 59-62). Calculating gradients and adjusting weights based on the gradients amounts to minimizing a loss function. "In embodiments where data is segmented, separate models are created for each segment, using the historical data from the respective segment." (col. 19, lines 33-36).)

Regarding Claim 4, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 3, as shown above. Akhlaghi also teaches wherein constructing the unified risk prediction model by stacking the trained plurality of segment models comprises: initializing the unified risk prediction model by building connections among the input layer, the hidden layers, and the output layer of the unified risk prediction model based on the weights of corresponding connections in the respective segment models. (Akhlaghi teaches "Having M already trained two-layer networks, the weights of the fused neural network W_i^f can be constructed through a surprisingly simple operation, linear combination of the corresponding weights W_i^m [of the original ANNs]" (pg. 260, section 2).)

Regarding claim 5, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 4, as shown above. Wang also teaches wherein training the unified risk prediction model using the training samples for the plurality of entity segments comprises performing adjustments of weights of connections among the input layer, the one or more hidden layers, and the output layer of the unified risk prediction model to minimize a loss function calculated based on the training samples for the plurality of entity segments. (One of ordinary skill in the art will recognize that the fused model constructed according to the method of Akhlaghi has the structure of a simple neural network, and therefore can be trained in the same manner as the base neural networks. Wang's figure 9 illustrates the process of training a neural network. Wang teaches that "backpropagation through time may be used to calculate gradients for the connections between the neurons, as illustrated at step 906. In step 908, weights within the neural network can be updated based on the calculated gradients" (col. 13, lines 59-62). Calculating gradients and adjusting weights based on the gradients amounts to minimizing a loss function.)

Regarding claim 7, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 1, as shown above. Wang also teaches further comprising generating the predictor variables associated with the target entity according to the entity segment that the target entity belongs to. (Wang teaches "for each segment, generating a set of features based on the processed historical data" (col. 2, lines 40-41). Fig. 18 shows the feature engineering process by which the input data is transformed into a reduced set of top features (i.e., predictor variables). "In embodiments where the data is segmented, the described feature engineering process is performed separately for the data in each segment." (col. 22, lines 6-8). When current data is used to make predictions, "the same features selected during steps 1850 and 1860 of the feature engineering process on the training data are used as inputs" (col. 25, lines 27-29). In other words, a different set of features is used to train each segment model, and during prediction, the same features are generated for the customer (target entity) based on the segment to which they belong.)

Regarding claim 8, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 1, as shown above. Wang also teaches wherein the plurality of entity segments are disjoint. (Wang teaches "In creating segments, groups of states (or other geographic regions) with similar features may be determined, and the data may be segmented accordingly…For example, states with a relatively large portion of historical BI pure premium less than a threshold amount, e.g., $100 (as determined by, e.g., examining a histogram of pure premiums for a specified time period), may be included in one segment, with the remaining states included in the other segment" (col. 19, lines 38-47). It will be apparent to one of ordinary skill in the art that creating a segment including certain states and creating another segment with the other remaining states will necessarily create disjoint segments.)
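
The gradient-based weight adjustment cited for claims 3 and 5 above (calculate gradients, update weights, and thereby minimize a loss over the segment's training samples) is the standard training loop. A minimal linear stand-in, illustrative rather than Wang's embodiment:

```python
import numpy as np

def train_segment_model(X, y, lr=0.01, epochs=200):
    """Adjust weights by gradient steps that minimize a mean-squared-error
    loss over the segment's training samples (a linear stand-in for the
    cited neural-network training)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad                         # weight update from gradients
    return w
```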

Claims 9, 11-13, and 15 are system claims, containing substantially the same elements as method claims 1, 3-5, and 7, respectively. Wang, Han, Akhlaghi, He, and Haile teach the elements of claims 1, 3-5, and 7, as shown above. Wang also teaches A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to: (Examiner notes this limitation is interpreted as a general-purpose computing environment. Wang teaches "the methods described herein are intended for operation as software programs running on a computer processor" (col. 30, line 67 – col. 31, line 2). "The instructions 3024 may also reside, completely or at least partially, within the main memory" (col. 30, lines 47-48).)

Claims 16 and 18-20 are product claims, containing substantially the same elements as method claims 1 and 3-5, respectively. Wang, Han, Akhlaghi, He, and Haile teach the elements of claims 1 and 3-5, as shown above. Wang also teaches A non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device to perform operations, the operations comprising: (Examiner notes this limitation is interpreted as a general-purpose computing environment. Wang teaches "a machine-readable medium 3022 on which is stored one or more sets of instructions 3024, such as, but not limited to, software embodying any one or more of the methodologies or functions described herein" (col. 30, lines 42-46), and that "the methods described herein are intended for operation as software programs running on a computer processor" (col. 30, line 67 – col. 31, line 2).)

Claims 2, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Han, Akhlaghi, He, and Haile, and further in view of Merrill et al. (hereinafter Merrill), U.S. Patent Application Publication US 20190378210 A1.

Regarding Claim 2, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 1, as shown above. Wang, Han, Akhlaghi, He, and Haile do not appear to explicitly disclose further comprising generating, for the target entity, explanatory data indicating relationships between changes in the risk indicator and changes in the predictor variables associated with the target entity and including the explanatory data in the responsive message.

However, Merrill teaches further comprising generating, for the target entity, explanatory data indicating relationships between changes in the risk indicator and changes in the predictor variables associated with the target entity and (Merrill teaches "model evaluation and explanation system 120 uses score decompositions to determine important features a model (or ensemble) that impact scores generated by the model (or ensemble). In some embodiments, the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set" (0032). Features are equivalent to predictor variables, the score generated by the model corresponds to a risk indicator, and the determined important features are explanatory data.)

Merrill teaches including the explanatory data in the responsive message. (Merrill teaches "the method 200 includes providing the identified features to an operator device (e.g., 171) via a network" (0190).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang, Han, Akhlaghi, He, Haile, and Merrill. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. Akhlaghi teaches a method for fusing multiple neural networks into a single fused neural network. He teaches a framework for fusing multiple neural networks, including retraining of the fused network on the training data used to train the individual models. Haile teaches an efficient system for determining insurance risk and offering insurance. Merrill teaches methods for providing explainability information for financial risk models. One of ordinary skill would have motivation to combine Wang, Han, Akhlaghi, He, Haile, and Merrill because in financial risk modeling, "there is a need in the machine learning field to provide model explainability information for a machine learning model in order to comply with regulations such as the Equal Credit Opportunity Act, the Fair Credit Reporting Act, and the OCC and Federal Reserve Guidance on Model Risk Management, which require detailed explanations of the model's overall decision making, explanations of each model-based decision, and explanations of differences in model decisions between two or more segments of a population" (Merrill, 0013).

Claim 10 is a system claim, containing substantially the same elements as method claim 2. Wang, Han, Akhlaghi, He, Haile, and Merrill teach the elements of claim 2, as shown above. Claim 17 is a product claim, containing substantially the same elements as method claim 2. Wang, Han, Akhlaghi, He, Haile, and Merrill teach the elements of claim 2, as shown above.
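
Claim 2's "explanatory data indicating relationships between changes in the risk indicator and changes in the predictor variables" can be pictured with a generic sensitivity probe: perturb each predictor and record the score change. This sketch is a stand-in, not Merrill's score-decomposition method:

```python
import numpy as np

def explain_score(score_fn, x, delta=1e-3):
    """Perturb each predictor variable and record the resulting change in
    the risk score: generic sensitivity analysis, not Merrill's method."""
    base = score_fn(x)
    impacts = {}
    for i in range(len(x)):
        x_p = np.array(x, dtype=float)
        x_p[i] += delta
        impacts[i] = (score_fn(x_p) - base) / delta  # per-variable effect
    return impacts  # explanatory data for the responsive message
```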

Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Han, Akhlaghi, He, and Haile, and further in view of Weber et al. (hereinafter Weber), U.S. Patent Application Publication US 20210049469 A1.

Regarding Claim 6, Wang, Han, Akhlaghi, He, and Haile teach The method of claim 4, as shown above. Wang, Han, Akhlaghi, He, and Haile do not appear to explicitly disclose wherein building connections among the input layer, the hidden layers, and the output layer of the unified risk prediction model based on the weights of corresponding connections in the respective segment models comprises removing a connection based on the weight of the connection is zero.

However, Weber teaches wherein building connections among the input layer, the hidden layers, and the output layer of the unified risk prediction model based on the weights of corresponding connections in the respective segment models comprises removing a connection based on the weight of the connection is zero. (Weber teaches sparsification of a dense neural network by "removing all edges that will not or will not significantly contribute to the output or final result, either because the weights are zero, and/or…" (0054).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wang, Han, Akhlaghi, He, Haile, and Weber. Wang teaches a method for generating an ensemble model of insurance risk. Han teaches removing unimportant, low-weight connections from neural networks. Akhlaghi teaches a method for fusing multiple neural networks into a single fused neural network. He teaches a framework for fusing multiple neural networks, including retraining of the fused network on the training data used to train the individual models. Haile teaches an efficient system for determining insurance risk and offering insurance. Weber teaches methods for memory remapping and sparsification to improve neural network efficiency. One of ordinary skill would have motivation to combine Wang, Han, Akhlaghi, He, Haile, and Weber because Weber's sparsification method provides "improvements includ[ing] less memory consumption by removing unnecessary connections or computation streams from the network since the zeros are then not stored either in the network weights nor the input and output data" (Weber, 0080).

Claim 14 is a system claim, containing substantially the same elements as method claim 6. Wang, Han, Akhlaghi, He, Haile, and Weber teach the elements of claim 6, as shown above.

Conclusion

Claims 1-20 are rejected.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD, whose telephone number is (571) 272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.M.R./
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Dec 22, 2021: Application Filed
Jun 25, 2025: Non-Final Rejection — §103, §112
Aug 26, 2025: Examiner Interview Summary
Aug 26, 2025: Applicant Interview (Telephonic)
Sep 19, 2025: Response Filed
Nov 10, 2025: Final Rejection — §103, §112
Jan 12, 2026: Response after Final Action
Mar 03, 2026: Request for Continued Examination
Mar 12, 2026: Response after Non-Final Action
Mar 16, 2026: Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
