Prosecution Insights
Last updated: April 19, 2026
Application No. 17/132,247

COMPUTERIZED SYSTEM AND METHOD FOR IDENTIFYING AND APPLYING CLASS SPECIFIC FEATURES OF A MACHINE LEARNING MODEL IN A COMMUNICATION NETWORK

Final Rejection: §101, §103
Filed: Dec 23, 2020
Examiner: DIEP, DUY T
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Verizon Patent and Licensing Inc.
OA Round: 6 (Final)
Grant Probability: 25% (At Risk)
OA Rounds: 7-8
To Grant: 4y 2m
With Interview: 30%

Examiner Intelligence

Grants only 25% of cases.
Career Allow Rate: 25% (5 granted / 20 resolved; -30.0% vs TC avg)
Interview Lift: +5.5% (moderate, ~+6%, for resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline)
Career History: 59 total applications across all art units (39 currently pending)

Statute-Specific Performance

§101: 34.1% (-5.9% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Based on career data from 20 resolved cases; TC average is an estimate.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The amendments and arguments filed 12/08/2025 have been entered. Claims 1-5, 8-13, 15-18, and 20 remain pending in the application.

Applicant's arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 101, filed 09/10/2025, have been fully considered but are not persuasive. Therefore, the rejection of the claims as set forth in the previous Office action will be maintained.

Applicant argues that the amended claims are not directed to an abstract idea because the claims recite specific technical operations that improve the functioning of machine learning systems. In particular, Applicant asserts that the claims perform operations including modifying feature values within a training dataset to obtain unbiased values, executing a machine learning model using the modified dataset, computing an impact value representing how the modified training data affects model output quality, determining class-specific feature importance based on original and modified quality measures, generating a sorted list of features according to the computed impact values, and retraining the machine learning model based on the sorted feature list so that the trained model prioritizes features during runtime deployment. Applicant contends that these operations address deficiencies in conventional machine learning systems that indiscriminately analyze all available features regardless of runtime relevance, thereby improving model accuracy and computational efficiency.
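For readers unfamiliar with the claimed technique, the sequence of operations Applicant describes above can be sketched as a short program. This is a hypothetical illustration only, not code from the application: every name is invented, the "unbiased value" is assumed to be the column mean, and the "quality measure" is assumed to be simple classification accuracy.

```python
# Illustrative sketch (all names hypothetical): for each feature, replace its
# values with an assumed "unbiased" value (the column mean), re-score the model
# on the modified data set, and rank features by the resulting drop in quality.
from statistics import mean

def quality(model, rows, labels):
    """Fraction of rows the model classifies correctly (the 'quality measure')."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def neutralize(rows, j):
    """Replace feature j in every row with the column mean (an 'unbiased' value)."""
    col_mean = mean(r[j] for r in rows)
    return [r[:j] + [col_mean] + r[j + 1:] for r in rows]

def rank_features(model, rows, labels):
    """Compute an impact value per feature and return a sorted feature list."""
    original = quality(model, rows, labels)          # original quality measure
    impacts = []
    for j in range(len(rows[0])):
        modified = neutralize(rows, j)               # modified training data set
        impacts.append((original - quality(model, modified, labels), j))
    return sorted(impacts, reverse=True)             # sorted list of features

# Toy model: classifies by the sign of feature 0; feature 1 is ignored.
model = lambda r: 1 if r[0] > 0 else 0
rows = [[2.0, 5.0], [-1.0, 5.0], [3.0, 5.0], [-2.0, 5.0]]
labels = [1, 0, 1, 0]
print(rank_features(model, rows, labels))  # feature 0 ranks above feature 1
```

Under these assumptions, neutralizing the decisive feature degrades accuracy while neutralizing the ignored feature does not, so the impact values separate the two; the claims would then retrain the model to prioritize the top-ranked features.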
Applicant further argues that the claims integrate any alleged abstract idea into a practical application because the trained machine learning model is applied in a runtime environment associated with wireless network operations and is used to determine controls for user equipment (UE) that modify real-world or digital functionality of the UE and enable automatic operation of device controls without user input. Applicant additionally relies on cases such as Enfish, McRO, and AI Visualize to assert that improvements to software functionality constitute patent-eligible subject matter, and cites Ex parte Desjardins for the proposition that improvements to how a machine learning model operates may be patent eligible. Finally, Applicant contends that the claimed combination of operations – including feature debiasing, impact value computation, feature ranking, retraining using prioritized features, and automated UE control – amounts to significantly more than an abstract idea and represents a non-conventional arrangement of machine learning operations.

Applicant's arguments have been fully considered but are not persuasive. The amended claims remain directed to the abstract idea previously identified by the Examiner, namely evaluating and processing data to determine feature importance and applying the resulting information to train and use a machine learning model.
The additional limitations cited by Applicant – including modifying feature values within a training dataset to obtain unbiased values, executing a machine learning model using the modified dataset, computing an impact value representing how the modified training data affects model output quality, determining class-specific feature importance based on original and modified quality measures, generating a sorted list of features, and retraining the model based on the sorted features – constitute data analysis, mathematical evaluation, and information-processing steps with regard to training data and feature data that can be performed mentally, and therefore fall within the categories of mental processes identified in the Patent Subject Matter Eligibility Guidance. The amendment describing computing an importance value for the features and prioritizing the features according to their importance does not alter the nature of these operations, which remain directed to analyzing data and determining feature importance. The amendments simply recite the computation of the impact value to indicate the importance of features based on an evaluation of the quality of the results of the machine learning model applying those features, which constitutes a mental process as well as a mathematical concept. A person can mentally compute a value or evaluate the machine learning result to determine which features produce the most optimized result, and thereby prioritize those settings in a further runtime environment.

Applicant's assertion that the claims improve machine learning technology is also not persuasive. The claims do not recite a specific improvement to the structure or operation of a computer or to the architecture or algorithm of a machine learning model itself, but instead recite the use of a machine learning model as a tool to analyze training data, compute values, and retrain the model.
Such operations represent the application of a black-box AI tool for data processing using generic computing components rather than a technological improvement to computer functionality. The claimed computing of scores for feature importance and prioritizing of those features are mental processes, as analyzed above, and are not integrated into a practical application in view of the use of the black-box machine learning model as a tool. Accordingly, the claims do not fall within the type of software improvements recognized as patent eligible in cases such as Enfish or McRO.

Applicant's argument that the claims are integrated into a practical application through the determination and communication of controls to user equipment is likewise unpersuasive. The recited steps of applying the trained machine learning model, determining UE controls, and communicating instructions over a network are recited at a high level of generality and merely represent the use of generic computing devices and routine network communication to implement the abstract idea. These limitations therefore constitute insignificant extra-solution activity and do not integrate the abstract idea into a practical application. Furthermore, Applicant's reliance on Ex parte Desjardins is not persuasive because the present claims do not recite a specific modification to model parameters or a mechanism that changes how the machine learning model itself learns or operates, but instead recite evaluating feature importance and retraining the model using the results of the data analysis.

When considered individually or in an ordered combination, the additional elements recited in the claims – such as executing the machine learning model, training the model, applying the model to a runtime environment, and communicating instructions to user equipment – represent generic computer implementation of the abstract idea and routine computer functions that do not amount to significantly more than the judicial exception.
Accordingly, the claims do not integrate the abstract idea into a practical application and do not recite significantly more than the abstract idea. Therefore, the rejection of the claims under 35 U.S.C. 101 is maintained.

Applicant's arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 103, filed 09/10/2025, have been fully considered but are not persuasive. Therefore, the rejection of the claims as set forth in the previous Office action will be maintained.

Applicant argues that the combination of references fails to teach or suggest the amended limitation of "computing an impact value representing an impact that the modified training data set has on an output of the executed machine learning model for the runtime environment, the computation of the impact value comprising determining a feature importance of the identified feature based on an original quality measure of the machine learning model and a class-specific quality measure of the machine learning model after the modification of the identified feature." Applicant asserts that Pai merely discloses a post-hoc interpretability framework in which explanation scores are generated by comparing feature values of a test point with prototype feature values selected from the original data set. According to Applicant, these explanation scores indicate the relative importance or influence of features with respect to classification decisions but do not involve modifying the training data or evaluating the effect of such modification. Applicant further argues that Pai's explanation scores are inherently non-causal with respect to training data changes, because the scores measure similarity or contrast between data points and prototype vectors rather than the causal effect of modifying training data on model outputs.
Applicant additionally contends that Pai does not modify the training dataset, does not re-execute the model using modified training data, and does not compare model-quality measures before and after such modification, and therefore cannot reasonably be construed as teaching the claimed computation of an impact value based on pre- and post-modification quality measures. Applicant also asserts that Pai fails to teach or suggest determining feature importance based on an original quality measure of the machine learning model and a class-specific quality measure of the machine learning model after the modification of the identified feature, as recited in the amended claims. Finally, Applicant argues that Martinson and Tsai do not remedy these alleged deficiencies because those references were not cited for computing such an impact value and similarly do not disclose determining feature importance based on pre- and post-modification quality measures.

Applicant's arguments have been fully considered but are not persuasive. Applicant argues that Pai merely provides a post-hoc interpretability framework and does not teach computing an impact value based on modification of training data or comparison of model quality measures before and after modification. However, Pai explicitly discloses generating explanation scores that reflect the impact of features on the decisions made by a machine learning model, wherein such scores indicate the importance of feature values to classifications made by the model (Pai ¶0024, ¶0026). Pai further teaches that the trained model may be analyzed during training, after training, and/or after deployment to evaluate the behavior of the machine learning model using components such as a score generator that produces explanation scores providing insight associated with feature importance or relevance (Pai ¶0045).
Accordingly, Pai discloses determining feature importance reflecting the impact that features have on outputs of the machine learning model. As explained in the rejection, Tsai teaches modifying feature values of a dataset and using operators such as sorting to generate or modify sets of feature values that are input into the machine learning model during training (Tsai ¶0020, ¶0033, ¶0097, ¶0100), while Martinson teaches incrementally updating or retraining a machine learning model using updated data (Martinson ¶0093). Therefore, Tsai in view of Martinson teaches retraining or updating a machine learning model based on modified feature values. A person of ordinary skill in the art would have recognized the benefit of applying Pai's explanation-score-based feature importance evaluation to the machine learning model trained or retrained using the modified feature values taught by Tsai and Martinson in order to evaluate the impact of the modified features on the behavior and output of the machine learning model. Pai also discloses the benefits of generating explanation scores to reflect feature importance at ¶0045: "a user can view the one or more explanation scores and/or other information or metrics that give insight associated with feature importance or relevance. Thus, the user may determine whether the training model 110 is behaving properly and is suitable to be deployed, whether to further train the model to achieve more desirable behavior, or whether to redesign the trained model." Under the broadest reasonable interpretation, Pai's explanation score corresponds to the claimed impact value representing the impact that modified training data has on the output of the machine learning model, since Pai expressly discloses that explanation scores provide insight into feature importance and the influence of features on model decisions.
Accordingly, when Pai's feature-importance evaluation is applied to the model trained with modified features as taught by Tsai and Martinson, the resulting analysis necessarily reflects the impact that the modified/updated training data has on the machine learning model output. Therefore, the combination of Martinson, Tsai, and Pai teaches or at least suggests the amended limitations of the claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 8-13, 15-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: Claim 1 recites a method, one of the four statutory categories of patentable subject matter.

Step 2A, Prong I: Claim 1 further recites the limitations of:

"identifying, ..., a runtime environment corresponding to wireless network operations associated with user equipment (UE)"

The process of identifying a runtime environment corresponding to wireless network operations associated with user equipment (UE) is a mental process. A user can mentally identify a runtime environment within an iPhone, namely the iOS operating system with its various wireless network operations associated with the iPhone, just as a user may identify a printer connected to a home Wi-Fi network.

"identifying, ..., a machine learning model and a training data set, the machine learning model corresponding to the runtime environment, the training data set corresponding to a type of the machine learning model, the training data set comprising a set of features"

The identifying of a machine learning model and a training data set is considered to be a mental process.
A user can mentally identify a machine learning model as well as the runtime environment to which the model corresponds. A user can then manually identify a training data set comprising features, such as mentally determining that the training set will comprise cat and dog images after identifying that the machine learning model is a classification model for an operating system.

"for each of the features included in the set of features: modifying the training data set by identifying a feature within the set of features and modifying an initial value of the identified feature, the modification resulting in a new value of the identified feature that corresponds to an unbiased value"

The modifying of a training data set and identifying of a feature is considered to be a mental process. A user can perform the steps of identifying and modifying via pen and paper. A user can first manually identify a feature and a biased value in a machine learning model, then modify the feature by increasing or replacing its initial value to obtain a modified feature with another value that is unbiased.

"computing an impact value representing an impact that the modified training data set has on an output of the executed machine learning model for the runtime environment, the computation of the impact value comprising determining a feature importance of the identified feature based on an original quality measure of the machine learning model and a class-specific quality measure of the machine learning model after the modification of the identified feature"

The computing of a value is considered to be a mathematical concept, wherein the computing of the value requires multiple mathematical calculations. The computing can also be considered a mental process, as it can be performed by a user via pen and paper. A user can mentally perform the calculation to acquire the impact value that represents how the modified training data impacts the output of the model.
Furthermore, a person can mentally evaluate the result of a trained machine learning model and perform a mental comparison to determine which feature value provides a significant change to the model, thereby determining the quality. Such evaluation and determination constitutes a mental process that can be performed within the human mind.

"determining, ..., a sorted list of features based on the computed impact value for each feature in the set of features, the sorted list comprising information indicating a determined class and value of each feature in the set of features;"

The determining of a sorted list based on a value is considered to be a mental process. A user can mentally check the impact values to determine a sorted list of features based on impact value.

"determining, ..., controls of the UE that are specific for the runtime environment based on the application of the trained machine learning model, the controls of the UE corresponding to real-world and digital functionality of the UE that modify existing functionality of the UE"

The determining of controls of user equipment can be a mental process, as a user can determine controls of equipment that are specific to the runtime environment based on the application of the trained machine learning model. For instance, a user can mentally determine their control of a vehicle, such as an intention to turn left, based on a result of a trained machine learning model. Such an intention to turn a vehicle left involves real-world interaction and modifies existing functionality of the vehicle.

Step 2A, Prong II: Claim 1 recites the following additional elements:

"by a computing device", "by the computing device"

These additional elements are a high-level recitation of generic computer components used as a tool, and do not provide integration into a practical application; thus they do not provide significantly more than the abstract idea.
"executing the machine learning model based on the modified training data set"

This additional element is considered to be a mere instruction to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), because the claim only recites the idea of a solution, namely executing the machine learning model based on a data set, without reciting how to accomplish the solution to the problem, and does not provide integration into a practical application.

"training, ..., the machine learning model based on the sorted list of features such that the trained machine learning model priorities features according to the sorted list of features when the trained machine learning is applied to the runtime environment"

This additional element is considered to be a mere instruction to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), because the claim merely recites the idea of a solution, in which the machine learning model is trained on a sorted list such that the sorted list of features is prioritized, without reciting how the model performs such prioritization or any specific technical mechanism for implementing the prioritization, such that the element could accomplish the solution to the problem or provide integration of the judicial exception into a practical application.

"applying, ..., the trained machine learning model to the runtime environment."

This additional element is considered to be a mere instruction to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), because the claim only recites the idea of applying a trained machine learning model to a runtime environment without reciting how to accomplish the solution to the problem, and does not provide integration into a practical application.
"communicating, over a network, instructions to the UE, the instructions comprising information related to the determined controls, the communication causing, without user input, the UE to automatically, over the network, perform operation of the controls within the runtime environment, such that the UE is rendered capable to operate within the runtime environment"

This additional element recites insignificant extra-solution activity in the form of the well-known technique of communicating data over a network, as identified in MPEP 2106.05(g), and does not provide integration into a practical application or significantly more than the abstract idea.

Step 2B: When considered individually or in combination, the additional limitations and elements of claim 1 do not amount to significantly more than the judicial exception, for the same reasons discussed above as to why the additional limitations do not integrate the abstract idea into a practical application. The additional elements outlined in Step 2A, performing their functions as designed, simply accomplish execution of the abstract ideas. The additional elements "executing the machine learning model based on the modified training data set", "wherein the machine learning model is trained on the sorted list of features", and "applying, ..., the trained machine learning model to a runtime environment" recite mere instructions to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), without reciting how to accomplish the solution to the problem. The additional elements "by a computing device" and "by the computing device" are a high-level recitation of generic computer components used as a tool, and do not provide integration into a practical application; thus they do not provide significantly more than the abstract idea.
The additional element "communicating, over a network, instructions to the UE, the instructions comprising information related to the determined controls, the communication causing, without user input, the UE to automatically, over the network, perform operation of the controls within the runtime environment, such that the UE is rendered capable to operate within the runtime environment" further represents well-understood, routine, conventional activity as identified in MPEP 2106.05(d)(II)(i), which indicates that transmitting data over a network is a well-understood, routine, conventional activity when it is claimed in a generic manner (as it is here). Accordingly, a conclusion that the communicating step is well-understood, routine, conventional activity is supported under Berkheimer option II.

In conclusion, the elements considered to be mental processes or mathematical concepts, the elements reciting generic computer components, and the elements reciting mere instructions to apply an exception carry over from the analysis above and do not provide significantly more than the abstract idea. Looking at the limitations in combination and the claims as a whole does not change this conclusion, and the claim is ineligible. Therefore, the additional limitations of claim 1 do not amount to significantly more than the judicial exception. Thus, claim 1 recites abstract ideas with additional elements recited at a high level of generality, resulting in a claim that does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 1 is not patent eligible.

Claim 2 depends on claim 1; thus the rejection of claim 1 is incorporated. Claim 2 recites the following limitations:

"identifying the runtime environment;"

The identifying of the runtime environment is considered to be a mental process. A user can mentally identify a runtime environment that fits their purpose.
"selecting the trained machine learning model based on the runtime environment;"

The selecting of the model is considered to be a mental process. A user can mentally select any trained machine learning model to their liking.

"collecting sensor data from another device operating within said runtime environment;"

This additional element recites insignificant extra-solution activity in the form of mere data gathering, as identified in MPEP 2106.05(g), and does not provide integration into a practical application. The additional element further represents well-understood, routine, conventional activity. The court decision cited in MPEP 2106.05(d)(II) (receiving or transmitting data over a network, e.g., using the Internet to gather data; Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362) indicates that collecting data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Accordingly, a conclusion that the collecting of sensor data is well-understood, routine, conventional activity is supported under Berkheimer option II.

"executing the trained machine learning model with the collected sensor data as input;"

This additional element is considered to be a mere instruction to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), because the claim only recites the idea of executing a trained machine learning model with the collected data without reciting how to accomplish the solution to the problem.

"outputting results of the execution of the trained machine learning model."

This additional element recites insignificant extra-solution activity in the form of mere data outputting, as identified in MPEP 2106.05(g), and does not provide integration into a practical application. The additional element further represents well-understood, routine, conventional activity. The court decision cited in MPEP 2106.05(d)(II) (presenting offers and gathering statistics; OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93) indicates that presenting information is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. The "outputting" step is considered a function of presenting. Accordingly, a conclusion that the outputting step is well-understood, routine, conventional activity is supported under Berkheimer option II.

Claim 2 recites abstract ideas with additional elements recited at a high level of generality, resulting in a claim that does not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 2 is not patent eligible.

Claim 3 depends on claim 2; thus the rejection of claim 2 is incorporated. Claim 3 recites the limitation "wherein the output results are fed back to the computing device for further training of the machine learning model". This additional element is considered to be a mere instruction to apply an exception with the words "apply it" (or an equivalent) per MPEP 2106.05(f), because the claim only recites the idea of the output results being fed back to the device for training without reciting how to accomplish the solution to the problem. Therefore, the additional limitations of claim 3 do not amount to significantly more than the judicial exception, and claim 3 is not patent eligible.

Claim 4 depends on claim 1; thus the rejection of claim 1 is incorporated. Claim 4 recites the limitation "analyzing the sorted list of features by comparing values of each feature output by the machine learning model", which further specifies the mental process of determining a sorted list as discussed in claim 1. A user can mentally compare the values of the features output to analyze the list, and the process can be done via pen and paper.
Claim 4 also recites the limitation "determining, based on said analysis, whether any feature has been incorrectly assigned a class or value," which further specifies the mental process of determining a sorted list as discussed in claim 1. A user can mentally check and determine which feature has been incorrectly assigned a value or class. Therefore, the additional limitations of claim 4 do not amount to significantly more than the judicial exception, and claim 4 is not patent eligible.

Claim 5 depends on claim 4; thus the rejection of claim 4 is incorporated. Claim 5 recites the limitation "when it is determined that a feature within the set of features has had an incorrectly assigned class or value, repeating the modifying, executing and computing steps in order to determine another sorted list", which further specifies the mental process of claim 4. A user can mentally perform all of the repeated steps of modifying, executing, and computing, and these steps can be performed with pen and paper. Therefore, the additional limitations of claim 5 do not amount to significantly more than the judicial exception, and claim 5 is not patent eligible.

Claim 8 depends on claim 1; thus the rejection of claim 1 is incorporated. Claim 8 recites the limitation "modifying the training data further comprises removing the identified feature from an input of the machine learning model during said execution", which further specifies the mental process of modifying training data as discussed in claim 1. A user can remove a feature from an input to fit their purpose. Therefore, the additional limitations of claim 8 do not amount to significantly more than the judicial exception, and claim 8 is not patent eligible.

Claim 9 depends on claim 1; thus the rejection of claim 1 is incorporated.
Claim 9 recites the limitation "modifying the training data set comprises shuffling values of at least a portion of the features in the set of features", which further specifies the mental process of modifying data as discussed in claim 1. A user can shuffle values of features to fit their purpose, and the act of shuffling can also be performed with pen and paper. Therefore, the additional limitations of claim 9 do not amount to significantly more than the judicial exception, and claim 9 is not patent eligible.

Claim 10 depends on claim 9; thus the rejection of claim 9 is incorporated. Claim 10 recites the limitation "said shuffling is performed randomly", which further specifies the mental process discussed in claim 9. The shuffling can be performed randomly by a user. Therefore, the additional limitations of claim 10 do not amount to significantly more than the judicial exception, and claim 10 is not patent eligible.

Claim 11 recites a device, one of the four statutory categories of patentable subject matter. Claim 11 is similarly rejected based on the same rationale as claim 1, because the claim recites similar limitations and processing steps.

Claim 12 depends on claim 11; thus the rejection of claim 11 is incorporated. Claim 12 is similarly rejected based on the same rationale as claim 2, because the claim recites similar limitations and processing steps.

Claim 13 depends on claim 11; thus the rejection of claim 11 is incorporated. Claim 13 is similarly rejected based on the same rationale as claims 4 and 5, because the claim recites similar limitations and processing steps.

Claim 15 depends on claim 11; thus the rejection of claim 11 is incorporated. Claim 15 is similarly rejected based on the same rationale as claims 9 and 10, because the claim recites similar limitations and processing steps.

Claim 16 recites a device, one of the four statutory categories of patentable subject matter.
Claim 16 is similarly rejected based on the same rationale as claim 1, because the claim recites similar limitations and processing steps. Claim 17 depends on claim 16; thus the rejection of claim 16 is incorporated. Claim 17 is similarly rejected based on the same rationale as claim 2, because the claim recites similar limitations and processing steps. Claim 18 depends on claim 16; thus the rejection of claim 16 is incorporated. Claim 18 is similarly rejected based on the same rationale as claims 4 and 5, because the claim recites similar limitations and processing steps. Claim 20 depends on claim 16; thus the rejection of claim 16 is incorporated. Claim 20 is similarly rejected based on the same rationale as claims 9 and 10, because the claim recites similar limitations and processing steps. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 4-5, 8, 11-13, 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Martinson et al. (US 20180053102 A1) in view of Tsai et al. (US 20190325352 A1), further in view of Pai et al. (US 20200279140 A1).
Regarding claim 1, Martinson teaches the limitation “identifying, by a computing device, a runtime environment corresponding to wireless network operations associated with user equipment (UE)” (paragraph 36 “The network 111 may also be coupled to or include portions of a telecommunications network for sending data in a variety of different communication protocols ... in some implementations, the network 111 is a wireless network using a connection”, paragraph 44 “The sensor(s) 103a and/or 103b (also referred to herein as 103) may include any type of sensors suitable for the moving platform(s) 101 and/or the client device(s) 117. The sensor(s) 103 may be configured to collect any type of sensor data suitable to determine characteristics of a moving platform 101, its internal and external environments”, paragraph 41 “The moving platform(s) 101 include computing devices having memory(ies), processor(s), and communication unit(s) ... Non-limiting examples of the moving platform(s) 101 include a vehicle, an automobile, a bus, a boat, a plane, a bionic implant, or any other moving platforms with computer electronics (e.g., a processor, a memory or any combination of non-transitory computer electronics).”, and paragraph 48 “The processor(s) 213 (e.g., see FIG. 2) of the moving platform(s) 101, modeling server 121, and/or the client device(s) 117 may receive and process the sensor data from the sensors 103.” Martinson discloses a method and system for individualized adaptation of driver action prediction models. Within the disclosure, Martinson discloses that the method comprises sensors that collect data regarding characteristics of a moving platform, such as its internal and external environment (runtime environment), wherein the moving platform can be a vehicle (user equipment) with an attached computing device to receive the sensor data for processing, such that the communication between the sensor and the moving platform can be a wireless network connection.)
Martinson teaches the limitation “identifying, by the computing device, a machine learning model and a training data set, the machine learning model corresponding to the runtime environment, the training data set corresponding to a type of the machine learning model, the training data set comprising a set of features” (paragraph 29 “a customizable advance driver assistance engine 105 that may be configured to use and adapt a neural network based driver action prediction model. For example, the technology may generate training labels (also called targets) based on extracted feature(s) and detected driver action(s) and use the labels to incrementally update and improve the performance of a pre-trained driver action prediction model”, paragraph 30 “As a further example, a driver action prediction model may include a computer learning algorithm, such as a neural network. For instance, some examples of neural network based driver action prediction models include one or more multi-layer neural networks, deep convolutional neural networks, and recurrent neural networks, although other machine learning models are also contemplated in this application and encompassed hereby”, paragraph 74 “For example, the prediction engine 231 may generate labels (e.g., using a computer learning model, a hand labeling coupled to a classifier, etc.) describing user actions based on the sensor data”, and paragraph 79 “the model adaptation engine 233 may run a training algorithm to generate training examples (e.g., by combining features extracted for prediction and a recognized action label)” Martinson discloses that the driver action prediction model may include a computer learning algorithm such as a neural network, and that sensor data with labels, as well as features extracted for prediction, may be collected as training data.
The driver action prediction model may be trained based on collected sensor data of the environment (machine learning model corresponding to the runtime environment), and the prediction engine may generate labels for sensor data using a classifier model (training data set corresponding to a type of the machine learning model).) Martinson teaches the limitation “determining, by the computing device, controls of the UE that are specific for the runtime environment based on the application of the trained machine learning model, the controls of the UE corresponding to real-world and digital functionality of the UE that modify existing functionality of the UE” (paragraph 83 “As a further example, the diagram 300 illustrates that the advance driver assistance engine 105 may receive sensor data 301 from sensors 103 (not shown) associated with a moving platform 101, such as the vehicle 303. The sensor data 301 may include environment sensing data”, paragraph 84 “Using the sensor data 301, the advance driver assistance engine 105 then predict driver actions and/or adapt a driver action prediction model ... In some implementations, the predicted future driver action may be returned to other systems of the vehicle 303 to provide actions (e.g., automatic steering, braking, signaling, etc.) or warnings (e.g., alarms for the driver)” Martinson discloses that the advance driver assistance engine, which employs the driver action prediction model, is configured to receive sensor data of the environment (runtime environment), such that the driver action prediction model can use the sensor data to provide a predicted future driver action, wherein the actions may be an automatic function of the moving platform (user equipment) such as automatic steering, braking, etc. These controls of the moving platform correspond to real-world functionality and modify existing functionality of the moving platform (e.g., automatic braking during driving of a vehicle).
Martinson teaches the limitation “communicating, over a network, instructions to the UE, the instructions comprising information related to the determined controls, the communication causing, without user input, the UE to automatically, over the network, perform operation of the controls within the runtime environment, such that the UE is rendered capable to operate within the runtime environment” (paragraph 84 “Using the sensor data 301, the advance driver assistance engine 105 then predict driver actions and/or adapt a driver action prediction model ... In some implementations, the predicted future driver action may be returned to other systems of the vehicle 303 to provide actions (e.g., automatic steering, braking, signaling, etc.) or warnings (e.g., alarms for the driver)” Martinson discloses that the moving platform comprises a computing system, as recited above, with wireless network communication that is capable of providing control actions to the moving platform (user equipment) based on the sensor data of the environment (runtime environment), wherein the actions may be automatic actions such as steering or braking. The moving platform and the model within the system rely on the sensor data to perform the action, thus suggesting that the moving platform is capable of operating within the environment as recorded by the sensor.) Martinson teaches the limitation “applying, by the computing device, the trained machine learning model to a runtime environment” (paragraph 87 “the driver action prediction model may include a stock machine learning-based driver action prediction model. Stock means the model was pre-trained using a collective of sensor data aggregated from a multiplicity of moving platforms 101 to identify general driver behavior.”, and paragraph 91 “the advance driver assistance engine 105 may update (also called train) the driver action prediction network model with local data (e.g., driver, vehicle, or environment specific data), as described elsewhere herein.
In some implementations, a non-individualized driver action prediction model may be loaded into the advance driver assistance engine 105 initially and then the model may be adapted to a specific user, vehicle 303, or environment, etc. For example, one of the advantages of the technology described herein is that it allows pre-existing models to be adapted, so that the advance driver assistance engine 105 will work with a stock, pre-trained model and also be adapted and improved upon (e.g., rather than being replaced outright).” Martinson discloses that the driver action prediction model may be pre-trained, thus suggesting a trained machine learning model. The pre-trained driver action prediction model may then be implemented in the advance driver assistance engine to be adapted to a specific user, vehicle, or environment, suggesting the application of the trained machine learning model to a runtime environment. The pre-trained driver action prediction model may be further trained using the training technique that interacts with feature data as disclosed by Tsai, based on the teaching combination below.) Martinson does not teach the limitation “For each of the features included in the set of features: modifying the training data set by identifying a feature within the set of features and modifying an initial value of the identified feature, ...”.
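The modification discussed here, replacing a feature's initial value with an unbiased value, can be illustrated by one common neutralization technique: substituting a feature's column with its mean, which removes that feature's signal while keeping the input shape intact. The sketch below is an illustrative assumption about what such a modification could look like, with hypothetical names; it is not the applicant's or any cited reference's implementation.

```python
def neutralize_feature(X, feature_idx):
    """Replace one feature's values with the column mean (a common
    'unbiased' stand-in value), preserving the shape of the data set.
    Illustrative only; mean-substitution is an assumed technique."""
    column = [row[feature_idx] for row in X]
    mean = sum(column) / len(column)
    return [row[:feature_idx] + [mean] + row[feature_idx + 1:] for row in X]
```

Executing the model on the neutralized data set and comparing its output quality with the original run is one way to measure the contribution of the modified feature.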
However, Tsai teaches the limitation (paragraph 3 “For example, a data set for a machine-learning model may have thousands to millions of features, including features that are created from combinations of other features, while only a fraction of the features and/or combinations may be relevant and/or important to the machine-learning model.”, paragraph 4 “The same features may further be identified and/or specified multiple times during different steps associated with creating, training, validating, and/or executing the same machine learning model” and paragraph 33 “In one or more embodiments, data-processing system 102 uses an execution engine 110 and a set of operators 112 to generate and/or modify sets of feature values 118”. Tsai discloses a system and method for prototype-based machine learning model reasoning interpretation. Tsai discloses the features of a data set for a machine learning model, wherein the data set for a machine learning model would be understood by a person of ordinary skill in the art as training data. Furthermore, based on the teaching combination with Martinson below, the data set may be the training data set disclosed within the teaching by Martinson with identified features. The technique of identifying features and modifying feature values that will be used as part of the input to the ML model may then be configured by one of ordinary skill in the art, such that the model can be trained with different feature values representing different scenarios.) Martinson does not teach “executing the machine learning model based on the modified training data set”. However, Tsai teaches the limitation (paragraph 0033, where Tsai discloses “…modify sets of feature values 118 that are inputted into the machine learning models… data-processing system 102 may use execution engine 110 to obtain and/or calculate feature values 118 of primary features 114 and/or derived features 116 for a machine learning model”.
Tsai discloses obtaining calculated feature values of features to be used for a machine learning model, thus implying a modified training data set of features to be used for executing the machine learning model. While Martinson does not explicitly disclose that feature data is modified, Martinson does disclose incrementally updating/retraining the model with updated data (paragraph 93); thus Martinson’s incremental training process corresponds to training with a modified set of feature values as taught by Tsai, and thereby Martinson in view of Tsai teaches, or at least suggests, the retraining/incremental updating of the machine learning model based on the modified set of data with modified feature values.) Martinson does not teach “determining, by the computing device, a sorted list of features based on the computed impact value for each feature in the set of features, the sorted list comprising information indicating a determined class and value of each feature in the set of features”. However, Tsai teaches this limitation (paragraph 0038 where Tsai discloses “Operators 218 may specify operations to be performed on lists or sets of documents 230 representing entities and/or features used with the machine learning model”, paragraph 0039 where Tsai discloses “the sort operator may order the documents by a feature or other value. For example, the sort operator may be used to order a list of documents 230 by ascending or descending feature values in the documents” and paragraph 0045 “The user-defined operator may include a class, object, expression, formula, and/or operation to be applied to one or more lists of documents”. Tsai discloses the use of an operator such as sort to create a sorted list of features for use within the training of a machine learning model. Within the sort operator, the list contains information on feature values and can also include information on the class of each feature.)
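As a concrete illustration of the sorted-list limitation discussed above, features can be ordered by descending impact value, with ties broken deterministically by name. This is a minimal sketch with hypothetical names, not taken from any cited reference.

```python
def sorted_feature_list(impact_by_feature):
    """Order features by descending impact value; ties broken by
    feature name so the resulting list is deterministic."""
    return sorted(impact_by_feature.items(), key=lambda kv: (-kv[1], kv[0]))
```

For example, an impact map of `{"a": 0.1, "b": 0.5, "c": 0.5}` would yield the ordering `b`, `c`, `a`, and a downstream training step could consume the features in that priority order.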
Martinson does not teach “training, by the computing device, the machine learning model based on the sorted list of features such that the trained machine learning model priorities features according to the sorted list of features when the trained machine learning is applied to the runtime environment”. However, Tsai teaches this limitation (paragraph 0020, where Tsai discloses “For example, data-processing system 102 may create and train one or more machine learning models”, paragraph 0033 “data-processing system 102 uses an execution engine 110 and a set of operators 112 to generate and/or modify sets of feature values 118 that are inputted into the machine learning models and/or used as scores that are outputted from the machine learning models”, paragraph 97 “the operator is applied to the required features (operation 510). ... The operator may also, or instead, sort the features in a set”, and paragraph 100 “Next, a list of calculated features and a feature dependency graph are used to identify a feature ... The required features may also be supplemented and/or ordered based on feature dependencies from the feature dependency graph. As a result, the feature obtained in operation 604 may represent the highest feature in the order that has not yet been calculated” Tsai discloses a data-processing system to train one or more machine learning models, wherein these models include functions to analyze and modify input data, more specifically to modify features from input data using an operator such as sort to obtain a sorted list, which is then used for training the machine learning model.
Under the broadest reasonable interpretation, the use of a sorted list of features during the training of the machine learning model, wherein features may be sorted/ordered based on dependencies such that a feature may represent the highest feature, indicates that features are evaluated and processed according to their order in the sorted list, thereby prioritizing features according to the sorted list when the machine learning model is applied in operation. A person of ordinary skill in the art would understand that machine learning models derive their behavior from the training data and feature inputs used during training. Accordingly, when the features are sorted or ordered with the highest feature to be used as part of the inputs to the model, the resulting trained model reflects the prioritization implied by the sorted list of features during implementation of the model in an environment based on the teaching combination with Martinson; thus Tsai’s teaching corresponds to the claimed process of training the model based on a sorted list of features and prioritizing features according to the sorted list, as claimed.) Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of a method and system for individualized adaptation of driver action prediction models by Martinson with the teaching of a system and method for processing data, including modifying and handling of training feature data, by Tsai.
The motivation to do so is referred to in Tsai’s disclosure (paragraph 0033 “In addition, data-processing system 102 may calculate feature values 118 and/or apply operators 112 in a way that avoids repeated and/or unnecessary calculation of feature values 118 while increasing the efficiency with which multiple sets of feature values 118 are calculated”, and paragraph 0077 “the system may improve technologies for executing machine-learning models and/or calculating feature values for the machine learning models, as well as applications, distributed systems, and/or computer systems that execute the technologies and/or machine-learning models.”. Tsai discloses the advantages of the invention, which improves technologies for executing machine learning models and calculating feature values for the machine learning models. The data-processing system disclosed by Tsai can help calculate feature values and/or apply operators in a way that avoids repeated and/or unnecessary calculation of feature values while increasing the efficiency with which multiple sets of feature values are calculated. The system by Martinson also discloses identifying features related to predicting driver action from the sensor data. Therefore, the teaching by Martinson can be further improved by incorporating the teaching by Tsai for various improvements toward handling and calculating feature values of the machine learning model, thereby obtaining a further improved model with corresponding features and feature values to implement in the user equipment and its environment.) Martinson/Tsai does not teach the limitation “... the modification resulting in a new value of the identified feature that corresponds to an unbiased value”. However, Pai teaches this limitation (paragraph 0054 “...In some examples, the report generator 128 may generate recommendations.
For example, the recommendations may include recommended fixes for correcting biases, overfitting, and/or other false positives where features may be incorrectly labeled as important based on the explanation scores generated by the score generator 116”. Pai discloses a system and method for prototype-based machine learning model reasoning interpretation by computing an explanation score to reflect the impact of the features. Within the disclosure, the report generator component may generate recommendations for fixing or correcting biases of features to obtain an unbiased feature.) Martinson/Tsai does not teach the limitation “computing an impact value representing an impact that the modified training data set has on an output of the executed machine learning model for the runtime environment, the computation of the impact value comprising determining a feature importance of the identified feature based on an original quality measure of the machine learning model and a class-specific quality measure of the machine learning model after the modification of the identified feature”. However, Pai teaches this limitation (paragraph 0024, where Pai discloses “Reports may be generated from the explanation scores and/or critic fraction so that users of the MLM can understand the impact of various features (e.g., a user's credit score) on a particular decision (e.g., rejected a user's loan application) made by the MLM”, paragraph 26 “A “global explanation score” may be indicative of an importance of a value(s) of a feature(s) to classifications made by a machine learning model. A global explanation score captures the behavior of many prototypes of a prototype model with respect to one or more particular features”, paragraph 44 “the trained model 110 may include one or more models”, and paragraph 45 “The trained model 110 may be analyzed during training, after training, and/or after deployment (e.g., as the deployed model 124 of FIG. 
1B) to evaluate the behavior of the trained model 110. The analysis may be performed by a prototype generator 112, a distance component 114, and/or a score generator 116 ... The report generator 128 may generate one or more reports and/or provide (e.g., over a computer network) a user interface to a client device, such that a user can view the one or more explanation scores and/or other information or metrics that give insight associated with feature importance or relevance” Pai discloses generating explanation scores so that users of the machine learning model can understand the impact of various features on decisions made by the model, and that a global explanation score may indicate the importance of feature values to classifications made by a machine learning model. Additionally, the evaluation of feature importance by the score may be performed at various stages, such as during training, after training, or after deployment. Accordingly, Pai’s explanation-score-based feature importance evaluation corresponds to the claimed impact value representing the impact that the training data features have on the output behavior of the machine learning model. Furthermore, as discussed above, Tsai teaches modifying feature values of a data set while Martinson teaches incrementally updating or retraining the model using updated data; thus Tsai in view of Martinson teaches retraining or updating the machine learning model based on modified feature values. Therefore, a person of ordinary skill in the art would have recognized the benefit of applying Pai’s explanation-score-based feature importance evaluation to the machine learning model, which may have been the original model trained with features or the updated model retrained using modified feature values as taught by Tsai and Martinson.
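The claimed computation, deriving a feature importance from an original quality measure and a class-specific quality measure taken after the feature modification, can be sketched generically as follows. The function names, the accuracy-style `quality` callable, and the restriction to rows of a single class are illustrative assumptions, not the implementation of the application or of any cited reference.

```python
def class_specific_impact(quality, X, y, X_modified, target_class):
    """Compute an impact value for a feature modification as the
    difference between the original quality measure and the quality
    measure after modification, both restricted to one class.
    `quality(X, y)` is assumed to return a score in [0, 1]."""
    in_class = [i for i, label in enumerate(y) if label == target_class]
    X_class = [X[i] for i in in_class]
    X_mod_class = [X_modified[i] for i in in_class]
    y_class = [y[i] for i in in_class]
    original = quality(X_class, y_class)      # original quality measure
    modified = quality(X_mod_class, y_class)  # class-specific quality after change
    return original - modified                # impact value for this class
```

A large positive value indicates the modified feature mattered for that class; a value near zero indicates the feature was irrelevant to it.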
Doing so would allow the practitioner to evaluate the impact of the modified features on the output behavior of the machine learning model, gain insight associated with feature importance, and determine whether the retrained model is behaving properly.) Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of a method and system for individualized adaptation of driver action prediction models by Martinson, and the teaching of a system and method for processing data, including modifying and handling of training feature data, by Tsai, with the teaching of a system and method for prototype-based machine learning model reasoning interpretation, which computes an explanation score to reflect the impact of features, by Pai. The motivation to do so is referred to in Pai’s disclosure (paragraph 0023, where Pai discloses “The explanation scores can be used to rank the relative importance of different features to the classifications made by the machine learning model. Thus, the explanation scores can be used to understand the behavior of the machine learning model”, paragraph 45 “a user can view the one or more explanation scores and/or other information or metrics that give insight associated with feature importance or relevance.
Thus, the user may determine whether the training model 110 is behaving properly and is suitable to be deployed, whether to further train the model to achieve more desirable behavior, or whether to redesign the trained model 110”, and paragraph 51 “for example, a machine learning model developer can visually identify any potential biases or other problems in the data such that the machine learning model can be modified if needed (e.g., via more rounds of training or starting a new training data set).” Pai discloses that the explanation score can be used to create a ranked list of features to better analyze the features, so that the user can gain further insight associated with feature importance of the model and can determine whether the model is behaving properly and is suitable to be deployed, whether to further train the model to achieve more desirable behavior, or whether to redesign the trained model. Pai also discloses modifying features to further improve the trained machine learning model, which is similar to the disclosure made by Tsai. Therefore, the system and method of training the machine learning model, including utilizing features within the machine learning model, by Martinson/Tsai can also incorporate this feature to better understand and utilize the learning model, thus improving the overall machine learning framework.) Claim 2 depends on claim 1; thus the rejection of claim 1 by the teaching combination is incorporated. Martinson teaches the limitation “application of the trained machine learning model comprises: identifying the runtime environment” (paragraph 44 “The sensor(s) 103a and/or 103b (also referred to herein as 103) may include any type of sensors suitable for the moving platform(s) 101 and/or the client device(s) 117.
The sensor(s) 103 may be configured to collect any type of sensor data suitable to determine characteristics of a moving platform 101, its internal and external environments” Martinson discloses that the sensors collect data to determine characteristics of a moving platform, including its internal and external environments (runtime environment).) Martinson teaches the limitation “outputting results of the execution of the trained machine learning model” (paragraph 87 “the driver action prediction model may include a stock machine learning-based driver action prediction model. Stock means the model was pre-trained using a collective of sensor data aggregated from a multiplicity of moving platforms 101 to identify general driver behavior.” Martinson discloses that the model was pre-trained using a collective of sensor data, suggesting that the pre-trained model provides an action prediction result upon execution.) Martinson teaches the limitation “selecting the trained machine learning model based on the runtime environment”. (paragraph 82 “A multiplicity of sensor data may be used by the advance driver assistance engine 106 to perform real-time training data collection for training the driver action prediction model for a specific driver, so that the driver action prediction model can be adapted or customized to predict that specific driver's actions.” Martinson discloses that sensor data regarding the environment may be collected in real time for further training the pre-trained driver action prediction model such that the model can be adapted or customized for further usage, which suggests a selection of the pre-trained driver action prediction model to be customized to adapt to a specific user’s actions with regard to the environment.)
Martinson teaches the limitation “collecting sensor data from another device operating within said runtime environment” (paragraph 83 “As a further example, the diagram 300 illustrates that the advance driver assistance engine 105 may receive sensor data 301 from sensors 103 (not shown) associated with a moving platform 101, such as the vehicle 303. ... V2V sensing (e.g., sensor data provided from one vehicle to another vehicle)” Martinson discloses that the advance driver assistance engine may receive sensor data from sensors of another moving platform, such as via V2V sensing (e.g., sensor data provided from one vehicle to another vehicle).) Martinson teaches the limitation “executing the trained machine learning model with the collected sensor data as input” (paragraph 84 “Using the sensor data 301, the advance driver assistance engine 105 then predict driver actions and/or adapt a driver action prediction model ... the predicted future driver action may be returned to other systems of the vehicle 303 to provide actions ... may be transmitted to adjacent vehicles and/or infrastructure to notify these nodes of impending predicted driver actions ...” Martinson discloses that, using the sensor data, the advance driver assistance engine can adapt a driver action prediction model to provide a prediction of actions, thus suggesting that the sensor data is used as input, wherein the sensor data may include sensor data provided from another vehicle using V2V sensing.) Claim 3 depends on claim 2; thus the rejection of claim 2 by the teaching combination is incorporated. Martinson teaches “the output results are fed back to the computing device for further training of the machine learning model” (paragraph 93 “training neural networks may be performed using backpropagation that implements a gradient descent approach to learning. In some instances, the same algorithm may be used for processing a large dataset as is used for incrementally updating the model.
Accordingly, instead of retraining the method from scratch when new data is received, the model can be updated incrementally as data is iteratively received” Martinson discloses that the training of neural networks may be performed using backpropagation; one of ordinary skill in the art would recognize that backpropagation involves using the output of a neural network to calculate an error, and this error is then "fed back" through the network to update its internal parameters for further training.) Claim 4 depends on claim 1; thus the rejection of claim 1 is incorporated. Tsai teaches the limitation “analyzing the sorted list of features by comparing values of each feature output by the machine learning model” (paragraph 0061, where Tsai discloses “Evaluation apparatus 204 then compares calculated feature list 226 with required features 212 for a given operator from operator dependency graph 220 to determine additional features 216 to be calculated for the operator and/or prevent previously calculated features from being recalculated for use with the operator”. Tsai discloses that the evaluation apparatus compares the calculated feature list, which comprises information on feature values, to determine whether a feature has been previously calculated and to prevent its recalculation.) Pai teaches the limitation “determining, based on said analysis, whether any feature has been incorrectly assigned a class or value” (paragraph 0054, where Pai discloses “In some examples, the report generator 128 may generate recommendations. For example, the recommendations may include recommended fixes for correcting biases, overfitting, and/or other false positives where features may be incorrectly labeled as important”. Pai discloses that the report generator can identify when a feature may be incorrectly labeled, as well as biases, overfitting, and other false positives in assigned features.)
Claim 5 depends on claim 4; the rejection of claim 4 is therefore incorporated. Pai teaches “when it is determined that a feature within the set of features has had an incorrectly assigned class or value, repeating the modifying, executing and computing steps in order to determine another sorted list” (paragraph 0054, where Pai discloses “In some examples, the report generator 128 may recommend retraining the deployed model 124. For example, when then it is detected that that training data classifications have been over fitted, the recommendation may be to retrain the model”. Pai discloses that, after detecting that the training data has been overfitted or exhibits any of the problems noted in claim 4, the report generator may recommend retraining the model, implying that the training method of claim 1 will be applied again and another sorted list will be produced in accordance with that method.)

Claim 8 depends on claim 1; the rejection of claim 1 is therefore incorporated. Tsai teaches “modifying the training data further comprises removing the identified feature from an input of the machine learning model during said execution” (paragraph 0064, where Tsai discloses “Evaluation apparatus 204 may then remove, from required features 212, one or more features that have already been calculated according to calculated feature list 226”. Tsai discloses that the evaluation apparatus can remove features that have already been calculated and are not needed as input when training the machine learning model.)

Regarding claim 11, Martinson teaches “a processor” (paragraph 121: “A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.”. Martinson discloses a computer system including at least one processor for storing and/or executing program code to control the system as well as various other components of the moving platform.) The applicant is further directed to the rejection of claim 1 set forth above; claim 11 is rejected on the same rationale because it recites similar limitations and processing steps. The motivation to combine the teachings of Martinson in view of Tsai, further in view of Pai, is the same as the motivation set forth for claim 1.

Claim 12 depends on claim 11; the rejection of claim 11 is therefore incorporated. The applicant is further directed to the rejection of claim 2 set forth above; claim 12 is rejected on the same rationale because it recites similar limitations and processing steps.

Claim 13 depends on claim 11; the rejection of claim 11 is therefore incorporated. The applicant is further directed to the rejections of claims 4 and 5 set forth above, combined; claim 13 is rejected on the same rationale because it recites similar limitations and processing steps.

Regarding claim 16, Martinson teaches “A non-transitory computer-readable medium tangibly encoded with instructions” (paragraph 14: “a system may include one or more computer processors and one or more non-transitory memories storing instructions that, when executed by the one or more computer processors, cause the computer system to perform operations”, and paragraph 119: “Such a computer program may be stored in a computer readable storage medium”. Martinson discloses that the system comprises non-transitory memories, i.e., a computer-readable storage medium storing program instructions to be executed by the computer processor.) The applicant is further directed to the rejection of claim 1 set forth above; claim 16 is rejected on the same rationale because it recites similar limitations and processing steps.
The motivation to combine the teachings of Martinson in view of Tsai, further in view of Pai, is the same as the motivation set forth for claim 1.

Claim 17 depends on claim 16; the rejection of claim 16 is therefore incorporated. The applicant is further directed to the rejection of claim 2 set forth above; claim 17 is rejected on the same rationale because it recites similar limitations and processing steps.

Claim 18 depends on claim 16; the rejection of claim 16 is therefore incorporated. The applicant is further directed to the rejections of claims 4 and 5 set forth above, combined; claim 18 is rejected on the same rationale because it recites similar limitations and processing steps.

Claims 9, 10, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Martinson et al. (US 20180053102 A1) in view of Tsai et al. (US 20190325352 A1), further in view of Pai et al. (US 20200279140 A1), further in view of Patel et al. (US 1868440 B1).

Claim 9 depends on claim 1; the rejection of claim 1 over the combined references is therefore incorporated. Martinson/Tsai/Pai does not teach the limitation “modifying the training data set comprises shuffling values of at least a portion of the features in the set of features”. However, Patel teaches this limitation (column 5, lines 55-56, where Patel discloses “In some embodiments the training data is shuffled before training, or between passes of the training”. Patel discloses shuffling the training data before or between passes of the training.)

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of a method and system for individualized adaptation of driver action prediction models by Martinson, the teaching of a system and method for processing data, including modifying and handling training feature data, by Tsai, and the teaching of a system and method for prototype-based machine learning model reasoning interpretation, computing an explanation score to reflect the impact of features, by Pai, with the teaching of shuffling training data by Patel. The motivation to do so is found in Patel’s disclosure (column 5, lines 61-65, where Patel discloses “Shuffling changes the order or arrangement in which the data is utilized for training so that the training algorithm does not encounter groupings of similar types of data, or a single type of data for too many observations in succession”. Patel discloses the benefit of shuffling: it ensures the algorithm does not encounter groupings of similar types of data, or a single data type, for too many observations in succession. Therefore, Tsai’s teaching within the Martinson/Tsai/Pai combination regarding modifying values of the feature data set can also incorporate the shuffling technique, shuffling the feature values in the training data set to obtain the stated benefits.)

Claim 10 depends on claim 9; the rejection of claim 9 is therefore incorporated. Patel further teaches “shuffling is performed randomly” (column 5, lines 61-63, where Patel discloses “… the shuffling in many embodiments is a random or pseudo-random shuffling to generate a truly random ordering”. Patel discloses that the shuffling is performed as a random or pseudo-random shuffling to ensure a random result.)

Claim 15 depends on claim 11; the rejection of claim 11 is therefore incorporated.
The applicant is further directed to the rejections of claims 9 and 10 set forth above, combined; claim 15 is rejected on the same rationale because it recites similar limitations and processing steps.

Claim 20 depends on claim 16; the rejection of claim 16 is therefore incorporated. The applicant is further directed to the rejections of claims 9 and 10 set forth above, combined; claim 20 is rejected on the same rationale because it recites similar limitations and processing steps.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY TU DIEP, whose telephone number is (703) 756-1738. The examiner can normally be reached M-F 8-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUY T DIEP/
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123
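As background on the shuffling technique cited from Patel in the claim 9 and 10 rejections (reordering the training data before or between training passes so the algorithm does not encounter long runs of similar examples), a minimal sketch follows; the data and function names are purely illustrative and do not reflect Patel's actual implementation:

```python
import random

random.seed(42)

# Hypothetical training set, initially grouped by label -- the ordering
# Patel warns against (a single type of data for many observations in a row).
training_data = [(i, "cat") for i in range(5)] + [(i, "dog") for i in range(5)]

seen_orders = []

def train_one_pass(data):
    # Placeholder for one incremental training pass over the data;
    # here we just record the order in which examples were presented.
    seen_orders.append(list(data))

# Shuffle before training and again between passes (pseudo-random
# reordering, per Patel col. 5), then run each training pass.
for epoch in range(3):
    random.shuffle(training_data)
    train_one_pass(training_data)
```

Shuffling only rearranges the examples; the multiset of training data is unchanged, so each pass sees the same data in a different order.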

Prosecution Timeline

Dec 23, 2020
Application Filed
Feb 01, 2024
Non-Final Rejection — §101, §103
Apr 25, 2024
Response Filed
Jun 13, 2024
Final Rejection — §101, §103
Aug 20, 2024
Response after Non-Final Action
Sep 12, 2024
Response after Non-Final Action
Sep 12, 2024
Examiner Interview (Telephonic)
Sep 19, 2024
Request for Continued Examination
Oct 07, 2024
Response after Non-Final Action
Oct 28, 2024
Non-Final Rejection — §101, §103
Feb 04, 2025
Response Filed
May 05, 2025
Final Rejection — §101, §103
Aug 11, 2025
Request for Continued Examination
Aug 20, 2025
Response after Non-Final Action
Sep 05, 2025
Non-Final Rejection — §101, §103
Dec 08, 2025
Response Filed
Mar 11, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579428
METHOD FOR INJECTING HUMAN KNOWLEDGE INTO AI MODELS
2y 5m to grant Granted Mar 17, 2026
Patent 12488223
FEDERATED LEARNING FOR TRAINING MACHINE LEARNING MODELS
2y 5m to grant Granted Dec 02, 2025
Patent 12412129
DISTRIBUTED SUPPORT VECTOR MACHINE PRIVACY-PRESERVING METHOD, SYSTEM, STORAGE MEDIUM AND APPLICATION
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner, based on the 3 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
25%
Grant Probability
30%
With Interview (+5.5%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
