Prosecution Insights
Last updated: April 19, 2026
Application No. 18/178,223

BUILDING GENERALIZED MACHINE LEARNING MODELS FROM MACHINE LEARNING MODEL EXPLANATIONS

Final Rejection — §101, §102, §103
Filed: Mar 03, 2023
Examiner: BRACERO, ANDREW ANGEL
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cypress Semiconductor Corporation
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% — above average (5 granted / 5 resolved; +45.0% vs TC avg)
Interview Lift: +0.0% — minimal lift, based on resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 26
Total Applications: 31 (career history, across all art units)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Based on career data from 5 resolved cases; deltas are measured against the Tech Center average estimate.

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are presented for examination in this application, 18/178,223, filed 2023-03-03 and having an effective filing date of 2022-10-06 via provisional application 63/413,725.

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Response to Arguments

Applicant’s arguments and remarks filed 2025-01-27 have been fully considered. The arguments and remarks regarding the 35 U.S.C. § 101 rejections were not found to be persuasive. The arguments and remarks regarding the 35 U.S.C. § 103 rejections were found to be persuasive; however, the amendments have necessitated a change in the references applied. The 35 U.S.C. § 103 rejections have been maintained via a new ground of rejection.

35 U.S.C. § 101

Applicant’s response: Applicant asserts “The Office Action indicates that independent claims 1, 8, and 15, from which claims 7, 14, and 20 respectively depend, are directed to patent-eligible subject matter. A dependent claim incorporates all the elements of the independent claim from which it depends, while adding additional elements that further narrow the scope of the independent claim from which it depends. Therefore, if an independent claim is patent-eligible, any claim that properly depends therefrom should inherit its patent eligibility. 
Moreover, originally filed claims 7, 14, and 20 do not merely recite the evaluation or comparison of information. Rather, they recite a specific technical process in which explanation-derived feedback may be used to control the evaluation of an enhanced machine learning model. This process may improve the functioning of the machine learning system itself by enabling guided incremental learning that reduces catastrophic forgetting and improves classification accuracy. Such improvements constitute a technical improvement to computer-implemented machine learning systems. Accordingly, the rejections of originally filed claims 7, 14, and 20 under 35 U.S.C. § 101 are believed to be in error. For example, claims 7, 14, and 20 have been amended to recite ‘wherein’ clauses that further define how to generate an enhanced version of the machine learning model in accordance with the operating mode, which the Office Action has already indicated is directed to patent-eligible subject matter. Accordingly, Applicant respectfully requests that the rejections of the present claims under 35 U.S.C. § 101 be withdrawn.”

Examiner’s response: Examiner respectfully disagrees. Because the Examiner did not find any abstract ideas in the independent claims, the Examiner found the independent claims to satisfy the analysis for eligibility under Step 2A, Prong One. Dependent claims 7, 14, and 20, however, do contain abstract ideas, and a new analysis for those claims was deemed necessary because claims are examined individually for patent eligibility. Claims 7, 14, and 20 have no claims that depend from them and therefore were the only claims in need of a new analysis. Even though claims are examined in light of the specification, claims 7, 14, and 20 do not recite any additional elements that make the claims eligible. 
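For context, the evaluate-and-compare limitation at the center of this dispute reduces to a few lines of code. The sketch below is purely illustrative — the function names (`evaluate`, `compare_versions`) and the toy classifiers are hypothetical and are not taken from the application:

```python
def evaluate(model, inputs, labels):
    """Accuracy of a classifier over a labeled dataset."""
    correct = sum(1 for x, y in zip(inputs, labels) if model(x) == y)
    return correct / len(labels)

def compare_versions(initial_model, enhanced_model, inputs, labels):
    """Obtain a first evaluation (initial version) and a second evaluation
    (enhanced version), then compare them — mirroring the evaluate/compare
    limitation recited in claims 7, 14, and 20."""
    first_evaluation = evaluate(initial_model, inputs, labels)
    second_evaluation = evaluate(enhanced_model, inputs, labels)
    return second_evaluation - first_evaluation  # > 0: enhanced version improved

# Toy activity-recognition classifiers over one-element feature vectors.
initial = lambda x: 0                 # always predicts activity class 0
enhanced = lambda x: int(x[0] > 0.5)  # thresholds the first feature
samples = [([0.2], 0), ([0.9], 1), ([0.7], 1), ([0.1], 0)]
inputs, labels = zip(*samples)
print(compare_versions(initial, enhanced, inputs, labels))  # 0.5
```

The sketch also illustrates why the Examiner characterizes the comparison itself as a mental process: the final subtraction of two scores could be performed with pen and paper once the two evaluations are in hand.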
“If applicant amends a claim to add a generic computer or generic computer components and asserts that the claim recites significantly more because the generic computer is ‘specially programmed’ (as in Alappat, now considered superseded) or is a ‘particular machine’ (as in Bilski), the examiner should look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008).” MPEP 2106.05(b). “It is important to note that a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions does not qualify as a particular machine.” MPEP 2106.05(b)(I).

A person having ordinary skill in the art would consider evaluating or comparing two different models to be a conventional computer function within the field of machine learning and artificial intelligence. The claim as a whole is still directed to the abstract idea of a mental process.

Furthermore, under Step 2B, a consideration for determining whether a claim recites significantly more than a judicial exception is whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry. The aforementioned claim limitations, as recited in dependent claims 7, 14, and 20, recite well-understood, routine, conventional functions claimed as insignificant extra-solution activities that amount to merely receiving or transmitting data over a network, e.g., using the Internet to gather data. buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). As recited, the additional element in the aforementioned claims includes obtaining a first and second evaluation from the output of a neural network, which amounts to transmitting data over a network. Thus, for at least the reasons described, the Examiner respectfully finds the claims ineligible.

35 U.S.C. § 103

Applicant’s response: Applicant asserts “Obviousness requires that all claim features are taught or suggested by the combination of cited references. Applicant respectfully submits that the combination of cited references fails to teach or suggest every element of amended claim 1. For example, the combination of Sharpe and Spinner does not teach or suggest a machine learning model trained to predict an activity class for activity recognition, or causing a user interface to operate in an operating mode of a plurality of operating modes for machine learning model building. Therefore, the combination of Sharpe and Spinner does not teach or suggest, at least, ‘receive, from a client device via a user interface, input data comprising an initial version of a machine learning model trained to predict an activity class for activity recognition’ and/or ‘cause the user interface to operate in an operating mode of a plurality of operating modes for machine learning model building,’ as recited in amended claim 1. During the interview with the Examiner on January 7, 2026, the Examiners indicated that amendments of the type made herein to claim 1 would necessitate further search and consideration. Accordingly, the combination of Sharpe and Spinner fails to teach or suggest every element of amended claim 1.” … “Similar language is also included in amended claims 8 and 15. 
Thus, the combination of Sharpe and Spinner does not teach or suggest all the features of independent claims 1, 8, and 15 and corresponding dependent claims 2-4, 7, 9-11, 14, 16, 17, and 20. Accordingly, Applicant respectfully requests the rejections of claims 1-4, 7-11, 14-17, and 20 under 35 U.S.C. § 103 be withdrawn.” … “Claims 5, 6, 12, 13, 18, and 19 depend on and include the features of one of claims 1, 8, or 15. As discussed above, the combination of Sharpe and Spinner fails to teach or suggest all of the features of independent claims 1, 8, and 15. Jin fails to cure at least these deficiencies of independent claims 1, 8, and 15. Therefore, Applicant respectfully submits that claims 5, 6, 12, 13, 18, and 19 are patentable over the cited references at least by virtue of their respective dependencies from independent claims 1, 8, and 15. Accordingly, Applicant requests that the rejections of claims 5, 6, 12, 13, 18, and 19 under 35 U.S.C. § 103 be withdrawn.”

Examiner’s response: Arguments regarding the amended limitations have been fully considered but are moot in view of the new grounds of rejection.

Claim Objections

Claims 8 and 14 are objected to for the following reason: “causing, by the at least one processing, the user interface” in claims 8 and 14 appears to be a typo. A potential correction could be “causing, by the at least one processing device, the user interface” (emphasis added). Appropriate corrections should be made.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 7, 14, and 20 are rejected under 35 U.S.C. § 101 as being unpatentable because the claimed invention in these claims is directed to an abstract idea without significantly more. 
The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).

Regarding claim 7 (currently amended):

Step 1 – Is the claim directed to a process, machine, manufacture, or a composition of matter? Yes, the claim is directed to a machine (system).

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or a natural phenomenon? Yes, the claim recites an abstract idea: “evaluate the enhanced version of the machine learning model by comparing the first evaluation to the second evaluation” — this limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion), which can be performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2) III C.).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claim recites additional elements that do not integrate the judicial exception into a practical application:

“a system comprising: a memory” — this limitation is directed to mere instructions to apply an exception, as the use of a computer or other machinery in its ordinary capacity amounts to invoking computer components merely as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

“a processing device, operatively coupled to the memory” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“receive, from a client device via a user interface, input data comprising an initial version of a machine learning model” — this limitation amounts to data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

“initialize an operating mode of the user interface from machine learning model building” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“generate an enhanced version of the machine learning model in accordance with the operating mode” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“the processing device is further configured to” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” — this limitation amounts to data gathering and outputting, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

Step 2B – Does the claim recite additional elements that amount to significantly more than the abstract idea itself? 
No, there are no additional elements that amount to significantly more than the judicial exception. Any additional elements that were determined to be insignificant extra-solution activity in Step 2A, Prong 2 are further evaluated in Step 2B as to whether they are well-understood, routine, and conventional activities. The “receive, from a client device via a user interface, input data comprising an initial version of a machine learning model” and “obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” limitations were found to be insignificant extra-solution activities in claim 7. These limitations are recited at a high level of generality and amount to transmitting data over a network, which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.).

Regarding claim 14:

Step 1 – Is the claim directed to a process, machine, manufacture, or a composition of matter? Yes, the claim is directed to a process (method).

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or a natural phenomenon? Yes, the claim recites an abstract idea: “evaluate the enhanced version of the machine learning model by comparing the first evaluation to the second evaluation” — this limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion), which can be performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2) III C.).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? 
No, the claim recites additional elements that do not integrate the judicial exception into a practical application:

“a method comprising: receiving, by at least one processing device from a client device via a user interface, input data comprising an initial version of a machine learning model” — this limitation amounts to data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

“initializing, by at least one processing device based on the input data, an operating mode of the user interface from machine learning model building” — this limitation is directed to mere instructions to apply an exception, as the use of a computer or other machinery in its ordinary capacity amounts to invoking computer components merely as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

“generating, by at least one processing device based on the input data, an enhanced version of the machine learning model in accordance with the operating mode” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“by the at least one processing device” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“obtaining a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” — this limitation amounts to data gathering and outputting, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

Step 2B – Does the claim recite additional elements that amount to significantly more than the abstract idea itself? No, there are no additional elements that amount to significantly more than the judicial exception. Any additional elements that were determined to be insignificant extra-solution activity in Step 2A, Prong 2 are further evaluated in Step 2B as to whether they are well-understood, routine, and conventional activities. The “receiving, from a client device via a user interface, input data comprising an initial version of a machine learning model” and “obtaining a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” limitations were found to be insignificant extra-solution activities in claim 14. These limitations are recited at a high level of generality and amount to transmitting data over a network, which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.).

Regarding claim 20:

Step 1 – Is the claim directed to a process, machine, manufacture, or a composition of matter? Yes, the claim is directed to a manufacture (a non-transitory computer-readable storage medium).

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or a natural phenomenon? 
Yes, the claim recites an abstract idea: “evaluate the enhanced version of the machine learning model by comparing the first evaluation to the second evaluation” — this limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion), which can be performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2) III C.).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claim recites additional elements that do not integrate the judicial exception into a practical application:

“a non-transitory computer-readable storage medium comprising instructions” — this limitation is directed to mere instructions to apply an exception, as the use of a computer or other machinery in its ordinary capacity amounts to invoking computer components merely as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

“receive, from a client device via a user interface, input data comprising an initial version of a machine learning model” — this limitation amounts to data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

“initialize an operating mode of the user interface from machine learning model building” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“generate an enhanced version of the machine learning model in accordance with the operating mode” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“the processing device is further configured to” — mere instructions to apply an exception (see MPEP 2106.05(f)(2)).

“obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” — this limitation amounts to data gathering and outputting, which is an insignificant extra-solution activity (see MPEP 2106.05(g)(3)) recognized as such by the courts (Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

Step 2B – Does the claim recite additional elements that amount to significantly more than the abstract idea itself? No, there are no additional elements that amount to significantly more than the judicial exception. Any additional elements that were determined to be insignificant extra-solution activity in Step 2A, Prong 2 are further evaluated in Step 2B as to whether they are well-understood, routine, and conventional activities. The “receive, from a client device via a user interface, input data comprising an initial version of a machine learning model” and “obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model” limitations were found to be insignificant extra-solution activities in claim 20. 
These limitations are recited at a high level of generality and amount to transmitting data over a network, which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d) II.).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 8 and 14 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Wangenheim et al. (“Visual tools for teaching machine learning in K-12: A ten-year systematic mapping”, hereinafter Wangenheim).

Regarding claim 8 (currently amended): Wangenheim teaches:

receiving, by at least one processing device from a client device via a user interface, input data comprising an initial version of a machine learning model trained to predict an activity class for activity recognition (see pg. 5752 section 5: “In general, the tools support supervised learning, with few exceptions supporting reinforcement learning (Cognimates, ML4K, and SnAIp) and/or unsupervised learning (Orange, RapidMiner). Model training can be performed on the local machine (BlockWiSARD, RapidMiner), with some tools allowing the use of a cloud server (eCraft2learn, Cognimates) or directly on a mobile device (Zhu, 2019). Yet, most use the user’s web browser to train the model (Teachable Machine, PIC, LearningML, mBlock)”. Also see pg. 5756 section 4.3: “LearningML intends to show in advanced mode also a confusion matrix, a table that in each row presents the examples in a predicted class while each column represents the examples in an actual class. These visualizations of the results of the classification, facilitate the identification of data that are not accurately classified, and thus, support the analysis of the students to improve the model’s performance. The use of examples to support the understanding of classes appears to be a promising solution that resonates with users (Kim et al., 2015).”. Also see pg. 5757 section 4.4: “While some tools just support the export of the created ML model, several provide also support for the deployment as part of a game or mobile application, integrated or as an extension of a block-based programming environment (Fig. 10).”);

causing, by the at least one processing, the user interface to operate in an operating mode of a plurality of operating modes for machine learning model building (see pg. 5762 section 5: “Therefore, the goal has to be to create an ML learning environment with sufficient scaffolds for novices to start to create ML models with little or no formal instruction (low threshold) while also being able to support sophisticated programs (high ceiling). To simultaneously target different kinds of users, some of the tools (i.e., DeepScratch, Google TM, Orange, PIC, SnAIp) offer advanced modes in which they allow more advanced students to define hyperparameters for training (such as learning rate, epochs, batch size, etc.) or more detailed evaluation metrics while hiding these details from novices.”); and

generating, by the at least one processing device, an enhanced version of the machine learning model in accordance with the operating mode (see pg. 
5752 section 4.3: “Using visual tools, ML concepts are typically concealed with black boxes to reduce the cognitive load when learning (Resnick et al., 2000). Such abstractions of ML concepts include very high-level representations, as, in ML4K, training the model is reduced to a single action button. Yet, as this concealing of ML concepts limits people’s ability to construct a basic understanding of ML concepts (Hitron et al., 2019; Resnick et al., 2000), some tools provide advanced modes that provide a lower-level representation. For example, DeepScratch, eCraft2Learn, Milo and PIC, allow defining parameters of the neural network architecture (such as type of model, number of layers, etc.), while data flow-based tools such as Orange, even provide low-level functionalities to build a neural network from neurons and layers. Such an advanced mode is also provided concerning training parameters (such as epochs, learning rate, batches, etc.) as part of DeepScratch, eCraft2Learn, Google TM, Milo, Orange, PIC, RapidMiner, and SnAIP.”).

Regarding claim 14: Wangenheim teaches the method of claim 8. Wangenheim further teaches wherein, to generate the enhanced version of the machine learning model, the processing device is further configured to: obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model (see pg. 5758 section 5: “These tools support exploration allowing students to try out different alternatives and create their custom ML models. Providing a visual interface, the tools allow the students to interact and execute a human-centric ML process in an interactive way using a train-feedback-correct cycle, enabling them to iteratively evaluate the current state of the model and take appropriate actions to improve it.”. Also see pg. 5744 section 4.2: “However, providing an advanced mode, some of the tools also enable more knowledgeable users to interact on a more detailed level when building, training, and/or evaluating the ML model”. Also see Fig. 9, which shows epochs of performance evaluations); and evaluate the enhanced version of the machine learning model by comparing the first evaluation to the second evaluation (see Fig. 9, which shows epochs of performance evaluations).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 
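As an aside on the Wangenheim passages quoted above: the confusion matrix that the survey attributes to LearningML’s advanced mode (rows presenting examples in a predicted class, columns the examples in an actual class) can be computed in a few lines. This is a generic sketch, not code from any cited tool:

```python
def confusion_matrix(actual, predicted, n_classes):
    """Rows index the predicted class, columns the actual class,
    matching the orientation in the quoted LearningML description."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for a, p in zip(actual, predicted):
        m[p][a] += 1
    return m

# Three activity classes; off-diagonal counts flag misclassified examples,
# the data the survey says students use to improve the model's performance.
actual    = [0, 0, 1, 1, 2, 2]
predicted = [0, 1, 1, 1, 2, 0]
for row in confusion_matrix(actual, predicted, 3):
    print(row)
```

Here the `m[1][0] == 1` entry shows one example of actual class 0 predicted as class 1, which is exactly the kind of cell a student would inspect before retraining.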
Claims 1-4, 7, 9-11, 15-17, and 20 are rejected under 35 U.S.C 103 as being unpatentable over Wangenheim et al. (“Visual tools for teaching machine learning in K-12: A ten-year systematic mapping”, hereinafter Wangenheim) in view of Sharpe et al. (US20240037427A1, hereinafter Sharpe). Regarding claim 1 (currently amended): Wangenheim teaches to receive, from a client device via a user interface, input data comprising an initial version of a machine learning model trained to predict an activity class for activity recognition (see pg. 5752 section 5 : “In general, the tools support supervised learning, with few exceptions supporting reinforcement learning (Cognimates, ML4K, and SnAIp) and/or unsupervised learning (Orange, RapidMiner). Model training can be performed on the local machine (BlockWiSARD, RapidMiner), with some tools allowing the use of a cloud server (eCraft2learn, Cognimates) or directly on a mobile device (Zhu, 2019). Yet, most use the user’s web browser to train the model (Teachable Machine, PIC, LearningML, mBlock)”. Also see pg. 5756 section 4.3: “LearningML intends to show in advanced mode also a confusion matrix, a table that in each row presents the examples in a predicted class while each column represents the examples in an actual class. These visualizations of the results of the classification, facilitate the identification of data that are not accurately classified, and thus, support the analysis of the students to improve the model’s performance. The use of examples to support the understanding of classes appears to be a promising solution that resonates with users (Kim et al., 2015).”. Also see pg. 5757 section 4.4: “While some tools just support the export of the created ML model, several provide also support for the deployment as part of a game or mobile application, integrated or as an extension of a block-based programming environment (Fig. 
10).”), cause the user interface to operate in an operating mode of a plurality of operating modes for machine learning model building (see pg. 5762 section 5:“Therefore, the goal has to be to create an ML learning environment with sufficient scaffolds for novices to start to create ML models with little or no formal instruction (low threshold) while also being able to support sophisticated programs (high ceiling). To simultaneously target different kinds of users, some of the tools (i.e., DeepScratch, Google TM, Orange, PIC, SnAIp) offer advanced modes in which they allow more advanced students to define hyperparameters for training (such as learning rate, epochs, batch size, etc.) or more detailed evaluation metrics while hiding these details from novices.”); and generate an enhanced version of the machine learning model in accordance with the operating mode (see pg. 5752 section 4.3: “Using visual tools, ML concepts are typically concealed with black boxes to reduce the cognitive load when learning (Resnick et al., 2000). Such abstractions of ML concepts include very high-level representations, as, in ML4K, training the model is reduced to a single action button. Yet, as this concealing of ML concepts limits people’s ability to construct a basic understanding of ML concepts (Hitron et al., 2019; Resnick et al., 2000), some tools provide advanced modes that provide a lower-level representation. For example, DeepScratch, eCraft2Learn, Milo and PIC, allow defining parameters of the neural network architecture (such as type of model, number of layers, etc.), while data flow-based tools such as Orange, even provide low-level functionalities to build a neural network from neurons and layers. Such an advanced mode is also provided concerning training parameters (such as epochs, learning rate, batches, etc.) as part of DeepScratch, eCraft2Learn, Google TM, Milo, Orange, PIC, RapidMiner, and SnAIP.”.) 
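The confusion-matrix evaluation that Wangenheim attributes to LearningML (each row holds the examples in a predicted class, each column the examples in an actual class) can be sketched in a few lines of Python. The activity labels and predictions below are hypothetical illustrations, not data from the record:

```python
def confusion_matrix(predicted, actual, classes):
    # Rows are predicted classes, columns are actual classes,
    # matching the LearningML layout described in the passage above.
    counts = {p: {a: 0 for a in classes} for p in classes}
    for p, a in zip(predicted, actual):
        counts[p][a] += 1
    return counts

# Hypothetical activity-recognition labels and predictions.
classes = ["walk", "run", "sit"]
pred = ["walk", "walk", "run", "sit", "run", "sit"]
true = ["walk", "run", "run", "sit", "walk", "sit"]
cm = confusion_matrix(pred, true, classes)

# Off-diagonal cells identify examples that are not accurately
# classified, which is what guides users to improve the model.
misclassified = sum(cm[p][a] for p in classes for a in classes if p != a)
```

Here the two off-diagonal counts (a "run" example predicted "walk" and a "walk" example predicted "run") are exactly the misclassifications such a visualization surfaces.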
Wangenheim does not explicitly teach a system comprising: a memory and a processing device, operatively coupled to the memory. Sharpe, however, analogously teaches a system comprising: a memory and a processing device, operatively coupled to the memory (see para [0060]: “The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium.”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim and Sharpe before him or her, to modify the system of claim 1 comprising: a memory and a processing device, operatively coupled to the memory in order to perform the claimed functionality with a computer system (see Sharpe at para [0060]: “The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium.”.).

Regarding claim 2: Wangenheim in view of Sharpe teaches the system of claim 1. Wangenheim does not explicitly teach wherein the enhanced version of the machine learning model is generated based on an explanation indicative of feature importance. Sharpe, however, analogously teaches wherein the enhanced version of the machine learning model is generated based on an explanation indicative of feature importance (see [0019]: “The ML explanation system 102 may determine the first feature so that it can be dropped from the dataset. By removing the first feature from the dataset and retraining the machine learning model, the ML explanation system 102 may determine the effect the first feature had on the performance of the machine learning model.”. Also see para [0004]: “Specifically, methods and systems described herein create a modified dataset by dropping one or more features that have been identified as being important for decisions made by a model.
The model is then retrained using the modified dataset. The performance of the model (e.g., after being retrained) is compared with the original performance of the model (e.g., the original performance of the model after being trained on the original or complete dataset). Evaluating, the model and importance of features in this way eliminates the need for replacing data with perturbations and their associated drawbacks described above. Although retraining the model with a modified dataset from which a feature has been dropped may require additional computing resources, it allows for improved evaluations of explanation methods and thus allow for improved explainability of models (e.g., machine learning or artificial intelligence models) through proper selection of superior XAI techniques.”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim and Sharpe before him or her, to modify the system of claim 2 to include attributes of wherein the enhanced version of the machine learning model is generated based on an explanation indicative of feature importance in order to allow for improved evaluations of explanation methods and improved explainability of models (see Sharpe at para [0019]: “Although retraining the model with a modified dataset from which a feature has been dropped may require additional computing resources, it allows for improved evaluations of explanation methods and thus allow for improved explainability of models (e.g., machine learning or artificial intelligence models) through proper selection of superior XAI techniques.”). Regarding claims 9 and 16: Claims 9 and 16 recite analogous limitations to claim 2 and therefore are rejected on the same grounds. Regarding claim 3: Wangenheim in view of Sharpe teaches the system of claim 2. 
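The drop-and-retrain evaluation Sharpe describes in paras [0019] and [0004] amounts to: train on the full feature set, measure performance, drop a candidate feature, retrain on the modified dataset, and compare. A minimal sketch, using a toy nearest-centroid classifier and a hypothetical two-feature dataset (both illustrative assumptions, not Sharpe's implementation):

```python
def train_centroids(rows, labels):
    # "Training" a toy nearest-centroid model: average the feature
    # vectors belonging to each class.
    sums, counts = {}, {}
    for row, y in zip(rows, labels):
        s = sums.setdefault(y, [0.0] * len(row))
        for i, v in enumerate(row):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def accuracy(model, rows, labels):
    def predict(row):
        return min(model, key=lambda y: sum((a - b) ** 2
                                            for a, b in zip(row, model[y])))
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def drop_feature(rows, idx):
    # Create the "modified dataset" by removing one feature column.
    return [[v for i, v in enumerate(row) if i != idx] for row in rows]

# Hypothetical dataset: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.2, 1.0], [1.0, 4.0], [0.9, 2.0]]
y = ["a", "a", "b", "b"]

baseline = accuracy(train_centroids(X, y), X, y)        # full feature set
X_drop = drop_feature(X, 0)                             # drop the candidate feature
retrained = accuracy(train_centroids(X_drop, y), X_drop, y)
# A large performance drop after retraining suggests the dropped
# feature was important to the model's decisions.
```

On this toy data the baseline accuracy is 1.0 and the retrained accuracy falls to 0.5, flagging feature 0 as important, which is the comparison the quoted passages describe.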
Wangenheim does not explicitly teach wherein the explanation is a local interpretable model-agnostic explanation (LIME)-based explanation. Sharpe, however, analogously teaches wherein the explanation is a local interpretable model-agnostic explanation (LIME)-based explanation (see para [0015]: “Each importance metric of the plurality of importance metrics may correspond to a respective feature of a feature set used by a machine learning model. The importance metrics may be obtained using an XAI technique and may be a score that indicates how influential a corresponding feature is in a classification or other output generated by a model (e.g., a machine learning model). For example, the importance metrics may be Shapley Additive exPlanations (SHAP) values, local interpretable machine learning (LIME) values, or may be generated using layer-wise relevance propagation techniques, generalized additive model techniques, or a variety of other XAI techniques.”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim and Sharpe before him or her, to modify the system of claim 3 to include attributes of wherein the explanation is a local interpretable model-agnostic explanation (LIME)-based explanation in order to use scores that indicate how influential a corresponding feature is in a classification or other output generated by a model (see Sharpe at para [0015]: “The importance metrics may be obtained using an XAI technique and may be a score that indicates how influential a corresponding feature is in a classification or other output generated by a model (e.g., a machine learning model). For example, the importance metrics may be Shapley Additive exPlanations (SHAP) values, local interpretable machine learning (LIME) values”).

Regarding claim 10: Claim 10 recites analogous limitations to claim 3 and therefore is rejected on the same grounds.
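The importance metrics Sharpe describes in para [0015] score how influential each feature is in a model's output. As a much-simplified illustration in that spirit (deliberately not the actual LIME or SHAP algorithms), one can perturb one feature at a time toward a baseline value and record how far the black-box output moves; the stand-in model and its weights below are hypothetical:

```python
def black_box(x):
    # Stand-in for a trained model's scoring function; the weights
    # are hypothetical, chosen so feature 0 dominates.
    return 3.0 * x[0] + 0.1 * x[1] + 0.5

def importance_scores(model, x, baseline):
    # Simplified, XAI-inspired attribution: replace one feature at a
    # time with its baseline value and record how much the model's
    # output changes. Larger change = more influential feature.
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        scores.append(abs(model(x) - model(perturbed)))
    return scores

x = [2.0, 2.0]
scores = importance_scores(black_box, x, baseline=[0.0, 0.0])
# scores[0] ≈ 6.0 and scores[1] ≈ 0.2, so feature 0 is flagged as
# far more influential in this prediction.
```

Real LIME instead fits a weighted local surrogate model over many random perturbations, and SHAP averages marginal contributions over feature coalitions; this sketch only conveys the "score per feature" idea the quoted paragraph relies on.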
Regarding claim 4 (currently amended): Wangenheim in view of Sharpe teaches the system of claim 2. Wangenheim does not explicitly teach wherein, to generate the enhanced version of the machine learning model, the processing device is further configured to: send the explanation to the client device, receive, from the client device, user input relating to the explanation, wherein the user input comprises a set of ground truth features, and generate the enhanced version of the machine learning model based on the user input relating to the explanation. Sharpe, however, analogously teaches wherein, to generate the enhanced version of the machine learning model, the processing device is further configured to: send the explanation to the client device (see para [0034]: “The devices in FIG. 1 (e.g., ML explanation system 102 or the user device 104) may communicate (e.g., with each other or other computing systems not shown in FIG. 1 … For example, the ML explanation system 102, any component of the processing system (e.g., the communication subsystem 112 or the ML subsystem 114), and the user device 104 may be implemented by one or more computing platforms. )”); receive, from the client device, user input relating to the explanation, wherein the user input comprises a set of ground truth features (see para [0035]: “With respect to FIG. 3 , machine learning model 342 may take inputs 344 and provide outputs 346. In one use case, outputs 346 may be fed back to machine learning model 342 as input to train machine learning model 342 (e.g., alone or in conjunction with user indications of the accuracy of outputs 346, with labels associated with the inputs, or with other reference feedback information. 
In another use case, machine learning model 342 may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs 346) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information).”); and generate the enhanced version of the machine learning model based on the user input relating to the explanation (see para [0035]: “In another use case, machine learning model 342 may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs 346) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information)”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim and Sharpe before him or her, to modify the system of claim 4 to include attributes of wherein, to generate the enhanced version of the machine learning model, the processing device is further configured to: send the explanation to the client device, receive, from the client device, user input relating to the explanation, wherein the user input comprises a set of ground truth features, and generate the enhanced version of the machine learning model based on the user input relating to the explanation in order to update parameters based on predictions (see Sharpe at para [0035]: “In another use case, machine learning model 342 may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction”). Regarding claims 11 and 17: Claims 11 and 17 recite analogous limitations to claim 4 and therefore are rejected on the same grounds. Regarding claim 7: Wangenheim in view of Sharpe teaches the system of claim 1. 
Wangenheim further teaches the system of claim 1, wherein, to generate the enhanced version of the machine learning model, the processing device is further configured to: obtain a first evaluation of the initial version of the machine learning model and a second evaluation of the enhanced version of the machine learning model (see pg. 5758 section 5: “These tools support exploration allowing students to try out different alternatives and create their custom ML models. Providing a visual interface, the tools allow the students to interact and execute a human-centric ML process in an interactive way using a train-feedback-correct cycle, enabling them to iteratively evaluate the current state of the model and take appropriate actions to improve it.”. Also see pg. 5744 section 4.2: “However, providing an advanced mode, some of the tools also enable more knowledgeable users to interact on a more detailed level when building, training, and/or evaluating the ML model”. Also see fig. 9 that shows epochs of performance evaluations); and evaluate the enhanced version of the machine learning model by comparing the first evaluation to the second evaluation (see fig. 9 that shows epochs of performance evaluations).

Regarding claim 20: Claim 20 recites analogous limitations to claim 7 and therefore is rejected on the same grounds.

Regarding claim 15 (currently amended): Wangenheim teaches to receive, from a client device via a user interface, input data comprising an initial version of a machine learning model trained to predict an activity class for activity recognition (see pg. 5752 section 5: “In general, the tools support supervised learning, with few exceptions supporting reinforcement learning (Cognimates, ML4K, and SnAIp) and/or unsupervised learning (Orange, RapidMiner). Model training can be performed on the local machine (BlockWiSARD, RapidMiner), with some tools allowing the use of a cloud server (eCraft2learn, Cognimates) or directly on a mobile device (Zhu, 2019).
Yet, most use the user’s web browser to train the model (Teachable Machine, PIC, LearningML, mBlock)”. Also see pg. 5756 section 4.3: “LearningML intends to show in advanced mode also a confusion matrix, a table that in each row presents the examples in a predicted class while each column represents the examples in an actual class. These visualizations of the results of the classification, facilitate the identification of data that are not accurately classified, and thus, support the analysis of the students to improve the model’s performance. The use of examples to support the understanding of classes appears to be a promising solution that resonates with users (Kim et al., 2015).”. Also see pg. 5757 section 4.4: “While some tools just support the export of the created ML model, several provide also support for the deployment as part of a game or mobile application, integrated or as an extension of a block-based programming environment (Fig. 10).”), cause the user interface to operate in an operating mode of a plurality of operating modes for machine learning model building (see pg. 5762 section 5:“Therefore, the goal has to be to create an ML learning environment with sufficient scaffolds for novices to start to create ML models with little or no formal instruction (low threshold) while also being able to support sophisticated programs (high ceiling). To simultaneously target different kinds of users, some of the tools (i.e., DeepScratch, Google TM, Orange, PIC, SnAIp) offer advanced modes in which they allow more advanced students to define hyperparameters for training (such as learning rate, epochs, batch size, etc.) or more detailed evaluation metrics while hiding these details from novices.”); and generate an enhanced version of the machine learning model in accordance with the operating mode (see pg. 5752 section 4.3: “Using visual tools, ML concepts are typically concealed with black boxes to reduce the cognitive load when learning (Resnick et al., 2000). 
Such abstractions of ML concepts include very high-level representations, as, in ML4K, training the model is reduced to a single action button. Yet, as this concealing of ML concepts limits people’s ability to construct a basic understanding of ML concepts (Hitron et al., 2019; Resnick et al., 2000), some tools provide advanced modes that provide a lower-level representation. For example, DeepScratch, eCraft2Learn, Milo and PIC, allow defining parameters of the neural network architecture (such as type of model, number of layers, etc.), while data flow-based tools such as Orange, even provide low-level functionalities to build a neural network from neurons and layers. Such an advanced mode is also provided concerning training parameters (such as epochs, learning rate, batches, etc.) as part of DeepScratch, eCraft2Learn, Google TM, Milo, Orange, PIC, RapidMiner, and SnAIP.”.) Wangenheim does not explicitly teach a non-transitory computer-readable storage medium. Sharpe, however, analogously teaches a non-transitory computer-readable storage medium (see para [0060]: “The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium.”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim and Sharpe before him or her, to modify the medium of claim 15 to include a non-transitory computer-readable storage medium in order to perform the claimed functionality with a computer system (see Sharpe at para [0060]: “The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium.”.).

Claims 5, 6, 12, 13, 18, and 19 are rejected under 35 U.S.C 103 as being unpatentable over Wangenheim et al.
(“Visual tools for teaching machine learning in K-12: A ten-year systematic mapping”, hereinafter Wangenheim) in view of Sharpe et al. (US20240037427A1 hereinafter referred to as Sharpe) in further view of Jin et al. (“Artificial intelligence in glioma imaging: challenges and advances” hereinafter referred to as Jin).

Regarding claim 5: Wangenheim in view of Sharpe teaches the system of claim 1. Wangenheim does not explicitly teach to generate the enhanced version of the machine learning model, the processing device is further to implement incremental learning. Jin, however, analogously teaches to generate the enhanced version of the machine learning model, the processing device is further to implement incremental learning (see pg. 7 section 2.2.1. ‘Choosing and training models’: “Transfer learning, however, can suffer from catastrophic forgetting issues, where the knowledge about the old task may not be maintained when adapting parameters to a new dataset or task. To avoid this pitfall and enable continual learning [92] …”. Also see pg. 10 section 2.3.2 ‘Model interpretability’: “In machine and deep learning literature, this challenge is referred to as the model interpretability or explainable AI (XAI) problem, i.e. to open the black box models and reveal how the model makes the predictions in terms that human users can understand [124]. Model interpretability is especially important in deploying AI techniques in clinical settings. With a black-box model, the clinical users will only receive a prediction without an explanation or justification.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim, Sharpe, and Jin before him or her, to modify the system of claim 5 to include attributes of wherein to generate the enhanced version of the machine learning model, the processing device is further to implement incremental learning in order to avoid catastrophic forgetting (see Jin at pg. 7 section 2.2.1. ‘Choosing and training models’: “Transfer learning, however, can suffer from catastrophic forgetting issues, where the knowledge about the old task may not be maintained when adapting parameters to a new dataset or task. To avoid this pitfall and enable continual learning [92] …”.).

Regarding claims 12 and 18: Claims 12 and 18 recite analogous limitations to claim 5 and therefore are rejected on the same grounds.

Regarding claim 6: Wangenheim in view of Sharpe in further view of Jin teaches the system of claim 5. Wangenheim does not explicitly teach wherein the incremental learning is regularization-based elastic weight consolidation (EWC) incremental learning. Jin, however, analogously teaches wherein the incremental learning is regularization-based elastic weight consolidation (EWC) incremental learning (see pg. 7 section 2.2.1. ‘Choosing and training models’: “Transfer learning, however, can suffer from catastrophic forgetting issues, where the knowledge about the old task may not be maintained when adapting parameters to a new dataset or task. To avoid this pitfall and enable continual learning [92], several approaches were proposed in the realm of brain segmentation. Garderen et al applied a regularization called elastic weight consolidation (EWC) during transfer learning [93].”).
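The EWC regularization Jin cites counters catastrophic forgetting by adding a quadratic penalty that makes it costly to move parameters the old task relied on, weighted by a per-parameter Fisher information estimate. A minimal sketch of just the penalty term (the parameter values and Fisher estimates below are hypothetical):

```python
def ewc_penalty(theta, theta_old, fisher, lam):
    # L_total = L_new_task + (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2
    # A large F_i marks a parameter the old task relied on, so the
    # new-task optimizer pays dearly for moving it.
    return 0.5 * lam * sum(f * (t - t0) ** 2
                           for f, t, t0 in zip(fisher, theta, theta_old))

theta_old = [1.0, -2.0]      # parameters after training on the old task
fisher = [10.0, 0.01]        # hypothetical Fisher information estimates

# Moving the "important" parameter by 0.5 is penalized far more
# heavily than moving the unimportant one by the same amount.
costly = ewc_penalty([1.5, -2.0], theta_old, fisher, lam=1.0)
cheap = ewc_penalty([1.0, -1.5], theta_old, fisher, lam=1.0)
```

This asymmetry is exactly the trade-off the Jin passage notes: EWC preserves old-domain performance but restricts adaptation capacity on the new domain.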
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Wangenheim, Sharpe, and Jin before him or her, to modify the system of claim 6 to include attributes of wherein the incremental learning is regularization-based elastic weight consolidation (EWC) incremental learning in order to improve performance (see Jin at pg. 7 section 2.2.1 ‘Choosing and training models’: “Research on segmenting low- and high-grade gliomas showed that EWC improved performance on the old domain after transfer learning on the new domain. Conversely, it also restricted the adaptation capacity to the new domain”).

Regarding claims 13 and 19: Claims 13 and 19 recite analogous limitations to claim 6 and therefore are rejected on the same grounds.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew A Bracero whose telephone number is (571) 270-0592.
The examiner can normally be reached Monday - Thursday 7:30a.m. - 5:00 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached Monday - Thursday 7:30a.m. - 5:00 p.m. ET at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW BRACERO/Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Mar 03, 2023
Application Filed
Nov 20, 2025
Non-Final Rejection — §101, §102, §103
Dec 03, 2025
Interview Requested
Jan 07, 2026
Applicant Interview (Telephonic)
Jan 07, 2026
Examiner Interview Summary
Jan 27, 2026
Response Filed
Mar 13, 2026
Final Rejection — §101, §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
