Prosecution Insights
Last updated: April 19, 2026
Application No. 17/141,780

MACHINE LEARNING METHOD FOR INCREMENTAL LEARNING AND COMPUTING DEVICE FOR PERFORMING THE MACHINE LEARNING METHOD

Status: Non-Final Office Action (§101, §103)
Filed: Jan 05, 2021
Examiner: ALSHAHARI, SADIK AHMED
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 3 (Non-Final)

Grant Probability: 35% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 5m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 35% (grants only 35% of cases; 12 granted / 34 resolved; -19.7% vs Tech Center average)
Interview Lift: +47.1% allowance lift in resolved cases with an interview
Avg Prosecution: 4y 5m (typical timeline)
Total Applications: 58 across all art units (24 currently pending)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

(Chart note: black line = Tech Center average estimate. Based on career data from 34 resolved cases.)

Office Action

Rejections: §101, §103
DETAILED ACTION

Status of Claims

Claim(s) 1-2, 4, 6-8, and 10-12 are pending and are examined herein. Claim(s) 1, 4, and 10 have been amended. Claim(s) 3, 5, 9, and 13 are cancelled. Claim(s) 1-2, 4, 6-8, and 10-12 are rejected under 35 U.S.C. §§ 101 and 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/18/2025 has been entered.

Response to Amendment

The amendment filed on March 18, 2025 has been entered. Claims 1-2, 4, 6-8, and 10-12 are pending in the application. Applicant's amendments to the claims have been fully considered and are addressed in the rejections below.

Response to Arguments

Applicant's arguments with respect to the rejection under 35 U.S.C. § 101, filed on 03/18/2025, have been fully considered but are not persuasive.

Applicant's argument (pp. 8-9 of the remarks): Applicant argues that amended independent claims 1 and 10 do not recite an abstract idea under Prong One of the subject matter eligibility analysis. The limitations of "sorting" and "constructing," as now recited, go well beyond mere "mental processes," such as "concepts performed in the human mind" or "with pencil and paper." Rather, these features involve specific technical implementations that go beyond mere mental processes.
Examiner's response: The examiner respectfully disagrees with the applicant's assertion that the claims do not recite a judicial exception under Step 2A, Prong One of the subject matter eligibility analysis. As explained in the Office Action, the limitations of "sorting" and "constructing," as now recited and under their broadest reasonable interpretation, encompass concepts that can be practically performed in the human mind with physical aid (e.g., pen and paper). For example, the step of selecting two or more features from a set of features and sorting these features in a specific sequence is a process that can be performed mentally, including observation, evaluation, and judgment. See MPEP § 2106.04(a)(2)(III). Additionally, the process of constructing a graph representation of the feature values as nodes and edges is a cognitive activity that can be carried out manually in the human mind with the aid of pen and paper. See MPEP § 2106.04(a)(2)(III). The claim does not provide sufficient detail regarding any technical implementation of these steps and merely recites a high-level execution using a generic computer device comprising a processor and a machine learning module (i.e., a software component). Beyond the use of the computer components to perform the judicial exception, the recited steps are directed to mental processes. Accordingly, the examiner maintains that the claims recite a judicial exception under Step 2A, Prong One.

Applicant's argument (pp. 8-9 of the remarks): Applicant further argues that amended independent claims 1 and 10 integrate any such alleged abstract idea into a practical application and recite significantly more when considering the steps of Prong Two. The "sorting" step is not simply a mental process but involves technical implementation using a computing device.
Sorting "features, included in the encoded training data, as nodes and connecting adjacent nodes in" a specific order requires computational resources and algorithms to efficiently handle large datasets. This step cannot be practically performed in the human mind or with physical aid such as pen and paper due to the complexity and volume of data involved. The "constructing" step involves creating a graph representation of sorted features using nodes and edges. This process requires computational resources to manage and analyze the relationships between features. … The "constructing" step is a specific technical implementation that contributes to the overall improvement of machine learning models and data analysis. Applicant further argues that the recitations of the converting of the features, as now recited, go well beyond mere "mathematical concepts," such as using a mathematical algorithm and/or mathematical model. Rather, these features involve specific technical implementations that go beyond mere math or mental processes. As now claimed, the "converting" step involves using advanced techniques such as linear discriminant analysis (LDA), principal component analysis (PCA), and deep learning-based feature extraction. These techniques are well-established in the field of machine learning and require computational resources and specialized algorithms.

Examiner's response: The examiner respectfully disagrees. The additional elements of using a computer device and its components to perform the steps of "sorting" and "constructing" merely recite the words "apply it" (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).
In other words, the claims invoke a computer device to execute computer instructions (e.g., a machine learning module) to perform the abstract idea (e.g., sorting features and constructing a feature graph representation). With respect to the claim limitation of "converting" two or more features into new features (e.g., converting a continuous value of an arbitrary feature into a discrete value or a categorical value, or converting a letter-based value into a number value), this step encompasses a mental process and/or mathematical concept. The use of well-established statistical and mathematical algorithms such as linear discriminant analysis (LDA) or principal component analysis (PCA) merely represents a conventional algorithm configured to perform feature conversion/selection. According to the applicant's own remarks, and further as evidenced by Gibson et al. (Pub. No.: US 20200152295 A1), these methods are well-known and conventional in the art of machine learning and data analysis. See Gibson [0183]-[0184]. Therefore, merely reciting the use of these methods executed on a generic computer does not preclude the mental or mathematical nature of the step. Instead, it merely amounts to programming a computer to perform generic computer functions that are well-known, routine, and conventional in the field. The examiner notes that employing generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more. Thus, the claims fail to recite sufficient additional elements that would transform the abstract idea into patent-eligible subject matter. While the applicant asserts that the "constructing" step contributes to the overall improvement of machine learning models and data analysis, the examiner notes that this step is directed to a judicial exception.
Under Step 2A, Prong Two, the "improvement" analysis evaluates whether the claim pertains to an improvement to the functioning of a computer or to another technology, without reference to what is well-understood, routine, conventional activity. Here, the alleged overall improvement appears to pertain to the judicial exception itself, and the identified additional elements are not, individually or in combination, sufficient to reflect the alleged improvement. Furthermore, the use of various methods such as linear discriminant analysis (LDA), principal component analysis (PCA), and an autoencoder to perform feature selection or dimensionality reduction defines only well-understood, routine, conventional activity previously known in the field. See MPEP § 2106.05(d). Accordingly, the claimed additional elements fail to provide meaningful limitations that can transform the exception into a patent-eligible application. In view of the above, the rejection under 35 U.S.C. § 101 is maintained.

Applicant's arguments with respect to the rejection under 35 U.S.C. § 103, filed on 03/18/2025 (see remarks pp. 10-11), have been fully considered but are moot in view of the new grounds of rejection. The arguments amount to a general allegation that the cited references fail to teach or suggest the amended features of independent claims 1 and 10, without specifically identifying distinctions or novel features that differentiate the claims over the cited art. The Examiner refers to the updated rejection under 35 U.S.C. § 103 for more details.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

When considering subject matter eligibility under 35 U.S.C.
101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong One), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong One that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong Two), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong Two that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult MPEP 2106 for more details of the analysis.

Under the Step 1 analysis, Claims 1-2, 4, and 6-8 recite a machine learning method (representing a process); Claims 10-12 recite a computing device (representing a machine). Therefore, each set of the claims falls into one of the four statutory categories (i.e., process, machine, article of manufacture, or composition of matter).

Claims 1-2, 4, 6-8, and 10-12 are rejected under 35 U.S.C.
101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and hence is not patent-eligible subject matter.

Regarding Amended Claim 1,

Step 2A Prong One: The claim recites an abstract idea enumerated in the 2019 PEG.

encoding training data labeled to a plurality of class labels; encoding a next set of the training data; (The "encoding" steps, as drafted and under their broadest reasonable interpretation, are directed to concepts that fall within the mental process and/or mathematical concept groupings of abstract ideas. Specifically, the "encoding" step involves converting a continuous value of a feature into a discrete value or a categorical value, which encompasses mathematical formulas and calculations that can be practically performed in the human mind and/or with physical aid (e.g., pen and paper). See MPEP § 2106.04(a)(2)(I).)

sorting two or more features, included in the encoded training data, in a specific order to generate a feature sequence; … sorting the new features in a specific order to generate the feature sequence. (The "sorting" step, as drafted and under its broadest reasonable interpretation, falls within the mental processes category of abstract ideas. The sorting step involves ranking or organizing a subset of features in a specific sequence. This step can be practically performed in the human mind. Mental processes are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).)
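For reference, the "encoding" and "sorting" limitations discussed above can be illustrated with a minimal sketch. This is an editorial illustration only, not the applicant's implementation: the bin thresholds, feature names, and alphabetical ordering criterion are all assumptions (the claims require only "a predefined encoding rule" and "a specific order").

```python
# Illustrative sketch only: encode a continuous feature value into a
# categorical value, then sort selected features into a fixed order to
# produce a "feature sequence". All names and rules here are assumed.

def encode_value(value, bins):
    """Map a continuous value to a categorical label using a predefined
    encoding rule (here: simple threshold bins, an assumption)."""
    for threshold, label in bins:
        if value < threshold:
            return label
    return bins[-1][1]

def generate_feature_sequence(encoded_instance, selected):
    """Sort two or more selected features in a specific order (here:
    alphabetical by feature name, an assumed criterion)."""
    return sorted(f for f in encoded_instance if f in selected)

AGE_BINS = [(30, "young"), (60, "middle"), (float("inf"), "senior")]
instance = {"age": encode_value(45, AGE_BINS), "region": "KR", "tenure": 7}
seq = generate_feature_sequence(instance, {"age", "region"})
# seq == ["age", "region"]; instance["age"] == "middle"
```

The triviality of this sketch mirrors the examiner's point: at the level of generality claimed, each step reduces to a simple rule application.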
constructing values, respectively included in the sorted features, as nodes and connecting adjacent nodes of the nodes in the specific order by using the edge to generate a plurality of feature networks classified into the plurality of class labels on the basis of the generated feature sequence; (The "constructing" step, as drafted and under its broadest reasonable interpretation, falls within the mental processes category of abstract ideas. The plain meaning of "constructing" in this context refers to organizing or arranging the feature values in a graph representation with nodes and edges. The recited "feature network" is understood to represent a feature graph. The concept of creating a graph to represent the extracted features using nodes and edges is a process that can be performed in the human mind with the aid of a pencil and paper. See MPEP § 2106.04(a)(2)(III). An individual can manually sort features in a specific order to generate a sequence of features and create a graphical representation of these features as nodes and connecting edges.)

determining feature networks, selected based on performance from among the generated plurality of feature networks, as significant feature networks; (The "determining" step, as drafted and under its broadest reasonable interpretation, falls within the mental processes category of abstract ideas. This step involves selecting feature graphs based on specific criteria (e.g., performance). This high-level selection represents an act of evaluating information and decision-making that can be performed in the human mind. Accordingly, this step encompasses mental processes (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).)

combining the determined significant feature networks to build a model; (The "combining" step, as drafted and under its broadest reasonable interpretation, falls within the mental processes category of abstract ideas.
This step involves merging selected feature graphs to generate a combined model. The examiner notes that combining the optimal feature graphs to generate a combined graph can be practically performed in the human mind with the aid of pen and paper. See MPEP § 2106.04(a)(2)(III). The claim does not define any detail on how the combining is technically achieved. This high-level recitation of combining encompasses an abstract idea that can be manually carried out by a skilled person to generate a model representing feature graphs.)

calculating a new weight by using an instance of the encoded next set of the training data to normalize the calculated new weight; (The "calculating" step, as drafted and under its broadest reasonable interpretation, falls within the mental process and/or mathematical concept groupings of abstract ideas. The examiner notes that the calculating step represents a mathematical calculation; a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered a mathematical calculation when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation. See MPEP § 2106.04(a)(2)(I).)

Step 2A Prong Two: Under this prong, we evaluate whether the claim recites additional elements that integrate the abstract idea into a practical application by considering the claim as a whole. The judicial exception is not integrated into a practical application.

Additional Elements Analysis: The recitation of "a machine learning method for incremental learning, performed by a computing device" merely sets forth a field of use of the claimed method and the use of a generic computer component to perform the method. This amounts to generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).
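The "constructing" and "combining" steps analyzed above can likewise be sketched at the claimed level of generality. This is an editorial illustration under stated assumptions (the graph representation and the union-based combination rule are not taken from the specification):

```python
# Minimal sketch, assuming a feature network is a graph whose nodes are
# (feature, value) pairs and whose edges connect nodes that are adjacent
# in the feature sequence. The combination rule (set union) is assumed.

def construct_feature_network(instance, feature_sequence):
    """Build one 'feature network': nodes from the instance's values for
    the sorted features, edges between sequence-adjacent nodes."""
    nodes = [(f, instance[f]) for f in feature_sequence]
    edges = list(zip(nodes, nodes[1:]))
    return {"nodes": nodes, "edges": edges}

def combine_networks(networks):
    """Merge selected 'significant' networks into one model by taking
    the union of their nodes and edges (an assumed combination rule)."""
    nodes, edges = set(), set()
    for net in networks:
        nodes.update(net["nodes"])
        edges.update(net["edges"])
    return {"nodes": nodes, "edges": edges}

net = construct_feature_network({"a": 1, "b": 2, "c": 3}, ["a", "b", "c"])
# net["edges"] == [(("a", 1), ("b", 2)), (("b", 2), ("c", 3))]
```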
Additionally, the use of a computer device to execute the method amounts to no more than invoking a computer as a tool to perform the abstract idea. See MPEP § 2106.05(f).

updating the weight of each of the determined significant feature networks on the basis of the normalized new weight to incrementally update the built model. (This amounts to no more than merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). Examiner's note: performing an iterative update based on the calculation, i.e., a high-level recitation of iterative computation or processing to update the model's components, amounts to the use of a computer or other machinery in its ordinary capacity to perform a generic computer function. The process of performing an iterative update (i.e., performing repetitive calculations) merely defines a generic computer component performing generic computer functions at a high level of generality.)

wherein the incrementally updating of the built model comprises adding the normalized new weight to the weight of each of the determined significant feature networks to incrementally update the built model, (The claimed "adding the normalized new weight to the weight of each of the determined significant feature networks" is part of the abstract idea of a mental process and/or a mathematical concept, as discussed under MPEP § 2106.04(a)(2)(I) and (III). The recitation of "incrementally updating the built model" amounts to mere instructions to "apply it," i.e., performing an iterative update based on the calculation. It is noted that the built model involves a process of creating or designing feature graphs and combining them, which is directed to the abstract idea of a mental process.
Thereby, the incremental update of the weight amounts to reciting a generic computer function (i.e., performing repetitive calculations) at a high level of generality. See MPEP § 2106.05(f). Accordingly, the recitation of performing a generic computer function does not meaningfully limit the judicial exception.)

wherein the generating of the feature sequence comprises converting two or more features, included in the encoded training data, into new features by using at least one of linear discriminant analysis (LDA), principal component analysis (PCA), and a deep learning based feature extracting technique, … (The "converting" step of two or more features into new features is part of the abstract idea of a mental process: the process of statistically transforming a subset of features into a new representation. The use of conventional algorithms such as linear discriminant analysis (LDA), principal component analysis (PCA), and a deep learning based feature extracting technique amounts to no more than using a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the field, as discussed in MPEP § 2106.05(d).)

Step 2B: Under this step, the claim must include additional elements that amount to significantly more than the judicial exception. These elements must not be well-understood, routine, or conventional in the relevant field. When viewed individually and as an ordered combination, the claim does not include any such additional elements that are sufficient to amount to significantly more (i.e., an inventive concept).

Additional Elements Analysis: As explained in Step 2A, Prong Two, the recitation of "a machine learning method for incremental learning, performed by a computing device" merely recites a computer component configured to execute the method. Merely reciting a computer component to perform the machine learning method cannot provide an inventive concept. See MPEP § 2106.05(f).
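The recited incremental weight update (calculate a new weight from an instance of the next training set, normalize it, and add it to each significant network's weight) can be sketched as follows. This is an editorial illustration: the specific update rule and the sum-to-one normalization are assumptions, not details from the claims or specification.

```python
# Hedged sketch of the recited incremental update. The normalization
# scheme and element-wise addition rule are assumed for illustration.

def normalize(weights):
    """Scale raw weights so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def incremental_update(model_weights, new_raw_weights):
    """Add normalized new weights to the current weights element-wise,
    then renormalize so the model's weights remain a distribution."""
    new_w = normalize(new_raw_weights)
    updated = [w + nw for w, nw in zip(model_weights, new_w)]
    return normalize(updated)

weights = [0.5, 0.3, 0.2]            # current significant-network weights
updated = incremental_update(weights, [4.0, 1.0, 1.0])
# updated sums to 1.0 and the first network's share increases
```

As the examiner characterizes it, the computation is a repetitive arithmetic update; nothing in the claim language constrains it beyond this level of generality.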
Furthermore, the limitation of incrementally updating the model weights represents a generic computer function that is recited at a high level of generality. This claim step represents performing iterative computation of the model's weights (i.e., performing repetitive calculations). The addition of a generic function has been recognized as well-understood, routine, conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP § 2106.05(g). Therefore, the addition of performing a generic computer function cannot provide an inventive concept.

As explained above, the recitation of "using at least one of linear discriminant analysis (LDA), principal component analysis (PCA), and a deep learning based feature extracting technique" to perform feature selection or dimensionality reduction merely represents invoking conventional algorithms on the computer device to perform the abstract idea of sorting features to generate a feature sequence. The concept of using standard statistical or mathematical algorithms for data transformation and/or feature reduction merely reflects the use of a generic computer component to perform the abstract idea. These different algorithms are well-known and conventional in the field, as evidenced by Gibson, which explicitly describes methods such as LDA, PCA, and autoencoders as well-known dimensionality reduction techniques in the art. See Gibson et al. (Pub. No.: US 20200152295 A1) [0184]-[0191]. Accordingly, the use of these methods cannot provide an inventive concept.
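To make the "mathematical concept" characterization of the converting step concrete, the following sketch projects 2-D data onto its first principal component using the closed-form eigenvector of a 2x2 covariance matrix. This is an editorial toy example of PCA-style feature conversion, not the applicant's method; the restriction to two features is an assumption made so the math stays closed-form.

```python
# Toy PCA projection for two features (editorial illustration only).
# For a 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]], the leading
# eigenvector's angle satisfies tan(2*theta) = 2*cxy / (cxx - cyy).
import math

def pca_first_component(xs, ys):
    """Project centered 2-D points onto their first principal component,
    converting two features into one new feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    ux, uy = math.cos(theta), math.sin(theta)
    return [(x - mx) * ux + (y - my) * uy for x, y in zip(xs, ys)]

# Points on the line y = x project entirely onto that line.
proj = pca_first_component([0, 1, 2, 3], [0, 1, 2, 3])
```

The entire computation is means, variances, and a projection, i.e., exactly the kind of mathematical calculation the rejection identifies.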
Accordingly, when viewed as a whole, the claim is primarily directed to the abstract idea of sorting features into a sequence, creating feature graphs, and selecting feature graphs to generate the model, and the recited additional elements, whether alone or in combination with the judicial exception, are not sufficient to integrate the abstract idea into a practical application or amount to significantly more. Therefore, claim 1 does not recite patent-eligible subject matter.

Regarding Original Claim 2,

Step 2A Prong One: Claim 2, which incorporates the rejection of claim 1, recites further limitations such as: wherein the encoding of the training data comprises converting a continuous value of a feature, included in the training data, into a discrete value or a categorical value on the basis of a predefined encoding rule. (Claim 2 merely defines the encoding of the training data. This is part of the abstract idea recited in claim 1 and merely defines a conventional operation (i.e., a mathematical operation) that can be practically performed in the human mind with the aid of pen and paper. This limitation is directed to the abstract idea of the mathematical concept and mental process groupings. See MPEP § 2106.04(a)(2)(I) and (III).)

Step 2A Prong Two: The claim does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 2 is ineligible.

Regarding Currently Amended Claim 4,

Step 2A Prong One: Claim 4, which incorporates the rejection of claim 1, recites further limitations such as: randomly selecting two or more features from the encoded training data; and sorting the randomly selected two or more features in the specific order to generate the feature sequence. (This is part of the identified abstract idea of a mental process.
The concept of randomly selecting a set of features and sorting features into a sequence falls within the mental processes grouping, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2), subsection III.)

Step 2A Prong Two: The claim does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 4 is ineligible.

Regarding Original Claim 6,

Step 2A Prong One: Claim 6, which incorporates the rejection of claim 1, recites further limitations such as:

calculating the weight of each of the plurality of feature networks by using an instance of the training data and normalizing the calculated weight; (This is part of the abstract idea of a mathematical concept. Examiner's note: under the broadest reasonable interpretation (BRI), the "calculating" step involves a mathematical calculation; a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered a mathematical calculation when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation. See MPEP § 2106.04(a)(2)(I).)

assessing performance of each of the feature networks by using the plurality of feature networks and the normalized weight; (This is part of the abstract idea of a mental process, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). An individual can manually assess the performance of each feature network based on the calculated/normalized weight (e.g., average accuracy or a performance metric).)
determining priorities of the plurality of feature networks on the basis of the assessed performance; (This is part of the abstract idea of a mental process, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The determining step involves a process that can be performed in the human mind; for example, an individual can manually rank feature networks based on their performance metric.)

determining, as the significant feature networks, feature networks ranked as having a priority from among the plurality of feature networks on the basis of a predetermined number. (This is part of the abstract idea of a mental process, a process which can be performed in the human mind, including an observation, evaluation, judgment, and opinion. See MPEP § 2106.04(a)(2)(III). An individual can manually select the top feature networks based on the ranking assessment.)

Step 2A Prong Two: The claim does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 6 is ineligible.

Regarding Original Claim 7,

Step 2A Prong One: Claim 7, which incorporates the rejection of claim 6, recites further limitations such as: calculating a weight of the first feature network by using an instance of the training data labeled to the first class label; calculating a weight of the second feature network differing from the weight of the first feature network by using an instance of the training data labeled to the second class label; and normalizing the weight of the first feature network and the weight of the second feature network. (This is part of the abstract idea of a mathematical concept.
Examiner's note: under the broadest reasonable interpretation (BRI), the "calculating" and "normalizing" steps involve mathematical calculations; a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered a mathematical calculation when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation. See MPEP § 2106.04(a)(2)(I).)

Step 2A Prong Two: The claim does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 7 is ineligible.

Regarding Original Claim 8,

Step 2A Prong One: Claim 8, which incorporates the rejection of claim 6, recites further limitations such as: calculating an accuracy of determining a class by using the plurality of feature networks, the normalized weight, and an instance labeled to a class label; and assessing performance of each of the feature networks on the basis of the calculated accuracy of determining a class. (This is part of the abstract idea of mathematical concepts and mental processes. See MPEP § 2106.04(a)(2), subsections (I) and (III). Calculating an accuracy of a feature network, to determine how accurately it can determine the class label for a given data instance, and using that accuracy to assess the performance of the feature networks is a process that encompasses mathematical calculation and mental processes. This includes concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).)

Step 2A Prong Two: The claim does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 8 is ineligible.
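The accuracy-based assessment and ranking recited in claims 6 and 8 can be sketched as below. This is an editorial illustration: modeling each feature network as a predictor function, and the top-k selection rule, are assumptions made for brevity.

```python
# Illustrative sketch (assumed scoring and selection, not the applicant's
# metric): assess each feature network by classification accuracy on
# labeled instances, rank by accuracy, and keep the top k as "significant".

def accuracy(network_predict, instances, labels):
    """Fraction of instances whose predicted class matches the label."""
    correct = sum(1 for x, y in zip(instances, labels) if network_predict(x) == y)
    return correct / len(instances)

def select_significant(networks, instances, labels, k):
    """Rank networks by accuracy and return the top k."""
    scored = sorted(networks, key=lambda net: accuracy(net, instances, labels), reverse=True)
    return scored[:k]

# Toy "networks" as predictor functions over a single numeric feature.
def always_a(x):
    return "A"

def threshold(x):
    return "A" if x < 5 else "B"

best = select_significant([always_a, threshold], [1, 6, 7], ["A", "B", "B"], k=1)
# best == [threshold]
```

The point the rejection makes is visible here: scoring and ranking by accuracy is counting and comparison, which a person could perform manually on a small example.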
Regarding Amended Claim 10,

The claim recites similar limitations as corresponding claim 1. Therefore, the same subject matter eligibility analysis that was utilized for claim 1, as described above, is equally applicable to claim 10. The only difference is that claim 1 is drawn to a method, and claim 10 is drawn to a computing device for executing a machine learning method. The recitation of "a computer device for executing a machine learning method for incremental learning, the computing device comprising: a processor; a storage …" merely uses a generic computer component to perform the method including the judicial exception. Therefore, the recitation of the computer device and its components amounts to no more than mere instructions to apply the judicial exception. See MPEP § 2106.05(f). Therefore, claim 10 is ineligible.

Regarding Original Claim 11,

The claim recites similar limitations as identified above for claim 1. Therefore, the same subject matter eligibility analysis (including the identified abstract idea) that was utilized for claim 1, as described above, is equally applicable to claim 11. Therefore, claim 11 is not patent-eligible subject matter, for the same reasons provided for claim 1.

Regarding Original Claim 12,

The claim recites similar limitations as corresponding claim 6. Therefore, the same subject matter eligibility analysis (including the identified abstract idea) that was utilized for claim 6, as described above, is equally applicable to claim 12. Therefore, claim 12 is not patent-eligible subject matter, for the same reasons provided for claim 6.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-2, 6-8, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Parikh et al. (NPL: “An Ensemble-Based Incremental Learning Approach to Data Fusion” (2007)) in view of Bhowan et al. (Pub. No.: US 20170286866 A1), further in view of Chatterjee et al. (Pub.
No.: US 20200097545 A1), and further in view of Xiang et al. (NPL: “Incremental and adaptive abnormal behaviour detection” (2008)). Regarding Amended Claim 1, Parikh discloses the following: A machine learning method for incremental learning, performed by a computing device, the machine learning method comprising: (Parikh, [Abstract] “This paper introduces Learn++, an ensemble of classifiers based algorithm originally developed for incremental learning, and now adapted for information/data fusion applications.” [II. LEARN++ FOR DATA FUSION] “Learn++ achieves incremental learning by generating an ensemble of classifiers, where each classifier is trained on a strategically updated distribution of the training data that focus on instances previously not seen or learned. … Learn++ specifically targets learning from additional data, i.e., Learn++ generates an ensemble for each data set that becomes available, and combines these ensembles to create an ensemble of ensembles or a meta-ensemble of classifiers.”) encoding training data labeled to a plurality of class labels; (Parikh, [II. LEARN++ FOR DATA FUSION] “For each data set FSk submitted to Learn++, the inputs to the algorithm are: 1) the training data Sk with mk instances xi along with their correct labels yi ∈ Ω = {ω1, …, ωC}, i = 1, 2, …, mk, for C number of classes;”) [Examiner’s Note: Learn++ takes training data instances (xi) with their correct labels (yi) as inputs, where yi belongs to a set of class labels {ω1, …, ωC} for C classes.] … generate a plurality of feature networks classified into the plurality of class labels … (Parikh, [II. LEARN++ FOR DATA FUSION] “In the context of data fusion, each source introduces data with a new feature set denoted as FSk, k = 1, 2, …, K, where K is the total number of data sources.
For each data set FSk submitted to Learn++, the inputs to the algorithm are: 1) the training data Sk with mk instances xi along with their correct labels yi ∈ Ω = {ω1, …, ωC}, i = 1, 2, …, mk, for C number of classes; 2) a supervised classification algorithm “BaseClassifier” to generate individual classifiers (henceforth, hypotheses); and 3) an integer Tk, indicating the number of classifiers (NOCs) to be generated for the kth data set. … Learn++ generates each classifier of the ensemble sequentially using an iterative process.”) determining feature networks, selected based on performance from among the generated plurality of feature networks, as significant feature networks; (Parikh, [II. LEARN++ FOR DATA FUSION] “All hypotheses generated thus far are then combined using “weighted majority voting” to obtain the composite hypothesis H_t^k (step 5). … Therefore, those hypotheses with smaller training error are awarded a higher voting weight and thus have more say in the final classification decision.” Fig. 3 conceptually illustrates the system-level organization of the overall algorithm as structured for data fusion applications: an ensemble of classifiers is generated as described above for each of the feature sets, which are then combined through weighted majority voting. For data fusion applications, however, the performance-based voting weights log(1/β_t^k) for each classifier are further adjusted before final voting based on expected or observed training performance on each data source, etc. To summarize, Learn++ employs two sets of weights and a reliability factor when used for data fusion:
• The distribution weights w(i), assigned to each instance xi of the training data and used to determine which instances are more likely to be drawn into the training subset of the next classifier.
• The voting weights log(1/β_t^k), assigned to each classifier based on its training performance.
The higher the training performance of h_t^k, the higher its voting weight for final classification. … etc.”) combining the determined significant feature networks to build a model; (Parikh, [II. LEARN++ FOR DATA FUSION] “The final hypothesis H_final is obtained by combining all hypotheses that have been generated thus far from all K data sources.”) [Examiner’s Note: Learn++ generates an ensemble of classifiers using a specific feature set. Each classifier (hypothesis) within the ensemble is trained on a subset of the data Sk with its specific feature set FSk. Each classifier’s performance is evaluated on its respective feature set, and classifiers that demonstrate better performance, based on classification accuracy or error rate, are combined using an ensemble technique such as weighted majority voting. Learn++ combines classifiers from different feature sets to build a final composite hypothesis (ensemble model).] [Figure: media_image1.png] … a next set of the training data; calculating a new weight by using an instance of the encoded next set of the training data to normalize the calculated new weight; (Parikh, [P. 5, Section III, Col. 2] “5) The incremental learning ability of Learn++ is preserved in the data fusion setting as well: if additional data later become available from new or existing sources (with or without new features), Learn++ can learn that information from the new data without requiring access to previously seen data.” [II. LEARN++ FOR DATA FUSION] “As mentioned above, we insist that this error be less than 1/2. If that is the case, the hypothesis h_t^k is accepted, and its error is normalized to obtain β_t^k (see equation (4)). If ε_t^k > 1/2, then the current h_t^k is discarded, and a new training subset is selected by returning to step 2.
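The error-normalization step quoted above (equation (4)) can be sketched in a few lines. This is a minimal illustration of the standard Learn++ formulation, β_t^k = ε_t^k / (1 − ε_t^k) with voting weight log(1/β_t^k); the function name and the use of a ValueError for the discard branch are illustrative choices, not taken from the paper.

```python
import math

def normalize_error(eps):
    """Normalized error and voting weight for a hypothesis with training error eps.

    Mirrors the quoted passage: a hypothesis is accepted only when its
    training error is below 1/2; otherwise it is discarded and a new
    training subset would be drawn. eps is assumed to lie in (0, 1/2).
    """
    if eps >= 0.5:
        raise ValueError("hypothesis discarded: training error not below 1/2")
    beta = eps / (1.0 - eps)              # normalized error, 0 < beta < 1
    voting_weight = math.log(1.0 / beta)  # smaller error -> larger say in the vote
    return beta, voting_weight

beta, weight = normalize_error(0.2)  # beta = 0.25, weight = log(4)
```

Because log(1/β) grows as β shrinks, hypotheses with smaller training error receive more say in the weighted majority vote, exactly as the quoted passage describes.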
All hypotheses generated thus far are then combined using “weighted majority voting” to obtain the composite hypothesis H_t^k (step 5), for which each hypothesis h_t^k is assigned a weight inversely proportional to its normalized error. … The error of the composite hypothesis H_t^k is then computed in a similar fashion to that of h_t^k (step 6) (see equation (6)). Since the individual hypotheses that make up the composite hypothesis all have individual errors less than 1/2, so too will the composite error, i.e., 0 ≤ E_t^k < 1/2. The normalized composite error B_t^k can then be obtained (see equation (7)) and is used for updating the distribution weights assigned to individual instances, … etc.” [Examiner’s Note: the examiner notes that the training subset at each iteration is interpreted as “a next set of the training data.”]) and updating the weight of each of the determined significant feature networks on the basis of the normalized new weight to incrementally update the built model. (Parikh, [II. LEARN++ FOR DATA FUSION] “The normalized composite error B_t^k can then be obtained (see equation (7)) and is used for updating the distribution weights assigned to individual instances. Equation (8) indicates that the distribution weights of those instances correctly classified by the composite hypothesis H_t^k are reduced by a factor of B_t^k, making them less likely to be selected for the training subset of the next iteration. … The weight update rule of Learn++ specifically targets learning novel information from new data, … whereas Learn++ updates its distribution based on the decision of the current “ensemble” through the composite hypothesis H_t^k. This procedure forces Learn++ to focus on instances that h
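The distribution-weight update in the passage quoted above (equations (7) and (8)) can be sketched as follows, assuming the standard Learn++ rule: weights of instances the composite hypothesis classifies correctly are multiplied by B_t^k = E_t^k / (1 − E_t^k), and the distribution is then renormalized. Function and variable names are illustrative.

```python
def update_distribution(weights, correct_mask, composite_error):
    """One Learn++ distribution-weight update, renormalized to sum to 1.

    weights         -- current per-instance distribution weights w(i)
    correct_mask    -- True where the composite hypothesis was correct
    composite_error -- E_t^k, assumed to lie in (0, 1/2)
    """
    B = composite_error / (1.0 - composite_error)  # normalized composite error
    updated = [w * B if ok else w for w, ok in zip(weights, correct_mask)]
    total = sum(updated)
    return [w / total for w in updated]

# Correctly classified instances lose weight, so the next training subset
# concentrates on instances the current ensemble still gets wrong.
dist = update_distribution([0.25, 0.25, 0.25, 0.25],
                           [True, True, False, False],
                           composite_error=0.2)  # -> [0.1, 0.1, 0.4, 0.4]
```

Because 0 < B < 1 whenever the composite error is below 1/2, the update can only shrink the weight of correctly classified instances, which is the mechanism the examiner maps to the claimed incremental model update.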

Prosecution Timeline

Jan 05, 2021
Application Filed
Jul 19, 2024
Non-Final Rejection — §101, §103
Oct 15, 2024
Response Filed
Dec 19, 2024
Final Rejection — §101, §103
Mar 18, 2025
Request for Continued Examination
Mar 25, 2025
Response after Non-Final Action
Aug 20, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596930
SENSOR COMPENSATION USING BACKPROPAGATION
2y 5m to grant Granted Apr 07, 2026
Patent 12493786
Visual Analytics System to Assess, Understand, and Improve Deep Neural Networks
2y 5m to grant Granted Dec 09, 2025
Patent 12462199
ADAPTIVE FILTER BASED LEARNING MODEL FOR TIME SERIES SENSOR SIGNAL CLASSIFICATION ON EDGE DEVICES
2y 5m to grant Granted Nov 04, 2025
Patent 12437199
Activation Compression Method for Deep Learning Acceleration
2y 5m to grant Granted Oct 07, 2025
Patent 12430552
Processing Data Batches in a Multi-Layer Network
2y 5m to grant Granted Sep 30, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
82%
With Interview (+47.1%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
