Prosecution Insights
Last updated: April 19, 2026
Application No. 17/609,252

CLASSIFIER SYSTEM AND METHOD FOR GENERATING CLASSIFICATION MODELS IN A DISTRIBUTED MANNER

Final Rejection (§101, §103, §112)
Filed: Nov 05, 2021
Examiner: JAYAKUMAR, CHAITANYA R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aicura Medical GmbH
OA Round: 2 (Final)
Grant Probability: 26% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 6m
Grant Probability with Interview: 48%

Examiner Intelligence

Career Allow Rate: 26% (13 granted / 51 resolved; -29.5% vs TC avg)
Interview Lift: +22.5% (across resolved cases with interview)
Avg Prosecution: 4y 6m (18 applications currently pending)
Total Applications: 69 (career total, across all art units)

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates. Figures based on career data from 51 resolved cases.
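The headline figures above are simple arithmetic over the examiner's resolved-case history. A minimal sketch of how such metrics could be derived (the helper names are illustrative; the page reports only aggregates, not the underlying per-case data, so the interview lift is reconstructed from the two displayed rates):

```python
# Sketch of the arithmetic behind the dashboard's examiner metrics.
# Helper names are illustrative; only aggregate rates from the page
# are used, since the per-case data is not shown.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate, as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Allowance-rate difference (percentage points) between
    interviewed and non-interviewed resolved cases."""
    return rate_with - rate_without

career = allow_rate(13, 51)            # 13 granted / 51 resolved
print(f"Career allow rate: {career:.1f}%")   # ~25.5%, displayed as 26%

# The page reports 48% with an interview; against the unrounded
# baseline this matches the stated +22.5% lift.
print(f"Interview lift: {interview_lift(48.0, career):+.1f} pts")
```

Note the displayed 26% appears to be a rounded figure; 13/51 is closer to 25.5%, which is why the lift is quoted as +22.5% rather than the 48 - 26 = 22 implied by the rounded cards.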

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the submission filed 01 December 2025 for application 17/609,252. Claims 1 and 4-11 have been amended. Claims 2 and 3 are canceled. Claims 13-19 have been newly added. Claims 1 and 4-19 are pending and have been examined. The §112(b) rejection of claims 1-12 has been withdrawn in view of the amendments made.

Response to Arguments

Regarding applicant's arguments, filed 01 December 2025, see page 10, with respect to 35 U.S.C. 101: Applicant respectfully traverses. However, solely in an effort to expedite prosecution, and without concession as to the propriety of the rejection, Claim 1 has been amended to address the Examiner's rejection. Applicant submits that Claim 1 (and Claims 2-6 and 12, and new Claims 13-16 and 19) are not directed to software per se. Rather, Applicant submits that the classifier system of Claim 1 (and new Claim 19) includes structure such as a central processing unit for the central classifier unit, where the central processing unit is not software per se. Applicant notes that the amendment requiring the central classifier unit to comprise a central processing unit finds support at page 5, line 31 of the English translation of the specification filed on November 5, 2021 ("Specification"). Accordingly, Applicant respectfully requests that this rejection under § 101 be reconsidered and withdrawn.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Although claim 1 recites that a central classifier unit comprises a central processing unit, it is not clear whether this central processing unit is a hardware processor, because the claims, specification, drawings, etc., do not define the central processing unit to be hardware.
Here, the central processing unit is interpreted as a general unit that does some processing centrally, which under the broadest reasonable interpretation still encompasses software per se. Hence, the claim is not patent eligible.

Regarding applicant's arguments, filed 01 December 2025, see pages 10 and 11, with respect to 35 U.S.C. 101: Applicant respectfully traverses on Page 10 and does not agree that independent Claims 1 and 7, and their respective dependent claims, are patent ineligible because the claims allegedly are directed to subject matter that includes an abstract idea. Regarding independent Claim 1 and Step 2A, Prong 1 of the Alice/Mayo test, Applicant continues to argue on Page 11 that the Examiner only considers a subset of the operations in independent Claim 1 to be directed to an abstract idea such as a "mental process." See Office Action, page 6. In contrast to the Examiner's assertions, Applicant is not aware of, and the rejection has not explained, how the human mind would be able to perform the "determining" operation that is performed by several local decentralized classifier units and additionally would be able to perform the "generate" and "derive" operations that are performed by a separate central classifier unit as claimed in independent Claim 1, either alone or with the aid of pencil and paper. At best, the Examiner has only provided a general statement about the quoted limitations from independent Claim 1 and their potential for covering mental processes under the broadest reasonable interpretation. Applicant submits that independent Claim 1 does not refer to a mathematical concept or a method of organizing human activity. In addition, Applicant submits that independent Claim 1 does not recite a mathematical formula or algorithm and is not directed to any of the noted methods of organizing human activity set forth in the MPEP.
Notably, Applicant submits that it would not be possible for a human mind, either alone or with the aid of pencil and paper, to perform the operations attributed to the several local decentralized classifier units and to perform the operations attributed to the central classifier unit. Given that independent Claim 1 includes several local decentralized classifier units, which may be in separate locations, increasing the number of local decentralized classifier units that fall under the umbrella of the term "several" would render it not possible to perform the claimed processes by a human mind. Similarly, increasing the number of model parameter values / system parameters utilized by the claimed classification models of the classifier system would further increase the impossibility of being able to perform the claimed processes with a human mind. Applicant thus submits that independent Claim 1 is not an abstract idea such as a "mental process" and should be considered patent-eligible subject matter under Step 2A, Prong 1 of the Alice/Mayo test.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Examiner respectfully disagrees that the claims are not abstract, because in the "determine…" limitation one can evaluate a membership value mentally by simply looking at a data set. For example, looking at an image of a pink rose, one can mentally evaluate it and say that it belongs to a "rose" class. In the "generate…" limitation, for example, one can also generate parameter values such as pink rose, white rose, red rose, etc., and in the "derive…" limitation one can mentally evaluate that the central model parameter value for all these different varieties is "rose." Hence, under the broadest reasonable interpretation these limitations can be performed in the mind and are abstract.
Further, although applicant argues that the claim does not recite a mathematical formula or algorithm and is not directed to any of the noted methods of organizing human activity, the claims are not rejected on the basis of reciting a mathematical formula or algorithm or any of the noted methods of organizing human activity, but are rejected as being abstract under the "mental process" grouping. Furthermore, although Applicant argues that Claim 1 includes several local decentralized classifier units, which may be in separate locations, and that increasing the number of local decentralized classifier units that fall under the umbrella of the term "several" would render it not possible to perform the claimed processes by a human mind, this limitation is not considered to be an abstract idea under Step 2A, Prong 1, but instead is identified as an additional element under Step 2A, Prong 2. Lastly, Applicant themselves agree on Page 14 (Paragraph 1, last line) that "Applicant submits that independent claim 1 is actually directed to a judicial exception". Hence, the claims are abstract.

Regarding applicant's arguments, filed 01 December 2025, see pages 12 and 13, with respect to 35 U.S.C. 101: even assuming without admitting that independent Claim 1 can be considered an abstract idea, a point which Applicant does not concede, Applicant additionally respectfully submits that the Examiner has erred by not considering the "determine", "generate", and "derive" operations as quoted from independent Claim 1 on page 6 of the Office Action in light of the entire claim as a whole, including whether the alleged judicial exception is integrated into a practical application and/or amounts to significantly more under Step 2B of the Alice/Mayo test. Regarding the consideration of the claim as a whole, in McRO v.
Bandai Namco, the Court of Appeals for the Federal Circuit (CAFC) stated: "[w]e have previously cautioned that courts 'must be careful to avoid oversimplifying the claims by looking at them generally and failing to account for the specific requirements of the claims'" (emphasis added). The Applicant respectfully submits that Step 2B of the Mayo test requires that the elements of each claim be considered both "'individually and as an ordered combination' to determine whether a claim includes significantly more than a judicial exception," such that "the combination must also be shown to represent well-understood, routine, conventional activity in the pertinent art." In this regard, the Examiner must examine the claims as a whole.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Examiner respectfully disagrees with the assertion that the claims were not considered as a whole, because Examiner clearly states in Step 2B in the detailed rejection below (and in the previous office action) that even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity. This shows that in Step 2A, Prong 1, first the abstract ideas were identified, and then all limitations were evaluated together as a whole along with the additional elements that were identified in Step 2A, Prong 2. In the last step (Step 2B), Berkheimer analysis is also provided for the insignificant extra-solution activities because they are a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network."
These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.

Regarding applicant's arguments, filed 01 December 2025, see pages 13-15, with respect to 35 U.S.C. 101: specifically on Page 14, Applicant notes that the Examiner recites only a subset of the operations in independent Claim 1 as allegedly being a judicial exception not directed to patent-eligible subject matter, and asserts based on the inclusion of that subset of operations within independent Claim 1 that the entire claim therefore fails the Alice/Mayo test. The Examiner supports this on pages 7-8 by stating that other limitations are merely "additional elements" that are alleged to be generically recited, and thus do not integrate the alleged abstract idea into a practical application. However, when taking the entirety of the limitations of independent Claim 1 as a whole into consideration, Applicant submits that independent Claim 1 is actually directed to a judicial exception and/or amounts to significantly more. In particular, Applicant notes that independent Claim 1 is directed to a technological improvement of a technological problem and thus incorporates any alleged abstract idea into a practical application (i.e., under Step 2A, Prong 2 of the Alice/Mayo test) and/or recites significantly more (i.e., under Step 2B of the Alice/Mayo test). As discussed throughout the Specification, the present application involves a technological problem of reducing the bandwidth needed for data exchange for mutual updates of instances of the classifier system (e.g., including local decentralized classifier units that may be separated and at different locations).
To address this technological problem, Applicant submits that the present application is directed to improvements in data transfer between local or decentralized classifier units and a central classifier unit. In particular, the classifier system of independent Claim 1 aims to reduce the technical effort that is needed for updating computer-implemented classification models, including those at different or separate locations. Applicant argues on Page 15 (paragraph 2) that this reduction in the amount of data needing to be transferred between the local decentralized classifier units and the central classifier unit (e.g., with the partial data transfer for the multiclass classification models and/or the shared training data sets) improves the operation of the classifier system as a whole by reducing the bandwidth needed to effect the data transfer between the classifier units within the system. This reduction in the amount of data needing to be transferred between the local decentralized classifier units and the central classifier unit additionally improves the operation of the classifier system as a whole by converting at least some data between a non-standardized data format (e.g., that may be utilized by a particular local decentralized classifier unit) and a standardized data format (e.g., that may be utilized by the shared central classifier unit) either before, during, and/or after the data transfer between the particular local decentralized classifier unit and the central classifier unit.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Only a subset of the limitations were identified as abstract in Step 2A, Prong 1. The rest of the limitations were identified as additional elements in Step 2A, Prong 2. Then the claim as a whole was considered to see whether the additional elements integrate the abstract ideas into a practical application.
Again, the limitations were considered together as a whole in Step 2B to check whether the additional elements amount to significantly more than the judicial exception. But in claim 1 the additional elements were generically recited and at best merely add the words "apply it" to the judicial exception, which cannot provide an inventive concept and does not amount to significantly more than the judicial exception. Furthermore, Applicant argues that the present application involves a technological problem of reducing the bandwidth needed for data exchange for mutual updates of instances of the classifier system, but the improvement here is in the abstract idea of reducing the amount of data. As disclosed in MPEP 2106.05(a), it is important to note that the judicial exception alone cannot provide the improvement. Furthermore, Examiner respectfully disagrees that the claims integrate the abstract idea into a practical application because, firstly, it is unclear what the exact technological field is here. Furthermore, the limitations of "transfer and transmitting" are identified as additional elements in Step 2A, Prong 2 that are well-understood, routine, and conventional because they merely transmit or transfer data. If the additional element (or combination of elements) is no more than well-understood, routine, conventional activities previously known to the industry, recited at a high level of generality, then this consideration does not favor eligibility. See MPEP 2106.05(d)(II) for limitations that the courts have similarly found directed to receiving or transmitting data over a network. Hence, as explained below in the detailed rejection, these limitations remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more.
Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.

Furthermore, on Page 15, Applicant themselves state that "this reduction in the amount of data needing to be transferred between the local decentralized classifier units and the central classifier unit improves the operation of the classifier system as a whole", but note that the improvement here appears to be in the reduction in the amount of data that is being transferred, and not in any improved process. Hence, the claim is not patent eligible. Lastly, although Applicant argues that this reduction in the amount of data needing to be transferred between the local decentralized classifier units and the central classifier unit (e.g., with the partial data transfer for the multiclass classification models and/or the shared training data sets) improves the operation of the classifier system as a whole by reducing the bandwidth needed to effect the data transfer between the classifier units within the system, as explained above, the improvement here is in the abstract idea of reducing the amount of data that is being transferred into the classifier units. As disclosed in MPEP 2106.05(a), it is important to note that the judicial exception alone cannot provide the improvement.

Regarding applicant's arguments, filed 01 December 2025, see pages 15-16, with respect to 35 U.S.C. 101: regarding independent Claim 7 and Step 2A, Prong 1 of the Alice/Mayo test, Applicant again notes that the Examiner considers only a subset of the operations in independent Claim 7 to be directed to an abstract idea such as a "mental process." See Office Action, page 16.
In contrast to the Examiner's assertions, Applicant is not aware of, and the rejection has not explained, how the human mind would be able to perform the "forming" operation that is performed by a central classifier unit as claimed in independent Claim 7, either alone or with the aid of pencil and paper. At best, the Examiner has only provided a general statement about the quoted limitations from independent Claim 7 and their potential for covering mental processes under the broadest reasonable interpretation. Applicant submits that independent Claim 7 does not refer to a mathematical concept or a method of organizing human activity. In addition, Applicant submits that independent Claim 7 does not recite a mathematical formula or algorithm and is not directed to any of the noted methods of organizing human activity set forth in the MPEP. Notably, Applicant submits that it would not be possible for a human mind, either alone or with the aid of pencil and paper, to perform the operations attributed to the central classifier unit. In particular, increasing the number of model parameter values / system parameters utilized as part of the processes of the method in independent Claim 7 would increase the impossibility of being able to perform the claimed processes with a human mind. As such, Applicant submits that independent Claim 7 is not an abstract idea such as a "mental process" and should be considered patent-eligible subject matter under Step 2A, Prong 1 of the Alice/Mayo test.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Examiner respectfully disagrees that the claim is not abstract, because the "forming or updating…" limitation contains an "or", and under the broadest reasonable interpretation one can mentally update a central classification model from the transmitted values; the limitation is therefore abstract.
For example, if the transmitted model parameter values are red roses, pink roses, yellow roses, etc., one can mentally update the central model to be colored roses. Furthermore, although Applicant argues that increasing the model parameter values would increase the impossibility of being able to perform the claimed processes with a human mind, the claims do not recite increasing any model parameter values at all. Lastly, although applicant argues that the claim does not recite a mathematical formula or algorithm and is not directed to any of the noted methods of organizing human activity, the claims are not rejected on the basis of reciting a mathematical formula or algorithm or any of the noted methods of organizing human activity, but are rejected as being abstract under the "mental process" grouping.

Regarding applicant's arguments, filed 01 December 2025, see pages 15-16, with respect to 35 U.S.C. 101: Applicant argues that even assuming without admitting that independent Claim 7 can be considered an abstract idea, a point which Applicant does not concede, Applicant additionally respectfully submits that the Examiner has erred by not considering the "forming" operations as quoted from independent Claim 7 on pages 16-17 of the Office Action in light of the entire claim as a whole, including whether the alleged judicial exception is integrated into a practical application and/or amounts to significantly more under Step 2B of the Alice/Mayo test. Applicant again refers to the need to consider the claim as a whole per McRO and BASCOM, and the teachings of MPEP 2106.04(d) and 2106.05 as discussed above. Applicant notes that the Examiner recites only a subset of the operations in independent Claim 7 as allegedly being a judicial exception not directed to patent-eligible subject matter, and asserts based on the inclusion of that subset of operations that the entire claim therefore fails the Alice/Mayo test.
The Examiner supports this on page 17 by stating that other limitations are merely "additional elements" that supposedly represent extra-solution activity, and thus do not integrate the alleged abstract idea into a practical application. However, when taking the entirety of the limitations of independent Claim 7 as a whole into consideration, Applicant submits that independent Claim 7 is actually directed to a judicial exception and/or amounts to significantly more.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Examiner respectfully disagrees with the assertion that the claims were not considered as a whole, because Examiner clearly states in Step 2B in the detailed rejection below (and in the previous office action) that even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity. This shows that in Step 2A, Prong 1, first the abstract ideas were identified, and then all limitations were evaluated together as a whole along with the additional elements that were identified in Step 2A, Prong 2. In the last step (Step 2B), Berkheimer analysis is also provided for the insignificant extra-solution activities because they are a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept.
The claim is not patent eligible.

Regarding applicant's arguments, filed 01 December 2025, see pages 16-18, with respect to 35 U.S.C. 101: Applicant argues that independent Claim 7 is directed to a technological improvement to a technological problem and thus incorporates any alleged abstract idea into a practical application (i.e., under Step 2A, Prong 2) and/or recites significantly more (i.e., under Step 2B). As discussed throughout the Specification, the present application involves a technological problem of reducing the bandwidth needed for data exchange for mutual updates of instances of classification models (e.g., including between classification models that may be decentralized, and classification models at a central classifier unit). Applicant submits that the present application is directed to improvements in data transfer between classification models formed in a decentralized manner and a central classifier unit. In particular, the classifier system of independent Claim 7 aims to reduce the technical effort that is needed for updating computer-implemented classification models, including those at different or separate locations, with a central classifier unit. For example, as noted in the paragraph at page 15, lines 5-7 of the Specification, the configuration of the binary classification models (including as sub-classification models of the multiclass classification model) results in the ability to transfer only a selected few of the binary classification models from the central classifier unit to the decentralized classification models, instead of having to transfer the entire multiclass classification model to the decentralized locations. In addition, a decentralized classification model may be optimized by taking into account training data sets from other decentralized classification models, including with the classification models of the central classifier unit. See, e.g., Specification, page 15, lines 1-4.
This reduction in the amount of data needing to be transferred between the decentralized classification models and the central classifier unit (e.g., with the partial data transfer for the multiclass classification models and/or the shared training data sets) improves the operation of a method of data transfer as a whole by reducing the bandwidth needed to effect the data transfer between the central classifier unit and the decentralized classification models. This reduction additionally improves the operation of the method of data transfer as a whole by converting at least some data between a non-standardized data format (e.g., that may be utilized by a particular local decentralized classifier unit) and a standardized data format (e.g., that may be utilized by the shared central classifier unit) either before, during, and/or after the data transfer between the particular decentralized classification model and the central classifier unit. Accordingly, Applicant submits that the limitations of independent Claim 7 at least are directed to a technological improvement to a technological problem and thus incorporate the alleged abstract idea into a practical application (i.e., under Step 2A, Prong 2 of the Alice/Mayo test) and/or recite significantly more (i.e., under Step 2B of the Alice/Mayo test).

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive. Examiner respectfully disagrees that the claims integrate the abstract idea into a practical application because, firstly, it is unclear what the exact technological field is here. Furthermore, the limitations of "transfer and transmitting" are identified as additional elements in Step 2A, Prong 2 that are well-understood, routine, and conventional because they merely transmit or transfer data.
See MPEP 2106.05(d)(II) for limitations that the courts have similarly found directed to receiving or transmitting data over a network. Hence, as explained below in the detailed rejection, these limitations remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible. Furthermore, Applicant argues that the present application involves a technological problem of reducing the bandwidth needed for data exchange for mutual updates of instances of the classifier system, but the improvement here is in the abstract idea of reducing the amount of data. As disclosed in MPEP 2106.05(a), it is important to note that the judicial exception alone cannot provide the improvement. Furthermore, on Page 15, Applicant themselves state that "this reduction in the amount of data needing to be transferred between the local decentralized classifier units and the central classifier unit improves the operation of the classifier system as a whole", but note that the improvement here appears to be in the reduction in the amount of data that is being transferred, and not in any improved process. Hence, the claim is not patent eligible.

Regarding applicant's arguments, filed 01 December 2025, see pages 18-20, with respect to 35 U.S.C. 103: Applicant argues that, in contrast to the prior art, for the classifier system of Claim 1 only selected parameter values that belong to a respective partial multiclass model need to be exchanged, because the multiclass model(s) are exclusively comprised of binary sub-models (e.g., as shown in the Figures of the present application).
This allows complete updating of a multiclass model by exchanging only a subset of the data representing the complete multiclass model, including exchanging the data that belongs to updated binary sub-models, between local decentralized classifier units and a central classifier unit. In this regard, Applicant submits that the prior art, either alone or in combination, fails to disclose, teach, or suggest the limitations of independent Claim 1.

Examiner's response: Applicant's arguments have been fully considered, but they are not persuasive because, firstly, claim 1 recites "several local decentralized classifier units, wherein the several local decentralized classifier units respectively implement one or several local binary classification models or one or several local multiclass classification models composed of one or several local binary sub-classification models that are configured to", and reference Chi teaches that limitation at least on Pages 2, 3, and 33 and Figure 1.1, as shown in the detailed rejection below. The argument does not explain why the cited sections in the reference do not teach the limitations as claimed. Secondly, although the Applicant argues about reducing the bandwidth, none of the claims recite reducing bandwidth. If Applicant intended to argue that by performing the steps of the claims there is a reduction in bandwidth, then, since the prior art teaches each and every element of the claims, it would also achieve this reduction in bandwidth. Hence, as shown in the detailed rejection below, the cited references teach each and every element of the claims.

Regarding applicant's arguments, filed 01 December 2025, see page 20, with respect to 35 U.S.C. 103: Applicant submits that the prior art relied upon by the Examiner, either alone or in combination, does not read on the limitations of independent Claim 7 as amended.
In particular, Applicant refers to the improvements to data transfer of the present invention, including a reduced need for bandwidth when transferring data between a central classifier unit and decentralized classification models, as discussed in the arguments against the § 101 rejection of independent Claim 7, and submits that none of the prior art documents disclose, teach, or suggest this reduction of needed bandwidth for updating central versus local decentralized classification models. Applicant submits that data representing complete multiclass models are transmitted and aggregated in typical federated learning systems, including those that may operate over decentralized networks, such as the federated learning systems disclosed in the prior art cited by the Examiner. This transmission and aggregation involves the exchange of sizeable amounts of data. However, in contrast to those typical systems, the present invention provides the improvement of reducing the bandwidth needed (and thus the amount of exchanged data) for mutual updates in a distributed classifier system. In contrast to the prior art, for the method of Claim 7 only selected parameter values that belong to a respective partial multiclass model need to be exchanged, because the multiclass model(s) are exclusively comprised of binary sub-models (e.g., as shown in the Figures of the present application). This allows complete updating of a multiclass model by exchanging only a subset of the data representing the complete multiclass model, including exchanging the data that belongs to updated binary sub-models, between decentralized classification models and a central classifier unit. In this regard, Applicant submits that the prior art, either alone or in combination, fails to disclose, teach, or suggest the limitations of independent Claim 7. For at least the foregoing reasons, independent Claims 1 and 7 are believed allowable.
Dependent Claims 4-6 and 8-12 depend on independent Claim 1 or 7, and are believed allowable at least for the same reasons as independent Claim 1 or 7. Examiner's response: Applicant's arguments have been fully considered but are not persuasive. First, again, the argument does not identify which specific claimed limitations the cited sections of the reference fail to teach. Second, although Applicant argues a reduction in bandwidth, none of the claims recites reducing bandwidth. If Applicant intended to argue that performing the steps of the claims results in a reduction in bandwidth, then, because the prior art teaches each and every element of the claims, the prior art would also achieve that reduction in bandwidth. Hence, as shown in the detailed rejection below, the cited references teach each and every element of the claims. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 10, 17, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 recites the limitation "…the one or several local decentralized classifier units…" on line 3. There is insufficient antecedent basis for this limitation in the claim. Claim 17 recites the limitation "…the central processing unit…" on line 1.
There is insufficient antecedent basis for this limitation in the claim. Claim 19 recites the limitation "…the result…" on line 13 of page 7. There is insufficient antecedent basis for this limitation in the claim. Claim 19 recites the limitation "…the several local decentralized binary classification units…" in the last two lines. There is insufficient antecedent basis for this limitation in the claim. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 4-6, 12, 15, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claimed invention is directed to software per se. Regarding claim 1: According to the first step (Step 1) of the 101 analysis, claim 1 is directed to a classifier system which, under the broadest reasonable interpretation, encompasses software per se. Therefore, it does not fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter) and is not eligible subject matter. Although claim 1 recites that a central classifier unit comprises a central processing unit, it is not clear whether this central processing unit is a hardware processor because the specification, drawings, etc., do not define the central processing unit to be hardware. Here, the central processing unit is interpreted as a general unit that performs some processing centrally, which under the broadest reasonable interpretation encompasses software per se. Hence, the claim is not patent eligible. Claims 4-6, 12, 15, and 16 depend on claim 1 and therefore inherit the same rejection.
In order to overcome this rejection, Examiner recommends amending Claim 1 to explicitly indicate that there is structure such as a processor, which would then make it fall into one of the four statutory categories (manufacture) in Step 1 of the 101 analysis. Claims 1 and 4-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. Regarding claims 1, 4-6, and 12-16: According to the first step (Step 1) of the 101 analysis, claims 1, 4-6, and 12-16 are directed to a classifier system which, under the broadest reasonable interpretation, encompasses software per se. Therefore, they do not fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). Even if the claimed classifier system were amended to fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter), the claims would still not be patent eligible because they are directed to abstract ideas without significantly more, as explained in the further steps.
Regarding claim 1: In the next step (Step 2A, prong 1) of the analysis, the limitations of: determine, for a respective data set generated by system parameter values of the measurable system parameters, on the basis of local model parameter values specific to a respective local decentralized classifier unit, a membership value that indicates membership to a state class of a state represented by a data set generated by the system parameter values of the measurable system parameters; and wherein the central classifier unit is designed to: generate central model parameter values from the local model parameter values originating from the several local decentralized classifier units, generated on the basis of the measured system parameter values of the measurable system parameters that define a central binary classification model for the state class assigned to the measurable system parameters; and on the basis of central model parameter values that define one or several central binary sub-classification models for different classes, derive the central model parameter values for a central multiclass classification model and form a central multiclass classification model. The above limitations, under the broadest reasonable interpretation, are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.
In the next step (Step 2A, prong 2) of the analysis, the limitations: wherein the classifier system comprises: several local decentralized classifier units, wherein the several local decentralized classifier units respectively implement one or several local binary classification models or one or several local multiclass classification models composed of one or several local binary sub-classification models that are configured to: and a central classifier unit comprising a central processing unit, wherein the central classifier unit is connected to the several local decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models, wherein the local model parameter values of each respective one or several local classification models of the several local decentralized classifier units are generated by training the respective local decentralized classifier unit with training data sets generated by locally determined system parameter values as input data sets, and an associated predefined state class as a target value, wherein the several local decentralized classifier units of the classifier system are different and the local model parameter values of the several local decentralized classifier units are the result of training the respective local decentralized classifier unit with training data sets, which are generated with different measured system parameter values of the measurable system parameters and a target value representing a state of the system characterized by the measurable system parameters, which target value represents the membership of the system parameter values contained in the training data set to a state class. 
The above limitations are considered to be additional elements, and they do not integrate the abstract idea into a practical application because the additional elements are recited so generically (no details whatsoever are provided beyond the claim language quoted above) that they represent no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), merely using a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the same step (Step 2A, prong 2) of the analysis, the limitations: wherein the central classifier unit is further configured to transfer the central model parameter values generated by the central classifier unit to at least one of the several local decentralized classifier units so that a respective local decentralized classifier unit represents a respective central classification model; wherein the central classifier unit is designed to transmit the central model parameter values of the one or several central binary sub-classification models of the central multiclass classification model to the several local decentralized classifier units. The above limitations are considered to be additional elements and, as recited, represent insignificant extra-solution activity of merely transmitting data, because they are a mere nominal or tangential addition to the claim and are therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional elements do not amount to significantly more than the judicial exception.
As explained with respect to Step 2A Prong Two, the classifier system comprising the several local decentralized classifier units implementing local binary classification models or local multiclass classification models composed of binary sub-classification models, and the central classifier unit comprising a central processing unit and connected to those units, together with the recited training arrangements (quoted in full above), is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).
Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. In the same step (Step 2B) of the analysis, the recitations that the central classifier unit is designed to transfer the central model parameter values generated by the central classifier unit to at least one of the several decentralized classifier units so that a respective decentralized classifier unit represents a respective central classification model, and that the central classifier unit is designed to transmit the central model parameter values of the one or several binary sub-classification models of the central multiclass classification model to the one or several decentralized binary classifier units, amount to insignificant extra-solution activity because they are a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
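For illustration only, the generation of central model parameter values from the local model parameter values of the several decentralized units, as recited in claim 1, can be sketched with a simple per-class averaging rule. This is an editorial sketch under stated assumptions: averaging is one common aggregation rule in federated settings, but the claims do not mandate averaging specifically, and the class names and weights here are hypothetical.

```python
# Hypothetical sketch: aggregate local binary sub-model parameters,
# class by class, into central model parameter values by averaging.

def aggregate(local_models: list) -> dict:
    """Average, per state class, the weights of the local binary sub-models."""
    classes = local_models[0].keys()
    central = {}
    for cls in classes:
        weight_sets = [m[cls] for m in local_models]
        # Element-wise mean across the local units' weight vectors.
        central[cls] = [sum(ws) / len(ws) for ws in zip(*weight_sets)]
    return central

# Two decentralized units with different locally trained parameters.
unit_1 = {"class_a": [0.25, 0.5], "class_b": [1.0, 0.0]}
unit_2 = {"class_a": [0.75, 0.0], "class_b": [0.0, 1.0]}
central = aggregate([unit_1, unit_2])
```

Because the central model is keyed by state class, the same per-class structure supports the later step of transmitting individual central binary sub-models back to the decentralized units.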
Regarding claim 4: In the next step (Step 2A, prong 2) of the analysis, the limitation: wherein the central classifier unit and the several local decentralized classifier units implement the binary classification models by means of respective artificial neural networks, wherein the respective artificial neural networks each comprise a topology that is defined by nodes and weighted connections between the nodes, which are formed by artificial neurons organized in several layers, and wherein the model parameter values are values of weightings of the weighted connections between the nodes. The above limitation is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided beyond the claim language quoted above) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), merely using a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception.
As explained with respect to Step 2A Prong Two, implementing the binary classification models by means of respective artificial neural networks, each comprising a topology defined by nodes and weighted connections between the nodes formed by artificial neurons organized in several layers, with the model parameter values being values of weightings of the weighted connections, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 5: In the next step (Step 2A, prong 1) of the analysis, the limitations of: wherein at least one of the several local decentralized classifier units is designed to update, in case of a new training data set for a respective state class, the respective binary classification model or binary sub-classification model for the respective state class, and wherein the central classifier unit is designed to update, in response to receipt of the updated model parameter values and/or gradients, only the central binary classification model or a central binary sub-classification model of a multiclass classification model that has been trained for a relevant state class. The above limitations, under the broadest reasonable interpretation, are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component.
If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas. In the next step (Step 2A, prong 2) of the analysis, the limitation: and to transmit updated model parameter values resulting therefrom and/or gradients obtained as part of an update to the central classifier unit. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the recitation of transmitting updated model parameter values resulting therefrom and/or gradients obtained as part of an update to the central classifier unit amounts to insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
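For illustration only, the selective update recited in claim 5 (a new training data set for one state class triggers retraining of only that class's binary sub-model, and the central unit replaces only the corresponding central sub-model) can be sketched as follows. This is an editorial sketch: the perceptron-style training step, learning rate, and class names are hypothetical and are not the claimed training procedure.

```python
# Hypothetical sketch: only the binary sub-model of the relevant state
# class is retrained locally and replaced centrally.

def local_update(sub_model: list, features: list, label: int,
                 lr: float = 0.1) -> list:
    """One hypothetical gradient-style step on a single binary sub-model."""
    prediction = sum(w * x for w, x in zip(sub_model, features))
    error = label - prediction
    return [w + lr * error * x for w, x in zip(sub_model, features)]

def central_update(central: dict, state_class: str, new_params: list) -> dict:
    """Replace only the central binary sub-model for the relevant class."""
    updated = dict(central)
    updated[state_class] = new_params
    return updated

central = {"class_a": [0.0, 0.0], "class_b": [0.5, 0.5]}

# A new training example arrives for class_b only; class_a is untouched.
new_b = local_update(central["class_b"], features=[1.0, 1.0], label=0)
central = central_update(central, "class_b", new_b)
```

The point of the sketch is structural: the update touches exactly one entry of the per-class model dictionary, mirroring the claim's "only the central binary classification model … that has been trained for a relevant state class."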
Regarding claim 6: In the next step (Step 2A, prong 2) of the analysis, the limitation of: wherein at least one of the several local decentralized classifier units is designed to obtain the target value for a training data set by way of language processing of a natural-language description of the state to which the locally determined system parameter values for the training data set belong. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of mere data gathering, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, as discussed above, the additional element of obtaining the target value for a training data set by way of language processing of a natural-language description is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” "electronic record keeping," and "storing and retrieving information in memory"). These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. The claim is not patent eligible.
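For illustration only, the target-value derivation recited in claim 6 (obtaining a training target from language processing of a natural-language state description) can be sketched with a toy keyword-matching step. This is an editorial sketch: a real system would use an actual NLP model, and the keyword table, state classes, and example description below are invented.

```python
# Hypothetical sketch: map a natural-language state description to a
# state class that serves as the target value of a training data set.

STATE_KEYWORDS = {
    "overheating": "fault_thermal",
    "vibration": "fault_mechanical",
    "nominal": "normal_operation",
}

def target_from_description(description: str) -> str:
    """Return the state class whose keyword appears in the description."""
    text = description.lower()
    for keyword, state_class in STATE_KEYWORDS.items():
        if keyword in text:
            return state_class
    return "unknown"

label = target_from_description("Sensor log: bearing vibration detected during run 7")
```

The derived label would then be paired with the locally determined system parameter values to form a training data set, which is the sense in which the step gathers data for use in the claimed process.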
Regarding claims 7-11, 17, and 18: According to the first step (Step 1) of the 101 analysis, claims 7-11, 17, and 18 are directed to a method for distributed generating and updating of classification models (process) and fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). Regarding claim 7: In the next step (Step 2A, prong 1) of the analysis, the limitation of: forming or updating a central classification model from the transmitted model parameter values by the central classifier unit. The above limitation, under the broadest reasonable interpretation, is a process step that covers mental processes including an observation, evaluation, judgment or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas. In the next step (Step 2A, prong 2) of the analysis, the limitation of: forming several binary classification models and/or of a several multiclass classification models for one target value or several target values in a decentralized manner; is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided beyond the claim language itself) that it represents no more than mere instructions to apply the judicial exception on a computer.
As discussed in MPEP 2106.05(f), merely using a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the same step (Step 2A, prong 2) of the analysis, the limitation of: transmitting model parameter values and/or gradients defining a respective binary classification model or binary sub-classification models of the multiclass classification model to a central classifier unit. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the method of forming several binary classification models and/or of a several multiclass classification models for one target value or several target values in a decentralized manner is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. In the same step (Step 2B) of the analysis, the recitation of transmitting model parameter values and/or gradients defining a respective binary classification model or binary sub-classification models of the multiclass classification model to a central classifier unit amounts to insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)).
The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 8: In the next step (Step 2A, prong 2) of the analysis, the limitation of: forming a central multiclass classification model from the binary sub-classification models by the central classifier unit. The above limitation is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided beyond the claim language itself) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), merely using a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the method of forming a central multiclass classification model from the binary sub-classification models by the central classifier unit is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).
Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. Regarding claim 9: In the next step (Step 2A, prong 2) of the analysis, the limitation of: transmitting the model parameter values defining the central classification model to one or several local decentralized classifier units. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the recitation of transmitting the model parameter values defining the central classification model to one or several local decentralized classifier units amounts to insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
Regarding claim 10: In the next step (Step 2A, prong 2) of the analysis, the limitation of: wherein the binary sub-classification models of the central multiclass classification model are transmitted to the one or several local decentralized classifier units. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the recitation that the binary sub-classification models of the central multiclass classification model are transmitted to the one or several local decentralized classifier units amounts to insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network." These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
Regarding claim 11: In the next step (Step 2A, prong 2) of the analysis, the limitation of: wherein the model parameter values and/or gradients defining the respective binary classification model or binary sub-classification model are transmitted to the central classifier unit after each update of a local decentralized binary classification model or binary sub-classification model, or at fixed intervals or as a function of intervals defined by a parameter, or after a formation of a respective local decentralized binary classification model or binary sub-classification model has been completed. The above limitation is considered to be an additional element and, as recited, represents insignificant extra-solution activity of merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the recitation of transmitting the model parameter values and/or gradients to the central classifier unit at the timings quoted above amounts to insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional.
See (MPEP 2106.05(d)(II), "Receiving or transmitting data over a network."). These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible. Regarding claim 12: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the model parameter values are threshold values of a respective neuron forming a node. is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the model parameter values are threshold values of a respective neuron forming a node) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the model parameter values are threshold values of a respective neuron forming a node, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 13: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the central processing unit is a server. 
is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the central processing unit is a server) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the central processing unit is a server, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 14: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the central processing unit is installed within a server. is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the central processing unit is installed within a server) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. 
In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the central processing unit is installed within a server, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 15: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks. is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. 
The claim is not patent eligible. Regarding claim 16: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the central processing unit is capable of implementing a respective artificial neural network. is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the central processing unit is capable of implementing a respective artificial neural network) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the central processing unit is capable of implementing a respective artificial neural network, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 17: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the central processing unit is capable of implementing a respective artificial neural network. 
is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the central processing unit is capable of implementing a respective artificial neural network) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the central processing unit is capable of implementing a respective artificial neural network, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 18: In the step (Step 2A, prong 2) of the analysis, the limitation of: wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks. is considered to be an additional element and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks) that it represents no more than mere instructions to apply the judicial exception on a computer. 
As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a method wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. The claim is not patent eligible. Regarding claim 19: According to the first step (Step 1) of the 101 analysis, claim 19 is directed to a computer implemented classifier system for classifying states of a system (machine) and falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). 
In the next step (Step 2A, prong 1) of the analysis, the limitations of: determine, for a respective data set generated by system parameter values of the measurable system parameters, on the basis of local model parameter values specific to a respective local decentralized classifier unit, a membership value that indicates membership to a state class of a state represented by a data set generated by the system parameter values of the measurable system parameters; and wherein the central classifier unit is designed to: generate central model parameter values from the local model parameter values originating from the several local decentralized classifier units, generated on the basis of the measured system parameter values of the measurable system parameters that define a central binary classification model for the state class assigned to the measurable system parameters; and on the basis of central model parameter values that define one or several central binary sub-classification models for different classes, derive the central model parameter values for a central multiclass classification model and form a central multi-classification model. Under the broadest reasonable interpretation, the above limitations are process steps that cover mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.
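As a technical aside with no bearing on the eligibility analysis, the central step recited above, generating central model parameter values from the local model parameter values of several units and composing a multiclass model from per-class binary sub-models, resembles parameter averaging as used in federated learning. A minimal sketch follows; the function name, unit labels, and numeric values are all hypothetical.

```python
def federated_average(local_params):
    """Average per-class parameter vectors from several local units (illustrative).

    local_params: list of dicts mapping state-class label -> weight list.
    Returns a dict of centrally aggregated weights, one binary sub-model
    per class, together forming a one-vs-rest multiclass model.
    """
    classes = local_params[0].keys()
    central = {}
    for c in classes:
        vectors = [p[c] for p in local_params]
        # Element-wise mean across the local units' weight vectors.
        central[c] = [sum(vals) / len(vals) for vals in zip(*vectors)]
    return central

# Two local units, each with binary sub-models for classes "ok" and "fault".
unit_a = {"ok": [0.25, 0.5], "fault": [1.0, -1.0]}
unit_b = {"ok": [0.75, 0.0], "fault": [0.0, 0.0]}
central = federated_average([unit_a, unit_b])
print(central["ok"])     # [0.5, 0.25]
print(central["fault"])  # [0.5, -0.5]
```

Whether such aggregation is fairly characterized as performable mentally is exactly the point in dispute; the sketch merely shows the arithmetic the limitation describes.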
In the next step (Step 2A, prong 2) of the analysis, the limitations: A computer implemented classifier system for classifying states of a system that is characterized by measurable system parameters, wherein the classifier system comprises: several local decentralized data processing instances at different spatial locations, each local decentralized data processing instance having a processor, memory, and program code, each local decentralized data processing instance implementing decentralized classifier units, wherein each local decentralized data processing instance respectively implements one or several local binary classification models or one or several local multiclass classification models composed of local binary sub-classification models that are defined by model parameter values stored in said memory of a respective local decentralized data processing instance, wherein said local decentralized data processing instances each are configured to: and a central data processing instance implementing a central classifier unit comprising a central processing unit that is connected to the several local decentralized data processing instances implementing decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models, wherein the local model parameter values of each respective one or several local classification models of the several local decentralized classifier units are generated by training the respective local decentralized classifier unit with training data sets generated by locally determined system parameter values as input data sets, and an associated predefined state class as a target value, wherein the several local decentralized classifier units implemented by the local decentralized data processing instances of the classifier system are different and the local model parameter values of the several local decentralized classifier units are the 
result of training the respective local decentralized classifier unit with training data sets, which are generated with different measured system parameter values of the measurable system parameters and a target value representing a state of the system characterized by the measurable system parameters, which target value represents the membership of the system parameter values contained in the training data set to a state class. The above limitations are considered to be additional elements and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than a computer implemented classifier system for classifying states of a system that is characterized by measurable system parameters, wherein the classifier system comprises: several local decentralized data processing instances at different spatial locations, each local decentralized data processing instance having a processor, memory, and program code, each local decentralized data processing instance implementing decentralized classifier units, wherein each local decentralized data processing instance respectively implements one or several local binary classification models or one or several local multiclass classification models composed of local binary sub-classification models that are defined by model parameter values stored in said memory of a respective local decentralized data processing instance, wherein said local decentralized data processing instances each are configured to: and a central data processing instance implementing a central classifier unit comprising a central processing unit that is connected to the several local decentralized data processing instances implementing decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models, wherein the local model parameter values of each 
respective one or several local classification models of the several local decentralized classifier units are generated by training the respective local decentralized classifier unit with training data sets generated by locally determined system parameter values as input data sets, and an associated predefined state class as a target value, wherein the several local decentralized classifier units implemented by the local decentralized data processing instances of the classifier system are different and the local model parameter values of the several local decentralized classifier units are the result of training the respective local decentralized classifier unit with training data sets, which are generated with different measured system parameter values of the measurable system parameters and a target value representing a state of the system characterized by the measurable system parameters, which target value represents the membership of the system parameter values contained in the training data set to a state class) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea is not indicative of integration into a practical application. In the same step (Step 2A, prong 2) of the analysis, the limitations: wherein the central classifier unit is further configured to transfer the central model parameter values generated by the central classifier unit to at least one of the several local decentralized classifier units so that a respective local decentralized classifier unit represents a respective central classification model; wherein the central classifier unit is designed to transmit the central model parameter values of the one or several central binary sub- classification models of the central multiclass classification model to the several local decentralized classifier units. 
The above limitations are considered to be additional elements and as recited represents insignificant extra-solution activity that is merely transmitting data, because it is a mere nominal or tangential addition to the claim and is therefore not indicative of integration into a practical application. See MPEP 2106.05(g). In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exceptions. As explained with respect to Step 2A Prong Two, a computer implemented classifier system for classifying states of a system that is characterized by measurable system parameters, wherein the classifier system comprises: several local decentralized data processing instances at different spatial locations, each local decentralized data processing instance having a processor, memory, and program code, each local decentralized data processing instance implementing decentralized classifier units, wherein each local decentralized data processing instance respectively implements one or several local binary classification models or one or several local multiclass classification models composed of local binary sub-classification models that are defined by model parameter values stored in said memory of a respective local decentralized data processing instance, wherein said local decentralized data processing instances each are configured to: and a central data processing instance implementing a central classifier unit comprising a central processing unit that is connected to the several local decentralized data processing instances implementing decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models, wherein the local model parameter values of each respective one or several local classification models of the several local decentralized classifier units are generated by training the respective local decentralized 
classifier unit with training data sets generated by locally determined system parameter values as input data sets, and an associated predefined state class as a target value, wherein the several local decentralized classifier units implemented by the local decentralized data processing instances of the classifier system are different and the local model parameter values of the several local decentralized classifier units are the result of training the respective local decentralized classifier unit with training data sets, which are generated with different measured system parameter values of the measurable system parameters and a target value representing a state of the system characterized by the measurable system parameters, which target value represents the membership of the system parameter values contained in the training data set to a state class, is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and does not amount to significantly more than the judicial exception. In the same step (Step 2B) of the analysis, the recitation of, wherein the central classifier unit is designed to transfer the central model parameter values generated by the central classifier unit to at least one of the several decentralized classifier units so that a respective decentralized classifier unit represents a respective central classification model, wherein the central classifier unit is designed to transmit the central model parameter values of the one or several binary sub-classification models of the central multiclass classification model to the one or several decentralized binary classifier units. The above limitations amount to insignificant extra solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data transmission (see MPEP 2106.05(d)). 
The courts have similarly found limitations directed to receiving or transmitting data over a network, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("Receiving or transmitting data over a network"). These limitations therefore remain insignificant extra-solution activity even upon reconsideration, and do not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 4-6, 12-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chi (Online Federated Learning over decentralized networks, 2018) in view of Melis et al (Inference Attacks Against Collaborative Learning, 2018) and further in view of Jakub et al (FEDERATED LEARNING: STRATEGIES FOR IMPROVING COMMUNICATION EFFICIENCY, 2017). Regarding claim 1: Chi teaches: A classifier system for classifying states of a system that is characterized by measurable system parameters, wherein the classifier system comprises ([Page 9, Section 2.1.1] Algorithm 1 General Framework for Online Learning. 
Note: Shows a classifier system for classifying states of a system that is characterized by measurable system parameters): several local decentralized classifier units, wherein the several local decentralized classifier units respectively implement one or several local binary classification models or one or several local multiclass classification models composed of one or several local binary sub-classification models that are configured to ([Abstract, Paragraph 1] OFL allows each node to perform local operation (e.g., train local model). [Page 2, Figure 1.1] Note: Right side of Figure 1.1 shows decentralized distributed optimization. [Page 3, Paragraph 1] Such an optimization method is often called ‘federated learning’ [6] since each node is now liberated from the control of the master node and all nodes collaborate to reach consensus by sharing information with their direct neighbors without the central coordination. [Page 33, Section 3.4.4, Paragraph 1] We consider a distributed binary classification problem based on the logistic regression function on a9a dataset 1. Each node on the graph receives a subset of the dataset): for a respective data set generated by system parameter values of the measurable system parameters, on the basis of local model parameter values specific to a respective local decentralized classifier unit ([Page 42, Paragraphs 1 & 2] In this chapter, we extend the problem to the convex-concave cases, where there may exist local or global constraints and each local agents may consider solving the convex-concave cases in the adaptation phase. Local constraints are quite common in optimization problems, such as budget limitation in portfolio selection, prior information adaptation in data fitting and projection into feasible areas in machine learning [32, 51]. 
On decentralized networks where communication is limited to each node’s direct neighbors, one practical solution for such a constrained problem would be constructing a local Lagrangian formulation and solving the convex-concave problem by iterative updating, and then sharing local weights via communication. [Page 82, Paragraph 2] We first construct a synthetic dataset via random walks in parameter space with Gaussian increments. The initial value is w0 ∈ R^100, and its first 60 components are set to be 1 while the rest are set to be -1.5. We then construct the potential optimal weight vector for 9 similar tasks as w_{i+1} = w_i + ε, where ε ~ N(0, 0.1I) and i = 0, ..., 8. A training sample x = [x1, ..., x100] is generated by randomly selecting xi ∈ [-3, 3], and for each task we obtain 2000 samples. [Page 82, Paragraph 3] Algorithm efficiency is tested also on two commonly used real-world datasets: MHC-I1 [86] contains a subset of human MHC-I alleles (A0201, A0202, A0203, A0206, A0301, A3101, A3301, A6801, A6802) and features are extracted with bigram amino acid encoding [74] to project each protein sequence into 400-dimension feature space. Sentiment2 [87] contains user reviews of 4 types of products (books, DVD, electronics and kitchen) and each review lies in the 230610-dimension feature space based on the corresponding word sequence. [Page 83, Table 5.1] Note: Synthetic, MHC-1, and Sentiment correspond to the respective data sets, and the left-most column under "Decentralized" corresponds to the respective decentralized classifier unit. The values in the other columns correspond to model parameter values. The various parameters used to generate the data sets correspond to system parameter values of the measurable system parameters.
For example, in the synthetic data set a training sample is generated from the system parameter values of 2000 samples of the measurable system parameters [-3, 3]), wherein the local model parameter values of each respective one or several local classification models of the several local decentralized classifier units are generated by training the respective local decentralized classifier unit with training data sets generated by locally determined system parameter values as input data sets, and an associated predefined state class as a target value ([Page 42, Paragraphs 1 & 2] In this chapter, we extend the problem to the convex-concave cases, where there may exist local or global constraints and each local agents may consider solving the convex-concave cases in the adaptation phase. Local constraints are quite common in optimization problems, such as budget limitation in portfolio selection, prior information adaptation in data fitting and projection into feasible areas in machine learning [32, 51]. On decentralized networks where communication is limited to each node’s direct neighbors, one practical solution for such a constrained problem would be constructing a local Lagrangian formulation and solving the convex-concave problem by iterative updating, and then sharing local weights via communication. [Page 82, Paragraph 2] We first construct a synthetic dataset via random walks in parameter space with Gaussian increments. The initial value is w0 ∈ R^100, and its first 60 components are set to be 1 while the rest are set to be -1.5. We then construct the potential optimal weight vector for 9 similar tasks as w_{i+1} = w_i + ε, where ε ~ N(0, 0.1I) and i = 0, ..., 8. A training sample x = [x1, ..., x100] is generated by randomly selecting xi ∈ [-3, 3], and for each task we obtain 2000 samples.
[Page 82, Paragraph 3] Algorithm efficiency is tested also on two commonly used real-world datasets: MHC-I1 [86] contains a subset of human MHC-I alleles (A0201, A0202, A0203, A0206, A0301, A3101, A3301, A6801, A6802) and features are extracted with bigram amino acid encoding [74] to project each protein sequence into 400-dimension feature space. Sentiment2 [87] contains user reviews of 4 types of products (books, DVD, electronics and kitchen) and each review lies in the 230610-dimension feature space based on the corresponding word sequence. [Page 83, Table 5.1] Note: Synthetic, MHC-1, and Sentiment correspond to the respective data sets, and the left-most column under "Decentralized" corresponds to the respective decentralized classifier unit. The values in the other columns correspond to model parameter values. The Full, Grid, or Ring correspond to an associated predefined state class, and its respective value corresponds to the target value. The various parameters used to generate the data sets correspond to system parameter values of the measurable system parameters.
For example, in the synthetic data set a training sample is generated from the system parameter values of 2000 samples of the measurable system parameters [-3, 3], wherein the several local decentralized classifier units of the classifier system are different and the local model parameter values of the several local decentralized classifier units are the result of training the respective local decentralized classifier unit with training data sets, which are generated with different measured system parameter values of the measurable system parameters and a target value representing a state of the system characterized by the measurable system parameters, which target value represents the membership of the system parameter values contained in the training data set to a state class ([Page 42, Paragraphs 1 & 2] In this chapter, we extend the problem to the convex-concave cases, where there may exist local or global constraints and each local agent may consider solving the convex-concave cases in the adaptation phase. Local constraints are quite common in optimization problems, such as budget limitation in portfolio selection, prior information adaptation in data fitting and projection into feasible areas in machine learning [32, 51]. On decentralized networks where communication is limited to each node’s direct neighbors, one practical solution for such a constrained problem would be constructing a local Lagrangian formulation and solving the convex-concave problem by iterative updating, and then sharing local weights via communication. [Page 82, Paragraph 2] We first construct a synthetic dataset via random walks in parameter space with Gaussian increments. The initial value is w0 ∈ R^100, and its first 60 components are set to 1 while the rest are set to -1.5. We then construct the potential optimal weight vector for 9 similar tasks as w_{i+1} = w_i + ε, where ε ~ N(0, 0.1I) and i = 0, …, 8. 
A training sample x = [x1, …, x100] is generated by randomly selecting xi ∈ [-3, 3], and for each task we obtain 2000 samples. [Page 82, Paragraph 3] Algorithm efficiency is tested also on two commonly used real-world datasets: MHC-I [86] contains a subset of human MHC-I alleles (A0201, A0202, A0203, A0206, A0301, A3101, A3301, A6801, A6802) and features are extracted with bigram amino acid encoding [74] to project each protein sequence into 400-dimension feature space. Sentiment [87] contains user reviews of 4 types of products (books, DVD, electronics and kitchen) and each review lies in the 230610-dimension feature space based on the corresponding word sequence. [Page 83, Table 5.1] Note: Synthetic, MHC-I, and Sentiment correspond to the respective data sets, and the leftmost column under "Decentralized" corresponds to the different decentralized classifier units. The values in the other columns correspond to model parameter values. Full, Grid, or Ring corresponds to an associated predefined state class, and its respective value corresponds to the target value. The various parameters used to generate the data sets correspond to system parameter values of the measurable system parameters. For example, in the synthetic data set a training sample is generated from the system parameter values of 2000 samples of the measurable system parameters [-3, 3], wherein the central classifier unit is further configured to transfer the central model parameter values generated by the central classifier unit to at least one of the several local decentralized classifier units so that a respective local decentralized classifier unit represents a respective central classification model ([Page 2, Figure 1.1] Note: The left side of Figure 1.1 shows that the central classifier unit is designed to transfer the central model parameter values generated by the central classifier unit to at least one of the workers corresponding to the several local decentralized classifier units). 
However, Chi does not explicitly disclose: determine a membership value that indicates membership to a state class of a state represented by a data set generated by the system parameter values of the measurable system parameters; and a central classifier unit comprising a central processing unit, wherein the central classifier unit is connected to the several local decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models, and wherein the central classifier unit is designed to: generate central model parameter values from the local model parameter values originating from the several local decentralized classifier units, generated on the basis of the measured system parameter values of the measurable system parameters that define a central binary classification model for the state class assigned to the measurable system parameters; and on the basis of central model parameter values that define one or several central binary sub-classification models for different classes, derive the central model parameter values for a central multiclass classification model and form a central multiclass classification model, wherein the central classifier unit is designed to transmit the central model parameter values of the one or several central binary sub-classification models of the central multiclass classification model to the several local decentralized classifier units. Melis teaches, in an analogous system: determine a membership value that indicates membership to a state class of a state represented by a data set generated by the system parameter values of the measurable system parameters ([Page 6, Table 1] Note: Table 1 indicates membership to a state class and data set. 
[Page 6, Column 2, Last Paragraph, to Page 7, Column 1, Paragraph 1] For membership inference, we report only precision because our decision rule from Section 4.3 is binary and does not produce a probability score. Note: Also, see Table 2 on Page 7 indicating membership values). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the classifier system of Chi to incorporate the teachings of Melis to determine a membership value that indicates membership to a state class of a state represented by a data set generated by the system parameter values of the measurable system parameters. One would have been motivated to do this modification because doing so would give the benefit of reporting only precision as taught by Melis [Page 7, Column 1, Paragraph 1]. Jakub teaches, in an analogous system: and a central classifier unit comprising a central processing unit, wherein the central classifier unit is connected to the several local decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models ([Abstract] Federated Learning is a machine learning setting where the goal is to train a high quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones. [Page 4, Section "Improving the quantization by structured random rotations", Paragraph 3] In the decoding phase, the server needs to perform the inverse rotation before aggregating all the updates. 
We use a type of structured rotation matrix which is the product of a Walsh-Hadamard matrix and a binary diagonal matrix. Note: Clients correspond to the decentralized classifier units. Mobile phones have central processing units), and wherein the central classifier unit is designed to: generate central model parameter values from the local model parameter values originating from the several local decentralized classifier units, generated on the basis of the measured system parameter values of the measurable system parameters that define a central binary classification model for the state class assigned to the measurable system parameters ([Page 2, Paragraph 2] For simplicity, we consider synchronized algorithms for Federated Learning where a typical round consists of the following steps: 1. A subset of existing clients is selected, each of which downloads the current model. 2. Each client in the subset computes an updated model based on their local data. 3. The model updates are sent from the selected clients to the server. 4. The server aggregates these models (typically by averaging) to construct an improved global model. [Page 4, Section "Improving the quantization by structured random rotations", Paragraph 3] In the decoding phase, the server needs to perform the inverse rotation before aggregating all the updates. We use a type of structured rotation matrix which is the product of a Walsh-Hadamard matrix and a binary diagonal matrix. Note: Clients correspond to the decentralized classifier units. 
Averaging the models corresponds to generating central model parameter values from the decentralized model parameter values); and on the basis of central model parameter values that define one or several central binary sub-classification models for different classes, derive the central model parameter values for a central multiclass classification model and form a central multiclass classification model ([Page 2, Paragraph 2] The server aggregates these models (typically by averaging) to construct an improved global model. [Page 4, Section "Improving the quantization by structured random rotations", Paragraph 3] In the decoding phase, the server needs to perform the inverse rotation before aggregating all the updates. We use a type of structured rotation matrix which is the product of a Walsh-Hadamard matrix and a binary diagonal matrix. Note: The improved global model corresponds to a central multiclass classification model); wherein the central classifier unit is designed to transmit the central model parameter values of the one or several central binary sub-classification models of the central multiclass classification model to the several local decentralized classifier units ([Page 4, Paragraph 1] Subsampling. Instead of sending H_t^i, each client only communicates a matrix Ĥ_t^i which is formed from a random subset of the (scaled) values of H_t^i. The server then averages the subsampled updates, producing the global update Ĥ_t. [Page 4, Paragraph 7] Improving the quantization by structured random rotations. The above 1-bit and multi-bit quantization approach work best when the scales are approximately equal across different dimensions. Note: The global update corresponds to transmitting the central model parameter values, and the clients correspond to the decentralized classifier units. 1-bit and multi-bit correspond to binary and multiclass classification models). 
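The four-step synchronized round quoted from Jakub above (select a subset of clients, each computes a local update, updates are sent to the server, the server aggregates by averaging) can be sketched as follows. This is a minimal illustration, not the reference's implementation; the least-squares local step and all names are illustrative assumptions.

```python
import numpy as np

def federated_round(global_model, client_data, lr=0.1, subset_size=3, seed=0):
    """One synchronized round: a subset of clients downloads the current model,
    each computes a local update H_t^i, and the server averages the updates."""
    rng = np.random.default_rng(seed)
    # Step 1: select a subset of existing clients
    selected = rng.choice(len(client_data), size=subset_size, replace=False)
    updates = []
    for i in selected:
        X, y = client_data[i]
        local = global_model.copy()                # client downloads current model
        # Step 2: one local gradient step on the client's data (illustrative)
        grad = X.T @ (X @ local - y) / len(y)
        local -= lr * grad
        # Step 3: client sends its update H_t^i = W_t^i - W_t to the server
        updates.append(local - global_model)
    # Step 4: server aggregates (by averaging) into an improved global model
    return global_model + np.mean(updates, axis=0)
```

A round then looks like `w = federated_round(w, clients)`, repeated until the global model converges.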
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub to use a central classifier unit comprising a central processing unit, wherein the central classifier unit is connected to the several local decentralized classifier units for transmission of the respective local model parameter values defining the respective one or several local binary classification models; generate central model parameter values from the local model parameter values originating from the several local decentralized classifier units, generated on the basis of the measured system parameter values of the measurable system parameters that define a central binary classification model for the state class assigned to the measurable system parameters; and on the basis of central model parameter values that define one or several central binary sub-classification models for different classes, derive the central model parameter values for a central multiclass classification model and form a central multiclass classification model, wherein the central classifier unit is designed to transmit the central model parameter values of the one or several central binary sub-classification models of the central multiclass classification model to the several local decentralized classifier units. One would have been motivated to do this modification because doing so would give the benefit of constructing an improved global model as taught by Jakub [Page 2, Paragraph 2]. Regarding claim 4: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). 
However, the system of Chi and Melis is not relied upon to teach: wherein the central classifier unit and the several local decentralized classifier units implement the binary classification models by means of respective artificial neural networks, wherein the respective artificial neural networks each comprise a topology that is defined by nodes and weighted connections between the nodes, which are formed by artificial neurons organized in several layers, and wherein the model parameter values are values of weightings of the weighted connections between the nodes. Jakub teaches, in an analogous system: wherein the central classifier unit and the several local decentralized classifier units implement the binary classification models by means of respective artificial neural networks, wherein the respective artificial neural networks each comprise a topology that is defined by nodes and weighted connections between the nodes, which are formed by artificial neurons organized in several layers, and wherein the model parameter values are values of weightings of the weighted connections between the nodes ([Abstract] We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server. [Page 2, Last but one Paragraph] In Section 4, we describe Federated Learning for neural networks, where we use a separate 2D matrix W to represent the parameters of each layer. [Page 3, Paragraph 1] In the Experiments section, we evaluate the effect of these methods in training deep neural networks. [Page 4, Paragraph 2] Probabilistic quantization. Another way of compressing the updates is by quantizing the weights. Note: Neural networks are composed of neurons). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the central classifier unit and the several local decentralized classifier units implement the binary classification models by means of respective artificial neural networks, wherein the respective artificial neural networks each comprise a topology that is defined by nodes and weighted connections between the nodes, which are formed by artificial neurons organized in several layers, and wherein the model parameter values are values of weightings of the weighted connections between the nodes. One would have been motivated to do this modification because doing so would give the benefit of using Federated Learning for neural networks as taught by Jakub [Page 2, Last but one Paragraph]. Regarding claim 5: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). However, the system of Chi and Melis is not relied upon to teach: wherein at least one of the several local decentralized classifier units is designed to update, in case of a new training data set for a respective state class, the respective binary classification model or binary sub-classification model for the respective state class and to transmit updated model parameter values resulting therefrom and/or gradients obtained as part of the update to the central classifier unit, and wherein the central classifier unit is designed to update, in response to receipt of the updated model parameter values and/or gradients, only the central binary classification model or the central binary sub-classification model of a multiclass classification model that has been trained for the relevant state class. 
Jakub teaches, in an analogous system: wherein at least one of the several local decentralized classifier units is designed to update, in case of a new training data set for a respective state class, the respective binary classification model or binary sub-classification model for the respective state class and to transmit updated model parameter values resulting therefrom and/or gradients obtained as part of the update to the central classifier unit, and wherein the central classifier unit is designed to update, in response to receipt of the updated model parameter values and/or gradients, only the central binary classification model or the central binary sub-classification model of a multiclass classification model that has been trained for the relevant state class ([Abstract] We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server. [Page 2, Paragraph 2] 2. Each client in the subset computes an updated model based on their local data. 3. The model updates are sent from the selected clients to the server. [Page 2, Paragraph 6] These clients independently update the model based on their local data. Let the updated local models be W_t^1, W_t^2, …, W_t^{n_t}, so the update of client i can be written as H_t^i := W_t^i − W_t, for i ∈ S_t. These updates could be a single gradient computed on the client, but typically will be the result of a more complex calculation, for example, multiple steps of stochastic gradient descent (SGD) taken on the client’s local dataset. In any case, each selected client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates. [Page 4, Paragraph 7] Improving the quantization by structured random rotations. 
The above 1-bit and multi-bit quantization approach work best when the scales are approximately equal across different dimensions. Note: 1-bit and multi-bit correspond to binary and multiclass classification models. The client corresponds to the decentralized classifier unit and the server corresponds to the central classifier unit). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein at least one of the several local decentralized classifier units is designed to update, in case of a new training data set for a respective state class, the respective binary classification model or binary sub-classification model for the respective state class and to transmit updated model parameter values resulting therefrom and/or gradients obtained as part of the update to the central classifier unit, and wherein the central classifier unit is designed to update, in response to receipt of the updated model parameter values and/or gradients, only the central binary classification model or the central binary sub-classification model of a multiclass classification model that has been trained for the relevant state class. One would have been motivated to do this modification because doing so would give the benefit of computing a global update as taught by Jakub [Page 2, Paragraph 3]. Regarding claim 6: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). 
Chi further teaches: wherein at least one of the several local decentralized classifier units is designed to obtain the target value for a training data set by way of language processing of a natural-language description of the state to which the locally determined system parameter values for the training data set belong ([Page 82, Paragraph 3] Sentiment [87] contains user reviews of 4 types of products (books) and each review lies in the 230610-dimension feature space based on the corresponding word sequence. [Page 83, Table 5.1] Note: Sentiment corresponds to the respective data set and the leftmost column under "Decentralized" corresponds to the several decentralized classifier units. For example, in the Sentiment data set, the book reviews correspond to a training data set obtained by way of language processing of a natural-language description of the state to which the locally determined system parameter values for the training data set belong. Full, Grid, or Ring corresponds to an associated predefined state class and its respective value corresponds to the target value). Regarding claim 12: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 4 (as shown above). However, the system of Chi and Melis is not relied upon to teach: wherein the model parameter values are threshold values of a respective neuron forming a node. Jakub teaches, in an analogous system: wherein the model parameter values are threshold values of a respective neuron forming a node ([Page 1, Last Paragraph] A principal motivating example for Federated Learning arises when the training data comes from users’ interaction with mobile applications. Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. 
The training data is kept locally on users’ mobile devices, and the devices are used as nodes performing computation on their local data in order to update a global model. [Page 2, Last but one Paragraph] In Section 4, we describe Federated Learning for neural networks, where we use a separate 2D matrix W to represent the parameters of each layer. We suppose that W gets right-multiplied, i.e., d1 and d2 represent the output and input dimensions respectively. Note that the parameters of a fully connected layer are naturally represented as 2D matrices. However, the kernel of a convolutional layer is a 4D tensor of the shape #input × width × height × #output. In such a case, W is reshaped from the kernel to the shape (#input × width × height) × #output. Note: Neural networks are made of neurons). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the model parameter values are threshold values of a respective neuron forming a node. One would have been motivated to do this modification because doing so would give the benefit of decoupling the ability to do machine learning from the need to store the data in the cloud as taught by Jakub [Page 1, Last Paragraph]. Regarding claim 13: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). However, the system of Chi and Melis is not relied upon to teach: wherein the central processing unit is a server. Jakub teaches, in an analogous system: wherein the central processing unit is a server ([Abstract] We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server). 
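The reshaping Jakub describes above, from a 4D convolution kernel of shape #input × width × height × #output to a 2D parameter matrix W of shape (#input · width · height) × #output, amounts to a single reshape. A sketch with illustrative dimensions (the specific sizes are assumptions for the example):

```python
import numpy as np

n_input, width, height, n_output = 3, 5, 5, 16

# 4D convolution kernel of shape #input x width x height x #output
kernel = np.zeros((n_input, width, height, n_output))

# Flatten the first three axes into one: W has shape (#input*width*height, #output)
W = kernel.reshape(n_input * width * height, n_output)
```

The fully connected case needs no reshape, since its parameters are already a 2D matrix.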
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the central processing unit is a server. One would have been motivated to do this modification because doing so would give the benefit of each client independently computing an update and communicating to the central server as taught by Jakub [Abstract]. Regarding claim 14: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). However, the system of Chi and Melis is not relied upon to teach: wherein the central processing unit is installed within a server. Jakub teaches, in an analogous system: wherein the central processing unit is installed within a server ([Abstract] We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server. [Page 2, Paragraph 3] 3. The model updates are sent from the selected clients to the server. 4. The server aggregates these models (typically by averaging) to construct an improved global model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the central processing unit is installed within a server. One would have been motivated to do this modification because doing so would give the benefit of each client independently computing an update and communicating to the central server as taught by Jakub [Abstract]. Regarding claim 15: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). 
However, the system of Chi and Melis is not relied upon to teach: wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks. Jakub teaches, in an analogous system: wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks ([Page 3, Paragraph 1] In the Experiments section, we evaluate the effect of these methods in training deep neural networks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the several local decentralized classifier units are capable of implementing respective artificial neural networks. One would have been motivated to do this modification because doing so would give the benefit of each client independently computing an update and communicating to the central server as taught by Jakub [Abstract]. Regarding claim 16: The system of Chi, Melis, and Jakub teaches: The classifier system according to claim 1 (as shown above). However, the system of Chi and Melis is not relied upon to teach: wherein the central processing unit is capable of implementing a respective artificial neural network. Jakub teaches, in an analogous system: wherein the central processing unit is capable of implementing a respective artificial neural network ([Page 3, Paragraph 1] In the Experiments section, we evaluate the effect of these methods in training deep neural networks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Chi and Melis to incorporate the teachings of Jakub wherein the central processing unit is capable of implementing a respective artificial neural network. 
One would have been motivated to do this modification because doing so would give the benefit of each client independently computing an update and communicating to the central server as taught by Jakub [Abstract]. Regarding claim 19: Chi teaches: A computer implemented classifier system for classifying states of a system that is characterized by measurable system parameters, wherein the classifier system comprises ([Page 9, Section 2.1.1] Algorithm 1 General Framework for Online Learning. Note: Shows a classifier system for classifying states of a system that is characterized by measurable system parameters): several local decentralized data processing instances at different spatial locations, each local decentralized data processing instance having a processor, memory, and program code, each local decentralized data processing instance implementing decentralized classifier units, wherein each local decentralized data processing instance respectively implements one or several local binary classification models or one or several local multiclass classification models composed of local binary sub-classification models that are defined by model parameter values stored in said memory of a respective local decentralized data processing instance, wherein said local decentralized data processing instances each are configured to ([Abstract, Paragraph 1] OFL allows each node to perform local operation (e.g., train local model). The typical clients in this setting are mobile phones. [Page 2, Figure 1.1] Note: Right side of Figure 1.1 shows decentralized distributed optimization. [Page 3, Paragraph 1] Such an optimization method is often called ‘federated learning’ [6] since each node is now liberated from the control of the master node and all nodes collaborate to reach consensus by sharing information with their direct neighbors without the central coordination. 
[Page 33, Section 3.4.4, Paragraph 1] We consider a distributed binary classification problem based on the logistic regression function on the a9a dataset. Each node on the graph receives a subset of the dataset. Note: Mobile phones have a processor, memory, and program code). The rest of the limitations of claim 19 are substantially similar to those of claim 1 and are therefore rejected on similar grounds as claim 1, as explained above. Claims 7-11, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chi (Online Federated Learning over decentralized networks, 2018) in view of Jakub et al (FEDERATED LEARNING: STRATEGIES FOR IMPROVING COMMUNICATION EFFICIENCY, 2017). Regarding claim 7: Chi teaches: A method for distributed generating and updating of classification models, the method comprising: forming several binary classification models and/or several multiclass classification models for one target value or several target values in a decentralized manner ([Page 2, Figure 1.1] Note: The right side of Figure 1.1 shows decentralized distributed optimization. [Page 33, Section 3.4.4, Paragraph 1] We consider a distributed binary classification problem based on the logistic regression function on the a9a dataset. Each node on the graph receives a subset of the dataset). However, Chi is not relied upon to teach: transmitting model parameter values and/or gradients defining a respective binary classification model or binary sub-classification models of the multiclass classification model to a central classifier unit; and forming or updating a central classification model from the transmitted model parameter values by the central classifier unit. 
Jakub teaches, in an analogous system: transmitting model parameter values and/or gradients defining a respective binary classification model or binary sub-classification models of the multiclass classification model to a central classifier unit ([Abstract] Federated Learning is a machine learning setting where the goal is to train a high quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server. Note: The server corresponds to a central classifier unit); and forming or updating a central classification model from the transmitted model parameter values by the central classifier unit ([Abstract] where the client-side updates are aggregated to compute a new global model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Chi to incorporate the teachings of Jakub to use transmitting model parameter values and/or gradients defining a respective binary classification model or binary sub-classification models of the multiclass classification model to a central classifier unit; and forming or updating a central classification model from the transmitted model parameter values by the central classifier unit. One would have been motivated to do this modification because doing so would give the benefit of computing a new global model as taught by Jakub [Abstract]. Regarding claim 8: The system of Chi and Jakub teaches: The method according to claim 7 (as shown above). 
However, the system of Chi and Jakub is not relied upon to teach: wherein the method further comprises forming a central multiclass classification model from the binary sub-classification models by the central classifier unit. Jakub teaches, in an analogous system: wherein the method further comprises forming a central multiclass classification model from the binary sub-classification models by the central classifier unit ([Page 4, Paragraph 7] Improving the quantization by structured random rotations. The above 1-bit and multi-bit quantization approaches work best when the scales are approximately equal across different dimensions. Note: 1-bit and multi-bit correspond to binary and multiclass classification models). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Chi to incorporate the teachings of Jakub such that the method further comprises forming a central multiclass classification model from the binary sub-classification models by the central classifier unit. One would have been motivated to make this modification because doing so would give the benefit of the 1-bit and multi-bit quantization approaches working best, as taught by Jakub [Page 4, Paragraph 7].

Regarding claim 9: The system of Chi and Jakub teaches: The method according to claim 7 (as shown above). Chi further teaches: wherein the method further comprises: transmitting the model parameter values defining the central classification model to one or several local decentralized classifier units ([Page 2, Figure 1.1] Note: The left side of Figure 1.1 shows transmitting the model parameter values defining the central classification model to at least one of the workers, corresponding to the several local decentralized classifier units).

Regarding claim 10: The system of Chi and Jakub teaches: The method according to claim 8 (as shown above).
However, Chi is not relied upon to teach: wherein the binary sub-classification models of the central multiclass classification model are transmitted to the one or several local decentralized classifier units. Jakub teaches, in an analogous system: wherein the binary sub-classification models of the central multiclass classification model are transmitted to the one or several local decentralized classifier units ([Page 4, Paragraph 1] Subsampling. Instead of sending H_t^i, each client only communicates matrix Ĥ_t^i, which is formed from a random subset of the (scaled) values of H_t^i. The server then averages the subsampled updates, producing the global update Ĥ_t. [Page 4, Paragraph 7] Improving the quantization by structured random rotations. The above 1-bit and multi-bit quantization approaches work best when the scales are approximately equal across different dimensions. Note: Clients correspond to local decentralized binary classifier units. 1-bit and multi-bit correspond to binary and multiclass classification models). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Chi to incorporate the teachings of Jakub wherein the binary sub-classification models of the central multiclass classification model are transmitted to the one or several local decentralized classifier units. One would have been motivated to make this modification because doing so would give the benefit of the 1-bit and multi-bit quantization approaches working best, as taught by Jakub [Page 4, Paragraph 7].

Regarding claim 11: The system of Chi and Jakub teaches: The method according to claim 7 (as shown above).
However, Chi is not relied upon to teach: wherein the model parameter values and/or gradients defining the respective binary classification model or binary sub-classification model are transmitted to the central classifier unit after each update of a local decentralized binary classification model or binary sub-classification model, or at fixed intervals or as a function of intervals defined by a parameter, or after a formation of a respective local decentralized binary classification model or binary sub-classification model has been completed. Jakub teaches, in an analogous system: wherein the model parameter values and/or gradients defining the respective binary classification model or binary sub-classification model are transmitted to the central classifier unit after each update of a local decentralized binary classification model or binary sub-classification model, or at fixed intervals or as a function of intervals defined by a parameter, or after a formation of a respective local decentralized binary classification model or binary sub-classification model has been completed ([Page 2, Paragraph 6] We first provide a communication-naive version of Federated Learning. In round t ≥ 0, the server distributes the current model W_t to a subset S_t of n_t clients. These clients independently update the model based on their local data. Let the updated local models be W_t^1, W_t^2, ..., W_t^(n_t), so the update of client i can be written as H_t^i := W_t^i − W_t, for i ∈ S_t. These updates could be a single gradient computed on the client, but typically will be the result of a more complex calculation, for example, multiple steps of stochastic gradient descent (SGD) taken on the client's local dataset. In any case, each selected client then sends the update back to the server. [Page 4, Paragraph 7] Improving the quantization by structured random rotations.
The above 1-bit and multi-bit quantization approaches work best when the scales are approximately equal across different dimensions. Note: 1-bit and multi-bit correspond to binary and multiclass classification models). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Chi to incorporate the teachings of Jakub wherein the model parameter values and/or gradients defining the respective binary classification model or binary sub-classification model are transmitted to the central classifier unit after each update of a local decentralized binary classification model or binary sub-classification model, or at fixed intervals or as a function of intervals defined by a parameter, or after a formation of a respective local decentralized binary classification model or binary sub-classification model has been completed. One would have been motivated to make this modification because doing so would give the benefit of the clients independently updating the model based on their local data, as taught by Jakub [Page 2, Paragraph 6].

Regarding claim 17: The system of Chi and Jakub teaches: The method according to claim 7 (as shown above). However, the system of Chi is not relied upon to teach: wherein the central processing unit is capable of implementing a respective artificial neural network. Jakub teaches, in an analogous system: wherein the central processing unit is capable of implementing a respective artificial neural network ([Page 3, Paragraph 1] In the Experiments section, we evaluate the effect of these methods in training deep neural networks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Chi to incorporate the teachings of Jakub wherein the central processing unit is capable of implementing a respective artificial neural network.
One would have been motivated to make this modification because doing so would give the benefit of each client independently computing an update and communicating it to the central server, as taught by Jakub [Abstract].

Regarding claim 18: The system of Chi and Jakub teaches: The method according to claim 9 (as shown above). However, the system of Chi is not relied upon to teach: wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks. Jakub teaches, in an analogous system: wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks ([Page 3, Paragraph 1] In the Experiments section, we evaluate the effect of these methods in training deep neural networks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Chi to incorporate the teachings of Jakub wherein the one or several local decentralized classifier units are capable of implementing respective artificial neural networks. One would have been motivated to make this modification because doing so would give the benefit of each client independently computing an update and communicating it to the central server, as taught by Jakub [Abstract].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Rassam et al. (Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues, 2013) discloses the challenges of anomaly detection in WSNs, states the requirements for designing efficient and effective anomaly detection models, reviews the latest advancements of data anomaly detection research in WSNs, and classifies current detection approaches into five main classes based on the detection methods used to design these approaches.
Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future work. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed.

Hee (DISTRIBUTED CLASSIFICATION IN P2P NETWORKS, 2015) discloses a systematic approach to (i) analyse the various types of P2P environments and highlight issues that are unique to these environments, (ii) study the existing distributed classification approaches and identify their limitations which make them unsuitable for the P2P environments, and (iii) propose several P2P classification solutions to address the identified challenges of learning in the P2P environments and the limitations of existing approaches. The challenges and limitations have been addressed using the multiple classifier system (cascade SVM and ensemble of classifiers), which has been proven with theoretical studies and experiments on real-life and synthetic datasets to be very effective for the P2P environment. In summary, this thesis has achieved its objective to provide an encyclopedic guide and solutions to learning in the P2P environments, allowing anyone to construct an accurate and efficient classification model under any type of P2P environment for a diverse domain of applications.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAITANYA RAMESH JAYAKUMAR whose telephone number is (571)272-3369. The examiner can normally be reached Mon-Fri 9am-1pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at (571)272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /C.R.J./Examiner, Art Unit 2128 /OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128
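The federated learning round that the claim 11 rejection quotes from Jakub (the server distributes W_t, each client i computes an update H_t^i := W_t^i − W_t, and the server aggregates the updates into a new global model) can be sketched in a few lines of Python. This is a minimal illustration only: the function names are hypothetical, models are plain lists of floats, the local "training" rule is a toy stand-in for local SGD, and aggregation is a simple mean, none of which is taken from the cited reference.

```python
# Minimal sketch of one "communication-naive" federated learning round
# as described in the quoted passage. All names and the toy client
# update rule are illustrative assumptions, not Jakub's actual code.

def client_update(local_data, w_t, lr=0.1):
    """Return this client's update H_t^i = W_t^i - W_t."""
    # Hypothetical local step: nudge each weight toward the mean of
    # the client's local data (stands in for local SGD steps).
    target = sum(local_data) / len(local_data)
    w_i = [w + lr * (target - w) for w in w_t]
    return [wi - w for wi, w in zip(w_i, w_t)]

def server_round(w_t, client_datasets):
    """Distribute w_t, collect the clients' updates, apply their mean."""
    updates = [client_update(data, w_t) for data in client_datasets]
    h_mean = [sum(h) / len(updates) for h in zip(*updates)]
    return [w + h for w, h in zip(w_t, h_mean)]  # W_{t+1}

w = server_round([0.0, 0.0], [[1.0, 3.0], [5.0, 7.0]])
```

The communication-efficiency techniques the rejection cites (subsampling, 1-bit and multi-bit quantization) would replace the full update lists sent by `client_update` with compressed versions before the server averages them.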

Prosecution Timeline

Nov 05, 2021
Application Filed
May 19, 2025
Non-Final Rejection — §101, §103, §112
Dec 01, 2025
Response Filed
Feb 25, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12293260
GENERATING AND DEPLOYING PACKAGES FOR MACHINE LEARNING AT EDGE DEVICES
2y 5m to grant Granted May 06, 2025
Patent 12147915
SYSTEMS AND METHODS FOR MODELLING PREDICTION ERRORS IN PATH-LEARNING OF AN AUTONOMOUS LEARNING AGENT
2y 5m to grant Granted Nov 19, 2024
Patent 11770571
Matrix Completion and Recommendation Provision with Deep Learning
2y 5m to grant Granted Sep 26, 2023
Patent 11769074
COLLECTING OBSERVATIONS FOR MACHINE LEARNING
2y 5m to grant Granted Sep 26, 2023
Patent 11741693
SYSTEM AND METHOD FOR SEMI-SUPERVISED CONDITIONAL GENERATIVE MODELING USING ADVERSARIAL NETWORKS
2y 5m to grant Granted Aug 29, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
48%
With Interview (+22.5%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 51 resolved cases by this examiner. Grant probability derived from career allow rate.
