Prosecution Insights
Last updated: April 19, 2026
Application No. 18/056,386

INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD

Final Rejection: §101, §103, §112

Filed: Nov 17, 2022
Examiner: MILLER, ALEXANDRIA JOSEPHINE
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 18% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 18% (5 granted / 27 resolved; -36.5% vs TC avg)
Interview Lift: +71.4% in resolved cases with interview
Avg Prosecution: 4y 5m (typical timeline)
Total Applications: 67 across all art units (40 currently pending)

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

TC averages are estimates. Based on career data from 27 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Claims 1-14 are presented for examination. This office action is in response to the submission of the application on 17-NOVEMBER-2022.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 17-NOVEMBER-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are:

A training data acquisition unit:
- “a training data acquisition unit configured to acquire training data including learning data and a correct label” (Claim 1)
- “the information processing system according to claim 1, wherein the training data acquisition unit is configured to acquire the training data from the second information processing apparatus” (Claim 3)

A determination unit:
- “the information processing system according to claim 1, further comprising a determination unit configured to determine whether the first information processing apparatus performs learning, in an adequate range, for at least either the training data or a partial model” (Claim 8)
- “The information processing system according to claim 8, wherein the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model, depending on whether a component ratio of the correct label included in the training data satisfies a predetermined criterion” (Claim 9)
- “The information processing system according to claim 8, the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model, depending on whether a variation in parameter of the partial model due to the learning satisfies a predetermined criterion” (Claim 10)
- “The information processing system according to claim 11, wherein the first information processing apparatus further includes a determination unit configured to determine whether the inference is made, in an adequate range, for at least one of the data as the inference target or an inference result” (Claim 12)

An acquisition unit:
- “an acquisition unit configured to acquire data serving as an inference target” (Claim 11)

The learning units of claim 1 and its dependent claims have not been evaluated under 112(f), despite containing the same term “unit” viewed to be a generic placeholder, because their inclusion of layers and a partial model in the claim language provides sufficient structure for performing the recited actions; such language makes it clear that they contain neural networks sufficient for performing the claimed function.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The claim limitations a training data acquisition unit (Claims 1, 3), a determination unit (Claims 8-10, 12), and an acquisition unit (Claim 11) invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
With regard to these limitations, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, claims 1, 3, and 8-12 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.
For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 5, 6, 8-10, and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP 2106.04(a)(2)(III) states: “Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” Further, the MPEP recites: “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.”

Regarding claim 5, which depends upon claim 1: While claims 1 and 4 are not rejected as an abstract idea, their limitations are analyzed here because, in light of the judicial exception believed to be present in claim 5, the limitations of claims 1 and 4 would be insignificant extra-solution activity.

Step 2A, Prong 1 will now be evaluated for this claim: A judicial exception is recited in this claim as it recites a mathematical concept: “wherein the third learning unit is configured to update a parameter of the third partial model based on the error information and backpropagation.” Backpropagation refers to a specific series of mathematical calculations, and its application here to update the parameter renders the updating of the parameters likewise a mathematical calculation.
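For readers tracking the §101 analysis, the “specific series of mathematical calculations” the examiner points to reduces to the gradient-descent parameter update at the heart of backpropagation. A minimal sketch; the function name, learning rate, and values are illustrative assumptions, not drawn from the application:

```python
# Backpropagation computes dL/dtheta by the chain rule; the parameter
# update itself is then plain arithmetic: theta <- theta - lr * dL/dtheta.
def update_parameters(theta, grads, lr=0.01):
    """One gradient-descent step; theta and grads are plain lists of floats."""
    return [t - lr * g for t, g in zip(theta, grads)]

theta = [0.5, -0.3]
grads = [0.2, -0.1]   # hypothetical dL/dtheta values from backpropagation
print(update_parameters(theta, grads))   # each parameter nudged against its gradient
```

This arithmetic character is precisely why the examiner groups the limitation under the mathematical-concepts category.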
Step 2A, Prong 2 will now be evaluated for this claim: The following additional elements are interpreted as a general-purpose computer under MPEP 2106.05(f):

- Claim 1, “a first information processing apparatus and a second information processing apparatus configured to communicate with the first information processing apparatus via a network”: the information processing apparatuses are both taken to be generic computers.
- Claim 1, “the information processing system being configured to perform learning processing on an inference model based on a neural network including an input layer, a plurality of intermediate layers, and an output layer”: this limitation describes a generic neural network.
- Claim 1, “a first learning unit configured to perform first learning processing by inputting the learning data to a first partial model including the input layer and a part of the plurality of intermediate layers of the inference model”: performing learning processing without further limitation is considered to be a generic computer function.
- Claim 1, “a third learning unit configured to perform third learning processing on a third partial model including the output layer using an output obtained through second learning processing performed by the second information processing apparatus and the correct label”: performing learning processing without further limitation is considered to be a generic computer function.
- Claim 1, “the second information processing apparatus comprising a second learning unit configured to perform the second learning processing by inputting an output obtained through the first learning processing to a second partial model including an intermediate layer that is included in the inference model and is different from the part of the plurality of intermediate layers included in the first partial model”: performing learning processing without further limitation is considered to be a generic computer function.

Furthermore, MPEP 2106.05(g), Insignificant Extra-Solution Activity, has found mere data gathering and post-solution activity to be insignificant extra-solution activity. The following step is mere data gathering:

- Claim 1, “a training data acquisition unit configured to acquire training data including learning data and a correct label”: acquiring data is a form of data gathering.

The additional elements have been considered both individually and as an ordered combination in order to determine whether they integrate the exception into a practical application. No meaningful limits are imposed on practicing the abstract idea; therefore, the claim is directed to an abstract idea.

Step 2B will now be discussed with regards to this claim: The claim does not provide an inventive concept. The insignificant extra-solution activity identified in Step 2A, Prong Two does not provide an inventive concept. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)), does not overcome a rejection. The additional elements have been considered both individually and as an ordered combination as to whether they warrant significantly-more consideration. The claim is ineligible.
Regarding claim 6, which depends upon claim 5: The following would be a mathematical calculation: “wherein the second learning unit is configured to update a parameter of the second partial model based on the error information transmitted from the third learning unit, and wherein the first learning unit is configured to update a parameter of the first partial model based on the error information transmitted from the second learning unit.” As claim 6 depends upon claim 5, the updating described likewise results from backpropagation, which is considered to be a mathematical calculation. This claim is ineligible.

Regarding claim 8, which depends upon claim 1: This claim incorporates the insignificant extra-solution activity of claim 1. Furthermore, this claim is considered to be a mental process, as the determination unit configured to determine if the learning is within an adequate range is accomplishable by the human mind: the human mind is capable of comparing a result against a threshold and determining if the threshold is overcome. This claim is ineligible.

Regarding claim 9, which depends upon claim 8: This claim incorporates the insignificant extra-solution activity of claim 1. Furthermore, this claim is considered to be a mental process for the same reason given for claim 8: the determination is accomplishable by the human mind, which is capable of comparing a result against a threshold and determining if the threshold is overcome. This claim is ineligible.

Regarding claim 10, which depends upon claim 8: This claim incorporates the insignificant extra-solution activity of claim 1. Furthermore, this claim is considered to be a mental process for the same reason given for claim 8. This claim is ineligible.
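The threshold comparison the examiner treats as a mental process can be made concrete with a toy version of the claim-9 criterion (a component ratio of correct labels checked against a predetermined criterion). The function name and the 10% threshold are illustrative assumptions, not from the application:

```python
from collections import Counter

def labels_in_adequate_range(labels, min_ratio=0.1):
    """Return True if every label class makes up at least min_ratio of the
    training data: a simple threshold comparison of the kind the examiner
    characterizes as performable in the human mind."""
    counts = Counter(labels)
    total = len(labels)
    return all(c / total >= min_ratio for c in counts.values())

print(labels_in_adequate_range(["cat", "dog", "cat", "dog"]))   # True: each class is 50%
print(labels_in_adequate_range(["cat"] * 99 + ["dog"]))         # False: "dog" is only 1%
```

A response might argue that the claimed determination is tied to the distributed learning system rather than being a bare comparison of this kind.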
Regarding claim 12, which depends upon claim 11: This claim incorporates the insignificant extra-solution activity of claim 1. Furthermore, this claim is considered to be a mental process, as the determination unit configured to determine if the learning is within an adequate range is accomplishable by the human mind, which is capable of comparing a result against a threshold and determining if the threshold is overcome.

Furthermore, as this claim depends upon claim 11, further consideration must be given to the limitations of the parent claim, which is not believed to contain a judicial exception. However, regarding the limitations of that claim: “an information processing system configured to make inference using the inference model based on the neural network that has been trained through the learning processing according to claim 1” and “the information processing system makes the inference on the data serving as the inference target using the first partial model, the second partial model, and the third partial model” refer to the generic computer function of applying a machine learning model, and “the first information processing apparatus further includes an acquisition unit configured to acquire data serving as an inference target” refers to the acquisition of data, which is data gathering. As such, in light of dependent claim 12, claim 11 is considered to be insignificant extra-solution activity. This claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (Pub. No. US 20210374605 A1, filed October 30, 2020, hereinafter Qian) in view of Satheesh et al. (Pub. No. WO 2021064737 A1, filed October 4, 2019, hereinafter Satheesh).

Regarding claim 1: Claim 1 recites: “An information processing system, comprising a first information processing apparatus and a second information processing apparatus configured to communicate with the first information processing apparatus via a network, the information processing system being configured to perform learning processing on an inference model based on a neural network including an input layer, a plurality of intermediate layers, and an output layer, the first information processing apparatus comprising: a training data acquisition unit configured to acquire training data including learning data and a correct label; a first learning unit configured to perform first learning processing by inputting the learning data to a first partial model including the input layer and a part of the plurality of intermediate layers of the inference model; and a third learning unit configured to perform third learning processing on a third partial model including the output layer using an output obtained through second learning processing performed by the second information processing apparatus and the correct label, the second information processing apparatus comprising a second learning unit configured to perform the second learning processing by inputting an output obtained through the first learning processing to a second partial model including an intermediate layer that is included in the inference model and is different from the part of the plurality of intermediate layers included in the first partial model.”

Regarding the limitation “An information processing system, comprising a first information processing apparatus and a second information processing apparatus configured to communicate with the first information processing apparatus via a network”: Qian teaches the use of federated learning, wherein a series of client devices with local models communicate with a remote server and remote shared model via a network (Paragraph 27). Here, the series of client devices would be the first information processing apparatus and the remote server the second information processing apparatus, as they are two separate apparatuses that are in communication with each other via a network.

Regarding the limitation “the information processing system being configured to perform learning processing on an inference model based on a neural network including an input layer, a plurality of intermediate layers, and an output layer”: Qian teaches the use of a neural network, which would be a sort of inference model, that includes a number of LSTM layers that would comprise a plurality of intermediate layers, wherein a neural network in and of itself contains an input layer and output layer (Paragraph 24).

Regarding the limitation “the first information processing apparatus comprising a training data acquisition unit configured to acquire training data including learning data and a correct label”: Qian teaches supervised learning algorithms that use labeled examples as training data in order to train a machine learning model (Paragraph 106).
The labeled examples would comprise learning data and a correct label and are therefore the acquired training data.

Regarding the limitation “a first learning unit configured to perform first learning processing by inputting the learning data to a first partial model including the input layer and a part of the plurality of intermediate layers of the inference model”: Qian teaches federated learning, wherein in the first information processing apparatus (the client devices) an individual client device may train a machine learning model based on its own user data (i.e., inputting learning user data to a first model through the input layer), wherein the model learns gradients for the client device’s data through a plurality of intermediate layers (Paragraph 27). This model would be considered a partial model as it only has access to partial data and as such does not produce the full inference result.

Regarding the limitation “and a third learning unit configured to perform third learning processing on a third partial model including the output layer using an output obtained through second learning processing performed by the second information processing apparatus and the correct label”: Qian teaches federated learning, wherein the first model discussed above may send its gradients to the second information processing apparatus, wherein a second model (the remote shared model) produces an output through second learning that is then sent back to the client devices (Paragraph 27). Therefore, another client device now has learning from the first client device. This new client device would be the third learning unit that, with its own third partial model, produces gradients in the same manner as the first partial model in order to continue the learning process as an output from an output layer.
However, Qian does not fully teach “the second information processing apparatus comprising a second learning unit configured to perform the second learning processing by inputting an output obtained through the first learning processing to a second partial model including an intermediate layer that is included in the inference model and is different from the part of the plurality of intermediate layers included in the first partial model”: Qian teaches federated learning, wherein the first model discussed above may send its gradients to the second information processing apparatus, wherein a second model (the remote shared model) on the remote server (or the second learning unit) performs a second learning process using the gradients that were sent from the first partial model as an output (Paragraph 27). However, Qian does not teach that the intermediate layers of the second model are different from the intermediate layers of the first model.

Satheesh, in the same field of endeavor of distributed machine learning, teaches two models with different sets of layers (Paragraph 10), which therefore contain an intermediate layer that is included in the inference model and is different from the part of the plurality of intermediate layers included in the first partial model. Satheesh and the present application are analogous art because they are in the same field of endeavor of distributed machine learning.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system combining the teachings of Qian and the teachings of Satheesh. This would have provided the advantage of optimization within a federated learning framework (Satheesh, Paragraph 6).
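As context for the mapping above, the claim-1 arrangement reads on a split-learning pipeline: input-side and output-side partial models on the first apparatus, with a distinct block of intermediate layers on the second. A minimal forward-pass sketch under that reading; the layer sizes, tanh layers, and all names are illustrative assumptions, not from the application or the cited references:

```python
import math
import random

random.seed(0)  # deterministic illustrative weights

def layer(x, w):
    """One fully connected layer with tanh activation; w is n_in x n_out."""
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*w)]

def make_weights(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)]

# First apparatus: first partial model (input layer + some intermediate layers).
w_first = [make_weights(4, 8), make_weights(8, 8)]
# Second apparatus: second partial model (a *different* intermediate layer).
w_second = [make_weights(8, 8)]
# First apparatus again: third partial model (the output layer).
w_third = [make_weights(8, 2)]

def forward(x):
    for w in w_first:      # first learning unit's forward pass
        x = layer(x, w)
    # ...activations would cross the network to the second apparatus here...
    for w in w_second:     # second learning unit's forward pass
        x = layer(x, w)
    # ...and return to the first apparatus for the output-side model...
    for w in w_third:      # third learning unit produces the final output
        x = layer(x, w)
    return x

print(forward([0.1, 0.2, 0.3, 0.4]))   # two output values in (-1, 1)
```

The distinction Satheesh is cited for is visible in the sketch: `w_second` is a layer block disjoint from the layers in `w_first`.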
Regarding claim 2, which depends upon claim 1: Claim 2 recites: “The information processing system according to claim 1, wherein the first partial model and the third partial model serve as a network for confidentiality, and wherein the second partial model serves as a network for publication.”

Qian in view of Satheesh discloses the system of claim 1 upon which claim 2 depends. Furthermore, regarding the limitation of claim 2: Qian teaches a federated learning system where the client devices, or the first partial model and the third partial model as seen in claim 1, allow for a method to keep user data on-device (Paragraph 27), providing confidentiality. The second partial model serves as a network for publication as it distributes a shared model to the client devices, wherein distribution is a form of publication (Paragraph 27).

Regarding claim 3, which depends upon claim 1: Claim 3 recites: “The information processing system according to claim 1, wherein the training data acquisition unit is configured to acquire the training data from the second information processing apparatus.”

Qian in view of Satheesh discloses the system of claim 1 upon which claim 3 depends. However, Qian does not teach the limitation of claim 3: Satheesh teaches a federated learning system wherein the global model, analogous to the remote shared model of Qian, receives equations for training the model (Paragraph 46), wherein the equations act as a form of training data acquired by the second information processing apparatus, as the global / remote shared model acts as the second information processing apparatus as seen in Qian’s disclosure of claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system combining the teachings of Qian and the teachings of Satheesh. This would have provided the advantage of optimization within a federated learning framework (Satheesh, Paragraph 6).
Regarding claim 4, which depends upon claim 1: Claim 4 recites: “The information processing system according to claim 1, wherein the third learning unit is configured to acquire the correct label from the training data acquisition unit and acquire error information based on the correct label and an output from the output layer.”

Qian in view of Satheesh discloses the system of claim 1 upon which claim 4 depends. Furthermore, regarding the limitation of claim 4: Qian teaches supervised learning algorithms which acquire error information by comparing an acquired correct label from the training data to the output from the output layer (Paragraph 106), wherein the supervised learning algorithm may be used in the third learning unit.

Regarding claim 5, which depends upon claim 4: Claim 5 recites: “The information processing system according to claim 4, wherein the third learning unit is configured to update a parameter of the third partial model based on the error information and backpropagation.”

Qian in view of Satheesh discloses the system of claim 4 upon which claim 5 depends. However, Qian does not teach the limitation of claim 5: Satheesh teaches that the user devices’ models of its federated learning system use a backpropagation technique, which in and of itself uses the error information (Paragraph 58). As user device models have previously been used in Qian to provide a third learning unit, the third learning unit may in combination update a parameter in this manner. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system combining the teachings of Qian and the teachings of Satheesh. This would have provided the advantage of optimization within a federated learning framework (Satheesh, Paragraph 6).
Regarding claim 6, which depends upon claim 5: Claim 6 recites: “The information processing system according to claim 5, wherein the second learning unit is configured to update a parameter of the second partial model based on the error information transmitted from the third learning unit, and wherein the first learning unit is configured to update a parameter of the first partial model based on the error information transmitted from the second learning unit.”

Qian in view of Satheesh discloses the system of claim 5 upon which claim 6 depends. Furthermore, regarding the limitation “wherein the second learning unit is configured to update a parameter of the second partial model based on the error information transmitted from the third learning unit”: Qian teaches that the remote server, or the second learning unit, aggregates gradient and weight information from each client system, which would include the third learning unit’s transmitted information. Furthermore, this information is then used to update the new central model, which would update a parameter of the second partial model based on the error information (Paragraph 35).

Regarding the limitation “wherein the first learning unit is configured to update a parameter of the first partial model based on the error information transmitted from the second learning unit”: Qian teaches that this updated central model may then be transmitted to the client devices, which would include the first partial model, wherein the distribution of the model would update the parameters of the first partial model as it is received and incorporated in the client models (Paragraph 35).
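The aggregation step Qian is cited for above (the remote server combining gradient and weight information from the clients to update the central model) resembles federated averaging. A toy sketch; the simple unweighted mean and all names are illustrative assumptions, not drawn from Qian:

```python
def federated_average(client_weights):
    """FedAvg-style aggregation: average each parameter position across
    clients. client_weights is a list of equal-length parameter lists,
    one list per client device."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three hypothetical clients, each reporting two parameters.
clients = [[0.2, 0.4], [0.4, 0.8], [0.6, 0.0]]
central = federated_average(clients)
print(central)   # element-wise mean of the client parameters (about 0.4 each)
```

In Qian's framing, the averaged `central` parameters would then be redistributed to the clients, which is how the first partial model's parameters get updated in turn.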
Regarding claim 7, which depends upon claim 1: Claim 7 recites: “The information processing system according to claim 1, wherein the second learning unit in the second information processing apparatus is configured to generate the second partial model by performing additional learning with parameters of the first partial model and the third partial model being fixed.”

Qian in view of Satheesh discloses the system of claim 1 upon which claim 7 depends. Furthermore, regarding the limitation of claim 7: Qian teaches that the remote server model, the second learning unit in the second information processing apparatus, aggregates information from the client system and updates the model, generating the second partial model by performing additional learning (Paragraph 35). The parameters of the first partial model and the third partial model are fixed as they are not simultaneously updated with the shared model.

Regarding claim 8, which depends upon claim 1: Claim 8 recites: “The information processing system according to claim 1, further comprising a determination unit configured to determine whether the first information processing apparatus performs learning, in an adequate range, for at least either the training data or a partial model.”

Qian in view of Satheesh discloses the system of claim 1 upon which claim 8 depends. Furthermore, regarding the limitation of claim 8: Qian teaches that training may be continued depending on the performance of the new central model (Paragraph 35), which demonstrates a determination of when learning is within an adequate range for the partial model.
Regarding claim 11, which depends upon claim 1: Claim 11 recites: An information processing system configured to make inference using the inference model based on the neural network that has been trained through the learning processing according to claim 1, wherein the first information processing apparatus further includes an acquisition unit configured to acquire data serving as an inference target, and wherein the information processing system makes the inference on the data serving as the inference target using the first partial model, the second partial model, and the third partial model

Qian in view of Satheesh discloses the system of claim 1 upon which claim 11 depends. Furthermore, regarding the limitation make inference using the inference model based on the neural network that has been trained through the learning processing according to claim 1, wherein the first information processing apparatus further includes an acquisition unit configured to acquire data serving as an inference target: Qian teaches that, using the above model, i.e., the inference model based on the neural network that has been trained through the learning processing according to claim 1, predictions may be made about output values, which would be an inference (Paragraph 106). This may further be done with a training dataset used by the client models, or the first information processing apparatus (Paragraph 106), that is a supervised learning training dataset wherein there is a known inference target.
Regarding the limitation wherein the information processing system makes the inference on the data serving as the inference target using the first partial model, the second partial model, and the third partial model: Qian teaches that, using the above model, i.e., the inference model based on the neural network that has been trained through the learning processing according to claim 1, predictions may be made about output values, which would be an inference (Paragraph 106), wherein the process of claim 1 describes the use of the first, second, and third partial models.

Regarding claim 12, which depends upon claim 11: Claim 12 recites: The information processing system according to claim 11, wherein the first information processing apparatus further includes a determination unit configured to determine whether the inference is made, in an adequate range, for at least one of the data as the inference target or an inference result

Qian in view of Satheesh discloses the system of claim 11 upon which claim 12 depends. Furthermore, regarding the limitation of claim 12: Qian teaches that training may be continued depending on the performance of the new central model (Paragraph 35), which demonstrates a determination of when learning is within an adequate range for the partial model.

Regarding claim 13, which depends upon claim 1: Claim 13 recites: The information processing system according to claim 1, wherein the first information processing apparatus is managed by a provider of the inference model, and the second information processing apparatus is managed by a user of the inference model

Qian in view of Satheesh discloses the system of claim 1 upon which claim 13 depends.
Furthermore, regarding the limitation of claim 13: Qian teaches that the first information processing apparatus, or the client devices, are used by users who provide data to train the model (Paragraph 27), and they are therefore the providers of the inference model. The second information processing apparatus, which aggregates information gleaned from the user models, may be the user of the inference model in that the shared model is the inference model that must be trained using the confidential user data.

Claim 14 recites a method that parallels the system of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 14. Accordingly, claim 14 is rejected based on substantially the same rationale as set forth above with respect to claim 1.

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Qian in view of Satheesh, further in view of Ronen et al. (Pub. No. US 20210235287 A1, filed April 14, 2021, hereinafter Ronen).

Regarding claim 9, which depends upon claim 8: Claim 9 recites: The information processing system according to claim 8, wherein the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model, depending on whether a component ratio of the correct label included in the training data satisfies a predetermined criterion

Qian in view of Satheesh discloses the system of claim 8 upon which claim 9 depends. However, Qian in view of Satheesh does not teach the remaining limitation of claim 9. Qian in view of Satheesh has previously taught, with respect to claim 8, wherein the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model.
Ronen, in the same field of endeavor of machine learning, teaches that for a series of data belonging to the same class, i.e., with the same correct label, performance may be evaluated to ensure that it is above a particular threshold or predetermined criterion (Paragraph 62). Performance would be analogous to the claimed component ratio: the performance is used to identify anomalies (Paragraph 62), just as the component ratio in the present application's specification is used to avoid over- or underfitting the model (present application, Paragraph 42). Ronen and the present application are analogous art because they are in the same field of endeavor, machine learning.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system combining the teachings of Qian, Satheesh, and Ronen. This would have provided the advantage of optimization within a federated learning framework (Satheesh, Paragraph 6) as well as further customization and control of performance metrics (Ronen, Paragraph 14).

Regarding claim 10, which depends upon claim 8: Claim 10 recites: The information processing system according to claim 8, the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model, depending on whether a variation in parameter of the partial model due to the learning satisfies a predetermined criterion

Qian in view of Satheesh discloses the system of claim 8 upon which claim 10 depends.
However, Qian in view of Satheesh does not teach the remaining limitation of claim 10. Qian in view of Satheesh has previously taught, with respect to claim 8, wherein the determination unit is configured to determine whether the first information processing apparatus performs the learning, in the adequate range, for at least either the training data or a partial model.

Ronen, in the same field of endeavor of machine learning, teaches that for a series of data belonging to the same class, i.e., with the same correct label, performance may be evaluated to ensure that it is above a particular variation threshold or predetermined criterion (Paragraph 62), wherein the variation threshold detects anomalies and hence would measure variation of a parameter of the partial model. Ronen and the present application are analogous art because they are in the same field of endeavor, machine learning.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system combining the teachings of Qian, Satheesh, and Ronen. This would have provided the advantage of optimization within a federated learning framework (Satheesh, Paragraph 6) as well as further customization and control of performance metrics (Ronen, Paragraph 14).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER, whose telephone number is (703) 756-5684. The examiner can normally be reached Monday-Thursday, 7:30 am - 5:00 pm, and every other Friday, 7:30 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.J.M./ Examiner, Art Unit 2142
/Mariela Reyes/ Supervisory Patent Examiner, Art Unit 2142

Prosecution Timeline

Nov 17, 2022
Application Filed
Sep 24, 2025
Non-Final Rejection — §101, §103, §112
Dec 30, 2025
Response Filed
Apr 08, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566943
METHOD AND APPARATUS WITH NEURAL NETWORK QUANTIZATION
2y 5m to grant Granted Mar 03, 2026
Patent 12481890
SYSTEMS AND METHODS FOR APPLYING SEMI-DISCRETE CALCULUS TO META MACHINE LEARNING
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
90%
With Interview (+71.4%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
