Prosecution Insights
Last updated: April 19, 2026
Application No. 17/516,835

METHOD OF TRAINING MODELS IN AI AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: Nov 02, 2021
Examiner: KIM, JONATHAN J
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hon Hai Precision Industry Co. Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (2 granted / 6 resolved; -21.7% vs TC avg) — grants only 33% of cases
Interview Lift: +80.0% (strong), measured across resolved cases with interview
Typical Timeline: 3y 3m avg prosecution; 30 applications currently pending
Career History: 36 total applications across all art units

Statute-Specific Performance

§101: 36.7% (-3.3% vs TC avg)
§103: 38.6% (-1.4% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 6 resolved cases.
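The headline figures above are simple ratios over the examiner's six resolved cases, and the stated delta is reproducible from them. A short sketch (the 55% Tech Center average is an assumption back-derived from the stated -21.7% delta, not a figure reported by the tool):

```python
# Reproduce the dashboard's headline examiner metrics. The TC average is
# an assumed value back-derived from the stated delta, not a reported one.
granted, resolved = 2, 6
tc_avg_allow = 0.55  # assumed Tech Center average allow rate

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")                 # 33%
print(f"Delta vs TC avg:  {allow_rate - tc_avg_allow:+.1%}")  # -21.7%
```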

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/29/2026 has been entered.

The status of the claims is as follows: claims 1-5, 7-13, and 15 are pending in the application; claims 1, 5, 7, 9, and 15 are amended; claims 6 and 14 are canceled.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter “Kondo”) in view of Ma et al. (US20190258904A1, hereinafter “Ma”) further in view of Oelscher et al.
(US20220067914A1, hereinafter “Oelscher”).

Regarding claim 1, Kondo discloses a method of training models in artificial intelligence (AI) applicable to an electronic device, the electronic device is connected to other electronic devices and at least one controller, each electronic device is deployed with a same initial machine learning model, the method comprising: (Kondo [Abstract]; “Provided is a weight parameter learning method in a neural network capable of learning using various and large amounts of data while protecting privacy. A neural network system including a plurality of terminal devices 1 and a management device 2, wherein each of the terminal devices learns a storage unit 12 storing image data and a weight parameter for performing image identification. The image classifier 13 and the learned weight parameter are transmitted to the management device, the learning data generated based on the image data is transmitted to another terminal device or the management device, and the learning data generated by the other terminal device is transmitted. And a first control unit 14 for receiving data. The image discriminator learns the weight parameter not only using the image data stored in its own storage unit but also using learning data generated by another terminal device. The management device includes a second control unit 22 that receives learned weight parameters from a plurality of terminal devices and selects an optimal weight parameter.” wherein the plurality of terminal devices [1] all performing image identification and learning with regard to stored image data thus reads on an electronic device connected to other electronic devices (plurality of terminal devices 1) and a controller (management device [2]), wherein each terminal device is deployed with the same initial machine learning model (learned image identification model of each terminal device));

collecting a sample data set adapted for training … (Kondo [Page 3 Paragraph 12]; “The pre-learning data is a set of pre-learning image data described later, and is received from each terminal device 1. The verification data set is a set of a set of verification image data and correct data described later. Since the verification data set is distributed to each terminal device 1, it is desirable that the verification data set is collected separately from the image data stored in each terminal device 1.” Kondo [Page 4 Paragraph 1]; “The image classifier 13 may perform unsupervised pre-learning using only the image data stored in the storage unit 12 of its own terminal device 1, but in cooperation with the management device 2 as described below. It is desirable to conduct prior learning without a teacher.” wherein the image data stored in each of the terminal devices for training of its image classifier thus reads on collecting a sample data set adapted for training);

training the initial machine learning model based on the training set and verifying the trained machine learning model based on the verification set (Kondo [Abstract]; “Provided is a weight parameter learning method in a neural network capable of learning using various and large amounts of data while protecting privacy. A neural network system including a plurality of terminal devices 1 and a management device 2, wherein each of the terminal devices learns a storage unit 12 storing image data and a weight parameter for performing image identification. The image classifier 13 and the learned weight parameter are transmitted to the management device, the learning data generated based on the image data is transmitted to another terminal device or the management device, and the learning data generated by the other terminal device is transmitted. And a first control unit 14 for receiving data. The image discriminator learns the weight parameter not only using the image data stored in its own storage unit but also using learning data generated by another terminal device. The management device includes a second control unit 22 that receives learned weight parameters from a plurality of terminal devices and selects an optimal weight parameter.” wherein the plurality of terminal devices [1] all performing image identification and learning with regard to stored image data, in order for the management device to receive learned weight parameters and learn the initial machine learning model's weight parameter based on the performance of its terminal devices, thus reads on training the initial machine learning model based on the training set (federated learning of the initial machine learning model through training data sets of the terminal devices); Kondo [Page 4 Paragraph 18]; “In FIG. 5, the terminal device 1 performs verification using the verification data set. However, the control unit 14 of the terminal device 1 transmits the weight parameter to the management device 2, and an image classifier is included in the management device 2. And the control unit 22 of the management device 2 may perform the verification.” which discloses verifying the trained machine learning model based on the verification set);

obtaining a prediction accuracy and weightings of neurons of the trained machine learning model corresponding to the prediction accuracy, and sending the prediction accuracy and the weightings of neurons to the at least one controller, to make the at least one controller determine one set of weightings of neurons corresponding to a highest prediction accuracy from a plurality of prediction accuracies as the new weightings (Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals. The second control unit may group the plurality of terminal devices based on the weight parameter obtained by the learning received from the plurality of terminal devices, and select an optimum weight parameter for each group. By grouping terminal devices having similar trends, an appropriate weight parameter can be selected for each group according to the terminal device” wherein a prediction accuracy and weightings of the trained machine learning model are obtained and the controller (second control unit associated with the management device) receives the model learning of the terminal devices to facilitate its selection of optimized neuron weights corresponding to the highest recognition accuracy (highest prediction accuracy) for use as new weightings in the groups of terminal devices);

wherein the electronic device and the other electronic devices obtain the plurality of prediction accuracies and different sets of weightings of neurons corresponding to the plurality of prediction accuracies by using different sample data sets to train the initial machine learning model (Kondo [Page 4 Paragraph 1]; “The image classifier 13 may perform unsupervised pre-learning using only the image data stored in the storage unit 12 of its own terminal device 1, but in cooperation with the management device 2 as described below. It is desirable to conduct prior learning without a teacher.” wherein the image classifiers for each terminal device being learned using image data specific to its own terminal device thus reads on using different sample data sets to train the initial machine learning model to facilitate distributed learning to obtain the plurality of prediction accuracies of corresponding neuron weightings);

and the at least one controller receives the plurality of prediction accuracies and different sets of weightings of neurons sent by the electronic device and the other electronic devices, and determines the set of weightings of neurons corresponding to the highest prediction accuracy from the different sets of weightings of neurons as the new weightings (Kondo [Abstract]; “The management device includes a second control unit 22 that receives learned weight parameters from a plurality of terminal devices and selects an optimal weight parameter” Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals”);

and obtaining the new weightings sent by the at least one controller and updating the weightings of neurons of the trained machine learning model to the new weightings, and optimizing the trained machine learning model by the electronic device and the other electronic devices (Kondo [Page 4 Paragraph 18]; “When the distributed learning as described above is completed, the control unit 22 of the management device 2 transmits the optimum weight parameter to each terminal device 1 (step S12 in FIG. 3). The control unit 14 of the terminal device 1 receives the optimum weight parameter from the management device 2 and stores it in the storage unit 12 and sets it in the image classifier 13 (step S5). Thereby, the image discriminator 13 can perform learning and discrimination using the new weight parameter. The image discriminator 13 performs discrimination using the pre-update weight parameter and the post-update weight parameter (that is, the optimum weight parameter received from the management device 2) and confirms that the recognition accuracy does not decrease. The updated weight parameter may be adopted.” wherein the management device transmitting the optimum weight parameter to the plurality of terminal devices thus reads on optimizing the initial trained machine learning model by the electronic device and the other electronic devices (to obtain the optimum weight parameter through distributed learning) and updating the weightings of neurons of the trained machine learning model (transmitted new optimum weights to terminal devices being adopted)).

Kondo does not disclose, but Ma discloses, dividing the sample data set into a training set and a verification set according to a preset ratio (Ma, para [0029]; first train/validate sample 1312a is randomly partitioned into a first training sample 1314a and a first validation sample 1316a based on a selection of a ratio or a percentage of first train/validate sample 1312a that is allocated to first validation sample 1316a; “preset” is represented by a selection made prior to processing). It would have been obvious to modify Kondo's data set collection to divide the obtained data set into the training set and verification set according to a ratio instead of independently obtaining its verification data set separately from the sample data.
One would have been motivated to do so “so that training/validation dataset 1510 has the same distribution as input dataset” (Ma [0117]).

The combination of Kondo/Ma does not disclose, but Oelscher discloses, wherein the sample data set comprises images of defect in a product (Oelscher [0094, 0109]; images broken into two image classes of “ok” and “defective”). It would have been obvious to one of ordinary skill in the art to have modified Kondo/Ma's determination of weightings for an image classification model to include the application of defect classification in the image. The motivation for doing so would have been determining defects that occur while carrying out a surface modification (Oelscher, Abstract).

Claim 9 recites substantially similar limitations to claim 1 and is similarly rejected.

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter “Kondo”) in view of Ma et al. (US20190258904A1, hereinafter “Ma”) further in view of Oelscher et al. (US20220067914A1, hereinafter “Oelscher”) further in view of Lin et al. (US20120284213, hereinafter “Lin”).

Regarding claim 2, Kondo in view of Ma and further in view of Oelscher discloses the method according to claim 1, but does not disclose the additional limitations of claim 2. Lin discloses collecting data within a preset period as the sample data set; or collecting a preset amount of data as the sample data set (Lin, para [0031]; the size of the training data can be set as a threshold volume). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network to include such a selection. The motivation for doing so would be to provide varying degrees of accuracy and selecting from them (Lin, para [0002]).

Regarding claim 3, Kondo in view of Ma, further in view of Oelscher, and further in view of Lin discloses the limitations of claim 2.
Lin additionally discloses the method further comprising: receiving a recovery command and recovering the trained machine learning model to the initial machine learning model, wherein the recovery command is generated when a prediction accuracy corresponding to the new weightings is lower than a prediction accuracy of the initial machine learning model (Lin, para [0112]-[0113], with regard to Fig. 6; the training data set is selected based on accuracy. When the series is received, it is determined whether the trained predictive model satisfies a condition. If it does not meet the condition, the initial predictive model is provided). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network to include such a selection. The motivation for doing so would be to provide varying degrees of accuracy and selecting from them (Lin, para [0002]).

Regarding claim 4, Kondo in view of Ma, further in view of Oelscher, and further in view of Lin discloses the method of claim 3. Kondo already discloses wherein each electronic device is an edge computing device (Kondo [Page 5 Paragraph 2]; “In this embodiment described above, an example in which the terminal device 1 is a smartphone or a tablet has been shown. However, the terminal device 1 may be a large-scale arithmetic device for performing distributed learning.”).

Claims 10-12 recite substantially similar limitations to claims 2-4 and are similarly rejected.

Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter “Kondo”) in view of Lin et al. (US20120284213, hereinafter “Lin”) further in view of Oelscher et al.
(US20220067914A1, hereinafter “Oelscher”).

Regarding claim 5, Kondo discloses a method of training models in artificial intelligence (AI) applicable to at least one controller, the at least one controller is connected to a plurality of electronic devices, each electronic device is deployed with a same initial machine learning model, the method comprising … (Kondo [Abstract]; “Provided is a weight parameter learning method in a neural network capable of learning using various and large amounts of data while protecting privacy. A neural network system including a plurality of terminal devices 1 and a management device 2, wherein each of the terminal devices learns a storage unit 12 storing image data and a weight parameter for performing image identification. The image classifier 13 and the learned weight parameter are transmitted to the management device, the learning data generated based on the image data is transmitted to another terminal device or the management device, and the learning data generated by the other terminal device is transmitted. And a first control unit 14 for receiving data. The image discriminator learns the weight parameter not only using the image data stored in its own storage unit but also using learning data generated by another terminal device. The management device includes a second control unit 22 that receives learned weight parameters from a plurality of terminal devices and selects an optimal weight parameter.” wherein the plurality of terminal devices [1] all performing image identification and learning with regard to stored image data thus reads on a controller (management device [2]) connected to a plurality of electronic devices (plurality of terminal devices 1), wherein each terminal device is deployed with the same initial machine learning model (learned image identification model of each terminal device));

obtain a prediction accuracy and weightings of neurons of the trained machine learning model … receiving the prediction accuracy and the weightings of neurons sent by each of the plurality of electronic devices (Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals. The second control unit may group the plurality of terminal devices based on the weight parameter obtained by the learning received from the plurality of terminal devices, and select an optimum weight parameter for each group. By grouping terminal devices having similar trends, an appropriate weight parameter can be selected for each group according to the terminal device” wherein a prediction accuracy and weightings of the trained machine learning model are obtained and the controller (second control unit associated with the management device) receives the model learning of the terminal devices to facilitate its selection of optimized neuron weights corresponding to the highest recognition accuracy (highest prediction accuracy) for use as new weightings in the groups of terminal devices);

selecting one set of weightings of neurons corresponding to a highest prediction accuracy from a plurality of prediction accuracies sent by the plurality of electronic devices as the new weightings (Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals. The second control unit may group the plurality of terminal devices based on the weight parameter obtained by the learning received from the plurality of terminal devices, and select an optimum weight parameter for each group. By grouping terminal devices having similar trends, an appropriate weight parameter can be selected for each group according to the terminal device” wherein the controller selects the optimized neuron weights corresponding to the highest recognition accuracy (highest prediction accuracy) for use as the new weightings);

wherein the plurality of electronic devices obtain the plurality of prediction accuracies and different sets of weightings of neurons corresponding to the plurality of prediction accuracies by using different sample data sets to train the initial machine learning model (Kondo [Page 4 Paragraph 1]; “The image classifier 13 may perform unsupervised pre-learning using only the image data stored in the storage unit 12 of its own terminal device 1, but in cooperation with the management device 2 as described below. It is desirable to conduct prior learning without a teacher.” wherein the image classifiers for each terminal device being learned using image data specific to its own terminal device thus reads on using different sample data sets to train the initial machine learning model to facilitate distributed learning to obtain the plurality of prediction accuracies of corresponding neuron weightings);

and the at least one controller receives the plurality of prediction accuracies and different sets of weightings of neurons sent by the electronic device and the other electronic devices, and determines the set of weightings of neurons corresponding to the highest prediction accuracy from the different sets of weightings of neurons as the new weightings (Kondo [Abstract]; “The management device includes a second control unit 22 that receives learned weight parameters from a plurality of terminal devices and selects an optimal weight parameter” Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals”);

and sending the new weightings to each electronic device to make each electronic device update the weightings of neurons of the trained machine learning model to the new weightings, and make each electronic device optimize the trained machine learning model (Kondo [Page 4 Paragraph 18]; “When the distributed learning as described above is completed, the control unit 22 of the management device 2 transmits the optimum weight parameter to each terminal device 1 (step S12 in FIG. 3). The control unit 14 of the terminal device 1 receives the optimum weight parameter from the management device 2 and stores it in the storage unit 12 and sets it in the image classifier 13 (step S5). Thereby, the image discriminator 13 can perform learning and discrimination using the new weight parameter. The image discriminator 13 performs discrimination using the pre-update weight parameter and the post-update weight parameter (that is, the optimum weight parameter received from the management device 2) and confirms that the recognition accuracy does not decrease. The updated weight parameter may be adopted.” wherein the management device transmitting the optimum weight parameter to the plurality of terminal devices thus reads on optimizing the trained machine learning model by the plurality of electronic devices (to obtain the optimum weight parameter through distributed learning) and sending (by updating) the weightings of neurons of the trained machine learning model to each electronic device (transmitted new optimum weights to terminal devices being adopted)).

Kondo does not disclose, but Lin discloses, generating a control command and sending the control command to each electronic device, wherein the control command is used to trigger each electronic device to train the initial machine learning model (Lin, para [0059]-[0060]; generates commands to train a machine learning model). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network of Kondo to include a command to initiate training of the machine learning model based on the teachings of Lin. The motivation for doing so would be to provide updates from a group (Lin, para [0060]).
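Stripped of the claim mapping, the scheme attributed to Kondo and Ma above is a "best model selection" form of federated learning: every device trains the same initial model on its own sample data set, splits that data by a preset ratio for verification, and reports its accuracy and weights; the controller keeps the highest-accuracy set and broadcasts it as the new weightings. A runnable toy sketch under that reading (the model, data, and names are illustrative stand-ins, not any party's actual implementation):

```python
# Toy "best weights win" federated round: devices split/train/verify and
# report; the controller selects the highest-accuracy weights. All model
# and data details below are illustrative assumptions.

def split(dataset, ratio=0.8):
    # Ma-style division of a sample data set by a preset ratio.
    cut = int(len(dataset) * ratio)
    return dataset[:cut], dataset[cut:]

def train(weights, train_set):
    # Stand-in for local training: nudge each weight toward the training mean.
    mean = sum(train_set) / len(train_set)
    return [w + 0.1 * (mean - w) for w in weights]

def accuracy(weights, verify_set):
    # Stand-in for validation accuracy: closeness to the verification mean.
    mean = sum(verify_set) / len(verify_set)
    err = sum(abs(w - mean) for w in weights) / len(weights)
    return max(0.0, 1.0 - err)

initial_weights = [0.0, 0.0]  # same initial model deployed on every device

device_data = {  # each device holds a different sample data set
    "device_a": [0.2, 0.3, 0.25, 0.3, 0.2],
    "device_b": [0.9, 1.0, 0.95, 1.0, 0.9],
}

# Device side: split by the preset ratio, train, verify, report.
reports = {}
for name, data in device_data.items():
    train_set, verify_set = split(data)
    w = train(initial_weights, train_set)
    reports[name] = (accuracy(w, verify_set), w)

# Controller side: keep the weightings with the highest prediction accuracy
# and broadcast them as the new weightings for every device.
best_acc, new_weightings = max(reports.values(), key=lambda r: r[0])
```

Kondo's grouping variant (an optimum weight parameter per group of similar terminals) would replace the single `max` with one selection per group.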
Kondo in view of Lin does not disclose, but Oelscher discloses, wherein the sample data set comprises images of defect in a product (Oelscher, para [0094, 0109]; images broken into two image classes of “ok” and “defective”). It would have been obvious to one of ordinary skill in the art to have modified the combination's determination of weightings for an image classification model to include the application of defect classification in the image. The motivation for doing so would have been determining defects that occur while carrying out a surface modification (Oelscher, Abstract).

Regarding claim 7, Kondo in view of Lin and further in view of Oelscher discloses the method according to claim 5. Lin additionally discloses, before sending the new weightings to each electronic device, the method further comprising: comparing a prediction accuracy corresponding to the new weightings with a prediction accuracy of the initial machine learning model; wherein in a case that the prediction accuracy corresponding to the new weightings is higher than the prediction accuracy of the initial machine learning model, sending the new weightings to each electronic device; and in a case that the prediction accuracy corresponding to the new weightings is lower than the prediction accuracy of the initial machine learning model, sending a restoration command to each electronic device, to make each electronic device restore the trained machine learning model to the initial machine learning model (Lin, para [0112]-[0113], with regard to Fig. 6; the training data set is selected based on accuracy. When the series is received, it is determined whether the trained predictive model satisfies a condition. If it does not meet the condition, the initial predictive model is provided). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network of Kondo to include such a selection. The motivation for doing so would be to provide varying degrees of accuracy and selecting from them (Lin, para [0002]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter “Kondo”) in view of Lin et al. (US20120284213, hereinafter “Lin”) further in view of Oelscher et al. (US20220067914A1, hereinafter “Oelscher”) further in view of Ben-Itzhak et al. (US20220101189, hereinafter “Ben-Itzhak”).

Regarding claim 8, Kondo in view of Lin and further in view of Oelscher discloses the method according to claim 7. The combination fails to explicitly disclose, but Ben-Itzhak discloses, wherein conditions for ending a process of training the models comprise any of the following: a training duration is greater than a preset duration; a prediction accuracy of the trained machine learning model is greater than a preset prediction accuracy; a number of training sessions is greater than a preset value; or receiving a stop command (Ben-Itzhak, para [0020]; termination command received from parameter server 108). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network training of Kondo/Lin/Oelscher to include the federated learning training conditions of Ben-Itzhak. The motivation for doing so would have been to overcome training challenges in federated learning (Ben-Itzhak, para [0002]).

Claims 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter “Kondo”) in view of Ma et al. (US20190258904A1, hereinafter “Ma”) further in view of Oelscher et al. (US20220067914A1, hereinafter “Oelscher”) further in view of Lin et al. (US20120284213, hereinafter “Lin”).

Regarding claim 13, Kondo in view of Ma and further in view of Oelscher discloses the electronic device according to claim 9.
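Claim 8's ending conditions, as Ben-Itzhak is applied above, are disjunctive: any one of duration, accuracy, session count, or a received stop command ends training. Such a guard can be sketched as follows (the thresholds and names are illustrative, not drawn from any of the references):

```python
import time

def should_stop(started_at, sessions, accuracy, stop_requested,
                max_duration_s=3600.0, target_accuracy=0.95, max_sessions=100):
    # Any single condition ends the training process (claim 8 lists them
    # as alternatives). All defaults here are illustrative presets.
    return (
        time.monotonic() - started_at > max_duration_s  # duration exceeded
        or accuracy > target_accuracy                   # preset accuracy reached
        or sessions > max_sessions                      # session count exceeded
        or stop_requested                               # stop command received
    )
```

In a training loop this would be checked once per round, with `stop_requested` set when the controller's stop command arrives.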
Kondo in view of Ma and further in view of Oelscher already discloses: obtain a prediction accuracy and weightings of neurons of the trained machine learning model, receiving the prediction accuracy and the weightings of neurons sent by each electronic device, and selecting new weightings from a plurality of the received weightings according to a preset rule and a plurality of the received prediction accuracies (Kondo [Page 2 “Description” Paragraph 13]; “The second control unit may select a weight parameter having the highest recognition accuracy among the weight parameters obtained by the learning received from the plurality of terminal devices as an optimum weight parameter. As a result, the optimum weight parameter can be used in other terminals. The second control unit may group the plurality of terminal devices based on the weight parameter obtained by the learning received from the plurality of terminal devices, and select an optimum weight parameter for each group. By grouping terminal devices having similar trends, an appropriate weight parameter can be selected for each group according to the terminal device” wherein a prediction accuracy and weightings of the trained machine learning model are obtained and the controller (second control unit associated with the management device) receives the model learning of the terminal devices to facilitate its selection of optimized neuron weights corresponding to the highest recognition accuracy (highest prediction accuracy) for use as new weightings in the groups of terminal devices);

and send the new weightings to each of the other electronic devices, to make each electronic device update the weightings of neurons of the trained machine learning model to the new weightings (Kondo [Page 4 Paragraph 18]; “When the distributed learning as described above is completed, the control unit 22 of the management device 2 transmits the optimum weight parameter to each terminal device 1 (step S12 in FIG. 3). The control unit 14 of the terminal device 1 receives the optimum weight parameter from the management device 2 and stores it in the storage unit 12 and sets it in the image classifier 13 (step S5). Thereby, the image discriminator 13 can perform learning and discrimination using the new weight parameter. The image discriminator 13 performs discrimination using the pre-update weight parameter and the post-update weight parameter (that is, the optimum weight parameter received from the management device 2) and confirms that the recognition accuracy does not decrease. The updated weight parameter may be adopted.” wherein the management device transmitting the optimum weight parameter to the plurality of terminal devices thus reads on sending the new weightings of neurons of the trained machine learning model to each electronic device (transmitted new optimum weights to terminal devices being adopted)).

The combination of Kondo in view of Ma and further in view of Oelscher does not disclose, but Lin discloses: generate a control command and send the control command to other electronic devices, wherein the control command is used to trigger each of the other electronic devices to train the initial machine learning model (Lin, para [0059]-[0060]; generates commands to train a machine learning model). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network of Kondo/Ma/Oelscher to include a command to initiate training of the machine learning model based on the teachings of Lin. The motivation for doing so would be to provide updates from a group (Lin, para [0060]).
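The accuracy-gated rollback mapped above for claims 3 and 7 reduces to a compare-then-dispatch step on the controller: if the new weightings underperform the initial model, a restoration command reverts every device; otherwise the new weightings are pushed out. A minimal sketch under that reading (names and structure are illustrative, not Lin's Fig. 6 implementation):

```python
def apply_or_restore(device_weights, initial_weights, new_weights,
                     new_accuracy, initial_accuracy):
    # Compare the new weightings' accuracy against the initial model's.
    # Lower accuracy triggers a restore; otherwise the update is broadcast.
    if new_accuracy < initial_accuracy:
        for device in device_weights:
            device_weights[device] = list(initial_weights)  # revert to initial model
        return "restore"
    for device in device_weights:
        device_weights[device] = list(new_weights)  # adopt the new weightings
    return "update"

devices = {"a": [0.5], "b": [0.7]}
command = apply_or_restore(devices, initial_weights=[0.0], new_weights=[0.9],
                           new_accuracy=0.6, initial_accuracy=0.8)
# command is "restore" here, and both devices revert to the initial weights
```

The same gate, run before broadcasting, is what claim 15 recites on the device-side processor.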
Regarding claim 15, Kondo in view of Ma further in view of Oelscher discloses the electronic device according to claim 9. Kondo in view of Ma further in view of Oelscher does not disclose the additional limitations of claim 15. Lin discloses before sending the new weightings to each of the other electronic devices, the processor further to: compare a prediction accuracy corresponding to the new weightings with a prediction accuracy of the initial machine learning model; wherein in a case that the prediction accuracy corresponding to the new weightings is higher than the prediction accuracy of the initial machine learning model, send the new weightings to each electronic device; and in a case that the prediction accuracy corresponding to the new weightings is lower than the prediction accuracy of the initial machine learning model, send a restoration command to each electronic device, to make each electronic device restore the trained machine learning model to the initial machine learning model (Lin, para [0112-113], with regard to FIG. 6, a training data set is selected based on accuracy; when the series is received, it is determined whether the trained predictive model satisfies a condition, and if it does not meet the condition, the initial predictive model is provided). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the combination's network to include such a selection. The motivation for doing so would be to provide varying degrees of accuracy and to select from them (Lin, para [0002]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (JP2017174298A, hereinafter "Kondo") in view of Ma et al. (US20190258904A1, hereinafter "Ma") further in view of Oelscher et al. (US20220067914A1, hereinafter "Oelscher") further in view of Lin et al. (US20120284213, hereinafter "Lin") further in view of Ben-Itzhak et al.
(US20220101189, hereinafter "Ben-Itzhak"). Regarding claim 16, Kondo in view of Ma in view of Oelscher further in view of Lin discloses the electronic device according to claim 15. The combination does not disclose the additional limitations of claim 16. Ben-Itzhak discloses wherein conditions for ending a process of training the models comprise any of the following: a training duration is greater than a preset duration; a prediction accuracy of the trained machine learning model is greater than a preset prediction accuracy; the number of training sessions is greater than a preset value; or receiving a stop command (Ben-Itzhak, para [0020], termination command received from parameter server 108). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the network training of Kondo/Ma/Oelscher/Lin to include the federated learning training conditions of Ben-Itzhak. The motivation for doing so would have been to overcome training challenges in federated learning (Ben-Itzhak, para [0002]).

Response to Arguments

Applicant's arguments with respect to claims 1-5, 7-13, and 15-16 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: "SYSTEMS AND METHODS FOR ROBUST FEDERATED TRAINING OF NEURAL NETWORKS" (US20210049473A1), which discloses federated training of neural networks focused on distributed weight optimization across electronic devices.
"SEQUENTIAL ENSEMBLE MODEL TRAINING FOR OPEN SETS" (US11526693B1), which discloses training a model through the learnings of another model performed upon image inputs.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN J KIM, whose telephone number is (571) 272-0523. The examiner can normally be reached 9-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN J KIM/
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141
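The claim-15 and claim-16 limitations rejected above describe two pieces of control logic: a guard step (adopt the new weightings only if their prediction accuracy beats the initial model, otherwise issue a restoration command) and a set of disjunctive stopping conditions (training duration, accuracy threshold, session count, or a stop command). A minimal sketch of that logic, with all function names and default thresholds hypothetical rather than drawn from the claims:

```python
import time

def decide_update(new_accuracy, initial_accuracy):
    """Claim-15 guard: send the new weightings only if they improve on
    the initial model's prediction accuracy; otherwise restore it."""
    if new_accuracy > initial_accuracy:
        return "send_new_weights"
    return "restore_initial_model"

def should_stop(started_at, accuracy, sessions, stop_received,
                max_seconds=3600.0, target_accuracy=0.95, max_sessions=100):
    """Claim-16 conditions: training ends when ANY one of them is met."""
    return (time.monotonic() - started_at > max_seconds   # duration exceeded
            or accuracy > target_accuracy                  # accuracy target hit
            or sessions > max_sessions                     # session cap hit
            or stop_received)                              # explicit stop command

# The guard step: 0.92 beats the initial 0.90, so the new weights are sent.
decision = decide_update(0.92, 0.90)
# A run that just started but already exceeds the accuracy target stops.
done = should_stop(time.monotonic(), accuracy=0.96, sessions=3, stop_received=False)
```

Note the claims leave the equal-accuracy case unspecified; this sketch restores the initial model in that case, which is one of two defensible readings.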

Prosecution Timeline

Nov 02, 2021
Application Filed
May 31, 2025
Non-Final Rejection — §103
Sep 02, 2025
Response Filed
Nov 13, 2025
Final Rejection — §103
Jan 29, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+80.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
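The page does not state how its projection figures combine, but one plausible reading of the numbers shown (33% base, +80.0% interview lift, 99% with interview) is that the lift is additive in percentage points with a cap at 99%. A hypothetical reconstruction under that assumption:

```python
def projected_grant_probability(base_pct, interview_lift_pct, cap_pct=99.0):
    """Hypothetical model: career allow rate plus the interview lift
    in percentage points, capped. Not the page's documented formula."""
    return min(base_pct + interview_lift_pct, cap_pct)

base = round(2 / 6 * 100)                                # 2 granted of 6 resolved
with_interview = projected_grant_probability(base, 80.0)  # additive, capped at 99
```

With this examiner's 2-of-6 record, the base rounds to 33 and the capped with-interview figure lands at 99, matching the dashboard.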
