Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-8 are pending for examination. Claims 1, 6, and 7 are independent.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 8 of copending Application No. 18/204,031 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claim limitations of copending Application No. 18/204,031 are substantially similar, as highlighted in the table below. The difference is that the instant application (18/203,950) recites calculating a global parameter based on both a local parameter and a weight. Under the broadest reasonable interpretation, a local parameter and a weight are synonymous. The claims of the instant application are therefore anticipated by the claims of copending Application No. 18/204,031.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Instant Application 18/203,950, Claim 1: A server apparatus comprising:
at least one memory configured to store instructions; and
at least one processor configured to execute the instructions to:
receive, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches;
calculate a parameter of a global model based on the local model parameter and the weight received by the receiving unit; and
transmit the parameter calculated by the calculating unit to the client apparatuses.
Reference Application 18/204,031, Claim 1: A server apparatus comprising:
at least one memory configured to store instructions; and
at least one processor configured to execute the instructions to:
receive, from a plurality of client apparatuses that perform federated learning of a neural network model having multiplex branches capable of performing different operations on a common input and thereby learn a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches, the local model parameters corresponding to each of the branches;
calculate a degree of similarity between the local model parameters corresponding to each of the branches, received from different client apparatuses;
calculate a parameter of a global model based on the local model parameter selected based on a result of calculation by the similarity degree calculating unit; and
transmit the parameter calculated by the parameter calculating unit to the client apparatus.
Instant Application 18/203,950, Claim 6: A calculation method by an information processing apparatus, the calculation method comprising:
receiving, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches;
calculating a parameter of a global model based on the received local model parameter and weight; and
transmitting the calculated parameter to the client apparatuses.
Reference Application 18/204,031, Claim 8: A calculation method by an information processing apparatus, the method comprising:
receiving, from a plurality of client apparatuses that perform federated learning of a neural network model having multiplex branches capable of performing different operations on a common input and thereby learn a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches, the local model parameters corresponding to each of the branches;
calculating a degree of similarity between the local model parameters corresponding to each of the branches, received from different client apparatuses;
calculating a parameter of a global model based on the local model parameter selected based on a result of the calculating; and
transmitting the calculated parameter to the client apparatus.
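For context, the superposition of branch outputs recited by both applications can be illustrated with a brief sketch (a hypothetical NumPy illustration supplied for clarity only; the branch operations, shapes, and values are assumptions, not code from either application):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                    # common input fed to every branch

# Hypothetical per-branch parameters (the local model parameter of each branch)
# and a weight for each branch used in superposing the branch outputs.
branch_params = [rng.normal(size=(8, 3)) for _ in range(2)]
alpha = np.array([0.7, 0.3])

# Each branch performs a different operation on the common input.
ops = [np.tanh, lambda z: np.maximum(z, 0.0)]  # e.g., a tanh branch and a ReLU branch
branch_outputs = [op(x @ W) for op, W in zip(ops, branch_params)]

# Superpose the branch outputs using the per-branch weights.
y = sum(a * out for a, out in zip(alpha, branch_outputs))
print(y.shape)                                 # (4, 3)
```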
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-5 and 7-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation "the receiving unit" in line 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the calculating unit" in line 10. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation "the learning unit" in line 8. There is insufficient antecedent basis for this limitation in the claim.
Dependent claims 2-5 and 8 do not resolve the deficiencies of independent claims 1 and 7 and are therefore also rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
According to the first part of the analysis, in the instant case, claims 1-5 are directed to a server apparatus, claim 6 is directed to a method, and claims 7-8 are directed to a client apparatus. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Regarding Claim 1:
2A Prong 1:
calculate a parameter of a global model based on the local model parameter and the weight received by the receiving unit; (This step of calculating a parameter is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
A server apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: (The server apparatus comprising memory and processor is understood to recite generic computer elements - See MPEP 2106.05(f).)
receive, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
transmit the parameter calculated by the calculating unit to the client apparatuses. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application because they amount to insignificant extra-solution activity in combination with generic computer functions that merely implement the abstract idea identified above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
A server apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: (The server apparatus comprising memory and processor is understood to recite generic computer elements - See MPEP 2106.05(f).)
receive, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
transmit the parameter calculated by the calculating unit to the client apparatuses. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception because they are well-understood, routine, and conventional activity combined with generic computer functions that merely implement the abstract idea identified above.
Regarding Claim 6
2A Prong 1:
calculating a parameter of a global model based on the received local model parameter and weight; (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
A calculation method by an information processing apparatus, the calculation method comprising: (The information processing apparatus is understood to be a generic computer element - See MPEP 2106.05(f).)
receiving, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
transmitting the calculated parameter to the client apparatuses. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application because they amount to insignificant extra-solution activity in combination with generic computer functions that merely implement the abstract idea identified above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
A calculation method by an information processing apparatus, the calculation method comprising: (The information processing apparatus is understood to be a generic computer element - See MPEP 2106.05(f).)
receiving, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
transmitting the calculated parameter to the client apparatuses. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception because they are well-understood, routine, and conventional activity combined with generic computer functions that merely implement the abstract idea identified above.
Regarding Claim 7
2A Prong 1:
learn, using training data owned by the client apparatus, a local model parameter of each of multiplex branches included by a neural network model having the multiplex branches capable of performing different operations on a common input and a weight for each branch used in superposing outputs from the respective multiplex branches; (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
A client apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: (The client apparatus comprising memory and processor is understood to recite generic computer elements - See MPEP 2106.05(f).)
transmit the local model parameter and the weight learned by the learning unit to a server apparatus that generates a global model based on the local model parameter. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application because they amount to insignificant extra-solution activity in combination with generic computer functions that merely implement the abstract idea identified above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
A client apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: (The client apparatus comprising memory and processor is understood to recite generic computer elements - See MPEP 2106.05(f).)
transmit the local model parameter and the weight learned by the learning unit to a server apparatus that generates a global model based on the local model parameter. (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception because they are well-understood, routine, and conventional activity combined with generic computer functions that merely implement the abstract idea identified above.
Regarding Claim 2
2A Prong 1:
calculate the parameter of the global model based on a previously stored number of data owned by the client apparatus, the local model parameter, and the weight. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2 & 2B: The claim does not recite any additional elements.
Regarding Claim 3
2A Prong 1:
calculate the parameter of the global model by averaging the local model parameters after weighting with the weights. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2 & 2B: The claim does not recite any additional elements.
Regarding Claim 4
2A Prong 1:
wherein the processor is configured to execute the instructions to calculate the parameter of the global model based on the number of data, the local model parameter, and the weight by solving an equation shown by Equation 1:
[Equation 1]
W_{i,j} = \frac{1}{A_j} \sum_{k=1}^{K} n_k \, \alpha_j^{(k)} W_{i,j}^{(k)}
A_j = \sum_{k=1}^{K} n_k \, \alpha_j^{(k)}
where n_k indicates the number of data, k indexes the client, W_{i,j} indicates the parameter of the j-th branch of the i-th layer, and α indicates the weight. (This step is understood to be a recitation of a mathematical calculation.)
2A Prong 2 & 2B: The claim does not recite any additional elements.
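For clarity, Equation 1 can be expressed as a short numerical sketch (assuming NumPy; the array shapes and sample values below are hypothetical):

```python
import numpy as np

def aggregate_branch(n, alpha_j, W_ij):
    """Equation 1: global parameter of the j-th branch of the i-th layer.

    n       -- (K,) number of data n_k held by each of the K clients
    alpha_j -- (K,) branch weight alpha_j^(k) reported by each client
    W_ij    -- (K, ...) local parameter W_{i,j}^(k) from each client
    """
    coeff = n * alpha_j                         # n_k * alpha_j^(k) per client
    A_j = coeff.sum()                           # A_j = sum_k n_k * alpha_j^(k)
    # W_{i,j} = (1 / A_j) * sum_k n_k * alpha_j^(k) * W_{i,j}^(k)
    return np.tensordot(coeff, W_ij, axes=1) / A_j

# Hypothetical example: K = 3 clients, each holding a 2x2 branch parameter.
n = np.array([100.0, 50.0, 150.0])
alpha_j = np.array([0.6, 0.9, 0.3])
W_ij = np.stack([np.full((2, 2), k + 1.0) for k in range(3)])
print(aggregate_branch(n, alpha_j, W_ij))
```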
Regarding Claim 5
2A Prong 1: The claim does not recite any abstract idea.
2A Prong 2 & 2B:
wherein the weight is a value learned by each of the client apparatuses based on training data owned by the client apparatus. (Training a model is understood as mere instructions to implement an abstract idea on a computer - see MPEP 2106.05(f).)
Regarding Claim 8
2A Prong 1:
learn the weight in a state where the parameter of the global model is fixed and thereafter learn the local model parameter in a state where the weight is fixed. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)
2A Prong 2:
receive a parameter of the global model from the server apparatus; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and data gathering. See MPEP 2106.05(g).)
2B:
receive a parameter of the global model from the server apparatus; (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts. See MPEP 2106.05(d)(II).)
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3 and 5-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. ("Heterogeneous Federated Learning Through Multi-Branch Network", hereinafter "Wang").
Regarding Claim 1
Wang discloses: A server apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: ([Sections 3.1-3.2 and Section 4] disclose a server.)
receive, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; ([Section 3.1, Algorithms 1-2, and Fig 1] describe a multi-branch neural network framework in which the server receives model parameters from clients and performs MFedAvg aggregation.)
calculate a parameter of a global model based on the local model parameter and the weight received by the receiving unit ([Section 3.1, Algorithms 1-2, and Fig 1] describe calculating MFedAvg (i.e., the parameter of a global model).); and
transmit the parameter calculated by the calculating unit to the client apparatuses. ([Section 3.1, Algorithms 1-2, and Fig 1] describe the server sending the updated parameter to the clients in the next round.)
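Schematically, the receive/calculate/transmit sequence mapped above can be sketched as follows (an illustrative reconstruction of the claimed round using a simple weighted average as the aggregation rule; the names and shapes are hypothetical, and this is not Wang's MFedAvg implementation):

```python
import numpy as np

def server_round(client_updates):
    """One schematic round: receive each client's local branch parameter and
    branch weight, calculate a global parameter, and return it for
    transmission back to the client apparatuses."""
    weights = np.array([u["branch_weight"] for u in client_updates])
    params = np.stack([u["local_param"] for u in client_updates])
    # Calculate the global parameter from the received parameters and weights.
    return np.tensordot(weights, params, axes=1) / weights.sum()

# Hypothetical updates received from two client apparatuses for one branch.
updates = [
    {"branch_weight": 0.4, "local_param": np.ones((3, 3))},
    {"branch_weight": 0.9, "local_param": 2.0 * np.ones((3, 3))},
]
global_param = server_round(updates)            # then transmitted to the clients
print(global_param)
```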
Regarding Claim 2
Wang discloses: The server apparatus according to Claim 1, wherein the processor is configured to execute the instructions to calculate the parameter of the global model based on a previously stored number of data owned by the client apparatus, the local model parameter, and the weight. ([Section 3.1, Algorithms 1-2, and Fig 1] describe calculating MFedAvg (i.e., the parameter of a global model) based on the numbers of samples n of the edge devices (i.e., client apparatuses) and the local parameters and weights.)
Regarding Claim 3
Wang discloses: The server apparatus according to Claim 2, wherein the processor is configured to execute the instructions to calculate the parameter of the global model by averaging the local model parameters after weighting with the weights. ([Section 3.1, Algorithms 1-2, and Fig 1] describe calculating MFedAvg.)
Regarding Claim 5
Wang discloses: The server apparatus according to Claim 1, wherein the weight is a value learned by each of the client apparatuses based on training data owned by the client apparatus. ([Section 3.1, Section 4.3, Algorithms 1-2, and Fig 1] describe training on local samples by the clients.)
Regarding Claim 6
Wang discloses: A calculation method by an information processing apparatus, the calculation method comprising: ([Sections 3.1-3.2 and Section 4] disclose a server apparatus.)
receiving, from each of a plurality of client apparatuses performing federated learning of a neural network model having multiplex branches capable of performing different operations on a common input, a local model parameter of each of the multiplex branches and a weight for each branch used in superposing outputs from the respective multiplex branches; ([Section 3.1, Algorithms 1-2, and Fig 1] describe a multi-branch neural network framework in which the server receives model parameters from clients and performs MFedAvg aggregation.)
calculating a parameter of a global model based on the received local model parameter and weight ([Section 3.1, Algorithms 1-2, and Fig 1] describe calculating MFedAvg (i.e., the parameter of a global model).); and
transmitting the calculated parameter to the client apparatuses. ([Section 3.1, Algorithms 1-2, and Fig 1] describe the server sending the updated parameter to the clients in the next round.)
Regarding Claim 7
Wang discloses: A client apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: ([Sections 3.1-3.2 and Section 4] disclose a client apparatus.)
learn, using training data owned by the client apparatus, a local model parameter of each of multiplex branches included by a neural network model having the multiplex branches capable of performing different operations on a common input and a weight for each branch used in superposing outputs from the respective multiplex branches; ([Section 3.1, Algorithms 1-2, and Fig 1] describe a multi-branch neural network framework in which clients learn model parameters that are then aggregated by MFedAvg.) and
transmit the local model parameter and the weight learned by the learning unit to a server apparatus that generates a global model based on the local model parameter. ([Section 3.1, Algorithms 1-2, and Fig 1] describe the clients sending the learned parameters to the server to generate a global model (i.e., the aggregated MFedAvg model).)
Regarding Claim 8
Wang discloses: The client apparatus according to Claim 7, wherein the processor is configured to execute the instructions to:
receive a parameter of the global model from the server apparatus ([Section 3.1, Algorithms 1-2, and Fig 1] describe the server sending the updated parameter to the clients in the next round.); and
learn the weight in a state where the parameter of the global model is fixed and thereafter learn the local model parameter in a state where the weight is fixed. ([Section 3.1, Algorithms 1-2, and Fig 1] describe training the client parameters while the global (i.e., MFedAvg) aggregate has not yet been updated (i.e., is fixed), and thereafter updating the global model.)
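For illustration, the alternating scheme recited in Claim 8 can be sketched as follows (a minimal NumPy example with a linear two-branch model and hand-derived gradients; the model, data, and step sizes are hypothetical, not taken from the application or from Wang):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))                   # training data owned by the client
y = rng.normal(size=(64, 1))

# Local branch parameters (initialized from the received global parameter)
# and the per-branch weights; a linear two-branch model keeps gradients simple.
W = [0.1 * rng.normal(size=(8, 1)) for _ in range(2)]
alpha = np.array([0.5, 0.5])
lr = 0.01

def predict(X, W, alpha):
    return sum(a * (X @ Wj) for a, Wj in zip(alpha, W))

# Step 1: learn the weight while the global model parameter stays fixed.
for _ in range(100):
    r = predict(X, W, alpha) - y               # residual of the squared loss
    alpha -= lr * np.array([2.0 * np.mean(r * (X @ Wj)) for Wj in W])

# Step 2: learn the local model parameter while the weight stays fixed.
for _ in range(100):
    r = predict(X, W, alpha) - y
    for j in range(len(W)):
        W[j] = W[j] - lr * (2.0 / len(X)) * alpha[j] * (X.T @ r)

print(alpha, float(np.mean((predict(X, W, alpha) - y) ** 2)))
```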
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. ("Heterogeneous Federated Learning Through Multi-Branch Network", hereinafter "Wang") in view of Yang et al. ("CondConv: Conditionally Parameterized Convolutions for Efficient Inference", hereinafter "Yang").
Regarding Claim 4
Wang discloses: The server apparatus according to Claim 2, wherein the processor is configured to execute the instructions to calculate the parameter of the global model based on the number of data, the local model parameter, and the weight by solving an equation shown by Equation 1:
[Equation 1]
W_{i,j} = \frac{1}{A_j} \sum_{k=1}^{K} n_k \, \alpha_j^{(k)} W_{i,j}^{(k)}
A_j = \sum_{k=1}^{K} n_k \, \alpha_j^{(k)}
where n_k indicates the number of data, k indexes the client, and W_{i,j} indicates the parameter of the j-th branch of the i-th layer. ([Section 3.1 and Algorithms 1-2] describe calculating MFedAvg (i.e., the parameter of a global model) based on the numbers of samples n of the edge devices (i.e., client apparatuses) and the local parameters and weights.)
Wang does not explicitly disclose: α indicates the weight.
However, Yang discloses, in the same field of endeavor: α indicates the weight, and the product \alpha_j^{(k)} W_{i,j}^{(k)}. ([Section 1, Section 3, and Fig 1] disclose a weight α and computing αW.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the weighted-kernel function disclosed by Yang in the method of federated learning through multi-branch networks disclosed by Wang in order to weight a branch. The modification would have been obvious because one of ordinary skill in the art would have been motivated to utilize the weighted kernels disclosed by Yang, as all the references are in the field of machine learning, and would have been motivated to make the combination in order to parameterize convolutional kernels.
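The weighted-kernel feature relied upon from Yang can be illustrated schematically (a minimal sketch of combining expert kernels as Σ_j α_j·W_j in the spirit of CondConv, shown for a dense layer rather than a convolution; the routing function and shapes are hypothetical, not Yang's code):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(16,))                     # a single input example

# Hypothetical expert kernels W_j and a learned routing matrix.
experts = [rng.normal(size=(16, 4)) for _ in range(3)]
routing = rng.normal(size=(16, 3))
alpha = 1.0 / (1.0 + np.exp(-(x @ routing)))   # per-example weights via sigmoid routing

# CondConv-style combination: parameterize one kernel as sum_j alpha_j * W_j,
# then apply the combined kernel to the input once.
W_combined = sum(a * Wj for a, Wj in zip(alpha, experts))
print((x @ W_combined).shape)                  # (4,)
```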
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li et al. (US 20250094822 A1, hereinafter "Li") describes federated learning with parallel layers (Para 0104). Xu et al. (CN114386570A) describes federated matching and multi-branch neural networks.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEWODROS E MENGISTU whose telephone number is (571)270-7714. The examiner can normally be reached Mon-Fri 9:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDULLAH KAWSAR can be reached at (571)270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TEWODROS E MENGISTU/ Examiner, Art Unit 2127