Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This non-final Office action is responsive to U.S. patent application No. 18/828,073, filed on September 9, 2024.
Claims 1-20 are pending.
Claims 1-20 are rejected.
Priority
The application claims priority under 35 U.S.C. 120 to U.S. non-provisional application No. 17/635,332, filed on February 14, 2022, which claims priority under 35 U.S.C. 365(a) to international application No. PCT/EP2020/072122, filed on August 6, 2020, which claims priority under 35 U.S.C. 119(e) to U.S. provisional application No. 62/887,854, filed on August 16, 2019.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 3/5/2025 and 9/9/2024 are compliant with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Allowable Subject Matter
Claims 2-11 and 20 would be allowable if the pending double patenting rejections are overcome.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,132,619. Although the claims at issue are not identical, they are not patentably distinct from each other. The table below shows the mapping between claims 1-11 of the instant application and claims 1-10 of the issued patent. Similar mappings exist between claims 12-20 of the instant application and claims 11-18 of the issued patent.
Application no. 18/828,073
Patent No. 12,132,619
1. A method performed by a first network entity in a communications network, the first network entity belonging to a plurality of network entities configured to participate in collaborative learning, the method comprising:
receiving a request from a second network entity in the communications network, the request comprising one or more selection criteria for selecting candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm,
wherein the candidate network entities comprise at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB); and
transmitting, to the second network entity in the communications network, a response message comprising an indication of whether or not the first network entity satisfies the one or more selection criteria.
2. A network entity for a communications network, the network entity comprising processing circuitry and a non-transitory machine-readable medium storing instructions which, when executed by the processing circuitry, cause the network entity to perform operations comprising:
obtaining identification information for a plurality of candidate network entities in the communications network, the identification information indicating that the candidate network entities are configured to participate in collaborative learning;
sending a request for the candidate network entities, the request comprising one or more selection criteria;
receiving one or more response messages comprising an indication of which of the candidate network entities satisfy the one or more selection criteria; and
based on the indication in the one or more response messages, selecting one or more of the plurality of candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm,
(3. The network entity of claim 2, wherein the one or more selection criteria comprise one or more of the following: a criterion relating to a configuration of a candidate network entity of the candidate network entities; a criterion relating to performance requirements for the candidate network entity; a criterion relating to availability of training data at the candidate network entity for training the model; and a criterion relating to a property of training data available at the candidate network entity.)
wherein the selected one or more of the plurality of candidate network entities includes at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB).
4. The network entity of claim 2, wherein the one or more selection criteria comprise a criterion relating to one or more metrics indicative of a performance of a preliminary model obtained by training the model at the candidate network entity using the machine learning algorithm.
5. The network entity of claim 4, wherein sending a request for the candidate network entities comprises initiating, at the candidate network entities, training of the model to obtain the preliminary model.
6. The network entity of claim 2, wherein sending a request for the candidate network entities comprises sending a request for the candidate network entities to an operations, administration and maintenance, OAM, entity in the communications network.
7. The network entity of claim 6, wherein the request further comprises a maximum number of candidate network entities to be selected by the OAM for participating in the collaborative learning process, and the one or more response messages comprise an indication for only a subset of the plurality of candidate network entities.
8. The network entity of claim 2, wherein the network entity is further caused to perform operations comprising: receiving, for at least one candidate network entity in the plurality of candidate network entities, one or more participation criteria for participating in the collaborative learning process, wherein selection of the one or more of the plurality of candidate network entities to participate in the collaborative learning process is further based on whether or not the one or more participation criteria are satisfied.
9. The network entity of claim 8, wherein the one or more participation criteria are comprised in at least one of the one or more response messages.
10. The network entity of claim 8, wherein the one or more participation criteria for the at least one candidate network entity relate to one or more of the following: a network slice operated on by the network entity; and a threshold number of candidate network entities participating in the collaborative learning process.
11. The network entity of claim 2, wherein one or more of the following apply: the network entity is a network data analytics function, NWDAF; and the network entity is in a core network of a communications network.
1. A method performed by a first network entity in a communications network, the first network entity belonging to a plurality of network entities configured to participate in collaborative learning, the method comprising:
receiving a request from a second network entity in the communications network, the request comprising one or more selection criteria for selecting candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm; and
(wherein the candidate network entity comprises at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), and evolved NodeB (eNB), and a next generation NodeB (gNB).)
transmitting, to the second network entity in the communications network, a response message comprising an indication of whether or not the first network entity satisfies the one or more selection criteria, wherein the one or more selection criteria comprise a criterion relating to a configuration of a candidate network entity among a plurality of candidate network entities in the communications network, a criterion relating to performance requirements for the candidate network entity, a criterion relating to availability of training data at the candidate network entity for training the model, and a criterion relating to a property of training data available at the candidate network entity.
2. A network entity for a communications network, the network entity comprising processing circuitry and a non-transitory machine-readable medium storing instructions which, when executed by the processing circuitry, cause the network entity to perform operations comprising:
obtaining identification information for a plurality of candidate network entities in the communications network, the identification information indicating that the candidate network entities are configured to participate in collaborative learning;
sending a request for the candidate network entities, the request comprising one or more selection criteria;
receiving one or more response messages comprising an indication of which of the candidate network entities satisfy the one or more selection criteria; and
based on the indication in the one or more response messages, selecting one or more of the plurality of candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm,
wherein the one or more selection criteria comprise a criterion relating to a configuration of a candidate network entity among the plurality of candidate network entities in the communications network, a criterion relating to performance requirements for the candidate network entity, a criterion relating to availability of training data at the candidate network entity for training the model, and a criterion relating to a property of training data available at the candidate network entity,
wherein the selected one or more of the plurality of candidate network entities includes at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB).
3. The network entity of claim 2, wherein the one or more selection criteria comprise a criterion relating to one or more metrics indicative of a performance of a preliminary model obtained by training the model at the candidate network entity using the machine learning algorithm.
4. The network entity of claim 3, wherein sending a request for the candidate network entities comprises initiating, at the candidate network entities, training of the model to obtain the preliminary model.
5. The network entity of claim 2, wherein sending a request for the candidate network entities comprises sending a request for the candidate network entities to an operations, administration and maintenance, OAM, entity in the communications network.
6. The network entity of claim 5, wherein the request further comprises a maximum number of candidate network entities to be selected by the OAM for participating in the collaborative learning process, and the one or more response messages comprise an indication for only a subset of the plurality of candidate network entities.
7. The network entity of claim 2, wherein the network entity is further caused to perform operations comprising: receive, for at least one candidate network entity in the plurality of candidate network entities, one or more participation criteria for participating in the collaborative learning process, wherein selection of the one or more of the plurality of candidate network entities to participate in the collaborative learning process is further based on whether or not the one or more participation criteria are satisfied.
8. The network entity of claim 7, wherein the one or more participation criteria are comprised in at least one of the one or more response messages.
9. The network entity of claim 7, wherein the one or more participation criteria for the at least one candidate network entity relate to one or more of the following: a network slice operated on by the network entity; and a threshold number of candidate network entities participating in the collaborative learning process.
10. The network entity of claim 2, wherein one or more of the following apply: the network entity is a network data analytics function, NWDAF; and the network entity is in a core network of a communications network.
Claims 2-6 and 11 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-9 of U.S. Patent No. 12,284,088. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown below.
Application no. 18/828,073
Patent No. 12,284,088
2. A network entity for a communications network, the network entity comprising processing circuitry and a non-transitory machine-readable medium storing instructions which, when executed by the processing circuitry, cause the network entity to perform operations comprising:
obtaining identification information for a plurality of candidate network entities in the communications network, the identification information indicating that the candidate network entities are configured to participate in collaborative learning;
sending a request for the candidate network entities, the request comprising one or more selection criteria;
receiving one or more response messages comprising an indication of which of the candidate network entities satisfy the one or more selection criteria; and
based on the indication in the one or more response messages, selecting one or more of the plurality of candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm,
(5. The network entity of claim 4, wherein sending a request for the candidate network entities comprises initiating, at the candidate network entities, training of the model to obtain the preliminary model.)
wherein the selected one or more of the plurality of candidate network entities includes at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB).
3. The network entity of claim 2, wherein the one or more selection criteria comprise one or more of the following: a criterion relating to a configuration of a candidate network entity of the candidate network entities; a criterion relating to performance requirements for the candidate network entity; a criterion relating to availability of training data at the candidate network entity for training the model; and a criterion relating to a property of training data available at the candidate network entity.
4. The network entity of claim 2, wherein the one or more selection criteria comprise a criterion relating to one or more metrics indicative of a performance of a preliminary model obtained by training the model at the candidate network entity using the machine learning algorithm.
6. The network entity of claim 2, wherein sending a request for the candidate network entities comprises sending a request for the candidate network entities to an operations, administration and maintenance, OAM, entity in the communications network.
11. The network entity of claim 2, wherein one or more of the following apply: the network entity is a network data analytics function, NWDAF; and the network entity is in a core network of a communications network.
4. A co-ordination network entity for a communications network, the co-ordination network entity comprising processing circuitry and a non-transitory machine-readable medium storing instructions which, when executed by the processing circuitry, cause the co-ordination network entity to:
transmit a first request message, from the co-ordination network entity to a network registration entity in the communications network, for identification information for a plurality of candidate network entities in the communications network capable of performing collaborative learning;
receive, at the co-ordination network entity, identification information for a plurality of candidate network entities from the network registration entity;
transmit a second request message comprising at least one query for additional information for the plurality of candidate network entities;
(claim 5. The co-ordination network entity of claim 4, wherein the request message comprises one or more criteria for selecting candidate network entities for performing the collaborative learning process.)
select, based on one or more responses to the at least one query, one or more network entities from the plurality of candidate network entities; and
initiate, at the selected one or more network entities from the plurality of candidate network entities, training of a model using a machine-learning algorithm as part of a collaborative learning process.
(8. The co-ordination network entity of claim 4, wherein one or more of the following applies: the co-ordination network entity is a network data analytics function, NWDAF; and the network registration entity is a network function repository function, NRF.)
6. The co-ordination network entity of claim 5, wherein the one or more criteria comprise one or more of: at least one primary criterion relating to a capability of the candidate network entity to perform the collaborative learning process; and at least one secondary criterion relating to a capability of the candidate network entity to respond to a type of query.
7. The co-ordination network entity of claim 4, wherein the at least one query for additional information relates to one or more of the following: a configuration of the candidate network entity; a performance requirement for the candidate network entity; an availability of training data at the candidate network entity for training the model; and a property of training data available at the candidate network entity.
8. The co-ordination network entity of claim 4, wherein one or more of the following applies: the co-ordination network entity is a network data analytics function, NWDAF; and the network registration entity is a network function repository function, NRF.
9. The co-ordination network entity of claim 4, wherein one or more of the co-ordination network entity and the network registration entity are in a core network of the communications network.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 12, and 14-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Peng et al. (U.S. 2019/0324805).
Regarding claim 1, Peng disclosed a method performed by a first network entity in a communications network, the first network entity belonging to a plurality of network entities configured to participate in collaborative learning (Peng, Fig. 2 and [0021], “Resource scheduler 230” anticipates the first network entity in the claim), the method comprising:
receiving a request from a second network entity in the communications network, the request comprising one or more selection criteria for selecting candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm (Peng, Figs. 2, 3 and [0021, 0022], “If a user 202 of the user terminal 210 expects to process a deep learning task 220, the user terminal 210 may send a scheduling request 214 to the resource scheduler 230”; Peng, [0022], “The user terminal 210 may indicate the processing requirement specified by the user 202 in the scheduling request 21”; said user terminal 210, as a network device, anticipates the second network entity),
wherein the candidate network entities comprise at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB) (Examiner’s position is that the claim element “candidate network entities” bears no patentable weight in this claim; therefore, the prior art need not teach the subject matter in this “wherein” clause. Examiner’s position is based on the following analysis. First, the “receiving” clause above recites that “the request comprising one or more selection criteria for selecting candidate network entities to …”, where the claim element “selecting candidate network entities” appears to be an intended use of the “one or more selection criteria” rather than an active, scope-limiting step in the claimed method. Second, the claim does not recite that said “candidate network entities” are a functional/structural component of the “selection criteria,” nor does the claim clearly show that these “candidate network entities” are functionally or structurally related to the claimed method in a way that further limits the scope of the claim. Therefore, said “candidate network entities” are given no patentable weight.); and
transmitting, to the second network entity in the communications network, a response message comprising an indication of whether or not the first network entity satisfies the one or more selection criteria (Peng, [0033], “In some embodiments, based on the processing requirement from the user, the resource prediction model 240 may determine a plurality of sets of candidate resources that satisfy the processing requirement. Each set of candidate resources may indicate certain resources required for the deep learning task 220, for example, certain GPU(s), CPU(s) and/or an amount of storage resources. The resource prediction model 240 may provide these candidate resources to the resource scheduler 230, and the resource scheduler 230 indicates these candidate resources to the user 202 for user selection”).
Regarding claim 12, Peng disclosed a first network entity for a communications network, the first network entity belonging to a plurality of network entities configured to participate in collaborative learning (Peng, Fig. 2 and [0021], “Resource scheduler 230” anticipates the first network entity in the claim), the first network entity comprising processing circuitry and a non-transitory machine-readable medium storing instructions which, when executed by the processing circuitry, cause the first network entity to perform operations comprising:
receiving a request from a second network entity in the communications network, the request comprising one or more selection criteria for selecting candidate network entities to participate in a collaborative learning process to train a model using a machine learning algorithm (Peng, Figs. 2, 3 and [0021, 0022], “If a user 202 of the user terminal 210 expects to process a deep learning task 220, the user terminal 210 may send a scheduling request 214 to the resource scheduler 230”; Peng, [0022], “The user terminal 210 may indicate the processing requirement specified by the user 202 in the scheduling request 21”; said user terminal 210, as a network device, anticipates the second network entity),
wherein the candidate network entities comprise at least one of an Access and mobility Management Function (AMF), Authentication Server Function (AUSF), Session Management Function (SMF), Policy Charging Function (PCF), Unified Data Management (UDM), Operations Administration and Management (OAM), evolved NodeB (eNB), and a next generation NodeB (gNB) (Examiner’s position is that the claim element “candidate network entities” bears no patentable weight in this claim; therefore, the prior art need not teach the subject matter in this “wherein” clause. Examiner’s position is based on the following analysis. First, the “receiving” clause above recites that “the request comprising one or more selection criteria for selecting candidate network entities to …”, where the claim element “selecting candidate network entities” appears to be an intended use of the “one or more selection criteria” rather than an active, scope-limiting step in the claimed operations. Second, the claim does not recite that said “candidate network entities” are a functional/structural component of the “selection criteria,” nor does the claim clearly show that these “candidate network entities” are functionally or structurally related to the claimed operations in a way that further limits the scope of the claim. Therefore, said “candidate network entities” are given no patentable weight.); and
transmitting, to the second network entity in the communications network, a response message comprising an indication of whether or not the first network entity satisfies the one or more selection criteria (Peng, [0033], “In some embodiments, based on the processing requirement from the user, the resource prediction model 240 may determine a plurality of sets of candidate resources that satisfy the processing requirement. Each set of candidate resources may indicate certain resources required for the deep learning task 220, for example, certain GPU(s), CPU(s) and/or an amount of storage resources. The resource prediction model 240 may provide these candidate resources to the resource scheduler 230, and the resource scheduler 230 indicates these candidate resources to the user 202 for user selection”).
Regarding claim 14, Peng disclosed the first network entity of claim 12.
Peng further disclosed wherein the one or more selection criteria comprise a criterion relating to one or more metrics indicative of a performance of a preliminary model obtained by training the model at the candidate network entity using the machine learning algorithm (Peng, [0022] disclosed that “The processing requirement is specified by the user 202 of the user terminal 210 and at least includes a requirement related to a completion time of the deep learning task 22.” Here, the task completion time is a metric indicative of the performance of the model).
Regarding claim 15, Peng disclosed the first network entity of claim 14.
Peng further disclosed wherein the first network entity is further caused to perform operations comprising: in response to receipt of the request, obtaining the preliminary model by training the model using the machine learning algorithm (Peng, [0025], “The resource prediction model 240 may be trained to a model for implementing resource prediction. The resource prediction model 240 may be implemented as, for example, a learning model and may output a resource(s) based on a specific input (including the processing requirement for the deep learning task 220) such that the output resource(s) satisfies the input processing requirement.”).
Regarding claim 16, Peng disclosed the first network entity of claim 14.
Peng further disclosed wherein the first network entity is further caused to perform operations comprising: obtaining values of the one or more metrics for the preliminary model; and comparing the obtained values to the at least one of the one or more selection criteria (Peng, [0025-0030]).
Regarding claim 17, Peng disclosed the first network entity of claim 12.
Peng further disclosed wherein the response message further comprises one or more participation criteria for participating in the collaborative learning process (Peng, [0036], “In the embodiments of providing a plurality of sets of candidate resources to the user 202, a predicated completion time and an expected processing cost related to each set of candidate resources can be determined. The user 202 may select, based on the presented expected completion time and expected processing cost, specific candidate resources for processing the deep learning task 220”).
Regarding claim 18, Peng disclosed the first network entity of claim 17.
Peng further disclosed wherein the one or more participation criteria relate to one or more of the following: a network slice operated on by the second network entity; and a threshold number of other network entities participating in the collaborative learning process (Peng, [0030], “ the resource indication 242 may indicate the amount and/or the type of the resources that satisfy the processing requirement for the deep learning task 220 (for example, the processing requirement related to the completion time). For example, it may only indicate the number of GPUs of a particular type, the number of CPU kernels of a particular type, and a size of memory.”).
Regarding claim 19, Peng disclosed the first network entity of claim 12.
Peng further disclosed wherein at least one of the following apply: the second network entity is a network data analytics function, NWDAF, or an operations, administration and maintenance, OAM, entity; and at least one of the first network entity and the second network entity are in a core network of a communications network (Peng, [0037], “the resource pool 250 may be deployed as a data center. In an example of FIG. 2, the resource pool 250 is shown to be deployed in the cloud”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Peng et al. (U.S. 2019/0324805) in view of Capota et al. (US 2020/0410288).
Regarding claim 13, Peng disclosed the first network entity of claim 12.
Peng did not explicitly disclose, but Capota disclosed,
wherein the one or more selection criteria comprise a criterion relating to a configuration of a candidate network entity among a plurality of candidate network entities in the communications network, a criterion relating to performance requirements for the candidate network entity, a criterion relating to availability of training data at the candidate network entity for training the model, and a criterion relating to a property of training data available at the candidate network entity (Capota, [0086], “a set of participating devices are selected as a function of a query. The set of participating devices should meet the one or more campaign requirements for data availability, compute capability and privacy restrictions”).
One of ordinary skill in the art would have been motivated to combine Peng and Capota before the effective filing date of the claimed invention because both references disclosed methods and systems for distributed training of machine learning models by selecting a subset of devices to participate in the training process/campaign (Peng, Abstract; Capota, [0003, 0037], “Edge Learning Services may include services to support distributed or communal learning across numerous devices”). Therefore, it would have been obvious to one of ordinary skill in the art to incorporate into Peng’s resource scheduler the capability of using the specific set of resource selection criteria that Capota disclosed.
Related Prior Art
Prakash et al. (US 2019/0138934) is directed to technologies for distributed machine learning (ML) training using heterogeneous compute nodes in a heterogeneous computing environment, where the heterogeneous compute nodes are connected to a master node via respective wireless links.
Sridharan et al. (US 2019/0205745) is directed to a system to configure distributed training of a neural network and optimize communications.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIRLEY X ZHANG whose telephone number is (571) 270-5012. The examiner can normally be reached 8:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H Hwang can be reached at 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIRLEY X ZHANG/Primary Examiner, Art Unit 2447