DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 10-13, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Orhan et al. (US 20220124543) in view of Molchanov et al. (US 20180114114).
Regarding claims 1, 10, and 19, Orhan teaches: A method / electronic device / computer program product comprising: acquiring a first state graph of a plurality of devices that run a first workload at a first time point (graph data structure representing coupling/interconnections of various nodes such as cells/NANs and UEs, par. 0043-0046); determining a first load state of the plurality of devices at the first time point based on an updated first state graph (state s.sub.t is defined as the current graph, and any updates to said graph result in a new graph s.sub.t+1 being obtained; therefore the state is determined based upon any updates to the state graph, par. 0071-0078); and allocating a second workload to the plurality of devices at a second time point based on the first load state (edge compute nodes are arranged to provide computing resources and/or various services for workloads, par. 0156; actions are taken with each step of a DQN algorithm, involving connecting UEs with at least one candidate NAN, par. 0078; since workloads are carried out through connections, the actions of connecting nodes in updated graphs can involve allocating a different workload at a different time based on the first state graph).
Orhan does not explicitly teach updating the first state graph based on a comparison between an active value of at least one node and a predetermined threshold.
However, Molchanov teaches: updating the first state graph based on a comparison between an active value of at least one node in the first state graph and a predetermined threshold (pruning criterion for neurons includes neurons having importance values that determine whether or not they should be pruned from the graph, par. 0018-0021); wherein active values of nodes in the updated first state graph are greater than the predetermined threshold (pruned neural network created by removing all neurons having importance below a threshold value, leaving the pruned network with all neurons above the threshold value, par. 0019-0021).
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Orhan with the teachings of Molchanov, since fine-tuning existing deep networks using pruning techniques, such as those outlined in Molchanov, improves accuracy while maintaining cost values and minimizing error functions.
Regarding claims 2 and 11, Orhan teaches: wherein the first time point is earlier than the second time point (graph data structure includes nodes representing a quantity observed at some point in time, some of which consequently change, par. 0044; difference between current/existing connections and potential connections, implying connections can occur at different times, par. 0044-0047).
Regarding claims 3 and 12, Orhan teaches: wherein determining a first load state comprises: determining the first load state by a graph neural network model (graph neural network framework used for incorporating NAN-UE relationships between nodes as well as channel capacities, par. 0059-0068) based on the updated first state graph (this can occur on any s.sub.t graph, including s.sub.t+1, which would be updated, par. 0078).
Regarding claims 4 and 13, Molchanov teaches: wherein updating the first state graph comprises: determining, based on the comparison, that the active value is less than the predetermined threshold (one neuron is identified as having the lowest importance, par. 0020); and acquiring the updated first state graph by deleting the at least one node from the first state graph (at least one neuron is removed from the trained neural network to produce a pruned neural network, par. 0021), wherein the active values of the nodes in the updated first state graph are greater than the predetermined threshold (a determination is made whether pruning should continue; once past a certain threshold number of neurons, the parameters are fixed, and once this threshold is reached, the remaining neurons may all be above a certain importance level, par. 0020-0021 and 0045).
For motivation to combine see claim 1 above.
Regarding claims 6 and 15, Orhan teaches: acquiring a second state graph of the plurality of devices at the second time point (graph data structure calculating input features every time the graph is updated, thereby yielding a second state graph at any subsequent interval/update, par. 0078); and acquiring a second load state based on the second state graph (state s.sub.t+1 can be obtained for any t, and the state is determined based upon any updates to any graph, par. 0071-0078); wherein the first load state comprises a first running time during which the plurality of devices run the first workload at the first time point, and the second load state comprises a second running time during which the plurality of devices run the second workload at the second time point (each node in the graph represents a quantity observed at some point in time, which consequently changes, par. 0044; since the state can be obtained at any t, the running times can be considered different, par. 0071-0078).
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Orhan and Molchanov, further in view of Yang et al. (US 20220350789).
Regarding claims 5 and 14, Yang teaches: setting an initial active value of the at least one node to a specified value (model builder and/or the like generating a model that assigns a value for each edge of the data graph, par. 0055); determining a propagated value between the at least one node and a target node based on an attenuation coefficient (outlier backtracking module including an attenuation coefficient module and a backtracking module that provides backtracking through backward propagation; the given propagation values are then multiplied by an attenuation factor to apply the factor to any outliers, thus determining propagation values for intersecting nodes, par. 0080); and determining the active value of the at least one node based on the propagated value and a score of an edge between the at least one node and the target node (using the attenuation coefficient, attenuation factors, and propagation values, the outlier backtracking module can then assign a value for each edge of the data graph based on said attenuation factor, par. 0053, 0074-0080).
Allowable Subject Matter
Claims 7, 16, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
As such, claims 8, 9, 17, and 18 are rejected based on their dependence on the rejected claims above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Block (US 20090006292) discloses training neural networks by comparing trained link values to active values for various output nodes at specified thresholds.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN SCOTT MOTTER whose telephone number is (703) 756-1550. The examiner can normally be reached Monday-Friday, 7:30 a.m. - 4:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital can be reached at 571-272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.S.M./Examiner, Art Unit 2198
/PIERRE VITAL/Supervisory Patent Examiner, Art Unit 2198