DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is in response to preliminary amendments filed on 05/17/2024. Claims 3-12, 15, 22, and 23 were amended, and claims 18-20 were cancelled.
Election/Restriction
2. Restriction is required under 35 U.S.C. 121 and 372.
This application contains the following inventions or groups of inventions which are not so linked as to form a single general inventive concept under PCT Rule 13.1.
In accordance with 37 CFR 1.499, applicant is required, in reply to this action, to elect a single invention to which the claims must be restricted.
Group I, claims 1-15, 22 and 23, drawn to obtaining information to select criteria associated with a request, and generating and performing an adaptive management strategy based on the obtained information and selected criteria.
Group II, claim 16, drawn to determining status of resources according to criteria in a received computation management request, comparing the request to historical data, and sending a command to a resource selected based on the criteria and historical data to perform the request.
Group III, claims 17 and 21, drawn to a computation analyzer obtaining criteria and specifications from a profiles database and a specifications repository, respectively, a discovery engine storing factors related to resources, a computation manager receiving, from the computation analyzer, the request, selected criteria and specifications, factors from the discovery engine, obtaining strategies and success rates from a management strategies database, and generating a strategy of tasks, and a computation agent to receive, from the computation manager, the generated strategy to manage performance of the tasks.
The groups of inventions listed above do not relate to a single general inventive concept under PCT Rule 13.1 because, under PCT Rule 13.2, they lack the same or corresponding special technical features for the following reasons:
Group I and Group II lack unity of invention because the groups do not share the same or corresponding special technical feature.
Group I and Group III lack unity of invention because the groups do not share the same or corresponding special technical feature.
Group II and Group III lack unity of invention because the groups do not share the same or corresponding special technical feature.
During a telephone conversation with Daniel Murray on 01/15/2026, a provisional election was made without traverse to prosecute the invention of Group I, claims 1-15, 22 and 23. Affirmation of this election must be made by applicant in replying to this Office action. Claims 16, 17, and 21 are withdrawn from further consideration by the examiner pursuant to 37 CFR 1.142(b), as being drawn to a non-elected invention.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, for the following reason:
3. Claim 11 recites the limitation "the one or more specifications" at the end of the claim. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 1, from which claim 11 depends, recites “one or more user specifications” and “one or more application specifications”. It is unclear which, if either, of these instances “the one or more specifications” in claim 11 is intended to refer to.
For purposes of examination, the recitation in claim 11 is interpreted as referring to any of the previously disclosed specifications.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-6, 8-15, 22, and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method for obtaining information in response to a request, selecting criteria based on the obtained information, obtaining additional information, and generating and performing an adaptive management strategy based on the obtained information. This judicial exception is not integrated into a practical application because the claims lack any description linking the obtaining, selecting, generating, and performing steps to operations performed by a computer; these steps could therefore be performed mentally by a human with access to the information and an ability to mentally design a strategy based on that information. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they are broad enough to encompass a human activity, do not identify an improvement in the functioning of a computer, and do not describe how the adaptive management strategy is generated or what performing an adaptive management strategy entails.
It is noted that claim 7 recites a step of storing specific information in a database, understood to be electronically stored data on a computer system, and therefore requires a computer-operated step to store that information in the database.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
5. Claims 1-12, 14, 15, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Parvataneni et al. (US 2021/0211352) in view of Sun et al. (US 2022/0006857) and in further view of Shaw et al. (US 2018/0316799).
Regarding claim 1, Parvataneni teaches a method performed by a computation management system for performing edge cloud computation management (Auto-scaling techniques may be used to manage resource allocation of the application-layer service network, such as a MEC network, [0010]), the method comprising:
receiving (901) a computation management request from a UE (user equipment) (the triggering event may be a request from end device 180 for an application service, [0051]);
obtaining (902) one or more request criteria associated with the computation management request (The request may include data indicating an application service or a category of an application service, [0051]);
obtaining (903) one or more user specifications (end device information may include information relating to resource utilization for physical, virtual, and/or logical resources at end device 180. Additionally, end device information may also include information relating to end device performance, such as latency, packet drop rate, etc., and/or other types of KPIs, QoS, SLA parameters, and so forth, [0039]);
obtaining (904) one or more application specifications from previously stored application specifications (Policy engine 210 may store default auto-scaling rules for application services. For example, default auto-scaling rules may relate to a category of application services (e.g., mission critical, real-time, non-real-time, video streaming, IoT, etc.) and/or on a per-application service basis, [0033]; RL device 215 may obtain a default auto-scale rule for the application service 310 from the policy engine (PE) 210, [0051]);
selecting (905) one of the one or more request criteria based on the one or more request criteria, the one or more user specifications, and the one or more application specifications (In response to receiving the default auto-scaling rule, RL device 215 may select a controller 220 and/or a host(s) 250 to provide the requested application service. RL device 215 may provide the default auto-scale rule 315 to RL agent 225 of the selected controller 220, [0052]);
obtaining (906) status for one or more edge cloud resources (Network information may include information relating to resource utilization for physical, virtual, and/or logical resources at controller 220, host 250, and communication links, [0038]; RL agent 225 may obtain the network information from a monitoring and tracking system (e.g., a monitoring system 305 in FIG. 3) in MEC network 125 and/or other configured source, [0038]);
obtaining (907) one or more management strategies (the machine learning-based resource management service may use various types of information (e.g., historical and current information pertaining to collected information, auto-scaling rules, etc.) that provides a more extensive evaluation and analysis of auto-scaling rules, [0012]);
generating (908) an adaptive management strategy based on the one or more request criteria, the one or more user specifications, the one or more application specifications, the status of the one or more edge cloud resources, and the one or more management strategies (Based on the analysis of the collected information and an auto-scaling rule (in use), such as a default auto-scaling rule or a modified auto-scaling rule, RL agent 225 may determine whether to maintain the current auto-scaling rule or modify the current auto-scaling rule. For example, RL agent 225 may use a machine learning algorithm (e.g., a quality learning (q-learning) algorithm) that seeks to find the best action to take given the current state indicated by the collected information, [0040]); and
performing (909) the generated adaptive management strategy to complete the computation management request (RL agent 225 may communicate with resource manager 230 to apply the modified auto-scaling rule, [0044]; based on a modified auto-scale rule (relative to a default auto-scale rule), resource manager 230 may allocate resources to an application service and associated host(s) 250 based on the modified auto-scale rule, [0047]; controller 220 may use the modified auto-scaling rule so to provision the application service. For example, resource manager 230 may adjust the allocation of resources to host 250, such as vertical or horizontal auto-scaling, in accordance with the modified auto-scaling rule, [0075]).
However, Parvataneni does not explicitly disclose both a static status and a dynamic status for the one or more edge cloud resources, or that the one or more management strategies and related success rates are obtained from a database.
Sun teaches obtaining static status and dynamic status for one or more edge cloud resources (Each of the factors described above may be determined based on stored information related to the candidate subset of MEC servers 306 (e.g., information indicative of the MEC server locations, processing capabilities, etc.) and/or based on real-time testing that is performed in accordance with communications and operations, [0053]; system 100 may combine static data that has been accessed from a data store (e.g., data indicative of locations of MEC servers 306, theoretical capabilities of MEC servers 306, etc.) with real-time, dynamic test result data reported by client devices 302 (e.g., data indicative of how each MEC server 306 may be expected to perform with respect to the actual client devices 302 at their current locations and under current conditions), [0058]);
obtaining one or more management strategies and related success rates from a database (analyzing at least one of the one or more factors described above using machine learning technology trained based on previous such identifications that have been made in the past and an indication of how optimal or successful such identifications turned out to be, [0053]; data representative of one or more executable applications 1012 configured to direct processor 1004 to perform any of the operations described herein may be stored within storage device 1006. In some examples, data may be arranged in one or more databases residing within storage device 1006, [0080]); and
generating an adaptive management strategy based on the static status of the one or more edge cloud resources, and the dynamic status of the one or more edge cloud resources (Based on all this data and any other suitable data that system 100 may access or receive in a particular implementation, system 100 may perform operation 504-3, in which a particular MEC server 306 is selected from the candidate subset of MEC servers 306 to be the one that will execute the multi-client application. Along with ensuring that all of the relevant operation parameters requested for the multi-client application can be satisfied by the selected MEC server 306, operation 504-3 may be configured to select a MEC server in accordance with a particular optimization policy, [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to consider both static and dynamic resource conditions and to utilize information about past successes in the system/method of Parvataneni, as suggested by Sun, when analyzing information to determine an optimal policy. One would be motivated to combine these teachings in order to weigh a variety of static resource characteristics, such as location, together with real-time conditions, and to compare results with past successes so as to identify the most appropriate resources to fulfill a particular request.
However, Parvataneni-Sun do not explicitly disclose that the one or more user specifications are previously stored.
Shaw teaches obtaining one or more user specifications from previously stored user specifications (the Manager SDN Controller 130 can access User Profiles 132 of subscriber devices 116 to determine which users have histories of using which service applications, [0037]; the User Profile 132 can be stored at a cloud-based location, [0037]; Core SDN Controller 140 can fetch and look up a profile of one or more users of the communication device 116. The profile can include information on how the user and/or a subscriber to system services desires to manage data resources, [0082]); and
selecting one of one or more request criteria based on the one or more user specifications (access user profile/preference information for these devices to reveal performance needs so that QoS parameters can be defined at the device level. The SDN Controller can analyze QoS parameters over all the domains of the network to determine an aggregated set of QoS parameters needed to provide services within quality limits of the network for its totality of customers, [0110]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to store a user profile in the system/method of Parvataneni-Sun as suggested by Shaw to provide access to user preferences and performance needs. One would be motivated to combine these teachings to ensure that requirements of individual subscribers are met in order to provide each subscriber with a desired quality of service.
Regarding claim 2, Parvataneni teaches the method of claim 1 wherein the UE (700) comprises one of: mobile device, smart car, smart watch, Virtual Reality (VR) glass (End device 180 may be implemented as a mobile device, a portable device, a stationary device, a device operated by a user, or a device not operated by a user. For example, end device 180 may be implemented as a Mobile Broadband device, a smartphone, a computer, a tablet, a netbook, a phablet, a wearable device, a vehicle support system, a game system, a drone, or some other type of wireless device, [0029]).
Regarding claim 3, Parvataneni teaches the method of claim 1, wherein the computation management request comprises at least one of: application, service, user, time, location, battery level, program code, application specification, user requirement (The request may include data indicating an application service or a category of an application service, [0051]).
Regarding claim 4, Parvataneni does not explicitly disclose the method of claim 1, wherein the one or more request criteria comprises at least one of: latency, cost, total resource utilization.
Sun teaches wherein one or more request criteria comprises at least one of: latency, cost, total resource utilization (one or more operation parameters obtained at operation 202 may define key performance indicators (e.g., computing resource requirements, target values, thresholds, etc.) for various aspects of the performance of the multi-client application, [0027]; Certain operational parameters obtained by system 100 at operation 202 may also relate to tolerable latencies of the multi-client application (e.g., a round-trip latency parameter or one-way latency parameter for the multi-client application that define maximum acceptable latencies that each client device may experience, [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to obtain operational parameters associated with a requested service/application in the system/method of Parvataneni as suggested by Sun in order to consider specific needs and functionalities of different services/applications. One would be motivated to combine these teachings because factoring in characteristics of how each service/application operates when determining an optimal resource configuration would allow for prioritizing certain requirements to provide the best performance for a user requesting the service/application.
Regarding claim 5, Parvataneni teaches the method of claim 1, wherein the static status or dynamic status of the one or more edge cloud resources comprises at least one of: availability, utilization rate, maximum allowed capacity, node location, running load, failure rate, energy consumption level (the MEC network may have insufficient resources (e.g., physical, logical, virtual) due to the number of end devices/users being served, the number of applications running simultaneously, [0009]; Network information may include information relating to resource utilization for physical, virtual, and/or logical resources at controller 220, host 250, and communication links. Network information may also include information relating to network performance, such as response rates for user requests, latency, packet drop rate, throughput, jitter, and/or other types of key performance indicators (KPIs), quality of service (QoS), service level agreement (SLA) parameters, and so forth. The network information may further include other types of information relating to health, security, usage of application service (e.g., the degree at which some features of an application service are used relative to other features, etc.), fault detection, and/or resource utilization, [0038]).
Regarding claim 6, Parvataneni teaches the method of claim 1, wherein the one or more edge cloud resources comprise local resources, remote resources, or both local and remote resources (Network information may include information relating to resource utilization for physical, virtual, and/or logical resources at controller 220, host 250, and communication links, [0038]; resource utilization and performance information relating to other networks (e.g., access network 105, access devices 110, external network 160, external devices 165, etc.) and communication links external to MEC network 125 that pertain to the provisioning of an application service, [0038]).
Regarding claim 7, Parvataneni teaches the method of claim 1, further comprising assessing a result of the generated adaptive management strategy (RL agent 225 may analyze the collected information relative to the modified auto-scaling rule to determine whether further adjustment is needed or the modified auto-scaling rule has been successful 355, [0055]) and storing the result in the database (Based on this determination, RL agent 225 may provide the modified auto-scale rule 360 to RL device 215. RL device 215 may store and share the modified auto-scaling rule with other controllers 220, [0055]).
Regarding claim 8, Parvataneni does not explicitly disclose the method of claim 1, further comprising sending a result of the generated adaptive management strategy to the UE.
Sun teaches sending a result of a generated adaptive management strategy to a UE (at communication 502-5, system 100 may direct the performance testing by indicating which MEC servers 306 have been identified for the candidate subset of MEC servers, providing address information associated with each respective daemon process executing on each of the identified MEC servers, providing protocol information indicating a supported transport protocol that is to be utilized by client devices 302 to communicate with the respective daemon processes to enable testing of the performance capabilities, and so forth, [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate a selected server or resource to a requesting client in the system/method of Parvataneni as suggested by Sun in order for the client to acknowledge and approve the selection. One would be motivated to combine these teachings to enable client feedback for a determined resource configuration.
Regarding claim 9, Parvataneni teaches the method of claim 1, wherein the one or more user specifications or the one or more application specifications comprises at least one of: application, service, user, time, location, battery level, program code, application specification, user requirement (Policy engine 210 may store default auto-scaling rules for application services. For example, default auto-scaling rules may relate to a category of application services (e.g., mission critical, real-time, non-real-time, video streaming, IoT, etc.) and/or on a per-application service basis, [0033]).
Regarding claim 10, Parvataneni teaches the method of claim 1, further comprising updating the static and/or dynamic status for the one or more edge cloud resources (Auto-scaling mechanisms may dynamically adjust network resources to support an application service based on auto-scaling rules, [0010]; MEC device 130 may transmit the adjusted auto-scaling rules to a resource manager that governs allocated resources to virtual network devices, such as hosts, containers, VMs, or other types of network devices that provide an application service to end devices 180, [0024]; Resource manager 230 may modify the amount of resources allocated based on communication with RL agent 225, as described herein, [0047]).
Regarding claim 11, although Parvataneni teaches using historical information [0012], Parvataneni does not explicitly disclose the method of claim 1, wherein success rate for the one or more management strategies is calculated based on at least one of: historical success rate, current defined importance of criteria, and the one or more specifications.
Sun teaches wherein success rate for the one or more management strategies is calculated based on at least one of: historical success rate, current defined importance of criteria, and the one or more specifications (the identifying of the candidate subset of MEC servers 306 at operation 504-1 may include analyzing at least one of the one or more factors described above using machine learning technology trained based on previous such identifications that have been made in the past and an indication of how optimal or successful such identifications turned out to be, [0053]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to consider and utilize information about past successes in the system/method of Parvataneni, as suggested by Sun, when analyzing information to determine an optimal policy. One would be motivated to combine these teachings to compare results with past successes and develop improved techniques for identifying the most appropriate resources to fulfill a particular request.
Regarding claim 12, Parvataneni teaches the method of claim 1, wherein the generating an adaptive management strategy comprises using Reinforcement Learning (RL) (RL models 275 may provide auto-scaling rules for application services or categories of application services that yield the most efficient use of resources given a current state pertaining to MEC network 125 and end device 180, [0034]; RL agent 225 includes logic that provides the machine learning-based resource management service, [0035]).
Regarding claim 14, Parvataneni teaches the method of claim 12 wherein the Reinforcement Learning comprises at least one of: Q-learning (RL agent 225 may use a machine learning algorithm (e.g., a quality learning (q-learning) algorithm) that seeks to find the best action to take given the current state indicated by the collected information, [0040]) and SARSA (State Action Reward State Action) (RL agent 225 may use other machine learning algorithms (e.g., State-Action-Reward-State-Action (SARSA), [0041]).
Regarding claim 15, Parvataneni teaches the method of claim 12, wherein the Reinforcement Learning comprises a RL model engine configured to train a learning algorithm to obtain a Q-Database (RL agent 225 may store and manage a q-table, [0041]).
Claim 22 is directed to a first network node (800) comprising a processor (801) and a memory (802) having stored thereon a computer program which, when executed on the processor, causes the processor to carry out the method according to claim 1, and is therefore rejected in view of the same rationale as claim 1.
Claim 23 is directed to a non-transitory computer-readable storage medium, having stored thereon a computer program which, when executed on at least one processor, causes the at least one processor to carry out the method according to claim 1, and therefore is rejected in view of the same rationale as claim 1.
6. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Parvataneni-Sun-Shaw in view of Yeh et al. (US 2022/0014963).
Regarding claim 13, Parvataneni-Sun-Shaw do not explicitly disclose the method of claim 12 wherein the generating an adaptive management strategy comprises using a Markov Decision Process (MDP).
Yeh teaches wherein generating an adaptive management strategy comprises using a Markov Decision Process (MDP) (the learning approach of the actor network 303 and/or the critic network 305 is a policy gradient learning approach such as, for example, a cross-entropy loss function, Monte Carlo policy gradient, finite horizon MDP, [0057]; RL for traffic management is accomplished by modeling the wireless multi-RAT network interaction as Markov decision process, [0124]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize a Markov Decision Process in the system/method of Parvataneni-Sun-Shaw as suggested by Yeh to implement reinforcement learning. One would be motivated to combine these teachings because MDP is known for adaptive modeling of a sequence of decisions in a highly dynamic edge environment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Todd et al. US 9,317,820 – configure a cloud computing solution based upon system and user requirements.
Steinder et al. US 10,326,649 – placement allocation policy in a cloud infrastructure based on a user request specifying requirements.
Nijim et al. US 12,061,934 – adaptively allocating resources using machine learning for edge cloud processing.
Rothschild US 2013/0091284 – cloud resource management taking a variety of information into consideration, including resource requirements, device information and request services.
Steinder et al. US 2016/0134558 – considering user requirements and properties of a plurality of clouds when deciding to scale a user cloud instance up or down.
Hockett et al. US 2018/0287864 – receiving administrator requirements and generating a new cloud configuration based on past known cloud profiles.
Keating et al. US 2020/0145337 – controlling edge platform resource utilization to fulfill requests from client endpoints.
Casey et al. US 2022/0094606 – generating a set of solutions to resolve an identified issue using resource utilization and historical success rates.
Yeh et al. US 2022/0353295 – execute actions involving computing resources based on stored client requests, device profile information, user preferences, and resource utilization metrics.
Vanbrabant et al. US 2023/0125626 – changing state of managed resources in a configuration model based on current resource status.
Nassar et al., Reinforcement Learning for Adaptive Resource Allocation in Fog RAN for IoT With Heterogeneous Latency Requirements, IEEE Access (Year: 2019).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHU WOOLCOCK whose telephone number is (571)270-3629. The examiner can normally be reached Tuesday and Thursday, 9-6 ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Parry can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
MADHU WOOLCOCK
Examiner
Art Unit 2451
/MADHU WOOLCOCK/Primary Examiner, Art Unit 2451