Prosecution Insights
Last updated: April 20, 2026
Application No. 18/894,452

QUALITY OF SERVICE (QoS) MANAGEMENT IN EDGE COMPUTING ENVIRONMENTS

Non-Final OA (§103)
Filed: Sep 24, 2024
Examiner: COONEY, ADAM A
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 58% (219 granted / 379 resolved; at TC average)
Interview Lift: +11.0% (Moderate), comparing resolved cases with an interview vs. without
Typical Timeline: 4y 2m average prosecution
Career History: 406 total applications across all art units; 27 currently pending
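The career and interview figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the counts and the +11.0% lift are taken from this page; the variable names are illustrative, not this tool's actual data model):

```python
# Recompute the headline examiner statistics from the raw counts.
granted = 219    # cases this examiner allowed
resolved = 379   # total resolved cases (grants + abandonments)

# Career allow rate: share of resolved cases that ended in a grant.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # -> 58%

# The interview lift is reported as an additive bump to the base rate,
# comparing resolved cases with an interview against those without.
interview_lift = 0.11
print(f"With interview: {allow_rate + interview_lift:.0%}")  # -> 69%
```

This also shows where the "69% With Interview" figure comes from: it is simply the 58% career allow rate plus the 11-point lift.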

Statute-Specific Performance

§101:  7.8% (-32.2% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 379 resolved cases
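Each statute pairs the examiner's rate with a delta against the Tech Center average, so the implied baseline can be recovered by subtraction. A quick sketch (rates copied from the chart above; the dictionary layout is my own):

```python
# Back out the implied Tech Center baseline for each rejection basis:
# baseline = examiner rate - reported delta (examiner minus TC average).
examiner_rate = {"§101": 7.8, "§103": 61.9, "§102": 15.1, "§112": 10.4}
delta_vs_tc   = {"§101": -32.2, "§103": 21.9, "§102": -24.9, "§112": -29.6}

for statute, rate in examiner_rate.items():
    baseline = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate}% vs TC avg ~{baseline:.1f}%")
```

The subtraction puts the implied baseline at roughly 40% for every statute shown, consistent with the single black-line "Tech Center average estimate" the chart describes.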

Office Action

§103
DETAILED ACTION

Claim 1 was preliminarily cancelled. Claims 2-21 have been preliminarily added. Claims 2-21 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Comment

The examiner recommends filing a written authorization for Internet communication in response to the present action. Doing so permits the USPTO to communicate with Applicant using Internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO will not respond via Internet email to any Internet correspondence which contains information subject to the confidentiality requirement as set forth in 35 U.S.C. 122. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 120 is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/24/24 and 12/26/24 have been acknowledged and considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-5 and 7-21 are rejected under 35 U.S.C. 103 as being unpatentable over Svennebring et al. (U.S. 2019/0036630 A1) in view of Yang et al. (U.S. 2020/0196203 A1).
Regarding claims 2, 11 and 17, Svennebring discloses an edge computing apparatus (see Svennebring; paragraphs 0023 and 0248; Svennebring discloses an edge computing device), comprising: processing circuitry; and a memory device including instructions, which, when executed by the processing circuitry (see Svennebring; paragraph 0084; Svennebring discloses a memory and central processing unit), cause the processing circuitry to: forecast a future resource need for execution of the service (see Svennebring; paragraphs 0024 and 0058; Svennebring discloses allocation of resources to maintain SLAs. Further, a prediction, i.e. “forecast a future…”, is made to ensure proper operation of resources, i.e. “resource need for execution of the service”); forecast a probability and an estimated time of usage for the service at the potential computing nodes according to the forecasted future resource need (see Svennebring; paragraphs 0024, 0035, 0057 and 0058; Svennebring discloses a probability field, i.e. “forecast a probability…”, that indicates the probability of the correctness of the prediction. The prediction includes a metric that indicates a begin time or end time for the use of the resources, i.e. “estimated time of usage…according to the forecasted future resource need”. Further, the predictions may include the same type of service at different times); communicate service pre-allocation information among the potential computing nodes in the multiple potential mobility location paths (see Svennebring; paragraphs 0027, 0035 and 0036; Svennebring discloses determining information, i.e. “service pre-allocation information”, associated with the predicted routes/paths, i.e. “multiple potential mobility location paths”, and communicating the information, i.e. “communicate service pre-allocation information”), wherein the service pre-allocation information is used for speculative allocation of resources along respective nodes in the multiple potential mobility location paths (see Svennebring; paragraphs 0027, 0035 and 0036; Svennebring discloses the received predictions may be used to modify behavior, i.e. of the service, in light of the prediction/forecast for possible routes/paths, i.e. “speculative allocation of resources along…potential mobility location paths”); and perform a migration of the service to a second edge computing apparatus located along a first mobility path based on movement of the connected edge device to the first mobility path (see Svennebring; paragraphs 0040 and 0041; Svennebring discloses migration as the device moves away from the base).

While Svennebring discloses prediction of routes/paths that can be taken by a device (see Svennebring; paragraphs 0027, 0030 and 0049), as well as a device moving away from a base (see Svennebring; paragraphs 0040 and 0041), Svennebring does not explicitly disclose identify multiple potential mobility location paths and potential computing nodes for use of a service by a connected edge device; and perform a migration of the service to a second edge computing apparatus located along a first mobility path based on movement of the connected edge device to the first mobility path.

In analogous art, Yang discloses identify multiple potential mobility location paths and potential computing nodes for use of a service by a connected edge device (see Yang; paragraphs 0046, 0053 and 0059; Yang discloses a mobility information field 360 that includes geographic coordinates and other types of location information, such as both current and historical tracking information. The historical information includes MEC networks 115 and travel paths used by the end device 180 in which a MEC handover is performed, and identifying and determining whether a target MEC network, e.g. MEC network 115-2, hosted by MEC devices 117, i.e. “potential computing nodes”, can support a MEC handover, i.e. “identify multiple potential mobility location paths…”, such as to a different location); perform a migration of the service to a second edge computing apparatus located along a first mobility path based on movement of the connected edge device to the first mobility path (see Yang; paragraphs 0016, 0036, 0049, 0053 and 0059; Yang discloses a MEC handover allows for continuous and uninterrupted application service to an end device 180. A migration controller 124 is coupled via a communication link and interface that supports the MEC handover. In particular, the migration controller 124 identifies current tracking information, i.e. “based on movement of the connected edge device to the first mobility path”, such as travel paths between MEC networks, of the end device 180 from a mobility information field 360).

One of ordinary skill in the art would have been motivated to combine Svennebring and Yang because they both disclose features for providing services in edge computing, and as such, are within the same environment. Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Yang’s MEC handover into the system of Svennebring in order to provide the benefit of efficiency by allowing the predicted services being used (see Svennebring; paragraphs 0024, 0035 and 0057) to be continuous and uninterrupted during a handover, such as when a device moves away from a base (see Svennebring; paragraphs 0040 and 0041) in a predicted route/path (see Svennebring; paragraphs 0027, 0030 and 0049).
Further, Svennebring discloses the additional limitations of claim 17, a non-transitory computer-readable medium comprising instructions executed by processing circuitry (see Svennebring; paragraphs 0085 and 0086; Svennebring discloses a non-transitory machine readable medium storing instructions for execution by a processor).

Regarding claims 3, 12 and 18, Svennebring and Yang disclose all the limitations of claims 2, 11 and 17, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein the instructions further cause the processing circuitry to: perform a cleanup of resources along one or more unselected paths of the multiple potential mobility location paths by communicating service deallocation information to one or more edge computing apparatuses along the one or more unselected paths (see Svennebring; paragraphs 0030-0032; Svennebring discloses predicting multiple routes for the device and only selecting, i.e. “perform a cleanup”, a subset of the routes. As such, the resources of the routes not selected are not used, i.e. “communicating service deallocation information…”).

Regarding claim 4, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein forecasting the probability and the estimated time of usage comprises: determining a forecast of future service needs (see Svennebring; paragraphs 0024 and 0058; Svennebring discloses enabling a service prediction model to forecast the rate of termination of the SLA for a given service, i.e. “a forecast of future service needs”); and predicting one or more locations where the service will be used and usage of the service at the one or more locations based on location updates and identified usage (see Yang; paragraphs 0053 and 0066; Yang discloses information such as the application service used, i.e. “usage of the service”, by the end device 180 and the frequency of use of a route/path. The migration controller 124 may predict the need for a handover, i.e. “predicting one or more locations where the service will be used”, based on this information and current tracking information. Further, the end device 180 may provide information, such as certain planned events and destinations, in order to provide a basis to predict mobility. Therefore, location and usage of the application service is predicted, i.e. “predicting one or more locations where the service will be used and usage of the service at the one or more locations…”, based on location information and the application service being used, i.e. “based on location updates and identified usage”). The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 2.

Regarding claim 5, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein the edge computing apparatus is implemented as a Multi-access Edge Computing (MEC) host within a MEC system (see Svennebring; paragraph 0023; Svennebring discloses the quality prediction in use in mobile edge computing, e.g. MEC; and see Yang; paragraphs 0046 and 0047; Yang discloses devices in a MEC network), and wherein the potential computing nodes comprise additional MEC hosts managed by a MEC orchestrator (see Yang; paragraphs 0046, 0053 and 0059; Yang discloses determining target MEC networks, i.e. “potential computing nodes”, which are hosted by MEC devices 117). The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 2.
Regarding claim 7, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein the edge computing apparatus is implemented within an Internet of Things (IoT) network comprising one or more endpoint devices that communicate on a network at an edge or endpoint of the network, and wherein the connected edge device comprises an IoT device having sensor, data, or processing functionality (see Svennebring; paragraph 0023; Svennebring discloses implementation within an IoT network, such as the system being employed using IoT connected devices accessing diverse services with different SLAs and from diverse types of networks. The IoT device, for example, may include wearable devices receiving software updates or a navigation system receiving updates, i.e. “IoT device having a sensor, data or processing functionality”).

Regarding claim 8, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses identifying usage of the service and one or more resources used for the service (see Yang; paragraphs 0045 and 0059; Yang discloses a MEC network identifier field 315 may store data indicating network resource information pertaining to each MEC network 115, such as indicating current load information, and identifying the application service being used, i.e. “identifying usage of the service…”, by the end device 180. Further, a determination is made on whether the application service is provisioned at the target network. As such, the resources used by the application service are known in order to determine if the target network is provisioned for migration of the application service); receiving location updates from the connected edge device (see Yang; paragraphs 0053 and 0066; Yang discloses a mobility information field 360 that includes geographic coordinates and other types of location information, such as both current and historical tracking information. Further, updated location information, i.e. “receiving location updates…”, may be provided by the end device 180); and predicting usage of the service based on the location updates and identified usage (see Yang; paragraphs 0053 and 0066; Yang discloses information such as the application service used by the end device 180 and the frequency of use of a route/path. The migration controller 124 may predict the need for a handover, i.e. “predicting usage of the service…”, based on this information, i.e. “…based on the location updates”, and current tracking information. Further, the end device 180 may provide information, such as certain planned events and destinations, in order to provide a basis to predict mobility. Therefore, location and usage of the application service is predicted based on location information and the application service being used). The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 2.

Regarding claim 9, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein the instructions further cause the processing circuitry to: monitor actual movement of the connected edge device (see Svennebring; paragraphs 0023, 0040 and 0059; Svennebring discloses historical data relating to motion history as a device moves along through a network or across networks, i.e. “monitor actual movement…”); compare the actual movement to the forecasted probability (see Svennebring; paragraphs 0026 and 0062; Svennebring discloses comparing a predicted metric with a measured metric and measuring the status of the device while it is moving against the status it would have at the next location); and adjust subsequent probability forecasts based on comparing the actual movement to the forecasted probability (see Svennebring; paragraph 0062; Svennebring discloses measuring the status of the signal strength considering the current and next location, i.e. “comparing the actual movement to forecasted probability”, of the device within each network zone. This may then be correlated to the policy and cost, i.e. “adjust subsequent probability forecasts”, of the service being consumed, or forecasted to be consumed, as the resource moves into a network).

Regarding claim 10, Svennebring and Yang disclose all the limitations of claim 2, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein the edge computing apparatus operates within a cloud-computing network in communication with one or more endpoint devices at an edge of the cloud-computing network, and wherein the potential computing nodes comprise edge nodes within the cloud-computing network (see Svennebring; paragraphs 0023, 0083 and 0247; Svennebring discloses data transmission from an edge to the cloud, such as a cloud service server adapted to perform operations of a cloud service; and further see Yang; paragraphs 0018, 0027 and 0029; Yang discloses network devices, such as cloud devices, i.e. “potential computing nodes”, being used in the MEC network at the edge of a cloud network using cloud computing and cloud services). The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 2.
Regarding claims 13 and 21, Svennebring and Yang disclose all the limitations of claims 11 and 17, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein communicating the service pre-allocation information comprises: coordinating the speculative allocation of resources through a centralized infrastructure that performs management for pre-allocation of resources in a network (see Svennebring; paragraphs 0027, 0035, 0036 and 0049; Svennebring discloses the received predictions may be used to modify behavior, i.e. of the service, in light of the prediction/forecast for possible routes/paths, i.e. “speculative allocation of resources along…”); delegating management of one or more individual resources to meet one or more service objectives to one or more individual edge locations (see Svennebring; paragraphs 0023, 0024 and 0058; Svennebring discloses allocation of resources to maintain SLAs, i.e. “delegating management…to meet one or more service objectives”. Further, a prediction is made to ensure proper operation of resources).

Regarding claims 14 and 19, Svennebring and Yang disclose all the limitations of claims 11 and 17, as discussed above, and further the combination of Svennebring and Yang clearly discloses wherein forecasting the probability and the estimated time of usage comprises: analyzing a statistical measure of likely activity along the multiple potential mobility location paths (see Svennebring; paragraphs 0026 and 0062; Svennebring discloses comparing a predicted metric with a measured metric and measuring the status, i.e. “analyzing a statistical measure…”, of the device while it is moving against the status it would have at the next location); and pre-allocating resources based on a probability percentage of the connected edge device continuing along specific paths (see Svennebring; paragraphs 0027, 0035 and 0036; Svennebring discloses determining information, i.e. “service pre-allocation information”, associated with the predicted routes/paths, i.e. “multiple potential mobility location paths”, and communicating the information, i.e. “communicate service pre-allocation information”), wherein the service pre-allocation information is used for speculative allocation of resources along respective nodes in the multiple potential mobility location paths (see Svennebring; paragraphs 0027, 0035 and 0036; Svennebring discloses the received predictions may be used to modify behavior, i.e. of the service, in light of the prediction/forecast for possible routes/paths, i.e. “speculative allocation of resources along…potential mobility location paths”).

Regarding claims 15 and 20, Svennebring and Yang disclose all the limitations of claims 11 and 17, as discussed above, and further the combination of Svennebring and Yang clearly discloses establishing a fallback provision to enable the service to be continued for a predetermined time period at an originating computing node using a higher quality of service communication link with inter-tower communications when a migration deadline cannot be met (see Yang; paragraphs 0016, 0036, 0041, 0049, 0053 and 0059; Yang discloses a MEC handover allows for continuous and uninterrupted application service to an end device 180 to meet a quality of service requirement, i.e. “a higher quality of service communication link”. A migration controller 124 is coupled via a communication link and interface that supports the MEC handover. In particular, the migration controller 124 identifies current tracking information, such as travel paths between MEC networks, of the end device 180 from a mobility information field 360).

Regarding claim 16, Svennebring and Yang disclose all the limitations of claim 11, as discussed above, and further the combination of Svennebring and Yang clearly discloses coordinating the speculative allocation of resources through multicast messages targeting multiple topologies (see Svennebring; paragraphs 0027, 0035, 0036 and 0049; Svennebring discloses the received predictions may be used to modify behavior, i.e. of the service, in light of the prediction/forecast for possible routes/paths, i.e. “speculative allocation of resources…”); initiating pre-allocation actions while coordinating resource management under centralized control (see Svennebring; paragraphs 0023, 0024 and 0058; Svennebring discloses allocation of resources, i.e. “coordinating resource management…”, to maintain SLAs. Further, a prediction is made to ensure proper operation of resources); and identifying resources that become unneeded using time limits and notifications (see Svennebring; paragraphs 0022, 0025, 0033 and 0058; Svennebring discloses different links may be used at different times, i.e. “time limits”, and a link quality prediction analyzed based on a time, and alerts, i.e. “notifications”, keyed off of the prediction to the shutdown of running services, i.e. “identifying resources that become unneeded”).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Svennebring et al. (U.S. 2019/0036630 A1) in view of Yang et al. (U.S. 2020/0196203 A1) as applied to claim 2 above, and further in view of Frydman et al. (U.S. 2017/0118311 A1). Regarding claim 6, Svennebring and Yang disclose all the limitations of claim 2, as discussed above.
While Svennebring discloses “communicating the service pre-allocation information”, as discussed above, the combination of Svennebring and Yang does not explicitly disclose communicating multicast messages and notifications targeting multiple computing nodes within a network topology; and coordinating distributed resource management through communication and resource API calls. In analogous art, Frydman discloses communicating multicast messages and notifications targeting multiple computing nodes within a network topology (see Frydman; paragraphs 0009, 0035 and 0036; Frydman discloses communication in the form of multicast to all mobility services, such as sending multicast messages to members, i.e. “multiple computing nodes”, within a MEC zone, i.e. “within a network topology”); and coordinating distributed resource management through communication and resource API calls (see Frydman; paragraphs 0038 and 0039; Frydman discloses management of application resources through a mobility service using an API toward the applications, i.e. “resource API calls”). One of ordinary skill in the art would have been motivated to combine Svennebring, Yang and Frydman because they all disclose features for providing services in edge computing, and as such, are within the same environment. Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Frydman’s multicast messaging into the combined system of Svennebring and Yang in order to provide the benefit of efficiency by allowing, when a device moves away from a base (see Svennebring; paragraphs 0040 and 0041) in a predicted route/path (see Svennebring; paragraphs 0027, 0030 and 0049), communication between all members in the new MEC zone (see Frydman; paragraphs 0035 and 0036).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ramamoorthy et al. (U.S. 2017/0116626 A1) discloses, prior to migration, identifying target services that are likely to be used by a customer after migration. Llagostera et al. (U.S. 2017/0279692 A1) discloses indicating it is possible to migrate between two candidate service providers in the future.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM A COONEY whose telephone number is (571)270-5653. The examiner can normally be reached M-F 7:30am-5:00pm (every other Fri off). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.A.C/ Examiner, Art Unit 2458, 03/31/26
/UMAR CHEEMA/ Supervisory Patent Examiner, Art Unit 2458

Prosecution Timeline

Sep 24, 2024
Application Filed
Mar 31, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585237: SYSTEMS AND METHODS FOR HIERARCHICAL ORGANIZATION OF SOFTWARE DEFINED PROCESS CONTROL SYSTEMS FOR INDUSTRIAL PROCESS PLANTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587720: MEDIA DEVICE SIMULATOR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574428: DYNAMIC MODIFICATION OF FUNCTIONALITY OF A REAL-TIME COMMUNICATIONS SESSION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12554520: Automated System And Method For Extracting And Adapting System Configurations
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12531917: CHAT BRIDGING IN VIDEO CONFERENCES
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 69% (+11.0%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.
