Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,170

INGRESS TRAFFIC SHIFT PREDICTION

Non-Final OA §103
Filed
Feb 26, 2024
Examiner
ALGIBHAH, HAMZA N
Art Unit
2441
Tech Center
2400 — Computer Networks
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
79%
Grant Probability
Favorable
1-2
OA Rounds
2y 11m
To Grant
82%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
566 granted / 713 resolved
+21.4% vs TC avg
+3.1%
Interview Lift
Minimal lift; based on resolved cases with interview
Typical timeline
2y 11m
Avg Prosecution
31 currently pending
Career history
744
Total Applications
across all art units
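The headline figures above can be reproduced from the card data: 566 grants out of 713 resolved cases gives the 79% career allow rate, and the +21.4% delta implies a Tech Center average near 58%. A minimal sketch; the 58% TC average is inferred from the report's own delta, not an independently known figure:

```python
# Sketch of how the headline examiner statistics could be derived from the
# card data above. The 566/713 counts come from the report; the ~58% Tech
# Center average is an assumption implied by the reported +21.4% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(566, 713)   # ~79.4%, shown rounded as 79%
tc_avg = 58.0                   # assumed: implied by the +21.4% delta
delta = career - tc_avg         # ~+21.4 percentage points
```

The displayed grant probability appears to be the career allow rate rounded to a whole percentage.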

Statute-Specific Performance

§101
12.1%
-27.9% vs TC avg
§103
50.2%
+10.2% vs TC avg
§102
20.0%
-20.0% vs TC avg
§112
10.4%
-29.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 713 resolved cases

Office Action

§103
Claims 1-20 are pending. Claims 1-20 are rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7, 9-12 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over FONSECA et al (Pub. No.: US 2024/0039851 A1) in view of Official Notice.

As per claim 1, FONSECA discloses a method of predicting a data traffic shift responsive to an (FONSECA, abstract, wherein “The system includes a congestion mitigation system configured to predict based on the model, for traffic flows arriving on one or more peering links, other peering links to which the traffic flows would be shifted due to a condition affecting the one or more peering links. The congestion mitigation system may determine, in response to the condition, a set of prefixes to withdraw based on the other peering links to which traffic would be shifted”;Fig 6 step 620-630, paragraph 0107-0109, 0111, wherein “FIG. 6 is a flow diagram of an example of a method 600 for determining a set of prefixes to withdraw due to a congestion condition.
For example, the method 600 can be performed by the ingress traffic management system 300, the apparatus 400 and/or one or more components thereof to manage ingress traffic to the cloud network 110 or the WAN 210.”; “At block 620, the method 600 includes determining whether a utilization level for a peering link has exceeded a utilization threshold.”; “At block 630, the method 600 includes determining candidate prefixes at the peering link. In an example, the prefix selector 312, e.g., in conjunction with processor 402, memory 404, and operating system 406, can determine candidate prefixes at the peering link. For example, the candidate prefixes may include a set of candidate prefixes that have been announced for the peering link.”; wherein determining candidate prefixes at the peering link implies data traffic shifts); - collecting data traffic volumes on the ingress links to the communication network, the data traffic volumes corresponding to the detected data traffic shifts (FONSECA, Fig 6, paragraph 0114, wherein “At block 660, the method 600 includes allocating volume of traffic flows for candidate prefix to other peering links. In an example, the withdrawal simulator 314, e.g., in conjunction with processor 402, memory 404, and operating system 406, can allocate an amount of volume of the traffic flows for the at least one prefix to the other peering links to which traffic would be shifted. For example, the amount of volume may include one or more of a minimum volume, a probable volume, or a maximum volume”; wherein allocating volume of traffic flows implies the collecting of data traffic volumes as claimed); - receiving a query regarding the data traffic shift with respect to the identified ingress link and an identified data packet (FONSECA, Fig 6, paragraph 0113, wherein “At block 650, the method 600 includes querying the model to predict other peering links. 
In an example, the withdrawal simulator 314, e.g., in conjunction with processor 402, memory 404, and operating system 406, can querying the model 142 and/or traffic ingress prediction system 330 to predict other peering links to which the traffic flows would be shifted due to a condition affecting the one or more peering links. For example, the block 650 may correspond to the block 550 of method 500, wherein traffic flows for the at least one candidate prefix are the traffic flows arriving on one or more peering links that are provided to the model 142”; paragraph 0063-0064, wherein “In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110). In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. 
For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP”; thus, the query regarding the data traffic shift is with respect to the identified ingress link that has the congestion and an at least one data packet within the congested traffic that has the information needed for the query); and - generating, responsive to the query, a predicted score for each of the candidate ingress links, wherein each predicted score indicates a likelihood that data traffic corresponding to the identified data packet will shift from the identified ingress link to a corresponding one of the candidate ingress links responsive to the (FONSECA, Fig 5-6, paragraph 0084, 0092, wherein “Finally, there are flow aggregates for which the training data may not include k alternative ingress links, even though they may exist. In these cases, geographic distance may be used to find alternate peering links. The traffic ingress prediction system 330 takes the peering AS A and ingress location 1 for the best match (k=1), and ranks the other peering interfaces from A by geographic distance to 1. The type of ensemble model then uses this ranked list to complete the list of interfaces returned. This ensemble model may be applied on top of the AL models (as it was the best for unseen withdrawals) and may be referred to as an AL+G model. Table 2 summarizes suitable models and features for the ingress model 142”, “When training the historical model, all the measured ingress traffic may be grouped by the respective flow tuples (as defined in Table 1) and ingress link, which requires memory and processing linear with the number of measurements (after the aggregation stage). Then for each flow tuple the peering links are ranked by bytes, keeping only the top k links”). 
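The historical-model behavior quoted for claim 1 (group measured ingress traffic by flow tuple and peering link, rank each tuple's links by bytes, keep only the top k) can be sketched as below. The flow tuples, link names, volumes, and the volume-share score are hypothetical illustrations, not the reference's actual implementation:

```python
from collections import defaultdict

def rank_candidate_links(samples, k=2):
    """Group (flow_tuple, link, bytes) measurements by flow tuple, rank each
    tuple's peering links by total bytes, and keep only the top-k links.
    The returned score is the link's share of the tuple's total volume."""
    volumes = defaultdict(lambda: defaultdict(int))
    for flow_tuple, link, nbytes in samples:
        volumes[flow_tuple][link] += nbytes
    ranked = {}
    for flow_tuple, per_link in volumes.items():
        total = sum(per_link.values())
        top = sorted(per_link.items(), key=lambda kv: kv[1], reverse=True)[:k]
        ranked[flow_tuple] = [(link, nbytes / total) for link, nbytes in top]
    return ranked

# Hypothetical measurements: (source AS, destination region), link, bytes
samples = [
    (("AS64500", "us-east"), "peer-A", 700),
    (("AS64500", "us-east"), "peer-B", 200),
    (("AS64500", "us-east"), "peer-C", 100),
]
ranked = rank_candidate_links(samples, k=2)
# ranked[("AS64500", "us-east")] == [("peer-A", 0.7), ("peer-B", 0.2)]
```

FONSECA's quoted AL+G fallback (ranking a peering AS's other interfaces by geographic distance when fewer than k alternatives appear in training data) would layer on top of a ranking like this.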
FONSECA teaches shifting due to a condition affecting the one or more peering links (a congestion condition) but does not explicitly disclose that the condition is an outage of the link. However, since an outage of the link also affects the one or more peering links, similarly to the congestion condition, the FONSECA system can also be used for outage conditions. In addition, Official Notice is taken that detecting and handling an outage of an ingress link is well known in the art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify FONSECA so that the condition is an outage of an ingress link as claimed, because this would have provided a way to apply the prediction model to outage scenarios, which also cause traffic shifts, thus improving the system's usability in failure scenarios.

As per claim 2, claim 1 is incorporated and FONSECA further discloses wherein the candidate ingress links are ranked according to the predicted score of each candidate ingress link to yield a ranked list of the candidate ingress links to which the data traffic can shift (FONSECA, Fig 5-6, paragraph 0084, 0092, wherein “Finally, there are flow aggregates for which the training data may not include k alternative ingress links, even though they may exist. In these cases, geographic distance may be used to find alternate peering links. The traffic ingress prediction system 330 takes the peering AS A and ingress location 1 for the best match (k=1), and ranks the other peering interfaces from A by geographic distance to 1.
The type of ensemble model then uses this ranked list to complete the list of interfaces returned”); As per claim 3, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on at least a feature representing the collected data traffic volumes of the data traffic traversing through each of the candidate ingress links from a same source autonomous system or source internet protocol prefix as the identified data packet (FONSECA, paragraph 0021, wherein “the techniques described herein relate to a method, wherein the model is a historical model with a feature set including at least a source autonomous system (AS), a destination region, and a destination type for each traffic flow”; paragraph 0063-0066, wherein “A key design issue with any learning task is choosing the right features for the problem. In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110). In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP. 2) Source prefix: All traffic entering the WAN 210 is from a source IP address on the Internet. Using the entire /32 bits of IPv4 addresses significantly increases the feature space. In contrast, aggregating IPv4 addresses to the announced routing block sizes may hide too much useful information, especially if BGP Route Aggregation is enabled by some ASes. 
In this trade-off between resolution and feature space, the /24 prefix of the source IP may be used as the feature. The /24 prefix is the widely accepted limit on routable prefix length and inter-domain routing policies could operate at that boundary, which would influence which peering link traffic arrives on. 3) Source location: One issue with using source prefixes as a feature is the size of the feature space. A global WAN 210 may have over 13 million /24 prefixes in the dataset. Therefore the source location may also use mappings from source IP addresses to coarse geo-location (at the level of large metropolitan areas). Traffic originating in the same AS and geographic area with the same destination may share paths”); As per claim 4, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on at least a feature representing including the collected data traffic volumes of the data traffic traversing through each of the candidate ingress links from a same source autonomous system as the identified data packet and from any other source autonomous system that communicated data traffic through the identified ingress link (FONSECA, paragraph 0021, wherein “the techniques described herein relate to a method, wherein the model is a historical model with a feature set including at least a source autonomous system (AS), a destination region, and a destination type for each traffic flow”; paragraph 0063-0066, wherein “A key design issue with any learning task is choosing the right features for the problem. In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110). In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. 
In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP. 2) Source prefix: All traffic entering the WAN 210 is from a source IP address on the Internet. Using the entire /32 bits of IPv4 addresses significantly increases the feature space. In contrast, aggregating IPv4 addresses to the announced routing block sizes may hide too much useful information, especially if BGP Route Aggregation is enabled by some ASes. In this trade-off between resolution and feature space, the /24 prefix of the source IP may be used as the feature. The /24 prefix is the widely accepted limit on routable prefix length and inter-domain routing policies could operate at that boundary, which would influence which peering link traffic arrives on. 3) Source location: One issue with using source prefixes as a feature is the size of the feature space. A global WAN 210 may have over 13 million /24 prefixes in the dataset. Therefore the source location may also use mappings from source IP addresses to coarse geo-location (at the level of large metropolitan areas). 
Traffic originating in the same AS and geographic area with the same destination may share paths”); As per claim 6, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on at least a feature representing the collected data traffic volumes of the data traffic traversing through each of the candidate ingress links from a same source internet protocol prefix as the identified data packet and from any other source internet protocol prefix that communicated data traffic through the identified ingress link and through a same source autonomous system (FONSECA, paragraph 0021, wherein “the techniques described herein relate to a method, wherein the model is a historical model with a feature set including at least a source autonomous system (AS), a destination region, and a destination type for each traffic flow”; paragraph 0063-0066, wherein “A key design issue with any learning task is choosing the right features for the problem. In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110). In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP. 2) Source prefix: All traffic entering the WAN 210 is from a source IP address on the Internet. Using the entire /32 bits of IPv4 addresses significantly increases the feature space. 
In contrast, aggregating IPv4 addresses to the announced routing block sizes may hide too much useful information, especially if BGP Route Aggregation is enabled by some ASes. In this trade-off between resolution and feature space, the /24 prefix of the source IP may be used as the feature. The /24 prefix is the widely accepted limit on routable prefix length and inter-domain routing policies could operate at that boundary, which would influence which peering link traffic arrives on. 3) Source location: One issue with using source prefixes as a feature is the size of the feature space. A global WAN 210 may have over 13 million /24 prefixes in the dataset. Therefore the source location may also use mappings from source IP addresses to coarse geo-location (at the level of large metropolitan areas). Traffic originating in the same AS and geographic area with the same destination may share paths”);

As per claim 7, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on at least a feature representing the collected data traffic volumes of the data traffic traversing through each of the candidate ingress links from a same source internet protocol prefix as the identified data packet and from any other source internet protocol prefix that communicated data traffic through the identified ingress link (FONSECA, paragraph 0021, wherein “the techniques described herein relate to a method, wherein the model is a historical model with a feature set including at least a source autonomous system (AS), a destination region, and a destination type for each traffic flow”; paragraph 0063-0066, wherein “A key design issue with any learning task is choosing the right features for the problem. In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110).
In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP. 2) Source prefix: All traffic entering the WAN 210 is from a source IP address on the Internet. Using the entire /32 bits of IPv4 addresses significantly increases the feature space. In contrast, aggregating IPv4 addresses to the announced routing block sizes may hide too much useful information, especially if BGP Route Aggregation is enabled by some ASes. In this trade-off between resolution and feature space, the /24 prefix of the source IP may be used as the feature. The /24 prefix is the widely accepted limit on routable prefix length and inter-domain routing policies could operate at that boundary, which would influence which peering link traffic arrives on. 3) Source location: One issue with using source prefixes as a feature is the size of the feature space. A global WAN 210 may have over 13 million /24 prefixes in the dataset. Therefore the source location may also use mappings from source IP addresses to coarse geo-location (at the level of large metropolitan areas). 
Traffic originating in the same AS and geographic area with the same destination may share paths”); As per claim 9, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on at least a feature representing a geographic distance of a connection of the identified ingress link to the communication network and connections of the candidate ingress links to the communication network (FONSECA, Fig 5-6, paragraph 0084, 0092, wherein “Finally, there are flow aggregates for which the training data may not include k alternative ingress links, even though they may exist. In these cases, geographic distance may be used to find alternate peering links. The traffic ingress prediction system 330 takes the peering AS A and ingress location 1 for the best match (k=1), and ranks the other peering interfaces from A by geographic distance to 1. The type of ensemble model then uses this ranked list to complete the list of interfaces returned. This ensemble model may be applied on top of the AL models (as it was the best for unseen withdrawals) and may be referred to as an AL+G model. Table 2 summarizes suitable models and features for the ingress model 142”, “When training the historical model, all the measured ingress traffic may be grouped by the respective flow tuples (as defined in Table 1) and ingress link, which requires memory and processing linear with the number of measurements (after the aggregation stage). 
Then for each flow tuple the peering links are ranked by bytes, keeping only the top k links”); As per claim 10, claim 1 is incorporated and FONSECA further discloses wherein training of the shift prediction machine learning model is based on a feature identifying whether the identified ingress link and the candidate ingress links are connected to at least one identical autonomous system (FONSECA, paragraph 0021, wherein “the techniques described herein relate to a method, wherein the model is a historical model with a feature set including at least a source autonomous system (AS), a destination region, and a destination type for each traffic flow”; paragraph 0063-0066, wherein “A key design issue with any learning task is choosing the right features for the problem. In the context of ingress traffic prediction, features are derived from the sampled ingress traffic and combined with information about the network (e.g., cloud provider network 110). In an aspect, a dataset may include information about the complete network topology of the cloud provider network 110. Numerical examples of the size of the dataset are given for a global cloud provider network. In particular, features related to the IP layer, (e.g., source prefix) are the most likely to influence routing decisions. For example, the following features may be used as input to an ingress model 142: 1) Source AS: AS number where the packet originated, based on the source IP address of the sampled flows and BGP advertisements observed from BMP. 2) Source prefix: All traffic entering the WAN 210 is from a source IP address on the Internet. Using the entire /32 bits of IPv4 addresses significantly increases the feature space. In contrast, aggregating IPv4 addresses to the announced routing block sizes may hide too much useful information, especially if BGP Route Aggregation is enabled by some ASes. 
In this trade-off between resolution and feature space, the /24 prefix of the source IP may be used as the feature. The /24 prefix is the widely accepted limit on routable prefix length and inter-domain routing policies could operate at that boundary, which would influence which peering link traffic arrives on. 3) Source location: One issue with using source prefixes as a feature is the size of the feature space. A global WAN 210 may have over 13 million /24 prefixes in the dataset. Therefore the source location may also use mappings from source IP addresses to coarse geo-location (at the level of large metropolitan areas). Traffic originating in the same AS and geographic area with the same destination may share paths”); Claims 11-12 and 15-20 are rejected under the same rationale as claims 1-4, 6-7 and 9-10. Claims 5, 8 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over FONSECA et al (Pub. No.: US 2024/0039851 A1) in view of MAZOR (Pub. No.: US 2024/0211799 A1). As per claim 5, claim 1 is incorporated and FONSECA does not explicitly disclose wherein training of the shift prediction machine learning model is based on at least a feature representing cosine similarities between the collected data traffic volumes of data traffic traversing through the identified ingress link and the collected data traffic volumes of data traffic traversing through each of the candidate ingress links to the communication network from a same source autonomous system as the identified data packet. However, using cosine similarities in machine learning models is well known in the art. 
For example, MAZOR discloses using cosine similarities in machine learning models (MAZOR, paragraph 0018, wherein “In some embodiments, the at least one incoming data sample and the one or more previously classified data samples are represented as vectors and the similarity metric value is a cosine similarity metric value”; 0068, wherein “Similarity metric value 50A1 may represent a degree of similarity between incoming data sample 20A1 and previously classified data samples of the particular class (e.g., class 40A11). Calculation of similarity metric value 50A1 may be performed by using known from the art methods. E.g., in some embodiments, incoming data sample 20A1 and previously classified data samples (e.g., data samples of training dataset 51A1) may be represented as vectors and similarity metric value 50A1 may be a cosine similarity metric value”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the use of cosine similarity as claimed into FONSECA to achieve the claimed limitations, because this would have provided a way to identify related ingress traffic data that can be used for training the shift prediction machine learning model, which helps improve the accuracy of the prediction.

As per claim 8, claim 1 is incorporated and FONSECA does not explicitly disclose wherein training of the shift prediction machine learning model is based on at least a feature representing cosine similarities between the collected data traffic volumes of data traffic traversing through the identified ingress link and the collected data traffic volumes of data traffic traversing through each of the candidate ingress links to the communication network from a same source internet protocol prefix as the identified data packet. However, using cosine similarities in machine learning models is well known in the art.
For example, MAZOR discloses using cosine similarities in machine learning models (MAZOR, paragraph 0018, wherein “In some embodiments, the at least one incoming data sample and the one or more previously classified data samples are represented as vectors and the similarity metric value is a cosine similarity metric value”; 0068, wherein “Similarity metric value 50A1 may represent a degree of similarity between incoming data sample 20A1 and previously classified data samples of the particular class (e.g., class 40A11). Calculation of similarity metric value 50A1 may be performed by using known from the art methods. E.g., in some embodiments, incoming data sample 20A1 and previously classified data samples (e.g., data samples of training dataset 51A1) may be represented as vectors and similarity metric value 50A1 may be a cosine similarity metric value”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the use of cosine similarity as claimed into FONSECA to achieve the claimed limitations, because this would have provided a way to identify related ingress traffic data that can be used for training the shift prediction machine learning model, which helps improve the accuracy of the prediction.

Claims 13-14 are rejected under the same rationale as claims 5 and 8.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAMZA N ALGIBHAH whose telephone number is (571)270-7212. The examiner can normally be reached 7:30 am - 3:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wing Chan, can be reached on (571) 272-7493. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAMZA N ALGIBHAH/
Primary Examiner, Art Unit 2441
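For reference, the cosine-similarity feature at issue in claims 5, 8 and 13-14 compares traffic-volume vectors. The sketch below is a generic illustration of the mathematics; the link names and per-period volumes are hypothetical, not drawn from FONSECA or MAZOR:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length volume vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical per-period traffic volumes (e.g., bytes per hour)
identified = [100, 50, 10]                 # identified ingress link
candidates = {"peer-A": [90, 55, 12],      # similar traffic pattern
              "peer-B": [5, 80, 200]}      # dissimilar traffic pattern
features = {name: cosine_similarity(identified, v)
            for name, v in candidates.items()}
# peer-A scores near 1.0; peer-B scores much lower
```

A link whose traffic pattern tracks the identified link's pattern yields a similarity near 1, which is the kind of training feature the claims recite.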

Prosecution Timeline

Feb 26, 2024
Application Filed
Oct 20, 2025
Non-Final Rejection — §103
Dec 23, 2025
Examiner Interview Summary
Dec 23, 2025
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602224
NON-TERMINATING FIRMWARE UPDATE
2y 5m to grant Granted Apr 14, 2026
Patent 12598111
ENABLING INTENT-BASED NETWORK MANAGEMENT WITH GENERATIVE AI AND DIGITAL TWINS
2y 5m to grant Granted Apr 07, 2026
Patent 12598656
METHOD FOR EDGE COMPUTING
2y 5m to grant Granted Apr 07, 2026
Patent 12598096
METHOD AND APPARATUS FOR ACCESSING VIRTUAL MACHINE, DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12528442
SYSTEM, METHOD, AND APPARATUS FOR MANAGING VEHICLE DATA COLLECTION
2y 5m to grant Granted Jan 20, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
82%
With Interview (+3.1%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
