Prosecution Insights
Last updated: April 19, 2026
Application No. 18/877,074

PROCESS SEGMENT AUGMENTATION

Non-Final OA §102
Application Filed
Dec 19, 2024
Examiner
GIBSON, JONATHAN D
Art Unit
2113
Tech Center
2100 — Computer Architecture & Software
Assignee
Arm Limited
OA Round
1 (Non-Final)
85%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

85%
Career Allow Rate (above average)
302 granted / 355 resolved
+30.1% vs TC avg
+13.8%
Interview Lift (moderate)
Based on resolved cases with interview
Typical timeline
2y 5m
Avg Prosecution
15 currently pending
Career history
370
Total Applications
across all art units
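The headline figures on the cards above can be checked directly from the raw counts; a minimal sketch in Python (the "implied TC average" is a back-calculation from the +30.1% delta, not a number the dashboard reports):

```python
# Reproduce the examiner-level figures from the raw counts shown on the card.
granted = 302    # applications granted (from the dashboard)
resolved = 355   # total resolved applications

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~85.1%, displayed as 85%

# The "+30.1% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg_implied = allow_rate - 0.301
print(f"Implied TC average: {tc_avg_implied:.1%}")
```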

Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§103: 36.5% (-3.5% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center average is an estimate. Based on career data from 355 resolved cases.
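Each row reports both the examiner's rate and its delta from the Tech Center average, so the TC estimate can be back-calculated; a small sketch using the figures above (the subtraction model is an assumption about how the deltas were computed):

```python
# Back-calculate the Tech Center average estimate from each (rate, delta)
# pair above, assuming delta = examiner_rate - tc_avg.
stats = {
    "§101": (26.2, -13.8),
    "§103": (36.5, -3.5),
    "§102": (21.5, -18.5),
    "§112": (5.7, -34.3),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: TC avg ≈ {rate - delta:.1f}%")
# All four pairs imply the same ~40.0% baseline, consistent with a single
# Tech Center average behind the chart.
```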

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gunjal et al. US 2020/0296172 (hereinafter "Gunjal").

Regarding claim 1, Gunjal teaches:

A computer implemented method of managing network-attachable computing entities comprising: [FIG. 1 and FIG. 5]

training a machine-learning model to detect a bottleneck process segment in a process flow performed by at least a first network-attachable computing entity; [FIG. 5 and 0045: "FIG. 5 processing commences at 500 whereupon, at predefined process 520, the process learns the traffic flow conditions in the service mesh using metrics from adjacent nodes (microservices) and learns the traffic condition experience by each node (see FIG. 6 and corresponding text for processing details)."]

deploying a trained said model to monitor at least said first network-attachable computing entity in operation; [FIG. 5 and 0045: "FIG. 5 is an exemplary high level flowchart showing steps to collect service mesh performance data and provide traffic routing policy adjustments."]

responsive to said monitoring detecting an instance of said bottleneck process segment, analyzing said bottleneck process segment to determine a cause of said bottleneck; [FIG. 5 and 0046: "At predefined process 540, the process analyzes and computes node stress, path stress, predictability score, and health score for the microservices in the service mesh (see FIG. 7 and corresponding text for processing details)."]

responsive to determining said cause of said bottleneck, generating an augmented functional unit to address said cause of said bottleneck; and [FIG. 5 and 0046: "At predefined process 560, the process identifies and plans for traffic flow adjustments based on the microservices analysis performed at predefined process 560 (see FIG. 8 and corresponding text for processing details)."]

deploying said augmented functional unit to at least one of said first network-attachable computing entity and a further network-attachable computing entity having an instance of a process comprising said bottleneck process segment. [FIG. 5 and 0047: "At predefined process 580, the process modifies the service mesh's traffic routing policy based on the traffic flow planning adjustments from predefined process 560. In one embodiment, the traffic flow policy adjustments include horizontal/vertical scaling of nodes, rerouting traffic to isolate and remove nodes with lower predictability, inject traffic delays, and restart nodes (see FIG. 9 and corresponding text for processing details)."]

Regarding claim 2: The method according to claim 1, said analyzing said bottleneck process segment to determine a cause of said bottleneck comprising recognising signature characteristics of process elements that cause bottlenecks. [FIG. 5 and 0046: "At predefined process 540, the process analyzes and computes node stress, path stress, predictability score, and health score for the microservices in the service mesh (see FIG. 7 and corresponding text for processing details)."]

Regarding claim 3: The method of claim 1, said training a machine-learning model to detect a bottleneck process segment in a process flow comprising training said model to analyse a processing path and generate at least one alternative processing path. [FIG. 5 and 0046: "At predefined process 560, the process identifies and plans for traffic flow adjustments based on the microservices analysis performed at predefined process 560 (see FIG. 8 and corresponding text for processing details)."]

Regarding claim 4: The method according to claim 3, further comprising comparing said processing path and said at least one alternative processing path to determine which path is the more efficient processing path. [FIG. 5 and 0047: "At predefined process 580, the process modifies the service mesh's traffic routing policy based on the traffic flow planning adjustments from predefined process 560. In one embodiment, the traffic flow policy adjustments include horizontal/vertical scaling of nodes, rerouting traffic to isolate and remove nodes with lower predictability, inject traffic delays, and restart nodes (see FIG. 9 and corresponding text for processing details)."]

Regarding claim 5: The method according to claim 4, said generating an augmented functional unit comprising generating an encoding for said more efficient processing path. [FIG. 5 and 0047: "At predefined process 580, the process modifies the service mesh's traffic routing policy based on the traffic flow planning adjustments from predefined process 560. In one embodiment, the traffic flow policy adjustments include horizontal/vertical scaling of nodes, rerouting traffic to isolate and remove nodes with lower predictability, inject traffic delays, and restart nodes (see FIG. 9 and corresponding text for processing details)."]

Regarding claim 6: The method according to claim 1, said generating an augmented functional unit comprising constructing processing logic using a hardware definition language to apply to a configurable hardware unit. [FIG. 5 and 0046: "At predefined process 560, the process identifies and plans for traffic flow adjustments based on the microservices analysis performed at predefined process 560 (see FIG. 8 and corresponding text for processing details)." Also see 0021: "In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention."]

Regarding claim 7: The method according to claim 1, said generating an augmented functional unit comprising constructing an instruction set extension. [FIG. 5 and 0046: "At predefined process 560, the process identifies and plans for traffic flow adjustments based on the microservices analysis performed at predefined process 560 (see FIG. 8 and corresponding text for processing details)." Also see 0021: "In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention."]

Regarding claim 8: The method according to claim 1, said bottleneck process segment comprising a resource constrained processing path. [FIG. 5 and 0047: "At predefined process 580, the process modifies the service mesh's traffic routing policy based on the traffic flow planning adjustments from predefined process 560. In one embodiment, the traffic flow policy adjustments include horizontal/vertical scaling of nodes, rerouting traffic to isolate and remove nodes with lower predictability, inject traffic delays, and restart nodes (see FIG. 9 and corresponding text for processing details)."]

Regarding claim 9: The method according to claim 1, responsive to said monitoring detecting more than one instance of said bottleneck process segment further comprising establishing a priority order for handling said more than one instance of said bottleneck process. [FIG. 5 and 0047: "At predefined process 580, the process modifies the service mesh's traffic routing policy based on the traffic flow planning adjustments from predefined process 560. In one embodiment, the traffic flow policy adjustments include horizontal/vertical scaling of nodes, rerouting traffic to isolate and remove nodes with lower predictability, inject traffic delays, and restart nodes (see FIG. 9 and corresponding text for processing details)."]

Regarding claim 10: The method according to claim 1, said generating an augmented functional unit to address said cause of said bottleneck further comprising recognising a previously encountered bottleneck and reusing a prior generated functional unit as a basis for said generating. [0038: "In one embodiment, when a microservice and/or its neighbors is undergoing an administrator-initiated change, such as, add/remove new microservices, update traffic routing policies, etc., then dynamic traffic management agent 300 marks the microservice as a tainted-node, annotates the nodes with the microservices version numbers, and resets the learning models (continuously learnt using historical data) for the tainted-node."]

Regarding claim 11: The method according to claim 1, said deploying a trained said model to monitor at least said first network-attachable computing entity in operation comprising installing model-based instrumentation at said first network-attachable computing entity to capture data for analysis. [FIG. 3 and 0039: "Dynamic traffic management agent 300 analyzes the performance metrics and determines whether the traffic flow policy in service mesh 320 requires adjustments, such as when newer microservice versions are added and/or traffic flow is congested."]

Claim 12 is rejected based on the same citations and rationale given to claim 1.

Claim 13 is rejected based on the same citations and rationale given to claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D GIBSON whose telephone number is (571) 431-0699. The examiner can normally be reached Monday - Friday, 8:00 A.M.-4:00 P.M.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRYCE P BONZO, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN D GIBSON/
Primary Examiner, Art Unit 2113
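Purely as a reading aid, the five steps recited in claim 1 (train, deploy and monitor, analyze, generate, deploy) can be sketched as a toy pipeline. Every name below is invented for illustration; none of this code comes from the application or from Gunjal:

```python
# Toy, hypothetical sketch of the claim-1 flow; all names are invented.

def train_model(process_flow):
    # Step 1: "train" a detector for the costliest process segment.
    return {"bottleneck": max(process_flow, key=process_flow.get)}

def monitor(entity_flow, model):
    # Step 2: the deployed model watches the entity in operation.
    return model["bottleneck"] if model["bottleneck"] in entity_flow else None

def analyze(segment):
    # Step 3: determine a (stand-in) cause for the detected bottleneck.
    return f"cause-of-{segment}"

def generate_unit(cause):
    # Step 4: generate an "augmented functional unit" addressing the cause.
    return {"addresses": cause}

def deploy(unit, entities):
    # Step 5: deploy the unit to every entity running that segment.
    return {entity: unit for entity in entities}

flow = {"parse": 9.0, "encode": 1.0}   # "parse" dominates, so it is flagged
model = train_model(flow)
segment = monitor(flow, model)
if segment is not None:
    unit = generate_unit(analyze(segment))
    deployed = deploy(unit, ["entity-1", "entity-2"])
```

Mapping each stub against Gunjal's service-mesh disclosure (learn traffic conditions, compute stress scores, plan adjustments, update routing policy) is essentially the comparison the §102 rejection asks the practitioner to rebut.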

Prosecution Timeline

Dec 19, 2024
Application Filed
Jan 17, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this examiner for similar technology

Patent 12602607
Determining an Implementation of a Quantum Program that has a Minimized Overall Error Rate
2y 5m to grant · Granted Apr 14, 2026
Patent 12602274
SYSTEM FOR REAL-TIME OVERLOAD DETECTION USING IMAGE PROCESSING ANALYSIS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591478
APPARATUS AND METHOD FOR GENERATING ALERT CONTEXT DASHBOARD
2y 5m to grant · Granted Mar 31, 2026
Patent 12579019
SMART SURVEILLANCE SERVICE IN PRE-BOOT FOR QUICK REMEDIATIONS
2y 5m to grant · Granted Mar 17, 2026
Patent 12566661
ADAPTIVE LOG DATA LEVEL IN A COMPUTING SYSTEM
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+13.8%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
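The "With Interview" figure follows from adding the interview lift to the base probability; a quick check (treating the lift as additive percentage points is an assumption about the dashboard's model, not something it states):

```python
# Check the projection card's arithmetic, assuming the interview lift is
# added to the base grant probability as percentage points.
base_grant = 85.0       # career allow rate, in percent
interview_lift = 13.8   # percentage points

with_interview = base_grant + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 98.8 rounds to the 99% shown
```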
