Prosecution Insights
Last updated: April 19, 2026
Application No. 17/862,723

GUARANTEED-LATENCY NETWORKING

Final Rejection — §102, §103
Filed: Jul 12, 2022
Examiner: PHAM, NHU
Art Unit: 2479
Tech Center: 2400 — Computer Networks
Assignee: Nokia Solutions and Networks Oy
OA Round: 3 (Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (grants above average; +31.5% vs TC avg; 17 granted / 19 resolved)
Interview Lift: +12.5% (moderate lift; based on resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline; 21 currently pending)
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 36.0% (-4.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 19 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 11/25/2025 have been fully considered, but they are not persuasive. Applicant argues that the IDB equation in Francini, for a single flow i, uses a single network-wide Lmax (not per-flow maxima), and that the summation is over links k rather than flows i; therefore, Francini does not disclose or suggest a summation of maximum packet sizes over a set of flows. As disclosed in ¶[0074], however, Francini explicitly provides an EDB expression that uses per-aggregate maximum packet sizes L_j and sums terms Σ L_j/R_j across the aggregates encountered by a flow along its path, thereby demonstrating summation over multiple aggregates' maximum packet sizes rather than over a single flow i. Moreover, Francini teaches the functional relationship between packet size, service rate, and time-based scheduling through its eligibility-time formulas, timestamp computations, and slot-mapping mechanisms (see ¶¶[0053]-[0058], [0066]). Applicant's focus on k vs. i does not negate that Francini teaches the size/rate/time relationships needed to implement the claimed constraint at the scheduler. The record supports that Francini teaches and uses per-aggregate maximum packet sizes and sums them in its performance analysis. The §102 rejection is maintained.

Applicant further argues that Francini does not disclose "flows are continuous guaranteed-latency (CGL) flows in a software-defined guaranteed-latency networking (SD-GLN) context" nor a "relationship … between a service rate of a CGL flow and a latency requirement that the SD-GLN context supports," and that Francini only generally references SDN for provisioning network outputs and does not teach CGL flows, SD-GLN, or the claimed rate–latency relationship.
In response, Francini discloses a centralized/network controller that determines and sends service rates for GD flow aggregates (¶¶[0059]–[0061], FIG. 6, FIG. 8), and discusses automatic discovery or explicit provisioning via an SDN controller (¶¶[0038]–[0041]). Francini's "GD traffic flows" are explicitly defined as flows with delay guarantees (¶¶[0019]–[0021], [0036]), functionally corresponding to the claimed "continuous guaranteed-latency" flows. Francini provides closed-form delay bounds (IDB/EDB) that explicitly incorporate allocated service rates and packet sizes (¶¶[0073]–[0075]), demonstrating the relationship between service rate allocations and the latency/delay bounds supported by the network. Although Francini uses different labels (GD flows, controller-driven rate allocation), it teaches the same functional elements claimed. The 35 U.S.C. 102 rejection of claims 36-40 over Francini is maintained.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 22, 24-32, 34, and 36-41 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Francini et al. (US 20210250301; hereinafter "Francini").
Regarding claims 22 and 41, Francini discloses: An apparatus, comprising: ([0085] the computer 800) at least one processor; ([0085] the computer 800 may include at least one processor) and at least one memory storing instructions which, when executed by the at least one processor, cause the apparatus at least to: ([0085] the computer 800 may include at least one processor and at least one memory including instructions wherein the instructions are configured to, when executed by the at least one processor, cause the apparatus to perform various functions presented herein.)

aggregate a set of flows into a flow bundle based on respective maximum packet sizes of the respective flows; ([0038] The scalable deterministic services queuing arrangement may be configured such that, in front of each network link, GD traffic flows of the GD class are grouped into a set of GD flow aggregates associated with distinct network outputs.)

and serve the flow bundle using a set of fixed-size transmission timeslots; ([0066] It is noted that a PIFO queue may be implemented as a calendar queue (e.g., using an array of slots associated with timestamp ranges such that an arriving packet is queued to the slot corresponding to the range of its timestamp and packets queued in the same slot are served in FIFO order).)

wherein the set of flows is aggregated into the flow bundle such that a sum of the maximum packet sizes of the respective flows does not exceed a timeslot size of the fixed-size transmission timeslots. ([0073] In the packet network 100, based on use of scalable deterministic services for supporting communication of GD traffic, each GD traffic flow in the network may be associated with an IDB which quantifies the worst-case delay that packets of the GD traffic flow may ever experience within the packet network 100.
For example, the following IDB holds for a flow f_i of guaranteed rate r_i and maximum packet size L_i that traverses M links to cross a network with N outputs (and inputs):

IDB_i < (N - 2)·L_max/r_i + Σ_{k=1..M} (L_max/C_k + L_i/r_i)

where L_max is the maximum size of a packet in the entire network and C_k is the capacity of link k in the network path of flow f_i; [0066] It is noted that a PIFO queue may be implemented as a calendar queue (e.g., using an array of slots associated with timestamp ranges such that an arriving packet is queued to the slot corresponding to the range of its timestamp and packets queued in the same slot are served in FIFO order); [0074] In the packet network 100, based on use of scalable deterministic services for supporting communication of GD traffic, each GD traffic flow in the network may be associated with an EDB which depends on the service rates allocated to the GD flow aggregates that the GD traffic flow traverses end-to-end and which also depends on the set of other GD traffic flows currently established over the packet network 100. For example, the following effective delay bound holds for a flow f_i of guaranteed rate r_i and maximum packet size L_i that traverses M links to cross a network with N outputs (and inputs):

[EDB equation image not reproduced; per the surrounding text, the bound sums terms L_j/R_j over the X_i merging aggregates]

where X_i is the number of GD flow aggregates destined for the same network output that the GD flow aggregate of flow f_i merges with along its path to the output (it is noted that X_i can be expected to be smaller than the total number of network outputs, because every GD flow aggregate results from the merging of other GD traffic flows and GD flow aggregates at respective upstream nodes), R_j is the service rate of the GD flow aggregate of GD flow f_i at the node where the aggregate of GD flow f_i merges with GD flow aggregate j, and L_j is the maximum size of a packet of GD flow aggregate j.
The bound of this EDB equation depends on the current composition of all GD flow aggregates encountered by GD flow f_i and, thus, is subject to variations as flows keep being added to and removed from the network path of GD flow f_i.)

Regarding claim 24, Francini discloses: wherein the flow bundle includes one or more flows. ([0038] The scalable deterministic services queuing arrangement may be configured such that, in front of each network link, GD traffic flows of the GD class are grouped into a set of GD flow aggregates associated with distinct network outputs.)

Regarding claim 25, Francini discloses: wherein the flows of the flow bundle share a common egress node and a common path to the common egress node. ([0042] It is also assumed that each GD traffic flow is routed to its network output based on a network-wide (e.g., as opposed to flow-specific) routing policy such that, after two GD traffic flows with a common network output merge into a common GD flow aggregate at a network node, the two GD traffic flows keep the same common path until reaching that common network output.)

Regarding claim 26, Francini discloses: wherein the flows of the flow bundle share a common latency requirement. ([0042] after two GD traffic flows with a common network output merge into a common GD (guaranteed-delay) flow aggregate at a network node, the two GD traffic flows keep the same common path until reaching that common network output. [0029] For example, in packet network 100, the traffic flows 130 may be GD traffic flows for which delay guarantees are to be enforced. It will be appreciated that a flow of GD traffic may be referred to as a GD traffic flow.)

Regarding claim 27, Francini discloses: wherein the flows of the flow bundle do not share a common latency requirement.
([0059] In one example, when a new flow is about to be added to the network, the centralized controller adds the requested rate of the new flow to the requested rate accumulators for all of the links that the new flow will traverse and, if any of the requested rate accumulators exceeds the current service rate of the respective GD traffic queue 320-G, the centralized controller increases the service rate by the configured quantum amount.)

Regarding claim 28, Francini discloses: wherein the flows of the flow bundle share a common maximum packet size. ([0073] In the packet network 100, based on use of scalable deterministic services for supporting communication of GD traffic, each GD traffic flow in the network may be associated with an IDB which quantifies the worst-case delay that packets of the GD traffic flow may ever experience within the packet network 100. For example, the following IDB holds for a flow f_i of guaranteed rate r_i and maximum packet size L_i that traverses M links to cross a network with N outputs (and inputs): IDB_i < (N - 2)·L_max/r_i + Σ_{k=1..M} (L_max/C_k + L_i/r_i), where L_max is the maximum size of a packet in the entire network and C_k is the capacity of link k in the network path of flow f_i.)

Regarding claim 29, Francini discloses: wherein the set of fixed-size transmission timeslots is part of a periodic service sequence in which the fixed-size transmission timeslots are assigned to bundles of flows according to respective shaping rate allocations of the bundles of flows. ([0044] In one example, communication of such CDT traffic (or other types of higher priority traffic) in addition to GD traffic may be supported using a CDT queue for queuing of the CDT traffic where the CDT traffic may be assigned a CDT rate (e.g., a shaping rate) at the scalable deterministic services scheduler that handles the GD traffic.)
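The IDB bound quoted above is straightforward to evaluate numerically. The following is an illustrative sketch only, not part of the record: the function and variable names are ours, chosen to follow the symbols defined in Francini ¶[0073].

```python
# Illustrative only: evaluates the IDB bound quoted from Francini [0073],
#   IDB_i < (N - 2) * L_max / r_i + sum_k (L_max / C_k + L_i / r_i).
# Units: sizes in bits, rates/capacities in bits per second, result in seconds.

def idb_bound(n_outputs, l_max, r_i, l_i, link_capacities):
    """Worst-case in-network delay bound for flow f_i.

    n_outputs       -- N, number of network outputs (and inputs)
    l_max           -- maximum size of any packet in the network
    r_i             -- guaranteed rate of flow f_i
    l_i             -- maximum packet size of flow f_i
    link_capacities -- capacities C_k of the M links on f_i's path
    """
    per_link = sum(l_max / c_k + l_i / r_i for c_k in link_capacities)
    return (n_outputs - 2) * l_max / r_i + per_link

# Example: 4 outputs, 1500-byte packets everywhere, a 10 Mb/s flow
# crossing two 1 Gb/s links.
print(idb_bound(4, 1500 * 8, 10e6, 1500 * 8, [1e9, 1e9]))  # ~0.0048 s
```

Note how the per-flow term L_i/r_i recurs once per link while L_max appears network-wide, which is the structural point disputed in the Response to Arguments.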
Regarding claim 30, Francini discloses: wherein a service rate of the flow bundle is set to a service rate that supports a respective latency requirement of one of the flows of the flow bundle. ([0020] Various example embodiments for supporting scalable deterministic services in packet networks may be configured to support delay guarantees for GD traffic flows based on a service rate allocation rule configured to support the delay guarantees for GD traffic flows (e.g., each GD flow aggregate at a network node has a service rate associated therewith for use in controlling transmission of packets of the GD traffic flows from the network nodes).)

Regarding claim 31, Francini discloses: wherein, based on a determination that a sum of respective throughput requirements of the respective flows of the flow bundle exceeds the service rate of the flow bundle, the service rate of the flow bundle is modified to be set to the sum of respective throughput requirements of the respective flows. ([0059] In one example, when a new flow is about to be added to the network, the centralized controller adds the requested rate of the new flow to the requested rate accumulators for all of the links that the new flow will traverse and, if any of the requested rate accumulators exceeds the current service rate of the respective GD traffic queue 320-G, the centralized controller increases the service rate by the configured quantum amount. In one example, when an existing flow is removed from the network, the centralized controller updates all of the requested rate accumulators for the links along its path and, if any requested rate accumulator drops below the difference between the current service rate and the allocation quantum, the centralized controller reduces the service rate by the configured quantum amount.)
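The rate-maintenance behavior cited from Francini ¶[0059] (grow a queue's service rate by a fixed quantum when accumulated requests exceed it; shrink it when a full quantum of slack opens up) can be sketched as follows. This is an illustration only; the class name, method names, and QUANTUM value are assumptions of ours, not taken from Francini.

```python
# Illustrative sketch of the controller-side bookkeeping described in
# Francini [0059]. All names and the quantum value are assumed for illustration.

QUANTUM = 1e6  # configured allocation quantum, bits/s (assumed value)

class GdQueueRates:
    """Tracks requested vs. allocated service rate for one GD traffic queue."""

    def __init__(self):
        self.requested = 0.0  # sum of requested rates of flows on this queue
        self.allocated = 0.0  # service rate currently allocated at the scheduler

    def add_flow(self, rate):
        # Adding a flow raises the accumulator; if it exceeds the current
        # service rate, the rate grows in quantum steps until it covers it.
        self.requested += rate
        while self.requested > self.allocated:
            self.allocated += QUANTUM

    def remove_flow(self, rate):
        # Removing a flow lowers the accumulator; whenever the accumulator
        # drops below (allocated - quantum), the rate shrinks by one quantum.
        self.requested -= rate
        while self.allocated - self.requested > QUANTUM:
            self.allocated -= QUANTUM
```

For example, adding a 2.5 Mb/s flow to an empty queue grows the allocation to 3 quanta; removing 2.0 Mb/s of it lets the allocation decay back to 1 quantum. The quantized hysteresis is what the examiner maps to the claimed service-rate modification.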
Regarding claim 32, Francini discloses: wherein, to serve the flow bundle using the set of fixed-size transmission timeslots, the instructions, when executed by the at least one processor, cause the apparatus at least to: serve at least one of the flows of the flow bundle using at least one of the fixed-size transmission timeslots. ([0066] It is noted that a PIFO queue may be implemented as a calendar queue (e.g., using an array of slots associated with timestamp ranges such that an arriving packet is queued to the slot corresponding to the range of its timestamp and packets queued in the same slot are served in FIFO order).)

Regarding claim 34, Francini discloses: wherein the flows of the flow bundle are served based on a set of credit counters associated with the respective flows of the flow bundle. ([0075] The IDB and EDB delay bounds computed for traffic flow 130-1 in the packet network 100 (denoted as IDB-SDS and EDB-SDS, respectively) are compared against the delay bound for a combination of the Credit-Based Shaper with per-class queues of IEEE 802.1Qav with the Interleaved Regulator with per-switch-input queues of IEEE 802.1Qcr (the combination is denoted as CBS+IR, 40 Mbps).)

Regarding claim 36, Francini discloses: An apparatus, comprising: ([0085] the computer 800) at least one processor; ([0085] the computer 800 may include at least one processor) and at least one memory storing instructions which, when executed by the at least one processor, cause the apparatus at least to: ([0085] the computer 800 may include at least one processor and at least one memory including instructions wherein the instructions are configured to, when executed by the at least one processor, cause the apparatus to perform various functions presented herein.)
aggregate a set of flows into a flow bundle based on respective maximum packet sizes of the respective flows; ([0038] The scalable deterministic services queuing arrangement may be configured such that, in front of each network link, GD traffic flows of the GD class are grouped into a set of GD flow aggregates associated with distinct network outputs.)

and serve the flow bundle using a set of fixed-size transmission timeslots; ([0066] It is noted that a PIFO queue may be implemented as a calendar queue (e.g., using an array of slots associated with timestamp ranges such that an arriving packet is queued to the slot corresponding to the range of its timestamp and packets queued in the same slot are served in FIFO order).)

wherein the flows are continuous guaranteed-latency (CGL) flows in a software-defined guaranteed-latency networking (SD-GLN) context (¶¶[0059]–[0061], FIG. 6, FIG. 8: a centralized/network controller that determines and sends service rates for GD flow aggregates; [0040] It is noted that various control methods may be used to ensure that the scalable deterministic services scheduler knows all of the network outputs (e.g., association of the GD flow aggregates with the network outputs may be realized by the scalable deterministic services scheduler of the network based on configuration of each node of the network to expose a configuration interface to enable the automatic discovery of the network outputs or explicit provisioning of the network outputs (e.g., via an SDN controller or other suitable controller))),

wherein a relationship exists between a service rate of a CGL flow and a latency requirement that the SD-GLN context supports for that CGL flow. (¶¶[0073]-[0075]: closed-form delay bounds (IDB/EDB) that explicitly incorporate allocated service rates and packet sizes.
[equation image not reproduced])

Regarding claim 37, Francini discloses: wherein, for at least one of the CGL flows, the respective CGL flow is a latency-driven flow having a respective latency requirement that imposes allocation of a respective service rate greater than a respective throughput requirement of the latency-driven flow. ([0060] In one example, accommodation of the new GD traffic flow may be supported (and, thus, the rejection of the new GD traffic flow can be avoided) by allowing the GD queue to increase its service rate only by the amount that is sufficient to accommodate the new flow, if such amount is still available at the link.)

Regarding claim 38, Francini discloses: wherein a service rate of the flow bundle is set to a service rate needed to support a respective latency requirement of one of the CGL flows of the flow bundle. ([0059] In one example, a centralized controller tracks, for each of the GD traffic queues 320-G in the packet network 100, the sum of the requested rates of all of the flows that use the respective GD traffic queue 320-G and the service rate currently allocated at the scalable deterministic services scheduler 300 to the respective GD traffic queue 320-G. It will be appreciated that the latter should not be smaller than the former.)

Regarding claim 39, Francini discloses: wherein the CGL flows aggregated into the flow bundle do not share a common latency requirement, wherein a latency requirement selected for calculation of a service rate of the flow bundle is a lowest latency requirement of any of the CGL flows in the set of CGL flows.
([0059] In one example, when a new flow is about to be added to the network, the centralized controller adds the requested rate of the new flow to the requested rate accumulators for all of the links that the new flow will traverse and, if any of the requested rate accumulators exceeds the current service rate of the respective GD traffic queue 320-G, the centralized controller increases the service rate by the configured quantum amount.)

Regarding claim 40, Francini discloses: wherein, based on a determination that a sum of respective throughput requirements of the respective CGL flows of the flow bundle exceeds the service rate of the flow bundle, the service rate of the flow bundle is modified to be set to the sum of respective throughput requirements of the respective CGL flows. ([0059] In one example, when a new flow is about to be added to the network, the centralized controller adds the requested rate of the new flow to the requested rate accumulators for all of the links that the new flow will traverse and, if any of the requested rate accumulators exceeds the current service rate of the respective GD traffic queue 320-G, the centralized controller increases the service rate by the configured quantum amount. In one example, when an existing flow is removed from the network, the centralized controller updates all of the requested rate accumulators for the links along its path and, if any requested rate accumulator drops below the difference between the current service rate and the allocation quantum, the centralized controller reduces the service rate by the configured quantum amount.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over Francini et al. (US 20210250301; hereinafter "Francini") in view of Finn et al. (US 20200296007; hereinafter "Finn").

Regarding claim 33, Francini does not disclose: wherein the flows of the flow bundle are served in a round-robin order. However, Finn discloses: wherein the flows of the flow bundle are served in a round-robin order. ([0052] Policies can also designate a specific route by which the packet or flow traverses the network.
In addition, policies can classify the packet or flow so that certain kinds of traffic receive differentiated service when used in combination with queuing techniques such as those based on priority, fairness, weighted fairness, token bucket, random early detection, round robin, among others, or to enable the network analytics system 100 to perform certain operations on the servers and/or flows.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Francini with the teachings of Finn, to include that the flows of the flow bundle are served in a round-robin order. The motivation would have been to enrich flow data to analyze network security, availability, and compliance. (Finn ¶[0013])

Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Francini et al. (US 20210250301; hereinafter "Francini") in view of Verbree et al. (US 20190166062; hereinafter "Verbree").

Regarding claim 35, Francini does not disclose: wherein only ones of the flows of the flow bundle with non-negative credits are served. Verbree discloses: wherein only ones of the flows of the flow bundle with non-negative credits are served. ([0078] For the credit-based shaper algorithm, the transmit-allowed signal may be enabled while the credit value is zero or positive, and disabled when the credit value is negative.) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Francini with the teachings of Verbree, to include that only ones of the flows of the flow bundle with non-negative credits are served. The motivation would have been to transmit time-sensitive data from an end point, and more particularly toward deterministic transmission of the time-sensitive data. (Verbree ¶[0002])

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHU PHAM whose telephone number is (703)756-4511. The examiner can normally be reached Monday - Friday: 7:30 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jae Y. Lee can be reached at (571) 270-3936. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NHU PHAM/Examiner, Art Unit 2479 /JAE Y LEE/Supervisory Patent Examiner, Art Unit 2479

Prosecution Timeline

Jul 12, 2022
Application Filed
Mar 04, 2025
Non-Final Rejection — §102, §103
Jun 10, 2025
Response Filed
Aug 21, 2025
Non-Final Rejection — §102, §103
Nov 25, 2025
Response Filed
Jan 07, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574832: METHOD AND SYSTEM FOR PROVIDING BACK-OFF TIMER TO UES DURING NETWORK SLICE ADMISSION CONTROL
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574161: METHOD AND APPARATUS FOR DETERMINING TIME DENSITY RELATED TO PT-RS IN NR V2X
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12550033: COMMUNICATION METHOD, APPARATUS, AND SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12519698: METAVERSE END-TO-END (E2E) NETWORK ARCHITECTURE
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12513622: POWER CONTROL METHOD AND DEVICE
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 90%
With Interview (+12.5%): 99%
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
