Prosecution Insights
Last updated: April 19, 2026
Application No. 18/569,555

MULTI-STAGE PACKET PROCESSING PIPELINE FOR A SINGLE TUNNEL INTERFACE

Non-Final OA: §101, §102, §103
Filed: Dec 12, 2023
Examiner: ULYSSE, JAEL M
Art Unit: 2477
Tech Center: 2400 (Computer Networks)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 83%, above average (541 granted / 649 resolved; +25.4% vs TC avg)
Interview Lift: +5.0%, minimal (based on resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 29 applications currently pending
Career History: 678 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 649 resolved cases.
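The headline figures above can be cross-checked with a few lines of arithmetic. This is a sketch against the numbers as reported; the per-statute Tech Center baseline is inferred from the stated deltas rather than taken from any published USPTO dataset.

```python
# Cross-check of the examiner statistics reported above.
granted, resolved = 541, 649

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # reported as 83%

# Rejection rate per statute and its stated delta vs the TC average.
statutes = {
    "101": (3.4, -36.6),
    "102": (25.6, -14.4),
    "103": (43.6, +3.6),
    "112": (13.9, -26.1),
}
for name, (rate, delta) in statutes.items():
    implied_baseline = rate - delta  # what the TC average must have been
    print(f"Sec. {name}: {rate}% vs implied TC avg {implied_baseline:.1f}%")

# Interview lift: 88% predicted with interview vs 83% baseline.
print(f"Interview lift: {88 - 83:+.1f}%")
```

Notably, every statute implies the same 40.0% baseline, which is consistent with the single "Tech Center average estimate" line that the chart legend describes.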

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

2. This Office Action is in response to the Original Filing of 12/12/2023.

3. This Office Action is made Non-Final.

4. Claims 1-20 are pending.

Information Disclosure Statement

5. The information disclosure statements (IDS) submitted on 12/12/23, 9/17/24, and 1/14/26 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claim 20 is rejected under 35 U.S.C. 101 because it recites "A computer-readable storage media that stores computer-executable instructions…," which is non-patent-eligible subject matter: applicant claims software on a tangible medium, and it is not clear whether the medium is a statutory medium. The specification does not make explicit that the computer-readable medium is not a signal or carrier wave, nor does it recite a non-transitory computer-readable medium. In other words, the claim recites a computer program comprising instructions to be executed, not recited in combination with a non-transitory medium, and is therefore directed to a program per se. The claim is accordingly directed to non-statutory, non-patentable subject matter. It is suggested that the claim be amended to incorporate the phrase "non-transitory" before "computer-readable."
For example: "A non-transitory computer-readable storage media…"

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claims 1-12 and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tatar et al., US 2007/0195773, hereafter Tatar.

As to Claim 1 (Currently Amended): Tatar discloses a method, implemented at a computer system [Computer system-1010, Section 0298: The foregoing describes embodiments including components contained within other components; various elements shown as components of computer system 1010] that includes a processor [i.e. Packet Processor-284/230/235 or Control Circuit-190], for applying a multi-stage packet processing pipeline to a single tunnel interface [Output Interface/CPU Interface or Path/Route], the method comprising [Figs. 1-2, 4B, Sections 0004, 0041, 0183: A packet routing device consists of an output interface. In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing related to tunneling multicast transmission]: identifying a single tunnel interface [Output Interface/CPU Interface or Path/Route, Section 0004: A packet routing device consists of an output interface] that is associated with an operating environment of the computer system [Computer system-1010], identifying a plurality of packet processing stages [Figs. 1, 5, Sections 0002, 0041, 0079, 0080, 0183: In a data communication network, routing device receive messages and forward them on to one output interface. In embodiments, both the receive and transmit path of packets processed and routed in a multi-stage pipeline operate on several packets. The processing stages of a pipe included in head processing unit has four pipelines, each with 13 stages. Each pipe includes the stages summarized below and these stages, executed in sequence on a given packet of the receive and transmit data path. Stages support pipeline processing], and for each packet processing stage [Section 0080: Each pipe includes stages executes in sequence on packets] identifying at least one rule [i.e. Criteria, Rule, or Protocol] specifying packets to which the packet processing stage applies [Figs. 1, 4A-B, 5, Sections 0042, 0088, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Content stages perform searches on differing rules for a packet. Perform filtering based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria], and identifying logic [i.e. Policy/logic] configured to process each packet received by the packet processing stage [Figs. 1, 4A-B, 5-6, Sections 0076, 0105, 0130, 0161: Each pipe includes a plurality of multiplexers with a multiplexer control logic. The microsequencer logic serves as a programmable machine for header/packet processing; and the architecture of the microsequencer is a three-stage pipelined-execution flow. Policy-based routing is implemented. As discussed above, the present invention includes multiple stages that use general purpose microsequencer logic for header/packet processing], and composing the plurality of packet processing stages into a packet processing pipeline, including [Figs. 1, 4A-B, 5, Sections 0041, 0046, 0183: In embodiments, packets can be processed and routed in a multi-stage pipeline and facilitate transmission to the corresponding destination port. Wherein each pipe can perform the series of operations on packet heads. Stages support pipeline processing related to tunneling multicast transmission]: identifying a union of rules [i.e. Criteria, Rule, or Protocol] specifying packets to which the plurality of packet processing stages apply [Figs. 1, 4A-B, 5, Sections 0042, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Perform filtering based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria]; registering the union of rules with the single tunnel interface [Output Interface/CPU Interface or Path/Route]; and arranging the plurality of packet processing stages into a linear pipeline [Figs. 1, 5, Sections 0073, 0079, 0088, 0130: Once HPU (head processing unit) has completed analysis of the packet heads, the packet heads are forwarded; HPU is also coupled to CPU interface in which data associated with registers via CPU interface. Each incoming packet head is associated with a PHB (packet head buffer) that contains the packet head as well as other information written into the PHB by the different stages of the pipe, stages of pipes can include a packet associated information register associated with each packet and as each stage completes its operation on a packet, the stage can send a signal to the pipe. Content stages perform searches on differing rules for a packet. Each profile describes a set of data registers associated with a packet for processing], including connecting an upstream [i.e. for output/outbound, exit or egress] connector of an initial packet processing stage to the single tunnel interface [Sections 0006, 0007, 0051, 0185: Outbound packets transmitted out of the routing device and transmitted to network on output interface (i.e. tunnel interface), in other words the transmit or egress path (i.e. tunnel) from the routing device to the network. Control circuits provided to perform such tasks as initialization as well as process packets. Egress Packet Processor (i.e. upstream processor) working with traffic module and assembling multicast packets for transmission. Functions of BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers from all of the pipes and sending data to the traffic management modules], and for each pair of adjacent packet processing stages [Sections 0041, 0095: In embodiments of the present invention, in both the receive and transmit path, packets can be processed and routed in a multi-stage pipeline. The stages have a HPU (head processing unit) to process packet heads received/linked to Ingress Packet Processor], connecting a respective downstream [i.e. for input or ingress] connector of an upstream packet processing stage in the pair to a respective upstream [i.e. for output/outbound, exit or egress] connector of a downstream packet processing stage in the pair [Sections 0046, 0051, 0185, 0246: Ingress Packet Processor (i.e. downstream processor) due to downstream backpressure can apply a backpressure to bridge to packets downstream. Egress Packet Processor (i.e. upstream processor) working with traffic module and assembling multicast packets for transmission. Functions of BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers (i.e. packets) from all of the pipes and sending data to the traffic management modules. The two Packet Processor modules, Ingress Packet Processor and Egress Packet Processor, contain modules, pipes, and stages. Note: Per Section 0042: In general, a series of data packets flow transmitted between two points (i.e. downstream to upstream) in a network during a session].

As to Claim 2 (Original): Tatar discloses the method of claim 1, wherein the method also comprises consuming a first packet received by the initial packet processing stage from the tunnel interface at the initial packet processing stage [Sections 0005, 0007, 0082, 0099: Control element is configured to receive inbound packets entering the routing device from network and process the packets. Control circuits perform such tasks as configuration, initialization, and process packets. Initial Microprocessor (IMP) and Pre-Processor (PreP) Stages are capable of any general purpose activity on a packet head. A packet header may be recycled in order to perform more processing as with, for example, a tunneled packet].

As to Claim 3 (Original): Tatar discloses the method of claim 1, wherein the method also comprises outputting a second packet received by the initial packet processing stage from the tunnel interface downstream on the packet processing pipeline [Sections 0009, 0041, 0046, 0246: Selection of one or more output interfaces to which to forward inbound packets. In embodiments of the present invention, in both the receive and transmit path, packets (i.e. first packet, second packet, etc.) can be processed and routed in a multi-stage pipeline. Ingress Packet Processor (i.e. downstream processor) due to downstream backpressure can apply a backpressure to bridge to packets downstream. The two Packet Processor modules, Ingress Packet Processor and Egress Packet Processor, contain modules, pipes, and stages].

As to Claim 4
(Currently Amended): Tatar discloses the method of claim 2, further comprising: receiving the first packet at a subsequent packet processing stage [Sections 0041, 0183: In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing]; and applying the logic [i.e. Policy/logic] of the subsequent packet processing stage to consume the first packet by the subsequent packet processing stage, or output the first packet downstream on the packet processing pipeline [Figs. 1, 4A-B, 5-6, Sections 0076, 0105, 0161: Each pipe includes a plurality of multiplexers with a multiplexer control logic. The microsequencer logic serves as a programmable machine for header/packet processing; and the architecture of the microsequencer is a three-stage pipelined-execution flow. As discussed above, the present invention includes multiple stages that use general purpose microsequencer logic for header/packet processing].

As to Claim 5 (Original): Tatar discloses the method of claim 1, wherein identifying the plurality of packet processing stages comprises [Sections 0041, 0183: In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing] identifying the plurality of packet processing stages from a single application [Figs. 1, 5, Sections 0079-0080, 0124: The processing stages of a pipe included in head processing unit has pipelines, each with 13 stages. Each pipe includes the stages summarized below and these stages, executed in sequence on a given packet of the receive and transmit data path. IMP and PreP stages for example defined through use of software (i.e. application)].

As to Claim 6 (Original): Tatar discloses the method of claim 1, wherein identifying the plurality of packet processing stages comprises [Sections 0041, 0183: In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing] identifying the plurality of packet processing stages from a plurality of applications [Figs. 1, 5, Sections 0079-0080, 0130: The processing stages of a pipe included in head processing unit has pipelines, each with 13 stages. Each pipe includes the stages summarized below and these stages, executed in sequence on a given packet of the receive and transmit data path. The stages can be programmed with as many different profiles, programmed by software(s)].

As to Claim 7 (Original): Tatar discloses the method of claim 1, wherein each rule [i.e. Criteria, Rule, or Protocol] identifies at least one network address [Address generator module-1020] range, and wherein the union of rules identifies a union of a plurality of ranges of network addresses [Sections 0081, 0102, 0275: As the packet head arrives, various packet checks and classifications are performed, including protocol checking, and IP/MPLS address fields are made available to the subsequent pipe stages. The header is forwarded with encapsulation size and parameters and relevant information: IPv4 source and destination addresses, protocol field, start address of L3 data in the PHB, and L4 parameters. The multicast processor can maintain a table allowing the multicast processor to lookup addresses based on unique association identifiers].

As to Claim 8 (Original): Tatar discloses the method of claim 1, wherein the logic consumes a packet by performing at least one of: transforming the packet; sending the packet towards a software component; sending the packet towards a physical network interface; or discarding the packet [Sections 0002, 0047, 0064, 0161, 0179: In data communication network, routing device receive messages and forward them on to one output interface. Ingress Packet Processor perform early discard packet dropping for queue depth management. The Ingress Packet Processor will look at bit and can selectively drop packets. Multiple stages use general purpose microsequencer logic for header/packet processing. More specifically, perform processing including: MAC layer rewrite (i.e. transforming) and IP header modification/updating (i.e. transforming) according to programming in a profile register for each type of packet header].

As to Claim 9 (Original): Tatar discloses the method of claim 1, wherein identifying the plurality of packet processing stages comprises [Sections 0041, 0183: In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing] selecting at least one of the plurality of packet processing stages based on at least one of licensing status, geo-location, or a computer system attribute [Sections 0086, 0088, 0102: Mid-Processor Microsequencer (MiP) is capable of performing any general purpose activity on the head; perform tasks such as selecting an appropriate profile (i.e. attributes) for stages to be executed on the head. Content stages perform searches on differing rules (i.e. attribute) for a packet. The header is forwarded with encapsulation size and parameters (i.e. attributes) and the parameters and check results are passed on to other pipeline stages].

As to Claim 10 (Original): Tatar discloses the method of claim 1, wherein composing the plurality of packet processing stages into the packet processing pipeline [Sections 0041, 0183: In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing] comprises at least one of determining an ordering of stages in the packet processing pipeline, or resolving a conflict [Sections 0183, 0222: Packet level ordering requires Packet Processor keep packet arrival order in sync with packet transmission order; Header-tail ordering addresses issues related to synchronization of a header with its respective tail; and Gather stage will also provide a sequence indication and a time stamp. Packet segmentor module can detect error conditions (i.e. conflicts) and cause an interrupt].

As to Claim 11 (Original): Tatar discloses the method of claim 1, wherein a particular packet processing stage comprises a packet buffer, and wherein the logic of the particular packet processing stage operates on a plurality of packets stored in the packet buffer [Sections 0041, 0047, 0076: In embodiments, packets processed and routed in a multi-stage pipeline. Ingress Packet Processor provides Ingress Traffic Management module with the heads and tails of packets; Ingress Traffic Management module perform packet buffering. Each head buffer provide packet headers to any pipe, a number of pipes are connected to all of the head buffers; a multiplexer control logic controls the head buffer and receive signal from Buffer Manager Interface module].

As to Claim 12 (Original): Tatar discloses the method of claim 11, wherein the logic of the particular packet processing stage performs at least one of: injecting a third packet into the packet buffer; removing a fourth packet from the packet buffer; or modifying a fifth packet within the packet buffer [Sections 0041, 0058, 0205, 0291: Packets can be processed and routed in a multi-stage pipeline; each received packet can be modified to contain new routing information as well as additional header data, then each packet is buffered and enqueued for transmission. Buffer memory can be divided into a plurality of buffers, up to 64 buffers. In one embodiment, there are 32 entries (i.e. 32 packets) in the low queue HTL, of which 15 are assigned to free queues on empty buffers. Packets are transferred from PLIM into Buffer Memory; buffers are assigned based on a packet's port number, and the packet will be written in that buffer].
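Claim 7, addressed above, narrows each rule to at least one network-address range, with the union of rules being a union of those ranges. As a rough model of that limitation (the CIDR blocks below are invented examples, not taken from the application or from Tatar), the Python standard library's `ipaddress.collapse_addresses` computes exactly this kind of merged union:

```python
import ipaddress

# Hypothetical per-stage rules, each identifying network-address ranges.
per_stage_rules = [
    ["10.0.0.0/24", "10.0.1.0/24"],     # stage A
    ["10.0.1.0/24", "192.168.0.0/16"],  # stage B (duplicates one range)
]

networks = [ipaddress.ip_network(cidr)
            for rules in per_stage_rules for cidr in rules]

# collapse_addresses de-duplicates and merges adjacent/contained ranges,
# yielding the union that would be registered with the tunnel interface.
union = list(ipaddress.collapse_addresses(networks))
print([str(n) for n in union])  # the two adjacent /24s merge into one /23
```

Here the two adjacent /24 blocks collapse into 10.0.0.0/23, and the duplicated range appears only once, so the result is `['10.0.0.0/23', '192.168.0.0/16']`.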
As to Claim 15 (Original): Tatar discloses the method of claim 1, wherein, for each pair of adjacent packet processing stages [Sections 0041, 0095: In embodiments of the present invention, in both the receive and transmit path, packets can be processed and routed in a multi-stage pipeline. The stages have a HPU (head processing unit) to process packet heads received/linked to Ingress Packet Processor], connecting the respective downstream connector of the upstream packet processing stage in the pair to the respective upstream connector of the downstream packet processing stage in the pair comprises establishing a respective socket between the respective downstream connector and the respective upstream connector [Sections 0046, 0051, 0185, 0246: Ingress Packet Processor (i.e. downstream processor) due to downstream backpressure can apply a backpressure to bridge to packets downstream. Egress Packet Processor (i.e. upstream processor) working with traffic module and assembling multicast packets for transmission. Functions of BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers (i.e. packets) from all of the pipes and sending data to the traffic management modules. The two Packet Processor modules, Ingress Packet Processor and Egress Packet Processor, contain modules, pipes, and stages. Note: Per Section 0042: In general, a series of data packets flow transmitted between two points (i.e. downstream to upstream) in a network during a session].

As to Claim 16 (New): Tatar discloses a computer system [Computer system-1010, Section 0298: The foregoing describes embodiments including components contained within other components; various elements shown as components of computer system 1010] for applying a multi-stage packet processing pipeline to a single tunnel interface [Output Interface/CPU Interface or Path/Route], comprising [Figs. 1-2, 4B, Sections 0004, 0041, 0183: A packet routing device consists of an output interface. In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing related to tunneling multicast transmission]: a processor [i.e. Packet Processor-284/230/235 or Control Circuit-190]; and a computer storage medium [i.e. Memory] that stores computer-executable instructions that are executable by the processor to at least [Sections 0007, 0301: Memory-160 link to Control circuits-190 are provided to perform such tasks as configuration, initialization, statistics collection, accounting functions, as well as to process packets. The above-discussed embodiments can be implemented by software modules including scripts or other executable files; software modules stored on a computer-readable storage medium include a memory; the modules can be stored within a computer system memory to configure the computer system to perform the functions of the module]: identify a single tunnel interface [Output Interface/CPU Interface or Path/Route, Section 0004: A packet routing device consists of an output interface] that is associated with an operating environment of the computer system [Computer system-1010]; identify a plurality of packet processing stages [Figs. 1, 5, Sections 0002, 0041, 0079, 0080, 0183: In a data communication network, routing device receive messages and forward them on to one output interface. In embodiments, both the receive and transmit path of packets processed and routed in a multi-stage pipeline operate on several packets. The processing stages of a pipe included in head processing unit has four pipelines, each with 13 stages. Each pipe includes the stages summarized below and these stages, executed in sequence on a given packet of the receive and transmit data path. Stages support pipeline processing], and for each packet processing stage [Section 0080: Each pipe includes stages executes in sequence on packets] identify at least one rule [i.e. Criteria, Rule, or Protocol] specifying packets to which the packet processing stage applies [Figs. 1, 4A-B, 5, Sections 0042, 0088, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Content stages perform searches on differing rules for a packet. Perform filtering based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria], and identify logic [i.e. Policy/logic] configured to process each packet received by the packet processing stage [Figs. 1, 4A-B, 5-6, Sections 0076, 0105, 0130, 0161: Each pipe includes a plurality of multiplexers with a multiplexer control logic. The microsequencer logic serves as a programmable machine for header/packet processing; and the architecture of the microsequencer is a three-stage pipelined-execution flow. Policy-based routing is implemented. As discussed above, the present invention includes multiple stages that use general purpose microsequencer logic for header/packet processing]; and compose the plurality of packet processing stages into a packet processing pipeline, including [Figs. 1, 4A-B, 5, Sections 0041, 0046, 0183: In embodiments, packets can be processed and routed in a multi-stage pipeline and facilitate transmission to the corresponding destination port. Wherein each pipe can perform the series of operations on packet heads. Stages support pipeline processing related to tunneling multicast transmission]: identifying a union of rules [i.e. Criteria, Rule, or Protocol] specifying packets to which the plurality of packet processing stages apply [Figs. 1, 4A-B, 5, Sections 0042, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Perform filtering based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria]; registering the union of rules with the single tunnel interface [Output Interface/CPU Interface or Path/Route]; and arranging the plurality of packet processing stages into a linear pipeline [Figs. 1, 5, Sections 0073, 0079, 0088, 0130: Once HPU (head processing unit) has completed analysis of the packet heads, the packet heads are forwarded; HPU is also coupled to CPU interface in which data associated with registers via CPU interface. Each incoming packet head is associated with a PHB (packet head buffer) that contains the packet head as well as other information written into the PHB by the different stages of the pipe, stages of pipes can include a packet associated information register associated with each packet and as each stage completes its operation on a packet, the stage can send a signal to the pipe. Content stages perform searches on differing rules for a packet. Each profile describes a set of data registers associated with a packet for processing], including: connecting an upstream [i.e. for output/outbound, exit or egress] connector of an initial packet processing stage to the single tunnel interface [Sections 0006, 0007, 0051, 0185: Outbound packets transmitted out of the routing device and transmitted to network on output interface (i.e. tunnel interface), in other words the transmit or egress path (i.e. tunnel) from the routing device to the network. Control circuits provided to perform such tasks as initialization as well as process packets. Egress Packet Processor (i.e. upstream processor) working with traffic module and assembling multicast packets for transmission. Functions of BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers from all of the pipes and sending data to the traffic management modules], and for each pair of adjacent packet processing stages [Sections 0041, 0095: In embodiments of the present invention, in both the receive and transmit path, packets can be processed and routed in a multi-stage pipeline. The stages have a HPU (head processing unit) to process packet heads received/linked to Ingress Packet Processor], connecting a respective downstream [i.e. for input or ingress] connector of an upstream packet processing stage in the pair to a respective upstream [i.e. for output/outbound, exit or egress] connector of a downstream packet processing stage in the pair [Sections 0046, 0051, 0185, 0246: Ingress Packet Processor (i.e. downstream processor) due to downstream backpressure can apply a backpressure to bridge to packets downstream. Egress Packet Processor (i.e. upstream processor) working with traffic module and assembling multicast packets for transmission. Functions of BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers (i.e. packets) from all of the pipes and sending data to the traffic management modules. The two Packet Processor modules, Ingress Packet Processor and Egress Packet Processor, contain modules, pipes, and stages. Note: Per Section 0042: In general, a series of data packets flow transmitted between two points (i.e. downstream to upstream) in a network during a session].

As to Claim 17 (New): The computer system of claim 16, the computer-executable instructions also executable by the processor to consume a first packet received by the initial packet processing stage from the tunnel interface at the initial packet processing stage [See the Claim 2 rejection; because both claims have similar subject matter, a similar rejection applies herein].
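Read as an algorithm, claims 1 and 16 recite the same composition procedure in method and system form: gather stages, each with rules and per-packet logic; take the union of the stages' rules; register that union with the single tunnel interface; and arrange the stages into a linear pipeline in which each stage either consumes a packet or passes it downstream. The sketch below is purely illustrative; the names (`Stage`, `compose`, `run`) and the example stages are invented here and appear in neither the application nor Tatar.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Set

Packet = dict  # stand-in for a parsed packet


@dataclass
class Stage:
    name: str
    rules: Set[str]  # e.g. protocols this stage's rule matches
    logic: Callable[[Packet], Optional[Packet]]  # None => packet consumed


def compose(stages: List[Stage]):
    """Union each stage's rules (to register with the single tunnel
    interface) and arrange the stages as a linear pipeline."""
    union_of_rules = set().union(*(s.rules for s in stages))
    return union_of_rules, list(stages)  # linear: stage i feeds stage i+1


def run(pipeline: List[Stage], pkt: Packet) -> Optional[Packet]:
    for stage in pipeline:  # upstream to downstream
        pkt = stage.logic(pkt)
        if pkt is None:  # the stage consumed the packet
            return None
    return pkt


# Two invented stages: one transforms packets, one conditionally drops them.
decap = Stage("decap", {"gre", "ipip"}, lambda p: {**p, "decapped": True})
filt = Stage("filter", {"tcp"}, lambda p: None if p.get("drop") else p)

rules, pipe = compose([decap, filt])
print(sorted(rules))             # union registered with the tunnel interface
print(run(pipe, {"proto": "gre"}))
```

On this reading, a stage's logic consuming a packet (returning `None`) corresponds to the consume limb of claims 2 and 8, while returning the packet corresponds to outputting it downstream per claim 3.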
As to Claim 18 (New) The computer system of claim 16, the computer-executable instructions also executable by the processor to output a second packet received by the initial packet processing stage from the tunnel interface downstream on the packet processing pipeline [See Claim 3 rejection because both claims have similar subject matter therefore similar rejection applies herein]. As to Claim 19 (New) The computer system of claim 16, wherein identifying the plurality of packet processing stages comprises identifying the plurality of packet processing stages from a single application [See Claim 5 rejection because both claims have similar subject matter therefore similar rejection applies herein]. As to Claim 20 (New) Tatar discloses a computer-readable storage media [i.e. Memory] that stores computer-executable instructions that are executable by a processor [i.e. Packet Processor-284/230/235 or Control Circuit-190] to apply a multi-stage packet processing pipeline to a single tunnel interface [Output Interface/CPU Interface or Path/Route], the computer-executable instructions including instructions that are executable by the processor to at least [Figs. 1-2, 4B, Sections 0004, 0007, 0041, 0173, 0301: A packet routing device consists of an output interface. Memory-160 link to Control circuits-190 are provided to perform such tasks as configuration as well as to process packets. In embodiments, packets processed and routed in a multi-stage pipeline. Stages support pipeline processing related to tunneling multicast transmission. 
Embodiments can be implemented by software modules include executable files; a computer-readable storage medium include a memory; the modules can be stored within a computer system memory to configure the computer system to perform the functions of the module]: identify a single tunnel interface [Output Interface/CPU Interface or Path/Route, Section 0004: A packet routing device consists of an output interface] that is associated with an operating environment of a computer system [Computer system-1010]; identify a plurality of packet processing stages [Figs. 1, 5, Sections 0002, 0041, 0079, 0080, 0183: In a data communication network, routing device receive messages and forward them on to one output interface. In embodiments, both the receive and transmit path of packets processed and routed in a multi-stage pipeline operate on several packets. The processing stages of a pipe included in head processing unit has four pipelines, each with 13 stages. Each pipe includes the stages summarized below and these stages, executed in sequence on a given packet of the receive and transmit data path. Stages support pipeline processing], and for each packet processing stage [Section 0080: Each pipe includes stages executes in sequence on packets]: identify at least one rule [i.e. Criteria, Rule, or Protocol] specifying packets to which the packet processing stage applies [Figs. 1, 4A-B, 5, Sections 0042, 0088, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Content stages perform searches on differing rules for a packet. Perform filtering based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria], and identify logic [i.e. Policy/logic] configured to process each packet received by the packet processing stage[Figs. 1, 4A-B, 5-6, Sections 0076, 0105, 0130, 0161: Each pipe includes a plurality of multiplexers with a multiplexer control logic. 
The microsequencer logic serves as a programmable machine for header/packet processing, and the architecture of the microsequencer is a three-stage pipelined-execution flow. Policy-based routing is implemented. As discussed above, the present invention includes multiple stages that use general-purpose microsequencer logic for header/packet processing]; and compose the plurality of packet processing stages into a packet processing pipeline, including [Figs. 1, 4A-B, 5, Sections 0041, 0046, 0183: In embodiments, packets can be processed and routed in a multi-stage pipeline to facilitate transmission to the corresponding destination port, wherein each pipe can perform the series of operations on packet heads. Stages support pipeline processing related to tunneling multicast transmission]: identifying a union of rules [i.e. Criteria, Rule, or Protocol] specifying packets to which the plurality of packet processing stages apply [Figs. 1, 4A-B, 5, Sections 0042, 0145: ACLs are used to perform data packet filtering based on certain criteria, such as interface, protocol and the like. Filtering is performed based on certain matching criteria and can be used to rate-limit traffic based on certain matching criteria]; registering the union of rules with the single tunnel interface [Output Interface/CPU Interface or Path/Route]; and arranging the plurality of packet processing stages into a linear pipeline [Figs. 1, 5, Sections 0073, 0079, 0088, 0130: Once the HPU (head processing unit) has completed analysis of the packet heads, the packet heads are forwarded; the HPU is also coupled to a CPU interface through which data associated with registers is accessed.
Each incoming packet head is associated with a PHB (packet head buffer) that contains the packet head as well as other information written into the PHB by the different stages of the pipe; stages of pipes can include a packet-associated-information register associated with each packet, and as each stage completes its operation on a packet, the stage can send a signal to the pipe. Content stages perform searches on differing rules for a packet. Each profile describes a set of data registers associated with a packet for processing], including: connecting an upstream [i.e. for output/outbound, exit or egress] connector of an initial packet processing stage to the single tunnel interface [Sections 0006, 0007, 0051, 0185: Outbound packets are transmitted out of the routing device and transmitted to the network on an output interface (i.e. tunnel interface), in other words a transmit or egress path (i.e. tunnel) from the routing device to the network. Control circuits are provided to perform such tasks as initialization as well as to process packets. The Egress Packet Processor (i.e. upstream processor) works with the traffic module, assembling multicast packets for transmission. Functions of the BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers from all of the pipes, and sending data to the traffic management modules], and for each pair of adjacent packet processing stages [Sections 0041, 0095: In embodiments of the present invention, in both the receive and transmit paths, packets can be processed and routed in a multi-stage pipeline. The stages have an HPU (head processing unit) to process packet heads received from/linked to the Ingress Packet Processor], connecting a respective downstream [i.e. for input or ingress] connector of an upstream packet processing stage in the pair to a respective upstream [i.e. for output/outbound, exit or egress] connector of a downstream packet processing stage in the pair [Sections 0046, 0051, 0185, 0246: The Ingress Packet Processor (i.e. downstream processor), due to downstream backpressure, can apply backpressure to the bridge for packets downstream. The Egress Packet Processor (i.e. upstream processor) works with the traffic module, assembling multicast packets for transmission. Functions of the BMI stage include interfacing to the Ingress and Egress Traffic Management modules, collecting headers (i.e. packets) from all of the pipes, and sending data to the traffic management modules. The two Packet Processor modules, the Ingress Packet Processor and the Egress Packet Processor, contain modules, pipes, and stages. Note: Per Section 0042, in general, a series of data packets flows between two points (i.e. downstream to upstream) in a network during a session].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

2. Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Tatar et al. (US 20070195773, hereafter Tatar) in view of Pope et al. (US 20210258284, hereafter Pope).

As to Claim 13 (Original): Tatar discloses the method of claim 1, wherein a particular packet processing stage introduces a third packet [Sections 0006, 0041, 0205: Outbound packets are transmitted out of the routing device and transmitted to the network on an output interface (i.e. tunnel interface), in other words a transmit or egress path (i.e. tunnel) from the routing device to the network. Packets can be processed and routed in a multi-stage pipeline; each received packet can be modified, then buffered and enqueued for transmission. In one embodiment, there are 32 entries (i.e. 32 packets) in the low-queue HTLs]. Although Tatar discloses multiple data packets, up to 32 packets, it does not explicitly state "and wherein the particular packet processing stage outputs the third packet upstream on the packet processing pipeline." However, Pope teaches and wherein the particular packet processing stage outputs the third packet upstream on the packet processing pipeline [Fig. 12, Sections 0004, 0283-0284, 0319: A network interface device is configured to receive a plurality of data packets and provide a processing pipeline for processing the plurality of data packets. In the pipelining of data packets, different packets may be processed, for example by a processing unit (i.e. stage; see 0101) executing a third data packet. After the operations have been executed, each of the packets, including the third packet, moves along the stages (i.e. moves upstream for egress/output) in the sequence. The network device comprises a plurality of processing stages for the data packet transmit path (i.e. output interface)]. Therefore, it would have been obvious to one skilled in the art to have combined the method of Tatar, in which multiple packets (for example, 32 different packets) can be received, processed, and routed in a multi-stage pipeline upstream for egress/output, with the teaching of Pope, in which a particular processing stage/processing unit can inject/include a specific third packet for processing and then move it along upstream. By combining the methods/systems, different numbers of packets can be injected and processed, then moved along the stages upstream to a transmit path to be forwarded to the network, without undue experimentation.

As to Claim 14 (Original): Tatar discloses the method of claim 13, further comprising: receiving the third packet at a prior packet processing stage [Sections 0006, 0041, 0205: Outbound packets are transmitted out of the routing device and transmitted to the network on an output interface (i.e. tunnel interface), in other words a transmit or egress path (i.e. tunnel) from the routing device to the network. Packets can be processed and routed in a multi-stage pipeline; each received packet can be modified, then buffered and enqueued for transmission. In one embodiment, there are 32 entries (i.e. 32 packets) in the low-queue HTLs]. Tatar does not explicitly state "and applying the logic of the prior packet processing stage to consume the third packet by the prior packet processing stage, or output the third packet upstream on the packet processing pipeline." However, Pope teaches and applying the logic of the prior packet processing stage to consume the third packet by the prior packet processing stage, or output the third packet upstream on the packet processing pipeline [Fig. 12, Sections 0283-0284, 0299, 0319: In the pipelining of data packets, different packets may be processed, for example by a processing unit (i.e. stage; see 0101) executing a third data packet. After the operations have been executed, each of the packets, including the third packet, moves along the stages (i.e. moves upstream for egress/output) in the sequence. The pipeline may comprise a plurality of packet access stages, logic stages, and map access stages. The network device comprises a plurality of processing stages for the data packet transmit path (i.e. output interface)]. Therefore, it would have been obvious to one skilled in the art to have combined the method of Tatar, in which multiple packets (for example, 32 different packets) can be received, processed, and routed in a multi-stage pipeline upstream for egress/output, with the teaching of Pope, in which a particular processing stage/processing unit can inject/include a specific third packet for processing and then move it along upstream. By combining the methods/systems, different numbers of packets can be injected and processed, then moved along the stages upstream to a transmit path to be forwarded to the network, without undue experimentation.
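The limitations addressed in the Claim 13 and 14 rejections recite a linear pipeline of stages, each carrying a rule (which packets the stage applies to) and logic (what the stage does with them), where a stage may also introduce a new packet and output it upstream toward the transmit path. A minimal sketch of that kind of composition, using hypothetical names and a deliberately simplified string-based "packet" (nothing here is drawn from the application, Tatar, or Pope), might look like:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Stage:
    # Hypothetical structure for illustration only.
    name: str
    rule: Callable[[str], bool]            # packets to which this stage applies
    logic: Callable[[str], Optional[str]]  # returns the packet to forward, or None to consume it

class Pipeline:
    """A linear composition of stages in front of a single tunnel interface."""

    def __init__(self, stages: List[Stage]):
        self.stages = stages

    def union_of_rules(self) -> Callable[[str], bool]:
        # The union of the per-stage rules is what would be registered,
        # once, with the single tunnel interface.
        return lambda pkt: any(s.rule(pkt) for s in self.stages)

    def send_upstream(self, pkt: str, start: int = 0) -> Optional[str]:
        # Walk the pipeline toward the transmit path: stages whose rule
        # does not match are skipped; a stage may consume the packet by
        # returning None from its logic.
        for stage in self.stages[start:]:
            if stage.rule(pkt):
                result = stage.logic(pkt)
                if result is None:
                    return None  # consumed by this stage
                pkt = result
        return pkt  # egresses on the tunnel interface

    def introduce(self, pkt: str, after: int) -> Optional[str]:
        # Claim-13-style behavior: a stage introduces a new packet and
        # outputs it upstream, where later stages consume or forward it.
        return self.send_upstream(pkt, start=after + 1)

# Example: a keepalive filter that consumes control packets, followed by
# a stage that applies tunnel encapsulation.
keepalive = Stage("keepalive", rule=lambda p: p == "ka", logic=lambda p: None)
encap = Stage("encap", rule=lambda p: p.startswith("data:"), logic=lambda p: "tun:" + p)
pipe = Pipeline([keepalive, encap])

print(pipe.union_of_rules()("ka"))        # True
print(pipe.send_upstream("data:x"))       # tun:data:x
print(pipe.send_upstream("ka"))           # None (consumed by the keepalive stage)
print(pipe.introduce("data:y", after=0))  # tun:data:y (injected past the keepalive stage)
```

This is only a schematic of the claimed composition; the direction terms (upstream toward egress/output) follow the Office Action's reading of the references, and the claims themselves operate on a tunnel interface within an operating environment rather than on strings.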
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Roitshtein US 20120177047. Furthermore, each additional prior art reference cited on the PTO-892 but not applied in a rejection contains a disclosed description related to the claimed subject matter, found in the Figures, description summary, and/or disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAEL M ULYSSE whose telephone number is (571)272-1228. The examiner can normally be reached Monday-Friday 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chirag G. Shah, can be reached at (571)272-3144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

March 13, 2026
/JAEL M ULYSSE/
Primary Examiner, Art Unit 2477

Prosecution Timeline

Dec 12, 2023
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604337
COMMUNICATIONS METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12604289
METHOD AND APPARATUS FOR TIMING ADVANCE
2y 5m to grant Granted Apr 14, 2026
Patent 12598550
METHOD OF TRANSMITTING AND RECEIVING DOWNLINK CONTROL CHANNEL AND APPARATUS THEREFOR
2y 5m to grant Granted Apr 07, 2026
Patent 12588007
SUPER-SLOT FORMAT FOR HALF DUPLEX (HD) FREQUENCY-DIVISION DUPLEX (FDD) (HD-FDD) IN WIRELESS COMMUNICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12588020
METHOD AND USER EQUIPMENT FOR MULTI-TRANSMISSION/RECEPTION POINT OPERATIONS
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
88%
With Interview (+5.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 649 resolved cases by this examiner. Grant probability derived from career allow rate.
