Prosecution Insights
Last updated: April 19, 2026
Application No. 17/751,398

FLEXIBLE HEADER ALTERATION IN NETWORK DEVICES

Non-Final OA — §103, §112
Filed: May 23, 2022
Examiner: FOLLANSBEE, KEITH TRAN-DANH
Art Unit: 2411
Tech Center: 2400 — Computer Networks
Assignee: Marvell Israel (M I S L) Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 64% (54 granted / 85 resolved; +5.5% vs TC avg)
Interview Lift: +18.6% (strong lift for resolved cases with an interview vs. without)
Avg Prosecution: 3y 2m (typical timeline); 45 applications currently pending
Total Applications: 130 (career history, across all art units)

Statute-Specific Performance

§101:  1.2% (-38.8% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center average shown as estimate for comparison • Based on career data from 85 resolved cases
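The headline examiner statistics above are simple arithmetic over the stated counts; a minimal sketch that reconstructs them (the Tech Center average is an inference from the stated "+5.5%" delta, not a figure given directly in the report):

```python
# Reconstructing the report's headline figures from the numbers it states.
granted, resolved = 54, 85

allow_rate = granted / resolved            # career allowance rate
print(f"{allow_rate:.1%}")                 # 63.5%, shown rounded as 64%

implied_tc_avg = allow_rate - 0.055        # from "+5.5% vs TC avg"
print(f"{implied_tc_avg:.1%}")             # roughly 58.0%
```

The "+18.6% interview lift" is the spread between with- and without-interview allowance rates; the report gives only the spread, not the two underlying rates.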

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 2 and 12 have been amended.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 2-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 2 is rejected because the claim limitation “and wherein the cycle scheme includes, during a particular cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme” fails to comply with the written description requirement. Nowhere does the specification describe this limitation. The closest description the examiner could find is [0046]: “On the other hand, if the bypass decision engine 206 determines at block 310 that statistical processing mode is not enabled for the corresponding processing thread (e.g., if the statistical processing mode is set to a logic 0), then the bypass decision engine 206 does not statistically select the packet to be diverted to the bypass path, and the method 300 proceeds to a block 311 at which the thread ID corresponding to the packet is remapped to a "do nothing" thread, in an embodiment.” What is described in [0046] is not what is claimed in claim 2. Claim 12 is rejected for similar reasons as claim 2. Claims 3-11 and 13-21 are rejected as being dependent on claims 2 and 12.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 2 is rejected as indefinite because the claim is unclear, specifically the limitation “and wherein the cycle scheme includes, during a current cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme”. The examiner is unsure what the preceding cycle of the cycle scheme is when the cycle scheme is defined only in terms of a current cycle and a preceding cycle, and could not find in the specification what the preceding cycle could be. Claim 12 is rejected for similar reasons. Claims 3-11 and 13-21 are rejected as being dependent on claims 2 and 12.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Levy et al. (US20150172187) in view of Kopelman et al. (US7424019).

Regarding claim 2, Levy teaches A method for processing packets in a network device ([0011] “FIG. 2 is a block diagram of an example network device configured to combine header information corresponding to multiple packets, according to an embodiment”), the method comprising: receiving, at a packet processor of the network device, packets received by the network device from one or more network links (Fig. 2 “220-1, 220-2”, [0042] “the network device 200 receives a first packet 220-1 via the port 104 b.
Subsequently, the network device 200 receives a second packet 220-3 via the port 104 b”); determining, by the packet processor, one or more egress interfaces via which respective ones of the packets are to be transmitted by the network device ([0021] “In an embodiment, the network device 100 is configured to receive packets via ingress ports 104, to determine respective egress ports 104 via which the packets are to be transmitted, and to transmit the packets via the determined ports 104”, [0039] “The network device 200 is similar to the network device 100 of FIG. 1”); performing, by a header alteration engine of the network device, modification of packet headers of the packets, including: distributing the packet headers among a plurality of header alteration processors for parallel processing of the packet headers ([0050-51] “The distributor 210 provides the combined packet descriptor 232 a, as a single data unit, to a PPN 220 … In response to determining that the packet descriptor 232 a is a combined packet descriptor, the PPN 220 decomposes the combined packet descriptor 232 a to extract the first packet descriptor 230-1 a and the second packet descriptor 230-2 a, in an embodiment” … In an embodiment, processing the first packet descriptor 230-1 a includes modifying the packet descriptor 230-1 a, for example to change one or more header bits extracted from the header of the first packet 220-1 and included in the first packet descriptor 230-1 a, to add information (e.g., a forwarding decision) to the packet descriptor 230-1 a, etc., “, [0044] “the packet processor 217 includes a plurality packet processing nodes (PPNs) 220 configured to concurrently, in parallel, perform processing of respective packet descriptors to process packets associated with the packet descriptors”, (Examiner’s Note: Header and descriptor are equivalent; to modify the packet descriptor, the processor is modifying the descriptor within the header, as can be further seen in [0037] “Similarly,
processing of the second packet 120-2 includes modifying the packet descriptor 130-2, for example to modify one or more fields of the header of the packet 120-2, in some embodiments” ), the packet headers being distributed by cycling through the plurality of header alteration processors according to a cycle scheme (Fig 2 “200”, “Control path 208”, “Data Path 206”, “Descriptor Generator 202”, “Distributer 210”) that ensures that processing of packet headers that undergo first sets of header alteration operations associated with first processing times does not delay processing of packet headers that undergo second sets of header alteration operations associated with second processing times, wherein the first processing times are longer than the second processing times ([0045] “The particular processing operations that the external processing engines 106 are configured to perform are typically highly resource intensive and/or would require a relatively longer time to be performed if the operations were performed using a more generalized processor, such as a PPNs 220 …[0054] “After the descriptor unpacking unit 205 decomposes a processed combined packet descriptor corresponding to the first packet and the second packet, the packet descriptor corresponding to the multicast packet is looped back to the control plane 208 for processing of a next instance of the multicast packet” [0055] “In an embodiment, when the descriptor packing unit 204 combines multiple (e.g., two, three, etc.) packet descriptors corresponding to different packets, and at least one of the packet descriptors corresponds to a higher priority packet, the descriptor packing unit 204 assigns the higher priority to the combined descriptor that represents the multiple packets. 
In this case, if a packet descriptor corresponding to a lower priority packet is combined with a packet descriptor corresponding to a higher priority packet, the lower priority packet is treated as a higher priority packet for the purpose of reducing congestion in the network device 200, in an embodiment”, (Examiner’s Note: multicast packets are looped back and processed again. As can be seen in [0045], [0054], and [0055], packet descriptors have different processing needs: some are more time consuming, requiring more processing power, and some are looped back and processed a second time. When congestion occurs, some systems would drop low-priority packets, which under BRI can be read as a delay; the system described in [0055], rather than dropping the packet, raises the priority of the lower-priority packet so that everything is processed, which under BRI can be read as “does not delay processing”), performing, by respective ones of the header alteration processors, respective sets of header alteration operations to process respective packet headers distributed to the header alteration processors, to generate modified packet headers ([0051] “In an embodiment, processing the first packet descriptor 230-1 a includes modifying the packet descriptor 230-1 a, for example to change one or more header bits extracted from the header of the first packet 220-1 and included in the first packet descriptor 230-1 a, to add information (e.g., a forwarding decision) to the packet descriptor 230-1 a, etc., in an embodiment.
In this embodiment, the processed first packet descriptor 230-1 b is a modified version of the first packet descriptor 230-1 a”, (Examiner’s Note: packet descriptor is equivalent to packet header), and aggregating the modified packet headers from the plurality of header alteration processors, the modified packet headers being aggregated according to the cycle scheme to preserve an order of the respective packet headers modified by the respective sets of header alteration operations performed by the header alteration processors ([0047] “In an embodiment, the reorder block 212 is configured to maintain order of at least the packets belonging to a same data flow entering the network device to ensure that these packets are transmitted from the network device in the order in which the packets were received by the network device. In particular, the reorder block 212 ensures that descriptors are transmitted from the control path 208 to the data path 206 in the same order that the descriptors were received by the control path 208 from the data path 206, in an embodiment”); and transmitting packets with the modified packet headers via the one or more egress interfaces of the network device ([0054] “In an embodiment, the packet processor 217 determines a destination port 104 for egressing the first (unicast) packet”). Levy does not teach and wherein the cycle scheme includes, during a particular cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme. 
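The limitation at the center of both the §112 and §103 disputes describes a distributor that, in a given cycle, does not hand a header to a processor that received a long-processing-time header in the preceding cycle. For orientation only, a minimal sketch of one possible reading of that scheme (the function, the two-class latency flag, and all names are illustrative assumptions, not taken from the application or the cited references):

```python
from collections import deque

def distribute(headers, num_processors):
    """Round-robin over processors, but never hand a header to a
    processor that got a long-latency header in the previous cycle."""
    # headers: iterable of (header, is_long) pairs; is_long marks a
    # "first processing time" (longer) header-alteration operation.
    skip_next = set()            # processors to skip in the next cycle
    assignments = []             # (cycle, processor, header)
    queue = deque(headers)
    cycle = 0
    while queue:
        busy_from_last = skip_next
        skip_next = set()
        for proc in range(num_processors):
            if proc in busy_from_last or not queue:
                continue         # skip: it took a long header last cycle
            header, is_long = queue.popleft()
            assignments.append((cycle, proc, header))
            if is_long:
                skip_next.add(proc)   # give it an extra cycle to finish
        cycle += 1
    return assignments

# Example: header "A" is long, so processor 0 is skipped in cycle 1.
out = distribute([("A", True), ("B", False), ("C", False)], 2)
print(out)   # [(0, 0, 'A'), (0, 1, 'B'), (1, 1, 'C')]
```

Under this reading, the "preceding cycle" is simply the previous pass through the processor pool, which is one way to understand the examiner's indefiniteness concern about how the current and preceding cycles are anchored.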
Kopelman teaches and wherein the cycle scheme includes, during a particular cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time (col 5 lines 1-10 “his SCI stage 120 in the pipeline preferably holds up to 3 cycles of a burst in order to enable the arbitration and identification of the source channel number. If both DRAM modules have a burst at the same time, one of the DRAM modules will be paused, for example in a round robin manner”), among the first processing times, was distributed in a preceding cycle of the cycle scheme (col 4 lines 60-67 “The header altering device 74 operates in a pipeline manner. The header altering device 74 includes a source channel identifier (SCI) stage 120”).

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify Levy’s method/apparatus by adding the teachings of Kopelman in order to make a more effective method/apparatus by allowing for identification of the source channel in the system.

Regarding claim 12, Levy teaches A network device ([0011] “FIG. 2 is a block diagram of an example network device configured to combine header information corresponding to multiple packets, according to an embodiment”), comprising: a packet processor configured to i) receive packets received by the network device from one or more network links (Fig. 2 “220-1, 220-2”, [0042] “the network device 200 receives a first packet 220-1 via the port 104 b.
Subsequently, the network device 200 receives a second packet 220-3 via the port 104 b”, [0044] “the packet processor 217 includes a plurality packet processing nodes (PPNs) 220 configured to concurrently, in parallel, perform processing of respective packet descriptors to process packets associated with the packet descriptors”) and ii) determine one or more egress interfaces via which respective ones of the packets are to be transmitted by the network device ([0021] “In an embodiment, the network device 100 is configured to receive packets via ingress ports 104, to determine respective egress ports 104 via which the packets are to be transmitted, and to transmit the packets via the determined ports 104”, [0039] “The network device 200 is similar to the network device 100 of FIG. 1); and a header alteration engine including a plurality of header alteration processors configured to perform modification of packet headers of the packets, the header alteration engine being configured to: distribute the packet headers among the plurality of header alteration processors for parallel processing of the packet headers ([0050] “The distributor 210 provides the combined packet descriptor 232 a, as a single data unit, to a PPN 220 … In response to determining that the packet descriptor 232 a is a combined packet descriptor, the PPN 220 decomposes the combined packet descriptor 232 a to extract the first packet descriptor 230-1 a and the second packet descriptor 230-2 a, in an embodiment”, [0044] “the packet processor 217 includes a plurality packet processing nodes (PPNs) 220 configured to concurrently, in parallel, perform processing of respective packet descriptors to process packets associated with the packet descriptors”, (Examiner’s Note: to modify the packet descriptor the processor is modifying the descriptor within the header as can be further seen in [0037] “Similarly, processing of the second packet 120-2 includes modifying the packet descriptor 130-2, for example to 
modify one or more fields of the header of the packet 120-2, in some embodiments” )), the packet headers being distributed by cycling through the plurality of header alteration processors according to a cycle scheme (Fig 2 “200”, “Control path 208”, “Data Path 206”, “Descriptor Generator 202”, “Distributer 210”) that ensures that processing of packet headers that undergo first sets of header alteration operations associated with first processing times does not delay processing of packet headers that undergo second sets of header alteration operations associated with second processing times, wherein the first processing times are longer than the second processing times ([0045] “The particular processing operations that the external processing engines 106 are configured to perform are typically highly resource intensive and/or would require a relatively longer time to be performed if the operations were performed using a more generalized processor, such as a PPNs 220 …[0054] “After the descriptor unpacking unit 205 decomposes a processed combined packet descriptor corresponding to the first packet and the second packet, the packet descriptor corresponding to the multicast packet is looped back to the control plane 208 for processing of a next instance of the multicast packet” [0055] “In an embodiment, when the descriptor packing unit 204 combines multiple (e.g., two, three, etc.) packet descriptors corresponding to different packets, and at least one of the packet descriptors corresponds to a higher priority packet, the descriptor packing unit 204 assigns the higher priority to the combined descriptor that represents the multiple packets. 
In this case, if a packet descriptor corresponding to a lower priority packet is combined with a packet descriptor corresponding to a higher priority packet, the lower priority packet is treated as a higher priority packet for the purpose of reducing congestion in the network device 200, in an embodiment”, (Examiner’s Note: multicast packets are looped back and processed again. As can be seen in [0045], [0054], and [0055], packet descriptors have different processing needs: some are more time consuming, requiring more processing power, and some are looped back and processed a second time. When congestion occurs, some systems would drop low-priority packets, which under BRI can be read as a delay; the system described in [0055], rather than dropping the packet, raises the priority of the lower-priority packet so that everything is processed, which under BRI can be read as “does not delay processing”), perform, with the header alteration processors, respective sets of header alteration operations to process respective packet headers distributed to the header alteration processors, to generate modified packet headers ([0051] “In an embodiment, processing the first packet descriptor 230-1 a includes modifying the packet descriptor 230-1 a, for example to change one or more header bits extracted from the header of the first packet 220-1 and included in the first packet descriptor 230-1 a, to add information (e.g., a forwarding decision) to the packet descriptor 230-1 a, etc., in an embodiment.
In this embodiment, the processed first packet descriptor 230-1 b is a modified version of the first packet descriptor 230-1 a”, (Examiner’s Note: packet descriptor is equivalent to packet header), and aggregate the modified packet headers from the plurality of header alteration processors, the modified packet headers being aggregated according to the cycle scheme to preserve an order of the respective packet headers modified by respective sets of header alteration operations performed by the header alteration processors ([0047] “In an embodiment, the reorder block 212 is configured to maintain order of at least the packets belonging to a same data flow entering the network device to ensure that these packets are transmitted from the network device in the order in which the packets were received by the network device. In particular, the reorder block 212 ensures that descriptors are transmitted from the control path 208 to the data path 206 in the same order that the descriptors were received by the control path 208 from the data path 206, in an embodiment”); wherein the packet processor is further configured to cause the packets with the modified packet headers to be transmitted via the one or more egress interfaces of the network device ([0054] “In an embodiment, the packet processor 217 determines a destination port 104 for egressing the first (unicast) packet”). Levy does not teach and wherein the cycle scheme includes, during a particular cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme. 
Kopelman teaches and wherein the cycle scheme includes, during a particular cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time (col 5 lines 1-10 “his SCI stage 120 in the pipeline preferably holds up to 3 cycles of a burst in order to enable the arbitration and identification of the source channel number. If both DRAM modules have a burst at the same time, one of the DRAM modules will be paused, for example in a round robin manner”), among the first processing times, was distributed in a preceding cycle of the cycle scheme (col 4 lines 60-67 “The header altering device 74 operates in a pipeline manner. The header altering device 74 includes a source channel identifier (SCI) stage 120”).

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify Levy’s method/apparatus by adding the teachings of Kopelman in order to make a more effective method/apparatus by allowing for identification of the source channel in the system.

Claims 3-9 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Levy in view of Kopelman, further in view of Shumsky et al. (US20140192815).

Regarding claims 3 and 13, Levy teaches wherein the header alteration engine is further configured to, prior to distributing the packet headers among the plurality of header alteration processors, identify respective processing threads to be implemented by the header alteration processors to process the respective packet headers ([0042] “The network device 200 is configured to perform a reduced set of processing operations with respect to packets that enter the network device 100 via the port 104 b, in an example embodiment. At least a portion, such as a header, of each of the packets 220-1 and 220-2 is provided to the descriptor generator 202.
In an embodiment, the descriptor generator 202 generates a first packet descriptor 230-1 a corresponding to the first packet 220-1, and generates a second packet descriptor 230-2 a corresponding to the second packet 220-2 a. The first packet descriptor 230-1 a includes a reduced set of bits extracted from the header of the first packet 220-1, in an embodiment. Similarly, the second packet descriptor 230-2 a includes a reduced set of bits extracted from the header of the second packet 220-2, in an embodiment. The descriptor packing unit 204 combines the first packet descriptor 230-1 a and the second packet descriptor 230-2 a into a single data structure, such as a single “combined” packet descriptor, 232 a that represents the first packet 220-1 and the second packet 220-2, in an embodiment. In an embodiment, the descriptor packing unit 204 is configured to include, in the combined packet descriptor 232, an indication (e.g., a “packing flag”) to indicate that the combined packet descriptor 232 represents multiple packets and includes respective sets of header bits extracted from the multiple packets”).

Levy does not explicitly teach wherein the respective processing threads are assigned re-cycle numbers corresponding to lengths of time required to perform the respective sets of header alteration operations.

Shumsky teaches wherein the respective processing threads are assigned re-cycle numbers ([0034] “when the instruction A indicates that processing of the packet A is completed and the packet A is ready to be forwarded to a target port 112 for transmission of the packet A via the target port 112, the controller 110 causes the packet A to be sent for transmission to the target port 112, removes the packet ID1 from the queue 114-1, and releases the packet ID1 for example by returning the packet ID1 to the pool of free IDs 120, in an embodiment.
As yet another example, the instruction A indicates that the packet A should be dropped, the controller 110 removes the ID1 from the queue 114-1 and returns the ID1 to the pool of free IDs 120, in an embodiment”, (Examiner’s note: instructions are equivalent to threads) corresponding to lengths of time required to perform the respective sets of header alteration operations ([0023] “the PPEs 104 provide class updates with respect to some of the packets (e.g., packets for which processing time is expected to be relatively long), and do not provide class updates with respect to other packets (e.g., packets for which processing time is expected to be relatively short)”).

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combination of Levy and Kopelman by adding the teachings of Shumsky in order to make a more effective method/apparatus by efficiently utilizing multiple processing elements to concurrently perform parallel processing of packets belonging to a data flow while efficiently maintaining an order of packets within the data flow.
Regarding claims 4 and 14, Levy teaches wherein the header alteration engine is configured to distribute the packet headers among the plurality of header alteration processors by cycling through the header alteration processors ([0050] “The distributor 210 provides the combined packet descriptor 232 a, as a single data unit, to a PPN 220 … In response to determining that the packet descriptor 232 a is a combined packet descriptor, the PPN 220 decomposes the combined packet descriptor 232 a to extract the first packet descriptor 230-1 a and the second packet descriptor 230-2 a, in an embodiment”, [0044] “the packet processor 217 includes a plurality packet processing nodes (PPNs) 220 configured to concurrently, in parallel, perform processing of respective packet descriptors to process packets associated with the packet descriptors”).

Levy and Kopelman do not explicitly teach based on the re-cycle numbers assigned to particular processing threads being implemented by the header alteration processors.

Shumsky teaches based on the re-cycle numbers assigned to particular processing threads being implemented by the header alteration processors ([0033] “The controller 110 checks whether the ID2 associated with the packet B is at the head of the queue 114-1. Because the ID2 is not at the head of the queue 114-1, the controller 110 does not take the action indicated by the instruction B. Rather, the controller 110 associates the action indicated by instruction B with the packet B, for example by storing an association between the action indicated by the instruction B and the ID2 associated with the packet B in the actions database 122. Then, the controller 110 receives the instruction A indicating the action to be taken with respect to the packet A, and checks whether the ID1 associated with the packet A is at the head of the queue 114-1. Because the ID1 is at the head of the queue 114-1, the controller 110 performs the action indicated by the instruction A”).
It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combination of Levy and Kopelman by adding the teachings of Shumsky in order to make a more effective method/apparatus by efficiently utilizing multiple processing elements to concurrently perform parallel processing of packets belonging to a data flow while efficiently maintaining an order of packets within the data flow.

Regarding claims 5 and 15, Levy and Kopelman do not teach wherein the header alteration engine is configured to, while cycling through the header alteration processors, skip a particular header alteration processor for a number of cycles corresponding to a particular re-cycle number assigned to a particular processing thread being implemented by the particular header alteration processor.

Shumsky teaches wherein the header alteration engine is configured to, while cycling through the header alteration processors, skip a particular header alteration processor for a number of cycles corresponding to a particular re-cycle number assigned to a particular processing thread being implemented by the particular header alteration processor ([0034] “As yet another example, the instruction A indicates that the packet A should be dropped, the controller 110 removes the ID1 from the queue 114-1 and returns the ID1 to the pool of free IDs 120, in an embodiment”, (Examiner’s Note: dropping is broadly interpreted as skip)).

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combination of Levy and Kopelman by adding the teachings of Shumsky in order to make a more effective method/apparatus by efficiently utilizing multiple processing elements to concurrently perform parallel processing of packets belonging to a data flow while efficiently maintaining an order of packets within the data flow.
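Claims 5 and 15 generalize the single-cycle skip: a processor is bypassed for a number of cycles equal to the re-cycle number assigned to its current thread. A minimal sketch of that reading (names and re-cycle values are illustrative assumptions, not taken from the application or Shumsky):

```python
from collections import deque

def distribute_with_recycle(jobs, num_processors):
    """Cycle through processors, skipping each one for as many cycles
    as the re-cycle number of the thread it was last given."""
    # jobs: iterable of (header, recycle_number) pairs, where the
    # re-cycle number reflects the thread's processing length.
    remaining = [0] * num_processors     # skip-cycles left per processor
    schedule = []                        # (cycle, processor, header)
    queue = deque(jobs)
    cycle = 0
    while queue:
        for proc in range(num_processors):
            if remaining[proc] > 0:
                remaining[proc] -= 1     # still busy: skip this cycle
                continue
            if not queue:
                continue
            header, recycle = queue.popleft()
            schedule.append((cycle, proc, header))
            remaining[proc] = recycle    # busy for `recycle` more cycles
        cycle += 1
    return schedule

# Processor 0 takes "A" (re-cycle 2) and is skipped for two cycles.
sched = distribute_with_recycle([("A", 2), ("B", 0), ("C", 0), ("D", 0)], 2)
print(sched)  # [(0, 0, 'A'), (0, 1, 'B'), (1, 1, 'C'), (2, 1, 'D')]
```

With every re-cycle number set to at most 1, this reduces to the single-cycle skip recited in claims 2 and 12, which is one way to see why the dependent claims rise and fall with the independent ones.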
Regarding claims 6 and 16, Levy teaches wherein the header alteration engine includes a hardware input processor configured to: identify the respective processing threads using a hardware input processor of the header alteration engine prior to distributing the respective packet headers to the plurality of header alteration processors ([0042] "In an embodiment, the descriptor packing unit 204 is configured to include, in the combined packet descriptor 232, an indication (e.g., a "packing flag") to indicate that the combined packet descriptor 232 represents multiple packets and includes respective sets of header bits extracted from the multiple packets"), and when a particular packet header is distributed to a particular header alteration processor, provide, to the particular header alteration processor, a thread identifier associated with a particular processing thread to be implemented by the particular header alteration processor to process the particular packet header ([0050] "The distributor 210 provides the combined packet descriptor 232 a, as a single data unit, to a PPN 220 … In response to determining that the packet descriptor 232 a is a combined packet descriptor, the PPN 220 decomposes the combined packet descriptor 232 a to extract the first packet descriptor 230-1 a and the second packet descriptor 230-2 a, in an embodiment") (Examiner's Note: the flag is the identifier).
Regarding claims 7 and 17, Levy teaches wherein the hardware input processor is configured to identify the respective processing threads based on one or both of i) respective packet flows and ii) respective packet types of the corresponding packets ([0050] "The distributor 210 provides the combined packet descriptor 232 a, as a single data unit, to a PPN 220 … In response to determining that the packet descriptor 232 a is a combined packet descriptor, the PPN 220 decomposes the combined packet descriptor 232 a to extract the first packet descriptor 230-1 a and the second packet descriptor 230-2 a, in an embodiment") (Examiner's Note: the flag is the identifier, and the PPN identifies the packing flag of the combined descriptor, which contains multiple packets).

Regarding claims 8 and 18, Levy teaches wherein: the hardware input processor is further configured to: in connection with distributing a particular packet header to a particular header alteration processor, i) extract one or more portions of the packet header to be provided to the header alteration processor ([0051] "In an embodiment, processing the first packet descriptor 230-1 a includes modifying the packet descriptor 230-1 a, for example to change one or more header bits extracted from the header of the first packet 220-1 and included in the first packet descriptor 230-1 a, to add information (e.g., a forwarding decision) to the packet descriptor 230-1 a, etc., in an embodiment") and ii) generate an alteration accessible header to include the one or more portions extracted from the particular packet header, the alteration accessible header being separate from the particular packet header ([0051], quoted above), and provide the alteration accessible header, rather than the particular packet header, to the particular header alteration processor ([0052] "upon providing the processed combined descriptor 232 b to the buffer 214, the PPN 220 informs the reorder block 212 that processing of the combined packet descriptor 232 a has been completed by the PPN 220"), and the header alteration engine further comprises an output hardware processor configured to, after the alteration accessible header is processed by the particular header alteration processor, integrate the alteration accessible header, processed by the particular header alteration processor, into the particular packet header ([0052] "The packet reorder block 212 causes the single combined processed packet descriptor 232 b to be transmitted from the buffer 214 to the data path 206 when all packet descriptors (or data units containing multiple packet descriptors) received from the data path 206 prior to the packet descriptor 232 a have been returned to the data path 206, in an embodiment. Allowing the reorder block 212 to process two packet descriptors as a single data unit reduces the number of operations that the reorder block 212 needs to perform in order to maintain packet order").

Regarding claims 9 and 19, Levy and Kopelman do not explicitly teach wherein the hardware input processor is further configured to: generate metadata to include at least the thread identifier associated with the particular processing thread to be implemented to process the particular packet header by the particular header alteration processor, and provide the metadata along with the alteration accessible header to the particular header alteration processor.
Shumsky teaches these limitations ([0031] "In another embodiment, the free IDs unit 120 includes another suitable record (e.g., a table or a database) of free IDs and/or includes an ID generator. In an embodiment, the dispatch unit 118 suitably associates each packet and the packet ID assigned to the packet, and sends the packet along with the packet ID to a PPE 104. The PPE 104 uses the packet ID associated with the packet to communicate with the ordering unit 106, for example to send instructions, to the controller 110, indicative of actions to be taken with respect to the packet by the controller 110").

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combined method/apparatus of Levy and Kopelman by adding the teachings of Shumsky in order to arrive at a more effective method/apparatus that efficiently utilizes multiple processing elements to concurrently perform parallel processing of packets belonging to a data flow while efficiently maintaining the order of packets within the data flow.

Claims 10-11 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Levy in view of Kopelman, further in view of Shumsky, and further in view of Wohlgemuth et al. (US 2015/0113190).
Regarding claims 10 and 20, Levy, Kopelman, and Shumsky do not explicitly teach wherein the hardware input processor is configured to: split information comprising the metadata and the alteration accessible header into a plurality of chunks, and serially transfer respective chunks, among the plurality of chunks, to the particular header alteration processor, wherein an initial chunk of the plurality of chunks transferred to the header alteration processor includes at least the thread identifier associated with the particular processing thread to be implemented by the particular header alteration processor.

Wohlgemuth teaches splitting information comprising the metadata and the alteration accessible header into a plurality of chunks ([0028] "Continuing with FIG. 2, at a time t3, the PPN 104 triggers a second accelerator engine B for performing a processing operation 204 with respect to the packet. With reference to FIG. 1, in an example embodiment, the second accelerator engine B triggered at the time t3 is the accelerator engine 106b, and the processing operation 204 is, for example, a policy lookup operation for the packet. In an embodiment, after triggering the accelerator engine B, and before receiving a result of the processing operation 204 from the accelerator engine B, the PPN 104 continues processing of the packet, and executes one or more instructions corresponding to a portion 200c of processing of the packet. In an embodiment, the portion 200c includes performing one or more processing operations, with respect to the packet") (Examiner's Note: the chunks are portions of processing of the packet), and serially transferring respective chunks, among the plurality of chunks, to the particular header alteration processor, wherein an initial chunk of the plurality of chunks transferred to the header alteration processor includes at least the thread identifier associated with the particular processing thread to be implemented by the particular header alteration processor ([0027] "In an embodiment, the processing thread 200 includes a set of computer readable instructions that the PPN 104 executes to process a packet. The PPN 104 begins execution of the thread 200 at a time t1 by executing one or more instructions of the thread 200 corresponding to a portion 200a of processing of the packet").

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combined method/apparatus of Levy, Kopelman, and Shumsky by adding the teachings of Wohlgemuth in order to arrive at a more effective method/apparatus that efficiently identifies rules for the packet processing.

Regarding claims 11 and 21, Levy, Kopelman, and Shumsky do not teach wherein the header alteration processor is configured to, prior to receiving an initial portion of the alteration accessible header at the particular header alteration processor, retrieve, from a program memory based on the thread identifier included in the initial chunk of the plurality of chunks, a set of computer readable instructions to be implemented by the particular header alteration processor to process the alteration accessible header.
Wohlgemuth teaches wherein the header alteration processor is configured to, prior to receiving an initial portion of the alteration accessible header at the particular header alteration processor, retrieve, from a program memory based on the thread identifier included in the initial chunk of the plurality of chunks ([0027] "In an embodiment, the processing thread 200 includes a set of computer readable instructions that the PPN 104 executes to process a packet. The PPN 104 begins execution of the thread 200 at a time t1 by executing one or more instructions of the thread 200 corresponding to a portion 200a of processing of the packet"), a set of computer readable instructions to be implemented by the particular header alteration processor to process the alteration accessible header ([0021] "In an embodiment, to efficiently manage responses, corresponding to concurrently pending requests to the accelerator engines 106, the PPN 104 assigns a respective identification numbers (IDs) to each transaction upon initiation of the transaction, and to subsequently use and ID assigned to a particular transaction to determine that the a response to the particular transaction has been received by the PPN 104. Managing responses corresponding to multiple concurrently pending transactions using respective IDs assigned to the transactions upon initiation of the transactions allows the PPN 104 to quickly and efficiently determine that a result corresponding to a particular transaction has been received by the PPN and is available to be used for further processing of the packet at the PPN").

It therefore would have been obvious to one of ordinary skill in the art, at the time the instant application was filed, to modify the combined method/apparatus of Levy, Kopelman, and Shumsky by adding the teachings of Wohlgemuth in order to arrive at a more effective method/apparatus that efficiently identifies rules for the packet processing.
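For context, the chunked-transfer behavior recited in claims 10-11 — the metadata and the alteration accessible header split into chunks, with the thread identifier carried in the initial chunk so the receiving processor can fetch its instructions before the remaining chunks arrive — can be sketched as follows. This is an illustrative reconstruction only; the function names, the 16-byte chunk size, the 2-byte thread-ID field, and the sample instruction names are assumptions, not taken from the application or the cited references.

```python
CHUNK_SIZE = 16  # bytes per chunk; illustrative value

def split_into_chunks(thread_id: int, metadata: bytes, header: bytes) -> list[bytes]:
    """Split metadata + header into serial chunks; the initial chunk begins
    with the thread identifier so the receiver can start instruction fetch
    before the later chunks arrive."""
    payload = thread_id.to_bytes(2, "big") + metadata + header
    return [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]

# Hypothetical program memory mapping thread IDs to instruction sequences.
PROGRAM_MEMORY = {
    7: ["pop_vlan", "push_mpls", "update_ttl"],
}

def receive_chunks(chunks: list[bytes]):
    """On the initial chunk, read the thread ID and retrieve that thread's
    instructions from program memory; then reassemble the remaining bytes."""
    thread_id = int.from_bytes(chunks[0][:2], "big")
    instructions = PROGRAM_MEMORY[thread_id]  # fetched using only chunk 0
    payload = b"".join(chunks)[2:]  # strip the thread-id prefix
    return thread_id, instructions, payload
```

The design point the claims turn on is that only the initial chunk is needed to select the instruction set, so instruction retrieval can overlap with transfer of the remaining chunks.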
Response to Arguments

Applicant's arguments filed 10/20/2025 have been fully considered, but they are not persuasive.

Applicant's Argument: Applicant remarks that Levy does not teach the packet headers being distributed by cycling through the plurality of header alteration processors according to a cycle scheme that ensures that processing of packet headers that undergo first sets of header alteration operations associated with first processing times does not delay processing of packet headers that undergo second sets of header alteration operations associated with second processing times, wherein the first processing times are longer than the second processing times, and wherein the cycle scheme includes, during a current cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme.

Examiner's Response: Examiner respectfully disagrees. See the updated rejection over Levy in view of the newly added reference Kopelman. Levy is relied upon to show that multicast packets are looped back and processed again, as can be seen in [0045], [0054], and [0055]. Kopelman is relied upon to show wherein the cycle scheme includes, during a current cycle of the cycle scheme, not distributing any packet header to a particular header alteration processor, among the plurality of header alteration processors, to which a packet header associated with a first processing time, among the first processing times, was distributed in a preceding cycle of the cycle scheme (col. 5, lines 1-10: "[T]his SCI stage 120 in the pipeline preferably holds up to 3 cycles of a burst in order to enable the arbitration and identification of the source channel number. If both DRAM modules have a burst at the same time, one of the DRAM modules will be paused, for example in a round robin manner"; col. 4, lines 60-67: "The header altering device 74 operates in a pipeline manner. The header altering device 74 includes a source channel identifier (SCI) stage 120").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEITH TRAN-DANH FOLLANSBEE, whose telephone number is (571) 272-3071. The examiner can normally be reached 10 am - 6 pm, M-Th.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Derrick Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.T.F./ Examiner, Art Unit 2411
/DERRICK W FERRIS/ Supervisory Patent Examiner, Art Unit 2411

Prosecution Timeline

May 23, 2022
Application Filed
Jan 27, 2023
Response after Non-Final Action
Nov 25, 2024
Non-Final Rejection — §103, §112
Mar 03, 2025
Response Filed
Jun 13, 2025
Final Rejection — §103, §112
Oct 20, 2025
Request for Continued Examination
Oct 27, 2025
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603684
METHOD AND DEVICE FOR COMMUNICATION
2y 5m to grant Granted Apr 14, 2026
Patent 12513029
CARRIER FREQUENCY TRACKING METHOD, SIGNAL TRANSMISSION METHOD, AND RELATED APPARATUS
2y 5m to grant Granted Dec 30, 2025
Patent 12507284
ENHANCED UPLINK POWER CONTROL FOR PHYSICAL RANDOM ACCESS CHANNEL AFTER INITIAL ACCESS
2y 5m to grant Granted Dec 23, 2025
Patent 12476895
DEVICE FOR CONSTRUCTING NEURAL BLOCK RAPID-PROPAGATION PROTOCOL-BASED BLOCKCHAIN AND OPERATION METHOD THEREOF
2y 5m to grant Granted Nov 18, 2025
Patent 12463907
VALIDATING NETWORK FLOWS IN A MULTI-TENANTED NETWORK APPLIANCE ROUTING SERVICE
2y 5m to grant Granted Nov 04, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
82%
With Interview (+18.6%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 85 resolved cases by this examiner. Grant probability derived from career allow rate.
