Prosecution Insights
Last updated: April 19, 2026
Application No. 17/717,027

MULTI-TECHNOLOGY MULTI-USER IMPLEMENTATION FOR LOWER MAC PROTOCOL PROCESSING

Status: Non-Final OA (§103)
Filed: Apr 08, 2022
Examiner: CHAKRAVARTHY, LATHA
Art Unit: 2461
Tech Center: 2400 (Computer Networks)
Assignee: Edgeq Inc.
OA Round: 5 (Non-Final)

Predictions:
Grant Probability: 31% (At Risk)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 5m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 31% (8 granted / 26 resolved; -27.2% vs TC avg)
Interview Lift: +57.0% (resolved cases with an interview vs without)
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 40
Total Applications: 66 (across all art units)

Statute-Specific Performance

§103: 65.4% (+25.4% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Tech Center averages are estimates; based on career data from 26 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Office action is in response to the claim amendments and remarks filed on November 14, 2025 for the application filed April 8, 2022. Claims 1, 5, 10, 14, 16, and 18 are amended. Claims 1-6, 8, and 10-20 are currently pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 10-14 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tillinger et al. (U.S. Pub. No. 2021/0289482) in view of Liu et al. (U.S. Pub. No. 2017/0171060), Vasudevan (U.S. Pub. No. 2019/0238460), and Chowdhuri et al. (U.S. Pat. No. 8,175,015 B1).

Regarding claim 10, Tillinger teaches a communication device (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100.
The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104. Paragraph [0033]: The UE may also be referred to as…..a wireless communications device… ) comprising: a physical layer (PHY) (Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. Paragraph [0043] Fig 3: The transmit (TX) processor 316 and the receive (RX) processor 370 implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels….Paragraph [0044] Fig 3: At the UE 350, each receiver 354RX receives a signal through its respective antenna 352. Each receiver 354RX recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor 356. The TX processor 368 and the RX processor 356 implement layer 1 functionality associated with various signal processing functions);

a medium access control (MAC) layer or MAC sublayer coupled to the PHY (Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. …. MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization.)
the MAC layer or MAC sublayer is configured for: processing a plurality of decoder codeblocks across multiple receiving data (RX) flows and a plurality of RX configuration blocks for the multiple RX flows to generate one or more decapsulated packets along with packet metadata and a flow status for each of the multiple RX flows, the plurality of decoder codeblocks and the plurality of RX configuration blocks are output from PHY; (Paragraph [0077]: FIG. 9 is a conceptual data flow diagram 900 illustrating the data flow between different means/components in an example apparatus 902. For example, the apparatus 902 may be a UE (e.g., the UE 402). The apparatus 902 includes a reception component 904 that receives one or more code blocks (e.g., a first code block through an Nth code block) from a base station 950 through one or more channels (e.g., a first channel through an Nth channel). As described in connection with 802, the reception component 904 may receive through a first channel a first code block from a base station. The apparatus 902 further includes a demodulator/decoder component 906 that demodulates and decodes the one or more code blocks received by the reception component 904 through the one or more channels. That is, the reception component 904 may provide the one or more code blocks to the demodulator/decoder component 906 to demodulate and decode the one or more code blocks. For example, as described in connection with 804, the demodulator/decoder component 906 may demodulate and decode the first code block to obtain at least one control information or data associated with the first code block. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. 
Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor 375 provides ….MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Paragraph [0043]: The transmit (TX) processor 316 and the receive (RX) processor 370 implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels…. Paragraph [0044], Fig 3: At the UE 350….. The TX processor 368 and the RX processor 356 implement layer 1 functionality associated with various signal processing functions….. The data and control signals are then provided to the controller/processor 359, which implements layer 3 and layer 2 functionality. Paragraphs [0086], [0087], Fig 10: The processing system 1014 may be a component of the UE 350 and may include the memory 360 and/or at least one of the TX processor 368, the RX processor 356, and the controller/processor 359. Alternatively, the processing system 1014 may be the entire UE (e.g., see 350 of FIG. 3). In one configuration, the apparatus 902/902′ for wireless communication includes means for receiving, demodulating and decoding, encoding and modulating, estimating, and transmitting.) a higher layer coupled to the MAC layer or the MAC sublayer; receives from the MAC layer or the MAC sublayer the one or more decapsulated packets, the packet metadata and the flow status for each of the multiple RX flows. (FIG. 
9 is a conceptual data flow diagram 900 illustrating the data flow between different means/components in an example apparatus 902. For example, the apparatus 902 may be a UE (e.g., the UE 402). The apparatus 902 includes a reception component 904 that receives one or more code blocks (e.g., a first code block through an Nth code block) from a base station 950 through one or more channels (e.g., a first channel through an Nth channel). As described in connection with 802, the reception component 904 may receive through a first channel a first code block from a base station. The apparatus 902 further includes a demodulator/decoder component 906 that demodulates and decodes the one or more code blocks received by the reception component 904 through the one or more channels. That is, the reception component 904 may provide the one or more code blocks to the demodulator/decoder component 906 to demodulate and decode the one or more code blocks. For example, as described in connection with 804, the demodulator/decoder component 906 may demodulate and decode the first code block to obtain at least one control information or data associated with the first code block. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes … a medium access control (MAC) layer. The controller/processor 375 provides RRC layer functionality associated with broadcasting of system information …. 
RLC layer functionality associated with the transfer of upper layer packet data units….; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Paragraph [0044]: The data and control signals are then provided to the controller/processor 359, which implements layer 3 functionality.)

Tillinger does not explicitly teach processing a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY; the higher layer outputs to the MAC layer or MAC sublayer the plurality of packets across multiple TX flows, the plurality of codeblock descriptors, and the plurality of TX configuration blocks.

However, Liu teaches processing a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY (Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets. Paragraph [0035]: FIG. 2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3.
Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. The L1 layer will be referred to herein as the physical layer 206. Paragraph [0036]: the L2 layer 208 includes a media access control (MAC) sublayer 210…. the UE 106 may also have higher layers....Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Transmitter 410 may have one or more application (APP) layer encoders 417, a TCP encoder 416, an IP encoder 415, L2 layer encoders including a PDCP encoder 414, RLC encoder 413, and MAC encoder 412, and a L1 physical (PHY) layer encoder 411. In an aspect, for example, transmitter 410 can use encoders 411-417 in succession from APP layer encoder 417 to PHY encoder 411 to encapsulate a RLC layer payload 442 with a series of headers. Paragraph [0056]: L2 encoders, including….MAC encoder 412, may receive an IP datagram from IP encoder 415 and may successively add respective PDCP, RLC, and MAC headers and an L2 footer to the IP datagram to generate an Layer 2 (L2) frame…. In an aspect, each of the L2 headers may include information regarding the data packet and/or data flow. Paragraph [0057]: Physical-layer (PHY) encoder 411 may receive the L2 frame from one of the L2 encoders.) the higher layer outputs to the MAC layer or MAC sublayer the plurality of packets across multiple TX flows, the plurality of codeblock descriptors, and the plurality of TX configuration blocks; (Paragraph [0035]: FIG. 2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. 
The L1 layer will be referred to herein as the physical layer 206. Paragraph [0036]: the L2 layer 208 includes a media access control (MAC) sublayer 210…. the UE 106 may also have higher layers....Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Transmitter 410 may have one or more application (APP) layer encoders 417, a TCP encoder 416, an IP encoder 415, L2 layer encoders including a PDCP encoder 414, RLC encoder 413, and MAC encoder 412. In an aspect, for example, transmitter 410 can use encoders 411-417 in succession from APP layer encoder 417 …. to encapsulate a RLC layer payload 442 with a series of headers. Paragraph [0056]: L2 encoders, including….MAC encoder 412, may receive an IP datagram from IP encoder 415 and may successively add respective PDCP, RLC, and MAC headers and an L2 footer to the IP datagram to generate an Layer 2 (L2) frame…. In an aspect, each of the L2 headers may include information regarding the data packet and/or data flow. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. In an aspect, relay device 420 may also include encoders to generate transport blocks. Paragraph [0078], Fig 5: In an aspect, encoders 515 can include one or more layer encoders 411-417 that may retrieve a data sequence 540 including one or more data payloads and encapsulate the payloads to provide transport blocks). 
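As a minimal conceptual sketch (not from the application or the cited references; every name below is hypothetical), the multi-flow RX data path recited in claim 10 amounts to a MAC function that consumes per-flow decoder codeblocks plus an RX configuration block and emits decapsulated packets, packet metadata, and a per-flow status for the higher layer; the TX direction mirrors it.

```python
# Illustrative model only; all identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class RxFlow:
    flow_id: int
    codeblocks: list   # decoder codeblocks output by the PHY
    config: dict       # RX configuration block for this flow

def mac_process_rx(flows):
    """For every RX flow, return decapsulated packets, packet metadata,
    and a flow status, keyed by flow_id (all destined for the higher layer)."""
    results = {}
    for f in flows:
        packets, metadata = [], []
        ok = True
        for cb in f.codeblocks:
            if cb.get("crc_ok", True):
                # "decapsulation" stands in for stripping the MAC (sub)header
                packets.append(cb["payload"])
                metadata.append({"flow": f.flow_id, "len": len(cb["payload"])})
            else:
                ok = False  # a failed codeblock marks the flow for HARQ retransmission
        results[f.flow_id] = {
            "packets": packets,
            "metadata": metadata,
            "status": "OK" if ok else "HARQ_RETX",
        }
    return results
```

A flow whose codeblocks all pass CRC reports status "OK"; any failed codeblock flips the flow status, which is the kind of per-flow signal the claim routes up to the higher layer alongside the packets themselves.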
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide processing a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY, a flow status for each of the multiple RX flows towards a higher layer, as well as a higher layer coupled to the MAC layer or the MAC sublayer, the higher layer outputs to the MAC layer or MAC sublayer the plurality of packets across multiple TX flows, the plurality of codeblock descriptors, and the plurality of TX configuration blocks, as taught by Liu in the system of Tillinger, so that it would include the data flow processing in a MAC layer to and from both the PHY layer as well as a higher layer, which would then enable data flow processing within a communication device in both RX as well as TX data flows.

Liu describes that the processing system encodes the data sequence, including headers and other additional information such as configuration information, and multi-link flags (for coordinating data flows), acknowledgements and retransmissions, while transmitting the transport blocks, such that L2 headers may include information regarding the data packet and/or data flow (Liu: Paragraphs [0056], [0073], [0086], [0087], Fig 4, Fig 5).

The combination of Tillinger and Liu does not explicitly teach pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer.
However, Vasudevan teaches pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer (Paragraph [0020]: Various embodiments provide for deterministic contextual prefetching and accelerated I/O processing from unifying the hardware and software I/O processing pipeline through a shared context. For example, in the case of a network interface controller (NIC), the shared context is a pointer that contains flow specific contextual (e.g., TCP flow) addresses configured on the NIC per flow entry. This pointer is returned for ingress and egress related descriptor completions that match a specific flow entry, enabling prefetching to be done early in the pipeline with sufficient prefetch distance. Also, the shared context can eliminate the need for context lookups during software packet processing because the NIC has performed the lookup by matching the flow entry and returning the shared context pointer, potentially significantly speeding up packet processing. Paragraph [0026]: Thus, when a packet arrives and hits a matching entry in the Flow Director table of a NIC, the NIC retrieves the associated programmed context and returns the associated programmed context to the host along with the packet and other completion information. The host checks for a valid context and issues prefetches for the addresses contained in the context, if it determines that one exists. For example, the context could contain addresses to network layer contexts, transport layer contexts and socket layer contexts, which could all be prefetched, well ahead of the processing. These prefetches could prevent protocol processing pipeline from stalling, because prior to executing, associated data is prefetched and made available at the highest levels of the caching hierarchy. This can improve performance significantly. Paragraph [0027]: FIG. 
1 depicts an example system including a network interface and a host system. Network interface 100 provides for identifying packets (transmit or receive) that have associated context information stored in memory of network interface 100 or host 150. The context information can be retrieved for one or more packets for packet or application processing of a received packet by network interface 100 or a packet to be transmitted by network interface 100. Paragraph [0039]: A received packet in packet buffer 162 can be retrieved and processed. Driver 168 can inform OS 172 of availability of a received packet. OS 172 can apply MAC layer processing on the packet using MAC context information including using driver data structures, driver statistic structures, and so forth. The MAC context information can be prefetched into cache of a core that performs MAC layer processing. Paragraph [0045]: The first context information address can refer to a MAC context address and the first context information address can refer to a MAC context information. Paragraph [0050]: For example, packet characteristics can be characteristics of the communication channel such as one or more of: source MAC address, destination MAC address, IPv4 source address, IPv4 destination address, portion of a TCP header, Virtual Extensible LAN protocol (VXLAN) tag, receive port, or transmit port. For example, a host operating system or network interface driver can identify the context information to be prefetched and store the information into an array on the network interface device. Paragraph [0053]: Prefetching can occur at the device driver so that by the time upper layer protocol processing starts, the associated data is ready at the cache.) 
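The shared-context prefetching Vasudevan describes can be pictured with a minimal sketch (all names here are hypothetical, not Vasudevan's API): a per-flow table maps a packet's flow key to a context whose addresses are "prefetched" into a cache before protocol processing runs, so the processing step itself never stalls on a lookup.

```python
# Illustrative model only; hypothetical names throughout.
flow_table = {}   # flow key -> shared context, configured once per flow entry
cache = {}        # stands in for the CPU caching hierarchy

def on_rx_completion(flow_key, packet):
    """Completion path: the 'hardware' lookup returns the shared context,
    and every address it lists is prefetched ahead of processing."""
    ctx = flow_table.get(flow_key)
    if ctx is not None:
        for addr in ctx["addresses"]:   # e.g. MAC, transport, socket contexts
            cache[addr] = ctx           # warm the cache with sufficient prefetch distance
    return packet, ctx

def process_packet(packet, ctx):
    """By the time protocol processing starts, all needed contexts are cached."""
    assert all(a in cache for a in ctx["addresses"])
    return len(packet)
```

The design point mirrors the quoted passages: the lookup happens once at completion time, and processing consumes the already-cached contexts rather than performing its own context lookups.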
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer, as taught by Vasudevan in the combined system of Tillinger and Liu, so that the pre-fetching can significantly speed up packet processing leading to improved performance (Vasudevan: Paragraphs [0020], [0026], [0027], [0039], [0053]).

The combination of Tillinger, Liu, and Vasudevan does not explicitly teach wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows.

However, Chowdhuri teaches wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows (Col 6, lines 37-48: The MAC processor 110 generally acts as an interface between a modem that implements physical layer functions and higher communication protocol layers. The MAC processor 110 receives MSDUs from upper layer functions, organizes them into MPDUs, and then provides the MPDUs to the modem for transmission from the SS. Additionally, the MAC processor 110 receives MPDUs from the modem, the MPDUs having been received by the SS, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions. Col 10, line 67, Col 11, line 1: the context switching processor 200 may be utilized in a MAC processor.
Col 11, lines 39-49: The context switching processor 200 also includes a context memory 212 and context switch logic 216. The processing engine 204 is coupled to the context memory 212, and the processing engine 204 generally stores state information in the context memory 212 when a context switch occurs. A context switch is when the processing engine 204 stops processing one burst (mid-burst) and then starts processing another burst. Thus, when a context switch occurs, the processing engine 204 also may retrieve state information from the context memory 212 corresponding to the next burst that is to be processed. Col 12, lines 8-12: Although context switching was described in the context of processing burst data received from the modem, processing using context switching optionally may also be implemented for burst data that is to be provided to the modem for transmission. Col 13, lines 30-33, 39-46: At block 246, the current context is saved. For example, the state information corresponding to the current context is saved to the context memory 212. At block 250, the next context (determined at block 242) is restored. For example, the state information corresponding to the next context is retrieved from the context memory 212 and stored in corresponding memories (e.g., registers) in the processing engine 204. In one implementation, the context switch logic 216 may generate control signals to retrieve the state information corresponding to the next context from the context memory 212.

Examiner’s note: MAC processor acts as an interface between a modem that implements physical layer functions, and higher communication protocol layers; organizes the MSDUs that it receives from upper layers into MPDUs, and provides them to the modem for transmission (TX flow); receives MPDUs from the modem, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions (RX flow).
Context switching is performed for processing data received from the modem, and processing data that is to be provided to the modem for transmission, using context memory where current context is saved to the context memory, and also, the state information is retrieved from the context memory, which teaches storing and fetching context for RX and TX flows.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows, as taught by Chowdhuri in the combined system of Tillinger, Liu, and Vasudevan, so that when processing stops and context switch occurs, state information can be stored in the context memory, and retrieved when the processing restarts (Chowdhuri: Col 11, lines 3-17, 39-49).

Regarding claim 11, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the communication device of claim 10 (see rejection for claim 10). Tillinger further teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a wireless standard, a user (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0034]: Referring again to FIG.
1, in certain aspects, the UE 104 may be configured to encode and modulate control information and/or data to obtain a reference code block; receive the second code block; and demodulate and decode the second code block based on the estimated second channel (198). The concepts described herein may be applicable to other similar areas, such as LTE, and other wireless technologies. Paragraph [0052]: The first code block may be received through a first channel. At 408, the UE 402 demodulates and decodes the first code block to obtain, for example, control information and/or data from the first code block. At 410, the UE 402 encodes and modulates the control information and/or the data from the first code block to obtain an encoded and modulated reference first code block. For example, the control information and/or the data may be re-encoded and re-modulated by the UE 402.)

Tillinger does not explicitly teach wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard.

However, Liu teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard (Paragraph [0006]: The multiple access technologies have been adopted in various telecommunication standards to provide a common protocol that enables different wireless devices to communicate on a municipal, national, regional, and even global level. An example telecommunication standard is Long Term Evolution (LTE). LTE is a set of enhancements to the Universal Mobile Telecommunications System (UMTS) mobile standard promulgated by Third Generation Partnership Project (3GPP).
LTE is designed to better support mobile broadband Internet access by improving spectral efficiency, lowering costs, improving services, making use of new spectrum, and better integrating with other open standards using OFDMA on the downlink (DL), SC-FDMA on the uplink (UL), and multiple-input multiple-output (MIMO) antenna technology. However, as the demand for mobile broadband access continues to increase, there exists a need for further improvements in LTE technology. Preferably, the improvements should be applicable to other multi-access technologies and the telecommunication standards that employ these technologies. Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets through the access network. Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer Protocol data unit (PDU) 440. 
In an aspect, relay device 420 may also include encoders to generate transport blocks);

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard as taught by Liu in the combined system of Tillinger, Vasudevan, and Chowdhuri, so that it would provide processing in both directions for transmitting and receiving data flows to a user across multi-access technologies (Liu: Paragraphs [0006], [0056], [0073], [0086], [0087], Fig 4, Fig 5).

Regarding claim 12, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the communication device of claim 10 (see rejection for claim 10); Tillinger further teaches wherein the wireless standard has a Wi-Fi protocol, or a 5G new radio (NR) protocol (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0024], Fig 1: The base stations 102 configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network 190 through second backhaul links 184. Paragraph [0026], Fig 1: Certain UEs 104 may communicate with each other using device-to-device (D2D) communication link 158. D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard,… or NR. Paragraph [0034]: Referring again to FIG. 1, in certain aspects, the UE 104 may be configured to encode and modulate control information and/or data to obtain a reference code block….
the concepts described herein may be applicable to other similar areas, …and other wireless technologies.)

Tillinger does not explicitly teach wherein the wireless standard has a long-term evolution (LTE) protocol.

However, Liu teaches wherein the wireless standard has a long-term evolution (LTE) protocol (Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets through the access network. Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer Protocol data unit (PDU) 440. In an aspect, relay device 420 may also include encoders to generate transport blocks.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein the wireless standard has a long-term evolution (LTE) protocol as taught by Liu in the combined system of Tillinger, Vasudevan, and Chowdhuri, so that it would also include LTE wireless standards in addition to the other wireless standards for processing the data flows (Liu: Paragraph [0034]).

Regarding claim 13, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the communication device of claim 10 (see rejection for claim 10); Tillinger further teaches wherein the higher layer is: a layer or sublayer for radio link control (RLC), packet data convergence protocol (PDCP), (Paragraph [0023]: FIG.
1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0024], Fig 1: The base stations 102 configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC 160 through first backhaul links 132 (e.g., S1 interface). The base stations 102 configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network 190 through second backhaul links 184. Paragraph [0026], Fig 1: Certain UEs 104 may communicate with each other using device-to-device (D2D) communication link 158. D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard, LTE, or NR. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality…..layer 2 includes….., a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer.) Tillinger does not explicitly teach wherein the higher layer is a network layer. However, Liu teaches wherein the higher layer is a network layer (Paragraph [0035]: FIG. 2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. 
The L1 layer will be referred to herein as the physical layer 206. Layer 2 (L2 layer) 208 is above the physical layer 206 and is responsible for the link between the UE 106 and eNB 104 over the physical layer 206. Paragraph [0036] In the user plane, the L2 layer 208 includes a media access control (MAC) sublayer 210, a radio link control (RLC) sublayer 212, and a packet data convergence protocol (PDCP) 214 sublayer, which are terminated at the eNB on the network side. In an aspect, the UE 106 may have several upper layers above the L2 layer 208 including a network layer.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein the higher layer is a network layer, as taught by Liu in the combined system of Tillinger, Vasudevan, and Chowdhuri, so that it can include more layers within a communication device for performing RX and TX flow processing (Liu: Paragraph [0036]). Regarding claim 14, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the communication device of claim 10 wherein the MAC layer or the MAC sublayer further comprising (see rejection for claim 10); Tillinger does not explicitly teach a header memory for storing header or sub-header data or fetching stored header or sub-header data for at least one of the multiple RX flows; and a payload memory for storing payload data or fetching stored payload data for at least one of the multiple RX flows. 
However, Liu teaches a header memory for storing header or sub-header data or fetching stored header or sub-header data for at least one of the multiple RX flows (Liu: Paragraph [0053], Fig 4: In an aspect, portions of a relay device 420 (e.g., an access network node, for example, an eNB in LTE) and/or a receiver 430, such as the L2 decoders (MAC/RLC/PDCP decoders 422-424, 432-434) may ….deliver the resultant payload (IP datagram, TCP message, or application-layer message) to a higher-layer decoder,…. based on the information included in the headers. Paragraph [0058]: Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer. Paragraph [0063]: MAC decoder 432 may receive the L2 frame and may process the MAC header from the L2 frame. Paragraph [0078]: In an aspect, encoders 515 can include one or more layer encoders 411-417 that may retrieve a data sequence 540 including one or more data payloads and encapsulate the payloads to provide transport blocks 440, 541 to one or more transceivers 517a-c. In an aspect, one or more of encoders 515 may retrieve configuration information 550 and/or multi-link flag 555 from memory 512 and add the information into a layer header and/or data payload.) and a payload memory for storing payload data or fetching stored payload data for at least one of the multiple RX flows (Paragraph [0082]: FIGS. 6A-6E are diagrams illustrating example receiving devices receiving and decoding a multiple data packets of a data sequence received over a plurality of links in an access network. Paragraph [0083]: Processing system 605 can include memory 606, one or more processors 607, one or more modems 608, decoders 610-650, and data payload 660. In an aspect, data payload 660 may be stored in memory 606. 
Paragraph [0084]: Processing system 605 may use one or more decoders 610-650 to decapsulate and/or process layer headers from the transport blocks 541 to retrieve the resultant data payload.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a header memory for storing header or sub-header data or fetching stored header or sub-header data for at least one of the multiple RX flows; and a payload memory for storing payload data or fetching stored payload data for at least one of the multiple RX flows, as taught by Liu in the system of Tillinger, so that the flow context and header information can be stored and fetched when required for further processing without delay (Liu: Paragraphs [0053], [0058], [0082], [0007]). Regarding claim 16, Tillinger teaches a non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by at least one processor, causes steps for data packet processing comprising: (Paragraph [0007] the memory may include instructions that when executed by the at least one processor, causes the at least one processor to encode and modulating at least one of control information or data from a first code block. Paragraph [0022]: If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices.) processing, at a medium access control (MAC) layer or a MAC sublayer within a communication device (Paragraph [0042] FIG.
3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer.) a plurality of decoder codeblocks across multiple receiving data (RX) flows and a plurality of RX configuration blocks for the multiple RX flows to generate one or more decapsulated packets along with packet metadata and a flow status for each of the multiple RX flows towards a higher layer, the plurality of decoder codeblocks and the plurality of configuration blocks are output from a physical layer (PHY); (Paragraph [0077]: FIG. 9 is a conceptual data flow diagram 900 illustrating the data flow between different means/components in an example apparatus 902. For example, the apparatus 902 may be a UE (e.g., the UE 402). The apparatus 902 includes a reception component 904 that receives one or more code blocks (e.g., a first code block through an Nth code block) from a base station 950 through one or more channels (e.g., a first channel through an Nth channel). As described in connection with 802, the reception component 904 may receive through a first channel a first code block from a base station. The apparatus 902 further includes a demodulator/decoder component 906 that demodulates and decodes the one or more code blocks received by the reception component 904 through the one or more channels. That is, the reception component 904 may provide the one or more code blocks to the demodulator/decoder component 906 to demodulate and decode the one or more code blocks.
For example, as described in connection with 804, the demodulator/decoder component 906 may demodulate and decode the first code block to obtain at least one control information or data associated with the first code block. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor 375 provides RRC layer functionality associated with broadcasting of system information….; PDCP layer functionality associated with header compression/decompression….; RLC layer functionality associated with the transfer of upper layer packet data units….; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Paragraph [0043]: The transmit (TX) processor 316 and the receive (RX) processor 370 implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels….Paragraph [0044], Fig 3: At the UE 350….. The TX processor 368 and the RX processor 356 implement layer 1 functionality associated with various signal processing functions….. The data and control signals are then provided to the controller/processor 359, which implements layer 3 and layer 2 functionality. 
Paragraphs [0086], [0087], Fig 10: The processing system 1014 may be a component of the UE 350 and may include the memory 360 and/or at least one of the TX processor 368, the RX processor 356, and the controller/processor 359. Alternatively, the processing system 1014 may be the entire UE (e.g., see 350 of FIG. 3). In one configuration, the apparatus 902/902′ for wireless communication includes means for receiving, demodulating and decoding, encoding and modulating, estimating, and transmitting.) Tillinger does not explicitly teach processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY. However, Liu teaches processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY (Paragraph [0010]: In an aspect, an apparatus includes a processing system configured to support a protocol stack comprising a first layer and a second layer. The processing system may establish a flow with a node. The establishment may include receiving configuration information, wherein the flow is associated with a plurality of data packets. Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets. Paragraphs [0035]: FIG. 
2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. The L1 layer will be referred to herein as the physical layer 206. Paragraph [0036]: the L2 layer 208 includes a media access control (MAC) sublayer 210…. the UE 106 may also have higher layers....Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Transmitter 410 may have one or more application (APP) layer encoders 417, a TCP encoder 416, an IP encoder 415, L2 layer encoders including a PDCP encoder 414, RLC encoder 413, and MAC encoder 412, and a L1 physical (PHY) layer encoder 411. In an aspect, for example, transmitter 410 can use encoders 411-417 in succession from APP layer encoder 417 to PHY encoder 411 to encapsulate a RLC layer payload 442 with a series of headers. Paragraph [0056]: L2 encoders, including….MAC encoder 412, may receive an IP datagram from IP encoder 415 and may successively add respective PDCP, RLC, and MAC headers and an L2 footer to the IP datagram to generate an Layer 2 (L2) frame…. In an aspect, each of the L2 headers may include information regarding the data packet and/or data flow. Paragraph [0057]: Physical-layer (PHY) encoder 411 may receive the L2 frame from one of the L2 encoder. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. In an aspect, relay device 420 may also include encoders to generate transport blocks. 
Paragraph [0073], Fig 5: In an aspect, transmitter 510 may use processing system 511 to implement one or more aspects of encoder 515 and/or scheduler 516 to encode data sequence 540 using one or more transport blocks 541 including headers for one or more layers in a protocol architecture such as an LTE protocol architecture. In an aspect, encoder 515 may include additional information, such as configuration information 550 and/or multi-link flag 555 in portions of the transport blocks 541 to indicate that the transport blocks 541 were transmitted using multiple links from the transmitter. In an aspect, encoder 515 and/or scheduler 516 may retransmit one or more transport blocks 541 in response to a retransmit request (e.g., a HARQ message, an ARQ message, and/or a duplicate ACK message). Paragraph [0074]: In an aspect, processing system 511 of transmitter 510 may include memory 512 for storing data used herein (e.g., data sequence 540, configuration information 550, and/or multi-link flag 555) and/or local versions of applications and/or encoder 515 scheduler 516 and/or one or more of their subcomponents being executed by processor 514. Paragraph [0078], Fig 5: In an aspect, encoders 515 can include one or more layer encoders 411-417 that may retrieve a data sequence 540 including one or more data payloads and encapsulate the payloads to provide transport blocks. The term ‘codeblock’ has been interpreted in the context of transport block, as per 5G NR and LTE: if a transport block size becomes too large, it is segmented into codeblocks. The disclosure does not specify what the term ‘codeblock descriptors’ mean. The broadest reasonable interpretation of the term has been taken to describe ‘a set of information parameters that describe the characteristics associated with the ‘codeblock’, such as information regarding the packet/codeblock.) 
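As an editorial aside for readers outside the art: the examiner's broadest-reasonable-interpretation note above (a transport block that grows too large is segmented into codeblocks, each with its own CRC) roughly tracks the 5G NR segmentation procedure. A minimal Python sketch follows; the 8448-bit limit corresponds to LDPC base graph 1, but base-graph selection and filler-bit handling are deliberately omitted, so this is a simplified illustration rather than the exact TS 38.212 rule.

```python
import math

def segment_transport_block(tb_bits: int, max_cb_bits: int = 8448,
                            crc_bits: int = 24) -> list[int]:
    """Simplified sketch of code-block segmentation (cf. 3GPP TS 38.212).

    A 24-bit CRC is attached to the transport block; if the result exceeds
    the maximum code-block size, it is split into C code blocks of roughly
    equal size, each carrying its own CRC.
    """
    b = tb_bits + crc_bits                       # TB plus its CRC
    if b <= max_cb_bits:
        return [b]                               # no segmentation, no per-CB CRC
    c = math.ceil(b / (max_cb_bits - crc_bits))  # number of code blocks
    return [math.ceil(b / c) + crc_bits] * c     # equal shares plus per-CB CRC
```

A small transport block passes through as a single code block, while a large one is split so that every resulting code block fits under the limit.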
a flow status for each of the multiple RX flows towards a higher layer (Paragraph [0060]: In an aspect, one or more of L2 layer decoders 432-434 may use further information to determine whether the decoders 432-434 should reorder the data packets determined not to be in sequence. In one aspect, for example, L2 layer decoders 431-434 may inspect the payload for configuration instructions or a multi-link flag that specifies whether that respective layer is to re-order non-sequential packets. In another aspect, L2 layer decoders 432-434 may determine the number of links used to receive the packets and whether the packets received over multiple links are related to a common data flow; Paragraph [0086]: MAC decoder 610 is similar to MAC decoder 422, 432 and may decapsulate and process the MAC header included in the L2 frame produced by the PHY decoder. Paragraph [0087]: In an aspect, MAC decoder 610 may also include a HARQ Check module 611, a scheduler 612, and a reordering timer 613. In an aspect, for example, MAC decoder may use scheduler 612 to determine whether one or more of transport blocks 671 were received out-of-order and may attempt to reorder the blocks. Paragraph [0088]: For example, MAC Decoder may use scheduler 612 to examine the contents of the MAC headers for each of the received transport blocks to determine whether the sequence numbers for each of the transport blocks 671 is in order. In an aspect, scheduler 612 may determine whether to reorder the transport blocks 671 by examining the contents of the payload for configuration information 550 or a multi-link flag 555.) 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY, a flow status for each of the multiple RX flows towards a higher layer, as taught by Liu in the system of Tillinger, so that it would include the data flow processing for transmission data flows as well, for a communication device that requires both TX and RX flows. Liu describes that the processing system encodes the data sequence, including headers and other additional information such as configuration information, and multi-link flags (for coordinating data flows), acknowledgements and retransmissions, while transmitting the transport blocks, such that L2 headers may include information regarding the data packet and/or data flow (Liu: Paragraphs [0056], [0073], [0086], [0087], Fig 4, Fig 5). The combination of Tillinger and Liu does not explicitly teach pre-fetching, at the MAC layer or the MAC sublayer, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer.
However, Vasudevan teaches pre-fetching, at the MAC layer or the MAC sublayer, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer (see rejection for claim 10); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide pre-fetching, at the MAC layer or the MAC sublayer, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer, as taught by Vasudevan in the combined system of Tillinger and Liu, so that the pre-fetching can significantly speed up packet processing leading to improved performance (Vasudevan: Paragraphs [0020], [0026], [0027], [0039], [0053]). The combination of Tillinger, Liu, and Vasudevan does not explicitly teach storing, to a flow context memory within the MAC layer or the MAC sublayer, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows; and storing, to the flow context memory, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple TX flows. However, Chowdhuri teaches storing, to a flow context memory within the MAC layer or the MAC sublayer, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows; and storing, to the flow context memory, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple TX flows (Col 6, lines 37-48: The MAC processor 110 generally acts as an interface between a modem that implements physical layer functions and higher communication protocol layers. The MAC processor 110 receives MSDUs from upper layer functions, organizes them into MPDUs, and then provides the MPDUs to the modem for transmission from the SS. 
Additionally, the MAC processor 110 receives MPDUs from the modem, the MPDUs having been received by the SS, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions. Col 10, line 67, Col 11, line 1: the context switching processor 200 may be utilized in a MAC processor. Col 11, lines 39-49: The context switching processor 200 also includes a context memory 212 and context switch logic 216. The processing engine 204 is coupled to the context memory 212, and the processing engine 204 generally stores state information in the context memory 212 when a context switch occurs. A context switch is when the processing engine 204 stops processing one burst (mid-burst) and then starts processing another burst. Thus, when a context switch occurs, the processing engine 204 also may retrieve state information from the context memory 212 corresponding to the next burst that is to be processed. Col 12, lines 8-12: Although context switching was described in the context of processing burst data received from the modem, processing using context switching optionally may also be implemented for burst data that is to be provided to the modem for transmission. Col 13, lines 30-33, 39-46: At block 246, the current context is saved. For example, the state information corresponding to the current context is saved to the context memory 212. At block 250, the next context (determined at block 242) is restored. For example, the state information corresponding to the next context is retrieved from the context memory 212 and stored in corresponding memories (e.g., registers) in the processing engine 204. In one implementation, the context Switch logic 216 may generate control signals to retrieve the state information corresponding to the next context from the context memory 212.) 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide storing, to a flow context memory within the MAC layer or the MAC sublayer, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows; and storing, to the flow context memory, hardware context or fetching, from the flow context memory, stored hardware context for at least one of the multiple TX flows, as taught by Chowdhuri in the combined system of Tillinger, Liu, and Vasudevan, so that when processing stops and context switch occurs, state information can be stored in the context memory, and retrieved when the processing restarts (Chowdhuri: Col 11: lines 3-17, 39-49). Regarding claim 17, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the non-transitory computer-readable medium or media of claim 16 (see rejection for claim 16); Tillinger further teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a wireless standard, a user (see rejection for claim 11); Tillinger does not explicitly teach wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard. 
However, Liu teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard (see rejection for claim 11); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard as taught by Liu in the combined system of Tillinger, Vasudevan, and Chowdhuri, so that it would provide processing in both directions for transmitting and receiving data flows to a user across multi-access technologies (Liu: Paragraphs [0020], [0056], [0073], [0086], [0087], Fig 4, Fig 5). Regarding claim 18, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the non-transitory computer-readable medium or media of claim 16 further comprising one or more sequences of instructions which, when executed by at least one processor, causes steps to be performed comprising: (see rejection for claim 16); Tillinger does not explicitly teach storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows; and storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows.
However, Liu teaches storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows; and storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows (see rejection for claim 14); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows; storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows, as taught by Liu in the system of Tillinger, so that the flow context and header information can be stored and fetched when required for further processing without delay (Liu: Paragraphs [0053], [0058], [0082], [0007]). Claims 15, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tillinger et al. (U.S. Pub. No. 2021/0289482) in view of Liu et al. (U.S. Pub. No. 2017/0171060) and Vasudevan (U.S. Pub. No. 2019/0238460), Chowdhuri et al. (US8175015B1), and further in view of Banuli (U.S. Pub. No. 2022/0200669).
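Before turning to claims 15 and 19-20, the separate header and payload memories mapped for claims 14 and 18 above can be pictured with a toy model: the MAC stage splits each L2 frame, banking headers and payloads in distinct stores so later stages can fetch each independently. All names and the 2-byte header length here are illustrative, drawn neither from the application nor from the cited art.

```python
from dataclasses import dataclass, field

@dataclass
class RxFlowMemories:
    """Hypothetical per-RX-flow stores; names are illustrative only."""
    header_mem: list[bytes] = field(default_factory=list)
    payload_mem: list[bytes] = field(default_factory=list)

def decapsulate(flow: RxFlowMemories, l2_frame: bytes, hdr_len: int = 2) -> bytes:
    """Split a toy L2 frame into MAC header and payload, storing each in
    its own memory so they can be fetched separately later."""
    header, payload = l2_frame[:hdr_len], l2_frame[hdr_len:]
    flow.header_mem.append(header)
    flow.payload_mem.append(payload)
    return payload
```

After a call, the header memory holds the stripped MAC header and the payload memory holds the decapsulated payload, mirroring the store-then-fetch arrangement the rejection relies on.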
Regarding claim 15, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the communication device of claim 10, wherein the MAC layer or the MAC sublayer is further configured to: (see rejection for claim 10); The combination of Tillinger, Liu, Vasudevan, and Chowdhuri does not explicitly teach: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved at least part of the first RX or TX flow and the associated information. However, Banuli teaches: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved at least part of the first RX or TX flow and the associated information (Paragraph [0232]: A set of registers 1345 store context data for threads executed by graphics processing engines 1331-1332, and a context management circuit 1348 manages thread contexts. 
For example, context management circuit 1348 may perform save and restore operations to save and restore contexts of various threads during contexts switches (e.g., where a first thread is saved and a second thread is stored so that a second thread can be execute by a graphics processing engine). For example, on a context switch, context management circuit 1348 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In one embodiment, an interrupt management circuit 1347 receives and processes interrupts received from system devices. Figs 13B-E illustrate the interfacing of System Memory 1314 with Interrupt MGMT block 1347, Context MGMT 1348, Memory Management Unit (MMU) 1339, Registers 1345, Work Descriptor (WD) 1384 and Work Descriptor (WD) Fetch 1391, with Save/Restore feature. Paragraph [0242], Fig 13D: Application effective address space 1382 within system memory 1314 stores process elements 1383. A process element 1383 contains process state for corresponding application 1380. A work descriptor (WD) 1384 contained in process element 1383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1384 is a pointer to a job request queue in an application's address space 1382. Paragraph [0245]: In operation, a WD fetch unit 1391 fetches next WD 1384 which includes an indication of work to be done. Data from WD 1384 may be stored in registers 1345 and used by MMU 1339, interrupt management circuit 1347 and/or context management circuit 1348 as illustrated. Paragraph [0252]: In at least one embodiment, application 1380 is required to make an operating system 1395 system call with…. a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). 
Paragraph [0298]: In at least one embodiment, processing cluster array 1812 can receive processing tasks to be executed via scheduler 1810, which receives commands defining processing tasks from front end 1808. In at least one embodiment, processing tasks can include indices of data to be processed. In at least one embodiment, scheduler 1810 may be configured to fetch indices corresponding to tasks or may receive indices. Paragraph [0488]: In at least one embodiment, MAC provides flow control and multiplexing for a transmission medium.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer or the MAC sublayer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved first RX or TX flow and the associated information as taught by Banuli in the combined system of Tillinger, Liu, Vasudevan, and Chowdhuri, so that it would provide information needed to save/retrieve flow information from memory, in order to resume the RX or TX flow processing (Banuli: Paragraph [0232]). 
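The save/restore pattern the rejection draws from Chowdhuri's context memory 212 and Banuli's context management circuit 1348 reduces, in the abstract, to the following toy model: when processing of one flow stops, its state is written to a context memory, and the next flow's state is read back. The class and function names are hypothetical, not the claimed hardware.

```python
class FlowContextMemory:
    """Toy analogue of a flow context memory; names are illustrative."""

    def __init__(self) -> None:
        self._contexts: dict[str, dict] = {}

    def save(self, flow_id: str, state: dict) -> None:
        # On a context switch, the engine's state for the stopped flow is saved.
        self._contexts[flow_id] = dict(state)

    def restore(self, flow_id: str) -> dict:
        # When processing resumes, the saved state is fetched back out.
        return self._contexts.pop(flow_id, {})

def switch_flow(mem: FlowContextMemory, current_state: dict,
                old_flow: str, new_flow: str) -> dict:
    """Save the interrupted flow's state, then restore the next flow's state."""
    mem.save(old_flow, current_state)
    return mem.restore(new_flow)
```

A switch away from one flow and back to a previously saved one returns exactly the state that was banked, which is the resume-where-you-stopped behavior claims 15 and 19 recite.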
Regarding claim 19, the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches the non-transitory computer-readable medium or media of claim 16 further comprising one or more sequences of instructions which, when executed by at least one processor, causes steps to be performed comprising: (see rejection for claim 16); The combination of Tillinger, Liu, Vasudevan, and Chowdhuri does not explicitly teach: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved at least part of the first RX or TX flow and the associated information. 
However, Banuli teaches: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved at least part of the first RX or TX flow and the associated information (see rejection for claim 15); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform: saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer or the MAC sublayer, the first RX or TX flow to generate one or more desired outputs based at least on the saved at least part of the first RX or TX flow and the associated information as taught by Banuli in the combined system of Tillinger, Liu, Vasudevan, and Chowdhuri, so that it would provide information needed to save/retrieve flow information from memory, in order to resume the RX or TX flow processing (Banuli: Paragraph [0232]). 
Regarding claim 20, the combination of Tillinger, Liu, Vasudevan, Chowdhuri, and Banuli teaches the non-transitory computer-readable medium or media of claim 19 (see rejection for claim 19); The combination of Tillinger, Liu, Vasudevan, and Chowdhuri does not explicitly teach wherein the associated information is information needed to resume processing the first RX or TX flow from where the processing is stopped. However, Banuli teaches, wherein the associated information is information needed to resume processing the first RX or TX flow from where the processing is stopped (Paragraph [0232]: A set of registers 1345 store context data for threads executed by graphics processing engines 1331-1332, and a context management circuit 1348 manages thread contexts. For example, context management circuit 1348 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1348 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In one embodiment, an interrupt management circuit 1347 receives and processes interrupts received from system devices. Figs 13B-E illustrate the interfacing of System Memory 1314 with Interrupt MGMT block 1347, Context MGMT 1348, Memory Management Unit (MMU) 1339, Registers 1345, Work Descriptor (WD) 1384 and Work Descriptor (WD) Fetch 1391, with Save/Restore feature. Paragraph [0242], Fig 13D: Application effective address space 1382 within system memory 1314 stores process elements 1383. A process element 1383 contains process state for corresponding application 1380. 
A work descriptor (WD) 1384 contained in process element 1383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1384 is a pointer to a job request queue in an application's address space 1382. Paragraph [0245]: In operation, a WD fetch unit 1391 fetches next WD 1384 which includes an indication of work to be done. Data from WD 1384 may be stored in registers 1345 and used by MMU 1339, interrupt management circuit 1347 and/or context management circuit 1348 as illustrated. Paragraph [0252]: In at least one embodiment, application 1380 is required to make an operating system 1395 system call with…. a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). Paragraph [0298]: In at least one embodiment, processing cluster array 1812 can receive processing tasks to be executed via scheduler 1810, which receives commands defining processing tasks from front end 1808. In at least one embodiment, processing tasks can include indices of data to be processed. In at least one embodiment, scheduler 1810 may be configured to fetch indices corresponding to tasks or may receive indices. Paragraph [0488]: In at least one embodiment, MAC provides flow control and multiplexing for a transmission medium.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide that the associated information is information needed to resume processing the first RX or TX flow from where the processing is stopped as taught by Banuli in the combined system of Tillinger, Liu, Vasudevan, and Chowdhuri, so that it would enable using the associated information to resume the RX and TX flow processing from where it was stopped, avoiding any processing delays (Banuli: Paragraph [0232]). Claims 1-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Tillinger et al. (U.S. Pub. No. 
2021/0289482) in view of Liu et al. (U.S. Pub. No. 2017/0171060), Banuli (U.S. Pub. No. 2022/0200669), Vasudevan (U.S. Pub. No. 2019/0238460), and Chowdhuri et al. (US8175015B1). Regarding claim 1, Tillinger teaches a method for data flow processing (Paragraph [0077], Fig 9: FIG. 9 is a conceptual data flow diagram 900 illustrating the data flow between different means/components in an example apparatus 902. For example, the apparatus 902 may be a UE (e.g., the UE 402) comprising: processing, at a medium access control (MAC) layer or a MAC sublayer within a communication device, (Paragraph [0042] FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer.) a plurality of decoder codeblocks across multiple receiving data (RX) flows and a plurality of RX configuration blocks for the multiple RX flows to generate one or more decapsulated packets along with packet metadata and a flow status for each of the multiple RX flows towards a higher layer, the plurality of decoder codeblocks and the plurality of RX configuration blocks are output from a physical layer (PHY); (Paragraph [0077]: FIG. 9 is a conceptual data flow diagram 900 illustrating the data flow between different means/components in an example apparatus 902. For example, the apparatus 902 may be a UE (e.g., the UE 402). The apparatus 902 includes a reception component 904 that receives one or more code blocks (e.g., a first code block through an Nth code block) from a base station 950 through one or more channels (e.g., a first channel through an Nth channel). 
As described in connection with 802, the reception component 904 may receive through a first channel a first code block from a base station. The apparatus 902 further includes a demodulator/decoder component 906 that demodulates and decodes the one or more code blocks received by the reception component 904 through the one or more channels. That is, the reception component 904 may provide the one or more code blocks to the demodulator/decoder component 906 to demodulate and decode the one or more code blocks. For example, as described in connection with 804, the demodulator/decoder component 906 may demodulate and decode the first code block to obtain at least one control information or data associated with the first code block. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor 375 provides RRC layer functionality associated with broadcasting of system information….; PDCP layer functionality associated with header compression/decompression….; RLC layer functionality associated with the transfer of upper layer packet data units….; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Paragraph [0043]: The transmit (TX) processor 316 and the receive (RX) processor 370 implement layer 1 functionality associated with various signal processing functions. 
Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels….Paragraph [0044], Fig 3: At the UE 350….. The TX processor 368 and the RX processor 356 implement layer 1 functionality associated with various signal processing functions….. The data and control signals are then provided to the controller/processor 359, which implements layer 3 and layer 2 functionality. Paragraphs [0086], [0087], Fig 10: The processing system 1014 may be a component of the UE 350 and may include the memory 360 and/or at least one of the TX processor 368, the RX processor 356, and the controller/processor 359. Alternatively, the processing system 1014 may be the entire UE (e.g., see 350 of FIG. 3). In one configuration, the apparatus 902/902′ for wireless communication includes means for receiving, demodulating and decoding, encoding and modulating, estimating, and transmitting. Tillinger does not explicitly teach processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY. However, Liu teaches processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY (Paragraph [0010]: In an aspect, an apparatus includes a processing system configured to support a protocol stack comprising a first layer and a second layer. The processing system may establish a flow with a node. 
The establishment may include receiving configuration information, wherein the flow is associated with a plurality of data packets. Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets. Paragraphs [0035]: FIG. 2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. The L1 layer will be referred to herein as the physical layer 206. Paragraph [0036]: the L2 layer 208 includes a media access control (MAC) sublayer 210…. the UE 106 may also have higher layers....Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Transmitter 410 may have one or more application (APP) layer encoders 417, a TCP encoder 416, an IP encoder 415, L2 layer encoders including a PDCP encoder 414, RLC encoder 413, and MAC encoder 412, and a L1 physical (PHY) layer encoder 411. In an aspect, for example, transmitter 410 can use encoders 411-417 in succession from APP layer encoder 417 to PHY encoder 411 to encapsulate a RLC layer payload 442 with a series of headers. Paragraph [0056]: L2 encoders, including….MAC encoder 412, may receive an IP datagram from IP encoder 415 and may successively add respective PDCP, RLC, and MAC headers and an L2 footer to the IP datagram to generate an Layer 2 (L2) frame…. In an aspect, each of the L2 headers may include information regarding the data packet and/or data flow. 
Paragraph [0057]: Physical-layer (PHY) encoder 411 may receive the L2 frame from one of the L2 encoders. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. In an aspect, relay device 420 may also include encoders to generate transport blocks. Paragraph [0073], Fig 5: In an aspect, transmitter 510 may use processing system 511 to implement one or more aspects of encoder 515 and/or scheduler 516 to encode data sequence 540 using one or more transport blocks 541 including headers for one or more layers in a protocol architecture such as an LTE protocol architecture. In an aspect, encoder 515 may include additional information, such as configuration information 550 and/or multi-link flag 555 in portions of the transport blocks 541 to indicate that the transport blocks 541 were transmitted using multiple links from the transmitter. In an aspect, encoder 515 and/or scheduler 516 may retransmit one or more transport blocks 541 in response to a retransmit request (e.g., a HARQ message, an ARQ message, and/or a duplicate ACK message). Paragraph [0074]: In an aspect, processing system 511 of transmitter 510 may include memory 512 for storing data used herein (e.g., data sequence 540, configuration information 550, and/or multi-link flag 555) and/or local versions of applications and/or encoder 515, scheduler 516, and/or one or more of their subcomponents being executed by processor 514. Paragraph [0078], Fig 5: In an aspect, encoders 515 can include one or more layer encoders 411-417 that may retrieve a data sequence 540 including one or more data payloads and encapsulate the payloads to provide transport blocks. The term ‘codeblock’ has been interpreted in the context of transport blocks, as per 5G NR and LTE: if a transport block size becomes too large, it is segmented into codeblocks. 
The disclosure does not specify what the term ‘codeblock descriptors’ means. The broadest reasonable interpretation of the term has been taken to describe ‘a set of information parameters that describe the characteristics associated with the codeblock, such as information regarding the packet/codeblock.’) a flow status for each of the multiple RX flows towards a higher layer (Paragraph [0060]: In an aspect, one or more of L2 layer decoders 432-434 may use further information to determine whether the decoders 432-434 should reorder the data packets determined not to be in sequence. In one aspect, for example, L2 layer decoders 431-434 may inspect the payload for configuration instructions or a multi-link flag that specifies whether that respective layer is to re-order non-sequential packets. In another aspect, L2 layer decoders 432-434 may determine the number of links used to receive the packets and whether the packets received over multiple links are related to a common data flow; Paragraph [0086]: MAC decoder 610 is similar to MAC decoder 422, 432 and may decapsulate and process the MAC header included in the L2 frame produced by the PHY decoder. Paragraph [0087]: In an aspect, MAC decoder 610 may also include a HARQ Check module 611, a scheduler 612, and a reordering timer 613. In an aspect, for example, MAC decoder may use scheduler 612 to determine whether one or more of transport blocks 671 were received out-of-order and may attempt to reorder the blocks. Paragraph [0088]: For example, MAC Decoder may use scheduler 612 to examine the contents of the MAC headers for each of the received transport blocks to determine whether the sequence numbers for each of the transport blocks 671 is in order. In an aspect, scheduler 612 may determine whether to reorder the transport blocks 671 by examining the contents of the payload for configuration information 550 or a multi-link flag 555.) 
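The interpretation of ‘codeblock’ above follows 5G NR/LTE transport-block segmentation. As a rough illustration, 3GPP TS 38.212 segments a transport block into codeblocks when it exceeds the maximum codeblock size (8448 bits for LDPC base graph 1), attaching a 24-bit CRC to each segment. The following is a simplified sketch only; it ignores the TB-level CRC, filler bits, and lifting-size selection, and is not asserted as part of the record.

```python
import math

def segment_transport_block(tb_bits, max_cb=8448, crc_len=24):
    """Simplified 5G NR codeblock segmentation (cf. 3GPP TS 38.212, cl. 5.2.2).

    If the transport block fits in one codeblock, no per-codeblock CRC is
    added; otherwise it is split into C segments, each carrying a 24-bit CRC.
    Filler-bit and lifting-size details are intentionally omitted.
    Returns (number_of_codeblocks, total_bits_after_segmentation).
    """
    if tb_bits <= max_cb:
        return 1, tb_bits                        # single codeblock, no CB CRC
    c = math.ceil(tb_bits / (max_cb - crc_len))  # number of codeblocks
    total = tb_bits + c * crc_len                # payload plus per-CB CRCs
    return c, total

# Example: a 20000-bit transport block is segmented into 3 codeblocks
# under base graph 1, while an 8448-bit block needs no segmentation.
```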
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide, processing, at the MAC layer or the MAC sublayer, a plurality of packets across multiple transmission data (TX) flows, a plurality of codeblock descriptors, and a plurality of TX configuration blocks, output from the higher layer, to generate one or more encoder codeblocks for each of the multiple TX flows, along with a flow status for each TX flow towards the PHY; and a flow status for each of the multiple RX flows towards a higher layer, as taught by Liu in the system of Tillinger, so that data flow processing would also cover transmission data flows, for a communication device that requires both TX and RX flows. Liu describes a processing system that encodes the data sequence with headers and additional information, such as configuration information and multi-link flags (for coordinating data flows), and handles acknowledgements and retransmissions while transmitting the transport blocks, such that L2 headers may include information regarding the data packet and/or data flow (Liu: Paragraphs [0056], [0073], [0086], [0087], Fig 4, Fig 5). The combination of Tillinger and Liu does not explicitly teach saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer or the MAC sublayer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer, the first RX or TX flow to generate one or more desired outputs based at least on the saved first RX or TX flow and the associated information. 
However, Banuli teaches saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer or the MAC sublayer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer, the first RX or TX flow to generate one or more desired outputs based at least on the saved first RX or TX flow and the associated information (Paragraph [0232]: A set of registers 1345 store context data for threads executed by graphics processing engines 1331-1332, and a context management circuit 1348 manages thread contexts. For example, context management circuit 1348 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1348 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In one embodiment, an interrupt management circuit 1347 receives and processes interrupts received from system devices. Figs 13B-E illustrate the interfacing of System Memory 1314 with Interrupt MGMT block 1347, Context MGMT 1348, Memory Management Unit (MMU) 1339, Registers 1345, Work Descriptor (WD) 1384 and Work Descriptor (WD) Fetch 1391, with Save/Restore feature. Paragraph [0242], Fig 13D: Application effective address space 1382 within system memory 1314 stores process elements 1383. A process element 1383 contains process state for corresponding application 1380. 
A work descriptor (WD) 1384 contained in process element 1383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1384 is a pointer to a job request queue in an application's address space 1382. Paragraph [0245]: In operation, a WD fetch unit 1391 fetches next WD 1384 which includes an indication of work to be done. Data from WD 1384 may be stored in registers 1345 and used by MMU 1339, interrupt management circuit 1347 and/or context management circuit 1348 as illustrated. Paragraph [0252]: In at least one embodiment, application 1380 is required to make an operating system 1395 system call with…. a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). Paragraph [0298]: In at least one embodiment, processing cluster array 1812 can receive processing tasks to be executed via scheduler 1810, which receives commands defining processing tasks from front end 1808. In at least one embodiment, processing tasks can include indices of data to be processed. In at least one embodiment, scheduler 1810 may be configured to fetch indices corresponding to tasks or may receive indices. Paragraph [0488]: In at least one embodiment, MAC provides flow control and multiplexing for a transmission medium.) 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide saving at least part of a first RX or TX flow and associated information of the first RX or TX flow in one or more memories within the MAC layer or the MAC sublayer when the MAC layer or the MAC sublayer stops processing the first RX or TX flow; retrieving, in the MAC layer or the MAC sublayer, the saved at least part of the first RX or TX flow and the associated information from the one or more memories; resuming processing, in the MAC layer, the first RX or TX flow to generate one or more desired outputs based at least on the saved first RX or TX flow and the associated information, as taught by Banuli in the combined system of Tillinger and Liu so that it would enable storage and retrieval of flow information for further RX and TX flow processing from where the processing was stopped (Banuli: Paragraph [0232]). The combination of Tillinger, Liu, and Banuli does not explicitly teach pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer. However, Vasudevan teaches pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer (Paragraph [0020]: Various embodiments provide for deterministic contextual prefetching and accelerated I/O processing from unifying the hardware and software I/O processing pipeline through a shared context. Also, the shared context can eliminate the need for context lookups during software packet processing because the NIC has performed the lookup by matching the flow entry and returning the shared context pointer, potentially significantly speeding up packet processing. 
Paragraph [0026]: Thus, when a packet arrives and hits a matching entry in the Flow Director table of a NIC, the NIC retrieves the associated programmed context and returns the associated programmed context to the host along with the packet and other completion information. The host checks for a valid context and issues prefetches for the addresses contained in the context, if it determines that one exists. For example, the context could contain addresses to network layer contexts, transport layer contexts and socket layer contexts, which could all be prefetched, well ahead of the processing. These prefetches could prevent protocol processing pipeline from stalling, because prior to executing, associated data is prefetched and made available at the highest levels of the caching hierarchy. This can improve performance significantly. Paragraph [0027]: FIG. 1 depicts an example system including a network interface and a host system. Network interface 100 provides for identifying packets (transmit or receive) that have associated context information stored in memory of network interface 100 or host 150. The context information can be retrieved for one or more packets for packet or application processing of a received packet by network interface 100 or a packet to be transmitted by network interface 100. Paragraph [0037]: In some embodiments, driver 168, OS 172, or network interface 100 can issue speculative fetches for context information ahead of processing. Driver 168 can inspect a receive or transmit completion descriptor to retrieve a context address. The context address can refer to one or more context information pointers in context pointers 176. Context pointers 176 can refer to context information in context region 178 and are copied and stored into a cache associated with a core that is to process the context information. For example, context information can include one or more of: MAC context information. 
Paragraph [0039]: A received packet in packet buffer 162 can be retrieved and processed. Driver 168 can inform OS 172 of availability of a received packet. OS 172 can apply MAC layer processing on the packet using MAC context information including using driver data structures, driver statistic structures, and so forth. The MAC context information can be prefetched into cache of a core that performs MAC layer processing. Paragraph [0045]: The first context information address can refer to a MAC context address and the first context information address can refer to a MAC context information. Paragraph [0050]: For example, packet characteristics can be characteristics of the communication channel such as one or more of: source MAC address, destination MAC address, IPv4 source address, IPv4 destination address, portion of a TCP header, Virtual Extensible LAN protocol (VXLAN) tag, receive port, or transmit port. For example, a host operating system or network interface driver can identify the context information to be prefetched and store the information into an array on the network interface device. Paragraph [0053]: Prefetching can occur at the device driver so that by the time upper layer protocol processing starts, the associated data is ready at the cache.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide pre-fetching, at least part of one or more RX or TX flows subsequent to a RX or TX flow that is currently processed in the MAC layer or the MAC sublayer, as taught by Vasudevan in the combined system of Tillinger, Liu, and Banuli, so that the pre-fetching can significantly speed up packet processing leading to improved performance (Vasudevan: Paragraphs [0020], [0026], [0027], [0039], [0053]). 
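The Vasudevan passages quoted above describe a NIC flow-table lookup that returns a shared context pointer along with the packet, letting the host prefetch the addressed contexts (MAC, transport, socket) before protocol processing begins. A hypothetical sketch of that pattern follows; all names are invented for illustration and do not come from the reference.

```python
class FlowDirector:
    """Hypothetical sketch of the prefetch pattern in the quoted Vasudevan
    passages: a flow-table hit returns a context pointer with the packet,
    and the host 'prefetches' the addressed contexts into a cache ahead of
    MAC/transport/socket processing. Names are invented for illustration."""

    def __init__(self):
        self.flow_table = {}   # flow id -> programmed context pointer
        self.context_mem = {}  # context pointer -> list of context addresses

    def program(self, flow_id, ctx_ptr, ctx_addrs):
        # host programs the flow entry and its associated context addresses
        self.flow_table[flow_id] = ctx_ptr
        self.context_mem[ctx_ptr] = list(ctx_addrs)

    def receive(self, flow_id, packet):
        # on a matching entry, return the packet plus its context pointer;
        # a miss returns the packet with no context (pointer is None)
        return packet, self.flow_table.get(flow_id)

def prefetch(nic, ctx_ptr, cache):
    """Issue 'prefetches' for every address in a valid context."""
    if ctx_ptr is not None:
        for addr in nic.context_mem[ctx_ptr]:
            cache[addr] = True  # stands in for pulling data into the cache
    return cache

nic = FlowDirector()
nic.program(flow_id=42, ctx_ptr=0x100,
            ctx_addrs=["mac_ctx", "transport_ctx", "socket_ctx"])
pkt, ptr = nic.receive(42, b"payload")
cache = prefetch(nic, ptr, {})
```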
The combination of Tillinger, Liu, Banuli, and Vasudevan does not explicitly teach wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows. However, Chowdhuri teaches wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows (Col 6, lines 37-48: The MAC processor 110 generally acts as an interface between a modem that implements physical layer functions and higher communication protocol layers. The MAC processor 110 receives MSDUs from upper layer functions, organizes them into MPDUs, and then provides the MPDUs to the modem for transmission from the SS. Additionally, the MAC processor 110 receives MPDUs from the modem, the MPDUs having been received by the SS, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions. Col 10, line 67, Col 11, line 1: the context switching processor 200 may be utilized in a MAC processor. Col 11, lines 39-49: The context switching processor 200 also includes a context memory 212 and context switch logic 216. The processing engine 204 is coupled to the context memory 212, and the processing engine 204 generally stores state information in the context memory 212 when a context switch occurs. A context switch is when the processing engine 204 stops processing one burst (mid-burst) and then starts processing another burst. 
Thus, when a context switch occurs, the processing engine 204 also may retrieve state information from the context memory 212 corresponding to the next burst that is to be processed. Col 12, lines 8-12: Although context switching was described in the context of processing burst data received from the modem, processing using context switching optionally may also be implemented for burst data that is to be provided to the modem for transmission. Col 13, lines 30-33, 39-46: At block 246, the current context is saved. For example, the state information corresponding to the current context is saved to the context memory 212. At block 250, the next context (determined at block 242) is restored. For example, the state information corresponding to the next context is retrieved from the context memory 212 and stored in corresponding memories (e.g., registers) in the processing engine 204. In one implementation, the context switch logic 216 may generate control signals to retrieve the state information corresponding to the next context from the context memory 212.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein the MAC layer or the MAC sublayer comprises a flow context memory configured for: storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows, as taught by Chowdhuri in the combined system of Tillinger, Liu, Banuli, and Vasudevan, so that when processing stops and a context switch occurs, state information can be stored in the context memory and retrieved when processing restarts (Chowdhuri: Col 11, lines 3-17, 39-49). 
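The flow context memory for which Chowdhuri is cited (state saved on a mid-burst context switch, then retrieved when that burst resumes) maps naturally onto per-flow store/fetch operations with separate RX and TX contexts. The following is a hypothetical sketch with invented names, not code from the cited patent.

```python
class FlowContextMemory:
    """Hypothetical sketch of the context memory in the quoted Chowdhuri
    passages: when the processing engine stops one burst mid-burst, its
    state is saved here; when a burst is resumed, its state is fetched.
    Separate namespaces keep RX and TX flow contexts apart, mirroring the
    claim's storing/fetching of hardware context for RX and TX flows."""

    def __init__(self):
        self._store = {"rx": {}, "tx": {}}

    def store_context(self, direction, flow_id, state):
        # save state information for a flow when its processing stops
        self._store[direction][flow_id] = dict(state)

    def fetch_context(self, direction, flow_id):
        # retrieve saved state when processing restarts;
        # returns None for a flow that has never been switched out
        saved = self._store[direction].get(flow_id)
        return dict(saved) if saved is not None else None

# Example: RX and TX contexts for the same flow id are kept independently.
mem = FlowContextMemory()
mem.store_context("rx", flow_id=1, state={"bytes_done": 512, "harq_id": 4})
mem.store_context("tx", flow_id=1, state={"next_codeblock": 9})
rx_state = mem.fetch_context("rx", 1)
```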
Regarding claim 2, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 1 (see rejection for claim 1); Tillinger further teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a wireless standard, a user (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0034]: Referring again to FIG. 1, in certain aspects, the UE 104 may be configured to encode and modulate control information and/or data to obtain a reference code block; receive the second code block; and demodulate and decode the second code block based on the estimated second channel (198). the concepts described herein may be applicable to other similar areas, such as LTE, and other wireless technologies. Paragraph [0052]: The first code block may be received through a first channel. At 408, the UE 402 demodulates and decodes the first code block to obtain, for example, control information and/or data from the first code block. At 410, the UE 402 encodes and modulates the control information and/or the data from the first code block to obtain an encoded and modulated reference first code block. For example, the control information and/or the data may be re-encoded and re-modulated by the UE 402.) Tillinger does not explicitly teach wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard. 
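The claim-2 limitation at issue (each RX or TX flow comprising codeblocks that correspond to a user per wireless standard) amounts to keying flows on a (standard, user) pair. A minimal illustrative data model follows; every name in it is hypothetical and is not drawn from the application or the cited references:

```python
# Illustrative data model for multi-technology, multi-user lower-MAC flows:
# each RX/TX flow is keyed by (wireless standard, user, direction) and holds
# the codeblocks belonging to that user under that standard.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Flow:
    standard: str            # e.g. "5G NR", "LTE", "Wi-Fi"
    user_id: int             # the UE this flow belongs to
    direction: str           # "RX" or "TX"
    codeblocks: List[bytes] = field(default_factory=list)

class LowerMac:
    """Tracks one flow per (standard, user, direction) tuple."""
    def __init__(self):
        self.flows: Dict[Tuple[str, int, str], Flow] = {}

    def enqueue(self, standard, user_id, direction, codeblock):
        key = (standard, user_id, direction)
        flow = self.flows.setdefault(key, Flow(standard, user_id, direction))
        flow.codeblocks.append(codeblock)
        return flow

mac = LowerMac()
mac.enqueue("5G NR", 7, "RX", b"\x01\x02")
mac.enqueue("5G NR", 7, "RX", b"\x03")
mac.enqueue("LTE", 7, "TX", b"\x04")   # same user, different standard: distinct flow

print(len(mac.flows))                                   # 2 distinct flows
print(len(mac.flows[("5G NR", 7, "RX")].codeblocks))    # 2 codeblocks
```

Under this model, "a user per wireless standard" means the same UE has separate flows under 5G NR and LTE, each carrying its own codeblocks.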
However, Liu teaches wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard (Paragraph [0006]: The multiple access technologies have been adopted in various telecommunication standards to provide a common protocol that enables different wireless devices to communicate on a municipal, national, regional, and even global level. An example telecommunication standard is Long Term Evolution (LTE). LTE is a set of enhancements to the Universal Mobile Telecommunications System (UMTS) mobile standard promulgated by Third Generation Partnership Project (3GPP). LTE is designed to better support mobile broadband Internet access by improving spectral efficiency, lowering costs, improving services, making use of new spectrum, and better integrating with other open standards using OFDMA on the downlink (DL), SC-FDMA on the uplink (UL), and multiple-input multiple-output (MIMO) antenna technology. However, as the demand for mobile broadband access continues to increase, there exists a need for further improvements in LTE technology. Preferably, the improvements should be applicable to other multi-access technologies and the telecommunication standards that employ these technologies. Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets through the access network. Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. 
Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer Protocol data unit (PDU) 440. In an aspect, relay device 420 may also include encoders to generate transport blocks.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein each of the multiple RX or TX flows comprises one or more codeblocks corresponding to a user per wireless standard as taught by Liu in the combined system of Tillinger, Banuli, Vasudevan, and Chowdhuri, so that it would provide processing in both directions for transmitting and receiving data flows to a user across multi-access technologies (Liu: Paragraphs [0006], [0056], [0073], [0086], [0087], Fig 4, Fig 5).

Regarding claim 3, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 2 (see rejection for claim 2); Tillinger further teaches wherein the wireless standard has a Wi-Fi protocol or a 5G new radio (NR) protocol. (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0024], Fig 1: The base stations 102 configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network 190 through second backhaul links 184. Paragraph [0026], Fig 1: Certain UEs 104 may communicate with each other using device-to-device (D2D) communication link 158. D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard,… or NR.
Paragraph [0034]: Referring again to FIG. 1, in certain aspects, the UE 104 may be configured to encode and modulate control information and/or data to obtain a reference code block…. the concepts described herein may be applicable to other similar areas, …and other wireless technologies.) Tillinger does not explicitly teach wherein the wireless standard has a long-term evolution (LTE) protocol. However, Liu teaches wherein the wireless standard has a long-term evolution (LTE) protocol (Paragraph [0028]: FIG. 1 is a diagram illustrating an example of an access network 100 in a wireless network, such as an LTE network architecture. Paragraph [0029]: UEs 106 may be wireless devices that may send and/or receive data packets through the access network. Paragraph [0049], Fig 4: Transmitter 410 can be a wireless device, such as a UE similar to UE 106, 350 that may send a RLC layer payload 442 to receiver 430. Paragraph [0058], Fig 4: Relay device 420 (e.g., an access network node, for example, an eNB in LTE) may be a base station, such as eNB 104, 310, or may be another UE, such as UE 106, 350. Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer Protocol data unit (PDU) 440. In an aspect, relay device 420 may also include encoders to generate transport blocks.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein the wireless standard has a long-term evolution (LTE) protocol as taught by Liu in the combined system of Tillinger, Banuli, Vasudevan, and Chowdhuri, so that it would also include LTE wireless standards in addition to the other wireless standards for processing the data flows (Liu: Paragraph [0034]). 
Regarding claim 4, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 1 (see rejection for claim 1); Tillinger further teaches wherein the higher layer is: a layer or sublayer for radio link control (RLC), packet data convergence protocol (PDCP), (Paragraph [0023]: FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network 100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core (5GC)). Paragraph [0024], Fig 1: The base stations 102 configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC 160 through first backhaul links 132 (e.g., S1 interface). The base stations 102 configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network 190 through second backhaul links 184. Paragraph [0026], Fig 1: Certain UEs 104 may communicate with each other using device-to-device (D2D) communication link 158. D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard, LTE, or NR. Paragraph [0042]: FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network. The controller/processor 375 implements layer 3 and layer 2 functionality… layer 2 includes…, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer.) Tillinger does not explicitly teach wherein the higher layer is a network layer. However, Liu teaches wherein the higher layer is a network layer: (Paragraph [0035]: FIG.
2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. The L1 layer will be referred to herein as the physical layer 206. Layer 2 (L2 layer) 208 is above the physical layer 206 and is responsible for the link between the UE 106 and eNB 104 over the physical layer 206. Paragraph [0036]: In the user plane, the L2 layer 208 includes a media access control (MAC) sublayer 210, a radio link control (RLC) sublayer 212, and a packet data convergence protocol (PDCP) 214 sublayer, which are terminated at the eNB on the network side. In an aspect, the UE 106 may have several upper layers above the L2 layer 208 including a network layer.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the method wherein the higher layer is a network layer, as taught by Liu in the combined system of Tillinger, Banuli, Vasudevan, and Chowdhuri, so that it can include more layers within a communication device for performing RX and TX flow processing (Liu: Paragraph [0036]).

Regarding claim 5, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 1 further comprising (see rejection for claim 1); Tillinger does not explicitly teach storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows; storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows.
However, Liu teaches storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows (Liu: Paragraph [0053], Fig 4: In an aspect, portions of a relay device 420 (e.g., an access network node, for example, an eNB in LTE) and/or a receiver 430, such as the L2 decoders (MAC/RLC/PDCP decoders 422-424, 432-434) may … deliver the resultant payload (IP datagram, TCP message, or application-layer message) to a higher-layer decoder, … based on the information included in the headers. Paragraph [0058]: Relay device may include decoders to decapsulate the L1 and L2 headers to recover a payload in the form of an IP datagram from the received RLC layer. Paragraph [0063]: MAC decoder 432 may receive the L2 frame and may process the MAC header from the L2 frame. Paragraph [0078]: In an aspect, encoders 515 can include one or more layer encoders 411-417 that may retrieve a data sequence 540 including one or more data payloads and encapsulate the payloads to provide transport blocks 440, 541 to one or more transceivers 517a-c. In an aspect, one or more of encoders 515 may retrieve configuration information 550 and/or multi-link flag 555 from memory 512 and add the information into a layer header and/or data payload.) storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows (Paragraph [0082]: FIGS. 6A-6E are diagrams illustrating example receiving devices receiving and decoding multiple data packets of a data sequence received over a plurality of links in an access network. Paragraph [0083]: Processing system 605 can include memory 606, one or more processors 607, one or more modems 608, decoders 610-650, and data payload 660. In an aspect, data payload 660 may be stored in memory 606.
Paragraph [0084]: Processing system 605 may use one or more decoders 610-650 to decapsulate and/or process layer headers from the transport blocks 541 to retrieve the resultant data payload.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide storing, in a header memory within the MAC layer or the MAC sublayer, header or sub-header data or fetching, from the header memory, stored header or sub-header data for at least one of the multiple RX flows; storing, in a payload memory within the MAC layer or the MAC sublayer, payload data for at least one of the multiple RX flows, as taught by Liu in the system of Tillinger, so that the flow context and header information can be stored and fetched when required for further processing without delay (Liu: Paragraphs [0053], [0058], [0082], [0007]). The combination of Tillinger, Liu, Banuli, and Vasudevan does not explicitly teach fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows. However, Chowdhuri teaches fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows (Col 6, lines 37-48: The MAC processor 110 generally acts as an interface between a modem that implements physical layer functions and higher communication protocol layers. The MAC processor 110 receives MSDUs from upper layer functions, organizes them into MPDUs, and then provides the MPDUs to the modem for transmission from the SS. Additionally, the MAC processor 110 receives MPDUs from the modem, the MPDUs having been received by the SS, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions. Col 10, line 67 to Col 11, line 1: the context switching processor 200 may be utilized in a MAC processor.
Col 11, lines 39-49: The context switching processor 200 also includes a context memory 212 and context switch logic 216. The processing engine 204 is coupled to the context memory 212, and the processing engine 204 generally stores state information in the context memory 212 when a context switch occurs. A context switch is when the processing engine 204 stops processing one burst (mid-burst) and then starts processing another burst. Thus, when a context switch occurs, the processing engine 204 also may retrieve state information from the context memory 212 corresponding to the next burst that is to be processed. Col 12, lines 8-12: Although context switching was described in the context of processing burst data received from the modem, processing using context switching optionally may also be implemented for burst data that is to be provided to the modem for transmission. Col 13, lines 30-33, 39-46: At block 246, the current context is saved. For example, the state information corresponding to the current context is saved to the context memory 212. At block 250, the next context (determined at block 242) is restored. For example, the state information corresponding to the next context is retrieved from the context memory 212 and stored in corresponding memories (e.g., registers) in the processing engine 204. In one implementation, the context switch logic 216 may generate control signals to retrieve the state information corresponding to the next context from the context memory 212.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide fetching, from the flow context memory, stored hardware context for at least one of the multiple RX flows, as taught by Chowdhuri in the combined system of Tillinger, Liu, Banuli, and Vasudevan, so that when processing stops and a context switch occurs, state information can be stored in the context memory and retrieved when processing restarts (Chowdhuri: Col 11, lines 3-17, 39-49).

Regarding claim 6, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 5 (see rejection for claim 5); Tillinger does not explicitly teach wherein the header or sub-header data is from a group comprising at least a wireless local area network (WLAN) MAC header, a long-term evolution (LTE) MAC header or sub-header, and a sub-header for a 5G MAC sub-protocol data unit (sub-PDU). However, Liu teaches wherein the header or sub-header data is a long-term evolution (LTE) MAC header or sub-header (Paragraph [0035]: FIG. 2 is a diagram illustrating an example of a radio protocol architecture 200 for the user and control planes in LTE. The radio protocol architecture 200 (e.g., “protocol stack”) for UE 106 and eNB 104 is shown with multiple layers: Layer 1, Layer 2, and Layer 3. Layer 1 (L1 layer) is the lowest layer and implements various physical-layer signal-processing functions. The L1 layer will be referred to herein as the physical layer 206. Layer 2 (L2 layer) 208 is above the physical layer 206 and is responsible for the link between the UE 106 and eNB 104 over the physical layer 206. Paragraph [0036]: In the user plane, the L2 layer 208 includes a media access control (MAC) sublayer 210, a radio link control (RLC) sublayer 212, and a packet data convergence protocol (PDCP) 214 sublayer.
Paragraph [0053]: In an aspect, portions of a relay device 420 (e.g., an access network node, for example, an eNB in LTE) and/or a receiver 430, such as the L2 decoders (MAC/RLC/PDCP decoders 422-424, 432-434) may deliver the resultant payload (IP datagram, TCP message, or application-layer message) to a higher-layer decoder … based on the information included in the headers.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein the header or sub-header data is a long-term evolution (LTE) MAC header or sub-header, as taught by Liu in the system of Tillinger, so that a radio protocol architecture for the user plane and control plane in LTE can be implemented (Liu: Paragraphs [0035], [0036], [0053]). The combination of Tillinger, Liu, Vasudevan, and Chowdhuri does not explicitly teach wherein the header or sub-header data is from a group comprising at least a wireless local area network (WLAN) MAC header, and a sub-header for a 5G MAC sub-protocol data unit (sub-PDU). However, Banuli teaches wherein the header or sub-header data is from a group comprising at least a wireless local area network (WLAN) MAC header (Paragraph [0211]: In at least one embodiment, FIG. 10 may include … a wireless local area network unit (“WLAN”) 1050), a long-term evolution (LTE) MAC header or sub-header (Paragraph [0443]: In at least one embodiment, base stations may provide wireless access in accordance with one or more wireless communication protocols, e.g., long term evolution (LTE), LTE advanced (LTE-A); Paragraph [0582]: In at least one embodiment, UE 3802 and RAN 3816 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange control plane data via a protocol stack comprising PHY layer 4302, MAC layer 4304, RLC layer 4306, PDCP layer 4308, and RRC layer 4310; Paragraph [0588]: FIG. 44 is an illustration of a user plane protocol stack in accordance with at least one embodiment.
In at least one embodiment, for example, UE 3802 and RAN 3816 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange user plane data via a protocol stack comprising PHY layer 4302, MAC layer 4304, RLC layer 4306, PDCP layer 4308.); and a sub-header for a 5G MAC sub-protocol data unit (sub-PDU) (Fig 3, Figs. 32-36, Paragraph [0069]: In at least one embodiment, 5G NR signal processing environment 300 includes a 5G vRAN stack 306 with a low physical (PHY) layer 308, a high PHY layer 310, a Medium Access Control (MAC) layer 312, a Radio Link Control (RLC) layer 314, and a Packet Data Convergence Protocol (PDCP) layer. Paragraph [0488]: In at least one embodiment, MAC 3720 is a set of system software and libraries configured to provide an interface with a medium access control (MAC) layer, which may be part of a 5G network architecture. In at least one embodiment, a MAC layer controls hardware responsible for … flow control and multiplexing.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein the header or sub-header data is from a group comprising at least a wireless local area network (WLAN) MAC header, and a sub-header for a 5G MAC sub-protocol data unit (sub-PDU) as taught by Banuli, in the combined system of Tillinger, Liu, Vasudevan, and Chowdhuri, to include more wireless standards to implement data flow processing in a MAC layer within a communication device (Banuli: Paragraph [0443]).

Regarding claim 8, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches the method of claim 1 (see rejection for claim 1); the combination of Tillinger, Liu, Vasudevan, and Chowdhuri does not explicitly teach wherein the associated information is information needed to resume processing the first RX or TX flow from where the processing is stored.
However, Banuli teaches wherein the associated information is information needed to resume processing the first RX or TX flow from where the processing is stored (Paragraph [0232]: A set of registers 1345 store context data for threads executed by graphics processing engines 1331-1332, and a context management circuit 1348 manages thread contexts. For example, context management circuit 1348 may perform save and restore operations to save and restore contexts of various threads during contexts switches (e.g., where a first thread is saved and a second thread is stored so that a second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1348 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In one embodiment, an interrupt management circuit 1347 receives and processes interrupts received from system devices. Figs 13B-E illustrate the interfacing of System Memory 1314 with Interrupt MGMT block 1347, Context MGMT 1348, Memory Management Unit (MMU) 1339, Registers 1345, Work Descriptor (WD) 1384 and Work Descriptor (WD) Fetch 1391, with Save/Restore feature. Paragraph [0242], Fig 13D: Application effective address space 1382 within system memory 1314 stores process elements 1383. A process element 1383 contains process state for corresponding application 1380. A work descriptor (WD) 1384 contained in process element 1383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1384 is a pointer to a job request queue in an application's address space 1382. Paragraph [0245]: In operation, a WD fetch unit 1391 fetches next WD 1384 which includes an indication of work to be done.
Data from WD 1384 may be stored in registers 1345 and used by MMU 1339, interrupt management circuit 1347 and/or context management circuit 1348 as illustrated. Paragraph [0252]: In at least one embodiment, application 1380 is required to make an operating system 1395 system call with … a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). Paragraph [0298]: In at least one embodiment, processing cluster array 1812 can receive processing tasks to be executed via scheduler 1810, which receives commands defining processing tasks from front end 1808. In at least one embodiment, processing tasks can include indices of data to be processed. In at least one embodiment, scheduler 1810 may be configured to fetch indices corresponding to tasks or may receive indices. Paragraph [0488]: In at least one embodiment, MAC provides flow control and multiplexing for a transmission medium.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide wherein the associated information is information needed to resume processing the first RX or TX flow from where the processing is stored as taught by Banuli in the combined system of Tillinger, Liu, Vasudevan, and Chowdhuri, so that it would enable using the associated information to resume the RX and TX flow processing from where it was stored (Banuli: Paragraph [0232]).

Response to Arguments

Applicant's arguments filed November 14, 2025 with respect to claims 1-6, 8 and 10-20 being rejected under 35 U.S.C. 103 have been fully considered.
Applicant asserts that the combination of the cited references Tillinger, Liu, Banuli, and Vasudevan does not teach the element "the MAC layer or the MAC sublayer comprises a flow context memory configured for storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows" as recited in amended independent claim 1. However, Chowdhuri et al. (US8175015B1) teaches "the MAC layer or the MAC sublayer comprises a flow context memory configured for storing hardware context or fetching stored hardware context for at least one of the multiple RX flows; and storing hardware context or fetching stored hardware context for at least one of the multiple TX flows". Chowdhuri teaches that the MAC processor, which acts as an interface between a modem that implements physical layer functions and higher communication protocol layers, organizes the MSDUs that it receives from upper layers into MPDUs and provides them to the modem for transmission, and additionally receives MPDUs from the modem, extracts and forms MSDUs that were packaged in the MPDUs, and then provides the MSDUs to the upper layer protocol functions. Thus, Chowdhuri teaches RX and TX flow processing in the MAC layer. Chowdhuri further teaches a context switching processor that includes a context memory and context switch logic, and a processing engine coupled to the context memory which stores state information in the context memory when a context switch occurs. Chowdhuri teaches context switching in the context of processing data received from the modem, and processing using context switching also for data that is to be provided to the modem for transmission. Thus, Chowdhuri teaches context processing for TX and RX flows.
Further, Chowdhuri teaches that the state information corresponding to the current context is saved to the context memory, and also, the state information corresponding to the next context is retrieved from the context memory. Thus, Chowdhuri also teaches storing context, and fetching (retrieving) stored context from the context memory. Thus, the combination of Tillinger, Liu, Banuli, Vasudevan, and Chowdhuri teaches amended independent claim 1, and the combination of Tillinger, Liu, Vasudevan, and Chowdhuri teaches amended independent claims 10 and 16, which recite similar limitations. Dependent claims 2-6, 8, 11-15, 17-20 are also taught by combinations of the cited references.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LATHA CHAKRAVARTHY whose telephone number is (703)756-1172. The examiner can normally be reached M-Th 8:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Huy Vu, can be reached at 571-272-3155. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.C./
Examiner, Art Unit 2461

/HUY D VU/
Supervisory Patent Examiner, Art Unit 2461

Prosecution Timeline

Apr 08, 2022: Application Filed
Jul 12, 2024: Non-Final Rejection — §103
Oct 06, 2024: Response Filed
Oct 30, 2024: Final Rejection — §103
Mar 04, 2025: Response after Non-Final Action
Apr 17, 2025: Request for Continued Examination
Apr 24, 2025: Response after Non-Final Action
May 23, 2025: Non-Final Rejection — §103
Jun 24, 2025: Response Filed
Jul 08, 2025: Final Rejection — §103
Sep 14, 2025: Response after Non-Final Action
Nov 14, 2025: Request for Continued Examination
Nov 22, 2025: Response after Non-Final Action
Jan 15, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598672: METHOD FOR CELL RESELECTION, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12549934: Method for Determining Policy Control Network Element, Apparatus, and System (granted Feb 10, 2026; 2y 5m to grant)
Patent 12542818: APPLICATION FUNCTION NODE AND COMMUNICATION METHOD (granted Feb 03, 2026; 2y 5m to grant)
Patent 12526837: METHOD AND APPARATUS FOR REPORTING INFORMATION RELATED TO SYSTEM INFORMATION REQUEST IN NEXT-GENERATION MOBILE COMMUNICATION SYSTEM (granted Jan 13, 2026; 2y 5m to grant)
Patent 12382388: DISCONTINUOUS RECEPTION FOR CONFIGURED GRANT/SEMI-PERSISTENT SCHEDULING (granted Aug 05, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 31%
With Interview (+57.0%): 88%
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
