DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 7, 11-13 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kurichiyath (US 7,984,241) in view of Maeda et al. (US 2015/0046634).
With respect to claim 1, Kurichiyath teaches a plurality of processing elements (see Fig. 9, column 5, lines 48-55; first, second, third and fourth CPUs 50, 51, 52, 53);
a cache domain including a plurality of caches in which cache lines are to be stored (see Fig. 9 and column 5, lines 31-55; first, second, third and fourth CPUs 50, 51, 52, 53 are each associated with an L1 cache 54, 55, 56, 57. First and second CPUs 50, 51 share a first L2 cache 58, while third and fourth CPUs 52, 53 share a second L2 cache 59. All four CPUs share an L3 cache 60);
identify one or more cache lines among the plurality of cache lines that are important (see column 2, lines 27-37 and 50-53; controlling access may comprise comparing the cache memory level with a value of the cache attribute and controlling access based on said comparison. Each cache level looks at the value of the cache attribute to determine if it should permit or disallow data to be stored at that level. Also in column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)… If the page is not important enough to be marked for the L1 cache, the operating system may apply some other criteria for cacheability at a lower level); and
write, for each cache line identified as important, the cache line to a cache in the cache domain (see column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)).
Kurichiyath does not teach an input-output (IO) port, operatively coupled to the cache domain; and circuitry and logic to logically partition a data unit received at the IO port into a plurality of cache lines.
However, Maeda et al. teaches that the device-controller main unit 202 functions as a bus master for the communication path 3 between the host 1 and the memory system 2 and performs data transfer by using a first port 230… device-controller main unit 202 includes a cache control unit 207. The cache control unit 207 controls caches (the L2P cache area 300, an L2P cache tag area 310, a data cache area 400, and a data cache tag area 410) reserved in a device use area 102 (see paragraphs 21-22); and wherein when the "length" included in the second read command is larger than the size of a cache line, the user data as a read target in accordance with the second read command is divided into pieces of data that has the size of a cache line (see paragraph 60).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath to include the above-mentioned teachings of Maeda et al. in order to improve system performance (see Maeda, paragraphs 62-63).
With respect to claim 3, Kurichiyath does not teach wherein the data unit is contained in a memory transaction including a memory cache line address.
However, Maeda et al. teaches that the CPU 110 generates a first read command that includes the address "mem addr" indicating the position in the host use area 101 and the LBA "stor addr"… host controller main unit 122 reads the user data as a read target that is received by the device connection adapter 126 (S55) and writes it in the host use area 101 (S56). The write position of the user data is the position indicated by the "mem addr" (see paragraph 59).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath to include the above-mentioned teachings of Maeda et al. in order to improve system performance (see Maeda, paragraphs 62-63).
With respect to claim 7, Kurichiyath teaches wherein the cache domain comprises a coherent cache domain including a plurality of Level 1 (L1) and Level 2 (L2) caches and a Level 3 (L3) or Last Level Cache (LLC) (see Fig. 9 and column 5, lines 31-55; first, second, third and fourth CPUs 50, 51, 52, 53 are each associated with an L1 cache 54, 55, 56, 57. First and second CPUs 50, 51 share a first L2 cache 58, while third and fourth CPUs 52, 53 share a second L2 cache 59. All four CPUs share an L3 cache 60).
With respect to claim 11, Kurichiyath teaches a processor including a plurality of cores (see Fig. 9, column 5, lines 48-55; first, second, third and fourth CPUs 50, 51, 52, 53) and a cache domain having multiple caches (see Fig. 9 and column 5, lines 31-55; first, second, third and fourth CPUs 50, 51, 52, 53 are each associated with an L1 cache 54, 55, 56, 57. First and second CPUs 50, 51 share a first L2 cache 58, while third and fourth CPUs 52, 53 share a second L2 cache 59. All four CPUs share an L3 cache 60), and a memory controller coupled to memory (see column 34-47; cache controller), the method comprising:
identifying one or more important cache lines among the plurality of cache lines, the one or more important cache lines corresponding to key segments of the data (see column 2, lines 27-37 and 50-53; controlling access may comprise comparing the cache memory level with a value of the cache attribute and controlling access based on said comparison. Each cache level looks at the value of the cache attribute to determine if it should permit or disallow data to be stored at that level. Also in column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)… If the page is not important enough to be marked for the L1 cache, the operating system may apply some other criteria for cacheability at a lower level); and
writing, for each of the cache lines identified as important, the cache line to a cache in the cache domain (see column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)).
Kurichiyath does not teach wherein an input-output (IO) port is operationally coupled to the cache domain; receiving a transaction at the IO port including a transaction address and data; or logically partitioning the data into a plurality of cache lines.
However, Maeda et al. teaches that the device-controller main unit 202 functions as a bus master for the communication path 3 between the host 1 and the memory system 2 and performs data transfer by using a first port 230… device-controller main unit 202 includes a cache control unit 207. The cache control unit 207 controls caches (the L2P cache area 300, an L2P cache tag area 310, a data cache area 400, and a data cache tag area 410) reserved in a device use area 102 (see paragraphs 21-22); the CPU 110 generates a first read command that includes the address "mem addr" indicating the position in the host use area 101 and the LBA "stor addr"… host controller main unit 122 reads the user data as a read target that is received by the device connection adapter 126 (S55) and writes it in the host use area 101 (S56). The write position of the user data is the position indicated by the "mem addr" (see paragraph 59); and wherein when the "length" included in the second read command is larger than the size of a cache line, the user data as a read target in accordance with the second read command is divided into pieces of data that has the size of a cache line (see paragraph 60).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the method taught by Kurichiyath to include the above-mentioned teachings of Maeda et al. in order to improve system performance (see Maeda, paragraphs 62-63).
With respect to claim 12, Kurichiyath teaches writing non-important cache lines among the plurality of cache lines to memory (see column 2, lines 27-37 and 50-53; and column 5, lines 42-47; at level L2, the L2 cache controller 48 checks if the CHH value is 2. If it is, the data is stored at level L2 (step s4). If there are no more cache levels (step s5), the data is stored in main memory).
With respect to claim 13, Kurichiyath teaches wherein the important cache lines are written to a cache having a first level (see column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)… If the page is not important enough to be marked for the L1 cache, the operating system may apply some other criteria for cacheability at a lower level), further comprising writing non-important cache lines among the plurality of cache lines to a cache having a second level higher than the first level (see column 5, lines 31-46 and column 6, lines 6-36; if writing to the L1 cache is not permitted, the procedure moves to the next cache level (step s5), assuming there are more cache levels. For example, at level L2, the L2 cache controller 48 checks if the CHH value is 2. If it is, the data is stored at level L2 (step s4). If there are no more cache levels (step s5), the data is stored in main memory).
With respect to claim 17, Kurichiyath teaches determining a core that will be used to consume the data; and writing important cache lines to a local cache associated with the core that is determined (see column 5, lines 31-40; write cycle starts at the L1 cache, for example at a first L1 cache 43 associated with the first CPU 41 (i.e., processor consuming the data). The first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache).
With respect to claim 18, Kurichiyath teaches memory, configured to store a plurality of cache lines (see Fig. 6 and 9; column 4, line 43; main memory);
a processor, operatively coupled to the memory (see Fig. 6 and column 4, lines 60-62; processor modules 49 and 50), including,
a plurality of processing elements (see Fig. 6 and 9; column 4, lines 49-55 and column 5, lines 31-55; CPUs 41-42 and controllers 45-46);
a cache domain including a plurality of caches in which cache lines are stored (see Fig. 6 and 9; column 4, lines 49-56 and column 5, lines 31-55; first, second, third and fourth CPUs 50, 51, 52, 53 are each associated with an L1 cache 54, 55, 56, 57. First and second CPUs 50, 51 share a first L2 cache 58, while third and fourth CPUs 52, 53 share a second L2 cache 59. All four CPUs share an L3 cache 60);
identify as important one or more cache lines among the plurality of cache lines that comprise one or more key sections of the data unit (see column 2, lines 27-37 and 50-53; controlling access may comprise comparing the cache memory level with a value of the cache attribute and controlling access based on said comparison. Each cache level looks at the value of the cache attribute to determine if it should permit or disallow data to be stored at that level. Also in column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)… If the page is not important enough to be marked for the L1 cache, the operating system may apply some other criteria for cacheability at a lower level); and
write, for each cache line identified as important, the cache line to a cache in the cache domain (see column 5, lines 31-46 and column 6, lines 6-36; first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the last two bits of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being `1`, then the data is stored in the L1 cache (step s4)).
Kurichiyath does not teach an input-output (IO) device; an input-output (IO) port, operatively coupled to the cache domain and to which the IO device is coupled; or circuitry and logic to logically partition a data unit received from an IO device coupled to the IO port into a plurality of cache lines.
However, Maeda et al. teaches that the device-controller main unit 202 functions as a bus master for the communication path 3 between the host 1 and the memory system 2 and performs data transfer by using a first port 230… device-controller main unit 202 includes a cache control unit 207. The cache control unit 207 controls caches (the L2P cache area 300, an L2P cache tag area 310, a data cache area 400, and a data cache tag area 410) reserved in a device use area 102 (see paragraphs 21-22); and wherein when the "length" included in the second read command is larger than the size of a cache line, the user data as a read target in accordance with the second read command is divided into pieces of data that has the size of a cache line (see paragraph 60).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath to include the above-mentioned teachings of Maeda et al. in order to improve system performance (see Maeda, paragraphs 62-63).
Claims 2, 8-9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Kurichiyath (US 7,984,241) and Maeda et al. (US 2015/0046634) as applied to claims 1 and 18 above, and further in view of Banerjee et al. (US 10,176,126).
With respect to claim 2, Kurichiyath does not teach wherein the data unit is contained in a Peripheral Component Interconnect Express (PCIe) transaction.
However, Banerjee et al. teaches wherein the data unit is contained in a Peripheral Component Interconnect Express (PCIe) transaction (see column 8, lines 60-67; posted transaction over a PCIe architecture).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath to include the above-mentioned teachings of Banerjee et al. in order to exhibit high performance and low latency (see Banerjee, column 6, lines 10-12).
With respect to claim 8, Kurichiyath does not teach wherein the IO port comprises one of a Peripheral Component Interconnect Express (PCIe) IO port, a Compute Express Link (CXL) IO port, and a Non-volatile Memory Express (NVMe) IO port.
However, Banerjee et al. teaches wherein the IO port comprises one of a Peripheral Component Interconnect Express (PCIe) IO port, a Compute Express Link (CXL) IO port, and a Non-volatile Memory Express (NVMe) IO port (see column 9, lines 15-24; PCIe ports).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath to include the above-mentioned teachings of Banerjee et al. in order to exhibit high performance and low latency (see Banerjee, column 6, lines 10-12).
With respect to claim 9, Kurichiyath does not teach wherein the IO port comprises an Advanced High performance Bus (AHB), an Advanced Xtensible Bus (AXI), and a Universal Serial Bus (USB) IO port.
However, Banerjee et al. teaches wherein the IO port comprises an Advanced High performance Bus (AHB), an Advanced Xtensible Bus (AXI), and a Universal Serial Bus (USB) IO port (see column 10, lines 48-51; advanced extensible interface (AXI)).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath to include the above-mentioned teachings of Banerjee et al. in order to exhibit high performance and low latency (see Banerjee, column 6, lines 10-12).
With respect to claim 21, Kurichiyath does not teach wherein the IO port comprises a Peripheral Component Interconnect Express (PCIe) IO port, the IO device comprises a PCIe device and wherein the data unit is received in a PCIe transaction.
However, Banerjee et al. teaches wherein the IO port comprises a Peripheral Component Interconnect Express (PCIe) IO port (see column 9, lines 15-24; PCIe ports), the IO device comprises a PCIe device (see column 8, lines 17-21; a packet is transmitted from a PCIe endpoint) and wherein the data unit is received in a PCIe transaction (see column 8, lines 60-67; posted transaction over a PCIe architecture).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath to include the above-mentioned teachings of Banerjee et al. in order to exhibit high performance and low latency (see Banerjee, column 6, lines 10-12).
Claims 4, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kurichiyath (US 7,984,241) and Maeda et al. (US 2015/0046634) as applied to claims 1, 11 and 18 above, and further in view of Seningen (US 2022/0100655).
With respect to claim 4, Kurichiyath and Maeda et al. do not teach wherein the circuitry and logic include a register that is configured to be programmed by software executing on one or more of the plurality of processing elements to identify which cache lines in a data unit of a given type of data unit are important.
However, Seningen teaches that a pattern lookup table 204 is configured to store data patterns 214. In various embodiments, data patterns 214 may include one or more background data patterns… the modification of data patterns 214 may be based on data tracking of data patterns that are frequently encountered during memory accesses. In other cases, different sets of data patterns may be stored in pattern lookup table 204 based on a type of software or program instructions being executed by a computer system. For example, when executing video-related program instructions, the computer system may benefit from using one set of data patterns, while when executing audio-related program instructions, the computer system may benefit from another set of data patterns (see paragraphs 22 and 36).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Seningen in order to avoid delays associated with accesses to main memory, thereby improving performance (see Seningen, paragraph 24).
With respect to claim 14, Kurichiyath and Maeda et al. do not teach enabling software executing on a core to program a register to identify an importance pattern of cache lines in an associated data structure; and using the importance pattern to identify important cache lines in a received transaction.
However, Seningen teaches that a pattern lookup table 204 is configured to store data patterns 214. In various embodiments, data patterns 214 may include one or more background data patterns… the modification of data patterns 214 may be based on data tracking of data patterns that are frequently encountered during memory accesses. In other cases, different sets of data patterns may be stored in pattern lookup table 204 based on a type of software or program instructions being executed by a computer system. For example, when executing video-related program instructions, the computer system may benefit from using one set of data patterns, while when executing audio-related program instructions, the computer system may benefit from another set of data patterns (see paragraph 36).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the method taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Seningen in order to avoid delays associated with accesses to main memory, thereby improving performance (see Seningen, paragraph 24).
With respect to claim 19, Kurichiyath teaches software instructions loaded into the memory or stored in a storage device operationally coupled to the processor (see column 3, lines 18-20; each processor in a multiprocessor system runs one or more processes, which can be defined as programs in execution).
Kurichiyath and Maeda et al. do not teach wherein execution of the software instructions on a core programs a register to identify an importance pattern of cache lines in an associated data structure, and wherein the importance pattern is used to identify important cache lines in a received transaction.
However, Seningen teaches that a pattern lookup table 204 is configured to store data patterns 214. In various embodiments, data patterns 214 may include one or more background data patterns… the modification of data patterns 214 may be based on data tracking of data patterns that are frequently encountered during memory accesses. In other cases, different sets of data patterns may be stored in pattern lookup table 204 based on a type of software or program instructions being executed by a computer system. For example, when executing video-related program instructions, the computer system may benefit from using one set of data patterns, while when executing audio-related program instructions, the computer system may benefit from another set of data patterns (see paragraph 36).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Seningen in order to avoid delays associated with accesses to main memory, thereby improving performance (see Seningen, paragraph 24).
Claims 5, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kurichiyath (US 7,984,241) and Maeda et al. (US 2015/0046634) as applied to claims 1, 11 and 18 above, and further in view of Burrows (US 5,303,302).
With respect to claim 5, Kurichiyath and Maeda et al. do not teach circuitry and logic to detect whether a unit of data received from the IO is part of a previous packet or a new packet.
However, Burrows teaches that a transmitted partial data packet is stored in the memory 212 of the host computer 120, generally at an address corresponding to the virtual circuit identifier for the data packet… When the end of a data packet is received by the host controller 300, as indicated by an END of packet flag in a received cell, the control logic 302 of this controller 300 checks the partial transfer flag P of the corresponding packet entry 170 before transmitting the completed data packet to the host computer 120. If portions of the data packet have already been partially transmitted to the host computer, as indicated by the P status flag in its packet directory entry 170 being set, the rest of the completed data packet is transmitted to the host via output control circuit 304 and path 306 through output buffer 184 (see column 9, lines 5-35).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Burrows in order to improve processor performance by providing an indication of the packet to which the portions of the partial packets belong (see Burrows, column 2, lines 49-54).
With respect to claim 15, Kurichiyath and Maeda et al. do not teach receiving a first transaction containing a complete packet or a first portion of a packet; receiving a second transaction containing a new packet or a second portion of a packet; and detecting whether the second transaction contains a new packet or a second portion of a packet.
However, Burrows teaches that a transmitted partial data packet is stored in the memory 212 of the host computer 120, generally at an address corresponding to the virtual circuit identifier for the data packet… When the end of a data packet is received by the host controller 300, as indicated by an END of packet flag in a received cell, the control logic 302 of this controller 300 checks the partial transfer flag P of the corresponding packet entry 170 before transmitting the completed data packet to the host computer 120. If portions of the data packet have already been partially transmitted to the host computer, as indicated by the P status flag in its packet directory entry 170 being set, the rest of the completed data packet is transmitted to the host via output control circuit 304 and path 306 through output buffer 184 (see column 9, lines 5-35).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the method taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Burrows in order to improve the performance of the method by providing an indication of the packet to which the portions of the partial packets belong (see Burrows, column 2, lines 49-54).
With respect to claim 20, Kurichiyath and Maeda et al. do not teach wherein the processor further comprises circuitry and logic to detect whether a unit of data received from the IO is part of a previous packet or a new packet.
However, Burrows teaches that a transmitted partial data packet is stored in the memory 212 of the host computer 120, generally at an address corresponding to the virtual circuit identifier for the data packet… When the end of a data packet is received by the host controller 300, as indicated by an END of packet flag in a received cell, the control logic 302 of this controller 300 checks the partial transfer flag P of the corresponding packet entry 170 before transmitting the completed data packet to the host computer 120. If portions of the data packet have already been partially transmitted to the host computer, as indicated by the P status flag in its packet directory entry 170 being set, the rest of the completed data packet is transmitted to the host via output control circuit 304 and path 306 through output buffer 184 (see column 9, lines 5-35).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Burrows in order to improve system performance by providing an indication of the packet to which the portions of the partial packets belong (see Burrows, column 2, lines 49-54).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kurichiyath (US 7,984,241) and Maeda et al. (US 2015/0046634) as applied to claim 1 above, and further in view of Weeks (US 6,334,132).
With respect to claim 10, Kurichiyath and Maeda et al. do not teach wherein the one or more cache lines among the plurality of cache lines that are important comprise key sections of the data unit.
However, Weeks teaches that key data items are generated for each data set, the key data items being relatively strongly related to the overall subject matter of the data set 200. Each section 295 is reviewed to obtain a distribution value 290 which reflects the proportion of key data items appearing in that section (see column 12, lines 65-67 and column 13, lines 1-5).
It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Kurichiyath and Maeda et al. to include the above-mentioned teachings of Weeks in order to improve the determination of relevant data (see Weeks, column 17, lines 34-37).
Allowable Subject Matter
Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
No prior art or combination of prior art teaches or suggests "compare the cache line address with the cache line address stored in the PA to determine whether the cache line address for the new transaction and the cache line address in the PA are contiguous; and when the cache line address for the new transaction and the cache line address in the PA are contiguous, detecting the unit of data is part of a previous packet" as recited in claims 6 and 16.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Junghans et al. (US2024/0119013) teaches splitting 804 the plurality of PCIe packets 204 along cache line boundaries 206 to generate a plurality of partial store commands.
Wang et al. (US2020/0192715) teaches wherein if multiple packets are to be processed using the same packet processing instructions, then packet processing instructions can be afforded a highest priority in the cache so that the packet processing instructions can be reused for multiple packets.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARACELIS RUIZ whose telephone number is (571)270-1038. The examiner can normally be reached Monday-Friday 11:00am-7:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G. Bragdon, can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARACELIS RUIZ/Primary Examiner, Art Unit 2139