Prosecution Insights
Last updated: April 19, 2026

Application No. 18/909,595
MEMORY CONTROLLER WITH PSEUDO-CHANNEL SUPPORT
Non-Final Office Action: §102, §103, and nonstatutory double patenting

Filed: Oct 08, 2024
Examiner: CHOI, CHARLES J
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Advanced Micro Devices, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 6m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 82% (259 granted / 314 resolved; +27.5% vs Tech Center average; above average)
Interview Lift: +5.8% (moderate), comparing resolved cases with and without an interview
Typical Timeline: 2y 6m average prosecution
Career History: 321 total applications across all art units; 7 currently pending

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 314 resolved cases.

Office Action

Rejections: §102, §103, and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 21-24 and 34-38 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Magro (US 2018/0018105).

Regarding claim(s) 21, Magro teaches:

A data processing system, comprising: a memory that implements pseudo channels including a first pseudo channel and a second pseudo channel; and a data processor coupled to the memory, wherein the data processor: generates memory access requests; routes the memory access requests to a selected one of a first pseudo-channel pipeline circuit and a second pseudo-channel pipeline circuit; selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently; and provides memory commands to the memory in response to memory access requests selected by the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

[0012] In another form, a memory controller has a memory channel controller with the virtual controller mode. The memory channel controller includes an address generator, a command queue, an arbiter, and a dispatch queue. The address generator receives memory access requests and decodes the memory access requests to select a rank and bank of memory devices in a memory system, and in the virtual controller mode further decodes a sub-channel number of a plurality of sub-channels for each of the memory access requests. The command queue is coupled to the address generator for storing the memory access requests so decoded, including the sub-channel number in the virtual controller mode. The arbiter is coupled to the command queue to select memory access requests from the command queue using the rank and the bank according to predetermined criteria, and in the virtual controller mode selecting from among the memory access requests in each sub-channel independently using the predetermined criteria. The dispatch queue is coupled to the command queue for dispatching selected memory commands to a memory system over a physical interface, and in the virtual controller mode further dispatching the selected memory commands to a selected sub-channel.

Regarding claim(s) 22, Magro teaches:

wherein the data processor provides the memory commands to the memory using a common address and data path for the first pseudo channel and the second pseudo channel.

[0037] Arbiter 538 uses timing block 534 to enforce proper timing relationships by determining whether certain accesses in command queue 520 are eligible for issuance based on DRAM timing parameters. For example, each DRAM has a minimum specified time between activate commands to the same bank, known as “t.sub.RC”. Timing block 534 maintains a set of counters that determine eligibility based on this and other timing parameters specified in the JEDEC specification, and is bidirectionally connected to replay queue 530.
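Magro's virtual controller mode and the routing/selection steps of claim 21 describe a common pattern: decode a pseudo-channel (sub-channel) selector from the request address, steer the request into a per-channel queue, and let each queue drain independently. A minimal toy sketch of that pattern follows; the bit position, class names, and FIFO selection policy are hypothetical stand-ins for illustration, not either party's actual design:

```python
# Toy model of pseudo-channel routing with independent per-channel selection.
# PSEUDO_CHANNEL_BIT is a hypothetical choice of which address bit selects
# the pseudo channel; real decoders also derive rank/bank fields.
from collections import deque

PSEUDO_CHANNEL_BIT = 8

class PseudoChannelPipeline:
    def __init__(self, name):
        self.name = name
        self.queue = deque()  # stands in for the per-channel command queue

    def enqueue(self, addr):
        self.queue.append(addr)

    def select(self):
        # Each pipeline selects independently; plain FIFO stands in for the
        # arbiter's "predetermined criteria".
        return self.queue.popleft() if self.queue else None

def route(addr, pc0, pc1):
    """Steer a memory access request by its pseudo-channel bit."""
    target = pc1 if (addr >> PSEUDO_CHANNEL_BIT) & 1 else pc0
    target.enqueue(addr)

pc0, pc1 = PseudoChannelPipeline("PC0"), PseudoChannelPipeline("PC1")
for addr in (0x000, 0x100, 0x040, 0x1C0):
    route(addr, pc0, pc1)

# The two pipelines drain independently of one another.
print([hex(a) for a in iter(pc0.select, None)])  # requests with bit 8 clear
print([hex(a) for a in iter(pc1.select, None)])  # requests with bit 8 set
```

Here FIFO order merely stands in for the "predetermined criteria"; a real arbiter would also weigh rank/bank state and DRAM timing eligibility, as Magro's paragraph [0037] describes.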
Regarding claim(s) 23, Magro teaches:

wherein the data processor further selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using substantially the same arbitration rules.

[0014] Predetermined criteria are used to select from among a plurality of memory access requests in the command queue using predetermined criteria, and the predetermined criteria are further used to independently select from among the memory access requests to each sub-channel. The memory access requests, so selected, are dispatched to one of the plurality of memory channels according to the sub-channel.

Regarding claim(s) 24, Magro teaches:

wherein the data processor further selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using different arbitration rules.

[0012] The arbiter is coupled to the command queue to select memory access requests from the command queue using the rank and the bank according to predetermined criteria, and in the virtual controller mode selecting from among the memory access requests in each sub-channel independently using the predetermined criteria. The dispatch queue is coupled to the command queue for dispatching selected memory commands to a memory system over a physical interface, and in the virtual controller mode further dispatching the selected memory commands to a selected sub-channel.
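Claims 23 and 24 (and method claims 36 and 37) differ only in whether the two pipelines arbitrate with substantially the same rules or with different rules. Parameterizing each pipeline by its own selection policy captures both variants in one structure; the two policies below are hypothetical illustrations, not the claimed arbitration criteria:

```python
# Sketch of the claim 23 / claim 24 distinction: each pseudo-channel pipeline
# carries its own arbitration policy, so the pipelines may share one policy
# (same rules) or use two (different rules). Requests are (priority, age)
# tuples; field names and policies are hypothetical.

def oldest_first(requests):
    # FIFO-like rule: pick the request that has waited longest
    return max(range(len(requests)), key=lambda i: requests[i][1])

def highest_priority_first(requests):
    # priority-based rule: pick the most urgent request
    return max(range(len(requests)), key=lambda i: requests[i][0])

def arbitrate(requests, policy):
    """Select and remove one request according to the pipeline's policy."""
    return requests.pop(policy(requests))

pending = [(0, 5), (3, 1), (1, 9)]  # (priority, age)

# Substantially the same arbitration rules: both pipelines get one policy.
same_a = arbitrate(list(pending), oldest_first)
same_b = arbitrate(list(pending), oldest_first)

# Different arbitration rules: each pipeline gets its own policy.
diff_a = arbitrate(list(pending), oldest_first)            # selects the age-9 request
diff_b = arbitrate(list(pending), highest_priority_first)  # selects the priority-3 request
```

Passing the same policy object to both pipelines models the claim 23 variant; passing different policies models claim 24, with the routing and independent selection otherwise unchanged.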
Regarding claim(s) 34, Magro teaches:

A method for use in a data processing system having a memory that implements pseudo channels including a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing the memory access requests to a selected one of a first pseudo-channel pipeline circuit and a second pseudo-channel pipeline circuit; selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently; and providing memory commands to the memory in response to memory access requests selected by the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

[0014] Predetermined criteria are used to select from among a plurality of memory access requests in the command queue using predetermined criteria, and the predetermined criteria are further used to independently select from among the memory access requests to each sub-channel. The memory access requests, so selected, are dispatched to one of the plurality of memory channels according to the sub-channel.

[0012] In another form, a memory controller has a memory channel controller with the virtual controller mode. The memory channel controller includes an address generator, a command queue, an arbiter, and a dispatch queue. The address generator receives memory access requests and decodes the memory access requests to select a rank and bank of memory devices in a memory system, and in the virtual controller mode further decodes a sub-channel number of a plurality of sub-channels for each of the memory access requests. The command queue is coupled to the address generator for storing the memory access requests so decoded, including the sub-channel number in the virtual controller mode. The arbiter is coupled to the command queue to select memory access requests from the command queue using the rank and the bank according to predetermined criteria, and in the virtual controller mode selecting from among the memory access requests in each sub-channel independently using the predetermined criteria. The dispatch queue is coupled to the command queue for dispatching selected memory commands to a memory system over a physical interface, and in the virtual controller mode further dispatching the selected memory commands to a selected sub-channel.

[0047] a page table (not shown in FIG. 6) that include extra circuitry to keep track of the timing eligibility and state of the additional sub-channel, and a queue that includes a selector and separate queue structures to independently queue accesses to each sub-channel.

Regarding claim(s) 35, Magro teaches:

wherein the providing comprises: providing the memory commands to the memory using a common command and address path for the first pseudo channel and the second pseudo channel.

[0037] Arbiter 538 uses timing block 534 to enforce proper timing relationships by determining whether certain accesses in command queue 520 are eligible for issuance based on DRAM timing parameters. For example, each DRAM has a minimum specified time between activate commands to the same bank, known as “t.sub.RC”. Timing block 534 maintains a set of counters that determine eligibility based on this and other timing parameters specified in the JEDEC specification, and is bidirectionally connected to replay queue 530.

Regarding claim(s) 36, Magro teaches:

wherein the selecting comprises: selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using substantially the same arbitration rules.

[0014] Predetermined criteria are used to select from among a plurality of memory access requests in the command queue using predetermined criteria, and the predetermined criteria are further used to independently select from among the memory access requests to each sub-channel. The memory access requests, so selected, are dispatched to one of the plurality of memory channels according to the sub-channel.

Regarding claim(s) 37, Magro teaches:

wherein the selecting comprises: selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using different arbitration rules.

[0014] Predetermined criteria are used to select from among a plurality of memory access requests in the command queue using predetermined criteria, and the predetermined criteria are further used to independently select from among the memory access requests to each sub-channel. The memory access requests, so selected, are dispatched to one of the plurality of memory channels according to the sub-channel.

Regarding claim(s) 38, Magro teaches:

further comprising: providing a memory command to the memory in response to a normalized request selectively using the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

[0012] In another form, a memory controller has a memory channel controller with the virtual controller mode. The memory channel controller includes an address generator, a command queue, an arbiter, and a dispatch queue. The address generator receives memory access requests and decodes the memory access requests to select a rank and bank of memory devices in a memory system, and in the virtual controller mode further decodes a sub-channel number of a plurality of sub-channels for each of the memory access requests. The command queue is coupled to the address generator for storing the memory access requests so decoded, including the sub-channel number in the virtual controller mode. The arbiter is coupled to the command queue to select memory access requests from the command queue using the rank and the bank according to predetermined criteria, and in the virtual controller mode selecting from among the memory access requests in each sub-channel independently using the predetermined criteria. The dispatch queue is coupled to the command queue for dispatching selected memory commands to a memory system over a physical interface, and in the virtual controller mode further dispatching the selected memory commands to a selected sub-channel.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 25 and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Magro (US 2018/0018105) in view of Lee (US 2017/0031631).

Regarding claim 25, Magro does not explicitly teach, but Lee teaches:

wherein the data processor comprises: a serializer that is operable to serialize memory commands from the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

Fig. 8 and [0118] First to fourth stream IDs A to D are respectively assigned to write commands for first to fourth pieces of the stream data STR_A to STR_D and are transmitted to the storage device 10. For convenience, pieces of multi-stream data are sequentially transmitted in an ascending order with respect to the stream IDs, but the pieces of multi-stream data may be transmitted in any order based on various priorities.

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the memory controller system/method of Magro with the system/method of multi-stream data storage taught by Lee. The motivation for doing so would have been to improve write or read performance for continuous data, as taught by Lee in [0118]. Further, Lee in [0128] teaches improvement in the functioning of the storage controller 100 by improving the multi-stream write operation by merging streams together to free resources for additional streams.

Regarding claim(s) 40, Lee teaches:

further comprising: serializing memory commands from the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit to the memory.

Fig. 8 and [0118] First to fourth stream IDs A to D are respectively assigned to write commands for first to fourth pieces of the stream data STR_A to STR_D and are transmitted to the storage device 10. For convenience, pieces of multi-stream data are sequentially transmitted in an ascending order with respect to the stream IDs, but the pieces of multi-stream data may be transmitted in any order based on various priorities.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim(s) 21-40 is/are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 1-3, 5-10, 17 of U.S. Patent No. 12117945. Although the claims at issue are not identical, they are not patentably distinct from each other.

Claim chart — Application 18/909,595 vs. Pat. No. 12117945:

App. claim 21: A data processing system, comprising: a memory that implements pseudo channels including a first pseudo channel and a second pseudo channel; and a data processor coupled to the memory, wherein the data processor: generates memory access requests; routes the memory access requests to a selected one of a first pseudo-channel pipeline circuit and a second pseudo-channel pipeline circuit; selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently; and provides memory commands to the memory in response to memory access requests selected by the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

Pat. claim 1: A data processor for accessing a memory having a first pseudo channel and a second pseudo channel, comprising: at least one memory accessing agent for generating a memory access request; a memory controller for providing memory commands to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, wherein each of the first and second pseudo channel pipeline circuits re-orders and prioritizes accesses based on particular access patterns in its respective pseudo channel; a data fabric coupled between said at least one memory accessing agent and said memory controller for converting said memory access request into said normalized request selectively for said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit; and a serializer that is operable to serialize memory commands from said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit.

App. claim 22: The data processing system of claim 21, wherein the data processor provides the memory commands to the memory using a common address and data path for the first pseudo channel and the second pseudo channel.

Pat. claim 8: The data processor of claim 7, further comprising a physical interface circuit having a first phase input coupled to said first phase output of said pseudo channel arbiter and a second phase input coupled to said second phase output of said pseudo channel arbiter, and an output coupled to the memory, wherein said physical interface circuit serializes first memory access commands of said first phase and second memory access commands of said second phase on a common command bus for both pseudo channels.

App. claim 23: The data processing system of claim 21, wherein the data processor further selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using substantially the same arbitration rules.

Pat. claim 6: The data processor of claim 5, wherein said arbiter of said first pseudo channel pipeline circuit uses substantially the same criteria as said arbiter of said second pseudo channel pipeline circuit.

App. claim 24: The data processing system of claim 21, wherein the data processor further selects memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using different arbitration rules.

Pat. claim 17: A method for a data processor to provide commands to a memory that has a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing said memory access requests selectively between upstream ports and downstream ports of a data fabric, and memory access responses selectively between said downstream ports and said upstream ports of said data fabric; decoding said memory access requests in said data fabric according to one of a plurality of pseudo channels including a first pseudo channel and a second pseudo channel; processing first memory access requests of said first pseudo channel in a first decoding and command arbitration circuit; and processing second memory access requests of said second pseudo channel in a second decoding and command arbitration circuit independent of said first decoding and command arbitration circuit.

App. claim 25: The data processing system of claim 21, wherein the data processor comprises: a serializer that is operable to serialize memory commands from the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

Pat. claim 1: A data processor for accessing a memory having a first pseudo channel and a second pseudo channel, comprising: at least one memory accessing agent for generating a memory access request; a memory controller for providing memory commands to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, wherein each of the first and second pseudo channel pipeline circuits re-orders and prioritizes accesses based on particular access patterns in its respective pseudo channel; a data fabric coupled between said at least one memory accessing agent and said memory controller for converting said memory access request into said normalized request selectively for said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit; and a serializer that is operable to serialize memory commands from said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit.

App. claim 26: The data processing system of claim 25, wherein the data processor further comprises: at least one memory accessing agent for generating a memory access request; a memory controller for providing a memory command to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, wherein each of the first pseudo channel pipeline circuit and the second pseudo-channel pipeline circuit re-orders and prioritizes accesses based on particular access patterns in its respective pseudo channel; and a data fabric coupled between the at least one memory accessing agent and the memory controller for converting the memory access request into the normalized request selectively for the first pseudo channel pipeline circuit and the second pseudo channel pipeline circuit by decoding the memory access request into a pseudo channel bit and a normalized address that does not include the pseudo channel bit, and provides the normalized address to a selected one of the first pseudo channel pipeline circuit and the second pseudo channel pipeline circuit.

Pat. claim 1: A data processor for accessing a memory having a first pseudo channel and a second pseudo channel, comprising: at least one memory accessing agent for generating a memory access request; a memory controller for providing memory commands to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, wherein each of the first and second pseudo channel pipeline circuits re-orders and prioritizes accesses based on particular access patterns in its respective pseudo channel; a data fabric coupled between said at least one memory accessing agent and said memory controller for converting said memory access request into said normalized request selectively for said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit; and a serializer that is operable to serialize memory commands from said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit.

App. claim 27: The data processing system of claim 26, wherein the data fabric comprises: a pseudo channel decoder circuit that in response to a predetermined bit of an access address being in a first logic state, routes the memory access request to the first pseudo channel pipeline circuit, and in response to the predetermined bit of the access address being in a second logic state, routes the memory access request to the second pseudo channel pipeline circuit.

Pat. claim 3: The data processor of claim 2, wherein said data fabric comprises: a pseudo channel decoder circuit that in response to a predetermined bit of an access address being in a first logic state, routes a memory access request to said first pseudo channel pipeline circuit, and in response to said predetermined bit of said access address being in a second logic state, routes said memory access request to said second pseudo channel pipeline circuit.

App. claim 28: The data processing system of claim 26, wherein: the data fabric is further for routing the normalized request to one of the first pseudo channel pipeline circuit and the second pseudo channel pipeline circuit based on a pseudo channel address of the memory access request.

Pat. claim 3: The data processor of claim 2, wherein said data fabric comprises: a pseudo channel decoder circuit that in response to a predetermined bit of an access address being in a first logic state, routes a memory access request to said first pseudo channel pipeline circuit, and in response to said predetermined bit of said access address being in a second logic state, routes said memory access request to said second pseudo channel pipeline circuit.

App. claim 29: The data processing system of claim 26, wherein each of the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit comprises: a front-end interface circuit coupled to the data fabric for converting normalized addresses into decoded addresses of decoded memory access requests; a command queue coupled to the front-end interface circuit for storing the decoded memory access requests; and an arbiter for selecting among the decoded memory access requests from the command queue according to predetermined criteria and providing selected memory access requests to an output thereof.

Pat. claim 2: The data processor of claim 1, wherein: said data fabric converts said memory access request into said normalized request by decoding said memory access request into a pseudo channel bit and a normalized address, and provides said normalized address to a selected one of said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit.

Pat. claim 5: The data processor of claim 4, wherein each of said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit comprises: a front-end interface circuit coupled to said data fabric for converting normalized addresses into decoded addresses of decoded memory access requests; a command queue coupled to said front-end interface circuit for storing said decoded memory access requests; and an arbiter for selecting among said decoded memory access requests from said command queue according to predetermined criteria and providing selected memory access requests to an output thereof.

App. claim 30: The data processing system of claim 29, wherein the memory controller further comprises: a pseudo channel arbiter for selecting between outputs of the arbiter for each of the first pseudo channel pipeline circuit and the second pseudo channel pipeline circuit and providing memory access commands to a first phase output for a first phase and a second phase output for a second phase based on memory access timing eligibility.

Pat. claim 7: The data processor of claim 5, wherein said memory controller further comprises: a pseudo channel arbiter for selecting between outputs of said arbiter for each of said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit and providing memory access commands to a first phase output for a first phase and a second phase output for a second phase based on memory access timing eligibility.

App. claim 31: The data processing system of claim 30, further comprising a physical interface circuit having a first phase input coupled to the first phase output of the pseudo channel arbiter and a second phase input coupled to the second phase output of the pseudo channel arbiter, and an output coupled to the memory, wherein the physical interface circuit serializes first memory access commands of the first phase and second memory access commands of the second phase on a common command bus for both pseudo channels.

Pat. claim 8: The data processor of claim 7, further comprising a physical interface circuit having a first phase input coupled to said first phase output of said pseudo channel arbiter and a second phase input coupled to said second phase output of said pseudo channel arbiter, and an output coupled to the memory, wherein said physical interface circuit serializes first memory access commands of said first phase and second memory access commands of said second phase on a common command bus for both pseudo channels.

App. claim 32: The data processing system of claim 30, further comprising: a first back-end queue for the first phase having a first input coupled to the first phase output of the pseudo channel arbiter, a second input for receiving replay commands for the first pseudo channel, and an output; and a second back-end queue for the second phase having a first input coupled to the second phase output of the pseudo channel arbiter, a second input for receiving replay commands for the second pseudo channel, and an output.

Pat. claim 9: The data processor of claim 7, further comprising: a first back-end queue for said first phase having a first input coupled to said first phase output of said pseudo channel arbiter, a second input for receiving replay commands for said first pseudo channel, and an output; and a second back-end queue for said second phase having a first input coupled to said second phase output of said pseudo channel arbiter, a second input for receiving replay commands for said second pseudo channel, and an output.

App. claim 33: The data processing system of claim 29, wherein: the front-end interface circuit, the command queue, and the arbiter of each of the first pseudo channel pipeline circuit and the second pseudo channel pipeline circuit operate according to a memory controller clock signal; and a physical interface circuit provides the memory command to the memory using a memory clock signal, wherein the memory clock signal has a higher frequency than the memory controller clock signal.

Pat. claim 10: The data processor of claim 5, wherein: said front-end interface circuit, said command queue, and said arbiter of each of said first decoding and command arbitration circuit and said second decoding and command arbitration circuit operate according to a memory controller clock signal; and a physical interface circuit provides said memory commands to the memory using a memory clock signal, wherein said memory clock signal has a higher frequency than said memory controller clock signal.

App. claim 34: A method for use in a data processing system having a memory that implements pseudo channels including a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing the memory access requests to a selected one of a first pseudo-channel pipeline circuit and a second pseudo-channel pipeline circuit; selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently; and providing memory commands to the memory in response to memory access requests selected by the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit.

Pat. claim 17: A method for a data processor to provide commands to a memory that has a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing said memory access requests selectively between upstream ports and downstream ports of a data fabric, and memory access responses selectively between said downstream ports and said upstream ports of said data fabric; decoding said memory access requests in said data fabric according to one of a plurality of pseudo channels including a first pseudo channel and a second pseudo channel; processing first memory access requests of said first pseudo channel in a first decoding and command arbitration circuit; and processing second memory access requests of said second pseudo channel in a second decoding and command arbitration circuit independent of said first decoding and command arbitration circuit.

App. claim 35: The method of claim 34, wherein the providing comprises: providing the memory commands to the memory using a common command and address path for the first pseudo channel and the second pseudo channel.

Pat. claim 8: The data processor of claim 7, further comprising a physical interface circuit having a first phase input coupled to said first phase output of said pseudo channel arbiter and a second phase input coupled to said second phase output of said pseudo channel arbiter, and an output coupled to the memory, wherein said physical interface circuit serializes first memory access commands of said first phase and second memory access commands of said second phase on a common command bus for both pseudo channels.

App. claim 36: The method of claim 34, wherein the selecting comprises: selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using substantially the same arbitration rules.

Pat. claim 6:
The data processor of claim 5, wherein said arbiter of said first pseudo channel pipeline circuit uses substantially the same criteria as said arbiter of said second pseudo channel pipeline circuit. 37. The method of claim 34, wherein the selecting comprises: selecting memory access requests in the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit independently using different arbitration rules. 17. A method for a data processor to provide commands to a memory that has a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing said memory access requests selectively between upstream ports and downstream ports of a data fabric, and memory access responses selectively between said downstream ports and said upstream ports of said data fabric; decoding said memory access requests in said data fabric according to one of a plurality of pseudo channels including a first pseudo channel and a second pseudo channel; processing first memory access requests of said first pseudo channel in a first decoding and command arbitration circuit; and processing second memory access requests of said second pseudo channel in a second decoding and command arbitration circuit independent of said first decoding and command arbitration circuit. 38. The method of claim 34, further comprising: providing a memory command to the memory in response to a normalized request selectively using the first pseudo-channel pipeline circuit and the second pseudo- channel pipeline circuit. 1. 
A data processor for accessing a memory having a first pseudo channel and a second pseudo channel, comprising: at least one memory accessing agent for generating a memory access request; a memory controller for providing memory commands to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, wherein each of the first and second pseudo channel pipeline circuits re-orders and prioritizes accesses based on particular access patterns in its respective pseudo channel; a data fabric coupled between said at least one memory accessing agent and said memory controller for converting said memory access request into said normalized request selectively for said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit; and a serializer that is operable to serialize memory commands from said first pseudo channel pipeline circuit and said second pseudo channel pipeline circuit. 39. The method of claim 38, further comprising: re-ordering and prioritizing accesses based on particular access patterns in its respective pseudo channel by each of the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit. 40. The method of claim 34, further comprising: serializing memory commands from the first pseudo-channel pipeline circuit and the second pseudo-channel pipeline circuit to the memory. 17. 
A method for a data processor to provide commands to a memory that has a first pseudo channel and a second pseudo channel, comprising: generating memory access requests; routing said memory access requests selectively between upstream ports and downstream ports of a data fabric, and memory access responses selectively between said downstream ports and said upstream ports of said data fabric; decoding said memory access requests in said data fabric according to one of a plurality of pseudo channels including a first pseudo channel and a second pseudo channel; processing first memory access requests of said first pseudo channel in a first decoding and command arbitration circuit; and processing second memory access requests of said second pseudo channel in a second decoding and command arbitration circuit independent of said first decoding and command arbitration circuit. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dodd (US 7120765): discloses memory transaction ordering system with a channel decoder to obtain for each channel a channel select that identifies which memory channel is to service a memory transaction. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES J CHOI whose telephone number is (571)270-0605. The examiner can normally be reached MON-FRI: 9AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROCIO DEL MAR PEREZ-VELEZ can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES J CHOI/
Primary Examiner, Art Unit 2133
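The architecture recited in the rejected claims (independent pipelines per pseudo channel, a pseudo channel arbiter emitting first- and second-phase commands, and serialization onto a common command bus) can be illustrated with a toy software model. This is only a sketch of the claim language; all class and field names below are hypothetical, and the one-address-bit routing and oldest-first arbitration are assumptions, not the claimed decode or arbitration criteria.

```python
from collections import deque

class PseudoChannelPipeline:
    """Toy stand-in for one 'pseudo-channel pipeline circuit':
    a command queue plus an arbiter that selects requests
    independently of the other pseudo channel."""
    def __init__(self, pc_id):
        self.pc_id = pc_id
        self.queue = deque()  # the claimed "command queue" of decoded requests

    def enqueue(self, request):
        self.queue.append(request)

    def arbitrate(self):
        # Simplistic stand-in criterion: oldest request first.
        return self.queue.popleft() if self.queue else None

def route(request, pipelines):
    # Route each request to the pipeline for its pseudo channel;
    # here one address bit picks the channel (an illustrative assumption).
    pipelines[request["addr"] & 1].enqueue(request)

def pseudo_channel_arbiter(pipelines):
    """Select between the two pipelines' arbiter outputs and emit
    (phase, channel, op) tuples serialized on one common command bus."""
    bus = []
    phase = 0
    while any(p.queue for p in pipelines):
        for p in pipelines:
            cmd = p.arbitrate()
            if cmd is not None:
                bus.append((phase, p.pc_id, cmd["op"]))
                phase ^= 1  # alternate first-phase / second-phase slots
    return bus

pipes = [PseudoChannelPipeline(0), PseudoChannelPipeline(1)]
for i, op in enumerate(["RD", "WR", "RD", "WR"]):
    route({"addr": i, "op": op}, pipes)
commands = pseudo_channel_arbiter(pipes)
print(commands)  # commands from both channels interleaved on one bus
```

Note how the two queues drain independently, yet their selected commands share a single serialized command stream, which is the crux of the common-command-bus limitation in claims 31 and 35.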

Prosecution Timeline

Oct 08, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602334
ON-CHIP INTERCONNECT FOR MEMORY CHANNEL CONTROLLERS
2y 5m to grant Granted Apr 14, 2026
Patent 12596654
PROTECTION AGAINST TRANSLATION LOOKUP REQUEST FLOODING
2y 5m to grant Granted Apr 07, 2026
Patent 12580875
METHODS AND SYSTEMS FOR EXCHANGING NETWORK PACKETS BETWEEN HOST AND MEMORY MODULE USING MULTIPLE QUEUES
2y 5m to grant Granted Mar 17, 2026
Patent 12530299
SYSTEM AND METHOD FOR PRIMARY STORAGE WRITE TRAFFIC MANAGEMENT
2y 5m to grant Granted Jan 20, 2026
Patent 12524357
BUFFER COMMUNICATION FOR DATA BUFFERS SUPPORTING MULTIPLE PSEUDO CHANNELS
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
88%
With Interview (+5.8%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 314 resolved cases by this examiner. Grant probability derived from career allow rate.
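The headline figures above appear to follow from simple arithmetic on the examiner's career data. The reconstruction below is an assumption about how the dashboard rounds these numbers; its actual model is not shown.

```python
# Hypothetical reconstruction of the projection figures
# (assumes plain ratio + additive lift, then rounding).
granted, resolved = 259, 314
interview_lift = 5.8  # percentage points, per the dashboard

base = round(granted / resolved * 100)          # career allow rate -> 82
with_interview = round(granted / resolved * 100 + interview_lift)  # -> 88
print(base, with_interview)
```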
