Prosecution Insights
Last updated: April 19, 2026
Application No. 19/038,672

SYSTEMS AND METHODS OF INCORPORATING ARTIFICIAL INTELLIGENCE ACCELERATORS ON MEMORY BASE DIES

Status: Non-Final OA (§103)
Filed: Jan 27, 2025
Examiner: KIM, ELIAS YOUNG
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 62 granted / 81 resolved; +21.5% vs TC avg)
Interview Lift: +34.0% (strong) among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 16 applications currently pending
Career History: 97 total applications across all art units

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 81 resolved cases.

Office Action

§103
DETAILED ACTION

The instant application, Application No. 19/038,672, has 20 claims pending, all of which are ready for examination by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 1/27/2025, 10/23/2025, and 1/27/2026 are being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 10, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1), further in view of Shahim et al. (US 20210303346 A1).

As per claim 1:

1.
A method of processing in a memory, the method comprising: determining at least one feature of a data query; routing, based on the at least one feature of the data query, a first function of the data query to a memory base die for processing by a processing unit on the memory base die; and

[Pappu teaches a computing die and an accelerator die (base memory die) (para. 15-16; fig. 1), the accelerator die comprising an accelerator (processing unit) that may output requests (functions), a request decoder that may decode and direct the requests to the computing die or to local memory of the accelerator die based on the opcode or address (feature) of the request, and an accelerator control unit (memory controller) for configuring the accelerator and acting as glue logic for communication between the dies (para. 18-20, 22-25, 31). Routing a request to a local memory within the accelerator may comprise routing to the memory base die for processing by the processing unit (see para. 25, 37, providing, responsive to the request routed to the local memory, a response from the local memory being consumed by the accelerator). It would have been obvious to one of ordinary skill in the art, provided with the disclosures of Pappu, comprising the accelerator control unit configuring the accelerator and acting as glue logic for communication between the dies, to provide for a combination where the accelerator control unit may be configured to control the request decoder and other routing components in order to provide for improved management of die components engaged in communications between the dies.]

Pappu does not explicitly disclose, but Malladi discloses: processing, via the processing unit, data that a memory controller on the memory base die receives from at least one of one or more memory dies stacked on top of the memory base die.
[Pappu, as shown above, teaches the accelerator control unit managing communications between the dies, and also teaches the accelerator die being used for offloading tasks from a computing die which may comprise a processor (see above; para. 15, 24). Pappu does not explicitly disclose, but Malladi teaches, a logic die for offloading computation work from a host (CPU, GPU, etc.), the logic die being part of a high bandwidth memory (HBM) stack with a plurality of HBM dies above the logic die (para. 23-25; fig. 2 and associated paragraphs). Malladi teaches that the logic die interfaces with the stack of HBM modules and performs the offloading based on data stored in the stack of HBM modules (para. 43-44; see fig. 6 and associated paragraphs). Where Pappu teaches communications between the dies, i.e., external communications, being managed by the accelerator control unit, it would have been obvious to one of ordinary skill in the art to provide for a combination where the accelerator control unit also manages other external communications, such as those associated with HBM modules, in order to provide for improved modularity in managing external communications with the accelerator or logic die.]

Pappu and Malladi are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu and Malladi, to modify the disclosures of Pappu to include the disclosures of Malladi since they both teach data storage and communication, wherein Malladi is directed towards improved capabilities of a stacked die configuration (para. 1-4, 20).
Therefore, it would be applying a known technique (a die used for offloading operations and using stacked dies above the die) to a known device (a system comprising a die for offloading operations using an internal memory and comprising a control unit for configuring external communications) ready for improvement to yield predictable results (a system comprising a die for offloading operations using an internal memory and dies stacked above the die, wherein a control unit may manage external communications; doing so would provide for improved storage capability by providing additional storage space for storing data used in the offloaded operations). MPEP 2143.

Pappu in view of Malladi does not explicitly disclose, but Shahim discloses: the recited "data query" (appearing in three limitations of claim 1) [Pappu in view of Malladi, as shown above, teaches routing requests (functions), but does not explicitly provide for a query comprising the functions; however, Shahim discloses queuing a set of received command streams and routing the command streams (para. 14-16)].

Pappu, Malladi, and Shahim are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi and of Shahim, to modify the disclosures of Pappu in view of Malladi to include the disclosures of Shahim since they both teach data storage and communication, wherein Shahim is directed towards improved parallel processing (para. 3). Therefore, it would be applying a known technique (receiving a set of command streams and routing the command streams) to a known device (a system for routing requests) ready for improvement to yield predictable results (a system for receiving a set of requests and routing the requests therein in order to provide for improved request processing capabilities).
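For orientation only, the feature-based routing that the rejection maps onto Pappu's request decoder (claims 1-3: determine a feature of a data query's function, then send memory-bound functions to the base die's processing unit and compute-bound functions to the compute die) can be sketched as below. This is an illustrative reading of the claim language, not code from any cited reference; every name here (`route_function`, `classify`, the opcode set) is hypothetical.

```python
from enum import Enum, auto

class Target(Enum):
    """Possible destinations for a function of a data query."""
    MEMORY_BASE_DIE = auto()  # processed by the processing unit on the base die
    COMPUTE_DIE = auto()      # processed by the compute die

def classify(opcode: str) -> str:
    """Determine a feature of the function; in Pappu's scheme the decoder
    keys on opcode or address. The opcode set below is invented."""
    memory_bound_ops = {"gather", "scatter", "scan"}  # hypothetical examples
    return "memory_bound" if opcode in memory_bound_ops else "compute_bound"

def route_function(opcode: str) -> Target:
    """Route a function of the data query based on the determined feature."""
    if classify(opcode) == "memory_bound":
        return Target.MEMORY_BASE_DIE
    return Target.COMPUTE_DIE
```

Under this reading, a memory-bound function such as a gather lands on the base die's processing unit (the claim 2 limitation), while anything else goes to the compute die (claim 3).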
MPEP 2143.

As per claim 2, Pappu in view of Malladi in view of Shahim teaches claim 1 as shown above and further teaches:

2. The method of claim 1, further comprising routing the first function to the memory base die for processing by the processing unit on the memory base die based on a determination that the first function is a memory bound function.

[Pappu, as shown above, teaches routing a request to a local memory on the accelerator die through a determination based on opcode or address (see claim 1 above; para. 22-25, 37), where a request bound for the local memory may correspond to a memory bound function.]

As per claim 10, Pappu in view of Malladi in view of Shahim teaches claim 1 as shown above and further teaches:

10. The method of claim 1, wherein: the memory controller is communicatively coupled to the processing unit, and a second memory controller on the memory base die is communicatively coupled to a second processing unit on the memory base die.

[Pappu teaches a transaction directed to a particular accelerator among a plurality of accelerators in the accelerator die, and said transaction being forwarded to an accelerator control unit within the accelerator (para. 31), whereby the die may necessarily comprise a plurality of accelerator control units corresponding to the respective accelerators.]

As per claim 18:

18. A non-transitory computer-readable medium storing code that comprises instructions executable by a processor of a device to: [Pappu teaches its embodiments being implemented as code in a non-transitory storage medium for configuring a processor to perform operations (para. 82)] determine at least one feature of a data query; route, based on the at least one feature of the data query, a first function of the data query to a memory base die for processing by a processing unit on the memory base die; and

[Pappu teaches a computing die and an accelerator die (base memory die) (para. 15-16; fig.
1), the accelerator die comprising an accelerator (processing unit) that may output requests (functions), a request decoder that may decode and direct the requests to the computing die or to local memory of the accelerator die based on the opcode or address (feature) of the request, and an accelerator control unit (memory controller) for configuring the accelerator and acting as glue logic for communication between the dies (para. 18-20, 22-25, 31). Routing a request to a local memory within the accelerator may comprise routing to the memory base die for processing by the processing unit (see para. 25, 37, providing, responsive to the request routed to the local memory, a response from the local memory being consumed by the accelerator). It would have been obvious to one of ordinary skill in the art, provided with the disclosures of Pappu, comprising the accelerator control unit configuring the accelerator and acting as glue logic for communication between the dies, to provide for a combination where the accelerator control unit may be configured to control the request decoder and other routing components in order to provide for improved management of die components engaged in communications between the dies.]

Pappu does not explicitly disclose, but Malladi discloses: process, via the processing unit, data that a memory controller on the memory base die receives from at least one of one or more memory dies stacked on top of the memory base die.

[Pappu, as shown above, teaches the accelerator control unit managing communications between the dies, and also teaches the accelerator die being used for offloading tasks from a computing die which may comprise a processor (see above; para. 15, 24). Pappu does not explicitly disclose, but Malladi teaches, a logic die for offloading computation work from a host (CPU, GPU, etc.), the logic die being part of a high bandwidth memory (HBM) stack with a plurality of HBM dies above the logic die (para. 23-25; fig.
2 and associated paragraphs). Malladi teaches that the logic die interfaces with the stack of HBM modules and performs the offloading based on data stored in the stack of HBM modules (para. 43-44; see fig. 6 and associated paragraphs). Where Pappu teaches communications between the dies, i.e., external communications, being managed by the accelerator control unit, it would have been obvious to one of ordinary skill in the art to provide for a combination where the accelerator control unit also manages other external communications, such as those associated with HBM modules, in order to provide for improved modularity in managing external communications with the accelerator or logic die.]

Pappu and Malladi are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu and Malladi, to modify the disclosures of Pappu to include the disclosures of Malladi since they both teach data storage and communication, wherein Malladi is directed towards improved capabilities of a stacked die configuration (para. 1-4, 20). Therefore, it would be applying a known technique (a die used for offloading operations and using stacked dies above the die) to a known device (a system comprising a die for offloading operations using an internal memory and comprising a control unit for configuring external communications) ready for improvement to yield predictable results (a system comprising a die for offloading operations using an internal memory and dies stacked above the die, wherein a control unit may manage external communications; doing so would provide for improved storage capability by providing additional storage space for storing data used in the offloaded operations).
MPEP 2143.

Pappu in view of Malladi does not explicitly disclose, but Shahim discloses: the recited "data query" (appearing in three limitations of claim 18) [Pappu in view of Malladi, as shown above, teaches routing requests (functions), but does not explicitly provide for a query comprising the functions; however, Shahim discloses queuing a set of received command streams and routing the command streams (para. 14-16)].

Pappu, Malladi, and Shahim are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi and of Shahim, to modify the disclosures of Pappu in view of Malladi to include the disclosures of Shahim since they both teach data storage and communication, wherein Shahim is directed towards improved parallel processing (para. 3). Therefore, it would be applying a known technique (receiving a set of command streams and routing the command streams) to a known device (a system for routing requests) ready for improvement to yield predictable results (a system for receiving a set of requests and routing the requests therein in order to provide for improved request processing capabilities). MPEP 2143.

As per claim 19, Pappu in view of Malladi in view of Shahim teaches claim 18 as shown above and further teaches:

19. The non-transitory computer-readable medium of claim 18, wherein the code includes further instructions executable by the processor to route the first function to the memory base die for processing by the processing unit on the memory base die based on a determination that the first function is a memory bound function.

[Pappu, as shown above, teaches routing a request to a local memory on the accelerator die through a determination based on opcode or address (see claim 1 above; para. 22-25, 37).]

Claims 3 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1), further in view of Gu et al. (US 20200184001 A1).

As per claim 3, Pappu in view of Malladi in view of Shahim teaches claim 1 as shown above and further teaches:

3. The method of claim 1, further comprising routing a second function of the data query to a compute die for processing by the compute die based on a determination that the second function is a compute bound operation,

[Pappu, as shown above, teaches routing a request to a computing die through a determination based on opcode or address (see claim 1 above; para. 22-25, 37), where being compute bound may correspond to belonging to the computing die instead of the local memory (memory bound).]

Pappu in view of Malladi in view of Shahim does not explicitly disclose, but Gu discloses: wherein the compute die is connected to the memory base die via a silicon interposer of a system in package that includes the compute die and the memory base die.

[Pappu in view of Malladi in view of Shahim teaches an interposer connected to the host as well as to the logic die (Malladi: para. 23; fig. 2 and associated paragraphs); it does not explicitly disclose the interposer being a silicon interposer, but Gu discloses a die stack offloading computations from a processor, the die stack and the processor both placed on a silicon interposer (para. 46-47; fig. 4 and associated paragraphs).]

The disclosures of Pappu, Malladi, Shahim, and Gu are analogous because they are in the same field of endeavor of data storage and transmission.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi in view of Shahim and of Gu, to modify the teachings of Pappu in view of Malladi in view of Shahim to include the teaching of Gu since they both teach data storage and transmission, wherein Gu is directed towards improved accelerator performance (para. 2-5). Therefore, it would have been a simple substitution of one type of interposer with another (a silicon interposer) ready for improvement to provide predictable results (improved performance over alternatives such as organic interposers). MPEP 2143.

As per claim 20, Pappu in view of Malladi in view of Shahim teaches claim 18 as shown above and further teaches:

20. The non-transitory computer-readable medium of claim 18, wherein the code includes further instructions executable by the processor to route a second function of the data query to a compute die for processing by the compute die based on a determination that the second function is a compute bound operation,

[Pappu, as shown above, teaches routing a request to a computing die through a determination based on opcode or address (see claim 1 above; para. 22-25, 37), where being compute bound may correspond to belonging to the computing die instead of the local memory (memory bound).]

Pappu in view of Malladi in view of Shahim does not explicitly disclose, but Gu discloses: wherein the compute die is connected to the memory base die via a silicon interposer of a system in package that includes the compute die and the memory base die.

[Pappu in view of Malladi in view of Shahim teaches an interposer connected to the host as well as to the logic die (Malladi: para. 23; fig.
2 and associated paragraphs); it does not explicitly disclose the interposer being a silicon interposer, but Gu discloses a die stack offloading computations from a processor, the die stack and the processor both placed on a silicon interposer (para. 46-47; fig. 4 and associated paragraphs).]

The disclosures of Pappu, Malladi, Shahim, and Gu are analogous because they are in the same field of endeavor of data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi in view of Shahim and of Gu, to modify the teachings of Pappu in view of Malladi in view of Shahim to include the teaching of Gu since they both teach data storage and transmission, wherein Gu is directed towards improved accelerator performance (para. 2-5). Therefore, it would have been a simple substitution of one type of interposer with another (a silicon interposer) ready for improvement to provide predictable results (improved performance over alternatives such as organic interposers). MPEP 2143.

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1), further in view of Moon et al. (US 20220075564 A1).

As per claim 4, Pappu in view of Malladi in view of Shahim discloses claim 1 as shown above. It does not explicitly disclose, but Moon discloses:

4. The method of claim 1, wherein the memory base die comprises a memory expansion port connected to at least one of a low power double data rate memory or a graphics double data rate memory external to the memory base die.

[Pappu in view of Malladi in view of Shahim, as shown above, teaches the accelerator control unit interfacing with the HBM dies stacked above the accelerator die (see claim 1 above; Malladi: para. 43-44; fig.
6 and associated paragraphs); it does not explicitly provide for, but Moon teaches, stacked memory devices implemented based on the HBM standard, while also teaching that the GDDR standard may be used instead (para. 168).]

The disclosures of Pappu, Malladi, Shahim, and Moon are analogous because they are in the same field of endeavor of data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi in view of Shahim and of Moon, to modify the teachings of Pappu in view of Malladi in view of Shahim to include the teaching of Moon since they both teach data storage and transmission, wherein Moon is directed towards improved operating methods of a memory system (para. 2). Therefore, it would have been a simple substitution of one type of memory with another type of memory (graphics double data rate) ready for improvement to provide predictable results (improved cost efficiency). MPEP 2143.

As per claim 5, Pappu in view of Malladi in view of Shahim in view of Moon discloses claim 4 as shown above and further teaches:

5. The method of claim 4, further comprising at least one of: routing, via the memory controller, functions of a first category to the memory base die for processing by the processing unit on the memory base die, routing, via the memory controller, functions of a second category to a compute die for processing by the compute die, or routing, via the memory controller, functions of a third category to at least one of the low power double data rate memory or the graphics double data rate memory external to the memory base die.

[Pappu in view of Malladi in view of Shahim, as shown above, teaches the accelerator control unit managing external communications and the routing of requests to the computing die or the local memory based on system address or opcode (see claim 1; Pappu: para.
18-20, 22-25, 31, 37); it would have been obvious to one of ordinary skill in the art, provided with the disclosures of Pappu in view of Malladi in view of Shahim in view of Moon as shown above, to provide for the accelerator control unit similarly managing the routing of requests to the stacked dies involved in the offloading operations (Malladi: para. 43-44) in order to provide for improved utilization of the stacked memory dies, where a third category may correspond to an address or opcode indicating a stacked memory die such as a GDDR.]

Pappu and Malladi are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu and Malladi, to modify the disclosures of Pappu to include the disclosures of Malladi since they both teach data storage and communication, wherein Malladi is directed towards improved capabilities of a stacked die configuration (para. 1-4, 20). Therefore, it would be applying a known technique (a die used for offloading operations and using stacked dies above the die) to a known device (a system comprising a die for offloading operations using an internal memory and comprising a control unit for configuring external communications) ready for improvement to yield predictable results (a system comprising a die for offloading operations using an internal memory and dies stacked above the die, wherein a control unit may manage external communications; doing so would provide for improved storage capability by providing additional storage space for storing data used in the offloaded operations). MPEP 2143.

The disclosures of Pappu, Malladi, Shahim, and Moon are analogous because they are in the same field of endeavor of data storage and transmission.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi in view of Shahim and of Moon, to modify the teachings of Pappu in view of Malladi in view of Shahim to include the teaching of Moon since they both teach data storage and transmission, wherein Moon is directed towards improved operating methods of a memory system (para. 2). Therefore, it would have been a simple substitution of one type of memory with another type of memory (graphics double data rate) ready for improvement to provide predictable results (improved cost efficiency). MPEP 2143.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1), further in view of Pappu (US 20180096735 A1, hereinafter Pappu 2).

As per claim 6, Pappu in view of Malladi in view of Shahim discloses claim 1 as shown above. It does not explicitly disclose, but Pappu 2 discloses:

6. The method of claim 1, further comprising: transferring, by way of a through silicon via, the data from the one or more memory dies to a physical layer interface of the memory base die; transferring the data from the physical layer interface to the memory controller of the memory base die; and transferring the data from the memory controller to a shared memory on the memory base die, wherein the shared memory holds the data for processing of the data by the processing unit.

[Pappu in view of Malladi in view of Shahim, as shown above, teaches an accelerator control unit of an accelerator managing external communications, including those with dies stacked above the accelerator die (see claim 1 above; Pappu: para. 1-18, 22-25, 31, and 37; also see para. 35 on the accelerator control unit directing an incoming memory request to a local memory die; Malladi: para.
43-44). Pappu 2 discloses a silicon via connecting a stacked DRAM to a chip below the DRAM, the silicon via being connected to a decoder and then to traffic re-router logic of the chip below (Pappu 2: para. 81-86, 103). It would have been obvious to one of ordinary skill in the art to combine the disclosures of Pappu in view of Malladi in view of Shahim, directed towards an accelerator control unit managing incoming communications including those directed towards a local memory, with the disclosures of Pappu 2, directed towards a silicon via connected to traffic re-router logic, to provide for a combination where the silicon via may be connected to the components managed by the accelerator control unit in order to provide for greater modularity in managing incoming communications to the accelerator die, where the local memory may comprise a shared memory by being used in association with a plurality of accelerators (Pappu: para. 25).]

Pappu, Malladi, Shahim, and Pappu 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi in view of Shahim and of Pappu 2, to modify the disclosures of Pappu in view of Malladi in view of Shahim to include the disclosures of Pappu 2 since they both teach data storage and communication, wherein Pappu 2 is directed towards improved performance of a stacked die architecture (para. 2-10).
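For orientation only, the ordered data path recited in claim 6 (stacked memory die, then through-silicon via, then the base die's physical layer interface, then its memory controller, then shared memory for the processing unit) can be traced with a toy sketch. The stage names below are hypothetical labels for the recited transfers, not components from any cited reference.

```python
def tsv_transfer_path() -> list[str]:
    """Order of the transfers recited in claim 6 (stage names hypothetical)."""
    return [
        "stacked_memory_die",   # data originates in a memory die stacked on the base die
        "through_silicon_via",  # transferred by way of a TSV
        "phy_interface",        # to the physical layer interface of the base die
        "memory_controller",    # then to the memory controller of the base die
        "shared_memory",        # held in shared memory for the processing unit
    ]
```

The point of the sketch is only the ordering: each recited transfer is downstream of the previous one, which is what the rejection maps onto Pappu 2's TSV-decoder-rerouter chain.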
Therefore, it would be applying a known technique (a silicon via connecting stacked dies, the silicon via connected to a decoder and then to traffic rerouting logic) to a known device (an accelerator control unit of a die managing incoming traffic, including traffic directed towards a local memory) ready for improvement to yield predictable results (an accelerator control unit of an accelerator die for managing incoming traffic, including traffic directed towards a local memory, the accelerator control unit configured to route data received, via a silicon via and through a decoder, from a die stacked on the accelerator die in order to provide for improved transmission speed between the dies). MPEP 2143.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1), further in view of Siegl et al. (US 20200081744 A1).

As per claim 7, Pappu in view of Malladi in view of Shahim teaches claim 1 as shown above. It does not explicitly disclose, but Siegl discloses:

7. The method of claim 1, wherein the processing unit comprises at least one of: a tensor core configured for matrix multiplication, or an accumulator configured for accumulating intermediate calculations.

[Pappu in view of Malladi in view of Shahim, as shown above, teaches an accelerator die and accelerator(s) therein for offloading machine learning processes (see claim 1 above; Pappu: para. 26; Malladi: para. 26); it does not explicitly describe the accelerator as performing matrix multiplication or accumulating intermediate calculations, but Siegl teaches accelerators involved in matrix-to-matrix operations and accumulating intermediate calculations (para. 4).]

Pappu, Malladi, Shahim, and Siegl are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi in view of Shahim and of Siegl, to modify the disclosures of Pappu in view of Malladi in view of Shahim to include the disclosures of Siegl since they both teach data storage and communication, wherein Siegl is directed towards improved computing performance (para. 1-3, 25). Therefore, it would be applying a known technique (an accelerator configured to perform matrix-to-matrix operations or accumulation of intermediate calculations) to a known device (an accelerator in a die for offloading machine learning processes) ready for improvement to yield predictable results (an accelerator in a die for offloading machine learning processes, including those involving matrix-to-matrix operations or accumulation of intermediate calculations, in order to provide for reduced calculation overhead of the computing die or a host). MPEP 2143.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1) in view of Fang et al. (US 20090300292 A1), further in view of Litt et al. (US 20250085875 A1).

As per claim 8, Pappu in view of Malladi in view of Shahim teaches claim 1 as shown above. It does not explicitly disclose, but Fang discloses:

8. The method of claim 1, wherein: the memory controller connects to the processing unit via a network on chip (NOC) interconnect bus, and

[Fang teaches a network on a chip, which may be a single-die integrated circuit that connects components including cores, specialized processors, accelerators, local memories, and other such structures through interconnect links (Fang: para. 11; fig. 1 and associated paragraphs).]

Pappu, Malladi, Shahim, and Fang are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi in view of Shahim and Fang, to modify the disclosures by Pappu in view of Malladi in view of Shahim to include the disclosures by Fang since they both teach data storage and communication, wherein Fang is directed towards improved memory system performance (para. 1-3, 28). Therefore, it would be applying a known technique (use of network-on-chip interconnect links within a chip) to a known device (an accelerator die comprising a plurality of components including an accelerator control unit and an accelerator) ready for improvement to yield predictable results (an accelerator die comprising network-on-chip interconnect links connecting a plurality of components in the die, including an accelerator control unit and an accelerator, to provide improved scalability and/or latency). MPEP 2143.

Pappu in view of Malladi in view of Shahim in view of Fang does not explicitly disclose, but Litt discloses: the memory controller connects to a dynamic random-access memory (DRAM) physical layer on the memory base die via a double data rate (DDR) physical layer interface of the memory base die. [Pappu in view of Malladi in view of Shahim as shown above teaches an accelerator control unit of an accelerator managing external communications, including those with HBM dies stacked above the accelerator die (see claim 1 above; Pappu: para. 1-18, 22-25, 31, and 37; Malladi: para. 43-44); Litt teaches that HBM includes a stack of DRAM dies and a wide-interface architecture providing operation to the stack of DRAM dies across multiple interfaces operating at double data rate (para. 2, 20; figs. 1-2 and associated paragraphs)] Pappu, Malladi, Shahim, Fang, and Litt are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission.
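For orientation only, the connectivity recited in claim 8 — a memory controller on the memory base die reaching the processing unit over a NOC interconnect bus and reaching the stacked DRAM through a DDR PHY — can be sketched as a toy software model. All class and method names here are hypothetical; this illustrates the claim language itself, not the disclosure of any cited reference:

```python
# Toy model of the claim 8 topology (hypothetical names; illustration only).
# The memory base die holds a memory controller that (a) serves a processing
# unit over a network-on-chip (NOC) interconnect bus and (b) reaches the
# stacked DRAM dies through a DDR physical layer interface.

class NocBus:
    """Minimal NOC interconnect: named endpoints exchanging request tuples."""
    def __init__(self):
        self.endpoints = {}

    def attach(self, name, handler):
        self.endpoints[name] = handler

    def send(self, dest, payload):
        return self.endpoints[dest](payload)

class DdrPhy:
    """Stand-in for the DDR PHY fronting the stacked DRAM dies."""
    def __init__(self):
        self.dram = {}

    def write(self, addr, value):
        self.dram[addr] = value

    def read(self, addr):
        return self.dram.get(addr)

class MemoryController:
    """On the memory base die: bridges the NOC side and the DDR PHY side."""
    def __init__(self, noc, phy):
        self.noc, self.phy = noc, phy
        noc.attach("memctrl", self.handle)

    def handle(self, req):
        op, addr, *rest = req
        if op == "write":
            self.phy.write(addr, rest[0])
            return "ok"
        return self.phy.read(addr)

noc = NocBus()
ctrl = MemoryController(noc, DdrPhy())
# A processing unit on the same die issues requests over the NOC:
noc.send("memctrl", ("write", 0x40, 123))
value = noc.send("memctrl", ("read", 0x40))
```

The point of the sketch is only the two-sided role of the controller: one interface faces the on-die interconnect, the other faces the DRAM stack.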
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi in view of Shahim in view of Fang and Litt, to modify the disclosures by Pappu in view of Malladi in view of Shahim in view of Fang to include the disclosures by Litt since they both teach data storage and communication, wherein Litt is directed towards improved memory performance (para. 2-3, 23). Therefore, it would be applying a known technique (HBM connecting to DRAM dies using interfaces operating at double data rate) to a known device (an accelerator die comprising an accelerator control unit for managing communications with the HBM stack above the accelerator die) ready for improvement to yield predictable results (an accelerator die comprising an accelerator control unit connecting to the DRAM die stack above the accelerator die using interfaces operating at double data rate, providing a compatible connection with the DRAM dies). MPEP 2143.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Shahim et al. (US 20210303346 A1) in view of Fang et al. (US 20090300292 A1) in view of Litt et al. (US 20250085875 A1) in view of Jin et al. (US 20250077457 A1).

As per claim 9, Pappu in view of Malladi in view of Shahim in view of Fang in view of Litt teaches claim 8 as shown above and further teaches: 9. The method of claim 8, wherein: a system bus interface connects to a die-to-die interface of the memory base die, and the processing unit connects to the system bus interface via the NOC interconnect bus, [Pappu in view of Malladi in view of Shahim in view of Fang in view of Litt as shown above teaches a bus connecting to a computing die and an accelerator die having components managed by an accelerator control unit for managing communications (Pappu: para.
17-20, 22-25, 31) and further teaches NOC links connecting components in the accelerator die (see claim 8 above; Fang: para. 11)] It does not explicitly disclose, but Jin discloses: the system bus interface converting data in a die-to-die flit format to a network packet format. [Jin teaches generating a die-to-die flit from a first-protocol-type transaction of a first chiplet and transmitting the die-to-die interface flit to a second chiplet, the process involving the first chiplet encoding the first-protocol-type transaction into a payload of a die-to-die interface flit and transmitting the payload to an adapter layer that generates a header and trailer for combining (converting to network packet format) with the payload for transmission to the second chiplet (abstract; para. 72-77)]

Pappu, Malladi, Shahim, Fang, Litt, and Jin are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi in view of Shahim in view of Fang in view of Litt and Jin, to modify the disclosures by Pappu in view of Malladi in view of Shahim in view of Fang in view of Litt to include the disclosures by Jin since they both teach data storage and communication, wherein Jin is directed towards improved communication between dies (para. 1-6, 63-64). Therefore, it would be applying a known technique (conversion of a protocol in a first chiplet for transmission to a second chiplet) to a known device (a system comprising an accelerator die communicating with a computing die) ready for improvement to yield predictable results (a system comprising an accelerator die communicating with a computing die, the communicating comprising converting protocols used within the accelerator die to a flit used for transmission, in order to provide improved communication between dies). MPEP 2143.

Claims 11-12, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1).

As per claim 11, A system comprising: a memory base die, the memory base die comprising: a memory controller; and a processing unit configured to process data that the memory controller receives from at least one of the one or more memory dies [Pappu teaches a computing die and an accelerator die (base memory die) (para. 15-16; fig.
1 and associated paragraphs), the accelerator die comprising accelerators (processing units) to which specific tasks may be offloaded and an accelerator control unit (memory controller) for configuring the accelerators and acting as glue logic for communication between the dies (para. 18-20, 22-25, 31). Pappu teaches that the accelerator control unit may serve memory requests, such as a write request to a local memory on the die (para. 35-36; fig. 5 and associated paragraphs), and that an accelerator may read data for processing from the local memory (para. 37, 25); where it would have been obvious to one of ordinary skill in the art that the accelerator control unit may store received data in a local memory for processing by accelerator(s) to provide improved latency] an interconnect that connects the memory controller to the one or more memory dies stacked on the memory base die and to multiple processing units that include the processing unit; [Pappu teaches a plurality of accelerators within the accelerator die (para. 25) and teaches that an integrated on-chip scalable fabric may be used to provide an on-die interconnect protocol for attaching components within a chip (para. 17), where the accelerators and the accelerator control unit may necessarily be connected via the fabric] and a die-to-die interface that connects the memory base die to a compute die of a system in package. [Pappu teaches an upstream switch port for connecting the accelerator die to the computing die (para.
17)] Pappu does not explicitly disclose, but Malladi discloses: one or more memory dies stacked on top of the memory base die; from at least one of the one or more memory dies, the data being routed to the processing unit based on at least one feature of a data query associated with the data; [Pappu as shown above teaches the accelerator control unit managing communications between the dies, and also teaches the accelerator die being used for offloading tasks from a computing die, which may comprise a processor (see above; para. 15, 24); Pappu does not explicitly disclose, but Malladi teaches a logic die for offloading computation work from a host (CPU, GPU, etc.), the logic die being part of a high bandwidth memory (HBM) stack with a plurality of HBM dies above the logic die (para. 23-25; fig. 2 and associated paragraphs); Malladi teaches that the logic die interfaces with the stack of HBM modules and performs the offloading based on data stored in the stack of HBM modules (para. 43-44; see fig. 6 and associated paragraphs); where Pappu teaches communications between the dies, i.e., external communications, being managed by the accelerator control unit and accelerators performing offload operations, it would have been obvious to one of ordinary skill in the art to provide a combination where the accelerator control unit also manages and directs offload operation requests/communications associated with HBM modules for processing by the accelerators, in order to provide improved modularity in managing external communications with the accelerator or logic die.] an interconnect that connects the memory controller to the one or more memory dies stacked on the memory base die and to multiple processing units that include the processing unit; [Where Pappu as shown above teaches that an integrated on-chip scalable fabric may be used to provide an on-die interconnect protocol for attaching components within a chip (Pappu: para.
17), an interface connecting to the stacked memory dies may necessarily be connected to the fabric as well] Pappu and Malladi are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu and Malladi, to modify the disclosures by Pappu to include the disclosures by Malladi since they both teach data storage and communication, wherein Malladi is directed towards improved capabilities of a stacked die configuration (para. 1-4, 20). Therefore, it would be applying a known technique (a die used for offloading operations using stacked dies above the die) to a known device (a system comprising a die with an accelerator for offloading operations and comprising a control unit for configuring external communications) ready for improvement to yield predictable results (a system comprising a die with an accelerator for offloading operations using dies stacked above the die, wherein a control unit may manage external communications including offloading operations originating from the stacked dies; doing so would provide improved modularity of managing communications originating from multiple sources). MPEP 2143.

As per claim 12, Pappu in view of Malladi teaches claim 11 as shown above and further teaches: 12. The system of claim 11, wherein a function of the data query is routed to the memory base die for processing by the processing unit based on a determination that the function is a memory bound function. [Pappu in view of Malladi as shown above teaches memory requests, such as write requests directed to local memory of the accelerator die for storing data for processing by an accelerator (see claim 11 above; Pappu: para. 35-37, 25; Malladi: para.
43-44), where a request directed to the local memory may correspond to being memory bound]

As per claim 14, Pappu in view of Malladi teaches claim 11 as shown above and further teaches: 14. The system of claim 11, wherein the system in package includes multiple memory base dies connected to the compute die, the multiple memory base dies including the memory base die. [Pappu as shown above teaches an accelerator die connected to a computing die (para. 15-16) and further teaches that more than one accelerator die may be present (para. 13)]

As per claim 16, Pappu in view of Malladi teaches claim 11 as shown above and further teaches: 16.
The system of claim 11, wherein the memory base die comprises a shared memory to share data between a first processing unit and a second processing unit of the multiple processing units. [Pappu teaches a local memory used by a plurality of accelerators (para. 25)]

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Gu et al. (US 20200184001 A1).

As per claim 13, Pappu in view of Malladi teaches claim 11 as shown above and further teaches: 13. The system of claim 11, wherein a function of a second data query is routed to the compute die for processing by the compute die based on a determination that the second function is a compute bound operation, [Pappu teaches requests from an accelerator being directed to the computing die based on a determination based on an opcode or system address (para. 23, 25), wherein compute bound may correspond to being bound for the computing die instead of a local memory] Pappu in view of Malladi does not explicitly disclose, but Gu discloses: wherein the compute die is connected to the memory base die via a silicon interposer of the system in package. [Pappu in view of Malladi teaches an interposer connected to the host as well as the logic die (Malladi: para. 23; fig. 2 and associated paragraphs); it does not explicitly disclose the interposer as a silicon interposer, but Gu discloses a die stack offloading computations from a processor, the die stack and the processor both placed on a silicon interposer (para. 46-47; fig. 4 and associated paragraphs)] The disclosures by Pappu, Malladi, and Gu are analogous because they are in the same field of endeavor of data storage and transmission.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi and Gu, to modify the teachings of Pappu in view of Malladi to include the teaching of Gu since they both teach data storage and transmission, wherein Gu is directed towards improved accelerator performance (para. 2-5). Therefore, it would have been a simple substitution of one type of interposer with another (a silicon interposer) to yield predictable results (improved performance over alternatives such as organic interposers). MPEP 2143.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Jin et al. (US 20250077457 A1).

As per claim 15, Pappu in view of Malladi teaches claim 11 as shown above. It does not explicitly disclose, but Jin discloses: 15. The system of claim 11, wherein: the memory base die comprises a system bus interface that connects the interconnect to the die-to-die interface of the memory base die, and the system bus interface maps a data format used by the interconnect to a data format used by the die-to-die interface. [Pappu in view of Malladi as shown above teaches an accelerator die comprising an integrated on-chip scalable fabric and an upstream switch port connecting to a computing die (see claim 11 above; Pappu: para. 17, 25); Jin teaches a controller of a first chiplet for converting requests of a first protocol used by the first chiplet, as received from the bus system of the first chiplet, to a die-to-die interface flit for transmission to a second chiplet, wherein the controller is situated between the bus system of the first chiplet and a PHY layer for transmitting the requests to the second chiplet via a UCIe interface (abstract; para. 72-77; fig.
3 and associated paragraphs)] Pappu, Malladi, and Jin are analogous to the claimed invention because they are in the same field of endeavor involving data storage and transmission. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Pappu in view of Malladi and Jin, to modify the disclosures by Pappu in view of Malladi to include the disclosures by Jin since they both teach data storage and communication, wherein Jin is directed towards improved communication between dies (para. 1-6, 63-64). Therefore, it would be applying a known technique (a controller for converting requests of a first protocol received from the bus system of a chiplet and transmitting the converted requests to the PHY layer of a UCIe interface for transmission to a second chiplet) to a known device (a die comprising a fabric interconnecting components therein and a switch port for communicating with a computing die) ready for improvement to yield predictable results (a controller for converting requests of a first protocol received by the fabric of a die and transmitting the converted requests to the PHY layer of a UCIe interface for transmission to a second chiplet, in order to provide improved communication between dies). MPEP 2143.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Pappu et al. (US 20190042240 A1) in view of Malladi et al. (US 20190050325 A1) in view of Moon et al. (US 20220075564 A1).

As per claim 17, Pappu in view of Malladi teaches claim 11 as shown above. It does not explicitly disclose, but Moon discloses: The system of claim 11, wherein the memory base die comprises a memory expansion port connected to at least one of a low power double data rate memory or a graphics double data rate memory external to the memory base die.
[Pappu in view of Malladi as shown above teaches the accelerator control unit interfacing with the HBM dies stacked above the accelerator die (see claim 1 above; Malladi: para. 43-44; fig. 6 and associated paragraphs); it does not explicitly provide for this, but Moon teaches stacked memory devices implemented based on the HBM standard, and further teaches that the GDDR standard may be used instead (para. 168)] The disclosures by Pappu, Malladi, and Moon are analogous because they are in the same field of endeavor of data storage and transmission. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pappu in view of Malladi and Moon, to modify the teachings of Pappu in view of Malladi to include the teaching of Moon since they both teach data storage and transmission, wherein Moon is directed towards improved operating methods of a memory system (para. 2). Therefore, it would have been a simple substitution of one type of memory with another type of memory (graphics double data rate) to yield predictable results (improved cost efficiency). MPEP 2143.

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Roberts (US 20220188606 A1) discloses a deep learning accelerator and a memory comprising a first stack of IC dies with a base die having a memory controller and a second die stacked on the first die to provide a first type of memory. A second stack of dies has a base die with a logic circuit configured to copy data within the same stack in response to commands from the memory controller.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIAS KIM whose telephone number is (571)272-8093. The examiner can normally be reached Monday - Friday: 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JARED RUTZ, can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.Y.K./Examiner, Art Unit 2135
/JARED I RUTZ/Supervisory Patent Examiner, Art Unit 2135
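The conversion mechanism attributed to Jin under claims 9 and 15 above — a first-protocol transaction encoded into a die-to-die flit payload, with an adapter layer adding a header and trailer before transmission to the second chiplet — can be illustrated with a toy byte-level sketch. The field widths, names, and CRC trailer here are hypothetical choices for the example, not Jin's actual format:

```python
import struct
import zlib

def encode_payload(txn: bytes) -> bytes:
    """Encode a first-protocol transaction into a fixed-size flit payload
    (hypothetical 64-byte payload, zero-padded)."""
    if len(txn) > 64:
        raise ValueError("transaction too large for one flit payload")
    return txn.ljust(64, b"\x00")

def adapter_frame(payload: bytes, dest_chiplet: int) -> bytes:
    """Adapter layer: prepend a header and append a trailer, yielding a
    packet-like unit for die-to-die transmission (hypothetical layout:
    2-byte destination id + 2-byte length header, CRC32 trailer)."""
    header = struct.pack(">HH", dest_chiplet, len(payload))
    trailer = struct.pack(">I", zlib.crc32(header + payload))
    return header + payload + trailer

def adapter_unframe(packet: bytes) -> bytes:
    """Receiving chiplet: verify the trailer and recover the flit payload."""
    header, payload, trailer = packet[:4], packet[4:-4], packet[-4:]
    (crc,) = struct.unpack(">I", trailer)
    assert crc == zlib.crc32(header + payload), "corrupted flit"
    return payload

txn = b"\x01READ:0x1000"  # hypothetical first-protocol transaction
packet = adapter_frame(encode_payload(txn), dest_chiplet=2)
recovered = adapter_unframe(packet).rstrip(b"\x00")
```

The sketch shows only the structural claim: the system bus interface's job is the payload encoding plus header/trailer framing that turns a flit into a network-packet-like unit, and the inverse on receipt.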

Prosecution Timeline

Jan 27, 2025
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591363
MEMORY PROGRAMMING METHOD, MEMORY DEVICE, AND MEMORY SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12541303
METHOD OF CLASSIFYING DATA BY LIFESPAN ACCORDING TO THE NUMBER OF TIMES OF MOVING DATA TO IMPROVE PERFORMANCE AND LIFESPAN OF FLASH MEMORY-BASED SSD
2y 5m to grant Granted Feb 03, 2026
Patent 12530150
TECHNIQUES FOR BALANCING WRITE COMMANDS ON SOLID STATE STORAGE DEVICES (SSDs)
2y 5m to grant Granted Jan 20, 2026
Patent 12517666
TECHNIQUES TO CONFIGURE ZONAL ARCHITECTURES OF MEMORY SYSTEMS
2y 5m to grant Granted Jan 06, 2026
Patent 12511234
MANAGING A PROGRAMMABLE CACHE CONTROL MAPPING TABLE IN A SYSTEM LEVEL CACHE
2y 5m to grant Granted Dec 30, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+34.0%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
