Prosecution Insights
Last updated: April 19, 2026
Application No. 17/900,018

PROCESSING-IN-MEMORY SYSTEM WITH DEEP LEARNING ACCELERATOR FOR ARTIFICIAL INTELLIGENCE

Status: Non-Final OA (§103)
Filed: Aug 31, 2022
Examiner: MEMON, OWAIS IQBAL
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Micron Technology, Inc.
OA Round: 5 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 74% — above average (75 granted / 101 resolved; +12.3% vs TC avg)
Interview Lift: strong, +22.4% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 27 applications currently pending
Career History: 128 total applications across all art units
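The headline figures above follow directly from the counts shown. As a quick sketch of the arithmetic (function name is ours, not the analytics vendor's), the 74% career allow rate is granted over resolved, and the "+12.3% vs TC avg" delta implies a Tech Center baseline near 62%:

```python
# Sanity-check of the examiner statistics shown above.

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

rate = allow_rate(75, 101)      # 75 granted / 101 resolved
print(round(rate))              # 74  (the "74%" shown above)
print(round(rate - 12.3, 1))    # 62.0 — implied Tech Center average
```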

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 30.6% (-9.4% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Deltas are relative to the estimated Tech Center average • Based on career data from 101 resolved cases
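Reading each "vs TC avg" delta as examiner rate minus Tech Center average, the four statute-specific figures above all back out the same baseline. This can be checked directly (a sketch; the 40% baseline is derived from the figures shown, not separately reported):

```python
# Back out the implied Tech Center average from each statute-specific
# figure above: examiner_rate - delta = TC average.
figures = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (4.4, -35.6),
    "103": (51.8, +11.8),
    "102": (30.6, -9.4),
    "112": (12.6, -27.4),
}

for statute, (rate, delta) in figures.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg = {implied_tc_avg}%")  # 40.0% in every case
```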

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/13/2026 has been entered.

Response to Applicant's Remarks/Arguments

Regarding Applicant's remarks that Stonelake et al. does not teach, for claims 1 and 24, a memory space that maps to a processing device, the examiner respectfully disagrees. Stonelake does teach a controller configured to control read and write access to addresses in a memory space that maps to at least one of the processing device ([0049] “The processing device 118 can write data to each of the memory sub-systems (e.g., 205) and read data from the memory sub-systems (e.g., 205) directly or indirectly.” and [0079] “CPU”). Stonelake further explains how the mapping is conducted in [0060]: “the memory cells of the memory devices can be grouped as memory pages or data blocks that can refer to a unit of the memory device used to store data.” After the mapping has been conducted, the CPU can access the mapped data, as stated in [0079]: “Once the mapping has been made by the driver, the host can access those pages in the DRAM… read and write operations can be performed using the CPU of the host system so that any data within a page that is mapped can be accessed.”

Regarding Applicant's remarks that Kim does not teach, for claim 27, a processing device mapped to a memory space, the examiner respectfully disagrees. Kim teaches the processing device ([0116] “The processor 110 may be configured to include at least one of a … artificial neural processing unit (NPU).”) mapped to a memory space of the host device ([0973] “The NPU scheduler may store the ANN data locality information in the form of a register map.” and [0953] “operation sequence configured in a unit of memory operation request of the NPU, a data domain, a data size, a memory address map configured for sequential addressing.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
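The rejections that follow turn on whether the cited art discloses a multiply-accumulate (MAC) engine performing neural-network computations over a shared memory space. As background only (not part of the Office Action record, and with names of our choosing rather than any cited reference's), the inner-product operation such an engine accelerates, e.g., as recited in claim 16, reduces to one multiply-accumulate step per element pair:

```python
# Illustrative sketch only: the MAC inner product at the core of the
# claimed neural-network engine. Variable names are hypothetical.

def mac_inner_product(weights, activations):
    """Accumulate element-wise products, as a MAC engine does in hardware."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one multiply-accumulate step
    return acc

print(mac_inner_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```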
Claims 1, 2, 4-11, 15 and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Stonelake et al. (US20190243788, hereinafter “Stonelake”) in view of Kim et al. (US20220137866, hereinafter “Kim”).

Claim 1. (Previously Presented) Stonelake teaches a system comprising:

dynamic random access memory; ([0027] “DRAM”)

static random access memory to store first data loaded from the dynamic random access memory; ([0027] “SRAM”)

a processing device configured to perform, using the first data stored in the static random access memory, ([0049] “The processing device 118 can write data to each of the memory sub-systems (e.g., 205) and read data from the memory sub-systems (e.g., 205) directly or indirectly.” Stonelake [0050] states that the memory sub-systems include “DRAM…. SRAM”)

and a single memory controller ([0030] “controller 116 can be referred to as a memory controller,”) configured to control read and write access to addresses in a memory space that maps to each of the dynamic random access memory, the static random access memory, ([0027] “the memory system controller can maintain a mapping table of DDR pages mapped to the SRAM buffer in order to speed up data access in cases for which the data is already in the SRAM buffer….”)

and at least one of the processing device or the multiply-accumulate engine. ([0049] “The processing device 118 can write data to each of the memory sub-systems (e.g., 205) and read data from the memory sub-systems (e.g., 205) directly or indirectly.” and [0079] “CPU”. Stonelake further explains how the mapping is conducted in [0060]: “the memory cells of the memory devices can be grouped as memory pages or data blocks that can refer to a unit of the memory device used to store data.” After the mapping has been conducted, the CPU can access the mapped data, as stated in [0079]: “Once the mapping has been made by the driver, the host can access those pages in the DRAM… read and write operations can be performed using the CPU of the host system so that any data within a page that is mapped can be accessed.”)

Stonelake does not explicitly teach computations for a neural network, or a multiply-accumulate engine configured to support the computations.

Kim teaches computations for a neural network ([1064] “Referring to FIG. 55, the NPU and one or more internal memories are implemented in the form of a System on Chip (SoC). The internal memory may be SRAM. Accordingly, the NPU and the internal memory may be connected through an SRAM interface.” and [0008] “neural processing unit (NPU) which is a processor of an ANN memory system optimized for processing an artificial neural network (ANN) model.”) and a multiply-accumulate engine configured to support the computations ([0135] “multiplication and accumulation (MAC) operations”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stonelake to perform computations for a neural network with a MAC engine supporting those computations, as taught by Kim, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been to improve performance (Kim [0007] “improve the operation processing performance of the artificial neural network model and that an artificial neural network memory system which is capable of improving the problems needed to be developed.”).

Claim 2.
(Original) Stonelake and Kim teach the system of claim 1. Stonelake does not explicitly teach further comprising a virtual memory manager, wherein the memory space is visible to the memory manager, the processing device is a first processing device, and the memory manager manages memory used by a second processing device.

Kim teaches a virtual memory manager, wherein the memory space is visible to the memory manager ([0503] “artificial neural network memory controller AMC”), the processing device is a first processing device, and the memory manager manages memory used by a second processing device ([0503] “AMC configured to include at least one processor and receive a data access request from at least one processor to provide the memory access request to at least one memory.” and Fig. 14, which shows a second processor utilizing the AMC).

[Kim, Fig. 14 reproduced in greyscale]

Claim 4. (Original) Stonelake and Kim teach the system of claim 2. Stonelake does not explicitly teach wherein the second processing device is configured to receive image data from a camera, and provide the image data for use as an input to the neural network.

Kim teaches wherein the second processing device is configured to receive image data from a camera, and provide the image data for use as an input to the neural network. ([0265] “For example, in the case of the artificial neural network model which recognizes an object of an image of a front camera” and [0533] “various peripheral devices such as WIFI devices, displays, cameras, or microphones may be connected to the system bus of the artificial neural network memory system 400.”)

Claim 5. (Original) Stonelake and Kim teach the system of claim 4. Stonelake does not explicitly teach wherein the second processing device is further configured to perform image processing of the received image data, and a result from processing the image data is the input to the neural network.

Kim teaches wherein the second processing device is further configured to perform image processing of the received image data ([0115] “inference functions which may be inferred by the artificial neural network, such as object recognition…image processing.” and [0533] “In this case, various peripheral devices such as… cameras… may be connected to the system bus of the artificial neural network”), and a result from processing the image data is the input to the neural network ([0115] “inference result of the artificial neural network model in accordance with the input data.” and [0533] “the artificial neural network memory system 400 may be configured to control the bandwidth of the system bus”).

Claim 6. (Original) Stonelake and Kim teach the system of claim 5. Stonelake does not explicitly teach wherein the image processing comprises image segmentation, and the result is a segmented image.

Kim teaches wherein the image processing ([0922] “inputs of image data”) comprises image segmentation, and the result is a segmented image ([0922] “output feature map”).

Claim 7. (Original) Stonelake and Kim teach the system of claim 6. Stonelake does not explicitly teach wherein an output of the neural network is a classification result, and the classification result identifies an object in the segmented image, or the segmented image.

Kim teaches wherein an output of the neural network is a classification result, and the classification result identifies an object in the segmented image, or the segmented image. ([0518] “artificial neural network model which is processed by the first processor (Processor 1) may be an object recognition model” and [0003] “object detection”)

Claim 8.
(Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake does not explicitly teach further comprising registers to configure at least one of the processing device or the multiply-accumulate engine; and a memory interface configured to use a common command and data protocol for reading data from and writing data to the dynamic random access memory, the static random access memory, and the registers.

Kim teaches registers to configure at least one of the processing device or the multiply-accumulate engine ([0134] “The processing elements PE may fix one data of an input feature map pixel (Ifmap pixel: I), a filter weight W, and a partial sum (Psum: P) to a register of the processing elements PE.”), and a memory interface configured to use a common command and data protocol for reading data from and writing data ([0809] “command to read or write a specific size of data into a specific address in memory,”) to the dynamic random access memory, the static random access memory ([0317] “The volatile memory may include a dynamic RAM (DRAM) and a static RAM (SRAM).”), and the registers ([0984] “For example, the internal memory may be a SRAM or a register. The internal memory may simultaneously perform a read operation and a write operation.”).

Claim 9. (Previously Presented) Stonelake and Kim teach the system of claim 8. Stonelake teaches wherein the memory interface is a double data rate memory bus. ([0029] “double data rate (DDR) memory bus, etc.”)

Claim 10. (Original) Stonelake and Kim teach the system of claim 1. Stonelake does not explicitly teach wherein the neural network is at least one of a convolutional neural network, or a deep neural network.

Kim teaches wherein the neural network is at least one of a convolutional neural network, or a deep neural network. ([0221] “artificial neural network model applicable to the present disclosure may be a convolutional neural network (CNN) which is one of deep neural networks (DNN).”)

Claim 11. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake does not explicitly teach further comprising a plurality of registers associated with at least one of the processing device or the multiply-accumulate engine, wherein the registers are configurable for controlling operation of the processing device or the multiply-accumulate engine.

Kim teaches a plurality of registers associated with at least one of the processing device or the multiply-accumulate engine, wherein the registers are configurable for controlling operation of the processing device ([0164] “special function register included in the processor”) or the multiply-accumulate engine ([0697] “register map for NPU control”).

Claim 15. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake teaches further comprising a command bus that couples the memory controller to the DRAM ([0027] “the mapping table can be updated such that the access is redirected from SRAM to DRAM.”) and SRAM ([0027] “memory system controller can maintain a mapping table of DDR pages mapped to the SRAM buffer”), wherein: the memory controller comprises a command buffer ([0087] “buffer controller 412 reads and writes from the SRAM buffer 410. Buffer controller”) and a state machine; and the state machine is configured to provide a sequence of commands from the command buffer to the command bus. ([0087] “multiplexer 408 directs traffic either from the host to certain DRAM channels for volatile memory 402, or as needed internally in the memory module 401 (during tRFCs that are reserved for SRAM buffer read and write so that the host is not issuing read/write commands to the DRAM at this time).”)

Claim 19. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake teaches wherein the processing device is further configured to communicate with the DRAM to move data between the DRAM and the SRAM in support of the computations.
([0049] “The processing device 118 can write data to each of the memory sub-systems (e.g., 205) and read data from the memory sub-systems (e.g., 205) directly or indirectly.” Stonelake [0050] states that the memory sub-systems include “DRAM…. SRAM”)

Claim 20. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake teaches wherein the SRAM ([0036] “memory sub-system 110 can include…SRAM”) is configurable to operate as a memory for the processing device, or as a cache ([0036] “cache or buffer…(e.g.,..SRAM)”) between the processing device and the DRAM. (Fig. 1 shows the memory sub-system 110, which acts as a cache between the processing device and the DRAM.)

Claim 21. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake teaches wherein the memory controller accesses the dynamic random access memory using a memory bus protocol, the system further comprising a memory manager configured to: manage the memory space as memory for a host device, wherein the memory space includes a first address ([0064] “address that are associated with the memory”) corresponding to at least one register of the processing device ([0089] “control registers are provided in the memory module”); receive a signal from the host device to configure the processing device ([0079] “Thus, read and write operations can be performed”); translate the signal ([0064] “address translations”) to a first command and first data in accordance with the memory bus protocol ([0064] “the controller (e.g., 227) can receive commands, requests or instructions from the processing device 118 in accordance with a standard communication protocol for the communication channel (e.g., 203) and can convert the commands, requests or instructions in compliance with the standard protocol into detailed instructions or appropriate commands within the memory sub-system”), wherein the first data corresponds to a configuration of the processing device ([0064] “appropriate commands within the memory sub-system (e.g., 205) to achieve the desired access to the memory”); and send the first command, the first address, and the first data to the memory controller ([0064] “controller (e.g., 227) can receive commands, requests or instructions from the processing device 118”) so that the first data is written to the register ([0062] “Local memory of the controller (e.g., 227) can include read-only memory (ROM) for storing micro-code and/or memory registers storing, e.g., memory pointers, fetched data, etc.”).

Claim 22. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake teaches further comprising a memory manager configured to: manage the memory space for a host device ([0027] “the memory system controller can maintain a mapping table of DDR pages mapped to the SRAM buffer in order to speed up data access in cases for which the data is already in the SRAM buffer….”); and send a command to the memory controller that causes reading of data from a register in the processing device or the multiply-accumulate engine ([0034] “The controller 115 of the memory sub-system 110 can communicate with the memory components 109A to 109N to perform operations such as reading data, writing data,” and [0034] “the local memory 119 can include memory registers storing memory pointers, fetched data, etc.”).

Stonelake does not explicitly teach providing, to the host device and based on the read data, a status of the computations. Kim teaches providing, to the host device and based on the read data, a status of the computations. ([0988] “The AMC reads the data to be requested by the NPU based on the ANN data locality information from the main memory before the NPU requests it and stores it in the buffer memory.”)

Claim 23.
(Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake does not explicitly teach further comprising a memory manager configured to: receive, from a host device, a signal indicating a new configuration; and, in response to receiving the signal, send a command to the memory controller that causes writing of data to a register so that operation of the processing device or the multiply-accumulate engine is according to the new configuration.

Kim teaches a memory manager configured to: receive, from a host device, a signal indicating a new configuration ([0972] “the compiler of the NPU may be configured to analyze the artificial neural network data locality.”); and, in response to receiving the signal, send a command to the memory controller that causes writing of data ([0973] “The ANN data locality information may be stored in a memory provided inside the NPU scheduler or the NPU internal memory. The NPU scheduler can access the main memory to read or write necessary data.”) to a register so that operation of the processing device or the multiply-accumulate engine is according to the new configuration ([0973] “The NPU scheduler may store the ANN data locality information in the form of a register map.”).

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Stonelake et al. (US20190243788, hereinafter “Stonelake”) in view of Kim et al. (US20220137866, hereinafter “Kim”), and further in view of Ramani et al. (US20230110438, hereinafter “Ramani”).

Claim 3. (Original) Stonelake and Kim teach the system of claim 2 (as outlined above). Stonelake and Kim do not explicitly teach wherein the first and second processing devices are on different semiconductor dies.

Ramani teaches wherein the first and second processing devices are on different semiconductor dies. ([0328] “multi-chip modules may be used… make substantial improvements over utilizing a conventional central processing unit (“CPU”) and bus implementation.”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to place the first and second processing devices on different semiconductor dies, as taught by Ramani, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that (Ramani [0049] “such an approach can minimize storage requirements, increase a speed of computation, and save power for performing complex operation”).

Claim 12. (Original) Stonelake and Kim teach the system of claim 11. Stonelake and Kim do not explicitly teach wherein at least one of the registers is configurable in response to a command received by the memory controller from a host device.

Ramani teaches wherein at least one of the registers is configurable ([0331] “such shared memory may incur performance penalties relative to use of local memory and registers resident to a PPU 1014.”) in response to a command received ([0331] “operation of PPUs 1014 is synchronized through use of a command” and [0331] “memory is shared and accessible (e.g., for read and/or write access)”) by the memory controller from a host device ([0331] “host processor or other peripheral devices”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to have a register configurable in response to a command received from the host device, as taught by Ramani, to arrive at the claimed invention discussed above.
The motivation for the proposed modification would have been that (Ramani [0049] “such an approach can minimize storage requirements, increase a speed of computation, and save power for performing complex operation”).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Stonelake et al. (US20190243788, hereinafter “Stonelake”) in view of Kim et al. (US20220137866, hereinafter “Kim”), and further in view of Diffen.com (https://web.archive.org/web/20210301183100/https://www.diffen.com/difference/Dynamic_random-access_memory_vs_Static_random-access_memory, hereinafter “Website”).

Claim 13. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake and Kim do not explicitly teach wherein a data storage capacity of the static random access memory is less than 20 percent of the data storage capacity of the dynamic random access memory.

Website teaches wherein a data storage capacity of the SRAM is less than 20 percent of the data storage capacity of the DRAM. (p. 3 of the PDF: “DRAM module can have up to 6 times more capacity than an SRAM module.” An SRAM module therefore typically has about 16% of the DRAM storage capacity.) It is well known in the art that SRAM capacity is typically less than 20% of DRAM capacity.

Claims 14, 16 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Stonelake et al. (US20190243788, hereinafter “Stonelake”) in view of Kim et al. (US20220137866, hereinafter “Kim”), and further in view of Ware et al. (US20230266968, hereinafter “Ware”).

Claim 14. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake and Kim do not explicitly teach wherein the DRAM, the SRAM, the processing device, and the MAC engine are on a same die.

Ware teaches wherein the dynamic random access memory ([0025] “DRAM”), the static random access memory ([0030] “MAC processors includes an L0 SRAM stripe 211”), the processing device ([0030] “processor”), and the multiply-accumulate engine ([0030] “MAC processors”) are on a same die. ([0052] “Moreover, the various inferencing IC embodiments (and component circuits thereof) presented herein may be implemented within a standalone integrated circuit component”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to place the DRAM, SRAM, processor, and MAC engine on the same die, as taught by Ware, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been (Ware [0018] “substantially reduced processing latency”).

Claim 16. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake and Kim do not explicitly teach wherein the multiply-accumulate engine is further configured as a coprocessor that accelerates an inner product of two vectors resident in the static random access memory.

Ware teaches wherein the multiply-accumulate engine is further configured as a coprocessor ([0030] “MAC processors 203”) that accelerates an inner product of two vectors ([0030] “vector multiplication operations”) resident in the static random access memory ([0030] “MAC processors includes an L0 SRAM stripe 211 (e.g., to store K filter weight operands to be multiplied, within a given MAC processor,”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to configure the MAC engine as a coprocessor that accelerates an inner product of two vectors resident in the SRAM, as taught by Ware, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been (Ware [0018] “substantially reduced processing latency”).

Claim 24. (Previously Presented) Stonelake teaches a system comprising:

dynamic random access memory; ([0027] “DRAM”)
a memory controller ([0030] “controller 116 can be referred to as a memory controller,”) configured to control read and write access to addresses in a memory space that maps to the dynamic random access memory ([0027] “the memory system controller can maintain a mapping table of DDR pages mapped to the SRAM buffer in order to speed up data access in cases for which the data is already in the SRAM buffer….”) and the processing device; ([0049] “The processing device 118 can write data to each of the memory sub-systems (e.g., 205) and read data from the memory sub-systems (e.g., 205) directly or indirectly.” and [0079] “CPU”. Stonelake further explains how the mapping is conducted in [0060]: “the memory cells of the memory devices can be grouped as memory pages or data blocks that can refer to a unit of the memory device used to store data.” After the mapping has been conducted, the CPU can access the mapped data, as stated in [0079]: “Once the mapping has been made by the driver, the host can access those pages in the DRAM… read and write operations can be performed using the CPU of the host system so that any data within a page that is mapped can be accessed.”)

and a memory manager configured to: receive, from a host device, a new configuration for the processing device ([0064] “appropriate commands within the memory sub-system (e.g., 205) to achieve the desired access to the memory”); translate the new configuration to at least one command, and at least one address in the memory space ([0064] “address translations”); and send the command and the address to the memory controller, wherein the memory controller is configured to, in response to receiving the command, update at least one register of the processing device to implement the new configuration. ([0064] “the controller (e.g., 227) can receive commands, requests or instructions from the processing device 118 in accordance with a standard communication protocol for the communication channel (e.g., 203) and can convert the commands, requests or instructions in compliance with the standard protocol into detailed instructions or appropriate commands within the memory sub-system”)

Stonelake does not explicitly teach a processing device configured to perform computations for a neural network, wherein the processing device and the dynamic random access memory are located on a same semiconductor die.

Kim teaches a processing device configured to perform computations for a neural network. ([1064] “Referring to FIG. 55, the NPU and one or more internal memories are implemented in the form of a System on Chip (SoC). The internal memory may be SRAM. Accordingly, the NPU and the internal memory may be connected through an SRAM interface.” and [0008] “neural processing unit (NPU) which is a processor of an ANN memory system optimized for processing an artificial neural network (ANN) model.”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stonelake to perform computations for a neural network with a MAC engine supporting those computations, as taught by Kim, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been to improve performance (Kim [0007] “improve the operation processing performance of the artificial neural network model and that an artificial neural network memory system which is capable of improving the problems needed to be developed.”).

Kim does not explicitly teach wherein the processing device and the dynamic random access memory are located on a same semiconductor die.

Ware teaches wherein the processing device ([0030] “processor”) and the dynamic random access memory ([0025] “DRAM”) are located on a same semiconductor die. ([0052] “Moreover, the various inferencing IC embodiments (and component circuits thereof) presented herein may be implemented within a standalone integrated circuit component”)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to place the DRAM and the processor on the same die, as taught by Ware, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been (Ware [0018] “substantially reduced processing latency”).

Claim 25. (Previously Presented) Stonelake, Kim and Ware teach the system of claim 24. Stonelake does not explicitly teach further comprising a memory interface to receive images from the host device, wherein the images are stored in the dynamic random access memory and used as inputs to the neural network.

Kim teaches a memory interface to receive images from the host device, wherein the images are stored in the dynamic random access memory ([0317] “memory may include a dynamic RAM (DRAM)”) and used as inputs to the neural network.
([0265] For example, in the case of the artificial neural network model which recognizes an object of an image of a front camera” and [0533] “various peripheral devices such as WIFI devices, displays, cameras, or microphones may be connected to the system bus of the artificial neural network memory system 400.”) Claim 26. (Previously Presented) Stonelake, Kim and Ware teach The system of claim 24, Stonelake does not explicitly teach wherein the memory controller is configured to access the dynamic random access memory using a memory bus protocol, and the command and address are compliant with the memory bus protocol. Kim teaches wherein the memory controller is configured ([0321] “The artificial neural network memory controller 220 may be configured to transmit the received data to the processor 210 again.”) to access the dynamic random access memory ([0317] “The volatile memory may include a dynamic RAM (DRAM)”) using a memory bus protocol, and the command and address are compliant with the memory bus protocol. ([0319] “The operation mode which controls the memory operation may include a read mode or a write mode.” ) Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Stonelake et al (US20190243788, hereinafter “Stonelake”) and in view of Kim et al. (US20220137866, hereinafter “Kim”) and in view of Fortino et al (US5421000, hereinafter “Fortino”) Claim 17. (Previously Presented) Stonelake and Kim teach The system of claim 1, Stonelake and Kim do not explicitly teach wherein a row size of the static random access memory matches a row size of the dynamic random access memory. Fortino teaches wherein a row size of the static random access memory matches a row size of the dynamic random access memory. 
(col. 3, line 9: “SRAM buffer equal in size to a single row of the dynamic RAM cells”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to have the row size of the SRAM match the row size of the DRAM, as taught by Fortino, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been “to provide higher speed systems at relatively low cost” (Fortino, col. 1, line 16).

Claim 18. (Previously Presented) Stonelake and Kim teach the system of claim 1. Stonelake and Kim do not explicitly teach further comprising a state machine configured to generate signals to control the dynamic random access memory and the static random access memory, wherein the signals comprise read and write strobes for banks of the dynamic random access memory, and read and write strobes for banks of the static random access memory.

Fortino teaches further comprising a state machine (col. 6, line 40: “state machine” and col. 8, line 10: “RAS signal”) configured to generate signals to control the dynamic random access memory (col. 5, line 31: “DRAM”) and the static random access memory (col. 5, line 30: “SRAM”), wherein the signals comprise read and write strobes (col. 5, line 34: “to read or write.”) for banks of the dynamic random access memory, and read and write strobes for banks of the static random access memory (col. 5, line 34: “to read or write.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Stonelake and Kim to have read and write strobes for the SRAM and DRAM, as taught by Fortino, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been “to provide higher speed systems at relatively low cost” (Fortino, col. 1, line 16).

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US20220137866, hereinafter “Kim”) in view of Ware et al. (US20230266968, hereinafter “Ware”).

Claim 27. (Previously Presented) Kim teaches a method comprising:

receiving image data from a camera of a host device ([0265] “For example, in the case of the artificial neural network model which recognizes an object of an image of a front camera” and [0533] “various peripheral devices such as WIFI devices, displays, cameras, or microphones may be connected to the system bus of the artificial neural network memory system 400.”);

performing image processing on the image data to provide first data ([0115] “inference functions which may be inferred by the artificial neural network, such as object recognition…image processing.” and [0533] “In this case, various peripheral devices such as… cameras… may be connected to the system bus of the artificial neural network”);

storing, by a memory controller, the first data ([0809] “command to read or write a specific size of data into a specific address in memory,”) in a dynamic random access memory ([0317] “The volatile memory may include a dynamic RAM (DRAM)”);

loading at least a portion of the first data to a static random access memory ([0317] “and a static RAM (SRAM).”);

performing, by a processing device, computations for a neural network ([1064] “Referring to FIG. 55, the NPU and one or more internal memories are implemented in the form of a System on Chip (SoC). The internal memory may be SRAM.
Accordingly, the NPU and the internal memory may be connected through an SRAM interface.” and [0008] “neural processing unit (NPU) which is a processor of an ANN memory system optimized for processing an artificial neural network (ANN) model.”),

wherein the first data is an input to the neural network ([0265] “For example, in the case of the artificial neural network model which recognizes an object of an image of a front camera” and [0533] “various peripheral devices such as WIFI devices, displays, cameras, or microphones may be connected to the system bus of the artificial neural network memory system 400.”) and the static random access memory stores an output from the neural network ([0907] “the buffer memory or internal memory composed of SRAM,”);

storing, by copying from the static random access memory, the output in the dynamic random access memory ([1075] “data was read from the DRAM, which is the main memory, to the SRAM,”),

wherein the dynamic random access memory, the static random access memory, and the processing device ([0116] “The processor 110 may be configured to include at least one of a … artificial neural processing unit (NPU).”) map to a memory space of the host device ([0973] “The NPU scheduler may store the ANN data locality information in the form of a register map.” and [0953] “operation sequence configured in a unit of memory operation request of the NPU, a data domain, a data size, a memory address map configured for sequential addressing.”), and the memory controller controls read and write access to the memory space ([0984] “read operation and a write operation.” and [0984] “the internal memory in the NPU may be a static memory. For example, the internal memory may be a SRAM or a register.”);

and sending the output to the host device, wherein the host device uses the output to identify an object in the image data ([0265] “For example, in the case of the artificial neural network model which recognizes an object of an image of a front camera” and [0533] “various peripheral devices such as WIFI devices, displays, cameras, or microphones may be connected to the system bus of the artificial neural network memory system 400.”).

Kim does not explicitly teach the static random access memory on a same chip as the dynamic random access memory, or the processing device on the same chip as the dynamic random access memory and the static random access memory.

Ware teaches static random access memory on a same chip as the dynamic random access memory ([0052] “Moreover, the various inferencing IC embodiments (and component circuits thereof) presented herein may be implemented within a standalone integrated circuit component”) and a processing device ([0030] “processor”) on the same chip as the dynamic random access memory and static random access memory ([0052] “Moreover, the various inferencing IC embodiments (and component circuits thereof) presented herein may be implemented within a standalone integrated circuit component”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to have the DRAM, SRAM, and processing device on the same die, as taught by Ware, to arrive at the claimed invention discussed above.
The motivation for the proposed modification would have been the “substantially reduced processing latency” (Ware [0018]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:

Pardo et al. (US20230110316) teaches utilizing a memory controller unit to run a neural network with associated DRAM and SRAM.

Musleh et al. (US20210092069) teaches using a memory controller to control communication between memory devices for the implementation of a neural network to extract features of an image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON, whose telephone number is (571) 272-2168. The examiner can normally be reached M-F, 7:00 am - 4:00 pm CST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OWAIS I MEMON/
Examiner, Art Unit 2663
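Purely as an editorial illustration (not part of the Office Action record, and with every name, address, and size hypothetical), the claim 27 dataflow that the examiner maps onto Kim and Ware — image data staged in DRAM, a portion loaded into on-die SRAM, a neural-network computation producing an output in SRAM, and that output copied back to DRAM for the host — can be sketched as:

```python
# Hypothetical model of the claim 27 dataflow. Python dictionaries
# stand in for the DRAM and SRAM address spaces; the "NPU" computation
# is a trivial placeholder. Nothing here reflects the actual claimed
# implementation.

DRAM = {}   # main memory, keyed by address
SRAM = {}   # on-die buffer, keyed by address

def store_to_dram(addr, data):
    """Memory controller writes the first data (processed image) to DRAM."""
    for i, byte in enumerate(data):
        DRAM[addr + i] = byte

def load_to_sram(dram_addr, sram_addr, length):
    """Load a portion of the first data from DRAM into SRAM."""
    for i in range(length):
        SRAM[sram_addr + i] = DRAM[dram_addr + i]

def run_neural_network(sram_addr, length):
    """Placeholder for the NPU computation; stores its output in SRAM."""
    result = sum(SRAM[sram_addr + i] for i in range(length)) % 256
    out_addr = sram_addr + length
    SRAM[out_addr] = result
    return out_addr

def copy_output_to_dram(sram_addr, dram_addr):
    """Copy the output from SRAM back into DRAM for the host device."""
    DRAM[dram_addr] = SRAM[sram_addr]

# A 4-byte "image" flows through the pipeline.
image = [10, 20, 30, 40]
store_to_dram(0x1000, image)
load_to_sram(0x1000, 0x0, len(image))
out_addr = run_neural_network(0x0, len(image))
copy_output_to_dram(out_addr, 0x2000)
```

The sketch models only the memory movements recited in the claim; real processing-in-memory hardware would drive these steps with read/write strobes and a memory bus protocol, with all three memories mapped into the host's memory space, rather than with function calls.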

Prosecution Timeline

Aug 31, 2022
Application Filed
Feb 12, 2025
Non-Final Rejection — §103
May 19, 2025
Response Filed
May 27, 2025
Final Rejection — §103
Aug 04, 2025
Response after Non-Final Action
Sep 02, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103
Dec 23, 2025
Response Filed
Jan 09, 2026
Final Rejection — §103
Mar 13, 2026
Request for Continued Examination
Mar 14, 2026
Response after Non-Final Action
Mar 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597224
SYSTEM AND METHOD FOR FEATURE SUB-IMAGE DETECTION AND IDENTIFICATION IN A GIVEN IMAGE
2y 5m to grant Granted Apr 07, 2026
Patent 12591989
METHOD FOR DEPTH ESTIMATION AND HEAD-MOUNTED DISPLAY
2y 5m to grant Granted Mar 31, 2026
Patent 12592013
REAL SCENE IMAGE EDITING METHOD BASED ON HIERARCHICALLY CLASSIFIED TEXT GUIDANCE
2y 5m to grant Granted Mar 31, 2026
Patent 12586338
SYSTEM FOR UPDATING NEURAL NETWORK PARAMETERS BASED ON OBJECT DETECTION AREA OVERLAP SCORE
2y 5m to grant Granted Mar 24, 2026
Patent 12573069
SYSTEMS AND METHODS FOR GENERATING AND CODING MULTIPLE FOCAL PLANES FROM TEXTURE AND DEPTH
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
97%
With Interview (+22.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
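As a rough check on how the headline figures relate (assuming, as this page states, that grant probability is a simple career allow rate and that the interview lift adds directly, with rounding as sketched here), the projections can be reproduced from the examiner's record of 75 grants in 101 resolved cases:

```python
# Reproduce the dashboard's headline figures from the examiner's
# career statistics. The additive-lift and rounding assumptions are
# illustrative, not the vendor's documented formula.
granted, resolved = 75, 101

allow_rate = granted / resolved          # career allow rate ≈ 0.743
interview_lift = 0.224                   # observed lift with interview

grant_probability = round(allow_rate * 100)                  # 74
with_interview = round((allow_rate + interview_lift) * 100)  # 97

print(grant_probability, with_interview)
```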
