Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/29/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-4, 6 and 8-22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6 and 8-22 are rejected under 35 U.S.C. 103 as being unpatentable over US20200285892A1 to Baum et al. and US6957308B1 to Patel.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over US20200285892A1 to Baum et al., US6957308B1 to Patel and US7035131B2 to Huang et al.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over US20200285892A1 to Baum et al., US6957308B1 to Patel and US4926385A to Fujishima et al.
Baum teaches claim 1. (Currently amended) A memory device for an artificial neural network (ANN), the memory device comprising: (Baum abs “improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms.”)
at least one memory cell array of N columns and M rows; and (Baum para 310 “individual memory locations 856, denoted D11 through DUV that are accessed via address lines ADDR0 through ADDRUV-1, where the first digit of the D subscript represents the column and the second digit represents the row.”)
a memory controller configured to sequentially perform a read or write operation of data of the at least one memory cell array in a (Baum para 311 “The UP/DOWN signal indicates whether sequential access to the memory increases or decreases after each access…” Baum para 171 “the memory fabric is organized and constructed utilizing the following: (1) localization of memory where computing elements require access to local data which permits accessibility of any given computing element to a predefined and limited memory entity; (2) structured organization whereby memory content is organized a priori in a given consistent matter…” The memory is controlled to provide sequential access, so there must be a memory controller.)
wherein an address map for a plurality of parameters of a plurality of layers of the ANN is sequentially set, (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the predetermined sequential access information of the ANN is determined at compilation of the ANN to be executed by a processor based on characteristics of the processor, (Baum para 284 “It is the function of the compiler and SDK to map the logical ANN model to physical NN processor… Layer 1 maps into the entire NN processor 1 since its capacity in terms of compute elements, memory fabric, etc. is only sufficient to implement Layer 1.” Baum para 205 “In operation, input data 216 and weights 218 are provided from the L3 memory at the cluster level to the input interconnect 206 in accordance with control signal 201.” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the predetermined sequential access information is generated based on ANN data locality information of the artificial neural network, and (Baum para 148 “Since a single compute unit memory access pattern is structured and well-defined by the ANN and does not require full random access to the entire memory,” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the memory controller is further configured to set memory addresses of data for each of operation steps to be stored in the at least one memory cell array based on the predetermined sequential access information. (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum does not teach a burst mode.
However, Patel teaches a burst mode. (Patel abs. “A memory device may be implemented to respond to and one or more command encodings that specify different burst lengths than the burst length indicated by the current burst length setting for the memory device.”)
Baum, Patel, and the claimed invention are all directed to memory devices. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to include Patel’s dynamic burst mode because, in the prior art, the burst size was fixed and “bandwidth… [was] wasted… or the memory controller… [would] have to reissue commands…” Patel 1:40.
Baum teaches claim 2. (Original) The memory device of claim 1, wherein each of the at least one memory cell array comprises a plurality of dynamic memory cells having a leakage current characteristic. (Baum para 216 “Additional features include: (1) weight/input data balancing; (2) pre and post-processing blocks; (3) dynamic bus width and memory bit cell…” The memory bit cell is also “dynamic”, according to list item “(3)”.)
Baum does not explicitly recite a “leakage current characteristic,” but all electronic memory cells exhibit leakage current to some degree.
However, Huang teaches memory cells having a leakage current characteristic. (Huang abs “A circuit operable to measure leakage current in a Dynamic Random Access Memory (DRAM) is provided comprising a plurality of DRAM bit cell access transistors coupled to a common bit line, a common word line, and a common storage node, wherein said access transistors may be biased to simulate a corresponding plurality of inactive bit cells of a DRAM; and a current mirror in communication with the common storage node operable to mirror a total leakage current from said plurality of bit cell access transistors when the access transistors are biased to simulate the inactive bit cells.”)
Baum, Huang, and the claimed invention are all directed to memory systems. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to recognize the leakage current characteristic of Baum’s memory cells, as taught by Huang, and to monitor that leakage because “[r]educing leakage current to low system power dissipation is presently a challenge…” Huang 1:30.
Baum teaches claim 3. (Original) The memory device of claim 1, wherein each of the at least one memory cell array comprises: (Baum para 216 “Additional features include: (1) weight/input data balancing; (2) pre and post-processing blocks; (3) dynamic bus width and memory bit cell…”)
Baum does not teach the recited structural components of the memory cell array.
However, Fujishima teaches a column decoder for controlling access to the N columns; (Fujishima abs “groups comprising a predetermined number of bit lines with block information transferred simultaneously from corresponding ones of the groups of bit lines of a selected block when the column address corresponding to the selected block is applied.” The block selector is the column decoder.)
a plurality of bit lines connected to the column decoder; (Fujishima abs “groups comprising a predetermined number of bit lines with block information transferred simultaneously from corresponding ones of the groups of bit lines of a selected block when the column address corresponding to the selected block is applied.” The block selector is the column decoder.)
a row decoder for controlling access to the M rows; (Fujishima abs “Word line selecting circuitry selects one of the word lines responsive to a row address and reads out to each of the bit lines information stored in the memory cell…” Row decoder is the row selector.)
a plurality of word lines connected to the row decoder; and (Fujishima abs “Word line selecting circuitry selects one of the word lines responsive to a row address and reads out to each of the bit lines information stored in the memory cell…” Row decoder is the row selector.)
a sense amplifier connected to one end of each the plurality of bit lines. (Fujishima abs “A first column selector circuit selects the sense amplifiers corresponding to a column address when the column address is applied and reads information held in the sense amplifier.”)
Baum, Fujishima, and the claimed invention are all directed to memory. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to use Fujishima’s arrangement “to provide a semiconductor memory device for a simple cache system having a high hit rate.” Fujishima 3:20.
Baum teaches claim 4. (Original) The memory device of wherein the at least one memory cell array stores data required for operation of the artificial neural network, (Baum fig. 7b “weight from memory”) wherein the memory controller is further configured to control data communication between a processor and the at least one memory cell array, and wherein the processor is configured to process the artificial neural network operation based on the predetermined sequential access information. (Baum para 216 “Additional features include: (1) weight/input data balancing; (2) pre and post-processing blocks; (3) dynamic bus width and memory bit cell…” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum teaches claim 6. (Original) The memory device of claim 1, wherein the memory controller is further configured to directly control an address of the N columns and the M rows of the at least one memory cell array so that the at least one memory cell array operates in the (Baum para 309 “The UP/DOWN signal indicates whether sequential access to the memory increases or decreases after each access, i.e. whether the preceding or subsequent location is accessed in the memory…. The address offset 920 output of the circuit 842 is used to generate the physical addressing to the memory 844.” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum does not teach a burst mode.
However, Patel teaches a burst mode. (Patel abs. “A memory device may be implemented to respond to and one or more command encodings that specify different burst lengths than the burst length indicated by the current burst length setting for the memory device.”)
Baum teaches claim 8. (Original) The memory device of claim 1, wherein the memory controller is further configured to store data of the artificial neural network by sequentially allocating addresses corresponding to the N columns and the M rows of the at least one memory cell array. (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.” The addresses have to be allocated to enable sequential access. Baum para 310 “individual memory locations 856, denoted D11 through DUV that are accessed via address lines ADDR0 through ADDRUV-1, where the first digit of the D subscript represents the column and the second digit represents the row.”)
Baum teaches claim 9. (Currently amended) A memory device for an artificial neural network (ANN),the memory device comprising: (Baum abs “improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms.”)
at least one memory cell array; and (Baum para 310 “individual memory locations 856, denoted D11 through DUV that are accessed via address lines ADDR0 through ADDRUV-1, where the first digit of the D subscript represents the column and the second digit represents the row.”)
a memory controller configured to directly control a read or write operation of the at least one memory cell array based on ANN data locality information of the artificial neural network, wherein the ANN data locality information includes sequence information with respect to all data access requests required to perform an inference operation of the artificial neural network, (Baum para 311 “The UP/DOWN signal indicates whether sequential access to the memory increases or decreases after each access…” Baum para 171 “the memory fabric is organized and constructed utilizing the following: (1) localization of memory where computing elements require access to local data which permits accessibility of any given computing element to a predefined and limited memory entity; (2) structured organization whereby memory content is organized a priori in a given consistent matter…” The memory is controlled to provide sequential access, so there must be a memory controller.)
wherein the memory controller is further configured to control the at least one memory cell array so that the at least one memory cell array operates in (Baum para 284 “It is the function of the compiler and SDK to map the logical ANN model to physical NN processor… Layer 1 maps into the entire NN processor 1 since its capacity in terms of compute elements, memory fabric, etc. is only sufficient to implement Layer 1.” Baum para 205 “In operation, input data 216 and weights 218 are provided from the L3 memory at the cluster level to the input interconnect 206 in accordance with control signal 201.” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the predetermined sequential access information is generated based on ANN data locality information of the artificial neural network. (Baum para 148 “Since a single compute unit memory access pattern is structured and well-defined by the ANN and does not require full random access to the entire memory,” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum does not teach a burst mode.
However, Patel teaches a burst mode. (Patel abs. “A memory device may be implemented to respond to and one or more command encodings that specify different burst lengths than the burst length indicated by the current burst length setting for the memory device.”)
Baum, Patel, and the claimed invention are all directed to memory devices. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to include Patel’s dynamic burst mode because, in the prior art, the burst size was fixed and “bandwidth… [was] wasted… or the memory controller… [would] have to reissue commands…” Patel 1:40.
Baum teaches claim 10. (Original) The memory device of claim 9, wherein the ANN data locality information includes predetermined operation sequence information of the artificial neural network. (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum in view of Patel teaches claim 11. (Original) The memory device of claim 9, wherein the ANN data locality information includes data size information of each operation of a preset sequence of operations. (Patel abs. “A memory device may be implemented to respond to and one or more command encodings that specify different burst lengths than the burst length indicated by the current burst length setting for the memory device.” Burst length is the data size information.)
Baum teaches claim 12. (Original) The memory device of wherein the memory controller is further configured to store a memory map, and
wherein the memory map is configured in a sequential manner based on operation sequence information and a data size of each of operation sequences. (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.” The bit lines are only so wide, so the sequential access of the memory is based on the data size of “each of operation sequences”.)
Baum teaches claim 13. (Original) The memory device of wherein the ANN data locality information includes a signal for identifying a weight, an input feature map, and an output feature map, wherein a pattern of an operation sequence of the weight, the input feature map, and the output feature map is determined by compilation based on characteristics of a processor. (Baum para 247 “the LCU also generates the control signals (i.e. MWC select controls) for controlling the control window as well (along with the weight, ingress and egress data windows).” Baum para 228 “the memory fabric can dynamically rearrange the memory windowing scheme whereby the memory resources accessible by compute elements is programmable and configurable (e.g., at compile time…” Ingress is the input feature map, and egress is the output feature map.)
Baum teaches claim 14. (Original) The memory device of claim 9, wherein the ANN data locality information is determined based on at least one of a characteristic of an artificial neural network model, a characteristic of a processor, a size of a cache memory, and an operation algorithm policy. (Baum para 284 “It is the function of the compiler and SDK to map the logical ANN model to physical NN processor… Layer 1 maps into the entire NN processor 1 since its capacity in terms of compute elements, memory fabric, etc. is only sufficient to implement Layer 1.” Baum para 205 “In operation, input data 216 and weights 218 are provided from the L3 memory at the cluster level to the input interconnect 206 in accordance with control signal 201.” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum teaches claim 15. (Currently amended) A memory device for an artificial neural network (ANN), the memory device comprising: (Baum abs “improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms.”)
at least one dynamic memory cell array; and (Baum para 310 “individual memory locations 856, denoted D11 through DUV that are accessed via address lines ADDR0 through ADDRUV-1, where the first digit of the D subscript represents the column and the second digit represents the row.”) Baum para 216 “Additional features include: (1) weight/input data balancing; (2) pre and post-processing blocks; (3) dynamic bus width and memory bit cell…” The memory bit cell is also “dynamic”, according to list item “(3)”.)
a memory controller configured to store data of the artificial neural network in the at least one dynamic memory cell array according to a sequence based on ANN data locality information, (Baum para 311 “The UP/DOWN signal indicates whether sequential access to the memory increases or decreases after each access…” Baum para 171 “the memory fabric is organized and constructed utilizing the following: (1) localization of memory where computing elements require access to local data which permits accessibility of any given computing element to a predefined and limited memory entity; (2) structured organization whereby memory content is organized a priori in a given consistent matter…” The memory is controlled to provide sequential access, so there must be a memory controller.)
wherein the ANN data locality information includes sequence information with respect to all data access requests required to perform an inference operation of the artificial neural network, (Baum para 284 “It is the function of the compiler and SDK to map the logical ANN model to physical NN processor… Layer 1 maps into the entire NN processor 1 since its capacity in terms of compute elements, memory fabric, etc. is only sufficient to implement Layer 1.” Baum para 205 “In operation, input data 216 and weights 218 are provided from the L3 memory at the cluster level to the input interconnect 206 in accordance with control signal 201.” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the memory controller is further configured to control the at least one dynamic memory cell array so that the at least one dynamic memory cell array operates in information, and (Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
wherein the predetermined sequential access information is generated based on ANN data locality information of the artificial neural network. (Baum para 148 “Since a single compute unit memory access pattern is structured and well-defined by the ANN and does not require full random access to the entire memory,” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum does not teach a burst mode.
However, Patel teaches a burst mode. (Patel abs. “A memory device may be implemented to respond to and one or more command encodings that specify different burst lengths than the burst length indicated by the current burst length setting for the memory device.”)
Baum, Patel, and the claimed invention are all directed to memory devices. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to include Patel’s dynamic burst mode because, in the prior art, the burst size was fixed and “bandwidth… [was] wasted… or the memory controller… [would] have to reissue commands…” Patel 1:40.
Baum teaches claim 16. The memory device of claim 15, wherein the sequence based on the ANN data locality information includes a repeating pattern having an order of an input feature map, a kernel, and an output feature map. (Baum fig. 23 shows all of these different orders, see below. The pattern is the order of operations in the NN operations.)
[Image: Baum fig. 23, media_image1.png, greyscale]
Baum teaches claim 17. The memory device of claim 15, wherein the sequence based on the ANN data locality information includes a repeating pattern having an order of a kernel, an input feature map, and an output feature map. (Baum fig. 23 shows all of these different orders, see above. The pattern is the order of operations in the NN operations. Kernel multiplied by the input is commutative, so input-kernel order is the same as kernel-input order.)
Baum teaches claim 18. (Previously presented) The memory device of claim 15, wherein the ANN data locality information is configured in a unit of a data access request requested by a processor and sent to the memory controller. (Baum para 148 “Since a single compute unit memory access pattern is structured and well-defined by the ANN and does not require full random access to the entire memory,” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.”)
Baum teaches claim 19. (Original) The memory device of claim 15, wherein the memory controller is further configured to divide each of the at least one dynamic memory cell array into a kernel area and a feature map area based on information for identifying a kernel, an input feature map, and an output feature map. (Baum fig. 13, below. Ingress is input, egress is output, the weights are the kernel.)
[Image: Baum fig. 13, media_image2.png, greyscale]
Baum teaches claim 20. (Previously presented) The memory device of wherein each of the at least one dynamic memory cell array comprises a plurality of banks configured to enable an (Baum fig. 13 above shows blocks as banks.)
Baum does not teach a burst mode corresponding to an interleaving operation.
However, Patel teaches operating in the burst mode corresponding to the interleaving operation for the plurality of banks. (Patel 4:45 “operate in the burst mode corresponding to the interleaving operation for the plurality of banks.”)
Baum teaches claim 21. (Original) The memory device of claim 15, further comprising a processor configured to provide the ANN data locality information to the memory controller. (Baum para 148 “Since a single compute unit memory access pattern is structured and well-defined by the ANN and does not require full random access to the entire memory,” Baum para 164 “the static order of computations combined with an appropriate arrangement of parameters in memory enables sequential access to memory.” Baum para 230 ”control for compute element 1 spans memory blocks 584, 586, and 588, denoted by Control 1 arrow 590.”)
Baum teaches claim 22. (Original) The memory device of claim 15, further comprising a processor configured to provide the memory controller with information for identifying an input feature map, a kernel, and an output feature map. (Baum fig. 13 above and para 230 “control for compute element 1 spans memory blocks 584, 586, and 588, denoted by Control 1 arrow 590.”)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571)270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AUSTIN HICKS/Primary Examiner, Art Unit 2142