Prosecution Insights
Last updated: April 19, 2026
Application No. 18/968,200

MEMORY FOR ARTIFICIAL INTELLIGENCE APPLICATION AND METHODS THEREOF

Non-Final OA — §102, §103, §112
Filed: Dec 04, 2024
Examiner: KROFCHECK, MICHAEL C
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: Everspin Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81% (530 granted / 652 resolved; +26.3% vs TC avg) — above average
Interview Lift: +17.1% on resolved cases with interview — strong
Typical Timeline: 2y 11m average prosecution
Career History: 672 total applications across all art units; 20 currently pending

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 652 resolved cases.

Office Action

Grounds: §102, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to application 18/968,200 filed on 12/4/2024. Claims 1-20 have been examined.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/4/2024 and 6/18/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-7 and 10-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The terms "medium write endurance," "very low bit error rate," "fast read rate," "slow write rate," "high write endurance," "fast write rate," "low write endurance," "medium bit error rate," "high speed IO memory scheme," "high energy barrier," and "medium energy barrier" in claims 4-7 and 10-12 are relative terms which render the claims indefinite.
The terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 9-11, 13, and 20 are rejected under 35 U.S.C. 102(a)(1), (a)(2) as being anticipated by Rom et al. (US 2020/0185027).

With respect to claim 1, Rom teaches an artificial neural network device (fig. 1; paragraph 48; SSD containing a deep learning neural network), comprising: input circuitry configured to provide input data into a neuron (fig. 2-3; paragraph 55, 58, 63; where input components are used to input values that may be activation values for the neuron of the specific layer); weight operation circuitry electrically connected to the input circuitry, the weight operation circuitry configured to input a weight value into the neuron (fig. 2-3; paragraph 58, 63; where synaptic weights are sensed from NVM storage and summation circuits multiply the weights with corresponding activation values for the neurons); bias operation circuitry electrically connected to the weight operation circuitry, the bias operation circuitry configured to input a bias value into the neuron (fig. 2-3; paragraph 58, 63; where bias values are sensed from the NVM storage and added to the output of the summation circuit); activation function circuitry electrically connected to the bias operation circuitry, the activation function circuitry configured to receive an output of the bias operation circuitry and output an activation function output (fig. 2-3; paragraph 58; where the sigmoid/RLU function circuits receive the results of the bias being added to the output of the summation circuit and compute resulting activation values for the neuron of the next layer); and a storage device including storage circuitry electrically connected to the weight operation circuitry, the bias operation circuitry, and the activation function circuitry (fig. 2-3, 18; paragraph 55, 58, 63, 85; NVM array storage 202, 302, and 1804; as shown in the figures, the NVM storage is electrically connected to the circuit modules that perform the operations), wherein the input circuitry, the weight operation circuitry, the bias operation circuitry, and the activation function circuitry are operated based on code data (paragraph 93, 120-121; where system data, "data pertaining to the overall control of operations of the die," is used to carry out the operations), and wherein the storage device includes a plurality of storage portions, each storage portion of the plurality of storage portions configured to store one or more of the code data, the input data, the weight value, the bias value, or the activation function output (fig. 2-3, 18; paragraph 58, 61, 91-93; where the array has blocks for storing input data, synaptic weights, bias values, and system data (the claimed code data)).

With respect to claim 20, Rom teaches a method of operating a device of an artificial neural network based on code data, the method comprising: receiving, at weight operation circuitry of the device, an input value (activation values) via input circuitry of the device (fig. 2-3; paragraph 55, 58, 63; where input components are used to input values for the neuron of the specific layer); providing a weight value from a storage device to the weight operation circuitry; applying, at the weight operation circuitry, the weight value to the input value to form a weighted value (fig. 2-3; paragraph 58, 63; where synaptic weights are sensed from NVM storage and summation circuits multiply the weights with corresponding activation values for the neurons); providing the weighted value to bias operation circuitry of the device; providing a bias value from the storage device to the bias operation circuitry of the device; applying, at the bias operation circuitry, the bias value to the weighted value to form a biased weighted value (fig. 2-3; paragraph 58, 63; where bias values are sensed from the NVM storage and added to the output of the summation circuit using the bias addition circuits); providing the biased weighted value to activation function circuitry of the device; and applying, at the activation function circuitry, an activation function to the biased weighted value to generate an activation function output (fig. 2-3; paragraph 58; where the sigmoid/RLU function circuits receive the results of the bias being added to the output of the summation circuit and compute resulting activation values for the neuron of the next layer), wherein the storage device includes a plurality of storage portions, each storage portion of the plurality of storage portions configured to store one or more of the code data, the input value, the weight value, the bias value, or the activation function output (fig. 2-3, 18; paragraph 58, 61, 91-93; where the array has blocks for storing input data, synaptic weights, bias values, and system data (the claimed code data)), and wherein the storage device is integrated into or disposed proximate a chip including the input circuitry, the weight operation circuitry, the bias operation circuitry, and the activation function circuitry (fig. 18; paragraph 91, 114-116; where the NAND die is a monolithic three dimensional memory array, thus integrated as a chip).

With respect to claim 2, Rom teaches wherein the storage device is a magnetoresistive random-access memory (MRAM) device (paragraph 29, 110; where the NVM can be MRAM arrays).

With respect to claim 3, Rom teaches wherein the plurality of storage portions includes a code storage portion, a data storage portion, and a weight storage portion (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, and synaptic weights), wherein the code storage portion is configured to store the code data (fig. 18; paragraph 92; where the memory array has blocks for storing system data, i.e., data pertaining to the overall control of operations of the NAND die), wherein the data storage portion is configured to store one or more of the input data or the activation function output (fig. 18; paragraph 92; where the memory array has blocks for storing input data), and wherein the weight storage portion is configured to store one or more of the weight value or the bias value (fig. 18; paragraph 92; where the memory array has blocks for storing synaptic weights and bias values).

With respect to claim 4, Rom teaches wherein the plurality of storage portions includes a code storage portion configured to store the code data (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, synaptic weights, and bias values) and support one or more of: an unlimited read endurance; a medium write endurance; a very low bit error rate; a fast read rate; or a slow write rate (fig. 18; paragraph 92-93; as the memory array is a NAND memory array, which has a faster read rate than its write rate).

With respect to claim 5, Rom teaches wherein the plurality of storage portions includes a data storage portion configured to store one or more of the input data or the activation function output (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, synaptic weights, and bias values) and support one or more of: an unlimited read endurance; a high write endurance; a very low bit error rate; a fast read rate; or a fast write rate (fig. 18; paragraph 92-93; as the memory array is a NAND memory array, which has a faster read rate than its write rate).

With respect to claim 6, Rom teaches wherein the plurality of storage portions includes a weight storage portion configured to store one or more of the weight value or the bias value (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, synaptic weights, and bias values) and support one or more of: an unlimited read endurance; a low write endurance; a medium bit error rate; a fast read rate; or a slow write rate (fig. 18; paragraph 92-93; as the memory array is a NAND memory array, which has a faster read rate than its write rate).

With respect to claim 9, Rom teaches wherein the plurality of storage portions includes a code storage portion, a data storage portion, and a weight storage portion (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, and synaptic weights), and wherein the data storage portion, the code storage portion, and the weight storage portion each include a plurality of magnetic tunnel junctions (MTJs) (paragraph 29, 110; as the NVM arrays are MRAM arrays, each of the storage blocks must include multiple MTJs, as MTJs are the fundamental building block of MRAM).

With respect to claim 10, Rom teaches wherein the plurality of storage portions includes a code storage portion configured to store the code data (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, and synaptic weights) and support one or more of: a parallel IO memory scheme, a serial IO memory scheme, or a high speed IO memory scheme; a write-verify write scheme; an error correction code (ECC) scheme with at least two-bit error correction; or a magnetic tunnel junction (MTJ) having a high energy barrier (fig. 1; paragraph 49; where the host interface may be any suitable communication interface, such as a Universal Serial Bus (USB) interface, a Serial Peripheral (SP) interface, a Serial Advanced Technology Attachment (SATA) interface).

With respect to claim 11, Rom teaches wherein the plurality of storage portions includes a data storage portion configured to store one or more of the input data or the activation function output (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, and synaptic weights) and support one or more of: a parallel IO memory scheme, a serial IO memory scheme, or a high speed IO memory scheme; a single pulse write scheme; an error correction code (ECC) scheme with at least two-bit error correction; or a magnetic tunnel junction (MTJ) having a medium energy barrier (fig. 1; paragraph 49; where the host interface may be any suitable communication interface, such as a Universal Serial Bus (USB) interface, a Serial Peripheral (SP) interface, a Serial Advanced Technology Attachment (SATA) interface).

With respect to claim 13, Rom teaches wherein the storage device is integrated into or disposed proximate a chip including the input circuitry, the weight operation circuitry, the bias operation circuitry, and the activation function circuitry (fig. 18; paragraph 91, 114-116; where the NAND die is a monolithic three dimensional memory array, thus integrated as a chip).

Claims 14-19 are rejected under 35 U.S.C. 102(a)(1), (a)(2) as being anticipated by Torng et al. (US 2019/0363131).

With respect to claim 14, Torng teaches a device configured to store data associated with an artificial neural network (fig. 1; paragraph 34; where the logic circuits in the single chip implementation implement neural networks), the device comprising: a first storage portion configured to store a first data type associated with the artificial neural network and support a first set of storage characteristics (fig. 1; paragraph 32, 35-38; where the first memory stores updatable/rewritable training parameters of a CNN model); and a second storage portion configured to store a second data type associated with the artificial neural network and support a second set of storage characteristics (fig. 1; paragraph 32, 35-38; where the second memory stores input data that is processed by the CNN model), wherein the first set of storage characteristics and the second set of storage characteristics are different (fig. 1; paragraph 32, 35-38; where each memory has different characteristics and the physical parameters of the memory cells are varied to further adjust the memory characteristics).

With respect to claim 15, Torng teaches wherein each of the first data type and the second data type includes code data, input data, or weight and bias data, and wherein the first data type and the second data type are different (paragraph 27-29, 35; where the memory stores executable instructions (code data), the first memory stores updatable/rewritable training parameters of the CNN model, and the second memory stores input data processed by the CNN model).

With respect to claim 16, Torng teaches wherein each of the first set of storage characteristics and the second set of storage characteristics includes one or more of storage performance specifications, an input/output (I/O) scheme, a write scheme, an error correction code (ECC) scheme, or storage bit characteristics (paragraph 32, 34, 38, 46; where the memories can withstand different numbers of read/write cycles and the physical parameters of the memory cells are varied).

With respect to claim 17, Torng teaches wherein the first data type includes code data and the second data type includes weight and bias data (paragraph 27-29, 35; where the memory stores executable instructions and training parameters including the weights and bias parameters).

With respect to claim 18, Torng teaches wherein the device includes a magnetoresistive random-access memory (MRAM) (paragraph 42-44; where the first through third memories can be implemented with multiple types of memory including MRAM).

With respect to claim 19, Torng teaches a third storage portion configured to store a third data type associated with the artificial neural network and support a third set of storage characteristics (fig. 1; paragraph 32, 35-38; where the third memory stores OTP parameters of the CNN model), wherein the first set, the second set, and the third set of storage characteristics are different from each other (fig. 1; paragraph 32, 35-38; where the third memory is an OTP memory).
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Rom, Tran et al. (US 2022/0108759), and Ram et al. (US 2020/0264974).

With respect to claim 7, Rom teaches wherein the plurality of storage portions includes a weight storage portion configured to support a low write endurance (fig. 18; paragraph 43, 73, 92; where the memory array stores synaptic weights and bias values; the memory array storing the weights and bias values is TLC or QLC memory, which have a low write endurance compared to SLC). Rom fails to explicitly teach the low write endurance including a first number of write cycles associated with an inference operation and a second number of write cycles associated with a re-training operation, wherein the first number of write cycles is larger than the second number of write cycles. However, Tran teaches the low write endurance including a first number of write cycles associated with an inference operation (paragraph 27; where MLC memory is used to store weights and inputs during inference operations). Ram teaches a second number of write cycles associated with a re-training operation (paragraph 10; where TLC and QLC memory can be used to store the weights for training). The combination of Rom, Tran, and Ram teaches wherein the first number of write cycles is larger than the second number of write cycles (Rom, fig. 18; paragraph 43, 73, 92; Tran, paragraph 27; Ram, paragraph 10; as write endurance is lower the more bits are stored per cell, MLC memory has a higher write endurance (number of write cycles) than TLC or QLC memory).

Rom and Tran are analogous art because they are from the same field of endeavor, as they are directed to storage containing neural networks. It would have been obvious to one of ordinary skill in the art having the teachings of Rom and Tran before the time of the effective filing of the claimed invention to incorporate the memory storing weights as MLC memory in Rom as taught in Tran. Their motivation would have been to more efficiently use the memory.

Rom, Tran, and Ram are analogous art because they are from the same field of endeavor, as they are directed to storage containing neural networks. It would have been obvious to one of ordinary skill in the art having the teachings of Rom, Tran, and Ram before the time of the effective filing of the claimed invention to incorporate the memory storing weights as TLC or QLC memory in the combination of Rom and Tran as taught in Ram. Their motivation would have been to more efficiently use the memory.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Rom and Troia (US 2022/0399050).

With respect to claim 8, Rom teaches wherein the plurality of storage portions includes two or more of a code storage portion, a data storage portion, or a weight storage portion (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, synaptic weights, and bias values), wherein the code storage portion and the weight storage portion include a magnetoresistive random-access memory (MRAM) (paragraph 29, 110; where the memory array can be MRAM). Rom fails to explicitly teach wherein the data storage portion includes a dynamic random access memory (DRAM). However, Troia teaches wherein the data storage portion includes a dynamic random access memory (DRAM) (fig. 1; paragraph 25, 28; where the memory array can include DRAM).

Rom and Troia are analogous art because they are from the same field of endeavor, as they are directed to storage containing neural networks. It would have been obvious to one of ordinary skill in the art having the teachings of Rom and Troia before the time of the effective filing of the claimed invention to incorporate the DRAM storing data in the storage of Rom as taught in Troia. Their motivation would have been to increase the speed at which the data can be accessed.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Rom and Alam et al. (US 2023/0281434).
With respect to claim 12, Rom teaches wherein the plurality of storage portions includes a weight storage portion configured to store one or more of the weight value or the bias value (fig. 18; paragraph 92; where the memory array stores system data (the claimed code data), input data, synaptic weights, and bias values). Rom fails to explicitly teach support for one or more of: a wide IO memory scheme; a write-verify write scheme; no error correction code (ECC) scheme or an ECC scheme with a one-bit error correction; or a magnetic tunnel junction (MTJ) having a high energy barrier. However, Alam teaches support for one or more of: a wide IO memory scheme; a write-verify write scheme; no error correction code (ECC) scheme or an ECC scheme with a one-bit error correction; or a magnetic tunnel junction (MTJ) having a high energy barrier (paragraph 77; where ECC may be omitted).

Rom and Alam are analogous art because they are from the same field of endeavor, as they are directed to storage containing neural networks. It would have been obvious to one of ordinary skill in the art having the teachings of Rom and Alam before the time of the effective filing of the claimed invention to omit error correction code in the storage of Rom as taught in Alam. Their motivation would have been to conserve resources and chip space by using the ECC bit space for storage (Alam, paragraph 77).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Liu et al. (US 11,797,467) discloses a network-on-chip neural network operation process that includes: reading all data blocks, including assigned input neuron data, weight data of neurons of the layer, an interpolation table for a quick activation function operation, a constant table for configuring parameters of the operation device, and bias data for the activation function; sending the assigned input neuron data, weight data, and operation instruction of the neurons of the layer to the primary processing circuit; the processing circuit determining the assigned input neuron data of the neurons of the layer and the weight data to be distribution data, distributing one piece of distributed data into a plurality of data blocks, and sending at least one of the data blocks, broadcast data, and at least one of a plurality of operation instructions to the secondary processing circuits; and obtaining the intermediate result from a multiplication processing circuit and an accumulation processing circuit and obtaining the assigned neuron data output by the neurons of this layer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C KROFCHECK, whose telephone number is (571) 272-8193. The examiner can normally be reached Monday - Friday, 8am - 5pm, first Friday off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tim Vo, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Michael Krofcheck/
MICHAEL C. KROFCHECK
Primary Examiner, Art Unit 2138
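For reference, the neuron data path that the examiner maps onto Rom's circuits in claims 1 and 20 is the standard one: a weighted sum of inputs, a bias add, then an activation function. A minimal sketch follows; the sigmoid choice, function names, and toy values are illustrative assumptions, not drawn from the application or the cited references.

```python
import math

def sigmoid(x: float) -> float:
    # Stand-in for the activation function circuitry (Rom's sigmoid/RLU circuits).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # Weight operation: multiply each input (activation value) by its
    # synaptic weight and sum the products.
    weighted = sum(i * w for i, w in zip(inputs, weights))
    # Bias operation: add the bias value to the summation output.
    biased = weighted + bias
    # Activation function: compute the activation passed to the next layer.
    return sigmoid(biased)

# Toy values standing in for data read from the claimed storage portions
# (input data, weight values, bias value).
out = neuron([1.0, 0.5], [0.2, -0.4], bias=0.1)
```

In the claimed device, each of these three steps is a separate circuit block fed from a distinct storage portion, which is why the storage characteristics (write endurance, error rate, IO scheme) differ per portion.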

Prosecution Timeline

Dec 04, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §102, §103, §112
Mar 23, 2026
Interview Requested
Apr 06, 2026
Examiner Interview Summary
Apr 06, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591514 — TRAFFIC AWARE SMART CACHING IN FABRIC SWITCHES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591509 — RECONFIGURABLE PARTITIONING OF HIGH BANDWIDTH MEMORY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585579 — LOCKED RAID WITH COMPRESSION FOR MEMORY INTERCONNECT APPLICATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12563242 — UTILIZING A SINGLE BUFFER FOR A DYNAMIC NUMBER OF PLAYERS, EACH USING A DYNAMICALLY SIZED BUFFER (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561090 — NAND-BASED STORAGE DEVICE WITH PARTITIONED NONVOLATILE WRITE BUFFER (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 98% (+17.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
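The projected figures follow directly from the examiner's career statistics. The arithmetic can be checked in a few lines; treating the interview lift as a simple additive adjustment to the career allow rate is an assumption about this tool's methodology.

```python
# Examiner's career totals, from the Examiner Intelligence section.
granted, resolved = 530, 652
allow_rate = granted / resolved          # ~0.813, shown as 81%
interview_lift = 17.1                    # percentage-point lift with interview

grant_probability = round(allow_rate * 100)                # 81
with_interview = round(allow_rate * 100 + interview_lift)  # 98
```

This reproduces the 81% grant probability and the 98% with-interview figure shown above.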
