Prosecution Insights
Last updated: April 19, 2026
Application No. 17/934,178

MEMORY MANAGEMENT FOR MATHEMATICAL OPERATIONS IN COMPUTING SYSTEMS WITH HETEROGENEOUS MEMORY ARCHITECTURES

Non-Final OA — §101, §102, §103, §112
Filed: Sep 21, 2022
Examiner: ALCANTARA-RAMOS, EMILIO
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (4 granted / 5 resolved; +25.0% vs TC avg) — grants above average
Interview Lift: +100.0% across resolved cases with interview — strong
Avg Prosecution: 2y 1m — fast prosecutor; 18 currently pending
Total Applications: 23 across all art units (career history)

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Tech Center averages are estimates; based on career data from 5 resolved cases.
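The headline examiner figures above follow from the underlying counts. A minimal sketch, assuming the "vs TC avg" delta is expressed in percentage points (variable names are ours, not the analytics provider's):

```python
# Hedged sketch: reproducing the headline examiner statistics from the
# underlying counts shown above. The "vs TC avg" delta is assumed to be
# a percentage-point spread; that reading is an assumption, not sourced.

granted, resolved = 4, 5               # "4 granted / 5 resolved"
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")  # 80%

tc_delta = 0.25                        # "+25.0% vs TC avg"
tc_average = career_allow_rate - tc_delta
print(f"Implied TC 2100 average: {tc_average:.0%}")   # 55%
```

Under that reading, the Tech Center 2100 baseline allowance rate implied by the report is roughly 55%.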

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because of the following informalities: Each drawing contains a top-left corner and bottom-right corner. It is unclear whether the corners are part of the drawing or not. Applicant is advised to remove these corners.

Figure 1C is further objected to for failing to comply with 37 CFR 1.84(i), which requires that words appear in a horizontal, left-to-right fashion when the page is either upright or turned so that the top becomes the right side. Note, from 37 CFR 1.84(f), that the top of the sheet is regarded as one of the shorter sides. In Figure 1C, the text “Application Processor 160” needs to be rotated 180 degrees.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action.
The objection to the drawings will not be held in abeyance.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

The abstract of the disclosure is objected to because of the following informalities: Line 2: Replace “the method” with “a method” to improve clarity. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claims 1-7, 17-21, and 29-30 are objected to because of the following informalities:
Claim 1, line 3: A hyphen should be added between “non” and “volatile” in “nonvolatile”.
Claim 17, line 4: A hyphen should be added between “non” and “volatile” in “nonvolatile”.
Claim 29, line 3: A hyphen should be added between “non” and “volatile” in “nonvolatile”.
Claim 30, line 5: A hyphen should be added between “non” and “volatile” in “nonvolatile”.
Claims 2-7 and 18-21 are objected to for inheriting the objections of the claims on which they depend. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim 29 recites the following limitations:

“means for initializing at least a portion of weight data for a machine learning model in a nonvolatile random access memory (NVRAM) associated with a processor”. Examiner identifies the “means” of the limitation as a “weight data initializing component”, which is identified to provide the recited function as seen in paragraphs [0043, 0074]. A “component” may be a processor (see [0103]). Since the limitation “initializing at least a portion of weight data” is read as storing data, where storing data is a coextensive function of a processor, special programming is not required (see MPEP 2181(II)(B)).

“means for storing input data in a dynamic random access memory (DRAM) coupled with the processor”.
Examiner identifies the “means” of the limitation as an “input data storing component”, which is identified to provide the recited function as seen in paragraphs [0044-0045, 0074]. A “component” may be a processor (see [0103]). Since storing data is a coextensive function of a processor, special programming is not required (see MPEP 2181(II)(B)).

“means for storing a result of the operations using the machine learning model in the DRAM”. Examiner identifies the “means” of the limitation as a “result storing component”, which is identified to provide the recited function as seen in paragraphs [0052, 0074]. A “component” may be a processor (see [0103]). Since storing data is a coextensive function of a processor, special programming is not required (see MPEP 2181(II)(B)).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 17 recites the limitation "the first memory component" in lines 9-10. There is insufficient antecedent basis for this limitation in the claim. There was no prior recitation of “a first memory component” within the claim. For the sake of examination, Examiner will interpret this limitation to be “the NVRAM”.
Claim 17 recites the limitation "the second memory component" in line 11. There is insufficient antecedent basis for this limitation in the claim. There was no prior recitation of “a second memory component” within the claim. For the sake of examination, Examiner will interpret this limitation to be “the DRAM”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 7, 17-20, and 29-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1, 17, 29, and 30 recite a computer-implemented method, an apparatus, an apparatus, and a non-transitory computer-readable medium, respectively. Thus, each of the claims falls under one of the four statutory categories.

Under Prong One of Step 2A of the 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), claim 1 recites “executing… operations using the machine learning model based on the at least the portion of the weight data and the input data”. Such limitations cover mathematical concepts such as mathematical relationships, mathematical formulas/equations, or mathematical calculations and mental processes that are concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, or opinion). Accordingly, the claim recites an abstract idea.

Under Prong Two of Step 2A, this judicial exception is not integrated into a practical application.
The elements “initializing at least a portion of weight data… in a nonvolatile random access memory (NVRAM)”, “storing input data in a dynamic random access memory (DRAM)”, and “storing a result of the operations… in the DRAM” are considered to be an insignificant step of storing data in memory (See MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory). The elements “a processor” and “functional unit” are recited at a high level of generality, i.e., generic computer components performing generic functions, which amount to no more than mere instructions to apply the exception using generic computer elements (See MPEP 2106.05(f)). Thus, the elements fail to integrate the judicial exception into a practical application.

Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed previously, with respect to Step 2A Prong Two, the elements are considered to be an insignificant step of storing data in memory (See MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory), and are deemed to be well-understood, routine, and conventional by the courts (MPEP 2106.05(d); See Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93) or amount to no more than mere instructions to apply the exception using generic computer elements (See MPEP 2106.05(f)). Accordingly, this claim is not patent-eligible under 35 U.S.C. 101.

Regarding claim 2, the claim recites “generating… the result of the operations using the machine learning model and the input data”.
Such limitation covers mathematical concepts such as mathematical relationships, mathematical formulas/equations, or mathematical calculations and mental processes that are concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, or opinion). The elements “loading at least the portion of the weight data from the NVRAM into memory registers of the processor” and “storing the generated result in the memory registers of the processor” are considered to be an insignificant step of storing and/or retrieving data in memory (See MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory), and are deemed to be well-understood, routine, and conventional by the courts (MPEP 2106.05(d); See Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claim 3, the claim recites “the result of the operations is generated using the input data”. Such limitation covers mathematical concepts such as mathematical relationships, mathematical formulas/equations, or mathematical calculations and mental processes that are concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, or opinion).
The element “loading the input data from the DRAM into the memory registers of the processor” is considered to be an insignificant step of retrieving data in memory (See MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory), and is deemed to be well-understood, routine, and conventional by the courts (MPEP 2106.05(d); See Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claim 4, the claim recites “reading the result of the operations from the memory registers of the processor” and “writing the result of the operations read from the memory registers of the processor to the DRAM”. The elements are considered to be an insignificant step of storing and/or retrieving data in memory (See MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory), and are deemed to be well-understood, routine, and conventional by the courts (MPEP 2106.05(d); See Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claim 7, the claim recites “input data comprises data received from a streaming data source”.
The element is recited at a high level of generality, i.e., describing data, which amounts to no more than mere instructions to apply the exception using computer elements recited at a high level (See MPEP 2106.05(f)). Alternatively, the element is considered to be an insignificant step of receiving data from a component (see MPEP 2106.05(g)), and is deemed to be well-understood, routine, and conventional (see Priller, US 20210342249 A1, [0063-0064]) (see MPEP 2106.05(d)). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claim 17, the claim is mostly rejected for the same reasons as claim 1. The claim additionally recites “a memory having executable instructions stored thereon”. The element is recited at a high level of generality, i.e., generic computer components with generic functions, which amounts to no more than mere instructions to apply the exception using generic computer elements (See MPEP 2106.05(f)). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claims 18-21, the claims recite an apparatus similar to the method of claims 2-5, respectively. Therefore, the claims are rejected on the same premises.

Regarding claim 29, the claim is mostly rejected for the same reasons as claim 1. The claim also recites, as described in the “Claim Interpretation” section, elements such as “a circuit”, “an ASIC”, or “a processor”.
The additional elements are recited at a high level of generality, i.e., generic computer components or components recited at a high level, which amount to no more than mere instructions to apply the exception using computer elements recited at a high level (See MPEP 2106.05(f)). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Regarding claim 30, the claim is mostly rejected for the same reasons as claim 1. The claim additionally recites “a non-transitory computer-readable medium comprising computer-executable instructions”, “one or more processors”, and “a processing system”. The additional elements are recited at a high level of generality, i.e., generic computer components or components recited at a high level, which amount to no more than mere instructions to apply the exception using computer elements recited at a high level (See MPEP 2106.05(f)). The claim fails to provide an element that would integrate the judicial exception into a practical application under Step 2A Prong Two and does not amount to anything significantly more under Step 2B. Accordingly, the claim is not patent-eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7, and 29 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mathuriya et al. (US 11836102 B1). Ali, “Artificial Neural Network (ANN) with Practical Implementation”, is cited as extrinsic evidence to indicate that the AI training performed in Mathuriya is for an ANN model. Wikipedia, “Application-specific integrated circuit”, is cited as extrinsic evidence to indicate that ASICs are processors for specific applications.

Regarding claim 1, Mathuriya teaches a computer-implemented method, comprising:

initializing at least a portion of weight data for a machine learning model in a nonvolatile random access memory (NVRAM) associated with a processor (Figs. 1, 4, and 5, Col. 19, line 60 to Col. 20, line 14: Weight buffer 501a can be implemented as FE-RAM, which is a type of nonvolatile random access memory. Weight buffer stores weight data, which is used to train an ANN model (see Fig. 4, 403), which is a machine learning model. Weight buffer is part of memory die 501, where memory die refers to memory 102 in Fig. 1, which is part of an AI ASIC 101, which is a processor. Therefore, weight buffer is associated with a processor; see “Artificial Neural Network (ANN) with Practical Implementation” by Ali and “Application-specific integrated circuit” by Wikipedia);

storing input data in a dynamic random access memory (DRAM) coupled with the processor (Figs. 1 and 5, Col. 19, line 60 to Col. 20, line 14: Input/output buffer 501b stores input/output data and may be implemented as DRAM. The buffer is part of memory die 501, where memory die refers to memory 102, which is part of an AI ASIC 101, which is a processor.
Therefore, input/output buffer is coupled with the processor);

executing, via a functional unit associated with the processor, operations using the machine learning model based on the at least the portion of the weight data and the input data (Figs. 2A, 4, 5, and 13, Col. 11, line 61 to Col. 12, line 21: Top die 502 comprises matrix multipliers 502a/b and logics 502c/d, which use data from the weight buffer and input/output buffer (similar to Fig. 2A, inputs 201, weights 202, matrix multiplication 203, and logic 204) to perform operations using the ANN model (such as the one seen in Fig. 4). Matrix multiplier and logic as the functional unit);

and storing a result of the operations using the machine learning model in the DRAM (Figs. 2A and 5, Col. 4, lines 30-35, Col. 11, line 61 to Col. 12, line 21, Col. 19, lines 63-65, Col. 20, lines 27-31: Once the multiplication operations have been performed, the results are stored in the temp buffer 502e. The results are then sent to the bottom die to store the data in I/O buffer 501b through I/Os 503b’/503b (similar to what occurs in Fig. 2A, where the results stored in the buffer in die 2 are sent back to be stored in die 1)).

Regarding claim 7, Mathuriya teaches the method of Claim 1, wherein the input data comprises data received from a streaming data source (Figs. 2A and 5, Col. 4, lines 30-35, Col. 11, line 61 to Col. 12, line 21, Col. 19, lines 63-65, Col. 20, lines 27-31: Buffer 502e would transmit (stream) data into the I/O buffer 501b (similar to what’s seen in Fig. 2A). Therefore, buffer 502e is a streaming data source).

Regarding claim 29, Mathuriya teaches an apparatus (Fig. 1, Col. 7, lines 60-67: System 100 as the apparatus), comprising:

means for initializing at least a portion of weight data for a machine learning model in a nonvolatile random access memory (NVRAM) associated with a processor (Figs. 1, 4, and 5, Col. 19, line 60 to Col.
20, line 14: Weight buffer 501a can be implemented as FE-RAM, which is a type of nonvolatile random access memory. Weight buffer stores weight data, which is used to train an ANN model (see Fig. 4, 403), which is a machine learning model. Weight buffer is part of memory die 501, where memory die refers to memory 102 in Fig. 1, which is part of an AI ASIC 101, which is a processor. Therefore, weight buffer is associated with a processor. ASIC 101 as the means for initializing; see “Artificial Neural Network (ANN) with Practical Implementation” by Ali and “Application-specific integrated circuit” by Wikipedia);

means for storing input data in a dynamic random access memory (DRAM) coupled with the processor (Figs. 1 and 5, Col. 19, line 60 to Col. 20, line 14: Input/output buffer 501b stores input/output data and may be implemented as DRAM. The buffer is part of memory die 501, where memory die refers to memory 102, which is part of an AI ASIC 101, which is a processor. Therefore, input/output buffer is coupled with the processor. ASIC 101 as the means for storing);

means for executing, via a functional unit associated with the processor, operations using the machine learning model based on the at least the portion of the weight data and the input data (Figs. 1, 2A, 4, 5, and 13, Col. 11, line 61 to Col. 12, line 21: Top die 502 comprises matrix multipliers 502a/b and logics 502c/d, which use data from the weight buffer and input/output buffer (similar to Fig. 2A, inputs 201, weights 202, matrix multiplication 203, and logic 204) to perform operations using the ANN model (such as the one seen in Fig. 4). ASIC 101 as the means for executing);

and means for storing a result of the operations using the machine learning model in the DRAM (Figs. 2A and 5, Col. 4, lines 30-35, Col. 11, line 61 to Col. 12, line 21, Col. 19, lines 63-65, Col. 20, lines 27-31: Once the multiplication operations have been performed, the results are stored in the temp buffer 502e.
The results are then sent to the bottom die to store the data in I/O buffer 501b through I/Os 503b’/503b (similar to what occurs in Fig. 2A, where the results stored in the buffer in die 2 are sent back to be stored in die 1). ASIC 101 as the means for storing).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Mathuriya et al. (US 11836102 B1) in view of Wikipedia (“Memory buffer register”) and Jiang (US 20200134433 A1). Hasan et al., “Bridging the Latency Gap between NVM and DRAM for Latency-bound Operations”, is cited as extrinsic evidence to indicate a latency difference between non-volatile memory and dynamic RAM.

Regarding claim 2, Mathuriya teaches the method of Claim 1, wherein executing the operations using the machine learning model comprises: generating, by the functional unit, the result of the operations using the machine learning model and the input data (Figs. 2A, 4, 5, and 13, Col. 11, line 61 to Col. 12, line 21: Top die 502 comprises matrix multipliers 502a/b and logics 502c/d, which use data from the weight buffer and input/output buffer (similar to Fig. 2A, inputs 201, weights 202, matrix multiplication 203, and logic 204) to perform operations using the ANN model (such as the one seen in Fig.
4)); and storing the generated result in the buffer of the processor (Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21: Buffer 205 stores the results of data processed through the matrix multiplier circuit 204 and logic 205 (which, recall, can refer to MML 502a/b, logic 502c/d, and buffer 502e in Fig. 5)).

Mathuriya does not teach loading at least the portion of the weight data from the NVRAM into memory registers of the processor and storing the generated result in the memory registers of the processor. Note that the data in Mathuriya is written into the buffer after the completion of the operations (see Col. 11, line 61 to Col. 12, line 21).

Wikipedia teaches a memory buffer register (Paragraphs 1 and 2: “A memory buffer register is a register that stores data being transferred to and from immediate access storage”; in other words, it is a register that acts as a buffer/intermediate step between a functional unit and a memory location). It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya with the teachings of Wikipedia to have made the temporary buffer in Mathuriya be a memory buffer register, therefore becoming a memory register. One of ordinary skill would recognize that transferring data to a register after an operation rather than storing it in memory directly helps reduce the critical path delay, allowing higher clock speeds, which allows data to be processed faster.

Mathuriya, in view of Wikipedia, still does not teach loading at least the portion of the weight data from the NVRAM into memory registers of the processor. Jiang teaches loading at least the portion of the weight data from the weight memory into memory registers and loading data from data memory into memory registers (Figs. 2 and 3, [0071-0073]: Weight data from weight memory 20 is loaded into weight registers, which connect to an operation unit 30).
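Purely as an illustrative software model of the register-staging pattern at issue in claims 2-5 (operands loaded from memory into registers before the functional unit consumes them), the following sketch uses invented names and stands a dot product in for the matrix-multiply-and-logic pipeline; nothing in it is drawn from the cited references' actual implementations:

```python
# Hypothetical model of the claimed register-staging flow. "weight_memory"
# plays the role of the NVRAM-resident weight store, "data_memory" the
# DRAM-resident input/result store; both names are invented here.

def execute_with_registers(weight_memory, data_memory):
    weight_regs = list(weight_memory)  # load weights into "registers"
    data_regs = list(data_memory)      # load inputs into "registers"

    # Functional unit reads operands from the registers, not from memory.
    result_reg = sum(w * d for w, d in zip(weight_regs, data_regs))

    data_memory.append(result_reg)     # write the result back to "DRAM"
    return result_reg

print(execute_with_registers([1, 2, 3], [4, 5, 6]))  # 32
```

The point of the staging step, per the cited rationale, is that the arithmetic unit never reads or writes memory directly; registers sit on both sides of the computation.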
It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya, in view of Wikipedia, with the teachings of Jiang to have loaded the weight data from the NVRAM into memory registers and loaded the input data from DRAM into memory registers. One of ordinary skill would recognize that loading the data into registers from NVRAM or DRAM, rather than loading it into arithmetic units directly, reduces the critical path delay, allowing higher clock speeds, which allows data to be processed faster.

Regarding claim 3, Mathuriya, in view of Wikipedia and Jiang, teaches the method of Claim 2, further comprising loading the input data from the DRAM into the memory registers of the processor (the current combination teaches loading the input data from the input/output buffer into the memory registers to perform the required operations), wherein the result of the operations is generated by the functional unit.

Mathuriya, in view of Wikipedia and Jiang, does not currently teach that the result of the operations is generated by the functional unit using the input data loaded into the memory registers of the processor. Jiang also teaches that the result of the operations is generated by the functional unit using the input data loaded into the memory registers of the processor (Fig. 5 and [0076-0078]: Data loaded onto the data and weight registers is connected to the multiplication unit 301 to be processed through the unit. The multiplication unit as the functional unit. The output from each multiplication unit in 301 as the result of the operations generated by the functional unit). It would have been obvious to one of ordinary skill in the art before the effective filing date to have further combined the teachings of Mathuriya, in view of Wikipedia and Jiang, with the teachings of Jiang to have used the data loaded from those registers as input data for the functional unit.
One of ordinary skill would recognize that loading the data into registers from the DRAM, rather than loading it into arithmetic units directly, reduces the critical path delay, allowing higher clock speeds and thus faster data processing.

Regarding claim 4, Mathuriya, in view of Wikipedia and Jiang, teaches the method of Claim 2, further comprising: reading the result of the operations from the memory registers of the processor (in the current combination, the weight data and input data are stored in memory registers; the input and weight data are then retrieved from the memory registers and processed through the matrix multiply and logic units); and writing the result of the operations read from the memory registers of the processor to the DRAM (Mathuriya, Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21: In the current combination, the output data from the matrix multiplier and logic operations is stored in buffer 205 (which, recall, can refer to 502e and is a memory register) and is read from it; it is then written back to the first die (bottom die 501 in Fig. 5), to the I/O buffer 501b).

Regarding claim 5, Mathuriya, in view of Wikipedia and Jiang, teaches the method of Claim 4, wherein data stored in the memory registers of the processor is selected to maximize an amount of time during which operations are performed using data in the memory registers of the processor before retrieving additional data for processing from the DRAM (Mathuriya, Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21; Jiang, [0071-0073, 0077-0078]: In the current combination, data is loaded into the memory registers before that data is loaded into the matrix multiplier and logic units.
In a preset time period (e.g., a clock cycle), the data that was stored in the memory registers is sent to these units for further processing while the memory registers receive new data, thereby maximizing the amount of time during which operations are performed before retrieving additional data from both the NVRAM and the DRAM).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Mathuriya et al. (US 11836102 B1) in view of Wikipedia (“Memory buffer register”), Jiang (US 20200134433 A1), and Maiyuran et al. (US 20030229762 A1).

Regarding claim 6, Mathuriya, in view of Wikipedia and Jiang, teaches the method of Claim 5. Mathuriya, in view of Wikipedia and Jiang, does not teach that the data stored in the memory registers of the processor is further selected based on an asymmetry in access latency between the NVRAM and the DRAM. Note that NVRAM has a longer access latency than DRAM (see “Bridging the Latency Gap between NVRAM and DRAM for Latency-bound Operations”, Abstract). Maiyuran teaches prefetching data from memory devices with a long latency (Fig. 3 and [0018]: Data from memory devices with a long latency may be prefetched to devices with a short latency closer to an execution unit). It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya, in view of Wikipedia and Jiang, with the teachings of Maiyuran to have prefetched the data from the NVRAM and stored it in the memory registers, resulting in the selection of data based on an asymmetry in access latency between the NVRAM and the DRAM. One of ordinary skill would recognize that by reducing the effective access latency of the NVRAM, data becomes available sooner and is therefore processed earlier.

Claims 17 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Mathuriya et al. (US 11836102 B1) in view of Yu (US 9998334 B1).
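The prefetch rationale applied to claim 6 above (hiding the longer NVRAM latency by fetching the next chunk into registers while the functional unit consumes the current one) can be sketched as a toy timing model. All cycle counts and names here are made up for illustration; nothing below is taken from the cited references.

```python
# Illustrative double-buffered prefetch model: with prefetching, the next
# NVRAM fetch overlaps with compute, so each steady-state step costs only
# the longer of the two; without it, fetch and compute serialize.
# Latency values are arbitrary, chosen only to reflect NVRAM > DRAM latency.

NVRAM_LATENCY = 5   # assumed cycles per NVRAM fetch (the slower memory)
DRAM_LATENCY = 1    # assumed cycles per DRAM fetch (the faster memory)
COMPUTE = 4         # assumed cycles to process one chunk from registers

def run(chunks, prefetch):
    """Total cycles to process `chunks` weight chunks fetched from NVRAM."""
    cycles = NVRAM_LATENCY            # the first chunk must always be fetched
    for _ in range(chunks - 1):
        if prefetch:
            # next fetch overlaps with compute: pay only the longer of the two
            cycles += max(COMPUTE, NVRAM_LATENCY)
        else:
            # no overlap: compute, then wait for the next fetch
            cycles += COMPUTE + NVRAM_LATENCY
    return cycles + COMPUTE           # drain: compute the final chunk

print(run(4, prefetch=False))
print(run(4, prefetch=True))
```

Under these assumed numbers the prefetched schedule finishes sooner, which is the examiner's stated motivation for selecting register contents based on the NVRAM/DRAM latency asymmetry.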
Regarding claim 17, the claim is mostly rejected for the same reasons as claim 1. Mathuriya also teaches a memory. Mathuriya does not teach that the memory has executable instructions stored thereon. Yu teaches a memory having executable instructions stored thereon (Fig. 5 and Col. 11, lines 47-60: Memory 532 may store instructions that are executable by a processor (e.g., processor 510)). It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya with the teachings of Yu to have the memory contain executable instructions. One of ordinary skill would recognize that by having executable instructions stored in memory, a processor can retrieve and perform those instructions, which allows data to be processed.

Regarding claim 30, the claim is mostly rejected for the same reasons as claim 1. Mathuriya also teaches one or more processors of a processing system (Fig. 1, Col. 7, lines 60-67: System 100 comprising ASIC 101; ASIC 101 as the processor of the system). Mathuriya does not teach a non-transitory computer-readable medium comprising computer-executable instructions. Yu teaches a non-transitory computer-readable medium comprising computer-executable instructions (Fig. 5 and Col. 11, lines 33-60: Memory 532 (a non-transitory computer-readable medium) may store instructions that are executable by a processor (e.g., processor 510)). It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya with the teachings of Yu to have the memory contain executable instructions. One of ordinary skill would recognize that by having executable instructions stored in memory, a processor can retrieve and perform those instructions, which allows data to be processed.

Claims 18-21 are rejected under 35 U.S.C.
103 as being unpatentable over Mathuriya et al. (US 11836102 B1) in view of Yu (US 9998334 B1), Jiang (US 20200134433 A1), and Wikipedia (“Memory buffer register”).

Regarding claim 18, Mathuriya, in view of Yu, teaches the apparatus of Claim 17, wherein, in order to execute the operations using the machine learning model, the processor is configured to cause the apparatus to: generate, by the functional unit, the result of the operations using the machine learning model and the input data (Figs. 2A, 4, 5, and 13, Col. 11, line 61 to Col. 12, line 21: Top die 502 comprises matrix multipliers 502a/b and logics 502c/d, which use data from the weight buffer and input/output buffer (similar to Fig. 2A: inputs 201, weights 202, matrix multiplication 203, and logic 204) to perform operations using the ANN model (such as the one seen in Fig. 4)); and store the generated result in the buffer of the processor (Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21: Buffer 205 stores the results of data processed through the matrix multiplier circuit 204 and logic 205 (which, recall, can refer to MML 502a/b, logic 502c/d, and buffer 502e in Fig. 5)).

Mathuriya, in view of Yu, does not teach loading at least the portion of the weight data from the NVRAM into memory registers of the processor and storing the generated result in the memory registers of the processor. Note that the data in Mathuriya is written into the buffer after the completion of the operations (see Col. 11, line 61 to Col. 12, line 21). Wikipedia teaches a memory buffer register (Paragraphs 1 and 2: “A memory buffer register is a register that stores data being transferred to and from immediate access storage”; in other words, a register that acts as a buffer/intermediate step between a functional unit and a memory location).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya, in view of Yu, with the teachings of Wikipedia to have made the temporary buffer in Mathuriya a memory buffer register, and therefore a memory register. One of ordinary skill would recognize that transferring data to a register after an operation, rather than storing it in memory directly, helps reduce the critical path delay, allowing higher clock speeds and thus faster data processing.

Mathuriya, in view of Yu and Wikipedia, still does not teach loading at least the portion of the weight data from the NVRAM into memory registers of the processor. Jiang teaches loading at least the portion of the weight data from the weight memory into memory registers and loading data from data memory into memory registers (Figs. 2 and 3, [0071-0073]: Weight data from weight memory 20 is loaded into weight registers, which connect to an operation unit 30). It would have been obvious to one of ordinary skill in the art before the effective filing date to have combined the teachings of Mathuriya, in view of Yu and Wikipedia, with the teachings of Jiang to have loaded the weight data from the NVRAM into memory registers and loaded the input data from the DRAM into memory registers. One of ordinary skill would recognize that loading the data into registers from NVRAM or DRAM, rather than loading it into arithmetic units directly, reduces the critical path delay, allowing higher clock speeds and thus faster data processing.
Regarding claim 19, Mathuriya, in view of Yu, Wikipedia, and Jiang, teaches the apparatus of Claim 18, wherein the processor is further configured to cause the apparatus to load the input data from the second memory component (DRAM) into the memory registers of the processor (the current combination teaches loading the input data from the input/output buffer into the memory registers to perform the required operations), wherein the functional unit is configured to generate the result of the operations. Mathuriya, in view of Yu, Wikipedia, and Jiang, does not currently teach that the result of the operations is generated by the functional unit using the input data loaded into the memory registers of the processor. Jiang also teaches that the result of the operations is generated by the functional unit using the input data loaded into the memory registers of the processor (Fig. 5 and [0076-0078]: Data loaded into the data and weight registers is connected to the multiplication unit 301 to be processed through that unit; the multiplication unit as the functional unit; the output from each multiplication unit in 301 as the result of the operations generated by the functional unit). It would have been obvious to one of ordinary skill in the art before the effective filing date to have further combined the teachings of Mathuriya, in view of Yu, Wikipedia, and Jiang, with the teachings of Jiang to have the data loaded into those registers read as input data by the functional unit. One of ordinary skill would recognize that loading the data into registers from the DRAM, rather than loading it into arithmetic units directly, reduces the critical path delay, allowing higher clock speeds and thus faster data processing.
Regarding claim 20, Mathuriya, in view of Yu, Wikipedia, and Jiang, teaches the apparatus of Claim 18, wherein the processor is further configured to: read the result of the operations from the memory registers of the processor (in the current combination, the weight data and input data are stored in memory registers; the input and weight data are then retrieved from the memory registers and processed through the matrix multiply and logic units); and write the result of the operations read from the memory registers of the processor to the DRAM (Mathuriya, Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21: In the current combination, the output data from the matrix multiplier and logic operations is stored in buffer 205 (which, recall, can refer to 502e and is a memory register) and is read from it; it is then written back to the first die (bottom die 501 in Fig. 5), to the I/O buffer 501b).

Regarding claim 21, Mathuriya, in view of Yu, Wikipedia, and Jiang, teaches the apparatus of Claim 20, wherein data stored in the memory registers of the processor is selected to maximize an amount of time during which operations are performed using data in the memory registers of the processor before retrieving additional data for processing from the DRAM (Mathuriya, Figs. 2A and 5, Col. 11, line 61 to Col. 12, line 21; Jiang, [0071-0073, 0077-0078]: In the current combination, data is loaded into the memory registers before that data is loaded into the matrix multiplier and logic units. In a preset time period (e.g., a clock cycle), the data that was stored in the memory registers is sent to these units for further processing while the memory registers receive new data, thereby maximizing the amount of time during which operations are performed before retrieving additional data from both the NVRAM and the DRAM).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
“Reliability of NAND Flash Memory as a Weight Storage Device of Artificial Neural Network”: Hasan et al. discusses the use of flash memory to store weight data. US 20180004510 A1: Charney et al. teaches fetching data from multiple memories and storing the data in a separate memory.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILIO ALCANTARA-RAMOS, whose telephone number is (571) 272-4211. The examiner can normally be reached Mon-Fri 8:30-5:00 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta, can be reached at (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/E.A./Examiner, Art Unit 2183
/JYOTI MEHTA/Supervisory Patent Examiner, Art Unit 2183

Prosecution Timeline

Sep 21, 2022
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596551
METHOD AND SYSTEM FOR ASSIGNING INSTRUCTIONS TO DECODERS IN DECODER CLUSTERS
2y 5m to grant Granted Apr 07, 2026
Patent 12541371
PREDICTING BEHAVIOUR OF CONTROL FLOW INSTRUCTIONS USING PREDICTION ENTRY TYPES
2y 5m to grant Granted Feb 03, 2026
Patent 12536021
METHOD AND SYSTEM FOR PREDICTING BRANCH INSTRUCTIONS
2y 5m to grant Granted Jan 27, 2026
Patent 12524371
Enhanced Harvard Architecture Reduced Instruction Set Computer (RISC) with Debug Mode Access of Instruction Memory within a Unified Memory Space
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+100.0%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
