Prosecution Insights
Last updated: April 19, 2026
Application No. 17/813,396

NEURAL NETWORK MEMORY CONFIGURATION

Status: Final Rejection — §102, §103
Filed: Jul 19, 2022
Examiner: DASGUPTA, SHOURJO
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 2 (Final)
Grant Probability: 65% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% — above average (293 granted / 449 resolved; +10.3% vs TC avg)
Interview Lift: strong, +38.1% on resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 481 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 449 resolved cases
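The per-statute figures above are internally consistent: subtracting each delta from its allowance rate recovers the same implied Tech Center average for every statute. The snippet below is purely a sanity check on the figures shown; it is not part of the underlying tool, and the dictionary layout is an assumption made here for illustration.

```python
# Allowance rate and delta vs Tech Center average, copied from the table above.
rates = {
    "101": (11.8, -28.2),
    "103": (56.8, 16.8),
    "102": (12.2, -27.8),
    "112": (15.6, -24.4),
}

# rate - delta should recover the TC average the tool compared against.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average
```

That all four rows imply one common 40.0% baseline suggests the tool compares each statute against a single overall Tech Center allowance estimate rather than per-statute baselines.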

Office Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

2. This Final Office Action is responsive to Applicants’ amendments and arguments, as received 11/25/25. Claims 1-18 remain pending, of which claims 1, 9, and 15 are independent.

3. In the prior Office Action, claim 6 was objected to for an informality, which Applicants’ recent amendment has cured. Hence, that claim objection as previously presented is now withdrawn.

4. The prior art rejections from the prior Office Action are essentially maintained, with minor updates to address Applicants’ claim amendments as received 11/25/25. Please see the Examiner’s Response to Arguments section for clarifications and responses to Applicants’ arguments.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1-13 and 15-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication No.
2022/0292334 (“Pandey”).

Regarding claim 1, Pandey teaches A method comprising: storing a first activation input tensor1 to one or more first storage devices and partitioning the stored first activation input tensor into a plurality of tensor segments (FIG. 4A, as discussed in relation to [0048] and [0049], details the memory management of models and their input data with respect to a memory architecture scheme, such that a model may be factorized into different partitions of model data, parameters, and input data to fit the cache of an edge computing device, where the Examiner understands this memory optimization to be performed by a memory optimizer per [0037] (e.g., associated with a host computing device as shown in FIG. 1A, and hence its memory/storage – which the Examiner equates with “first storage devices” as recited) to determine the appropriate partitioning for the edge device in advance of performing the sequential loading discussed per [0049], which relies upon the determined partitioning); and sequentially loading individual tensor segments of the stored first activation input tensor to one or more second storage devices ([0049], discussing the sequential loading based on determined partitioning, where the cache of the edge computing device receiving the partitioned model and information is akin to the recited “second storage devices”), the one or more second storage devices being integrated in a microcontroller unit (MCU) with processing circuitry to apply one or more activation functions associated with the tensor segments (the edge computing device features a processor/MCU, e.g., as discussed per [0019] and [0022]).
Regarding claim 2, Pandey teaches the method of claim 1, wherein sequentially loading individual first tensor segments of the stored first activation input tensor further comprises: loading a first tensor segment of the individual tensor segments to a first portion of the one or more second storage devices local to processing circuitry to apply a first activation function to the first tensor segment of the individual tensor segments; and completing of loading of a second tensor segment to a second portion of the one or more second storage devices local to the processing circuitry to apply the first activation function subsequent to commencement of application of the first activation function to the first tensor segment (sequential loading scheme of determined model partitions, including model inputs/information, as discussed per [0049]).

Regarding claim 3, Pandey teaches the method of claim 1, further comprising: storing weights in at least one of the one or more first storage devices; and partitioning the stored weights according to the one or more activation functions; and sequentially loading individual stored weights to the one or more second storage devices local to the processing circuitry to apply the one or more activation functions ([0049], discussing “In some implementations, network parameters of neuron 404 (and other neurons that are not shown explicitly) may similarly be partitioned into portions and loaded into cache 136 together with the inputs of the corresponding partitions.”, where “network parameter” as mentioned is understood to include weights per [0048] and [0061]).

Regarding claim 4, Pandey teaches the method of claim 3, wherein at least one of the one or more activation functions comprises a dot product ([0028]: “For example, operations performed on a set of input data (e.g., partitioned among multiple neurons) by various neurons may represent one layer, operations performed on the output of that layer may represent another layer, and so on. A neuron may represent any set of computations that takes two or more input numbers and produces an output number (e.g., via weight multiplication, bias addition, application of an activation function, etc.).”).

Regarding claim 5, Pandey teaches the method of claim 1, further comprising: storing a second activation input tensor to at least one of the one or more first storage devices; and partitioning the stored second activation input tensor into a plurality of second tensor segments; and sequentially loading individual second tensor segments of the stored second activation input tensor to the one or more second storage devices to processing circuitry to apply at least one of the one or more activation functions associated with the tensor segments (the Examiner interprets the instant claim’s limitations to merely constitute a further iteration of additional instances of the same limitations discussed above in relation to claim 1, and hence the mappings as provided above with respect to claim 1 are similarly applicable here, in the exercise or practice of the framework allowing further iterations of the same steps/teachings as applied to further instances of data/information).

Regarding claim 6, Pandey teaches the method of claim 5, wherein at least one of the one or more activation functions comprises an operation to additively combine an associated tensor segment of the plurality of tensor segments and an associated tensor segment of the plurality of second tensor segments ([0028]: “For example, operations performed on a set of input data (e.g., partitioned among multiple neurons) by various neurons may represent one layer, operations performed on the output of that layer may represent another layer, and so on. A neuron may represent any set of computations that takes two or more input numbers and produces an output number (e.g., via weight multiplication, bias addition, application of an activation function, etc.).”).
Regarding claim 7, Pandey teaches the method of claim 1, wherein the one or more first storage devices are external to the MCU (with reference to FIG. 1A, the Examiner understands the edge device and its MCU to be separate and apart from the storage/memory aspects of the host computing device, i.e., “external to” as recited).

Regarding claim 8, Pandey teaches the method of claim 1, wherein sequentially loading individual tensor segments of the stored first activation input tensor to memories local to processing circuitry comprises executing a sequence of first direct memory access (DMA) ([0051]: “... FIG. 4B depicts an efficient factorization of data loading and processing in the instances when a buffer is capable of storing at least N values. Input buffer 452 may store all N input values {Ij} that are loaded from a system memory 460 during cycle 1 of direct memory access (DMA) operations. Similarly, during cycle 1, N values {W1j} of the weights (which determine the first output value O1) may be loaded from system memory 460 to weight buffer 454. Additionally, during cycle 1, M buffer values {Bi} may be loaded from system memory 460 to output buffer 456, which will eventually store the output values {Oi}.”).

Regarding claim 9, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale. The claim additionally recites elements for circuitry to perform the recited steps, which the Examiner believes is further taught per [0098]’s mention of “a processing element.”

Regarding claim 10, Pandey teaches the computing device of claim 9, wherein: the one or more first storage devices comprise at a dynamic random access memory (DRAM) device or a flash memory device, or a combination thereof and the one or more second storage devices comprise at least one static random access memory (SRAM) device ([0098]: “The implementations of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. “Memory” includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, “memory” includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices, and any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).”, from which the Examiner believes it possible to understand both of a host computing device and an edge computing device, as might be used in relation to FIG. 1A’s framework for example, to feature memory implementations of one or both of SRAM or DRAM, thereby permitting a version of the FIG. 1A framework where the host computing device features DRAM and the edge computing device features SRAM).

Regarding claim 11, the claim includes the same or similar limitations as claim 8 discussed above, and is therefore rejected under the same rationale.

Regarding claim 12, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale.

Regarding claim 13, Pandey teaches the computing device of claim 9, and further comprising a memory bus coupled to the MCU to transfer tensor segments and/or weights between the one or more first storage devices and the one or more second storage devices ([0078]: “The parameters of the MLM may include weights, biases, activation functions, classifiers, and so on. In some implementations, the second memory device may be a random-access memory connected to the processor by a bus interconnect.”).

Regarding claim 15, the claim includes the same or similar limitations as claims 1 and 9 as discussed above, and is therefore rejected under the same rationale. The claim additionally recites elements for a non-transitory storage medium comprising computer-readable instructions stored thereon and one or more processors to express circuitry for the recited steps discussed in relation to those earlier claims, which the Examiner believes is taught per [0098]’s teaching that “The implementations of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element.”

Regarding claim 16, the claim includes the same or similar limitations as claim 8 discussed above, and is therefore rejected under the same rationale.

Regarding claim 17, the claim includes the same or similar limitations as claim 13 discussed above, and is therefore rejected under the same rationale.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey in view of U.S. Patent Application Publication No. 2022/0012563 (“Castro Gonzalez”).

Regarding claim 14, Pandey teaches the computing device of claim 13, as discussed above. The aforementioned reference teaches a memory bus, as discussed above per claim 13, but does not teach the further limitation wherein the memory bus comprises a Serial Peripheral Interface (SPI). Rather, the Examiner relies upon Castro Gonzalez to teach what Pandey otherwise lacks; see, e.g., Castro Gonzalez’s comparable framework for edge inference of neural network models ([0062]-[0065]) as facilitated by a framework that features SPI bus elements specifically ([0193]). Both references are similarly directed with respect to providing frameworks and teachings to facilitate edge computing of machine-learning/neural network models. Hence, they are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate well-known bus elements as contemplated by Castro Gonzalez into Pandey’s framework, with a reasonable expectation of success, on the basis that various types of bus elements as broadly contemplated by the primary reference can satisfy its needs, including a specific bus element as contemplated by the secondary reference.

Regarding claim 18, the claim includes the same or similar limitations as claim 14 discussed above, and is therefore rejected under the same rationale.

Response to Arguments

9. Applicants’ arguments filed 11/25/25 have been fully considered but they are not persuasive. The Examiner will address the arguments in full, with remarks offered in the hope of clarifying the Examiner’s position in relation to the pending claims and the cited art.

[Applicants’ assertion A] On pages 8-9 of Applicants’ Reply, Applicants argue that the cited reference Pandey “only discusses related operations being performed on a single device” (citing to Pandey’s [0049]). The Examiner respectfully disagrees. See, e.g., Pandey’s FIG. 1A for reference (as discussed per [0026]-[0028]), which clearly teaches a host computing device (element 102) on which the to-be-deployed machine learning models (elements 108) are maintained. The deployment targets are edge devices, e.g., element 130. Edge devices are understood to have variable and sometimes limited resources, e.g., relative to the requirements of the models, and hence the challenge is to adapt the models at the host for successful and optimal deployment to the edge device in view of the edge device’s constraints. See, e.g., Pandey’s [0019]-[0025].
Pandey’s [0048], which begins the discussion continued in [0049] (which Applicants refer to in their arguments), clearly contemplates the challenge mentioned above: “In some instances a model 108 and/or an input data into model 108 may be too large to fit into high-speed cache 136 (or any other internal memory) of edge computing device 130” ([0048]’s second sentence). Hence, the reference as a whole, but also the portions cited in the maintained rejection, clearly contemplate the adaptation of a model stored on a first device (i.e., the host, as shown in FIG. 1A) into the cache residing in a second device (i.e., the edge, as shown in FIG. 1A). The Examiner understands the partitioning, and hence the optimization for an edge device, to be performed on the host, as a quick review of FIG. 1A and its corresponding discussion confirms. Hence, it follows that once appropriately partitioned in the optimal way for the target edge device and its cache, the model partition is then sent to the edge, i.e., corresponding to the recited “sequentially loading” step found in claim 1, for example (see the direct connection between host and edge as indicated by element 141 in FIG. 1A and discussed in [0026]). Hence, contrary to Applicants’ arguments, there are two devices clearly contemplated when Pandey is understood as a whole; indeed, even the portions cited are part of a discussion that makes clear the involvement of two separate devices.

[Applicants’ assertion B] Staying on page 9 of Applicants’ Reply, Applicants reason that Pandey teaches away from using two different devices (citing to Pandey’s [0003]). The Examiner respectfully disagrees. As a first matter, the discussion provided above should have made it clear that Applicants’ assertion A is in fact a misunderstanding of the reference. Rather, Pandey is very much about optimal deployment of models from a host to an edge. Hence, it cannot fairly be read and characterized as teaching away from the use of two devices. That reading simply does not make sense when one considers the reference as a whole, particularly in view of its stated problem and its solution framework as taught. In making this assertion, Applicants point to a paragraph ([0003]) that states a need or problem in the state of the art as relating to deployment. While the paragraph acknowledges that edge devices in such a distributed cloud/server-based framework can have “modest” resources in terms of processing and memory, it essentially settles that the advantages of processing only in the cloud do not necessarily outweigh instances and strategies that permit local edge processing. Then, it lists numerous advantages to processing at the edge device. Hence, this paragraph does not teach away from computing at the edge, nor does it ever say that computing should only be performed at the host. Applicants appear to have misunderstood this paragraph and are misapplying it to make this argument. Rather, and as noted above and as clearly shown in the discussion relating to FIG. 1A and the other FIGs., the reference is very much concerned with an optimization of a host-edge framework that permits models stored at the host to be optimally deployed to the various edge devices in view of their respective constraints. Based on the arguments identified above as assertions A and B, Applicants argue that the rejection is not proper and that Pandey does not read on the claims.

[Applicants’ assertion C] Moreover, on page 12 of the Reply, Applicants argue that additional reference Castro Gonzalez does not teach what Applicants assert Pandey lacks. Based on the remarks provided above, the Examiner believes Applicants’ arguments, namely their assertions A and B as identified here, are not consistent with the reference when it is reconsidered. The Examiner does not agree with them, and hence Applicants’ assertion C is moot as it has been argued.

Conclusion

10.
The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure:

US 2022/0067543 (Kollias)
CN 108615072 B (Young)
CN 117413280 A (Lin)

11. THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA, whose telephone number is (571) 272-7207. The examiner can normally be reached M-F 8am-5pm CST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHOURJO DASGUPTA/
Primary Examiner, Art Unit 2144

1 The Examiner construes the term “activation input tensor” in accordance with Applicants’ definition as provided in [0018] of the published specification, e.g., “an expression of one or more activation input values according to a particular structure, dimension and/or format.”
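The claimed technique at the center of the rejection (claims 1-2) is a double-buffered, or "ping-pong", loading scheme: an activation input tensor held in large external storage is partitioned into segments, and segment k+1 is staged into a second MCU-local buffer while the activation function runs on segment k. The sketch below is a plain-Python simulation of that data movement only. It is not taken from the application or from Pandey; the function names, the ReLU activation, and the NumPy arrays standing in for DRAM, SRAM, and DMA transfers are all illustrative assumptions.

```python
import numpy as np

def apply_activation(segment: np.ndarray) -> np.ndarray:
    # Hypothetical activation function (ReLU), purely for illustration.
    return np.maximum(segment, 0.0)

def process_tensor(activation_input, segment_len: int) -> np.ndarray:
    """Partition a stored tensor (the "first storage device" copy) into
    segments, then move the segments one at a time through two small
    MCU-local buffers (the "second storage devices"), alternating
    buffers the way a ping-pong DMA scheme would."""
    flat = np.asarray(activation_input, dtype=float).ravel()
    segments = [flat[i:i + segment_len] for i in range(0, flat.size, segment_len)]

    # Two ping-pong buffers standing in for MCU-integrated SRAM.
    buffers = [np.empty(segment_len), np.empty(segment_len)]
    outputs = []
    for k, seg in enumerate(segments):
        buf = buffers[k % 2]      # alternate which buffer receives the load
        buf[:seg.size] = seg      # stands in for the DMA transfer
        outputs.append(apply_activation(buf[:seg.size]))
    return np.concatenate(outputs)
```

On real hardware the overlap would come from a DMA engine with completion interrupts; this sequential simulation models only the partitioning and buffer alternation, not the concurrency.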

Prosecution Timeline

Jul 19, 2022 — Application Filed
Oct 31, 2025 — Non-Final Rejection (§102, §103)
Nov 25, 2025 — Response Filed
Mar 09, 2026 — Final Rejection (§102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591802 — GENERATING ESTIMATES BY COMBINING UNSUPERVISED AND SUPERVISED MACHINE LEARNING — granted Mar 31, 2026 (2y 5m to grant)
Patent 12586371 — SENSOR DATA PROCESSING — granted Mar 24, 2026 (2y 5m to grant)
Patent 12578979 — VISUALIZATION OF APPLICATION CAPABILITIES — granted Mar 17, 2026 (2y 5m to grant)
Patent 12572782 — SCALABLE AND COMPRESSIVE NEURAL NETWORK DATA STORAGE SYSTEM — granted Mar 10, 2026 (2y 5m to grant)
Patent 12549397 — MULTI-USER CAMERA SWITCH ICON DURING VIDEO CALL — granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 99% (+38.1%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
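The "With Interview" figure appears consistent with an additive percentage-point lift capped at 99% (65% + 38.1 points = 103.1, displayed as 99%). That reading is an assumption for illustration; the tool does not state its exact formula, and the function name below is invented here.

```python
def interview_adjusted(base_pct: float, lift_pts: float, cap: float = 99.0) -> float:
    """Assumed model: add the interview lift in percentage points,
    then cap so the displayed probability never exceeds 99%."""
    return min(base_pct + lift_pts, cap)

print(interview_adjusted(65.0, 38.1))  # 99.0 under this assumed model
```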

Free tier: 3 strategy analyses per month