Prosecution Insights
Last updated: April 19, 2026
Application No. 17/713,251

PACING SERVING OF CONTENT TRANSFER REQUESTS

Final Rejection — §101, §103, §112
Filed: Apr 05, 2022
Examiner: KNIGHT, PAUL M
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mellanox Technologies Ltd.
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% — grants 62% of resolved cases (169 granted / 272 resolved), +7.1% vs TC avg
Interview Lift: +17.0% — allowance rate for resolved cases with an interview vs. without (a strong lift)
Typical timeline: 3y 1m avg prosecution; 24 applications currently pending
Career history: 296 total applications across all art units

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 35.2% (-4.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 272 resolved cases
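The headline figures above are simple ratios over the examiner's resolved cases. The short Python sketch below shows the arithmetic; the granted/resolved counts are taken from this page, but the split of cases by interview is hypothetical, chosen only so that the resulting lift lands near the reported +17 points.

```python
# Sketch of the arithmetic behind the examiner statistics shown above.
# granted/resolved come from this page; the interview split is assumed.

granted, resolved = 169, 272
career_allow_rate = granted / resolved            # ~0.621 -> "62% Career Allow Rate"
tc_avg_allow_rate = career_allow_rate - 0.071     # implied by "+7.1% vs TC avg"

# Hypothetical split of the 272 resolved cases by whether an interview was held.
with_iv    = {"granted": 62,           "resolved": 84}
without_iv = {"granted": granted - 62, "resolved": resolved - 84}

rate_with    = with_iv["granted"] / with_iv["resolved"]          # ~0.738
rate_without = without_iv["granted"] / without_iv["resolved"]    # ~0.569
interview_lift = rate_with - rate_without                        # ~ +0.17, the "+17% interview lift"

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"interview lift:    {interview_lift:+.1%}")
```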

Office Action

§101 §103 §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Style In this action unitalicized bold is used for claim language, while italicized bold is used for emphasis. Information Disclosure Statement While the IDS includes documents sent to Applicant from a foreign patent office more than three months before the IDS was filed, the filing fee has been paid and the documents have been considered. Specification The amended Title is entered. Applicant Reply “The claims may be amended by canceling particular claims, by presenting new claims, or by rewriting particular claims as indicated in 37 CFR 1.121(c). The requirements of 37 CFR 1.111(b) must be complied with by pointing out the specific distinctions believed to render the claims patentable over the references in presenting arguments in support of new claims and amendments. . . . The prompt development of a clear issue requires that the replies of the applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. . . . An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” MPEP § 714.02. Generic statements or listing of numerous paragraphs do not “specifically point out the support for” claim amendments. “With respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04 which provides that a ‘simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported.’)” MPEP § 2163(II)(A). Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more. Step 1: Is the claim to a process, machine, manufacture, or composition of matter? All claims are found to be directed to one of the four statutory categories, unless otherwise indicated in this action. Step 2A Prongs One and Two (Alice Step 1): According to Office guidance, claims that read on math do not recite an abstract idea at step 2A1, when the claims fail to refer to the math by name.1 The MPEP also equates “recit[ing] a judicial exception” with “state[ing]” or “describ[ing]” an abstract idea in the claims.2 Consistent with this guidance, an abstract idea may be first recited in a dependent claim even though the independent claims read on that abstract idea. 
Claim limitations which recite any of the abstract idea groupings set forth in the manual are found to be directed, as a whole, to an abstract idea unless otherwise indicated.3 The claims do not recite additional elements that integrate the abstract ideas into a practical application.4 To confer patent eligibility to an otherwise abstract idea, claims may recite a specific means or method of solving a specific problem in a technological field.5 Independent Claims 1. A processing apparatus, comprising a processor to train an artificial intelligence model to (The claims as a whole are directed to the mental and mathematical process of deriving a pacing metric from a pacing action. This language reads on merely determining a pacing setting for use in a computer. The claims merely instruct that these mental and mathematical processes should be carried out using generic computer components including a generic “artificial intelligence model.” The language above reads on an instruction to implement the abstract ideas recited below using conventional computer components. Implementation using a generic machine learning model is merely an instruction to implement the following abstract ideas using conventional computer components.6 Note also that “train an artificial intelligence model to . . .” is written as an intended use for the subsequently recited language. See MPEP §§ 2103 and 2111.04.) find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests (Finding a pacing action from which to derive a pacing metric for use in commencement of serving content transfer requests reads on both math and on a mental process. The type of decisions being made merely limit to a field of use (e.g. for use in pacing commencement). Note also that “for use in pacing commencement” is written as an intended use because it does not require any steps to be performed or limit to a particular structure.) in a storage sub-system, wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (This merely limits the claimed mental processes and mathematical operations to changes in pacing.) 12. A processing apparatus, comprising processing circuitry to use an artificial intelligence model trained to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests (See rejection of claim 1. Implementing using “a processing apparatus, comprising processing circuitry to use an artificial intelligence model” is merely instruction to apply an exception using generic computing components. This finding applies to all language using generic hardware to implement the claimed abstract ideas.) in a storage sub-system, wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 21. A method, comprising: receiving training data; and training an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests (See rejection of claim 1. Implementing using “a processing apparatus, comprising processing circuitry to use an artificial intelligence model” is merely instruction to apply an exception using generic computing components. 
This finding applies to all language using generic hardware to implement the claimed abstract ideas. Note that receiving training data is mere extra-solution activity.) in a storage sub-system, wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 22. A method to use an artificial intelligence model trained to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage subsystem, the method comprising: applying the artificial intelligence model to find the pacing action; and computing the pacing metric responsively to the pacing action, (This reads on an instruction to apply the mental process of finding a pacing action from which to derive a pacing metric using a generic model (i.e. using generic computer components.) Note that training based on data is also a mental process. The use of the pacing metric in “serving content transfer requests” merely limits to a field of use.) wherein: the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 23, A software product, comprising a non-transient computer-readable medium in which program instructions are stored, which instructions, when read by a central processing unit (CPU), cause the CPU to: (This reads on using generic computer components to implement a mental process.) receive training data; and train an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, (See rejection of claim 1. This reads on an instruction to apply the mental process of finding a pacing action from which to derive a pacing metric using a generic model (i.e. using generic computer components.) Note that training based on data is also a mental process. The use of the pacing metric in “serving content transfer requests” merely limits to a field of use.) wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 24. A software product, comprising a non-transient computer-readable medium in which program instructions are stored, which instructions, when read by a central processing unit (CPU), cause the CPU to: (This reads on using generic computer components to implement a mental process.) apply an artificial intelligence model to find a pacing action; and compute a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, responsively to the pacing action wherein: (See rejection of claim 1. This reads on an instruction to apply the mental process of finding a pacing action from which to derive a pacing metric using a generic model (i.e. using generic computer components.) Note that training based on data is also a mental process. The use of the pacing metric in “serving content transfer requests” merely limits to a field of use.) the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 
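For orientation, the sketch below illustrates in simplified form the kind of pacing loop the independent claims recite: a model proposes a pacing action, the action is applied as a change to a pacing metric (here a pacing period), and commencement of serving queued content transfer requests is paced according to that metric. This is a hypothetical illustration only; the class names, values, and the trivial stand-in "model" are invented here and are not taken from the application or the prosecution record.

```python
# Hypothetical illustration of the recited pacing loop. All names and values are invented.
import time
from collections import deque

class TrivialPacingModel:
    """Stand-in for a trained AI model; maps an observed state to a pacing action."""
    def find_pacing_action(self, state: dict) -> float:
        # e.g. lengthen the pacing period when the pending queue grows, shorten it otherwise
        return +0.001 if state["pending"] > 8 else -0.001

class StorageSubSystemPacer:
    def __init__(self, pacing_period: float = 0.005):
        self.pacing_period = pacing_period   # the pacing metric (a pacing period, in seconds)
        self.pending = deque()               # content transfer requests awaiting commencement of service

    def submit(self, request):
        self.pending.append(request)

    def serve_next(self, model: TrivialPacingModel):
        if not self.pending:
            return None
        action = model.find_pacing_action({"pending": len(self.pending)})  # the pacing action
        self.pacing_period = max(0.0, self.pacing_period + action)         # apply the change to the metric
        time.sleep(self.pacing_period)                                     # pace commencement of serving
        return self.pending.popleft()

if __name__ == "__main__":
    pacer, model = StorageSubSystemPacer(), TrivialPacingModel()
    for i in range(10):
        pacer.submit(f"transfer-request-{i}")
    while (req := pacer.serve_next(model)) is not None:
        print("commenced serving", req, "pacing_period =", round(pacer.pacing_period, 4))
```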
Step 2B (Alice Step 2): The rejected claims do not recite additional elements that amount to significantly more than the judicial exception. All additional limitations that do not integrate the claimed judicial exception into a practical application also fail to amount to significantly more, for the reasons given at step 2A2. All limitations found to be extra-solution activity at step 2A2 are found to be WURC, including limitations that read on mere data gathering, data storage, and data input/output/transfer. (This applies e.g. to claim 21 which recites “receiving training data.”) This finding is based on cases which have recognized that generic input-output operations, repetitive processing operations, and storage operations are WURC.7 Other aspects of generic computing have also been found to be WURC.8 Further, the description itself may provide support for a finding that claim elements are WURC. The analysis under § 112(a) as to whether a claim element is “so well-known that it need not be described in detail in the patent specification” is the same as the analysis as to whether the claim element is widely prevalent or in common use.9 Similarly, generic descriptions in the Specification of claimed components and features has been found to support a conclusion that the claimed components were conventional.10 Improvements to the relevant technology may support a finding that the claims include a patent eligible inventive concept. But some mechanism that results in any asserted improvements must be recited in the claim, and the Specification must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing the improvement.11 This applies to the dependent claims below. Dependent Claims 2. The apparatus according to claim 1, wherein the processor is configured to train the artificial intelligence model to find the pacing action from which to derive the pacing metric for use in pacing commencement of serving of the content transfer requests in a storage sub-system. (This merely further limits the field of use to request in a storage sub-system.) 3. The apparatus according to claim 2, wherein the pacing metric is the pacing period. (Determining a pacing period is a mental process.) 4. The apparatus according to claim 3, wherein the pacing action is a change in the pacing period to be applied by the storage sub-system. (Determining a change in the pacing period to be applied is a mental process.) 5. The apparatus according to claim 2, wherein the processor is configured to train the artificial intelligence model to find the pacing action that maximizes at least one storage sub-system parameter (Finding the pacing action that maximizes a parameter is a mental process.) responsively to training data including at least one previous storage sub-system state and at least one previous pacing action. (Finding the pacing action based on a specific type of data merely limits to a data environment associated with a particular field of use.) 6. The apparatus according to claim 5, wherein the at least one storage sub-system parameter includes one or more of the following: a bandwidth; a cache hit rate; and a number of buffers in flight. (See rejection of claim 5. Maximizing a parameter reads on a mental process. Limiting the parameter merely limits to a field of use.) 7. 
The apparatus according to claim 2, wherein the processor is configured to train the artificial intelligence model to find the pacing action that maximizes at least one storage sub-system parameter responsively to training data including at least one window of storage sub-system states and at least one window of pacing actions. (This merely limits the data environment to a particular field of use.) 8. The apparatus according to claim 7, wherein the processor is configured to: apply the artificial intelligence model to predict a plurality of future pacing actions responsively to training data including windows of storage sub-system states and windows of pacing actions; (Making a prediction in response to windows of states and actions reads on the mental process of learning by trial and error.) apply the future pacing actions resulting in corresponding future storage sub-system states; (This reads on a generic instruction to apply the action determined during the mental process.) compute a reward or punishment responsively to comparing values of the at least one storage sub-system parameter of the future storage sub-system states with at least one target value; and train the artificial intelligence model responsively to the reward or punishment. (This reads on implementing the mental process of learning by trial and error using generic computer components.) 9. The apparatus according to claim 8, wherein the processor is configured to apply a storage sub-system simulation engine that simulates operation of the storage sub-system to provide the future storage sub-system states responsively to the future pacing actions. (Running a simulation to generate data reads on implementation of a mental process using generic computer components.) 10. The apparatus according to claim 8, wherein each of the future storage sub-system states includes one or more of the following: a bandwidth; a cache hit rate; the pacing metric; a number of buffers in flight; a cache hit rate; the pacing metric; a number of buffers in flight; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (This merely limits to a field of use.) 11. The apparatus according to claim 2, wherein the processor is configured to find the pacing action from which to derive the pacing metric for use in pacing commencement of the serving of the content transfer requests in the storage sub-system responsively to reinforcement learning. (This is merely an instruction to apply the exception using generic computer components (i.e. reinforcement learning).) 13. The apparatus according to claim 12, wherein the processing circuitry is configured to use an artificial intelligence model trained to find the pacing action from which to derive the pacing metric for use in pacing commencement serving of the content transfer requests in a storage sub-system. (See rejection of claim 2.) 14. The apparatus according to claim 13, further comprising the storage sub-system, and wherein the processing circuitry is configured to: pace the commencement of the serving of the content transfer requests responsively to the pacing metric; apply the artificial intelligence model to find the pacing action; and compute the pacing metric responsively to the pacing action.
(See rejections of claims 1, 2, and 11.) 15. The apparatus according to claim 14, further comprising a network interface comprising one or more ports for connection to a packet data network and configured to receive the content transfer requests from at least one remote device over the packet data network via the one or more ports, and wherein: (This merely recites implementation of the abstract ideas above using conventional computing components, and limits to a particular field of use defined by the particular components associated with the pacing of data transmission.) the storage sub-system is configured to be connected to local peripheral storage devices, and comprises at least one peripheral interface, and a memory sub-system comprising a cache and a random-access memory (RAM), the memory sub-system being configured to evict overflow from the cache to the RAM; (This merely limits the field of use to include a generic storage sub-system.) and the processing circuitry is configured to manage transfer of content between at least one remote device and the local peripheral storage devices via the at least one peripheral interface and the cache, (This merely limits the field of use to include a generic storage and transmission computing components.) responsively to the content transfer requests, while pacing the commencement of the serving of respective ones of the content transfer requests responsively to the pacing metric so that while ones of the content transfer requests are being served, other ones of the content transfer requests pending serving are queued in at least one pending queue. (This reads on implementing the determined pacing metric using generic computer components in their ordinary capacities (i.e. pacing the requests in a “queue.”) 16, The apparatus according to claim 13, wherein the pacing metric is the pacing period, and the pacing action is a change in the pacing period. (See rejections of claims 3 and 4.) 17. The apparatus according to claim 13, wherein the processing circuitry is configured to: apply the artificial intelligence model to find the pacing action responsively to at least one previous state and at least one previous pacing action of the storage sub-system; and compute the pacing metric responsively to the pacing action. (See rejection of claim 1.) 18, The apparatus according to claim 13, wherein the at least one previous state includes one or more of the following: a bandwidth of the storage sub-system; a cache hit rate of the storage sub-system; a pacing metric of the storage sub-system; and a number of buffers in flight over the storage sub-system; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (See rejection of claim 10.) 19. The apparatus according to claim 13, wherein the processing circuitry is configured to: apply the artificial intelligence model to find the pacing action responsively to a window of previous states and a window of previous pacing actions of the storage sub-system; and compute the pacing metric responsively to the pacing action. (This merely limits the data environment to a particular field of use.) 20. 
The apparatus according to claim 19, wherein each of the previous states includes one or more of the following: a bandwidth of the storage sub-system; a cache hit rate of the storage sub-system; a pacing metric of the storage sub-system; a number of buffers in flight over the storage sub-system; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (This merely limits to a field of use.) 25. (New) The apparatus of claim 1, wherein the processor is to pace commencement of serving of the content transfer requests in the storage sub-system responsively to the derived pacing metric. (This reads on a mere instruction to apply the abstract ideas of claim 1 using conventional components.) All dependent claims are rejected as containing the material of the claims from which they depend. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention. Generally: separately listed claim elements are construed as distinct components, that all claim terms must be given weight, there is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims, and repeated and consistent descriptions in the specification indicate the proper scope of a claimed term. “[C]laims must ‘conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description.’ 37 C.F.R. § 1.75(d)(1).” Phillips v. AWH Corp., 415 F.3d 1303, 1316 (Fed. Cir. 2005) (as cited in MPEP § 2111). Therefore, use of two different terms in the claims that both rely on the description of a single structure in the Specification may render at least one term indefinite because there is no way to determine which term should be construed in view of the description of the single structure. 
The independent claims substantially recite “train an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, wherein: the pacing metric is at least one of: a pacing period, a speed threshold or a cache size; and the pacing action indicates a change to be applied to the pacing metric.” The proposed amendments overcome the previous rejection, but the amended claims are unclear because the terms, now reasonably well defined, seem inconsistent with the claimed relationship between them (i.e. that one is “deriv[ed]” from the other in the way claimed.) There are several ways the language “a processor to train an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests” could be interpreted, none of which is clearly more correct than the others. First and second, the language could be read as finding a pacing action which is merely intended to be used to derive a pacing metric (“from which to derive a pacing metric[.]”) This could refer to either deriving the type or deriving the magnitude of a value of the pacing metric. Third, the claim language could be read as requiring deriving the type of pacing metric (“deriv[ing] a pacing metric . . . wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size[.]”) Fourth, this language could be read to require finding a pacing action that results in a modification of the magnitude of a value of a pacing metric (“pacing action from which to derive a [value of the] pacing metric[.]”) All independent claims substantially recite “a processor to train an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests[.]” It is unclear whether the language “for use in pacing commencement” applies to the “pacing action,” applies to the “pacing metric,” or if it applies to both the pacing action and the pacing metric. This results in three different ways of interpreting the claim, none of which is clearly correct. Further, the language “for use in pacing commencement” could also be written as an intended use, leaving yet another way the language could be read. All dependent claims are rejected as containing the limitations of the claims from which they depend. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claims 1-6, 11-18, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Mao (Network System Optimization with Reinforcement Learning: Methods and Applications). 1. A processing apparatus, comprising a processor to train an artificial intelligence model (“System operation decisions are often highly repetitive, making it easy to collect an abundance of training data to train RL models.” Mao P. 4. “In this thesis, we take a step back and ask what is the most natural way for machines to optimize complex networking systems. Rather than explicitly design and tune fixed algorithms for each problem, we seek to enable systems to learn to efficiently optimize the performance on their own. . . . Instead, [the system operator] architects a framework for data collection, experimentation, and learning to discover the low-level actions that achieve a high-level optimization objective automatically.” Mao P. 2. “Each training iteration, including interaction with the simulator, model inference and model update from all training workers, takes roughly 1.5 seconds on a machine with Intel Xeon E5-2640 CPU and Nvidia Tesla P100 GPU.” Mao P. 140. It is not expressly stated in the reference that the above listed hardware components are used to train each type of model listed below, in a single embodiment. It would have been obvious to one of ordinary skill in the art before the effective filing date to use the teaching of Mao on P.140 to implement the models and model training operations using conventional hardware because executing algorithms using the combination of a processor an memory taught in Mao (cited above) automates model training and reduces the human work and time otherwise required to train machine models.) to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage subsystem, wherein the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (Note that “a processor to train an artificial intelligence model to find a pacing action . . .” is written as an intended use for a processor and an intended use for an artificial intelligence model. Intended use language is explained in MPEP §§ 2103 and 2111.02. See also MPEP § 2111.04. “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” Similarly, “to derive a pacing metric . . .” and “for use in pacing commencement of serving content transfer requests in a storage subsystem . . .” are also written as intended uses. While implied operations are obvious over the prior art, it is submitted that the claims read on the recited hardware alone. As best understood, the claimed “pacing metric” is a metric related to data transfer and the “pacing action” reads on a change in the pacing metric. See Spec. P. 13, 21-22. Based on the description in the Specification, the “change in the pacing period” of claim 4 does not differ in scope from the pacing action of claim 1. The claims read on various teachings in Mao. For completeness, several are listed below. Each independently teaches the claimed subject matter. 
As background, Mao teaches “Resource management refers broadly to the methods used to determine how to allocate compute and communication resources (e.g., CPU cycles, memory blocks, network bandwidth, etc.) to different applications, and to manage the contention for resources among applications. Resource management problems are ubiquitous and appear in all kinds of networks and systems. Examples include job scheduling in compute clusters [121, 123, 305], bitrate adaptation for video streaming [146, 329], network congestion control [320, 319, 86], relay selection for Internet telephony [330], virtual machine allocations in cloud computing [139] and more.” Mao P. 1. One underlying concept in application of the reinforcement learning discussed below, is that models measure a state and take an action to alter that measured state. See e.g. Mao PP. 11-15 (Primer on Reinforcement Learning), evidencing the knowledge of one of ordinary skill in the art. Further, Mao suggests applying the techniques taught in the reference to storage subsystems. “Sequential decision-making problems manifest in a variety of ways across computer systems disciplines. These problems span a multi-dimensional space from centralized vs. multiagent control to reactive, fast control loops vs. long-term planning. In this section, we overview a sample of problems from each discipline and how to formulate them as MDPs.” Mao P. 121. “Operating systems seek to efficiently multiplex hardware resources (compute, memory, storage) amongst various application processes. One example is providing a memory hierarchy: computer systems have a limited amount of fast memory and relatively large amounts of slow storage. Operating systems provide caching mechanisms which multiplex limited memory amongst applications which achieve performance benefits from residency in faster portions of the cache hierarchy. In this setting, an RL agent can observe the information of both the existing objects in the cache and the incoming object; it then decides whether to admit the incoming object and which stale objects to evict from the cache. The goal is to maximize the cache hit rate (so that more application reads occur from fast memory) based on the access pattern of the objects.” Mao PP. 122-123. Mao teaches “Given a state, we extract some useful information (e.g., queue sizes, server processing rate estimation, etc.) and we use some heuristics to decide an action based on the information (e.g., rank the queue size normalized by server speed and then join the shortest queue).” Mao P. 12. Note that the action of joining the shortest queue causes a change to the state of the processing rate. Note changing to a smaller buffer (i.e. a smaller cache) is used to reduce the pace of data transfer. Mao teaches “As a primary tool to optimize the video quality (e.g., higher resolution, fewer rebufferings, etc.), content providers deploy adaptive bitrate (ABR) algorithms, which run on client-side video players and dynamically choose a bitrate for each video chunk (e.g., 4-second block). based on observations such as the estimated network throughput and playback buffer occupancy. Their goal is to maximize the user’s quality of experience (QoE) by adapting the video bitrate to the underlying network conditions.” Mao P. 5. “At each step, the agent observes the past network throughput measurement, the current video buffer size, and the remaining portion of the video. The action is the bitrate for the next video chunk. 
The objective is to maximize the video resolution and minimize the stall (which occurs when download time of a chunk is larger than the current buffer size) and the reward is structured to be a linear combination of selected bitrate and the stall when downloading the corresponding chunk.” Mao P. 149. In this case, the network throughput reads on the pacing metric and the change to the bitrate for the next video chunk reads on the pacing action. This teaches the claimed “speed threshold.” Mao teaches observing five task parameters (pacing metrics). In response, actions that minimize completion time are scheduled (pacing actions). See Mao P. 150, last paragraph. Specifically, Mao teaches “Thus, the scheduling agent observes (1) the number of tasks remaining in the stage, (2) the average task duration, (3) the number of executors currently working on the stage, (4) the number of available executors, and (5) whether available executors are local to the job. This set of information is embedded as features on each node of the job DAGs. The scheduling action is two-dimensional—(1) which node to work on next and (2) how many executors to assign to the node. We structure the reward at step k as rk=−(tk−tk−1)Jk, where Jk is the number of jobs in the system during the physical time interval [tk−1,tk). Sum of such rewards penalize the agent in order to minimize the average job completion time.” Mao P. 150. Here the number of executors teaches tokens allocated during the time period tk−1 to tk. This teaches the claimed “pacing period.” Mao teaches “Congestion control has been a perennial problem in networking for three decades [152], and governs when hosts should transmit packets. Transmitting packets too frequently leads to congestion collapse (affecting all users) [225] while overconservative transmission schemes under-utilize the available network bandwidth. Good congestion control algorithms achieve high throughput and low delay while competing fairly for network bandwidth with other flows in the network. . . . At each step, the agent observes the network state, including the throughput and delay. The action is a tuple of pacing rate and congestion window. The pacing rate controls the inter-packet send time, while the congestion window limits the total number of packets in-flight (sent but not acknowledged). We set our (configurable) action interval at 10ms (suitable for typical Internet delays). Our reward function is adopted from the Copa [25] algorithm: log(throughput) - log(delay)/2 - log(lost packets).” Mao PP. 151-152. The congestion window limiting the number of packets (where the limit on packets teaches tokens) during an action interval teaches the claimed pacing period.) 2. The apparatus according to claim 1, wherein the processor is configured to train the artificial intelligence model to find the pacing action from which to derive the pacing metric for use in pacing commencement of serving of the content transfer requests in a storage sub-system. (See rejection of claim 1. See also Spec. P. 13 describing a cache related pacing metric. Mao teaches “Operating systems seek to efficiently multiplex hardware resources (compute, memory, storage) amongst various application processes. One example is providing a memory hierarchy: computer systems have a limited amount of fast memory and relatively large amounts of slow storage.
Operating systems provide caching mechanisms which multiplex limited memory amongst applications which achieve performance benefits from residency in faster portions of the cache hierarchy. In this setting, an RL agent can observe the information of both the existing objects in the cache and the incoming object; it then decides whether to admit the incoming object and which stale objects to evict from the cache. The goal is to maximize the cache hit rate (so that more application reads occur from fast memory) based on the access pattern of the objects.” Mao P. 122.) 3. The apparatus according to claim 2, wherein the pacing metric is the pacing period. (See rejection of claim 1.) 4. The apparatus according to claim 3, wherein the pacing action is a change in the pacing period to be applied by the storage sub-system. (See rejection of claim 1.) 5. The apparatus according to claim 2, wherein the processor is configured to train the artificial intelligence model to find the pacing action that maximizes at least one storage sub-system parameter responsively to training data including at least one previous storage sub-system state and at least one previous pacing action. (“We focus on a class of RL algorithms that perform training by using gradient-descent on the policy parameters[286]. Recall that the objective is to maximize the expected discounted total reward; the gradient of this objective is given by [equation 2.1], where Qπθ (st,at) is the expected total discounted reward from (deterministically) choosing action at in state st, and subsequently following policy πθ [285, §13.2].” Mao P. 13. “The intuition of REINFORCE is that the direction ∇θ logπθ(st,at) indicates how to change the policy parameters in order to increase πθ(st,at) (i.e., increase the probability of action at at state st).” Mao P. 14.) 6. The apparatus according to claim 5, wherein the at least one storage sub-system parameter includes one or more of the following: a bandwidth; a cache hit rate; and a number of buffers in flight. (“RL-based systems research is inherently interdisciplinary and creates abundant opportunities to draw intellectual connections between the networking, systems, and machine learning areas. The landscape of building learning-based systems is vast, ranging from centralized control problems[.] . . . Further, the control tasks manifest at a variety of timescales, from fast, reactive control systems with sub-second response-time requirements (e.g., admission/eviction algorithms for caching objects in memory) to longer term planning problems that consider a wide range of signals to make decisions (e.g., VM allocation/placement in cloud computing).” Mao P. 8. “Operating systems provide caching mechanisms which multiplex limited memory amongst applications which achieve performance benefits from residency in faster portions of the cache hierarchy. In this setting, an RL agent can observe the information of both the existing objects in the cache and the incoming object; it then decides whether to admit the incoming object and which stale objects to evict from the cache. The goal is to maximize the cache hit rate (so that more application reads occur from fast memory) based on the access pattern of the objects.” Mao P. 123.) 11. The apparatus according to claim 2, wherein the processor is configured to find the pacing action from which to derive the pacing metric for use in pacing commencement of the serving of the content transfer requests in the storage sub-system responsively to reinforcement learning. 
(Mao teaches “At a high level, we believe that several benefits of RL are particularly well-suited to system optimization problems. . . . RL agents can learn to optimize a variety of high-level optimization objectives (e.g., user-level perceived video playback delay) without prior knowledge of how low-level metrics (e.g., transport-layer queueing delay, CDN cache hit ratio, backend video server utilization, etc.) impact the objective.” Mao P. 4.) 12. A processing apparatus, comprising processing circuitry to use an artificial intelligence model trained to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, wherein:the pacing metric is at least one of: a pacing period, a speed threshold or a cache size; andthe pacing action indicates a change to be applied to the pacing metric. (This claim is written as an intended use for “processing circuitry.” Intended use language is explained in MPEP §§ 2103 and 2111.02. “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” MPEP § 2111.04. See also rejection of claim 1.) 13. The apparatus according to claim 12, wherein the processing circuitry is configured to use an artificial intelligence model trained to find the pacing action from which to derive the pacing metric for use in pacing commencement serving of the content transfer requests in a storage sub-system. (See rejection of claim 2.) 14. The apparatus according to claim 13, further comprising the storage sub-system, and wherein the processing circuitry is configured to: pace the commencement of the serving of the content transfer requests responsively to the pacing metric; apply the artificial intelligence model to find the pacing action; and compute the pacing metric responsively to the pacing action. (See rejections of claims 1, 2, and 11.) 15. The apparatus according to claim 14, further comprising a network interface comprising one or more ports for connection to a packet data network and configured to receive the content transfer requests from at least one remote device over the packet data network via the one or more ports, and wherein: (“Table 6.2 provides an overview of 12 environments that we have implemented in Park.” Mao PP. 128-129.Table 6.2 includes “switch scheduling” where the state space includes queue occupancy for input-output pairs. Similarly Table 6.2 includes “network congestion control” where the state space includes packets and the action space includes congestion window and pacing rate.) the storage sub-system is configured to be connected to local peripheral storage devices, and comprises at least one peripheral interface, and a memory sub-system comprising a cache and a random-access memory (RAM), the memory sub-system being configured to evict overflow from the cache to the RAM; (“Further, the control tasks manifest at a variety of timescales, from fast, reactive control systems with sub-second response-time requirements (e.g., admission/eviction algorithms for caching objects in memory)” Mao P. 120. “Operating systems. Operating systems seek to efficiently multiplex hardware resources (compute, memory, storage) amongst various application processes. One example is providing a memory hierarchy: computer systems have a limited amount of fast memory and relatively large amounts of slow storage. 
Operating systems provide caching mechanisms which multiplex limited memory amongst applications which achieve performance benefits from residency in faster portions of the cache hierarchy. In this setting, an RL agent can observe the information of both the existing objects in the cache and the incoming object; it then decides whether to admit the incoming object and which stale objects to evict from the cache. The goal is to maximize the cache hit rate (so that more application reads occur from fast memory) based on the access pattern of the objects.” Mao P. 123.) and the processing circuitry is configured to manage transfer of content between at least one remote device and the local peripheral storage devices via the at least one peripheral interface and the cache, (“RL agents can learn to optimize a variety of high-level optimization objectives (e.g., user-level perceived video playback delay) without prior knowledge of how low-level metrics (e.g., transport-layer queueing delay, CDN cache hit ratio, backend video server utilization, etc.) impact the objective.” Mao P. 4. Note that the teaching of transport layer queuing in a system with caches would be understood by one of ordinary skill as referring to “managing transfer of content” between remote and local storage including a cache.) responsively to the content transfer requests, while pacing the commencement of the serving of respective ones of the content transfer requests responsively to the pacing metric so that while ones of the content transfer requests are being served, other ones of the content transfer requests pending serving are queued in at least one pending queue. (“Load balancing over two servers. (a) Job sizes follow a Pareto distribution and jobs arrive as a Poisson process; the RL agent observes the queue lengths and picks a server for an incoming job.” Mao P. xxi.) 16, The apparatus according to claim 13, wherein the pacing metric is the pacing period, and the pacing action is a change in the pacing period. (See rejections of claims 3 and 4.) 17. The apparatus according to claim 13, wherein the processing circuitry is configured to: apply the artificial intelligence model to find the pacing action responsively to at least one previous state and at least one previous pacing action of the storage sub-system; and compute the pacing metric responsively to the pacing action. (See rejection of claim 1.) 18. The apparatus according to claim 13, wherein the at least one previous state includes one or more of the following: a bandwidth of the storage sub-system; a cache hit rate of the storage sub-system; a pacing metric of the storage sub-system; and a number of buffers in flight over the storage sub-system; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (Mao teaches “Why favoring policy-based approaches? Policy-based methods are usually better suited for the system applications in this thesis. There are two main reasons for making this design choice. First, the policy π expresses a direct mapping between the states and actions, which conceptually adheres to the current way human engineers design algorithms to control the systems. 
Given a state, we extract some useful information (e.g., queue sizes, server processing rate estimation, etc.) and we use some heuristics to decide an action based on the information (e.g., rank the queue size normalized by server speed and then join the shortest queue).” Mao P. 12. Note that states are continually updated as part of reinforcement learning.) 21. A method, comprising: receiving training data; (“System operation decisions are often highly repetitive, making it easy to collect an abundance of training data to train RL models.” Mao P. 4.) and training an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, wherein: the pacing metric is at least one of: a pacing period, a speed threshold or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 22. A method to use an artificial intelligence model trained to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, the method comprising: applying the artificial intelligence model to find the pacing action; and computing the pacing metric responsively to the pacing action wherein: the pacing metric is at least one of: a pacing period, a speed threshold or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 1.) 23. A software product, comprising a non-transient computer-readable medium in which program instructions are stored, which instructions, when read by a central processing unit (CPU), cause the CPU to: (See rejection of claim 1.) receive training data; and train an artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, wherein: the pacing metric is at least one of: a pacing period, a speed threshold, or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 21.) 24. A software product, comprising a non-transient computer-readable medium in which program instructions are stored, which instructions, when read by a central processing unit (CPU), cause the CPU to: (See rejection of claim 1.) apply an artificial intelligence model to find a pacing action; and compute a pacing metric for use in pacing commencement of serving content transfer requests in a storage sub-system, responsively to the pacing action wherein: the pacing metric is at least one of: a pacing period; a speed threshold; or a cache size; and the pacing action indicates a change to be applied to the pacing metric. (See rejection of claim 22.) 25. (New) The apparatus of claim 1, wherein the processor is to pace commencement of serving of the content transfer requests in the storage sub-system responsively to the derived pacing metric. (See rejection of claim 1.) Claims 7-10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mao and Gajane (A Sliding-Window Algorithm for Markov Decision Processes with Arbitrarily Changing Rewards and Transitions, 2018.) 7. 
The apparatus according to claim 2, wherein the processor is configured to train the artificial intelligence model to find the pacing action that maximizes at least one storage sub-system parameter responsively to training data including at least one window of storage sub-system states and at least one window of pacing actions. (“RL agents can learn to optimize a variety of high-level optimization objectives (e.g., user-level perceived video playback delay) without prior knowledge of how low-level metrics (e.g., transport-layer queueing delay, CDN cache hit ratio, backend video server utilization, etc.) impact the objective. . . . System operation decisions are often highly repetitive, making it easy to collect an abundance of training data to train RL models.” Mao P. 4. Mao does not clearly teach maximizing a parameter responsive to training data including at least one window of system states and at least one window of pacing actions. Gajane teaches “We consider reinforcement learning in changing Markov Decision Processes where both the state-transition probabilities and the reward functions may vary over time. For this problem setting, we propose an algorithm using a sliding window approach and provide performance guarantees for the regret evaluated against the optimal non-stationary policy.” Gajane P. 1. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teaching of Gajane because this is part of a method that allows performance guarantees when the system varies over time while allowing for the reward function to vary over time.) 8. The apparatus according to claim 7, wherein the processor is configured to: apply the artificial intelligence model to predict a plurality of future pacing actions responsively to training data including windows of storage sub-system states and windows of pacing actions; (“For training, RL proceeds in episodes. Each episode consists of a sequence of (state, action, reward) observations—i.e., (st,at,rt) at each step t∈[0,1,...,T], where T is the episode length. The goal of RL is to maximize the total discounted reward E [SUM(γtrt) from t=0 to T], where γ is the discount factor that downweights the reward in the future.” Mao P. 11.) apply the future pacing actions resulting in corresponding future storage sub-system states; (“At each step t, the agent observes some state st, and takes an action at.” Mao P. 11.) compute a reward or punishment responsively to comparing values of the at least one storage sub-system parameter of the future storage sub-system states with at least one target value; (“Following the action, the state of the environment transitions to st+1 and the agent receives a reward rt as feedback.” Mao P. 11.) and train the artificial intelligence model responsively to the reward or punishment. (“In the general “model-free” RL setting, the agent only controls its actions: it has no a priori knowledge of the state transition probabilities or the reward function. However, by interacting with the environment, the agent can learn these quantities during training.” Mao P. 11. See also Mao P. 12, Fig. 2-1.) 9. The apparatus according to claim 8, wherein the processor is configured to apply a storage sub-system simulation engine that simulates operation of the storage sub-system to provide the future storage sub-system states responsively to the future pacing actions. 
(“After each chunk download, the simulator passes several state observations to the RL agent for processing: the current buffer occupancy, rebuffering time, chunk download time, size of the next chunk (at all bitrates), and the number of remaining chunks in the video. We describe how this input is used by the RL agent in more detail in §3.4.2. Using this chunk-level simulator, Pensieve can “experience” 100 hours of video downloads in only 10 minutes.” Mao P. 27. In section 3.4.2 Mao teaches: “After the download of each chunk t, Pensieve’s learning agent takes state inputs st = (xt, τt, nt, bt, ct, lt) to its neural networks. xt is the network throughput measurements for the past k video chunks; τt is the download time of the past k video chunks, which represents the time interval of the throughput measurements; nt is a vector of m available sizes for the next video chunk; bt is the current buffer level; ct is the number of chunks remaining in the video; and lt is the bitrate at which the last chunk was downloaded. Policy: Upon receiving st, Pensieve’s RL agent needs to take an action at that corresponds to the bitrate for the next video chunk. The agent selects actions based on a policy, defined as a probability distribution over actions π: π(st, at) → [0, 1]. π(st, at) is the probability that action at is taken in state st.” Mao P. 29 (Mao §3.4.2.).) 10. The apparatus according to claim 8, wherein each of the future storage sub-system states includes one or more of the following: a bandwidth; a cache hit rate; the pacing metric; a number of buffers in flight; a cache hit rate; the pacing metric; a number of buffers in flight; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (Mao teaches “Why favoring policy-based approaches? Policy-based methods are usually better suited for the system applications in this thesis. There are two main reasons for making this design choice. First, the policy π expresses a direct mapping between the states and actions, which conceptually adheres to the current way human engineers design algorithms to control the systems. Given a state, we extract some useful information (e.g., queue sizes, server processing rate estimation, etc.) and we use some heuristics to decide an action based on the information (e.g., rank the queue size normalized by server speed and then join the shortest queue).” Mao P. 12.) 19. The apparatus according to claim 13, wherein the processing circuitry is configured to: apply the artificial intelligence model to find the pacing action responsively to a window of previous states and a window of previous pacing actions of the storage sub-system; and compute the pacing metric responsively to the pacing action. (See rejection of claim 7.) 20.
19. The apparatus according to claim 13, wherein the processing circuitry is configured to: apply the artificial intelligence model to find the pacing action responsively to a window of previous states and a window of previous pacing actions of the storage sub-system; and compute the pacing metric responsively to the pacing action. (See rejection of claim 7.)

20. The apparatus according to claim 19, wherein each of the previous states includes one or more of the following: a bandwidth of the storage sub-system; a cache hit rate of the storage sub-system; a pacing metric of the storage sub-system; a number of buffers in flight over the storage sub-system; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window. (See rejection of claim 10. Note that states are continually updated as part of reinforcement learning.)

Response to Arguments

Applicant's arguments filed 12/03/2025 have been fully considered but they are not persuasive.

Rejections under § 101

The Remarks, while interesting, veer into various topics which do not clearly have any bearing on issues relevant to patent eligibility of the specific claims under examination. Where the Remarks apply a current point of law to facts which are relevant to this application, they are addressed. Bare assertions of patent eligibility which fail to articulate a specific relationship between a current point of law and specific claim language are not relevant.

Applicant asserts a “specific technological improvement.” Rem. 3. The Remarks describe the technical problem of thrashing, though the claims are not limited to any particular solution for that problem. The technical problem of thrashing does not appear to be mentioned by name in the original Specification. In the interest of clarity of the record, thrashing is explained; a basic understanding of caching is assumed. Thrashing results when data blocks located in different locations within main memory compete for the same location(s) in a cache. Accessing two such blocks repeatedly and alternately results in repeated evictions of each respective block from that cache location, back to its respective location in main memory, as each bumps the other out of the cache. This pattern of repeatedly replacing alternately accessed memory blocks in the same cache location is called thrashing. The problem can be mitigated by adding locations in the cache, thereby avoiding competition for a given location. Alternatively, reducing the rate at which one of the memory blocks competing for a given cache location is accessed may reduce the number of evictions of the other block competing for that location, allowing that block to remain in the cache where it can be repeatedly accessed without calls to main memory, potentially saving substantial bandwidth.
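The behavior described above can be reproduced with a toy direct-mapped cache model. The sketch below is illustrative only; the cache geometry and access pattern are assumptions and are not drawn from the claims or the Specification.

def run(accesses, num_lines):
    # Toy direct-mapped cache: each block address maps to exactly one cache line.
    cache = [None] * num_lines
    evictions = 0
    for block in accesses:
        line = block % num_lines
        if cache[line] != block:
            if cache[line] is not None:
                evictions += 1      # the resident block is bumped back to main memory
            cache[line] = block
    return evictions

# Blocks 0 and 8 map to the same line of an 8-line cache, so alternating accesses
# evict each other on nearly every reference (thrashing).
pattern = [0, 8] * 50
print(run(pattern, num_lines=8))    # 99 evictions
print(run(pattern, num_lines=16))   # 0 evictions: more cache locations remove the conflict

Reducing the rate at which one of the two blocks is accessed would likewise cut the eviction count, which is the mitigation described in the preceding paragraph.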
A technical solution to thrashing would be patent eligible. But merely claiming the application of generic machine learning and asserting the solution to a problem in the remarks does not constitute a technical solution. Claims directed to a technical solution generally recite components or steps described in the Specification with sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement; a claim lacking such components or steps is not directed to a technical solution. Throughout the Remarks, Applicant asserts that the solution to a technical problem lies in the use of a generic artificial intelligence model. See Rem. 11 (“The amended claims are directed to applied AI technology that solves a specific technological problem[.]”), 12 (“the amended claims are directed to training an AI model to solve the technical problem of storage sub-system congestion and cache eviction[.]”), 14 (“The amended claims recite training an AI model to find” actions for mitigating thrashing.). Applicant indicates that pacing is a problem to be solved and that the technical solution lies in “solv[ing] the above [pacing] problems by training a pacing artificial intelligence (AI) model to find a pacing action from which to derive a pacing metric[.]” Rem. 15. Applicant asserts this use of an AI model “improves storage sub-system performance by dynamically optimizing pacing metrics that were previously set manually or sub-optimally.” Rem. 15-16. According to Applicant, “[t]his is a concrete technical improvement, not an abstract idea.” Rem. 16. Claims which merely recite the generic training or use of an AI model for carrying out mental or mathematical processes are not a technical solution, because they fail to include components or steps described in the Specification with sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The MPEP is clear on this point. “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019). Since the claims fail to include components or steps described in the Specification with sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement, they are not directed to a technical solution to a technical problem.

Rejections under § 112b: The rejection from the previous action is mostly withdrawn in response to claim amendments and Applicant Remarks. However, the amendments and clarifications brought forth another ambiguity. See rejection above.

Rejections under § 103: It must be noted here that the claims include multiple terms which merely imply operations without actually requiring that any operations be performed, as well as language which fails to require specific structures.
Applicant states that Mao’s “pacing rate” is fundamentally different from the pacing rate as defined in the Specification. Rem. 24. Mao teaches limiting the number of packets during an action interval. This teaches a time period (action interval) for which credits (the number of packets) are allocated. Applicant also states that Mao fails to teach a speed threshold and a cache size, but the language cited from Mao in the rejection does not appear to be addressed. Applicant states that Mao fails to teach “peripheral storage (e.g. NVMe drives)” as described in the specification. Generally, the claims, not the Specification, are compared with the prior art when evaluating claims for obviousness.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M KNIGHT whose telephone number is (571) 272-8646. The examiner can normally be reached Monday - Friday 9-5 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

PAUL M. KNIGHT, Examiner, Art Unit 2148
/PAUL M KNIGHT/ Examiner, Art Unit 2148

1 This distinction between claims which read on math and claims which recite an abstract idea is based on official USPTO Guidance.
The 2019 Subject Matter Eligibility (SME) Examples instruct examiners that a claim reciting “training the neural network” where the background describes training as “using stochastic learning with backpropagation which is a type of machine learning algorithm that uses the gradient of a mathematical loss function to adjust the weights of the network” “does not recite any mathematical relationships, formulas, or calculations.” See 2019 SME Example 39, PP. 8-9 (emphasis added). In this example, the plain meaning of “training the neural network” read in light of the disclosure reads on backpropagation using the gradient of a mathematical loss function. See MPEP § 2111.01. In contrast, the 2024 SME Examples instruct examiners that a claim reciting “training, by the computer, the ANN . . . wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm” does recite an abstract idea because “[t]he plain meaning of [backpropagation algorithm and gradient descent algorithm] are optimization algorithms, which compute neural network parameters using a series of mathematical calculations.” 2024 PEG Example 47, PP. 4-6. The Memorandum of August 4, 2025, “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101,” P. 3, also directs examiners that “training the neural network” recited in Example 39 merely “involve[s] . . . mathematical concepts” and contrasts claim 2 of Example 47 as “referring to [specific] mathematical calculations by name[.]” (Emphasis added.) 2 “For instance, the claims in Diehr . . . clearly stated a mathematical equation . . . and the claims in Mayo . . . clearly stated laws of nature . . . such that the claims ‘set forth’ an identifiable judicial exception. Alternatively, the claims in Alice Corp. . . . described the concept of intermediated settlement without ever explicitly using the words ‘intermediated’ or ‘settlement.’” MPEP § 2106.04(II)(A). 3 “By grouping the abstract ideas, the examiners’ focus has been shifted from relying on individual cases to generally applying the wide body of case law spanning all technologies and claim types. . . . If the identified limitation(s) falls within at least one of the groupings of abstract ideas, it is reasonable to conclude that the claim recites an abstract idea in Step 2A Prong One.” MPEP § 2106.04(a). See also MPEP 2104(a)(2). 4 Step 2A prongs one and two are evaluated individually, consistent with the framework in the MPEP. Evaluation of relationships between abstract ideas and additional elements in one location promotes clarity of the record. 5 “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement.
That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019). 6 For clarity of the record, it is noted that merely using a generic machine learning technique in a particular environment, with no inventive concept, has also been found to be an abstract idea. Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205, 1208 (Fed. Cir. 2025). Further, training is a process which can be accomplished in the mind. However, the basis for the rejection is that the claims recite implementing mental processes using generic computer components. The claimed “artificial intelligence model” is found to be a generic computing component consistent with the guidance in the MPEP. Specifically, generic machine learning techniques are more properly characterized as conventional computer components, which may be used to implement an abstract idea. Compare MPEP §§ 2106.04(a)(1) and 2106.04(a)(2)(III)(C). 7 See MPEP § 2106.05(d)(II) listing operations including “receiving or transmitting data,” “storing and retrieving data in memory,” and “performing repetitive calculations” as WURC. 8 “But ‘[f]or the role of a computer in a computer-implemented invention to be deemed meaningful in the context of this analysis, it must involve more than performance of 'well-understood, routine, [and] conventional activities previously known to the industry.’ Content Extraction, 776 F.3d at 1347-48 (quoting Alice, 134 S. Ct. at 2359). Here, the server simply receives data, ‘extract[s] classification information . . . from the received data,’ and ‘stor[es] the digital images . . . taking into consideration the classification information.’ See ‘295 patent, col. 10 ll. 1-17 (Claim 17). . . . These steps fall squarely within our precedent finding generic computer components insufficient to add an inventive concept to an otherwise abstract idea. Alice, 134 S. Ct. at 2360 (‘Nearly every computer will include a 'communications controller' and a 'data storage unit' capable of performing the basic calculation, storage, and transmission functions required by the method claims.’); Content Extraction, 776 F.3d at 1345, 1348 (‘storing information’ into memory, and using a computer to ‘translate the shapes on a physical page into typeface characters,’ insufficient to confer patent eligibility); Mortg. Grader, 811 F.3d at 1324-25 (generic computer components such as an ‘interface,’ ‘network,’ and ‘database,’ fail to satisfy the inventive concept requirement); Intellectual Ventures I, 792 F.3d at 1368 (a ‘database’ and ‘a communication medium’ ‘are all generic computer elements’); BuySAFE v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (‘That a computer receives and sends the information over a network—with no further specification—is not even arguably inventive.’).” TLI Commc'ns LLC v. AV Auto., LLC, 823 F.3d 607, 614 (Fed. Cir. 2016) (emphasis added). 9 “The analysis as to whether an element (or combination of elements) is widely prevalent or in common use is the same as the analysis under 35 U.S.C. 
112(a) as to whether an element is so well-known that it need not be described in detail in the patent specification. See Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (supporting the position that amplification was well-understood, routine, conventional for purposes of subject matter eligibility by observing that the patentee expressly argued during prosecution of the application that amplification was a technique readily practiced by those skilled in the art to overcome the rejection of the claim under 35 U.S.C. 112, first paragraph)[.]” MPEP § 2106.05(d)(I). 10 “Similarly, claim elements or combinations of claim elements that are routine, conventional or well-understood cannot transform the claims. (Citing BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290-1291 (Fed. Cir. 2018)). When the patent's specification ‘describes the components and features listed in the claims generically,’ it ‘support[s] the conclusion that these components and features are conventional.’ Weisner v. Google LLC, 51 F.4th 1073, 1083-84 (Fed. Cir. 2022); see also Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1357-58 (Fed. Cir. 2024).” Broadband iTV, Inc. v. Amazon.com, Inc., 113 F.4th 1359 (Fed. Cir. 2024) 11 “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.” MPEP § 2106.05(a).

Prosecution Timeline

Apr 05, 2022
Application Filed
Sep 06, 2025
Non-Final Rejection — §101, §103, §112
Oct 21, 2025
Interview Requested
Nov 12, 2025
Examiner Interview Summary
Dec 03, 2025
Response Filed
Feb 05, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530592
NON-LINEAR LATENT FILTER TECHNIQUES FOR IMAGE EDITING
2y 5m to grant • Granted Jan 20, 2026
Patent 12530612
METHODS FOR ALLOCATING LOGICAL QUBITS OF A QUANTUM ALGORITHM IN A QUANTUM PROCESSOR
2y 5m to grant • Granted Jan 20, 2026
Patent 12499348
READ THRESHOLD PREDICTION IN MEMORY DEVICES USING DEEP NEURAL NETWORKS
2y 5m to grant • Granted Dec 16, 2025
Patent 12462201
DYNAMICALLY OPTIMIZING DECISION TREE INFERENCES
2y 5m to grant • Granted Nov 04, 2025
Patent 12456057
METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA
2y 5m to grant • Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+17.0%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
