Prosecution Insights
Last updated: April 19, 2026
Application No. 17/594,150

System and Method for Improving a Processing System using Learning Systems

Final Rejection: §101, §103, §112
Filed: Oct 04, 2021
Examiner: KNIGHT, PAUL M
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Marvell Asia Pte. Ltd.
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 79%
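The "With Interview" figure above is consistent with adding the examiner's interview lift to the base estimate as percentage points. A quick check (the additive combination is an assumption about how the tool derives the number, not something the page states):

```python
# Check that the "With Interview" figure equals the base grant
# probability plus the examiner's interview lift, assuming the tool
# combines them as additive percentage points.
base_grant_probability = 62    # %
interview_lift = 17            # percentage points
with_interview = base_grant_probability + interview_lift
print(with_interview)  # 79
```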

Examiner Intelligence

Career Allow Rate: 62% (169 granted / 272 resolved; +7.1% vs TC avg)
Interview Lift: +17.0% among resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 24 currently pending
Career History: 296 total applications across all art units

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 35.2% (-4.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 272 resolved cases
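The headline allowance figures can be reproduced from the raw counts shown above. A minimal sketch (the Tech Center baseline is back-calculated from the "+7.1% vs TC avg" delta, so it is an inference, not official USPTO data):

```python
# Reproduce the examiner's headline statistics from the raw counts
# shown on this page. The TC 2100 baseline is inferred from the
# "+7.1% vs TC avg" delta, not taken from USPTO records.
granted = 169
resolved = 272

career_allow_rate = granted / resolved      # fraction of resolved cases allowed
tc_average = career_allow_rate - 0.071      # implied Tech Center baseline

print(f"Career allow rate: {career_allow_rate:.1%}")   # ~62.1%
print(f"Implied TC average: {tc_average:.1%}")         # ~55.0%
```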

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Style

In this action, unitalicized bold is used for claim language, while italicized bold is used for emphasis.

Support for the Claim Language

The provisional application filed 4 December 2019 consists of a one-page Specification. No drawings or technical details of the claimed invention are found in the provisional application. The earliest potential support for the claims as a whole first appears in PCT/US20/62978, filed 3 December 2020.

Election/Restrictions

Examiner notes that all claim amendments are consistent with the election without traverse filed 06/09/2025. Applicant is reminded that this election continues to apply throughout prosecution. The Office does not permit a shift. Applicant elected Group 1, directed to implementing a genetic algorithm. The claims directed to various objectives for the model (Group 2), to feeding various inputs to the model (Group 3), and to specialized hardware used to acquire or generate data for the model (Group 4) will be evaluated for rejoinder if the claims from which they depend are ultimately allowed.

Applicant Reply

“The claims may be amended by canceling particular claims, by presenting new claims, or by rewriting particular claims as indicated in 37 CFR 1.121(c). The requirements of 37 CFR 1.111(b) must be complied with by pointing out the specific distinctions believed to render the claims patentable over the references in presenting arguments in support of new claims and amendments. . . . The prompt development of a clear issue requires that the replies of the applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. . . . 
An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” MPEP § 714.02. Generic statements or listings of numerous paragraphs do not “specifically point out the support for” claim amendments. “With respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04 which provides that a ‘simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported.’)” MPEP § 2163(II)(A).

Specification

The proposed amendment to the title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. See MPEP § 606.01.

Claim Objections

Claim 2 is objected to because of the following informalities: Claim 2 recites “The system of Claim 1, wherein and wherein”. The “and wherein” should be deleted. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-2, 11-14, 17-18, and 27-32 are rejected under 35 U.S.C. 
101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? All claims are found to be directed to one of the four statutory categories, unless otherwise indicated in this action.

Step 2A Prongs One and Two (Alice Step 1): According to Office guidance, claims that read on math do not recite an abstract idea at step 2A1 when the claims fail to refer to the math by name.1 The MPEP also equates “recit[ing] a judicial exception” with “stat[ing]” or “describ[ing]” an abstract idea in the claims.2 Consistent with this guidance, an abstract idea may be first recited in a dependent claim even though the independent claims read on that abstract idea. Claim limitations which recite any of the abstract idea groupings set forth in the manual are found to be directed, as a whole, to an abstract idea unless otherwise indicated.3 The claims do not recite additional elements that integrate the abstract ideas into a practical application.4 To confer patent eligibility to an otherwise abstract idea, claims may recite a specific means or method of solving a specific problem in a technological field.5

Independent Claims

1. (Original) A system comprising: a first learning system coupled to a system controller and configured to employ a genetic method to identify variations for altering processing of a processing system to meet at least one goal, (Identifying variations for altering processing reads on a mental process. The use of a “system controller”, a “first learning system”, and employment of a “genetic method” is merely an instruction to apply the abstract idea using generic computer components. 
Where the claims subsequently recite using the “system controller” and a “first learning system” to implement abstract ideas, this is a mere instruction to apply the exception using generic computer components. As such, this is not repeated for brevity, but only the abstract ideas are identified.) the system controller configured to apply the variations identified to the processing system, the variations identified including populations of respective trial variations, (Applying variations that include populations of respective trial variations reads on a mental process.) the genetic method configured to evolve populations; (Evolving populations is a mental process. The use of a “genetic method” to evolve populations is merely an instruction to apply the exception using a generic computer component.) and a second learning system coupled to the system controller, the second learning system configured to determine respective effects of the variations identified and applied, (Determining effects of identified variations reads on a mental process. Using a “second learning system” to make the determination is merely an instruction to apply the exception using generic computer components.) the first learning system further configured to converge on a given variation of the variations based on the respective effects determined, wherein the given variation is a given trial variation included, consistently, by the genetic method in the populations evolved, and wherein the given variation enables the at least one goal to be met. (This reads on the mental process of finding the best solution in the form of the variation that most consistently performs well in trials for a given goal. Implementation using a “first learning system configured to converge” is a mere instruction to apply the exception on generic computer components.) Claim 17 is rejected for the reasons given in the rejection of claim 1. 
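For orientation, the arrangement recited in claim 1 (a genetic method proposing trial variations, a system controller applying them to a processing system, and a second learning system determining their effects before the genetic method converges) can be sketched as a plain control loop. Every name below is illustrative and drawn from neither the application nor the cited art; the "second learning system" is reduced to a bare scoring function:

```python
import random

def claim1_loop(apply_to_system, score_effect, genome_len=4,
                pop_size=8, generations=20, seed=0):
    """Illustrative sketch of the claimed arrangement: a genetic
    method identifies trial variations, a controller applies them,
    and a second learning system (here just score_effect) determines
    their respective effects."""
    rng = random.Random(seed)
    # Initial population of trial variations (bit-string genomes).
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # "System controller" applies each trial variation; the
        # second learning system determines its respective effect.
        effects = [score_effect(apply_to_system(v)) for v in pop]
        ranked = [v for _, v in sorted(zip(effects, pop), reverse=True)]
        # Evolve the population: keep the fitter half, mutate copies.
        parents = ranked[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            i = rng.randrange(genome_len)
            child[i] ^= 1          # single-bit mutation
            children.append(child)
        pop = parents + children
    # Converge on the variation with the best determined effect.
    return max(pop, key=lambda v: score_effect(apply_to_system(v)))

# Toy "processing system": the goal is best met when all bits are set.
best = claim1_loop(apply_to_system=lambda v: v, score_effect=sum)
print(sum(best))
```

The point of the sketch is the division of labor the claim recites, not any particular genetic operators; real systems would replace the identity `apply_to_system` and the `sum` scorer with hardware actuation and a trained evaluator.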
Claim 31, which includes means plus function language, is rejected for the reasons given in the rejection of claims 1 and 2 (below). Note that claim 31 erroneously recites “the neural network” without antecedent basis. Based on the claimed subject matter of the other independent claims, and selecting from possible terms which would provide antecedent basis for this term, Examiner finds this language is meant to recite “the genetic method”. Independent claim 32 is rejected for the reasons given in the rejection of claim 1. The claim also recites “A non-transitory computer-readable medium having encoded thereon a sequence of instructions which, when loaded and executed by at least one processor, causes the at least one processor to” implement the operations recited in claim 1. This is merely an instruction to apply the judicial exception on a computer.

Step 2B (Alice Step 2): The rejected claims do not recite additional elements that amount to significantly more than the judicial exception. All additional limitations that do not integrate the claimed judicial exception into a practical application also fail to amount to significantly more for the reasons given at step 2A2. Further, all limitations found to be extra-solution activity at step 2A2 are found to be well-understood, routine, and conventional (WURC). This includes limitations that read on mere data gathering, storage, and data outputting. This finding is based on cases which have recognized that generic input-output operations, repetitive processing operations, and storage operations are WURC.6 Other aspects of generic computing have also been found to be WURC.7 Further, the description itself may provide support for a finding that claim elements are WURC. 
The analysis under § 112(a) as to whether a claim element is “so well-known that it need not be described in detail in the patent specification” is the same as the analysis as to whether the claim element is widely prevalent or in common use.8 Similarly, generic descriptions in the Specification of claimed components and features have been found to support a conclusion that the claimed components were conventional.9 Improvements to the relevant technology may support a finding that the claims include a patent eligible inventive concept. But some mechanism that results in any asserted improvements must be recited in the claim, and the Specification must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing the improvement.10 The claimed operations of a genetic algorithm were found to be mere instructions to apply abstract ideas on generic computing components, so a finding that the operations are WURC is not required at step 2B. However, it is noted that Darwish (A survey of swarm and evolutionary computing approaches for deep learning, 2019) would support a finding that the claimed operations identified above as being part of a generic genetic algorithm are WURC. See Darwish 1782-1784, including Fig. 9.

Dependent Claims

2. The system of Claim 1, wherein and wherein the second learning system is configured to employ a neural network to determine the respective effects. (A “neural network” without any specific operational details is a generic computing component.) 11. (Original) The system of Claim 1, wherein the first learning system is further configured to employ a genetic method to evolve the populations on a population-by-population basis, (This merely describes a genetic algorithm. Implementation of the abstract idea using a genetic algorithm remains an instruction to apply the exception using generic computing components.) 
wherein the first learning system is further configured to transmit, on the population-by-population basis, the populations evolved to the system controller, (Transmitting to a controller is mere extra-solution activity. The language “population by population basis” reads on merely sending the data in order.) and wherein, to apply the variations identified, the system controller is further configured to apply the respective trial variations of the populations evolved to the processing system on a trial-variation-by-trial-variation basis, by sequentially applying each respective trial variation. (Applying variations of a population on a trial-variation-by-trial-variation basis by sequentially applying each respective trial variation reads on a mental process.) 12. (Original) The system of Claim 11, wherein second learning system is configured to employ a neural network and wherein the neural network is configured to: determine the respective effects based on at least one monitored parameter of the processing system, the respective effects resulting from applying the respective trial variations to the processing system; (Determining the effects of a trial reads on a mental process. Using a neural network to make the determination is a mere instruction to apply the exception using generic computer components.) assign respective rankings to the respective trial variations based on the respective effects determined and the at least one goal; and transmit, to the system controller, the respective rankings on the trial-variation-by-trial-variation basis by sequentially transmitting each respective ranking. (Ranking is a mental process. Transmitting the ranking data is extra-solution activity. Sequentially transmitting reads on merely repeated transmitting, which is also extra-solution activity. Both transmitting and repeated operations have been found to be WURC. See explanation of Alice step 2 above.) 13. 
(Original) The system of Claim 12, wherein: the system controller is further configured to transmit, to the first learning system, respective ranked populations of the populations, (Transmitting data is mere extra-solution activity and has been found to be WURC. See explanation of Alice step 2 above.) the respective ranked populations including respective rankings of the respective trial variations, the respective rankings assigned by the neural network and (Ranking is a mental process, where using a neural network to assign ranking is mere instruction to apply an exception using generic computing components.) transmitted to the system controller; (Transmitting data is mere extra-solution activity and WURC. See explanation of Alice step 2 above.) and the genetic method is configured to evolve a present population of the populations into a next population of the populations based on a given respective ranked population of the respective ranked populations, the given respective ranked population corresponding to the present population. (This reads on applying generic changes to a population then selecting the best members, which is a mental process. This reads on both math and on a mental process.) 14. (Original) The system of Claim 12, wherein the genetic method is configured to evolve the populations on a population-by-population basis, (This reads on a mental process.) and wherein the given variation is converged on by the genetic method based on a respective ranking assigned thereto by the neural network. (This reads on implementation of a mental process using two different generic computing components. Without any technical details indicating how the generic computing components are arranged, this is a mere instruction to apply the exception on generic computing components.) Claims 18 and 27-30 are rejected for the reasons given in the rejection of claims 2 and 11-14, respectively. 
All dependent claims are rejected as containing the material of the claims from which they depend. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. Unless specifically indicated in this office action, claim limitations are not interpreted as means plus function under § 112(f). The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. The following limitations are interpreted under 112f: Claim 31 recites: “means for employing a genetic method to identify variations for altering processing of a processing system to meet at least one goal;” “means for employing the genetic method to evolve the populations;” “means for applying the variations identified to the processing system;” “means for determining respective effects of the variations identified and applied;” “means for converging on a given variation of the variations identified and applied, the converging based on the respective effects determined, the given variation enabling the at least one goal to be met.” Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 11-14 and 27-31 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. 
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 11 and 27 substantially recite: “to apply the variations identified, the system controller is further configured to apply the respective trial variations of the populations evolved to the processing system on a trial-variation-by-trial-variation basis by sequentially applying each respective trial variation.” The original specification does not mention the language in italics. No language was found in the Specification that would provide implicit support for this language. Nothing in the remarks indicates any implicit support. Claims 12 and 28 substantially recite: “transmit, to the system controller, the respective rankings on the trial-variation-by-trial-variation basis by sequentially transmitting each respective ranking.” The original specification does not mention the language in italics. No language was found in the Specification that would provide implicit support for this language. Nothing in the remarks indicates any implicit support. Claim 31 recites five structures using means plus function language: “means for employing a genetic method to identify variations for altering processing of a processing system to meet at least one goal . . 
.; means for employing the genetic method to evolve the populations; means for applying the variations identified to the processing system; means for determining respective effects of the variations identified and applied; and means for converging on a given variation of the variations identified and applied, the converging based on the respective effects determined, the given variation enabling the at least one goal to be met.” The Specification fails to describe any corresponding structure that would support a “means for” the claimed functions. The Specification and claims indicate that a genetic algorithm is used to identify variations and a neural network is used to determine effects of the variations, but both classes of machine learning algorithms are described in generic terms throughout the Specification. No specific algorithm for implementing either a specific genetic algorithm or a specific neural network is found in the Specification. “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. Verisign. Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir. 1999)).” MPEP § 2181. “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 
112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a).” With respect to the claimed “means for applying the variations identified to the processing system,” it is not clear what structure would provide support for this limitation. With respect to the claimed “means for converging,” as best understood, this also refers to an operation carried out by a neural network. But the Specification does not provide any specific algorithm to support a neural network converging. All dependent claims are rejected as containing the limitations of the claims from which they depend. The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 14, 30, and 31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention. Claims 14 and 30 substantially recite “wherein the given variation is converged on by the genetic method based on a respective ranking assigned thereto by the neural network.” It is not clear whether this language requires the genetic method to converge or requires the neural network to converge. Claim 31 recites five structures using means plus function language. 
“means for employing a genetic method to identify variations for altering processing of a processing system to meet at least one goal . . .; means for employing the genetic method to evolve the populations; means for applying the variations identified to the processing system; means for determining respective effects of the variations identified and applied; and means for converging on a given variation of the variations identified and applied, the converging based on the respective effects determined, the given variation enabling the at least one goal to be met.” The Specification fails to describe any corresponding structure that would support a “means for” the claimed functions. The Specification and claims indicate that a genetic algorithm is used to identify variations and a neural network is used to determine effects of the variations, but both classes of machine learning algorithms are described in generic terms throughout the Specification. No specific algorithm for implementing either a specific genetic algorithm or a specific neural network is found in the Specification. “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. Verisign. Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir. 1999)).” MPEP § 2181. “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 
112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a).” With respect to the claimed “means for applying the variations identified to the processing system,” it is not clear what structure would provide support for this limitation. With respect to the claimed “means for converging,” as best understood, this also refers to an operation carried out by a neural network. But the Specification does not provide any specific algorithm to support a neural network converging. Further, the “means for converging” appears to refer to the same neural network as the “means for determining respective effects,” making it unclear whether these limitations both refer to the same structure. Claim 31 recites “the neural network” without antecedent basis. It is not clear whether this language should be interpreted as introducing a new claim element (i.e., “a neural network”) or if this is meant to refer back to another term in the claims (i.e., “a genetic method”). All dependent claims are rejected as containing the limitations of the claims from which they depend.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 11-14, 17-18, and 27-32 are rejected under 35 U.S.C. 103 as being unpatentable over Tirunagari (US 2012/0041914) and Patra (Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn, 2017). 1. A system (See Tirunagari Fig. 7.) comprising: a first learning system coupled to a system controller and (Tirunagari teaches: “In this example, an input layer of the neural network monitors, gathers, and/or otherwise provides the following inputs to the hidden layer of the neural network: hardware parameter values 212, operating system parameters 213, information about data accesses 214, data access rates 215, cache hit rate data 216, and a data throughput rate 217. In other embodiments, the input layer of the neural network may monitor, capture and/or provide more, fewer, or different performance related parameter values than those illustrated in FIG. 2.” Tirunagari ¶33. See also Tirunagari Fig. 7 showing a neural network 730, comprising input, output, and hidden layers. 
Tirunagari teaches “As described herein, neural network 730 may include an input layer 732, a hidden layer 734 (which may be implemented as a single level of processing units, or a hierarchy of processing units, as described herein), and an output layer 736 (which may provide outputs to control various actions, such selecting and dynamically replacing a caching algorithm and/or parameters thereof.” Tirunagari ¶60. Further, Tirunagari teaches that the neural network used for controlling various actions may be implemented in hardware: “In various embodiments, neural network 730 may be implemented in hardware (e.g., as dedicated circuitry or a co-processor), in software (e.g., as program instructions that when executed on one or more processors 770 implement the functionality of an input layer, hidden layer or output layer of a neural network), or in a combination of program instructions and supporting circuitry for performing the functionality of a neural network.” Tirunagari ¶60. The hardware used to implement the decisions of the neural network teaches the claimed controller coupled to a learning system.) to meet at least one goal, (This is written as an intended use. Intended use language is explained in MPEP §§ 2103 and 2111.02. Further, “[c]laim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure.” MPEP § 2111.04. Further, see Tirunagari Fig. 2, item 230.) configured to employ a genetic method to identify variations for altering processing of a processing system (The previously cited art does not teach employing a genetic method. While the claimed “for altering processing of a processing system” reads on the prior art, note that this language is written as an intended use. See MPEP §§ 2103 and 2111.04.
(The previously cited art does not teach using a genetic method to identify variations (a neural network, first recited below, is employed to determine the respective effects of the variations). However, this combination is obvious over the previously cited art in combination with Patra. Patra teaches operations of a common genetic algorithm, used in the data environment associated with materials design. “Genetic algorithms aim to mimic the process of natural selection to optimize a system’s properties. The schematic structure of a standard genetic algorithm as typically employed in materials design is shown in Figure 1. A random population of candidate materials is initially selected; each material’s chemical structure is mapped to a genetic representation; the fitness (the term for the value of the objective function in the context of evolutionary algorithms) of each candidate is determined; and genetic operators, including selection, crossover, and mutation, are then employed to obtain the next generation of candidates. This process is iterated until targeted properties are achieved within some acceptance criteria or a time constraint is reached.” Patra P. 97. See also Patra Fig. 1. Further, Patra teaches using a genetic algorithm to identify variations in materials, which are then evaluated by a neural network. “In effect, this strategy gives the genetic algorithm the ability to ‘learn’ from accumulated data, in contrast to standard genetic algorithms, which have no memory of prior attempts beyond their accumulated impact on the instantaneous genetic distribution. In principle, this strategy combines the best features of both genetic algorithms and neural networks.
The neural network accelerates the progress of the direct-evaluation genetic algorithm toward a system optimum by regularly introducing predicted top-performing candidates into the genetic algorithm based on the ANN-evaluated genetic algorithm’s progressively improving projection of likely top candidates.” Patra P. 99. The combination, including both a genetic method used to identify variations and a second learning system employing a neural network to determine the effects, would have been obvious to one of ordinary skill in the art before the effective filing date as the application of a known technique to a known device (method, or product) ready for improvement to yield predictable results. (The present combination takes the teaching of Patra, including the use of a genetic algorithm to identify candidate variations that are then evaluated by a neural network, and applies this teaching to the problem of the primary reference.) The prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement,” (the improvement being that genetic algorithms tend to avoid local minima, thereby offering a better chance of approaching a more optimal solution, especially with limited data, while the use of a neural network to evaluate options offered by the genetic algorithm allows the output of the genetic algorithm to be evaluated without direct experiments, which saves time and computing resources.)
The prior art contained a known technique that is applicable to the base device (method, or product) (The technique of Patra illustrated in Figure 9 on page 99 is applicable to the claimed identification of “variations for altering processing.”) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system (One of ordinary skill in the art, in this case, an engineer or computer scientist with several years’ experience in the field of machine learning, would have recognized the benefits of combining a genetic algorithm to suggest variations and evaluation using a neural network to the techniques of Tirunagari.). See MPEP § 2143(I)(D).)) the system controller configured to apply the variations identified to the processing system, (See Tirunagari Fig. 1 and ¶45.) the variations identified including populations of respective trial variations, the genetic method configured to evolve the populations; (“A random population of candidate materials is initially selected; each material’s chemical structure is mapped to a genetic representation; the fitness (the term for the value of the objective function in the context of evolutionary algorithms) of each candidate is determined; and genetic operators, including selection, crossover, and mutation, are then employed to obtain the next generation of candidates. This process is iterated until targeted properties are achieved within some acceptance criteria or a time constraint is reached.” Patra P. 97. See also Patra Fig. 1.) and a second learning system coupled to the system controller, (The use of hardware to control caching taught in Tirunagari teaches coupling of each learning system to a controller. Further, one of ordinary skill in the art would understand both prior art documents as referring to computer implemented methods. See e.g. Patra P.
97 (“Typically, a GA includes tens to several hundred candidates per generation and iterates over tens to thousands of generations, such that it may ultimately assay anywhere from hundreds to hundreds of thousands of candidates.”) and P. 94 (“[W]e have employed computer simulations for fitness evaluation in the test cases[.]”) “[I]n considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom.” MPEP § 2144.01.) the second learning system configured to determine respective effects of the variations identified and applied, (“For example, in each of a series of interactions, the system may determine the current state (e.g., based on performance related parameter values gathered by the input layer of the neural network), and may choose an action to take based on the current state (e.g., continuing execution using the currently selected caching algorithm, or selecting and applying a different caching algorithm).” Tirunagari ¶45. “For example, the method may in some embodiments be used to select a caching algorithm for one of the iterations of the method illustrated in FIG. 5. As illustrated in this example, the method may include determining a caching algorithm that was previously estimated to be the best caching algorithm for the current state and inputs collected during execution of an application, as in 600. In some embodiments, the current state of the application (with respect to its cache performance or overall performance) may be reflected in one or more observed performance related values (e.g., a cache hit rate or data throughput rate that is "high", "medium" or "low" or that is "within range" or "out of range").” Tirunagari ¶52.) 
the first learning system further configured to converge on a given variation of the variations based on the respective effects determined, (Patra teaches “In any of these cases, the general approach is to begin with one or more initial candidates, to compute an objective function quantifying their properties relative to targeted values, and then to iteratively select new candidates chosen in an effort to converge to a desired value of the objective function. The process continues until some criterion for convergence is satisfied or it is terminated because of a maximal time constraint.” Patra P. 97.) wherein the given variation is a given trial variation included, consistently, by the genetic method in the populations evolved, (“Genetic algorithms aim to mimic the process of natural selection to optimize a system’s properties. . . . A random population of candidate materials is initially selected; each material’s chemical structure is mapped to a genetic representation; the fitness (the term for the value of the objective function in the context of evolutionary algorithms) of each candidate is determined; and genetic operators, including selection, crossover, and mutation, are then employed to obtain the next generation of candidates. This process is iterated until targeted properties are achieved within some acceptance criteria or a time constraint is reached.” Patra P. 97. “The neural network accelerates the progress of the direct-evaluation genetic algorithm toward a system optimum by regularly introducing predicted top-performing candidates into the genetic algorithm based on the ANN-evaluated genetic algorithm’s progressively improving projection of likely top candidates.” Patra P. 98. “The algorithm begins with a randomly generated initial population as shown in Figure 4. 
In order to produce the next generation after fitness assessment, two parents are randomly selected from among the candidates of a particular generation using roulette wheel selection with self-crossover prohibited. Two-point crossover is used to combine the selected parents, and point mutations are applied to new candidates at a per-gene rate of 0.01. Linear scaling of fitness values is employed to maintain consistent selection pressure. An elitism scheme, in which the single best candidate in each generation is directly transferred to the next generation, is employed to accelerate convergence.” Patra PP. 99-100. See also Patra Fig. 4, box titled “send top candidate to direct evaluated GA.” “Within this process, if the suggestion is among the top candidates, it is likely to accelerate the convergence of the ANN-biased genetic algorithm. On the other hand, if the suggested candidate has a low fitness compared with the population, then the basic principles of fitness-based selection indicate that its genes are unlikely to be retained and bias the GA in the wrong direction.” Patra P. 100.) and wherein the given variation enables the at least one goal to be met. (This is written as a wherein clause naming an intended result. Claim language that does not require steps or specific structures is not given patentable weight. See MPEP §§ 2103 and 2111.04. Further, both references are directed to this result. Tirunagari teaches: “Over time, the system builds a policy, or set of policies, which specify the actions to be taken for different combinations of inputs, and refines them as rewards are measured each time an action is taken. The policies may define the expected reward for each input/output set, which may also be refined over time as more actions are taken and the actual (observed) rewards are measured.
Various reinforcement learning techniques may be applied in a neural network to allow the neural network to learn the appropriate caching algorithms to be applied in different situations (e.g., depending on the hardware or operating system configuration, workload, cache size, etc.).” Tirunagari ¶45. “The input/action/reward values maintained by the neural network (e.g., in a data structure or through the configuration of combinational logic that provides this functionality) may be updated after each such iteration to allow the neural network to improve its performance in selecting suitable caching algorithms for applications given their situational/execution context. Based on the updated values, and any observed change of state, the neural network may take further action in an attempt to identify an optimal caching algorithm.” Tirunagari ¶46. “In some embodiments, a caching algorithm may have been previously estimated to be the best caching algorithm for the current state and inputs during a training phase or during a previous iteration of a self-training technique based on one or more actions taken for the same combination of state and inputs and corresponding costs or rewards observed in response to that action.” Tirunagari ¶52. One of ordinary skill in the art would understand optimization of the reward function as describing convergence of the model. Inferences drawn by POSA are properly considered when evaluating the prior art to determine anticipation. See In re Preda, 401 F.2d 825, 826 (C.C.P.A. 1968).11 Similarly, Patra teaches “In any of these cases, the general approach is to begin with one or more initial candidates, to compute an objective function quantifying their properties relative to targeted values, and then to iteratively select new candidates chosen in an effort to converge to a desired value of the objective function. 
The process continues until some criterion for convergence is satisfied or it is terminated because of a maximal time constraint.” Patra P. 97.

2. (Original) The system of Claim 1, wherein the second learning system is configured to employ a neural network to determine the respective effects. (Patra teaches using a genetic algorithm to identify variations in materials, which are then evaluated by a neural network. “In effect, this strategy gives the genetic algorithm the ability to ‘learn’ from accumulated data, in contrast to standard genetic algorithms, which have no memory of prior attempts beyond their accumulated impact on the instantaneous genetic distribution. In principle, this strategy combines the best features of both genetic algorithms and neural networks. The neural network accelerates the progress of the direct-evaluation genetic algorithm toward a system optimum by regularly introducing predicted top-performing candidates into the genetic algorithm based on the ANN-evaluated genetic algorithm’s progressively improving projection of likely top candidates.” Patra P. 99.)

11. (Original) The system of Claim 1, wherein the first learning system is configured to employ a genetic method to evolve the populations on a population-by-population basis, (Patra teaches: “The algorithm begins with a randomly generated initial population as shown in Figure 4. In order to produce the next generation after fitness assessment, two parents are randomly selected from among the candidates of a particular generation using roulette wheel selection with self-crossover prohibited. Two-point crossover is used to combine the selected parents, and point mutations are applied to new candidates at a per-gene rate of 0.01. Linear scaling of fitness values is employed to maintain consistent selection pressure.
An elitism scheme, in which the single best candidate in each generation is directly transferred to the next generation, is employed to accelerate convergence.” Patra PP. 99-100. See also Patra Fig. 4. The motivation to combine given in the rejection of claim 2 applies here.) wherein the first learning system is further configured to transmit, on the population-by-population basis, the populations evolved to the system controller, (See Patra Fig. 4, arrow labeled “update ANN database.” The claimed “system controller” is not limited to any particular hardware or software structure or limited to any particular configuration. Therefore, the claimed “system controller” reads on any sub-part or any combination thereof, in the system described in Patra. See Patra Fig. 4.) and wherein, to apply the variations identified, the system controller is further configured to apply the respective trial variations of the populations evolved to the processing system on a trial-variation-by-trial-variation basis by sequentially applying each respective trial variation. (“In both the standard ANN evaluated genetic algorithm and the ANN-biased genetic algorithm, the top candidate predicted by the ANN is then identified by employing the ANN to predict fitness within a genetic algorithm run for 100 generations with 100 candidates per generation.” Patra P. 100 col. 1.)

12. (Original) The system of Claim 11, wherein the second learning system is configured to employ a neural network and wherein the neural network is configured to: (See Patra Fig. 4.) determine the respective effects based on at least one monitored parameter of the processing system, the respective effects resulting from applying the respective trial variations to the processing system; (See Patra Fig. 4, box titled “Fitness evaluation with neural network predictions.”) assign respective rankings to the respective trial variations based on the respective effects determined and the at least one goal; and transmit, to the system controller, the respective rankings on the trial-variation-by-trial-variation basis by sequentially transmitting each respective ranking. (Patra teaches “using the progressively trained ANN by introducing into each generation of the GA the best projected candidate identified by a separate ANN-evaluated genetic algorithm that uses the ANN as trained on all of the data accumulated up to that point.” Patra P. 99. “The single best candidate identified by the NEGA is then sent to the ANN-biased genetic algorithm and incorporated into the next generation’s population along with candidates introduced via standard genetic operations and elitism.” Patra P. 100. The claimed “system controller” is not limited to any particular hardware or software structure or limited to any particular configuration. Therefore, the claimed “system controller” reads on any sub-part or any combination thereof, in the system described in Patra. See Patra Fig. 4.)

13. (Original) The system of Claim 12, wherein: the system controller is further configured to transmit, to the first learning system, respective ranked populations of the populations, the respective ranked populations including respective rankings of the respective trial variations, the respective rankings assigned by the neural network and transmitted to the system controller; (“The neural network accelerates the progress of the direct-evaluation genetic algorithm toward a system optimum by regularly introducing predicted top-performing candidates into the genetic algorithm based on the ANN-evaluated genetic algorithm’s progressively improving projection of likely top candidates.” Patra P. 99. See also Patra Fig. 4.
The claimed “system controller” is not limited to any particular hardware or software structure or limited to any particular configuration. Therefore, the claimed “system controller” reads on any sub-part or any combination thereof, in the system described in Patra. See Patra Fig. 4.) and the genetic method is configured to evolve a present population of the populations into a next population of the populations based on a given respective ranked population of the respective ranked populations, the given respective ranked population corresponding to the present population. (See Patra Fig. 4, box titled “send top candidate to direct evaluated GA.” Patra teaches “The algorithm begins with a randomly generated initial population as shown in Figure 4. In order to produce the next generation after fitness assessment, two parents are randomly selected from among the candidates of a particular generation using roulette wheel selection with self-crossover prohibited. Two-point crossover is used to combine the selected parents, and point mutations are applied to new candidates at a per-gene rate of 0.01. Linear scaling of fitness values is employed to maintain consistent selection pressure. An elitism scheme, in which the single best candidate in each generation is directly transferred to the next generation, is employed to accelerate convergence.” Patra PP. 99-100.)

14. (Original) The system of Claim 12, wherein the genetic method is configured to evolve the populations on a population-by-population basis, (See Patra Fig. 4, box titled “send top candidate to direct evaluated GA.” Patra teaches “The algorithm begins with a randomly generated initial population as shown in Figure 4. In order to produce the next generation after fitness assessment, two parents are randomly selected from among the candidates of a particular generation using roulette wheel selection with self-crossover prohibited.
Two-point crossover is used to combine the selected parents, and point mutations are applied to new candidates at a per-gene rate of 0.01. Linear scaling of fitness values is employed to maintain consistent selection pressure. An elitism scheme, in which the single best candidate in each generation is directly transferred to the next generation, is employed to accelerate convergence.” Patra PP. 99-100.) and wherein the given variation is converged on by the genetic method based on a respective ranking assigned thereto by the neural network. (See Patra Fig. 4, left circle labeled “Convergence/termination criteria satisfied” and box titled “send top candidate to direct evaluated GA.”)

Claims 17 and 32 are each rejected for the reasons given in the rejection of claim 1. Claims 18 and 27-30 are rejected for the reasons given in the rejection of claims 2 and 11-14, respectively. Claim 31, which includes means plus function limitations, is rejected for the reasons given in the rejections of claims 1 and 2.

Applicant's arguments filed 10/30/2025 have been fully considered but they are not persuasive.

Rejections under § 101: Applicant states that the claims are directed to a specific set of rules that improve a technological process by “altering processing of a processing system to meet a target goal(s) for optimizing the system.” Rem. 12. Per the Remarks, the aforementioned goals “could be to reduce latency, increase throughput, reduce power consumption” or could include other, unmentioned goals. See Rem. 12. This explanation omits any concrete improvement to the technology that one of ordinary skill in the art would understand to result from the set of operations recited in the claims.
To confer patent eligibility to an otherwise abstract idea, claims may recite a specific means or method of solving a specific problem in a technological field.12 Nothing in the Remarks ties specific operations recited in the claims to any specific improvement to the technological field or to that of another technological field.

Interpretation under § 112(f): As correctly noted by Applicant, the non-final action errantly listed claim 32 and not claim 31 as being interpreted under § 112(f) and being subject to the corresponding rejections under §§ 112(a) and 112(b).

Rejections under § 112(a): Applicant states that the means carrying out the functions of claim 31 are given in the “internal structure of a computer” shown in Fig. 7. Rem. 14. It is submitted that a long explanation of the requirement of an algorithm and a special purpose computer in means-plus-function claiming is not necessary here. Please refer to MPEP § 2181(II)(B) for further guidance. As indicated in the rejection above, support for computer implemented means plus function claims is generally in the form of a special purpose computer, including the algorithm which transforms a general computer into a special purpose machine. While source code is not required, some algorithm capable of causing this transformation is required for support of computer implemented means plus function claims. Applicant cites paragraphs 55-60 and 62-63 as providing support. Notably absent from the remarks is any specific algorithm that would support any of the claimed means.

Rejections under § 112(b): No specific arguments are submitted in response to rejections under this section.

Rejections under § 103: Applicant summarizes the art of record and asserts that the combination does not teach the claimed subject matter. No specific arguments are put forth.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M KNIGHT whose telephone number is (571) 272-8646. The examiner can normally be reached Monday - Friday 9-5 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

PAUL M. KNIGHT
Examiner
Art Unit 2148

/PAUL M KNIGHT/
Examiner, Art Unit 2148

1 This distinction between claims which read on math and claims which recite an abstract idea is based on official USPTO Guidance. The 2019 Subject Matter Eligibility (SME) Examples instruct examiners that a claim reciting “training the neural network” where the background describes training as “using stochastic learning with backpropagation which is a type of machine learning algorithm that uses the gradient of a mathematical loss function to adjust the weights of the network” “does not recite any mathematical relationships, formulas, or calculations.” See 2019 SME Example 39, PP. 8-9 (emphasis added). In this example, the plain meaning of “training the neural network” read in light of the disclosure reads on backpropagation using the gradient of a mathematical loss function. See MPEP § 2111.01. In contrast, the 2024 SME Examples instruct examiners that a claim reciting “training, by the computer, the ANN . . . wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm” does recite an abstract idea because “[t]he plain meaning of [backpropagation algorithm and gradient descent algorithm] are optimization algorithms, which compute neural network parameters using a series of mathematical calculations.” 2024 PEG Example 47, PP. 4-6. The Memorandum of August 4, 2025, “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101,” P. 3 also directs examiners that “training the neural network” recited in Example 39 merely “involve[s] . . . mathematical concepts” and contrasts claim 2 of example 47 as “referring to [specific] mathematical calculations by name[.]” (Emphasis added.)
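To illustrate the distinction drawn in footnote 1, a gradient-descent weight update is literally a series of mathematical calculations on network parameters. The sketch below is a minimal, hypothetical one-weight example; the function name, loss, and numbers are illustrative only and are not taken from the application or the cited references:

```python
# Illustrative sketch only: one-parameter "training" by gradient descent.
# Loss: L(w) = (w*x - y)^2; gradient: dL/dw = 2*(w*x - y)*x.

def gradient_step(w, x, y, lr=0.1):
    """Return the weight after one gradient-descent update."""
    grad = 2 * (w * x - y) * x   # gradient of the squared-error loss
    return w - lr * grad         # step against the gradient

# Repeating the update drives the weight toward the loss minimum (w = 3 here).
w = 0.0
for _ in range(50):
    w = gradient_step(w, x=1.0, y=3.0)
print(round(w, 3))  # prints 3.0
```

Each iteration performs nothing but arithmetic on the weight, which is the sense in which the 2024 PEG treats training algorithms named as backpropagation and gradient descent as reciting mathematical calculations.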
2 “For instance, the claims in Diehr . . . clearly stated a mathematical equation . . . and the claims in Mayo . . . clearly stated laws of nature . . . such that the claims ‘set forth’ an identifiable judicial exception. Alternatively, the claims in Alice Corp. . . . described the concept of intermediated settlement without ever explicitly using the words ‘intermediated’ or ‘settlement.’” MPEP § 2106.04(II)(A).

3 “By grouping the abstract ideas, the examiners’ focus has been shifted from relying on individual cases to generally applying the wide body of case law spanning all technologies and claim types. . . . If the identified limitation(s) falls within at least one of the groupings of abstract ideas, it is reasonable to conclude that the claim recites an abstract idea in Step 2A Prong One.” MPEP § 2106.04(a). See also MPEP § 2104(a)(2).

4 Step 2A prongs one and two are evaluated individually, consistent with the framework in the MPEP. Evaluation of relationships between abstract ideas and additional elements in one location promotes clarity of the record.

5 “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement.
That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP § 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019).

6 See MPEP § 2106.05(d)(II) listing operations including “receiving or transmitting data,” “storing and retrieving data in memory,” and “performing repetitive calculations” as WURC (well-understood, routine, and conventional).

7 “But ‘[f]or the role of a computer in a computer-implemented invention to be deemed meaningful in the context of this analysis, it must involve more than performance of 'well-understood, routine, [and] conventional activities previously known to the industry.’ Content Extraction, 776 F.3d at 1347-48 (quoting Alice, 134 S. Ct. at 2359). Here, the server simply receives data, ‘extract[s] classification information . . . from the received data,’ and ‘stor[es] the digital images . . . taking into consideration the classification information.’ See ‘295 patent, col. 10 ll. 1-17 (Claim 17). . . . These steps fall squarely within our precedent finding generic computer components insufficient to add an inventive concept to an otherwise abstract idea. Alice, 134 S. Ct. at 2360 (‘Nearly every computer will include a 'communications controller' and a 'data storage unit' capable of performing the basic calculation, storage, and transmission functions required by the method claims.’); Content Extraction, 776 F.3d at 1345, 1348 (‘storing information’ into memory, and using a computer to ‘translate the shapes on a physical page into typeface characters,’ insufficient to confer patent eligibility); Mortg.
Grader, 811 F.3d at 1324-25 (generic computer components such as an ‘interface,’ ‘network,’ and ‘database,’ fail to satisfy the inventive concept requirement); Intellectual Ventures I, 792 F.3d at 1368 (a ‘database’ and ‘a communication medium’ ‘are all generic computer elements’); BuySAFE v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (‘That a computer receives and sends the information over a network—with no further specification—is not even arguably inventive.’).” TLI Commc'ns LLC v. AV Auto., LLC, 823 F.3d 607, 614 (Fed. Cir. 2016), Emphasis Added. 8 “The analysis as to whether an element (or combination of elements) is widely prevalent or in common use is the same as the analysis under 35 U.S.C. 112(a) as to whether an element is so well-known that it need not be described in detail in the patent specification. See Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (supporting the position that amplification was well-understood, routine, conventional for purposes of subject matter eligibility by observing that the patentee expressly argued during prosecution of the application that amplification was a technique readily practiced by those skilled in the art to overcome the rejection of the claim under 35 U.S.C. 112, first paragraph)[.]” MPEP § 2106.05(d)(I). 9 “Similarly, claim elements or combinations of claim elements that are routine, conventional or well-understood cannot transform the claims. (Citing BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290-1291 (Fed. Cir. 2018)). When the patent's specification ‘describes the components and features listed in the claims generically,’ it ‘support[s] the conclusion that these components and features are conventional.’ Weisner v. Google LLC, 51 F.4th 1073, 1083-84 (Fed. Cir. 2022); see also Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1357-58 (Fed. Cir. 2024).” Broadband iTV, Inc. v. Amazon.com, Inc., 113 F.4th 1359 (Fed. Cir. 
2024) 10 “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.” MPEP § 2106.05(a). 11 “The sole issue is whether claims 7 and 8 are anticipated[.] . . . We agree with appellant that Figure 1 of Thacker, by itself, does not disclose every limitation in the appealed claims. However, in considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. In re Shepard, 319 F.2d 194, 50 CCPA 1439 (1963).” In re Preda, 401 F.2d 825, 826 (C.C.P.A. 1968) 12 “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. 
Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019).

Prosecution Timeline

Oct 04, 2021
Application Filed
Oct 20, 2022
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §101, §103, §112
Oct 30, 2025
Response Filed
Feb 02, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530592
NON-LINEAR LATENT FILTER TECHNIQUES FOR IMAGE EDITING
2y 5m to grant Granted Jan 20, 2026
Patent 12530612
METHODS FOR ALLOCATING LOGICAL QUBITS OF A QUANTUM ALGORITHM IN A QUANTUM PROCESSOR
2y 5m to grant Granted Jan 20, 2026
Patent 12499348
READ THRESHOLD PREDICTION IN MEMORY DEVICES USING DEEP NEURAL NETWORKS
2y 5m to grant Granted Dec 16, 2025
Patent 12462201
DYNAMICALLY OPTIMIZING DECISION TREE INFERENCES
2y 5m to grant Granted Nov 04, 2025
Patent 12456057
METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA
2y 5m to grant Granted Oct 28, 2025
Based on this examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+17.0%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
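The projections above can be reproduced with simple arithmetic: the 62% base grant probability is the examiner's career allow rate (169 granted of 272 resolved, per the examiner intelligence panel), and the 79% "with interview" figure follows from adding the 17.0-point interview lift. A minimal sketch, assuming the lift is an additive percentage-point adjustment (the function and variable names here are illustrative, not the dashboard's actual model):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate_pct: float, lift_pct_points: float) -> float:
    """Grant probability assuming the interview lift adds percentage points, capped at 100."""
    return min(base_rate_pct + lift_pct_points, 100.0)

base = allow_rate(169, 272)                    # ~62.1%, shown as 62% on the page
boosted = with_interview(round(base), 17.0)    # 62 + 17 = 79

print(f"{round(base)}% base, {boosted:.0f}% with interview")  # 62% base, 79% with interview
```

The additive-lift assumption matches the displayed figures exactly (62 + 17 = 79); a real model might instead apply the lift only to the subset of resolved cases that had an interview.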
