Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
This communication is in response to the amendment filed 09/29/2025. Claims 1 and 11 have been amended. Claims 1–4, 6–14, and 16–20 are presented for examination.
Subject Matter Free of Prior Art
Claims 1–4, 6–14, and 16–20 are allowable over the prior art. However, the claims remain rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1–4, 6–14, 16–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Based upon consideration of all of the relevant factors with respect to the claims as a whole, the claims are directed to non-statutory subject matter, wherein the judicial exception is not integrated into a practical application and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because of the following analysis:
Claim 1 is drawn to a system which is within the four statutory categories (i.e., machine). Claim 11 is drawn to a method which is within the four statutory categories (i.e., method).
Independent claim 1 (which is representative of independent claim 11) recites…determine an ocular attribute datum as a function of receiving an ocular assessment; generate an ocular profile by determining at least an ocular vector as a function of the ocular attribute datum; and generating a degree of variance as a function of the ocular vector and the ocular utopia, wherein the ocular profile is further generated as a function of the ocular attribute datum and an ocular utopia using an ocular…model which comprises: receiving ocular training data, wherein the ocular training data correlates a plurality of ocular attribute data and a plurality of ocular utopia data as inputs to a plurality of ocular profile data as outputs; and generating the ocular profile using the trained ocular…model; and identify at least an edible as a function of the ocular profile, a nourishment composition, and an edible [model], wherein the identification includes: determining an ocular dysfunction as a function of the ocular profile using a dysfunction training set correlating at least an ocular enumeration and a visual system effect to the ocular dysfunction; and develop a profile outcome including a treatment outcome and a prevention outcome as a function of the edible, wherein the treatment outcome is configured to at least eliminate the ocular attribute datum associated with the ocular dysfunction.
Under the broadest reasonable interpretation, the limitations noted above, as drafted, cover concepts performed in the human mind, but for the recitation of generic computer components. That is, other than reciting a “computing device,” nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the “computing device,” the claim encompasses a person thinking about ocular data, converting the data into numbers, and determining the next steps (i.e., what is the most appropriate item to eat) based on a person’s numbers and goals in the manner described in the identified abstract idea, supra. If a claim limitation, under its broadest reasonable interpretation, covers concepts performed in the human mind, but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Independent claim 1 (which is representative of independent claim 11) further recites…training, iteratively, the ocular machine-learning model, comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes, using the ocular training data with feedback from previous iterations of the ocular machine-learning model, wherein connections between nodes are created through a process of training the network, in which elements from ocular training data with feedback are applied to the input layer of nodes and a training algorithm is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce values at the output nodes, wherein training the ocular machine-learning model comprises: training the ocular machine-learning model using the ocular training data as input; adjusting one or more connections and one or more weights between nodes in adjacent layers of the ocular machine-learning model; and adjusting the ocular machine-learning model as a function of the adjusted connections to produce the output layer of nodes.
Under the broadest reasonable interpretation, the limitations noted above, as drafted, cover mathematical relationships, but for the recitation of generic computer components. For example, with regard to training iteratively a machine learning model, the specification mentions: “K-nearest neighbors algorithm may include specifying a K-value, or a number directing the classifier to select the k most similar entries training data to a given sample, determining the most common classifier of the entries in the database, and classifying the known sample; this may be performed recursively and/or iteratively to generate a classifier that may be used to classify input data as further samples” (¶ 0023) and “a suitable training algorithm (such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms) is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce the desired values at the output nodes” (¶ 0042). In light of the disclosure, the claim encompasses the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra. If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
For purposes of the following analysis, the aforementioned types of identified abstract ideas are considered together as a single abstract idea. See MPEP § 2106.04(II)(B).
This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements (i.e., a computing device; machine-learning model comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes; classifier). Claim 11 recites additional elements (i.e., a computing device; machine-learning model comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes; classifier). Looking to the specification, a computing device is described at a high level of generality (¶ 0009; ¶ 0054), such that it amounts to no more than mere instructions to apply the exception using generic computer components. Also, although the claims add a “machine-learning model… comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes” and “classifier” (which is defined in ¶ 0021 of the specification as “a machine-learning model”), using a trained machine learning model (i.e., convolutional neural network) amounts to no more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
Reevaluated under step 2B, the additional elements noted above do not provide “significantly more” when taken either individually or as an ordered combination. The use of a general-purpose computer or computers (i.e., computing device) amounts to no more than mere instructions to apply the exception using generic computer components and does not impose any meaningful limitation on the computer implementation of the abstract idea, so it does not amount to significantly more than the abstract idea. Also, although the claims add a “machine-learning model… comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes” and “classifier” (which is defined in ¶ 0021 of the specification as “a machine-learning model”), using a trained machine learning model (i.e., convolutional neural network) amounts to no more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. The combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology, and their collective functions merely provide a conventional computer implementation of the abstract idea. Furthermore, the additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of generally linking the abstract idea to a particular technological environment or field of use, as the courts have found in Parker v. Flook; similarly, the current invention merely limits the claimed calculations to the healthcare industry, which does not impose meaningful limits on the scope of the claim. Therefore, there are no limitations in the claims that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception.
Dependent claims 2–4, 6–10, 12–14, and 16–20 include all the limitations of the parent claims and further elaborate on the abstract idea discussed above and incorporated herein.
Claims 2–4, 8–10, 12–14, and 18–20 further define the analysis and organization of data for the performance of the abstract idea and do not recite any additional elements. Thus, the claims do not integrate the abstract idea into a practical application and do not provide “significantly more.”
Claims 6 and 16 further recite “training an edible machine-learning model by training data that contains nourishment compositions and ocular profiles as inputs correlated to a plurality of edibles as outputs; and outputting the edible as a function of training the edible machine-learning model.” However, similar to the analysis above, under the broadest reasonable interpretation in light of the disclosure, the claim encompasses the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra, which covers mathematical relationships within the “Mathematical Concepts” grouping of abstract ideas. Furthermore, using a trained machine learning model amounts to no more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. Also, functional limitations further define the analysis and organization of data for the performance of the abstract idea. Thus, the claims as a whole do not integrate the abstract idea into a practical application and do not provide “significantly more.”
Claims 7 and 17 further recite the additional element of “a K-nearest neighbors (KNN) algorithm,” which is described in the specification as “a machine-learning model.” However, using a machine learning model amounts to no more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Also, functional limitations further define the analysis and organization of data for the performance of the abstract idea. Thus, the claims as a whole do not integrate the abstract idea into a practical application and do not provide “significantly more.”
Although the dependent claims add additional limitations, they only serve to further limit the abstract idea by reciting limitations on what the information is and how it is received and used. These information characteristics do not change the fundamental analogy to the abstract idea groupings of “Mental Processes” and “Mathematical Concepts,” and, when viewed individually or as a whole, they do not add anything substantial beyond the abstract idea. Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the claims when taken as a whole are ineligible for the same reasons as the independent claims.
Response to Arguments
Applicant's arguments filed 09/29/2025 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 09/29/2025.
In the remarks, Applicant argues in substance that:
Regarding the rejections under 35 U.S.C. § 101,
“Machine-learning algorithms use iterative calculations to, for instance, minimize error, identify training data, or otherwise operate on subject data through intensive and repeated calculations, which, except in their most simplistic form, cannot be performed in the human mind or with pencil and paper… The complexity and volume of data, along with the iterative nature of the training process render these tasks impractical for human execution without computational assistance… training a supervised machine learning model, is provided in the MPEP as an example of a claim that is not directed to an abstract idea. See MPEP §2106.04(a)(2). Applicant respectfully submits that this is precisely the case with machine-learning processes in general, and the processes recited in claim 1 in particular: like a neural network, and indeed like the "several-step manipulation" cited in Synopsys, the future learning processes in claim 1 would generally involve acting on a large body of data to modify and create data structures in a manner that, unless improperly reduced to its "most simplistic form," could not be performed by a human being with the aid of pen and paper”; “Claim 1 discloses a first training data set (analogous to example 39's first set of digital facial images), followed by adjusting the weights and connections between nodes, producing a second training set (analogous to example 39's second training set), which is used to further train the machine learning model”; “although any form of neural network would necessarily "involve" mathematical calculations, including minimization of error as in other machine-learning algorithms, a claim reciting training a neural network does not itself recite those mathematical operations. Accordingly, Applicant respectfully asserts that at last the limitations describing training of the ocular machine-learning model do not recite "mathematical concepts"”;
“Similar to the claims in Enfish, Applicant asserts that, as amended, claim 1 is directed to a specific improvement in computer functionality, not a disembodied idea. Claim 1 recites a feedback-driven training pipeline for a convolutional neural network… claim 1 tells the computer how to do better what only a computer does at all: iteratively train the claimed CNN on ocular training data with feedback from previous iterations, adjust inter-layer connections and weights, and produce the output layer as a function of those adjustments-exactly the kind of logical structures and processes that Enfish recognizes as patent-eligible improvements to computer technology. The Examiner's Step 2A(1) position that the additional elements are merely a "computing device," "machine-learning model," or "classifier"-is contradicted by the record: the ordered combination here is the specified CNN architecture and explicit training operations quoted above, which constrain the model's internal mechanics and yield concrete, domain-specific outputs (e.g., ocular profiles) generated by the trained model, not a generic "apply it." Accordingly, even if a judicial exception were implicated, claim 1 integrates it into a practical application through the recited iterative CNN training pipeline and classifier-based determinations expressly taught in the specification and captured in the amended limitations”; and
“The claim as amended mirrors the reasoning in BASCOM because the training steps and other limitations amount to significantly more by providing a technical improvement in the art. As amended, the claim recites training a multi-layer system of nodes in an artificial neural network, wherein the training steps comprise adjusting the connections between the nodes.”
It is respectfully submitted that Examiner has considered Applicant’s arguments and does not find them persuasive. Examiner has attempted to address all of the arguments presented by Applicant; however, any arguments inadvertently not addressed are not persuasive for at least the following reasons:
In response to Applicant’s argument that (a) regarding the rejections under 35 U.S.C. § 101,
“Machine-learning algorithms use iterative calculations to, for instance, minimize error, identify training data, or otherwise operate on subject data through intensive and repeated calculations, which, except in their most simplistic form, cannot be performed in the human mind or with pencil and paper… The complexity and volume of data, along with the iterative nature of the training process render these tasks impractical for human execution without computational assistance… training a supervised machine learning model, is provided in the MPEP as an example of a claim that is not directed to an abstract idea. See MPEP §2106.04(a)(2). Applicant respectfully submits that this is precisely the case with machine-learning processes in general, and the processes recited in claim 1 in particular: like a neural network, and indeed like the "several-step manipulation" cited in Synopsys, the future learning processes in claim 1 would generally involve acting on a large body of data to modify and create data structures in a manner that, unless improperly reduced to its "most simplistic form," could not be performed by a human being with the aid of pen and paper”; “Claim 1 discloses a first training data set (analogous to example 39's first set of digital facial images), followed by adjusting the weights and connections between nodes, producing a second training set (analogous to example 39's second training set), which is used to further train the machine learning model”; “although any form of neural network would necessarily "involve" mathematical calculations, including minimization of error as in other machine-learning algorithms, a claim reciting training a neural network does not itself recite those mathematical operations. Accordingly, Applicant respectfully asserts that at last the limitations describing training of the ocular machine-learning model do not recite "mathematical concepts"”:
It is respectfully submitted that Applicant argues “Machine-learning algorithms use iterative calculations to, for instance, minimize error, identify training data, or otherwise operate on subject data through intensive and repeated calculations, which, except in their most simplistic form, cannot be performed in the human mind or with pencil and paper… The complexity and volume of data, along with the iterative nature of the training process render these tasks impractical for human execution without computational assistance.” However, under the broadest reasonable interpretation of the claim in light of the specification, the claim limitations to which Applicant refers as describing the “learning algorithms [using] iterative calculations” and “the training process” (i.e., “training, iteratively, the ocular machine-learning model, comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes, using the ocular training data with feedback from previous iterations of the ocular machine-learning model, wherein connections between nodes are created through a process of training the network, in which elements from ocular training data with feedback are applied to the input layer of nodes and a training algorithm is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce values at the output nodes, wherein training the ocular machine-learning model comprises: training the ocular machine-learning model using the ocular training data as input; adjusting one or more connections and one or more weights between nodes in adjacent layers of the ocular machine-learning model; and adjusting the ocular machine-learning model as a function of the adjusted connections to produce the output layer of nodes”) cover mathematical relationships within the “Mathematical Concepts” grouping of abstract ideas, and not concepts performed in the human mind within the “Mental Processes” grouping of abstract ideas, as Applicant now argues. For example, with regard to training iteratively a machine learning model with feedback from previous iterations of the model, the specification mentions: “K-nearest neighbors algorithm may include specifying a K-value, or a number directing the classifier to select the k most similar entries training data to a given sample, determining the most common classifier of the entries in the database, and classifying the known sample; this may be performed recursively and/or iteratively to generate a classifier that may be used to classify input data as further samples” (¶ 0023) and “a suitable training algorithm (such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms) is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce the desired values at the output nodes” (¶ 0042). In light of the disclosure, the claim encompasses the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra. If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.
Applicant argues “training a supervised machine learning model, is provided in the MPEP as an example of a claim that is not directed to an abstract idea. See MPEP §2106.04(a)(2). Applicant respectfully submits that this is precisely the case with machine-learning processes in general, and the processes recited in claim 1 in particular: like a neural network, and indeed like the "several-step manipulation" cited in Synopsys, the future learning processes in claim 1 would generally involve acting on a large body of data to modify and create data structures in a manner that, unless improperly reduced to its "most simplistic form," could not be performed by a human being with the aid of pen and paper.” However, it is unclear to which “example” Applicant refers because “training a supervised machine learning model” is not “an example of a claim that is not directed to an abstract idea” in “MPEP §2106.04(a)(2),” as Applicant now argues. Furthermore, merely “training a supervised machine learning model, is [not] provided in the MPEP as an example of a claim that is not directed to an abstract idea,” as Applicant now argues. Regardless, MPEP §2106.04(a)(1) lists “a method of training a neural network for facial detection comprising: collecting a set of digital facial images, applying one or more transformations to the digital images, creating a first training set including the modified set of digital facial images; training the neural network in a first stage using the first training set, creating a second training set including digital non-facial images that are incorrectly detected as facial images in the first stage of training; and training the neural network in a second stage using the second training set” as one of the “Non-limiting hypothetical examples of claims that do not recite (set forth or describe) an abstract idea,” which is a specific example that does not apply to “machine-learning processes in general,” as Applicant now argues. 
The claim limitations of the present invention to which Applicant seems to refer as “the future learning processes” are not similar to the aforementioned example and instead cover mathematical relationships within the “Mathematical Concepts” grouping of abstract ideas, as stated previously above. Furthermore, the claim limitations which describe the “neural network” are not interpreted as part of the abstract idea, but as additional elements to be interpreted in Step 2A, Prong Two, which amount to no more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally link the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually.
Applicant argues “Claim 1 discloses a first training data set (analogous to example 39's first set of digital facial images), followed by adjusting the weights and connections between nodes, producing a second training set (analogous to example 39's second training set), which is used to further train the machine learning model.” However, the claim limitations of the present invention are different from the claim limitations of those found eligible in Example 39. Even if the claim limitations of the present invention were similar to those of the claims found eligible (and they are not similar), the claimed inventions are fundamentally different in scope and should be interpreted based on the asserted fact patterns; other fact patterns may have different eligibility outcomes, as is the case with the claims of the present invention. Furthermore, the claims to which Applicant seems to refer (i.e., “training, iteratively, the ocular machine-learning model…using the ocular training data with feedback from previous iterations of the ocular machine-learning model, wherein training the ocular machine-learning model comprises: training the ocular machine-learning model using the ocular training data as input; adjusting one or more connections and one or more weights between nodes in adjacent layers of the ocular machine-learning model; and adjusting the ocular machine-learning model as a function of the adjusted connections to produce the output layer of nodes”) encompass the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra, which is part of the abstract idea.
Applicant argues “although any form of neural network would necessarily "involve" mathematical calculations, including minimization of error as in other machine-learning algorithms, a claim reciting training a neural network does not itself recite those mathematical operations. Accordingly, Applicant respectfully asserts that at last the limitations describing training of the ocular machine-learning model do not recite "mathematical concepts."” As stated previously above, with regard to training iteratively a machine learning model (i.e., neural network) with feedback from previous iterations of the model, the specification mentions: “K-nearest neighbors algorithm may include specifying a K-value, or a number directing the classifier to select the k most similar entries training data to a given sample, determining the most common classifier of the entries in the database, and classifying the known sample; this may be performed recursively and/or iteratively to generate a classifier that may be used to classify input data as further samples” (¶ 0023) and “a suitable training algorithm (such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms) is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce the desired values at the output nodes” (¶ 0042), which are mathematical interrelationships between data in the manner described in the identified abstract idea, supra. If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Thus, the claims are directed to an abstract idea.
“Similar to the claims in Enfish, Applicant asserts that, as amended, claim 1 is directed to a specific improvement in computer functionality, not a disembodied idea. Claim 1 recites a feedback-driven training pipeline for a convolutional neural network… claim 1 tells the computer how to do better what only a computer does at all: iteratively train the claimed CNN on ocular training data with feedback from previous iterations, adjust inter-layer connections and weights, and produce the output layer as a function of those adjustments-exactly the kind of logical structures and processes that Enfish recognizes as patent-eligible improvements to computer technology. The Examiner's Step 2A(1) position that the additional elements are merely a "computing device," "machine-learning model," or "classifier"-is contradicted by the record: the ordered combination here is the specified CNN architecture and explicit training operations quoted above, which constrain the model's internal mechanics and yield concrete, domain-specific outputs (e.g., ocular profiles) generated by the trained model, not a generic "apply it." Accordingly, even if a judicial exception were implicated, claim 1 integrates it into a practical application through the recited iterative CNN training pipeline and classifier-based determinations expressly taught in the specification and captured in the amended limitations”:
Applicant argues “Similar to the claims in Enfish, Applicant asserts that, as amended, claim 1 is directed to a specific improvement in computer functionality, not a disembodied idea. Claim 1 recites a feedback-driven training pipeline for a convolutional neural network… claim 1 tells the computer how to do better what only a computer does at all: iteratively train the claimed CNN on ocular training data with feedback from previous iterations, adjust inter-layer connections and weights, and produce the output layer as a function of those adjustments-exactly the kind of logical structures and processes that Enfish recognizes as patent-eligible improvements to computer technology.” However, the claim limitations of the present invention are different from the claim limitations of Enfish. Even if the claim limitations of the present invention were similar to those of the claims found eligible in Enfish (and they are not similar), the claimed inventions are fundamentally different in scope and examples should be interpreted based on the asserted fact patterns; as previously stated above, other fact patterns may have different eligibility outcomes, as is the case with the claims of the present invention. The claims found eligible in Enfish are directed to an improvement to computer functionality; unlike Enfish, the claims of the present invention do not recite any technological computational efficiency, solution, or improvement, but are directed to a person thinking about ocular data, converting the data into numbers, and determining the next steps (i.e., what is the most appropriate item to eat) based on a person’s numbers and goals in the manner described in the identified abstract idea, supra, and the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra, which is the abstract idea.
Furthermore, the claim limitations which describe “iteratively train the claimed CNN on ocular training data with feedback from previous iterations, adjust inter-layer connections and weights, and produce the output layer as a function of those adjustments” encompass the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra, which is part of the abstract idea. Therefore, any alleged improvement is, at best, an improvement to the abstract idea itself (i.e., mathematical relationships). However, an improved abstract idea is still an unpatentable abstract idea.
Applicant argues “The Examiner's Step 2A(1) position that the additional elements are merely a ‘computing device,’ ‘machine-learning model,’ or ‘classifier’ is contradicted by the record: the ordered combination here is the specified CNN architecture and explicit training operations quoted above, which constrain the model's internal mechanics and yield concrete, domain-specific outputs (e.g., ocular profiles) generated by the trained model, not a generic ‘apply it.’” However, Applicant fails to specify how “the ordered combination…the specified CNN architecture and explicit training operations quoted above, which constrain the model's internal mechanics and yield concrete, domain-specific outputs (e.g., ocular profiles) generated by the trained model [is] not a generic ‘apply it,’” or how they provide “improvements to computer technology.” Regardless, the computing device is described at a high level of generality (¶ 0009; ¶ 0054), such that it amounts to no more than mere instructions to apply the exception using generic computer components. Also, although the claims add a “machine-learning model… comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes” and a “classifier” (which is defined in ¶ 0021 of the specification as “a machine-learning model”), using a trained machine-learning model (i.e., a convolutional neural network) amounts to no more than a recitation of the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually.
Furthermore, Examiner cannot find, and Applicant has not identified, any problem caused by the technological environment to which the claims are confined (i.e., a well-known, general-purpose computer). The computing system did not cause the argued problem, and thus it is not a technical problem caused by the technological environment to which the claims are confined. Even if the claims reflected the alleged improvements (which they do not), the disclosure does not provide sufficient detail such that one of ordinary skill in the art would recognize the claimed invention as providing any technical improvement or any physical improvement to the computer. See MPEP §§ 2106.04(d)(1) and 2106.05(a).
Thus, the claim as a whole does not integrate the recited judicial exception into a practical application.
“The claim as amended mirrors the reasoning in BASCOM because the training steps and other limitations amount to significantly more by providing a technical improvement in the art. As amended, the claim recites training a multi-layer system of nodes in an artificial neural network, wherein the training steps comprise adjusting the connections between the nodes”:
Applicant mentions “BASCOM.” However, the claim limitations of the present invention are different from the claim limitations of those found eligible in BASCOM. Even if the claim limitations of the present invention were similar to those of the claims found eligible (and they are not), the claimed inventions are fundamentally different in scope, and decisions should be interpreted based on their asserted fact patterns; other fact patterns may have different eligibility outcomes, as is the case with the claims of the present invention. Unlike the claims found eligible in BASCOM, Applicant’s claims do not recite a “non-conventional and non-generic arrangement of the additional elements,” let alone a “non-conventional and non-generic arrangement of the additional elements” that provides improvements to computer functionality, technology, or any other technological field.
As stated previously above, “the training steps” to which Applicant seems to refer, i.e., “training a multi-layer system of nodes in an artificial neural network, wherein the training steps comprise adjusting the connections between the nodes,” encompass the creation of mathematical interrelationships between data in the manner described in the identified abstract idea, supra, which is part of the abstract idea, and are not additional elements to be evaluated at Step 2B. Furthermore, the “machine-learning model… comprising a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes” amounts to no more than a recitation of the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer, and only generally links the use of a judicial exception to a particular technological environment or field of use (i.e., computer technology), which does not impose meaningful limits on the scope of the claim. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually.
Furthermore, Applicant fails to specify what the alleged “technical improvement in the art” is.
Thus, the claim as a whole does not amount to significantly more than the judicial exception.
Thus, Examiner maintains the rejections of claims 1–4, 6–14, 16–20 under 35 U.S.C. 101, which have been updated in the above Office Action to address Applicant’s remarks and to comply with the 2019 Revised Patent Subject Matter Eligibility Guidance and the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emily Huynh, whose telephone number is (571) 272-8317. The examiner can normally be reached Monday through Thursday, 8:00 AM–5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMILY HUYNH/Examiner, Art Unit 3683