Prosecution Insights
Last updated: April 18, 2026
Application No. 18/382,700

CLINICAL DIAGNOSTIC AND PATIENT INFORMATION SYSTEMS AND METHODS

Non-Final OA (§101, §103)
Filed
Oct 23, 2023
Examiner
RUIZ, JOSHUA DAMIAN
Art Unit
3684
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
IDEXX Laboratories, Inc.
OA Round
3 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 7 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Avg Prosecution: 3y 0m typical timeline; 41 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 7 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

The status of the claims as of the response filed February 5, 2026 is as follows: Claims 1-3, 5-11, and 13-22 are pending. Claims 4 and 12 are canceled. Claims 18-22 are new. Claims 1, 9, and 17 are amended and have been considered below.

Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 5, 2026 has been entered.

Response to Arguments

Rejection Under 35 U.S.C. § 101

Applicant's arguments with respect to the rejection of claims 1-3, 5-11, and 13-22 (pages 8-12), under 35 U.S.C. § 101, filed on February 5, 2026, have been fully considered but are not persuasive. The rejection is maintained.

Applicant argues that Claim 1, "training a first neural network using unsupervised learning to segment unstructured clinical notes into words or phrases and generating an ontology," is not properly rejected because the natural language processing operations are computational processes that cannot practically be performed in the human mind. The Examiner respectfully disagrees because, under proper BRI, Claim 1, "generating an ontology comprising a reference set of normalized concepts that capture associations and relations," means the cognitive act of organizing and normalizing textual observations into a conceptual framework of symptoms.
The record shows that "unsupervised machine learning techniques are used to automatically generate and update ontologies for extracting structured knowledge from the unstructured notes" (par. 0060). The examiner found the applicant's position not persuasive because using generic computational techniques like word frequency counting to perform the cognitive steps of observing, evaluating, and categorizing medical records merely uses a computer as a tool to automate a human mental process, rather than representing a non-abstract concept. Therefore, the rejection is maintained.

Applicant argues that Claim 1, "training a first neural network using unsupervised learning to segment unstructured clinical notes into words or phrases and generating an ontology," is not properly rejected because the nature of the steps relies on computational processing of massive datasets rather than human cognition. The examiner found the applicant's position not persuasive because MPEP 2106.04(a)(2) dictates that a claim directed to a process practically performed in the human mind remains a mental process even if it is automated by a computer. While the applicant asserts the process relies on the mathematical optimization of network weights, independent Claim 1 only broadly recites the functional result of using generic unsupervised learning to segment text and map concepts, which merely instructs a computer to automate the human cognitive task of reading and conceptualizing clinical notes without reciting any specific technical improvement to the algorithms themselves. Therefore, the rejection is maintained.

Applicant argues that Claim 1 is not properly rejected because the steps of training neural networks using unsupervised and supervised learning are computational processes that cannot practically be performed in the human mind, similar to the allowable neural network training steps in 2019 PEG Example 39.
The Examiner respectfully disagrees because, under proper BRI, Claim 1, "train a first neural network... by using unsupervised learning to segment the unstructured data... and generating an ontology," means using a generic machine learning model to automate the cognitive act of parsing textual clinical notes and grouping synonymous terms into a conceptual framework. The record shows that "unsupervised machine learning techniques are used to automatically generate and update ontologies for extracting structured knowledge from the unstructured notes" (par. 0060). The examiner found the applicant's position not persuasive because the 2024 AI Guidance Update instructs that claiming generic AI or machine learning tools to merely automate a process practically performed in the human mind remains a mental process. Unlike Example 39, which recites a specific technical method for generating sequential training sets to reduce false positives, Claim 1 broadly recites the functional result of applying unsupervised learning to automate a veterinarian's cognitive diagnostic workflow without claiming the specific computational processes or mathematical optimizations that perform the task. Therefore, the rejection is maintained.

Applicant argues that Claim 1 is not properly rejected because it integrates the abstract idea into a practical application by improving machine learning technology itself, analogizing the claim to the technological improvements recognized in Ex parte Desjardins and Enfish. The Examiner respectfully disagrees because, under proper BRI, Claim 1 broadly means using a first and second neural network to segment data and predict a disease likelihood, without reciting any specific technical architecture or specialized internal training methodology that alters how the underlying machine learning models physically or computationally operate.
The record shows the application relies on unmodified machine learning algorithms, stating "backpropagation may be used to train one or more neural networks" (par. 0074) and "Euclidean distance may be used to train one or more k-means clustering classifiers" (par. 0075). The examiner found the applicant's position not persuasive because MPEP 2106.05(a) requires the claim to recite the specific features that yield the technical improvement; unlike the claims in Desjardins, which specifically recited how to learn new tasks while computationally protecting previous task knowledge to reduce system complexity, Claim 1 merely instructs a generic computer to apply standard machine learning techniques to a new domain of veterinary data to achieve a more accurate diagnostic result. Therefore, the rejection is maintained.

Applicant argues that Claim 1, "train a first neural network... generating an ontology... combine the first structured data and the second structured data... train the second neural network," is not properly rejected because the two-stage architecture solves the technical problem of combining heterogeneous structured and unstructured data, thus improving machine learning technology itself. The Examiner respectfully disagrees because, under proper BRI, Claim 1, "combine the first structured data and the second structured data... train the second neural network," means feeding both quantitative test results and previously processed text concepts as inputs into a machine learning model to calculate a diagnosis. The record shows the application relies on conventional machine learning processes without altering their internal mechanics, explicitly stating, "backpropagation may be used to train one or more neural networks" (par. 0074).
The examiner found the applicant's position not persuasive because MPEP 2106.05(a) establishes that a claim must "recite the specific features that yield the improvement," and the 2024 AI Guidance Update clarifies that merely applying a known machine learning tool to a new dataset or problem does not constitute an improvement to the computer or the AI itself. The examiner's Step 2A, Prong Two analysis correctly determined that the claim does not improve machine learning technology; unlike the cited precedent, where the claim recited specific computational mechanisms to preserve internal memory weights, Claim 1 merely organizes heterogeneous veterinary data into sequential generic neural networks. This arrangement improves the accuracy of the abstract diagnostic prediction, but it does not technically improve the computational architecture or functioning of the machine learning algorithms themselves. Therefore, the rejection is maintained.

Applicant argues that the claims should be found eligible because any doubt or close call regarding whether the machine learning training steps are a mental process must be resolved in favor of eligibility. The Examiner respectfully disagrees because, under proper BRI, the claims clearly and unambiguously mean automating the cognitive diagnostic process using generic neural networks, which does not present a close call or evidentiary doubt. The record shows the system relies on generic tools applied to clinical data, stating "unsupervised machine learning techniques are used to automatically generate and update ontologies for extracting structured knowledge." The examiner found the applicant's position not persuasive because the 2024 AI Guidance Update instructs examiners not to reject claims based on mere uncertainty, but it does not require withdrawing a clearly articulated prima facie case of ineligibility just because the applicant cites distinguishable precedent like Example 39 or Desjardins.
The examiner is not uncertain; the claims broadly recite the functional result of analyzing veterinary data without specific technical improvements to the AI itself, establishing that it is more likely than not that the claims are directed to an unintegrated abstract idea. Therefore, the rejection is maintained.

Rejection Under 35 U.S.C. § 103

Applicant's arguments with respect to the rejection of claims 1-3, 5-11, and 13-22 (pages 12-16), under 35 U.S.C. § 103, filed on February 5, 2026, have been fully considered but are not persuasive. The rejection is maintained. Refer also to the 35 U.S.C. § 103 rejection below for further details.

Applicant argues that Claim 1, "the unstructured data includes clinical notes in free-form text … training a first neural network on the unstructured data … by using unsupervised learning to segment the unstructured data into a plurality of words or phrases and generating an ontology," is not properly rejected because Lascelles does not teach those limitations and Etkin allegedly addresses a different ontology problem.
The Examiner respectfully disagrees because Lascelles supplies the base veterinary diagnostic system: a two-stage animal-health pipeline in which sensor/activity data and subject metadata are used to generate a training dataset and a movement-condition model, with the movement score "manually computed … based on information collected by veterinary specialists to define the phenotype"; the Office action then properly relies on Etkin for the missing text-processing feature, because Etkin teaches "applying … a first machine learning technique" to a corpus with "textual data," where the first technique "may include an unsupervised machine learning technique," and then "applying the ontology to process an electronic medical record." Lascelles thus provides the foundational veterinary diagnostic framework by collecting animal-related data and applying trained models to predict movement conditions, while Etkin contributes the text-processing feature by teaching the use of NLP and unsupervised machine learning to generate and apply ontologies from textual data, which is not limited to human subjects (Etkin, col. 37, ll. 9-15). The rejection is supported because Lascelles teaches the animal-diagnosis pipeline and Etkin teaches the known unsupervised text-to-ontology technique; using that known technique to add structured information from clinical text to Lascelles's existing animal disease-prediction system is a predictable improvement in the same diagnostic context. Therefore, the rejection is maintained.
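For illustration only (this sketch is an assumption of this summary, not code from the application or from Etkin), the kind of unsupervised text-to-ontology step at issue, grouping synonymous clinical terms into normalized concepts via k-means clustering with Euclidean distance of the sort the specification's par. 0075 contemplates, can be sketched as:

```python
import math

# Toy 2-D "embeddings" for clinical terms. The vectors and term names are
# invented for illustration; a real system would learn representations
# from the note corpus.
terms = {
    "limping":  (0.9, 0.1),
    "lameness": (0.8, 0.2),
    "vomiting": (0.1, 0.9),
    "emesis":   (0.2, 0.8),
}

def nearest(point, centroids):
    """Index of the centroid closest to point under Euclidean distance."""
    return min(range(len(centroids)), key=lambda i: math.dist(point, centroids[i]))

def kmeans(points, centroids, iters=10):
    """Minimal k-means clustering using Euclidean distance."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            clusters[nearest(p, centroids)].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
                     for i, pts in enumerate(clusters)]
    return centroids

centroids = kmeans(list(terms.values()), [(1.0, 0.0), (0.0, 1.0)])

# "Ontology": every surface term maps to the normalized concept of its cluster.
concepts = ["lameness_concept", "emesis_concept"]
ontology = {term: concepts[nearest(vec, centroids)] for term, vec in terms.items()}
print(ontology)  # "limping" and "lameness" normalize to the same concept
```

The sketch only shows the shape of the technique the Office action describes as generic: distance-based grouping of terms followed by a term-to-concept lookup.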
Applicant argues that Claim 1, "a two-stage machine learning architecture that enables effective combination of both quantitative diagnostic test results and qualitative textual observations for veterinary disease prediction," is not properly rejected because neither Lascelles nor Etkin individually teaches combining quantitative diagnostic tests with ontology-derived concepts from clinical notes. The Examiner respectfully disagrees because, under proper BRI, Claim 1, "a two-stage machine learning architecture that enables effective combination of both quantitative diagnostic test results and qualitative textual observations," means a sequential machine learning system that ingests both numerical medical data and written clinical notes to predict a health condition. The record shows the applicant uses this architecture to merge structured metrics with normalized textual concepts to evaluate the whole patient. The examiner found the applicant's position not persuasive because under MPEP 2143, the test for obviousness does not require a single reference to identically disclose the entire combination; rather, it looks to what the combined teachings would have suggested to a POSITA. Lascelles teaches the foundational two-stage veterinary architecture that processes quantitative diagnostic data (sensor metrics) to predict disease, while Etkin teaches the missing capability of using machine learning to extract predictive value from "textual data describing diagnoses, encounters, procedures" (Etkin, col. 42, ll. 50-67). A POSITA would be motivated to integrate Etkin's qualitative textual processing into Lascelles's quantitative system to predictably improve the veterinary diagnostic model by capturing observable characteristics hidden in clinical notes that physical sensors in Lascelles cannot detect. Therefore, the rejection is maintained.
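To make the disputed two-stage claim language concrete, here is a minimal data-flow sketch: a first stage turns free-text notes into normalized concept features, and a second stage scores disease likelihood from those features combined with quantitative test results. The lookup table, weights, and term names are invented stand-ins (the claims recite trained neural networks for both stages); the sketch shows only the sequencing and combination of the two data types:

```python
# Stage 1 stand-in: map free-text note terms to normalized ontology concepts.
# The claims recite a trained first neural network here; a lookup table is
# used only to show the data flow, and all values below are invented.
ONTOLOGY = {"limping": "lameness", "lame": "lameness",
            "vomiting": "emesis", "throwing up": "emesis"}

def stage_one(note: str) -> dict:
    """Extract structured concept features from an unstructured note."""
    note = note.lower()
    return {concept: 1.0 for term, concept in ONTOLOGY.items() if term in note}

# Stage 2 stand-in: a fixed linear scorer over the combined features.
# (Assumed weights; the claims recite a second trained neural network.)
WEIGHTS = {"lameness": 0.6, "emesis": 0.2, "wbc_high": 0.3}

def stage_two(features: dict) -> float:
    """Score disease likelihood from the combined structured features."""
    return min(sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items()), 1.0)

def predict(note: str, test_results: dict) -> float:
    """Apply the two stages in sequence, as the claims recite."""
    combined = {**stage_one(note), **test_results}  # second + first structured data
    return stage_two(combined)

likelihood = predict("Patient is limping on left hind leg", {"wbc_high": 1.0})
print(round(likelihood, 2))  # 0.9
```

The design point under dispute is visible in `predict`: the second stage consumes the first stage's output together with the quantitative results, rather than either data type alone.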
Applicant argues that Claim 1, from "separating the filtered patient medical record data …" through "qualitative textual observations for veterinary disease prediction," is not properly rejected because the combination of Lascelles and Etkin allegedly does not teach or suggest the claimed separation of structured and unstructured medical-record data, ontology extraction from free-text notes, and the resulting two-stage architecture for veterinary disease prediction. The Examiner respectfully disagrees because Lascelles supplies the foundational veterinary prediction framework, while Etkin provides the missing text-processing feature, using NLP and unsupervised learning to derive ontologies from medical text and apply them to clinical records. The Examiner finds the applicant's argument unpersuasive because the combination rationale is explicitly addressed: a POSITA would be motivated to integrate Etkin's text-to-ontology technique into Lascelles's diagnostic system, thereby improving disease prediction by incorporating structured information from clinical notes. Thus, under MPEP 2143 and 2145, the rejection remains justified and is maintained.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-11, and 13-22 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (an abstract idea) without embodying an inventive concept that amounts to significantly more.

Step 1: Statutory Categories

The claims fall within the statutory categories of invention. Claims 1-3 and 5-8 are directed to a process. Claims 9-11 and 13-22 are directed to a machine (a system). Claim 17 is directed to a manufacture (a non-transitory computer-readable storage medium).
As the claims are directed to statutory subject matter, the analysis proceeds to Step 2A.

Step 2A, Prong One: Judicial Exception Analysis

The claims are directed to a judicial exception because they recite the abstract idea of collecting, organizing, and analyzing information to form a judgment. Under their broadest reasonable interpretation, the claims as a whole describe the mental process of a veterinarian diagnosing a patient: gathering patient data, mentally structuring clinical notes into a conceptual framework (an ontology), combining that with test results, and then using the combined knowledge to identify correlations and predict a disease.

Independent Claim 9: A clinical diagnostic system comprising: at least one computer accessible-storage device configured to store instructions; and at least one processor communicatively connected to the at least one computer accessible storage device and configured to execute the instructions to: receive patient medical record data; filter the received patient medical record data by at least one of species, breed, gender, or geographic location; separate the filtered patient medical record data into first structured data and unstructured data, wherein the first structured data includes at least one or more diagnostic test results and the unstructured data includes clinical notes in free-form text entered by a user into the patient medical record data, the clinical notes describing symptoms, observations, and findings from a clinical examination of an animal patient; train a first neural network on the unstructured data as a first training set to extract second structured data from the unstructured data by using unsupervised learning to segment the unstructured data into a plurality of words or phrases and generating an ontology for the segmented plurality of words and phrases as output from the first neural network, the ontology comprising a reference set of normalized concepts that capture associations and relations
present in the plurality of words or phrases, the ontology defining the second structured data; combine the first structured data and the second structured data to form a second training set for training a second neural network; train the second neural network on the second training set formed from the combined first structured data and second structured data to output the likelihood of one or more diseases; and apply the trained first neural network and the trained second neural network, in sequence, on new patient medical record data to predict disease diagnosis, wherein the first neural network and the second neural network constitute a two-stage machine learning architecture that enables effective combination of both quantitative diagnostic test results and qualitative textual observations for veterinary disease prediction.

Note: The bolded portions represent additional elements evaluated in Prong Two and Step 2B. The non-bolded portions represent the abstract idea.

Claim Abstract Classification & Rationale

Independent Claims 1, 9, and 17 fall within the mental process abstract idea category because they describe a cognitive diagnostic workflow. The limitations of receiving and filtering information, separating test results from clinical notes, generating an ontology of symptoms, and combining this data to predict disease diagnosis directly mirror the cognitive evaluation performed by a human. A veterinarian practically performs these steps in the human mind by reviewing a patient chart, mentally structuring the handwritten observations into a conceptual framework of symptoms, integrating those symptoms with hard laboratory numbers, and relying on past experience to form a diagnostic judgment. This aligns with the disclosure that "[c]onventionally, a veterinarian or other health care professional uses a combination of the patient's medical record and the present symptoms to generate a diagnosis" (par. 0005).
Consequently, the claim as a whole recites an abstract idea under the MPEP 2106 framework because the steps describe observation, evaluation, and judgment practically performed in the human mind.

Dependent Claims Analysis

The dependent claims 2-3, 5-8, 10-11, and 13-22 are also directed to an abstract idea as they merely add further details to the core mental process.

Claims 2-3 and 10-11: Under BRI, these claims recite identifying specific "input features" such as "age," "test results," and "symptoms" and pairing them with a "ground truth." This is the fundamental mental process of observation and classification, where a practitioner identifies relevant patient data points to consider for a diagnosis.

Claims 5 and 13: These claims recite using "supervised learning." This describes the mental process of learning by example, where a veterinarian learns to associate a set of symptoms (input) with a confirmed diagnosis (known outcome), which is an inherent part of clinical training and experience.

Claims 6, 7, 14, and 15: Under BRI, these claims recite training multiple models, "evaluating" them with "metrics" like "prediction error," and "selecting" the best one. This describes the mental act of considering different diagnostic possibilities (models), weighing their accuracy and reliability (metrics), and choosing the most likely one (selecting). This is a routine act of evaluation and judgment.

Claims 8 and 16: These claims recite combining data based on "time information." This describes the mental process of organizing patient data chronologically to understand the progression of a condition, a fundamental step in forming a diagnosis.

Claims 18-22: These claims fall within both the mental processes and mathematical concepts abstract idea categories.
The limitations of receiving, filtering, and separating the data, along with segmenting text so that different words or phrases are normalized to a common concept in the ontology, mirror the mental process of observation, evaluation, and judgment. A human reading clinical notes performs a cognitive diagnostic workflow by parsing sentences, tracing context, and mentally normalizing different descriptions of a symptom into a common conceptual framework, aligning with the disclosure that "unsupervised machine learning techniques are used to automatically generate and update ontologies for extracting structured knowledge from the unstructured notes." Furthermore, the steps reciting an error function that expresses error as a low probability or as an unstable high energy state, adjusting weights and biases for each connected pair of neurons, and determining a minimum value of an error function in weight space to find a global minimum of the error function recite mathematical algorithms and calculations. These limitations describe mathematical relationships and formulas used to optimize data models, which are mathematical concepts under the MPEP 2106 framework. These dependent claims do not add a non-abstract concept.

Having determined the claims are directed to an abstract idea, the analysis proceeds to Step 2A, Prong Two, to determine if the claims recite additional elements that are integrated into a practical application.

Step 2A, Prong Two: Integration into a Practical Application

The claims fail to integrate the abstract idea into a practical application because the additional elements merely provide a generic technological environment for an abstract mental process.
Evaluation of Independent Claims 1, 9, and 17 Additional Elements

Generic Hardware (Processor and Storage): The recitation of a "processor," "computer," "computer accessible-storage device," and "non-transitory computer readable storage medium" fails to integrate the abstract idea because it:

(MPEP § 2106.05(f)) - Mere Instructions: Recites mere instructions to implement the abstract idea on a computer. The claims simply take the abstract mental process of diagnosis and instruct a user to "apply it" using a generic computer and storage, which is insufficient to confer eligibility. The hardware performs its basic, general functions of processing and storing data without any specific configuration that would represent a meaningful limitation on the abstract idea.

(MPEP § 2106.05(a)) - No Tech Improvement: Fails to improve the functioning of the computer itself. The claims do not describe a faster, more efficient, or more reliable processor or storage device; they only describe using these generic components to execute the claimed diagnostic method. The specification does not describe any technical improvement to computer hardware, only an application of that hardware to a diagnostic problem.

Functional Software (First and Second Neural Networks): The recitation of training and applying a "first neural network" and a "second neural network" to predict diseases fails to integrate the abstract idea because it:

(MPEP § 2106.05(f)) - Mere Instructions: Represents the specific instructions of the abstract idea itself. The neural networks are the mathematical tools used to perform the cognitive steps of the claimed mental process: organizing unstructured data and identifying patterns. As claimed, they do not represent a practical application but rather a more detailed recitation of the abstract idea being performed.

(MPEP § 2106.05(a)) - No Tech Improvement: Fails to improve machine learning technology itself.
Applicant argues for a "novel two stage neural network," but the claims do not recite a specific architecture or training method that improves how machine learning works. Instead, they merely apply machine learning techniques (unsupervised learning to create an ontology, supervised learning for prediction) to a specific type of data (veterinary records). An improvement in the accuracy of a prediction is an improvement to the abstract mathematical analysis itself, not to the underlying computer technology.

(MPEP § 2106.05(h)) - Linking to Environment: Fails to impose meaningful limits on the abstract idea. The use of neural networks, as claimed, does not limit the process to anything more than a mathematical implementation of the abstract diagnostic method. The claims cover any neural network capable of performing these steps, making it a generic application of a mathematical concept.

Combination Analysis: When viewed as a whole, the combination of these elements does not integrate the abstract idea. The claim describes a generic computer arrangement executing a series of mathematical steps that are analogous to a human mental process. Using a neural network to analyze data on a generic processor does not transform the abstract idea into a patent-eligible application.

Dependent Claims Analysis

The dependent claims add only minor limitations that fail to provide the necessary integration. They do not introduce any new hardware or tangible components.

Claims 2, 3, 10, and 11: These claims add limitations regarding data features ("input features," "ground truth," "age," "symptoms"). This merely narrows the abstract idea by specifying the type of information being mentally analyzed and fails to improve computer functionality (a).

Claims 5 and 13: These claims add the limitation of using "supervised learning."
This is a fundamental machine learning technique and merely specifies a particular mathematical approach for the abstract analysis, failing to improve computer functionality (a).

Claims 6, 7, 14, and 15: These claims add limitations related to model evaluation ("evaluating...using one or more metrics"). This is a mere field-of-use limitation (h), as it simply specifies how the abstract output should be assessed within the field of diagnostics, and represents an insignificant pre-solution activity (g).

Claims 8 and 16: These claims add limitations for combining data based on "time information." This merely refines the abstract mental step of data organization and is an insignificant pre-solution activity (g).

Claims 18 and 19: These claims add natural language processing steps such as word frequency counting, dependency parsing, and normalizing different words to a common concept, which is a mere field-of-use limitation (h) and mere instructions (f). These elements fail to improve computer functionality (a) because they simply apply algorithmic text-processing instructions to perform the cognitive step of reading, organizing, and finding synonyms within clinical notes.

Claims 20-22: These claims add specific mathematical formulas, including error functions based on mimicking data, expressing error as an unstable high energy state, and determining a minimum value of an error function in weight space by adjusting weights starting from an output layer back to an input layer. These limitations fail to improve computer functionality (a) because they merely represent the specific mathematical equations of the abstract idea itself. Reciting the mathematical calculations used to train a neural network does not solve a technical problem in computer architecture; it merely describes how the mathematical tool computes its results.

When viewed as a whole, the combination of these elements in the dependent and independent claims does not integrate the abstract idea.
The dependent claims only provide more detail about the abstract mental process itself (what data to look at, how to evaluate it) rather than specifying a concrete, practical application that moves beyond the abstract realm. Because the claims are directed to an abstract idea without integrating it into a practical application, the analysis proceeds to Step 2B.

Step 2B: Inventive Concept Analysis

The claims lack an inventive concept because the additional elements do not amount to significantly more than the judicial exception itself.

Evaluation of Independent Claims 1, 9, and 17 Additional Elements

Generic Hardware (Processor and Storage):

MPEP § 2106.05(f) - Mere Instructions: The claims are not overcome because they merely instruct the user to "apply" the abstract diagnostic method using a generic computer. The specification describes these components performing their basic functions: "a data processing device system...includes one or more data processing devices that implement or execute...control programs" and a system implemented by "programmed instructions stored in one or more memories and executed by one or more processors" (Spec., paras. [0034], [0042]).

MPEP § 2106.05(a) - No Tech Improvement: The claims are not directed to an improvement in the functioning of the computer itself. The specification describes a standard computing environment, not an improved one: "The memory 151, input/output (I/O) adapter 156, and non-transitory storage medium 157 may correspond to the memory device system 130" (Spec., para. [0040]).

Functional Software (First and Second Neural Networks):

MPEP § 2106.05(f) - Mere Instructions: The claims are not overcome because the neural networks simply represent the set of mathematical instructions for carrying out the abstract diagnostic method.
The specification describes their function as performing the abstract analysis: "unsupervised machine learning techniques are used to automatically generate and update ontologies for extracting structured knowledge from the unstructured notes" (Spec., para. [0060]). MPEP § 2106.05(a) - No Technological Improvement: The claims fail to improve machine learning technology itself; they merely apply it to a specific field. The specification discusses using well-known, conventional machine learning techniques, not improving them: "backpropagation may be used to train one or more neural networks" and "Euclidean distance may be used to train one or more k-means clustering classifiers" (Spec., paras. [0074], [0075]).

Combination Analysis: When viewed as a whole, the combination of generic hardware and functional software does not amount to an inventive concept. The elements together describe nothing more than using a high-level computer to execute a series of mathematical steps that automate a human mental process, which is not significantly more than the abstract idea itself.

Dependent Claims Analysis

(Claims 2, 3, 10, and 11): These claims do not add additional elements beyond data features ("input features," "ground truth," "age," "symptoms"), which is a mere field-of-use limitation (h).

(Claims 5 and 13): These claims add the limitation of using "supervised learning," which fails to improve computer functionality (a) as it is a fundamental machine learning technique. The specification identifies this as a standard approach: "The second stage machine learning models are trained using supervised learning..." (Spec., para. [0073]).

(Claims 6, 7, 14, and 15): These claims add the limitation of model evaluation ("evaluating...using one or more metrics"), which is insignificant pre-solution activity (g).

(Claims 8 and 16): These claims describe combining data based on "time information," which is insignificant pre-solution activity (g).
(Claims 18-19): These claims add natural language processing steps (word frequency counting, dependency parsing, context tracing, part-of-speech tagging) and data normalization, which constitute MPEP § 2106.05(f) mere instructions to apply the exception and insignificant pre-solution activity (g). The claims recite these operations at a high level of generality, merely instructing a generic computer to "apply" generalized NLP categories to segment text and group synonyms. The specification confirms this is a generalized data-gathering and pre-processing step, describing the techniques by their general functions without reciting any specific technical improvement to the algorithms themselves: "The text snippets are pre-processed by performing techniques such as, but not limited to, word frequency counting, dependency parsing, context tracing, and part-of-speech tagging" (Spec., para. [0061]). Because the claims merely instruct a generic computer to apply broad categories of NLP to gather and organize the unstructured clinical notes, they fail to provide an inventive concept.

(Claims 20-22): These claims add mathematical optimization formulas, including error functions and weight-space adjustments, which fail to improve computer functionality (a) because they are explicitly admitted to be well-understood, routine, and conventional. The specification confirms this is a known mathematical technique: "The backpropagation algorithm looks for the minimum value of the error function in weight space using a well-known technique called the delta rule or gradient descent." Furthermore, the specification explicitly admits that working backwards through layers is a standard, conventional part of this mathematics: "In backpropagation, the weights and biases are repeatedly adjusted... starting with the output layer and working back to the input layer" (Spec., para. [0074]).
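For context, the gradient-descent "delta rule" characterized above as well-understood, routine, and conventional can be sketched in a few lines. This is a generic illustration only, not the applicant's claimed implementation; the example error function and all numeric values are hypothetical:

```python
# Illustrative only: a generic gradient-descent weight update (the "delta rule").
# The error function below is a hypothetical example, not from the application.

def gradient_descent_step(weights, gradients, learning_rate=0.01):
    """Move each weight a small step against its error gradient."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

def train(weights, grad_fn, steps=100, learning_rate=0.1):
    """Repeatedly apply the delta rule to seek a minimum of the error function
    in weight space, as conventional backpropagation does."""
    for _ in range(steps):
        weights = gradient_descent_step(weights, grad_fn(weights), learning_rate)
    return weights

# Toy error function E(w) = (w - 3)^2, so dE/dw = 2(w - 3); the weight
# converges toward the minimum at w = 3.
w = train([0.0], lambda ws: [2 * (ws[0] - 3.0)], steps=200, learning_rate=0.1)
```

The point of the sketch is that the update rule itself is generic mathematics; nothing in it is specific to any particular computer architecture.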
As a whole, the dependent claims merely add further details about the abstract analysis, specifying the data types and analytical techniques to be used. This combination of general steps, performed on a generic computer, fails to transform the abstract idea into an inventive concept. The claims are directed to an abstract idea and lack an inventive concept. Therefore, Claims 1-3, 5-11 and 13-22 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 8, 9-11, 13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over WO2021173571 (Lascelles) in view of US11526808B2 (Etkin).

Claim 1.
Lascelles teaches a processor-executed method for predicting diseases in animals, comprising: (Lascelles, [0011]-[0012], [0006]-[0008], abstract). Lascelles teaches a "computer-implemented method" run on a "computing device" or "processing unit" using software for "evaluating a movement-related condition" (such as OA) "in an animal" (specifically mentioning dogs and cats), which results in a "predicted movement score indicating the movement-related condition". Lascelles thus discloses a processor-executed method (computer-implemented on a processing unit) for predicting/evaluating a disease/condition (OA/movement condition) in animals (dogs/cats), receiving animal-related data and predicting a movement-related condition via a machine learning model.

receiving patient medical record data; (Lascelles, [0006], [0011], abstract) Lascelles teaches the processing unit "receiving sensor data" (measuring movement parameters indicative of health/condition) electronically from sensors associated with the animal.

filtering the received patient medical record data by at least one of species, breed, gender, or geographic location; (Lascelles, [0019], [0093], [0035]-[0036], [0065]-[0066], [0072]) Lascelles teaches receiving data relevant to the animal patient. While Lascelles primarily focuses on receiving "sensor data", that data is analogous, under BRI, to the "patient medical record data" needed for context and model training/application. Lascelles discusses selecting subjects for training based on "sexes, sizes, weights, breeds, and detailed health phenotypes" and creating profiles including "breed, sex, age, and name of the subject", which constitute patient medical record data.

separating the filtered patient medical record data into first structured data and unstructured data, wherein the first structured data includes at least one or more diagnostic test results and the unstructured data (Lascelles,
[0003]-[0004], [0006], [0012], [0072], [0074], [0019], [0066]-[0067], [0085], [0093]) Lascelles receives, slices, and processes raw sensor data (unstructured) and appends associated metadata (structured). Lascelles teaches receiving "sensor data" (e.g., accelerometer, gyroscope readings), which constitutes unstructured data representing continuous measurements over time. Lascelles also utilizes "metadata associated with the subject" such as "pathology, activity type, repeat number", "breed, sex, age", which constitutes structured data. The overall process involves processing the raw, time-series sensor data (unstructured) and associating it with discrete metadata attributes (structured) for training and prediction. The prior art disclosed the evaluation of clinical signs, owner observations, and general health information collected by veterinary specialists to define an animal's phenotype.

, wherein the first structured data includes clinical (Lascelles, [0006], [0012], [0072], [0074], [0019], [0066]-[0067]) Lascelles also utilizes "metadata associated with the subject" such as "pathology, activity type, repeat number", "breed, sex, age", which constitutes structured data. Lascelles addresses the existence of clinical information, which reads on "clinical signs" and "information collected by veterinary specialists" because the reference recognizes that professional medical observations and owner-reported symptoms are the standard basis for identifying health conditions like osteoarthritis.

training a first neural network on the unstructured data as a first training set to extract second structured data from the unstructured data by using (Lascelles, [0068]-[0070], [0089], [0100], [0006]) The reference explicitly names neural networks as a potential model and describes training them on sensor measurements. The trained CNN processes the sensor data to classify it, producing a specific, structured label like "walking" or "trotting."
combining the first structured data and the second structured data to form a second training set for training a second neural network; (Lascelles, paras. [0085]-[0089]) Lascelles discloses the combination of multiple sets of structured data to form a training set for a second machine learning model, which reads on combining calculated kinematic movement metrics, identified activity types, and associated subject metadata into a tabular training dataset utilized to train the movement condition model, which can be an artificial neural network. (Lascelles, [0074]-[0075], [0084]-[0085], [0089], [0090]) Lascelles shows a process where calculated metrics (second structured data) and metadata (first structured data) are combined to form a training data set, which in turn can be used to train an artificial neural network.

training the second neural network on the second training set formed from the combined first structured data and second structured data to output the likelihood of one or more diseases; (Lascelles, [0006], [0008], [0031], [0089], [0090], [0109]) Lascelles explicitly teaches that the "movement condition model" (the second model) can be an artificial neural network (ANN) that is trained on a combined dataset. This trained model outputs a "predicted movement score" that indicates a "movement-related condition" (e.g., osteoarthritis), which corresponds to the likelihood of a disease. The score differentiates between a healthy animal and one with a condition, thereby assessing the likelihood of disease.
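As an illustration of the kind of data combination described in this limitation, the following sketch joins two structured record sets into training rows. All field names (patient_id, breed, activity, etc.) are hypothetical stand-ins chosen for the example, not terms drawn from Lascelles, Etkin, or the application:

```python
# Illustrative sketch with hypothetical field names: joining first structured
# data (e.g., subject metadata) with second structured data (e.g., labels or
# metrics extracted by a first-stage model) into rows of a second training set.

def combine_structured(first_structured, second_structured, key="patient_id"):
    """Join two structured record sets on a shared key to form training rows."""
    by_key = {rec[key]: rec for rec in second_structured}
    combined = []
    for rec in first_structured:
        extracted = by_key.get(rec[key], {})
        # Merge the metadata record with the extracted fields (excluding the key).
        row = {**rec, **{k: v for k, v in extracted.items() if k != key}}
        combined.append(row)
    return combined

first = [{"patient_id": 1, "breed": "labrador", "age": 7}]
second = [{"patient_id": 1, "activity": "walking", "metric": 0.42}]
training_set = combine_structured(first, second)
```

The resulting rows carry both the original metadata attributes and the model-derived fields, which is the tabular form a second-stage supervised model would consume.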
applying the trained first neural network and the trained second neural network, in sequence, on new patient medical record data to predict disease diagnosis, wherein the first neural network and the second neural network constitute a two-stage machine learning architecture that enables effective (Lascelles, [0010], [0006], [0068], [0093], [0098], [0100]-[0101], [0103], [0105], [0106], [0109]-[0110]) Lascelles describes a two-step process where a first neural network identifies activities from sensor data, and a second neural network uses metrics from those activities to predict a disease condition.

Obviousness Rationale:

wherein the first structured data includes clinical

Under the broadest reasonable interpretation, this limitation requires partitioning a patient's medical record into discrete structured elements, such as test outcomes, and unstructured elements consisting of free-text clinical notes containing a practitioner's manual exam observations. Lascelles teaches separating patient data into structured and unstructured formats including diagnostic information, as shown by appending "metadata... [such as] pathology, activity type" to raw unstructured sensor data and utilizing "information collected by veterinary specialists to define the phenotype" (Lascelles, Paras. [0074], [0085]). This reads on separating patient data into structured data including diagnostic results because Lascelles parses discrete pathology and phenotype attributes from continuous unstructured inputs. However, Lascelles does not teach that the unstructured data includes clinical notes in free-form text entered by a user describing symptoms. Etkin teaches that missing feature, as shown by processing an "electronic medical record... including textual data describing diagnoses, encounters, procedures, laboratory finding" (Etkin, abstract, fig. 1, Col. 42, ll.
50-67), which reads on clinical notes in free-form text entered by a user because the system specifically extracts unstructured textual descriptions of a patient's clinical encounters, observations, and symptoms to establish a phenotype. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine Lascelles with Etkin to comprehensively define an animal's health phenotype by leveraging both objective physical metrics and rich textual health histories. This would be done by modifying Lascelles' collection of "information collected by veterinary specialists" (Lascelles, Paras. [0012], [0085]) to explicitly include the separation and processing of "textual data describing diagnoses, encounters, procedures" because analyzing unstructured free-form text allows the system to determine "one or more observable characteristics of the patient" that cannot be captured by structured metadata alone. Doing so would have predictably resulted in a highly accurate diagnostic evaluation system capable of reliably correlating objective sensor metrics with subjective, free-form clinical observations to better detect and treat movement-related conditions.

training a first neural network on the unstructured data as a first training set to extract second structured data from the unstructured data by using (Lascelles, [0068]-[0070], [0089], [0100], [0006])

Under the broadest reasonable interpretation, this limitation requires utilizing an unsupervised neural network to parse unstructured text into discrete linguistic units to automatically build a structured relational ontology that maps the conceptual associations found within that text. Lascelles teaches training a neural network on unstructured data to extract structured data, as shown by utilizing "convolutional neural networks (CNN)" trained on "sensor data" to estimate an "activity label".
This reads on training a neural network to extract structured data from an unstructured training set because the CNN processes raw, continuous measurements to output specific, discrete classifications. However, Lascelles does not teach using unsupervised learning to segment the unstructured data into words or phrases and generating an ontology comprising a reference set of normalized concepts capturing associations. Etkin teaches that missing feature, as shown by applying an "unsupervised learning approach that takes into account insights from information theory" using "natural language processing (NLP)" to identify term "co-occurrences" and generate an "ontology mapping" of related concepts (Etkin, Col. 37, ll. 30-55; Col. 6, ll. 1-30; Col. 5, ll. 10-35), which reads on generating an ontology from segmented words capturing associations because the system uses unsupervised techniques to parse text and map the relational co-occurrences of extracted terms into a structured reference set. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine Lascelles with Etkin to expand the diagnostic system's feature extraction capabilities to encompass "textual data describing diagnoses, encounters, procedures", by modifying the neural network training of Lascelles to include an "unsupervised learning approach" that generates an "ontology" from unstructured text, because capturing the semantic "co-occurrences" within clinical free-text allows the system to automatically derive structured, relational data points that complement physical sensor metrics. Doing so would have predictably resulted in a comprehensive, automated health evaluation system capable of reliably converting unstructured clinical notes into a normalized, structured ontology to improve the overall predictive accuracy of the diagnostic model.
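To illustrate the term co-occurrence counting for which Etkin is cited, the following toy sketch counts how often pairs of terms appear in the same snippet. The note text is fabricated for the example; real ontology generation would add embeddings and clustering on top of this counting core:

```python
# Illustrative only (toy data): counting term-term co-occurrences within text
# snippets, the unsupervised association-mining step for which Etkin is cited.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(snippets):
    """Count how often each unordered pair of terms appears in the same snippet."""
    counts = Counter()
    for text in snippets:
        # Deduplicate and sort terms so each pair has one canonical ordering.
        terms = sorted(set(text.lower().split()))
        counts.update(combinations(terms, 2))
    return counts

notes = [
    "limping stiffness pain",
    "limping pain swelling",
]
counts = cooccurrence_counts(notes)
# ("limping", "pain") co-occurs in both snippets, so its count is 2.
```

High co-occurrence counts are what an unsupervised approach would treat as evidence of an association between concepts.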
applying the trained first neural network and the trained second neural network, in sequence, on new patient medical record data to predict disease diagnosis, wherein the first neural network and the second neural network constitute a two-stage machine learning architecture that enables effective (Lascelles, [0010], [0006], [0068], [0093], [0098], [0100]-[0101], [0103], [0105], [0106], [0109]-[0110])

Under the broadest reasonable interpretation, this limitation requires a two-stage neural network system that inputs both numerical test data and written clinical notes to predict a medical condition. Lascelles teaches applying a trained first neural network and a trained second neural network on new data to predict a veterinary disease, as shown by utilizing a "trained activity model" to identify activities from raw sensor data, and providing those calculated metrics to a "movement base model" that "receiv[es] a predicted movement score" indicating a "movement-related condition". This reads on the sequential two-stage architecture using quantitative diagnostic data to predict veterinary disease because Lascelles actively processes numerical sensor metrics through successive models to output a condition score. However, Lascelles does not teach the architecture combining this with qualitative textual observations. Etkin teaches that missing feature, as shown by applying machine learning models to "phenotype an electronic medical record" containing "textual data describing diagnoses, encounters, procedures" to "predict a clinical outcome for a patient", which reads on enabling the combination of qualitative textual observations for disease prediction because Etkin explicitly uses neural networks to extract diagnostic predictive value directly from free-form clinical text. (Etkin, abstract, fig. 1, Col. 42, ll. 50-67; Col. 43, ll. 1-20; Col. 37, ll. 30-55; Col. 6, ll. 1-30; Col. 5, ll.
10-35) A person of ordinary skill in the art would have combined Lascelles with Etkin to enhance the accuracy of the veterinary diagnostic predictions by integrating "textual data describing diagnoses, encounters, procedures" alongside the quantitative sensor metrics. The specific modification would involve adapting Lascelles' two-stage architecture to ingest Etkin's neural network-processed textual phenotypes as an additional data feature, because combining these data types ensures the predictive model captures "observable characteristics" and "meaningful and/or consistent characteristic[s]" hidden in clinical notes that physical sensors inherently cannot detect. Doing so would have predictably resulted in a more comprehensive two-stage veterinary disease prediction system capable of evaluating the complete patient record, combining objective physical movement measurements with subjective veterinary observations, to accurately forecast clinical outcomes.

Claim 2. Lascelles in combination with Etkin teaches the method according to claim 1, wherein the second training set for the second neural network includes one or more input features extracted from the combined first structured data and second structured data and corresponding ground truth. (Lascelles, paragraphs [0074]-[0075], [0084]-[0085], [0089], [0090], [0106], [0109], [0118]) Lascelles describes calculating specific "movement-related metrics" (input features) from processed sensor data slices. This derived structured data (metrics) is explicitly combined with "corresponding metadata associated with the subject" (e.g., pathology, breed, age, the original structured data) to form the "training data set". This training set, comprising the input features derived from the combined data and linked to manually computed "movement scores" (the corresponding ground truth), is used to train a supervised "movement condition model" (the second machine learning model).

Claim 3.
Lascelles in combination with Etkin teaches the method according to claim 2, wherein the one or more input features include one or more of an age of the patient, propensity of the patient to one or more diseases, one or more test results, one or more symptoms, and one or more observations, and wherein the ground truth includes the likelihood of one or more diseases. (Lascelles, [0006], [0075], [0085], [0087], [0089], [0092]-[0094]) Lascelles teaches using calculated "movement-related metrics" derived from sensors, which constitute "observations", as input features. Lascelles also includes "metadata associated with the subject" in the training data, which may include "age" and "breed", potentially indicating "propensity", and "pathology" (analogous to diseases); the metadata suggests prior conditions may be used. Lascelles uses a manually computed "movement score" as ground truth. This score evaluates a "movement-related condition" (e.g., OA) and distinguishes between having the "condition" and being "healthy." The movement score, based on specialist assessment of phenotype, thus represents the likelihood or classification of a disease/condition.

Claim 5. Lascelles in combination with Etkin teaches the method according to claim 1, wherein the second neural network is trained using supervised learning. (Lascelles, paragraphs [0008], [0089], [0090])

Claim 8. Lascelles in combination with Etkin teaches the method according to claim 1, wherein the first structured data and the second structured data is combined based on data or time information included in the patient medical record data. Lascelles describes receiving "sensor data... while the animal is engaged in ... movement related activities" (Lascelles, paragraph [0006]), which is time-series data.
It mentions sensor data may be "time-stamped" (Lascelles, paragraph [0066]), involves "slicing the sensor data based on the determined prescribed movement-related activities" (Lascelles, paragraph [0006]), and discusses aggregating scores over time using methods like "rolling-moving-average ... to show the progression of movement scores over time" (Lascelles, paragraph [0110]). In addition, Lascelles discloses movement-related metrics combined with structured metadata such as breed, sex, and age.

Claim 18. Lascelles in combination with Etkin teaches the method according to claim 1, wherein training the first neural network on the unstructured data as the first training set further includes using the unsupervised learning to perform natural language processing comprising at least one of word frequency counting, dependency parsing, context tracing, or part-of-speech tagging to segment the unstructured data into a plurality of words or phrases.

Under the broadest reasonable interpretation, this limitation requires utilizing unsupervised machine learning and natural language processing, such as word frequency counting, to segment and process an unstructured text corpus to train a neural network. Lascelles teaches training a neural network on a training set, as shown in Paragraphs [0008] and [0068]. This reads on training the first neural network on a training set because it explicitly discloses configuring and optimizing artificial neural network models using collected data sets. However, Lascelles does not teach using unstructured data as the training set, nor does it teach using unsupervised learning to perform natural language processing comprising word frequency counting to segment the unstructured data. Etkin teaches that missing feature, as shown by applying an "unsupervised machine learning technique" (Col. 2, ll. 8-9) and a "natural language processing (NLP) technique to preprocess the corpus of data" (Col. 44, ll.
43-45, fig. 9), where a "co-occurrence value corresponds to a frequency at which a... term appear[s] in a same article" (Col. 48, ll. 31-33; Col. 3, ll. 13-25), which reads on using unsupervised learning to perform natural language processing comprising word frequency counting to segment unstructured data because the system processes raw textual articles to extract and count word frequencies for an unsupervised clustering algorithm. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine Lascelles with Etkin to expand the data ingestion capabilities of the evaluation system by integrating "natural language processing (NLP) techniques and machine learning models" (Etkin, Col. 5, ll. 25-40) into the neural network training process of Lascelles (Lascelles, Para. [0068]), by modifying the neural network training to preprocess unstructured textual data via unsupervised NLP word frequency counting, because utilizing unsupervised NLP allows the system "to identify candidate domains... through an unsupervised learning approach that takes into account insights from information theory" (Etkin, Col. 37, ll. 35-59), thereby enabling the extraction of meaningful features from raw, unlabeled text. Doing so would have predictably resulted in a more robust and versatile machine learning system capable of accurately mapping unstructured data inputs to diagnostic or evaluative outcomes without requiring manual data labeling.

Claim 19. Lascelles in combination with Etkin teaches
The method according to claim 1, wherein the ontology is generated by the first neural network self-learning the associations and relations present in the plurality of words and phrases through the unsupervised learning without ground truth labels, and outputting the reference set of normalized concepts that capture the associations and relations, and wherein different words or phrases used by different veterinarians to describe a same entity or concept are normalized to a common concept in the ontology.

Under the broadest reasonable interpretation, this limitation requires an unsupervised neural network to autonomously learn semantic relationships from unlabeled text to create a standardized vocabulary where various synonymous terms are mapped to a single unified concept. Lascelles teaches processing data with a neural network, as shown by utilizing "convolutional neural networks (CNN)" (Lascelles, Para. [0068]). This reads on using a neural network to process inputs because it trains models to extract data patterns. However, Lascelles does not teach unsupervised self-learning without ground truth labels to output an ontology that normalizes different synonymous words to a common concept. Etkin teaches that missing feature, as shown by applying an "unsupervised learning approach" based on "term-term co-occurrences" where "candidate synonyms... may be identified based on the cosine similarity of their embeddings to the centroid of seed embeddings in each domain" (Etkin, Col. 38, ll. 50-64; Col. 40, ll. 30-45). This reads on self-learning without ground truth to normalize different words to a common concept because the unsupervised word embeddings mathematically group various synonymous free-text terms to a single unified centroid representing the common clinical entity.
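To illustrate the centroid-based synonym normalization for which Etkin is cited, the following toy sketch maps terms to a common concept by cosine similarity of embeddings to a concept centroid. The two-dimensional vectors and all terms here are hand-made stand-ins for the example, not trained GloVe embeddings and not data from either reference:

```python
# Illustrative only (toy vectors): normalizing synonymous terms to a common
# concept by cosine similarity of their embeddings to a domain centroid.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def normalize_terms(embeddings, centroids, threshold=0.9):
    """Map each term to the concept whose centroid is most similar, if close enough."""
    mapping = {}
    for term, vec in embeddings.items():
        concept, score = max(
            ((c, cosine(vec, cv)) for c, cv in centroids.items()), key=lambda x: x[1]
        )
        if score >= threshold:
            mapping[term] = concept
    return mapping

# Hypothetical 2-D embeddings: "limping" and "lame" sit near the "lameness"
# centroid, while "vomiting" sits near the "emesis" centroid.
embeddings = {"limping": [1.0, 0.1], "lame": [0.9, 0.2], "vomiting": [0.1, 1.0]}
centroids = {"lameness": [1.0, 0.15], "emesis": [0.1, 0.95]}
mapping = normalize_terms(embeddings, centroids)
```

The mapping collapses different surface terms onto a single concept label, which is the normalization behavior the limitation describes.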
A person of ordinary skill in the art would have been motivated to combine Lascelles with Etkin by incorporating Etkin's unsupervised synonym-clustering techniques into Lascelles' diagnostic system to preprocess free-text clinical notes. Etkin teaches using unsupervised learning to identify and normalize semantically related terms based on term co-occurrence and similarity, which addresses the known problem of inconsistent terminology in clinical text. Applying this known technique to Lascelles' system would have predictably improved the consistency and usability of extracted clinical information without requiring manual labeling, yielding the expected result of more standardized structured data for downstream disease prediction.

Claim 20. Lascelles in combination with Etkin teaches the method according to claim 1, wherein a first error function used for the unsupervised learning of the first neural network is different from a second error function used for the supervised learning of the second neural network, wherein the first error function is based on mimicking input data, and wherein the second error function is based on reducing an error between target output labels from ground truth and actual output labels from the second neural network.

Under the broadest reasonable interpretation, this limitation requires utilizing distinct loss algorithms for two separate neural networks, where the first unsupervised network optimizes by modeling or reconstructing the provided input data, and the second supervised network optimizes by minimizing the difference between predicted outputs and known target labels. Lascelles teaches the second error function for supervised learning of the second neural network, as shown by "training an ANN/CNN essentially means selecting one model from a set...
that minimizes a cost criterion" where "the CNN changes its weights to be more likely to produce the correct output" based on "corresponding movement scores assigned" (Paragraphs [0008], [0068], [0070]). This reads on the second error function based on reducing an error between target output labels from ground truth and actual output labels because the network explicitly adjusts its weights to minimize the difference between the predicted classification and the manually assigned correct labels. However, Lascelles does not teach the first error function used for the unsupervised learning being based on mimicking input data and being different from the second error function. Etkin teaches those missing features, as shown by applying "a natural language processing (NLP) technique to preprocess the corpus of data" and noting that "The first machine learning technique may include an unsupervised machine learning technique" while "The second machine learning technique may include a supervised machine learning technique". Etkin further teaches that the unsupervised approach uses "word embeddings of length 100 [that] may be trained using GloVe" where a "co-occurrence value corresponds to a frequency at which a brain structure and a mental function term appear in a same article", while the supervised network uses a "forward inference model (e.g., a multilayer neural network classifier) [that] may be fit on the training set to predict the occurrence" of target labels. This reads on a first error function based on mimicking input data via word frequency counting that is different from the supervised error function because the unsupervised NLP word embeddings optimize by mimicking input text frequencies and co-occurrences, which is entirely distinct from the classification error function used by the subsequent supervised classifier model. Refer to Col. 5, ll. 45-62; Col. 3, ll. 1-12; Col. 2, ll. 1-30; fig. 9; Col. 40, ll. 17-30; Col. 6, ll.
35-50. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine Lascelles with Etkin to expand the data ingestion capabilities of the evaluation system by integrating "natural language processing (NLP) techniques and machine learning models" into the neural network training process of Lascelles, by modifying the Lascelles system to first train an unsupervised neural network mimicking the input data structure before executing the supervised classification network. A POSITA would do this because using distinct error functions tailored to each learning phase allows the system to build its initial domains "by applying an unsupervised learning approach that takes into account insights from information theory" before applying "a supervised learning strategy in order to optimize the number and size of domains in the ontology" (Etkin, Col. 38, ll. 10-30). Doing so would have predictably resulted in a more robust evaluation system capable of accurately mapping unstructured textual data to diagnostic outcomes without requiring manual data labeling for the initial feature extraction phase, ensuring the system leverages foundational data patterns for improved predictive accuracy.

Note: Claims 9-11, 13 and 16-17 are rejected under the same analysis above.

Claims 6, 7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over WO2021173571 (Lascelles) in view of US11526808B2 (Etkin), and further in view of US20190362846A1 (Vodencarevic).

Claim 6. Lascelles in combination with Etkin teaches the method according to claim 1, further including: training a plurality of second neural networks; (Lascelles, paragraphs [0008], [0068], [0072], [0075], [0089]-[0090]) Lascelles teaches training machine learning models (e.g., SVMs, CNNs) using supervised learning on structured metrics derived from sensor data (unstructured time-series data) to predict movement scores indicative of a health condition.
Lascelles explicitly mentions training "two different CNNs" in one embodiment and describes distinct activity and movement base models, thus teaching the training of a plurality (two or more) of models within its system; evaluating the plurality of second neural networks using one or more metrics (Lascelles, paragraphs [0008], [0068], [0072], [0075], [0089]); and selecting one or more neural networks of the plurality of second neural networks for application to the new patient medical record data to predict disease diagnosis (Lascelles, paragraphs [0090], [0093], [0103], [0109], [0112], [0008], [0068], [0072], [0075], [0089]). Lascelles clearly teaches applying its trained movement base model to new sensor data ("new records") from a subject to predict a movement score indicating a health condition. Lascelles discloses training a "movement base model" using supervised learning ("supervised machine learning algorithm") on "movement-related metrics" from "sensor data" to predict "movement scores" [Lascelles, paragraphs [0006]-[0008], [0065], [0075], [0089], [0090]] and teaches training multiple models such as "two different CNNs" or distinct "activity model" and "movement base model" [Lascelles, paragraphs [0056], [0068], [0072]]. Lascelles discusses model training until models correctly predict output "most of the times" [Lascelles, paragraphs [0070]-[0071]] and applying the trained model [Lascelles, paragraphs [0090], [0109]], but does not describe evaluating multiple trained base models against specific metrics like error rate or complexity, or selecting based on such evaluation. However, Vodencarevic describes performing a "nested k-fold cross-validation procedure ... the inner one which tunes the hyperparameters and the outer one which estimates the performance" (Vodencarevic, paragraph [0018]), using "performance metrics...
such as the Area Under the receiver operating Characteristics (AUC) or classification error" (Vodencarevic, paragraph [0017]), comparing "averaged results... for different models... and finally the one that maximized the model performance is selected as the best model" (Vodencarevic, paragraph [0017]), and comparing "aggregated testing results... for different trained models and selecting the optimal predictive model". Combining Lascelles and Vodencarevic would have been obvious under 35 U.S.C. 103 because both references operate within the analogous art of developing and optimizing predictive machine learning models that process input data to generate health-related assessments. Lascelles focuses on predicting animal movement scores from sensor data ("evaluating a movement-related condition") [Lascelles, paragraphs [0006]-[0008], [0026], [0030]], while Vodencarevic focuses on automated clinical decision support using EMR data, specifically including model evaluation and selection ("creating predictive models", "model selection") [Vodencarevic, paragraphs [0002], [0009], [0017], [0018], [0111], [0114]]; they share the technical goal of building effective ML systems for health assessment. Vodencarevic teaches methods for "automated optimal model and parameter selection" (Vodencarevic, paragraph [0398]), which allow selection of the model that "maximized the model performance" (Vodencarevic, paragraph [0017]) and provide a "conservative model performance estimation" (Vodencarevic, paragraph [0197]), benefits readily understood by one skilled in the art as valuable for improving any predictive system, like that in Lascelles, which aims to accurately evaluate health conditions.

Claim 7. The method according to claim 6, wherein the one or more metrics include prediction error, complexity, explainability, or data size (Vodencarevic, paragraphs [0017], [6101], [0206], [0436]). Note: Claims 14-15 are rejected with the same analysis above.

Claim(s) 21-22 are rejected under 35 U.S.C.
103 as being unpatentable over WO2021173571 – Lascelles in combination with US11526808B2 – Etkin and in further view of US20030004906A1 – Lapointe.

Claim 21. Lascelles in combination with Etkin teaches the method according to claim 1, wherein the first neural network comprises a convolutional neural network trained using an error function that expresses error as a low probability that erroneous output occurs or as an unstable high energy state in the convolutional neural network, and wherein the first neural network adjusts weights and [...]. Under the broadest reasonable interpretation, this limitation requires minimizing classification error probabilities by updating both connection weights and node biases during training. Lascelles in view of Etkin teaches a convolutional neural network that adjusts its weights in response to incorrect outputs to reduce error. However, Lascelles in view of Etkin does not teach adjusting biases for each connected pair of neurons based on the error. Lapointe teaches adjusting neuron bias weights in response to backpropagated error signals during neural network training, thereby disclosing bias adjustment based on error in conjunction with weight updates (Lapointe, paragraphs [0008], [0647]). A person of ordinary skill in the art would have combined Lascelles in view of Etkin with Lapointe to optimize the diagnostic network's training by modifying Lascelles' CNN to adjust "bias weights" alongside connection weights. A POSITA would be motivated to make this combination because Lapointe teaches that tuning biases during error propagation prevents the network from getting trapped in local minima, providing a known mathematical solution to ensure the model more reliably produces the "correct output". Applying this known optimization technique to a known neural network structure would predictably result in a highly accurate veterinary diagnostic system that comprehensively minimizes classification errors.
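The update mechanism attributed to Lapointe above (backpropagating error signals and adjusting "bias weights" along with connection weights, layer by layer from the output back toward the input) can be illustrated with a minimal numerical sketch. This is a generic two-layer backpropagation example under squared error, not the implementation of any cited reference; all names are hypothetical.

```python
import numpy as np

def backprop_step(params, x, y, lr=0.1):
    """One training step that updates BOTH connection weights and node
    biases, propagating the error signal backwards: output layer first,
    then the hidden layer (illustrative generic backpropagation)."""
    (W1, b1), (W2, b2) = params
    # Forward pass
    h = np.tanh(x @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                       # dE/d(out) for E = 0.5 * err**2
    # Backward pass: output-layer weight AND bias gradients first...
    dW2, db2 = h.T @ err, err.sum(axis=0)
    # ...then the error signal propagated backwards through tanh
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = x.T @ dh, dh.sum(axis=0)
    return [(W1 - lr * dW1, b1 - lr * db1), (W2 - lr * dW2, b2 - lr * db2)]
```

Repeated application descends the error surface in weight space; whether it reaches a global rather than merely local minimum depends on the surface, which is why the cited motivation concerns bias tuning as a way to avoid poor local minima.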
Claim 22. Lascelles in combination with Etkin teaches the method according to claim 1, wherein training the second neural network comprises determining a minimum value of an error function in weight space, and wherein weights and [...] (Lascelles, paragraphs [0068], [0070], [0098]). Under the broadest reasonable interpretation, this limitation requires executing a backpropagation algorithm that sequentially updates connection weights and node biases backwards from the final output layer to the initial input layer to converge on the absolute lowest error state. Lascelles in view of Etkin teaches determining a minimum value of an error function in weight space, as shown by iteratively training a neural network that "minimizes a cost criterion" where "the CNN changes its weights" (Lascelles, Para. [0068], [0070]). This reads on determining a minimum value in weight space because updating weights to minimize a cost criterion mathematically traverses the parameter space to systematically reduce error. However, Lascelles in view of Etkin does not teach adjusting weights and biases for each layer, starting with an output layer and working back to an input layer, to find a global minimum. Lapointe teaches that missing feature, as shown by adjusting layer "bias weights" as "error signals are propagated backwards through the network" with the explicit goal "being to converge to a global minimum" (Lapointe, Para. [0008], [0647]), which reads on adjusting weights and biases backwards to find a global minimum because Lapointe explicitly defines executing the backpropagation algorithm to optimize all layer parameters sequentially backwards toward the absolute lowest mathematical error state. A person of ordinary skill in the art would have combined, before the effective filing date of the claimed invention, Lascelles in view of Etkin with Lapointe to maximize the predictive accuracy of the veterinary diagnostic network by integrating backpropagation to adjust both connection weights and "bias weights" (Lapointe, Para. [0647]), by modifying Lascelles' training process to propagate error signals "backwards through the network" (Lapointe, Para. [0008]), because applying this specific layer-by-layer backward optimization provides a known mathematical mechanism to escape suboptimal local minima and securely "converge to a global minimum" (Lapointe, Para. [0008]). Doing so would have predictably resulted in a highly robust veterinary diagnostic system that reliably minimizes classification errors by achieving the absolute lowest possible error state across the entire neural network parameter space.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA DAMIAN RUIZ whose telephone number is (571) 272-0409. The examiner can normally be reached 0800-1800. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shahid Merchant, can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOSHUA DAMIAN RUIZ/Examiner, Art Unit 3684 /KAREN A HRANEK/Primary Examiner, Art Unit 3684
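As background for the Vodencarevic model-evaluation passage quoted in the rejection of claims 6 and 7 (k-fold cross-validation with a classification-error performance metric, then selecting the model whose averaged results maximize performance), the general procedure can be sketched as follows. This is a generic illustration under assumed interfaces, not code from the reference; the `(fit, predict)` callables and all names are hypothetical.

```python
import numpy as np

def select_best_model(models, X, y, k=5, seed=0):
    """Generic k-fold cross-validation model selection (illustrative).
    Each candidate is a (fit, predict) pair; the performance metric is
    classification error averaged over the k held-out folds, and the
    candidate with the lowest averaged error is selected."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    avg_err = []
    for fit, predict in models:
        errs = []
        for i, test in enumerate(folds):
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            state = fit(X[train], y[train])
            errs.append(np.mean(predict(state, X[test]) != y[test]))
        avg_err.append(float(np.mean(errs)))
    return int(np.argmin(avg_err)), avg_err
```

A nested variant, as the reference describes, would add an inner cross-validation loop inside `fit` to tune hyperparameters before the outer loop estimates performance.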

Prosecution Timeline

Oct 23, 2023
Application Filed
Apr 17, 2025
Non-Final Rejection — §101, §103
Aug 20, 2025
Applicant Interview (Telephonic)
Aug 20, 2025
Examiner Interview Summary
Sep 22, 2025
Response Filed
Oct 30, 2025
Final Rejection — §101, §103
Feb 05, 2026
Request for Continued Examination
Feb 26, 2026
Response after Non-Final Action
Apr 02, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
