Prosecution Insights
Last updated: April 19, 2026
Application No. 18/391,169

Methods and Systems for Processing Pathology Data of a Patient For Pre-Screening Veterinary Pathology Samples

Final Rejection (§101, §103)

Filed: Dec 20, 2023
Examiner: LEE, ANDREW ELDRIDGE
Art Unit: 3684
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: IDEXX Laboratories, Inc.
OA Round: 2 (Final)

Grant Probability: 18% (At Risk)
Predicted OA Rounds: 3-4
Estimated Time to Grant: 4y 7m
Grant Probability with Interview: 51%

Examiner Intelligence

Career Allow Rate: 18% (grants only 18% of cases: 23 granted / 130 resolved; -34.3% vs TC avg)
Interview Lift: strong, +33.5% (resolved cases with an interview vs. without)
Typical Timeline: 4y 7m average prosecution; 41 applications currently pending
Career History: 171 total applications across all art units
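As a sanity check, the headline figures above can be reproduced from the raw counts. The sketch below assumes (the dashboard does not state this explicitly) that rates are simple ratios rounded to the nearest whole percent, and that the with-interview probability is the career base rate plus the stated lift:

```python
# Reproduce the dashboard's examiner statistics from the raw counts shown.
# Assumptions (not stated by the dashboard): rates are simple ratios, and the
# with-interview probability is the career base rate plus the interview lift.

granted, resolved = 23, 130

allow_rate = 100 * granted / resolved        # career allow rate, percent (~17.7)
interview_lift = 33.5                        # stated lift, percentage points
with_interview = allow_rate + interview_lift

print(round(allow_rate))      # 18  (matches "Career Allow Rate: 18%")
print(round(with_interview))  # 51  (matches "51% With Interview")
```

Under those assumptions the 18% and 51% headline numbers are internally consistent with the 23/130 count and the +33.5-point lift.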

Statute-Specific Performance

§101: 38.9% (-1.1% vs TC avg)
§103: 40.8% (+0.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 130 resolved cases.
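The "vs TC avg" deltas above are all consistent with a single Tech Center average estimate of 40.0% per statute. A small sketch (assuming, as an inference from the stated numbers, that each delta is simply the examiner's rate minus that 40.0% average, to one decimal place) recovers them:

```python
# Recover the "vs TC avg" deltas from the per-statute rates, assuming a single
# Tech Center average estimate of 40.0% (inferred: every stated delta equals
# the examiner's rate minus 40.0, rounded to one decimal place).

TC_AVG = 40.0
rates = {"101": 38.9, "103": 40.8, "102": 4.7, "112": 12.7}

deltas = {s: round(r - TC_AVG, 1) for s, r in rates.items()}
print(deltas)  # {'101': -1.1, '103': 0.8, '102': -35.3, '112': -27.3}
```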

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

In the response filed on 03 November 2025, claims 1, 5, 10-13, 15 and 17-19 have been amended; claims 2 and 9 have been canceled; claims 21-22 are newly added. Claims 1, 3-8 and 10-22 are now pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8 and 10-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 15 and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a computer-implemented method, server, and non-transitory computer readable medium (CRM) for processing pathology data.
The limitations of Claim 1, which is representative of claims 15 and 18:

providing a [… output …] for display […];

[… obtaining …] a first input […];

sequentially generating, for display […], a plurality of fields, each of the plurality of fields corresponding to a further input, wherein the plurality of fields are [… provided …] for input in a rules-based manner, wherein a first field of the plurality of fields is generated based on the first input, and wherein each subsequent field of the plurality of fields is generated based on a further input provided in a previous field of the plurality of fields;

[… obtaining …], from each of the plurality of fields, the further input corresponding to the field, wherein the first input and further inputs comprise pathology data associated with a patient;

extracting, […], a keyword from the pathology data;

determining, […] based on the keyword extracted from the pathology data, a pathology summary, […] using veterinary pathology training data labeled with corresponding diagnostic results;

determining, based at least in part on the pathology data, a staining input, the staining input corresponding to a virtual staining to apply to an image of a prior collected sample of the patient;

creating, […], a digitally-stained slide based on the image of the prior collected sample of the patient and the staining input, wherein the digitally-stained slide is a virtual version of a stained slide;

providing […] the pathology summary and at least a portion of the digitally-stained slide; and

[… obtaining …] a second input […]; and

in response to receiving the second input […], providing for [… output …] at least one of: a background information module comprising data associated with the pathology summary; a contact information module comprising contact information of a pathologist associated with the pathology data; and an ordering module, which when initiated, generates an order for follow-on testing,
as drafted, is a system which, under its broadest reasonable interpretation, covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) via human interaction with generic computer components. That is, by a human user interacting with various devices (claims 1, 15 and 18), a computer and a processor (claim 1), a server comprising a processor and memory (claim 15) and a non-transitory CRM with a processor (claim 18), the claimed invention amounts to managing personal behavior or interaction between people; the Examiner notes that, as stated in MPEP 2106.04(a)(2), “certain activity between a person and a computer… may fall within the ‘certain methods of organizing human activity’ grouping”. For example, but for the various devices (claims 1, 15 and 18), the computer and processor (claim 1), the server comprising a processor and memory (claim 15) and the non-transitory CRM with a processor (claim 18), the claim encompasses a user interacting with various computer components to be provided a pathology summary that they can use for the treatment of their patients. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “method of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of various devices (claims 1, 15 and 18), a computer and a processor (claim 1), a server comprising a processor and memory (claim 15) and a non-transitory CRM with a processor (claim 18), which implement the abstract idea.
The various devices (claims 1, 15 and 18), the computer and processor (claim 1), the server comprising a processor and memory (claim 15) and the non-transitory CRM with a processor (claim 18) are recited at a high level of generality (i.e., general-purpose computers/computer components implementing generic computer functions; see Applicant’s Specification, Figures 1-2, paragraphs [0033]-[0036]) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim also recites the additional elements of “providing a first graphical user interface for display on a first device… providing a second graphical user interface for display on a second device presenting…”, “receiving…” and “executing a first machine-learning logic… wherein the first machine-learning logic is trained…”. The “providing a first graphical user interface for display on a first device… providing a second graphical user interface for display on a second device presenting…” is recited at a high level of generality (i.e., as generally displaying data) and amounts to merely linking the abstract idea to a particular technological environment. The “receiving…” steps are recited at a high level of generality (i.e., as a general means of receiving/transmitting data) and amount to the mere transmission and/or receipt of data, which is a form of extra-solution activity. The “executing a first machine-learning logic… wherein the first machine-learning logic is trained…” is recited at a high level of generality (i.e., training a generic off-the-shelf machine learning algorithm to make predictions) and amounts to merely linking the abstract idea to a particular technological environment.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of various devices (claims 1, 15 and 18), a computer and a processor (claim 1), a server comprising a processor and memory (claim 15) and a non-transitory CRM with a processor (claim 18) to perform the noted steps amount to no more than mere instructions to apply the exception using generic hardware components. Mere instructions to apply an exception using generic hardware components cannot provide an inventive concept (“significantly more”).

Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “providing a first graphical user interface for display on a first device… providing a second graphical user interface for display on a second device presenting…”, “receiving…” and “executing a first machine-learning logic… wherein the first machine-learning logic is trained…” were considered extra-solution activity and/or generally linking the abstract idea to a particular technological environment. The “providing a first graphical user interface for display on a first device… providing a second graphical user interface for display on a second device presenting…” has been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, and conventional elements/functions. As described in Aidt (20240029409): paragraph [0315]; Chorny (20210247393): Figure 2, paragraph [0066]; Peng (20210019342): paragraph [0006]; display of user interfaces on devices is well-understood, routine and conventional.
The “receiving…” steps have been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, and conventional elements/functions. As described in MPEP 2106.05(d)(II)(i), “Receiving or transmitting data over a network” is well-understood, routine, and conventional. The “executing a first machine-learning logic… wherein the first machine-learning logic is trained…” has been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, and conventional elements/functions. As described in Aidt (20240029409): paragraph [0015]; Chorny (20210247393): paragraph [0061]; Peng (20210019342): paragraph [0016]; use of machine learning to train a model to make predictions is well-understood, routine and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.

Claims 2-14, 16-17 and 19-20 are similarly rejected because they either further define the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims are subject matter eligible.

Claims 3-4 and 16 further describe determination of a mitotic count, but do not recite any additional elements; therefore the claims cannot provide significantly more and/or a practical application.

Claims 5-8, 10, 12, 17 and 19 further describe the creation of data (i.e., organization of data) for presentation to the user; however, high-level generic presentation of data on a generic user interface was already considered above and is incorporated herein.

Claims 11, 14 and 20 further describe use of machine learning; however, high-level training of a generic off-the-shelf machine learning model was already considered above and is incorporated herein.
Claim 13 further describes the creation of the virtual slide; however, no additional elements are claimed. The creation of the slide amounts to high-level organization of data for output for a human user to use and is not an additional element; therefore the claim cannot provide significantly more and/or a practical application.

Claim 21 further details use of a decision tree; however, use of machine learning techniques was already considered above and is incorporated herein.

Claim 22 further displays data for user selection; however, display of data was already considered above and is incorporated herein.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 8, 10-15 and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,395,772 (hereafter “Lucas”), in view of U.S. Patent App. No. 20210247393 (hereafter “Chorny”), further in view of U.S. Patent App. No. 20240029409 (hereafter “Aidt”).

Regarding (Currently Amended) claim 1, Lucas teaches a computer-implemented method for processing pathology data (Lucas: Column 1, lines 20-25, “A system and method implemented in a mobile platform”, Column 4, lines 20-35, “the present disclosure describes an application interface 10 that physicians can reference easily through their mobile or tablet device 12.
Through the application interface 10, reports a physician sees may be supplemented with aggregated data”, Column 7, line 55-Column 8, line 5, “Medical data, or key health information, may include numerous fields including, but not limited to… laboratory information (such as genetic testing, performance scores, lab tests, pathology results, prognostic indicators, or corresponding dates)”), the method comprising:

providing a first graphical user interface for display on a first device (Lucas: Figures 1-2, Column 4, lines 20-65, “the present disclosure describes an application interface 10 that physicians can reference easily through their mobile or tablet device 12. Through the application interface 10, reports a physician sees may be supplemented with aggregated data… Generating a report supplement may be performed by a physician by opening or starting-up the application, following prompts provided by the applications to capture or upload a report or EMR… as seen in FIG. 1, a home screen 14 of a mobile application is displayed on the mobile device 12.
The home screen 14 may provide access to patient records through a patient interface 18 that may provide, for one or more patients, patient identification information 20, such as the patient's name, diagnosis, and record identifiers, enabling a user (physician or staff) to identify a patient and to confirm that the record selected by the user and/or presented on the mobile device 12 relates to the patient that the user wishes to analyze”);

receiving a first input from the first graphical user interface (Lucas: Figures 1-2, Column 2, lines 50-65, “capturing, with a mobile device, a next generation sequencing (NGS) report comprising a NGS medical information about a sequenced patient”, Column 4, lines 20-55, “a physician by opening or starting-up the application, following prompts provided by the applications to capture or upload a report or EMR… Once captured, the patient's data may be uploaded to server and analyzed in real time”, Column 5, lines 15-45, “Adding a patient may either be performed manually, by entering patient information into the application”, Column 7, line 55-Column 8, line 5, “Medical data, or key health information, may include numerous fields including, but not limited to… laboratory information (such as genetic testing, performance scores, lab tests, pathology results, prognostic indicators, or corresponding dates)”);

sequentially generating, for display at the first graphical user interface, a plurality of fields, each of the plurality of fields corresponding to a further input, wherein the plurality of fields are displayed for input in a rules-based manner, wherein a first field of the plurality of fields is generated based on the first input, and wherein each subsequent field of the plurality of fields is generated based on a further input provided in a previous field of the plurality of fields (Lucas: Figure 16, Column 16, line 50-Column 17, line 35, “mobile application for review by the user, as seen in FIG.
16… selecting the data field in the application corresponding to the information… For text based fields, a text editor/keyboard may be displayed for the user to provide text… The predefined model that is associated with each of the templates may contain reference fields to identify each 20 of the fields that may be extracted to generate the extracted patient information”, Column 32, lines 55-end, “generate a rule set”, Column 47, lines 35-55, “subjected to rules for validation”, Column 54, “the application may permit manual editing of that information… Selecting that icon may open a search dialog box and a keyboard on the mobile device in order to receive user input… A button or gesture may be used to add a field to the data which may have been omitted by the extraction process above. For example, the user may press and hold on the blank space below the last entry to cause a new field to appear, or the user may place a finger on two consecutive fields and spread them apart to cause a new field to appear between them. The newly added field may appear at the bottom of the list and the user may drag and drop the field into the appropriate place on the application interface… Upon validation of the information… New fields and updated fields may be processed again for entity linking and normalization to ensure that each field is accurately curated for generating the query and storage”. 
The Examiner notes a user may enter/correct data in a displayed field, and subsequently displayed fields are updated based on the manual input that is entered into a field; this teaches what is required of subsequent fields being generated based on a previous field using a ruleset under the broadest reasonable interpretation);

receiving, from each of the plurality of fields, the further input corresponding to the field, wherein the first input and further inputs comprise pathology data associated with a patient (Lucas: Figures 1-2, Column 2, lines 50-65, “capturing, with a mobile device, a next generation sequencing (NGS) report comprising a NGS medical information about a sequenced patient”, Column 4, lines 20-55, “a physician by opening or starting-up the application, following prompts provided by the applications to capture or upload a report or EMR… Once captured, the patient's data may be uploaded to server and analyzed in real time”, Column 5, lines 15-45, “Adding a patient may either be performed manually, by entering patient information into the application”, Column 7, line 55-Column 8, line 5, “Medical data, or key health information, may include numerous fields including, but not limited to… laboratory information (such as genetic testing, performance scores, lab tests, pathology results, prognostic indicators, or corresponding dates)”);

extracting, by a processor, a keyword from the pathology data (Lucas: Column 4, lines 20-45, “a server hosting the application, or devices”, Column 7, lines 45-65, “As a result of the OCR output, the application also may identify medical data present in the document. Medical data, or key health information, may include numerous fields”, Column 8, lines 40-end, “the electronic document capture may reference the predefined model to identify the region of the electronic document capture containing key health information and extract the identified region for further processing… the region may be extracted.
Text may be identified from the extracted region and provided to a natural language processing (NLP) algorithm to extract patient information”, Column 13, lines 60-end, “As discussed above, the MLA may search for one or more keywords”);

determining, by the processor executing a first machine-learning logic and based on the keyword extracted from the pathology data, a pathology summary (Lucas: Figures 5, 17-19, 28, Column 23, line 50-Column 24, line 5, “use a combination of text extraction techniques, text cleaning techniques, natural language processing techniques, machine learning algorithms, and medical concept (Entity) identification, normalization, and structuring techniques. The system also maintains and utilizes a continuous collection of training data across clinical use cases (such as diagnoses, therapies, outcomes, genetic markers, etc.) that help to increase both accuracy and reliability of predictions specific to a patient record. The system accelerates a structuring of clinical data in a patient's record. The system may execute subroutines that highlight, suggest, and pre-populate an electronic medical record (“EHR” or “EMR”). The system may provide other formats of structured clinical data, with relevant medical concepts extracted from the text and documents of record”, Column 44, lines 10-35, “a response may be formatted into an output divided into several sections, each section relating to, for example, the fields of Diagnosis, Procedures, Radiology, etc., as discussed above. Under a Diagnosis header/identifier, structured entities relating to diagnosis may be summarized with the final normalized entity, information from the entity structuring, and any confidence values generated during the classification and/or ranking/filtering. The response may include all of the sections with corresponding structured entities.
The response may be generated and output”), wherein the first machine-learning logic is trained using […] pathology training data labeled with corresponding diagnostic results (Lucas: Column 9, lines 60-end, “The MLA may have been trained with a training dataset that comprises annotations for types of classification”, Column 14, lines 10-35, “using an unsupervised or semi-supervised training set”, Column 45, lines 1-15, “the training dataset has many known values or annotations”); […];

providing a second graphical user interface for display on a second device presenting the pathology summary […] (Lucas: Figures 5, 17-19, 28-29, Column 55, lines 35-end, “As seen in FIGS. 17-19, the application may display the cohort report of summarized medical information, which that may help the physician make a treatment recommendation to the patient. When the summarized medical information supplements the information on the initial report, it provides new information to the physician, not present in the initial report, which can help the physician make a more informed treatment decision that may result in improvements to the patient's care, such as improved patient outcomes and greater value of care… The summarized medical information includes one or more treatment regimens 184 administered to patients in the cohort, along with relevant response data 186 for each regimen”, Column 63, lines 20-60, “Staying with FIG. 29 the server 1102 may be in communication with one or more mobile devices, such as smartphones 1108A, 1108B, tablet devices 1108C, 1108D, and laptop or other computing devices 1108E… one or more analytical actions described herein may be performed by the mobile devices”); and

receiving a second input from the second graphical user interface (Lucas: Figures 5, 17-19, 28, Column 57, lines 20-55, “FIG.
17 includes a drop down 190 providing the user with various options… including the ability to access select reference materials, to search one or more databases of information (such as PubMed), and/or to edit one or more fields”); and

in response to receiving the second input at the second graphical user interface, providing for display at least one of: a background information module comprising data associated with the pathology summary; a contact information module comprising contact information of a pathologist associated with the pathology data; and an ordering module, which when initiated, generates an order for follow-on testing (Lucas: Figures 5, 17-19, 28, Column 59, lines 25-end, “the report may be supplemented by providing access, via hyperlinks or otherwise, to recent articles, publications, or other relevant information that may aid the physician by providing context, background information, or access to recent publications that address details of the report, the other articles, publications, or other relevant information being structured using similar techniques for convenient retrieval”. The Examiner notes this is a background information module under the broadest reasonable interpretation).

Lucas may not explicitly teach (underlined below for clarity): wherein the first machine-learning logic is trained using veterinary pathology training data labeled with corresponding diagnostic results;

Chorny teaches wherein the first machine-learning logic is trained using veterinary pathology training data labeled with corresponding diagnostic results (Chorny: paragraphs [0003]-[0005], “detection of cancer or other diseases can aid in maintaining the health and quality of life, and extending the span of life, of dogs or other pets… assessing a health state of canines or other animals”, paragraph [0054], “The samples for the medical test are biological material(s) deriving from the pets.
The biological material(s) can include: blood, urine, saliva, fur, tissue, scrapings (e.g., plaque scraped from a tooth for dental diagnostic tests), and other fluid or solid materials from the pet.”, paragraphs [0060]-[0062], “After analyzing the results from the samples for the medical testing kits for assessing the health of the pet of the consumer… The AI techniques can include, for example, neural networks. Neural networks can be optimized for training with a very small number of parameters and do not need to make a priori assumptions on the properties of the underlying data”);

One of ordinary skill in the art before the effective filing date would have found it obvious to include using veterinary pathology data within the use of pathology data to train a machine learning system as taught by Lucas, with the motivation of “aid[ing] in maintaining the health and quality of life, and extending the span of life, of dogs or other pets” (Chorny: paragraphs [0003]-[0004]).

Lucas and Chorny may not explicitly teach (underlined below for clarity): determining, based at least in part on the pathology data, a staining input, the staining input corresponding to a virtual staining to apply to an image of a prior collected sample of the patient; creating, by the processor, a digitally-stained slide based on the image of the prior collected sample of the patient and the staining input, wherein the digitally-stained slide is a virtual version of a stained slide; providing a second graphical user interface for display on a second device presenting the pathology summary and at least a portion of the digitally-stained slide;

Aidt teaches determining, based at least in part on the pathology data, a staining input, the staining input corresponding to a virtual staining to apply to an image of a prior collected sample of the patient (Aidt: paragraph [0279], “A set of rules is applied to pixel values of colors… for mapping the pixel values using a set of rules or other mapping function to
classification categories indicative of labels for the segmented biological objects… when the segmented biological objects are stained with colors specific to the virtually created second sequential image, which may or may not also include colors specific to the first type of image optionally stained with a first stain. For example, when the first stain is brown, and the second sequential stain is magenta (e.g., as described herein), the set of rules may map for example pixel intensity values indicating magenta to a specific classification category. The set of rules may be set, for example, based on observation by Inventors that the specific classification category has pixel values indicating presence of the second sequential stain”, paragraph [0297], “Staining device 1226 may apply the first stain, and optionally the second stain”, paragraph [0389], “Predicted virtual stain 1816 is added 1817 to input image 1804 to produce the output”, paragraph [0636], “simulate the antigen B staining patterns on a pixel-to-pixel basis. The virtual-stain network F takes as an input image stained with the original antigen A and adds a virtual antigen B staining on top of it, as if they were stained simultaneously”, paragraph [0769], “Hematoxylin plus DAB sequential staining and Hematoxylin plus DAB plus Magenta sequential staining are depicted in FIGS. 8A and 8B for the input image and the ground truth image, respectively, or Hematoxylin stain and Hematoxylin plus p40 Magenta sequential staining are depicted in FIGS. 
8C and 8D for the input image and the ground truth image, respectively”);

creating, by the processor, a digitally-stained slide based on the image of the prior collected sample of the patient and the staining input, wherein the digitally-stained slide is a virtual version of a stained slide; providing a second graphical user interface for display on a second device presenting the pathology summary and at least a portion of the digitally-stained slide (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0247], “Virtual second images corresponding to the first image may be synthesized by a virtual stainer machine learning model… This enables using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination”, paragraph [0277], “The virtually created second sequential image provides additional visual data for the pathologist, depicting additional biomarkers in the same physical slice of tissue, in addition to the first biomarkers in the first stained same physical slice of tissue.
Therefore allowing the pathologist to view information about two (or more) bio-marker for each cell”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”); One of ordinary skill in the art before the effective filing date would have found it obvious to include using a virtual stain as taught by Aidt within the image analysis as taught by Lucas and Chorny with the motivation of “enable[ing] using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination” (Aidt: paragraph [0247]). Regarding (Original) claim 8, Lucas, Aidt and Chorny teach the limitations of claim 1, and further teach providing within the pathology summary a confidence indicator that is indicative of a confidence level of the pathology summary based on an amount of differential factors between histologic features of the pathology data and the veterinary pathology training data (Lucas: Column 22, lines 30-55, “a confidence value identifying an estimated accuracy of the result”, Column 44, lines 5-35, “generate a response/report… Under a Diagnosis header/identifier, structured entities relating to diagnosis may be summarized with the final normalized entity, information from the entity structuring, and any confidence values generated during the classification and/or ranking/filtering.”). The motivation to combine is the same as in claim 1, incorporated herein. 
Regarding (Currently Amended) claim 10, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach providing within the pathology summary for display on the second graphical user interface an analysis of the digitally-stained slide (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0247], “Virtual second images corresponding to the first image may be synthesized by a virtual stainer machine learning model… This enables using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination”, paragraph [0277], “The virtually created second sequential image provides additional visual data for the pathologist, depicting additional biomarkers in the same physical slice of tissue, in addition to the first biomarkers in the first stained same physical slice of tissue. Therefore allowing the pathologist to view information about two (or more) bio-marker for each cell”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 1, incorporated herein. 
Regarding (Currently Amended) claim 11, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach creating, by the processor executing a second machine-learning logic, the digitally-stained slide, wherein the second machine-learning logic is trained using images of a plurality of physically stained slides labeled with attributes for a stain used on a sample (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0027], “a biological object machine learning model is trained on a synthetic multi-record training dataset including sets of first and second images labelled with ground truth labels selected from a plurality of biological object categories, wherein the biological object machine learning model is fed a first target image depicting biological objects presenting the at least one first biomarker and a second target image depicting biological objects presenting at least one second biomarker”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 1, incorporated herein. 
Regarding (Currently Amended) claim 12, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach, based on an analysis of the first input, the processor determining that a stained slide analysis of a sample from the patient is needed (Aidt: paragraph [0372], “the first target image may be fed into the virtual stainer (e.g., as described herein, for example, trained as described with reference to FIG. 13) to obtain the second target image. Alternatively, second target images are available, in which case the virtual stainer is not necessarily used”. The Examiner notes this is determination of a necessity, which teaches what is required under the broadest reasonable interpretation); providing within the pathology summary for display on the second graphical user interface an analysis of the digitally-stained slide (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0247], “Virtual second images corresponding to the first image may be synthesized by a virtual stainer machine learning model… This enables using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination”, paragraph [0277], “The virtually created second sequential image provides additional visual data for the pathologist, depicting additional biomarkers in the same physical slice of tissue, in addition to the first biomarkers in the first stained same physical slice of tissue. 
Therefore allowing the pathologist to view information about two (or more) bio-marker for each cell”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 10, incorporated herein. Regarding (Currently Amended) claim 13, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach creating the digitally-stained slide based on a mix of stains (Aidt: paragraph [0411], “the first biological marker may include, without limitation, at least one of Hematoxylin, Acridine orange, Bismarck brown, Carmine, Coomassie blue, Cresyl violet, Crystal violet, 4′,6-diamidino-2-phenylindole (“DAPI”), Eosin, Ethidium bromide intercalates, Acid fuchsine, Hoechst stain, Iodine, Malachite green, Methyl green, Methylene blue, Neutral red, Nile blue, Nile red, Osmium tetroxide, Propidium Iodide, Rhodamine, Safranine, programmed death-ligand 1 (“PD-L1”) stain”, paragraph [0425], “processing with different biological markers (which may include, without limitation, stains, chromogens, or other suitable compounds that are useful for characterizing biological samples, or the like)”, paragraph [0549], “a third stain different from each of the first stain and the second stain”, paragraph [0784], “the second stain may be… different from the first stain”). The motivation to combine is the same as in claim 10, incorporated herein. 
Regarding (Original) claim 14, Lucas, Chorny and Aidt teach the limitations of claim 12, and further teach determining, by the processor executing a second machine-learning logic, the analysis of the digitally-stained slide, wherein the second machine-learning logic is trained using images of a plurality of physically stained slides labeled with attributes for a stain used on a sample (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0027], “a biological object machine learning model is trained on a synthetic multi-record training dataset including sets of first and second images labelled with ground truth labels selected from a plurality of biological object categories, wherein the biological object machine learning model is fed a first target image depicting biological objects presenting the at least one first biomarker and a second target image depicting biological objects presenting at least one second biomarker”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 10, incorporated herein. 
REGARDING CLAIM(S) 15 AND 18 Claims 15 and 18 are analogous to claim 1, and thus are similarly analyzed and rejected in a manner consistent with the rejection of claim 1. REGARDING CLAIM(S) 19 AND 20 Claims 19 and 20 are analogous to claims 10 and 11, and thus are similarly analyzed and rejected in a manner consistent with the rejections of claims 10 and 11. Regarding (New) claim 21, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach wherein sequentially generating the plurality of fields includes using a decision tree to generate an input format for each of the plurality of fields (Lucas: Fig. 7, Col. 3, lines 30-3, “tree representation”, Col. 30, lines 55-end, “parse trees or deep semantic representations”; Aidt: paragraph [0302], “decision trees”). The motivation to combine is the same as in claim 1, incorporated herein. Regarding (New) claim 22, Lucas, Chorny and Aidt teach the limitations of claim 1, and further teach wherein at least one field of the plurality of fields includes an annotated image of an animal, wherein the further input corresponding to the at least one field includes a user selection of a region of the annotated image (Lucas: Col. 9, lines 60-end, “annotations for types of classification that may be performed.”, Col. 54, lines 20-30, “Selecting that icon”, Col. 57, lines 45-55, “selection of the information icon or selecting the field”; Chorny: Fig. 2, paragraphs [0066]-[0069], “interface 205 may be presented in GUI 109 in consumer device 115. The consumer entity enters the following information… type (of pet)… selections made on interface 210, interface 215 is displayed on consumer device 115”, paragraph [0071], “select buttons in a GUI”, paragraph [0106], “clicking on a selection box”). The motivation to combine is the same as in claim 1, incorporated herein. Claim(s) 3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 
10395772 (hereafter “Lucas”), U.S. Patent App. No. 20210247393 (hereafter “Chorny”) and U.S. Patent App. No. 20240029409 (hereafter “Aidt”) as applied to claim 1 above, and further in view of U.S. Patent App. No. 20210018742 (hereafter “Stumpe”). Regarding (Original) claim 3, Lucas, Chorny and Aidt teach the limitations of claim 1, but may not explicitly teach wherein the pathology summary comprises a mitotic count. Stumpe teaches wherein the pathology summary comprises a mitotic count (Stumpe: Figure 14, paragraph [0008], “As another example, the one or more areas of interest take the form of individual cells undergoing mitosis and wherein the quantitative data is a count of the number of cells in the view undergoing mitosis”, paragraph [0161], “the AR microscope of this disclosure is application agnostic and can be used for any kind of image analysis task for which a deep learning algorithm has been trained… including quantitative analysis, such as mitotic rate estimation, IHC quantification and positive margin detection in frozen sections… mitotic figure counting (FIG. 14B, with the overlay being identification of cells currently undergoing mitosis and a measure of the number of such cells per high power field)”, paragraph [0167], “Mitoses counting is done for breast cancer Nottingham grading. In routine clinical practice the number of mitoses is counted in a few fields of view and then extrapolated to the entire image, Display options include the absolute number or the density, as shown in FIG. 14.”). One of ordinary skill in the art before the effective filing date would have found it obvious to include using digitally stained slides with mitotic count as taught by Stumpe with the use of pathology data as taught by Lucas, Chorny and Aidt with the motivation of “assisting a pathologist in classifying biological samples such as blood or tissue, e.g., as containing cancer cells or containing a pathological agent” (Stumpe: paragraph [0002]). 
Regarding (Currently Amended) claim 5, Lucas, Chorny, Aidt and Stumpe teach the limitations of claim 3, and further teach wherein the digitally-stained slide is based on the mitotic count; and wherein the method further comprises providing within the pathology summary for display on the second graphical user interface an analysis of the digitally-stained slide (Stumpe: paragraph [0003], “the sample is placed on a microscope slide… the sample may be stained and scanned with a high resolution digital scanner”, paragraph [0055], “generation of digital data for overlays for the field of view”, paragraph [0062], “Digital whole slide scanners and systems for staining slides are known in the art”, paragraph [0076], “data from step 2A or 2B is translated into an image on a display by the AR display unit 128”; Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0247], “Virtual second images corresponding to the first image may be synthesized by a virtual stainer machine learning model… This enables using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination”, paragraph [0277], “The virtually created second sequential image provides additional visual data for the pathologist, depicting additional biomarkers in the same physical slice of tissue, in addition to the first biomarkers in the first stained same physical slice of tissue. 
Therefore allowing the pathologist to view information about two (or more) bio-marker for each cell”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 3, incorporated herein. Regarding (Original) claim 6, Lucas, Chorny, Aidt and Stumpe teach the limitations of claim 3, and further teach receiving a pathology image of the patient; creating, by the processor, annotations on the pathology image indicative of histologic tumor free areas and tumor cells; and providing within the pathology summary for display on the second graphical user interface the pathology image of the patient annotated to highlight the tumor cells (Stumpe: paragraph [0020], “The enhancement further includes a text box providing annotations, in this example Gleason score grading and tumor size data. The superimposing of the outline and annotations FIG. 3B assists the pathologist in characterizing the sample because it directs their attention to areas of interest that are particularly likely to be cancerous and provides proposed scores for the sample”, paragraphs [0055]-[0056], “generation of digital data for overlays for the field of view… generating overlay digital data (e.g. heat maps, annotations, outlines, etc.) based on the inference results from the pattern recognizer 200”, paragraphs [0097]-[0099], “The annotations could also include statistics, such as the % of the image positive for cancer cells and the % of the image negative for cancer cells, and confidence or probability scores… the rectangles could be accompanied by additional information such as annotation like size, confidence or probability scores, species identification, etc., depending on the application”). 
The motivation to combine is the same as in claim 3, incorporated herein. Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10395772 (hereafter “Lucas”), U.S. Patent App. No. 20210247393 (hereafter “Chorny”), U.S. Patent App. No. 20240029409 (hereafter “Aidt”) and U.S. Patent App. No. 20210018742 (hereafter “Stumpe”) as applied to claim 3 above, and further in view of U.S. Patent App. No. 20210019342 (hereafter “Peng”). Regarding (Original) claim 4, Lucas, Chorny, Aidt and Stumpe teach the limitations of claim 3, and further teach receiving a pathology image of the patient; […]; and in response […], the processor determining the mitotic count (Stumpe: paragraph [0044], “the compute unit 126 includes a machine learning pattern recognizer which receives the images from the camera. The machine learning pattern recognizer may take the form of a deep convolutional neural network which is trained on a set of microscope slide images of the same type as the biological specimen under examination”, paragraph [0148], “For algorithm application, FIG. 11B, the neural network 1100 is provided with images”, paragraph [0161], “the AR microscope of this disclosure is application agnostic and can be used for any kind of image analysis task for which a deep learning algorithm has been trained… including quantitative analysis, such as mitotic rate estimation, IHC quantification and positive margin detection in frozen sections… mitotic figure counting (FIG. 14B, with the overlay being identification of cells currently undergoing mitosis and a measure of the number of such cells per high power field)”). Lucas, Chorny, Aidt and Stumpe may not explicitly teach (underlined below for clarity): receiving a pathology image of the patient; comparing, by the processor, the pathology image of the patient with a reference image; and in response to comparing the pathology image of the patient with the reference image, the processor determining the mitotic count. 
Peng teaches receiving a pathology image of the patient; comparing, by the processor, the pathology image of the patient with a reference image; and in response to comparing the pathology image of the patient with the reference image, the processor determining the mitotic count (Peng: paragraph [0003], “a computer memory system storing a reference library in the form of a multitude of medical images. At least some of the images, and preferably all of them, are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. The system further includes a computer system configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more similar medical images in the memory system which are similar to the input image”, paragraph [0041], “comparison to other images as visual reference, and for (b) seeing the metadata of those images”, paragraph [0074], “Nuclear & cytoplasmic features (mitotic count)”, paragraphs [0105]-[0108], “a machine learning pattern recognizer trained to find one or more additional portions of the larger medical image which is similar to the input query. For example, the pattern recognizer could be trained to find tumor cells. There is a module in the computer system configured to perform at least one of the following operations on the additional portions of the larger medical image… providing quantifications for the additional portions (e.g., by providing some size or cell count metrics or other information”). One of ordinary skill in the art before the effective filing date would have found it obvious to include comparison of images as taught by Peng with the determination of mitotic count as taught by Lucas, Chorny, Aidt and Stumpe with the motivation of providing “more information for that particular case” (Peng: paragraph [0096]). Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 
10395772 (hereafter “Lucas”), U.S. Patent App. No. 20210247393 (hereafter “Chorny”), U.S. Patent App. No. 20240029409 (hereafter “Aidt”) and U.S. Patent App. No. 20210018742 (hereafter “Stumpe”) as applied to claim 6 above, and further in view of U.S. Patent App. No. 202200764 (hereafter “Georgescu”). Regarding (Original) claim 7, Lucas, Chorny, Aidt and Stumpe teach the limitations of claim 6, and further teach creating, by the processor, a […] image based on a zoomed-in view of the tumor cells, wherein the […] image illustrates a mitotic figure; and providing within the pathology summary for display on the second graphical user interface the […] image (Stumpe: paragraph [0014], “a display of quantitative data relating to the cells which are undergoing mitosis.”, paragraph [0048], “in order to zoom in on the heat map area 154 of FIG. 2B (e.g., change to a 40× lens), a new field of view of the sample would be seen through the microscope eyepiece and directed to the camera”, paragraphs [0055]-[0056], “generation of digital data for overlays for the field of view”, paragraph [0125], “zoom in on demand”, paragraph [0161], “image analysis task for which a deep learning algorithm has been trained… mitotic figure”). Lucas, Chorny, Aidt and Stumpe may not explicitly teach (underlined below for clarity): creating, by the processor, a pop-up image based on a zoomed-in view of the tumor cells, wherein the pop-up image illustrates a mitotic figure; and providing within the pathology summary for display on the second graphical user interface the pop-up image. 
Georgescu teaches creating, by the processor, a pop-up image based on a zoomed-in view of the tumor cells, wherein the pop-up image illustrates a mitotic figure; and providing within the pathology summary for display on the second graphical user interface the pop-up image (Georgescu: paragraph [0013], “improve design of a graphical user interface of a visualization application for presenting virtual slide images to pathologists”, paragraph [0063], “various visualization options. The visualization may include an overview viewing pane in which the toxicity map is overlaid on the histological image… visualization application may include a user interface control for selecting one or more of the toxic areas… displayed in a pop-up window or side-bar, or permit further numerical processing to be initiated by the user on the selected toxic area. The visualization may include a close-up viewing pane which presents a visualization of whichever toxic area is currently selected, where the close-up viewing pane shows a high-resolution, zoomed in view of the currently selected toxic area. The toxic area selection control may have a scroll function for sweeping through the toxic areas in order of ranking”). One of ordinary skill in the art before the effective filing date would have found it obvious to include using a pop-up window for zooming as taught by Georgescu with the zooming as taught by Lucas, Chorny, Aidt and Stumpe with the motivation of “improv[ing] design of a graphical user interface of a visualization application for presenting virtual slide images to pathologists” (Georgescu: paragraph [0013]). Claim(s) 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10395772 (hereafter “Lucas”), U.S. Patent App. No. 20210247393 (hereafter “Chorny”) and U.S. Patent App. No. 20240029409 (hereafter “Aidt”) as applied to claim 15 above, in view of U.S. Patent App. No. 20210018742 (hereafter “Stumpe”) and further in view of U.S. Patent App. No. 
20210019342 (hereafter “Peng”). Regarding (Original) claim 16, Lucas, Chorny and Aidt teach the limitations of claim 15, but may not explicitly teach wherein the pathology summary comprises a mitotic count and the functions further comprise: receiving a pathology image of the patient; […]; and in response […], determining the mitotic count. Stumpe teaches wherein the pathology summary comprises a mitotic count and the functions further comprise: receiving a pathology image of the patient; […]; and in response […], determining the mitotic count (Stumpe: paragraph [0044], “the compute unit 126 includes a machine learning pattern recognizer which receives the images from the camera. The machine learning pattern recognizer may take the form of a deep convolutional neural network which is trained on a set of microscope slide images of the same type as the biological specimen under examination”, paragraph [0148], “For algorithm application, FIG. 11B, the neural network 1100 is provided with images”, paragraph [0161], “the AR microscope of this disclosure is application agnostic and can be used for any kind of image analysis task for which a deep learning algorithm has been trained… including quantitative analysis, such as mitotic rate estimation, IHC quantification and positive margin detection in frozen sections… mitotic figure counting (FIG. 14B, with the overlay being identification of cells currently undergoing mitosis and a measure of the number of such cells per high power field)”). One of ordinary skill in the art before the effective filing date would have found it obvious to include using digitally stained slides as taught by Stumpe within the use of pathology data as taught by Lucas, Chorny and Aidt with the motivation of “assisting a pathologist in classifying biological samples such as blood or tissue, e.g., as containing cancer cells or containing a pathological agent” (Stumpe: paragraph [0002]). 
Lucas, Chorny, Aidt and Stumpe may not explicitly teach (underlined below for clarity): wherein the pathology summary comprises a mitotic count and the functions further comprise: receiving a pathology image of the patient; comparing the pathology image of the patient with a reference image; and in response to comparing the pathology image of the patient with the reference image, determining the mitotic count. Peng teaches wherein the pathology summary comprises a mitotic count and the functions further comprise: receiving a pathology image of the patient; comparing the pathology image of the patient with a reference image; and in response to comparing the pathology image of the patient with the reference image, determining the mitotic count (Peng: paragraph [0003], “a computer memory system storing a reference library in the form of a multitude of medical images. At least some of the images, and preferably all of them, are associated with metadata including clinical information relating to the specimen or patient associated with the medical images. The system further includes a computer system configured as a search tool for receiving an input image query from a user. The computer system is trained to find one or more similar medical images in the memory system which are similar to the input image”, paragraph [0041], “comparison to other images as visual reference, and for (b) seeing the metadata of those images”, paragraph [0074], “Nuclear & cytoplasmic features (mitotic count)”, paragraphs [0105]-[0108], “a machine learning pattern recognizer trained to find one or more additional portions of the larger medical image which is similar to the input query. For example, the pattern recognizer could be trained to find tumor cells. 
There is a module in the computer system configured to perform at least one of the following operations on the additional portions of the larger medical image… providing quantifications for the additional portions (e.g., by providing some size or cell count metrics or other information”). One of ordinary skill in the art before the effective filing date would have found it obvious to include comparison of images as taught by Peng with the determination of mitotic count as taught by Lucas, Chorny, Aidt and Stumpe with the motivation of providing “more information for that particular case” (Peng: paragraph [0096]). Regarding (Currently Amended) claim 17, Lucas, Chorny, Aidt, Stumpe and Peng teach the limitations of claim 16, and further teach providing within the pathology summary for display on the second graphical user interface an analysis of the digitally-stained slide, and wherein the digitally-stained slide is created based on the mitotic count (Aidt: paragraph [0007], “developing deep learning based models for image analysis, for cell classification, for feature of interest identification, and/or for virtual staining of biological samples”, paragraph [0023], “a virtual stainer machine learning model on the imaging multi-record training dataset for generating a virtual image depicting biological objects presenting the at least one second biomarker in response to an input image depicting biological objects presenting the at the least one first biomarker”, paragraph [0247], “Virtual second images corresponding to the first image may be synthesized by a virtual stainer machine learning model… This enables using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination”, paragraph [0277], “The virtually created second sequential image provides additional visual data for the pathologist, depicting additional biomarkers in the same physical slice of tissue, in addition to 
the first biomarkers in the first stained same physical slice of tissue. Therefore allowing the pathologist to view information about two (or more) bio-marker for each cell”, paragraph [0294], “Computing device may feed the sample image(s) into one or more machine learning model(s) 1222A to obtain an outcome, for example, a virtual image… The outcome obtained from computing device 1204 may be provided to each respective client terminal 1208, for example, for presentation on a display”). The motivation to combine is the same as in claim 16, incorporated herein. Response to Arguments Applicant's arguments filed 03 November 2025 have been fully considered but they are not persuasive. Applicant's arguments will be addressed herein below in the order in which they appear in the response filed on 03 November 2025. Rejections under 35 U.S.C. § 101 Regarding the rejection of claims 1-20, the Examiner has considered the Applicant's arguments but does not find them persuasive. Any arguments inadvertently not addressed are unpersuasive for at least the following reasons: Applicant argues: Independent claims 1, 15, and 18 are not directed toward the abstract idea of managing personal behavior or "following rules or instructions."… This is not analogous for instructions for a human regarding how to play a game, cut hair, or hedge risk, and therefore is not directed to the abstract idea of managing human behavior… In the instant example, the claims integrate the alleged judicial exception into a practical application because the claims recite an improvement in the field of veterinary diagnostics, and amount to more than simply an effort to monopolize the alleged judicial exception… The element provides several technological improvements over prior art systems including, for example, the advantages enumerated in paragraph [0055] of the Specification… This element removes the need for manual and subjective selection of staining for images of samples… As recited in the 
Specification, this provides a benefit over conventional free form text inputs, and reduces a need for normalizing and focusing inputs after the inputs are received. (Specification, para. [0044])… The independent claims recite similar elements as recited in claim 1 of Example 42 of the 2019 Revised Patent Subject Matter Eligibility Examples (2019 PEG)… The claims reflect this improvement, including by recitation of the sequential generation of the plurality of fields.

The Examiner respectfully disagrees. It is respectfully submitted that, under the broadest reasonable interpretation, the claims are directed toward a human user interacting with displayed data via generic computer components to collect and organize information and to provide the organized data as an output to the human user, which, as stated in MPEP 2106.04(a)(2), "certain activity between a person and a computer… may fall within the 'certain methods of organizing human activity' grouping" of abstract ideas. The claims are therefore directed toward the "certain methods of organizing human activity" grouping of abstract ideas.

The claimed additional elements do not provide a technical solution to a technical problem recited in Applicant's specification and/or an improvement in the functionality of the computer. Applicant points to two paragraphs of the specification, paragraphs [0044] and [0055]; however, neither paragraph recites a technical problem rooted in computer hardware technology or an improvement in the functionality of the computer. Paragraph [0044] does not describe any technical problem rooted in computer hardware technology, and certainly does not describe any of the claimed additional elements providing the alleged technical solution. Paragraph [0055] is directed at human-activity problems of wait times and confidence, which are not technical problems rooted in computer hardware technology. The claims may improve upon the abstract idea; nevertheless, an improved abstract idea is still an abstract idea.

As the claimed additional elements do not recite a technical solution to a technical problem recited in Applicant's specification, the claims are not subject matter eligible, unlike Example 42, which explicitly recites a technical solution to a technical problem recited in Example 42's background (i.e., its specification). Therefore, as the claims do not recite any additional elements that provide a technical solution to a technical problem recited in Applicant's specification, the argument is unpersuasive.

Rejections under 35 U.S.C. § 103

Regarding the rejection of claims 1-20, the Examiner has considered the Applicant's arguments; however, the arguments are not persuasive as addressed herein. The Examiner has attempted to address all of the arguments presented by the Applicant; however, any arguments inadvertently not addressed are not persuasive for at least the following reasons.

Applicant argues: Independent claim 1 is amended to incorporate some subject matter of dependent claim 9, and recites… Lucas and Chorny, alone or in combination, fail to teach or suggest sequentially generating a plurality of fields wherein a first field of the plurality of fields is generated based on the first input, and wherein each subsequent field of the plurality of fields is generated based on a further input provided in a previous field of the plurality of fields… However, Lucas does not teach a sequential generation of a plurality of fields as claimed. Rather, Lucas teaches extracting of information from a document and presenting the information to a user for validation… Instead, in Lucas, the alleged fields are populated based on the contents of a document. While Lucas discloses that fields may receive text based inputs, there is no teaching or suggestion that inputs into these fields impact a generation of a subsequent field.

The Examiner respectfully disagrees. It is respectfully submitted that Lucas teaches the argued limitation under the broadest reasonable interpretation.
In particular, Lucas explicitly teaches a sequential process in which a user inputs/validates information extracted from fields, and the user's modifications (i.e., input) are used to update the subsequent fields and are re-processed (see above, but at least Figure 16 and Column 47, lines 35-55). This teaches what is required of the user interface being updated using rules under the broadest reasonable interpretation; in combination with Chorny and Aidt, this teaches what is required of the claim under the broadest reasonable interpretation, and it would have been prima facie obvious to combine with the motivation of "enabl[ing] using only first images without requiring physical capturing second images which may require special sequential staining procedures and/or special illumination" (Aidt: paragraph [0247]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew E. Lee, whose telephone number is (571) 272-8323. The examiner can normally be reached M-Th, 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shahid Merchant, can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.E.L./
Examiner, Art Unit 3684

/Shahid Merchant/
Supervisory Patent Examiner, Art Unit 3684
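The reply-deadline rules in the Conclusion above reduce to a small date calculation. The sketch below is only an illustration of those stated rules, not legal advice; the function names and the sample advisory/reply dates are ours (only the Feb 18, 2026 mailing date comes from this record):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    for day in (d.day, 30, 29, 28):  # e.g., Jan 31 + 1 month -> Feb 28/29
        try:
            return date(y, m, day)
        except ValueError:
            continue

def extension_fee_start(mailed: date, reply_filed: date, advisory_mailed: date) -> date:
    """Date from which 37 CFR 1.136(a) extension fees run after a final action.

    Per the Conclusion: the shortened statutory period is THREE MONTHS. If the
    first reply is filed within TWO MONTHS and the advisory action is mailed
    after the three-month date, fees run from the advisory mailing date.
    """
    shortened = add_months(mailed, 3)
    if reply_filed <= add_months(mailed, 2) and advisory_mailed > shortened:
        return advisory_mailed
    return shortened

def statutory_deadline(mailed: date) -> date:
    """Absolute cutoff: SIX MONTHS from the final action's mailing date."""
    return add_months(mailed, 6)

mailed = date(2026, 2, 18)  # this final action's mailing date
print(extension_fee_start(mailed, date(2026, 4, 10), date(2026, 5, 30)))
print(statutory_deadline(mailed))
```

With a hypothetical reply on Apr 10, 2026 (inside the two-month window) and an advisory action on May 30, 2026, extension fees would run from the advisory date rather than the May 18 shortened-period date; the absolute statutory cutoff is Aug 18, 2026 regardless.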

Prosecution Timeline

Dec 20, 2023
Application Filed
Jul 26, 2025
Non-Final Rejection — §101, §103
Nov 03, 2025
Response Filed
Feb 18, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542210
WEARABLE DEVICE AND COMPUTER ENABLED FEEDBACK FOR USER TASK ASSISTANCE
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12154077
USER INTERFACE FOR DISPLAYING PATIENT HISTORICAL DATA
Granted Nov 26, 2024 (2y 5m to grant)

Patent 12040070
RADIOTHERAPY SYSTEM, DATA PROCESSING METHOD AND STORAGE MEDIUM
Granted Jul 16, 2024 (2y 5m to grant)

Patent 12027251
SYSTEMS AND METHODS FOR MANAGING LARGE MEDICAL IMAGE DATA
Granted Jul 02, 2024 (2y 5m to grant)

Patent 11942189
Drug Efficacy Prediction for Treatment of Genetic Disease
Granted Mar 26, 2024 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
51%
With Interview (+33.5%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
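The projection figures above appear to follow simple arithmetic on the examiner's career statistics (23 grants out of 130 resolved cases, plus the +33.5 percentage-point interview lift). A minimal sketch of that arithmetic, assuming this additive model is what the page uses (the function names are ours):

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved cases that issued (23 / 130 here)."""
    return granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Add the interview lift in percentage points, capped at 100%."""
    return min(base_rate + interview_lift, 1.0)

base = career_allow_rate(23, 130)      # ~0.177, shown on the page as 18%
boosted = with_interview(base, 0.335)  # ~0.512, shown on the page as 51%
print(round(base * 100), round(boosted * 100))
```

Both rounded values reproduce the panel's 18% and 51% figures, consistent with "Grant probability derived from career allow rate."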
