Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,195

APPARATUS AND METHOD FOR GENERATING A PREOPERATIVE DATA STRUCTURE USING A PRE-OPERATIVE PANEL

Non-Final OA (§101, §103)
Filed: Jul 15, 2024
Examiner: VANDER WOUDE, KIMBERLY ELAINE
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Anumana, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 7% (At Risk)
Projected OA Rounds: 3-4
Estimated Time to Grant: 2y 6m
Grant Probability with Interview: 16%

Examiner Intelligence

Career Allow Rate: 7% (2 granted / 30 resolved; -45.3% vs TC avg)
Interview Lift: +9.5% (resolved cases with interview; moderate ~+10% lift)
Avg Prosecution: 2y 6m (typical timeline)
Total Applications: 54 across all art units (24 currently pending)

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 30 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to Applicant’s communication filed on May 28, 2025. Claims 1 and 11 have been amended and are hereby entered. Claims 1-20 are currently pending and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on May 28, 2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 analysis: Claims 1 and 11 are directed to a system and a method respectively, and therefore all claims fall into one of the four statutory categories. (Step 1: Yes, the claims fall into one of the four statutory categories.)
Step 2A analysis - Prong one: The substantially similar independent method and system claims, taking claim 11 as exemplary, recite the following limitations:

receiving, using at least a processor, subject data, wherein the subject data comprises electrocardiogram (ECG) data;

generating, using the at least a processor, a plurality of panel outputs as a function of the subject data using a pre-operative panel machine-learning module; wherein the pre-operative panel machine-learning module comprises a plurality of panel machine-learning models and a large language model (LLM), wherein each of the plurality of panel machine-learning models and the large language model are configured to generate one panel output for one panel focus as a function of the subject data; wherein the large language model comprises a transformer architecture that employs self-attention and positional encoding, to receive the subject data as input, analyze the subject data, and generate an output;

wherein generating the plurality of panel outputs comprises: generating a plurality of sets of panel training data, wherein the plurality of sets of panel training data comprises correlations between exemplary subject data, exemplary panel focuses and exemplary panel outputs, wherein generating the plurality of sets of panel training data comprises: sanitizing the plurality of sets of panel training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, wherein sanitizing the plurality of sets of panel training data comprises: determining by the dedicated hardware unit that at least one training data entry of the plurality of sets of panel training data has a signal to noise ratio below a threshold value; and removing the at least one training data entry from the plurality of sets of panel training data to create a sanitized plurality of sets of panel training data;

training each of the plurality of panel machine-learning models using each of the sanitized plurality of sets of panel training data; and generating the plurality of panel outputs using the plurality of trained panel machine-learning models; and

generating, using the at least a processor, a pre-operative data structure as a function of the plurality of panel outputs.

The examiner is interpreting the above bolded limitations as additional elements as further discussed below. The remaining un-bolded limitations above, as drafted, recite a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components. That is, other than reciting a method implemented by a processor (computer), the claimed invention amounts to managing personal behavior or interaction between people. For example, but for the additional elements identified/bolded above, this claim encompasses receiving ECG data for a subject, determining a data set to use for training a model, cleaning the data set by removing undesirable data points, and then determining pre-operative insights for the subject (see spec para 82 and Figure 2) in the manner described in the identified abstract idea, supra.

The Examiner notes that certain “method[s] of organizing human activity” include a person’s interaction with a computer (see MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. The claim further recites “training each of the plurality of panel machine-learning models using each of the sanitized plurality of sets of panel training data”.
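The sanitization limitation quoted above (drop any training entry whose signal-to-noise ratio falls below a threshold, then train on what remains) can be sketched in a few lines of Python. This is an illustrative reconstruction only: the `snr_db` estimator, the 10 dB threshold, and the data layout are assumptions, not details from the application or the cited art.

```python
# Illustrative sketch of the claimed sanitization step: remove training
# entries whose estimated signal-to-noise ratio (SNR) is below a threshold.
# The SNR estimator and the 10 dB threshold are assumptions for illustration.
import math

def snr_db(samples):
    """Crude SNR estimate: signal variance vs. power of first differences."""
    n = len(samples)
    mean = sum(samples) / n
    signal_power = sum((s - mean) ** 2 for s in samples) / n
    # Treat high-frequency jitter (successive differences) as the noise proxy.
    diffs = [samples[i + 1] - samples[i] for i in range(n - 1)]
    noise_power = sum(d * d for d in diffs) / max(len(diffs), 1)
    if noise_power == 0:
        return float("inf")
    return 10.0 * math.log10(signal_power / noise_power)

def sanitize(training_entries, threshold_db=10.0):
    """Keep only entries whose estimated SNR meets the threshold."""
    return [e for e in training_entries if snr_db(e["ecg"]) >= threshold_db]

# A smooth, slowly varying waveform (high SNR) vs. an alternating noisy one.
clean = {"ecg": [math.sin(i / 20.0) for i in range(200)], "label": "ok"}
noisy = {"ecg": [(-1) ** i * 1.0 for i in range(200)], "label": "bad"}
kept = sanitize([clean, noisy])
print([e["label"] for e in kept])  # the noisy entry is removed
```

In the claim this thresholding is performed by a dedicated hardware unit with signal-processing circuitry; the software filter above only mirrors the logical effect of that step.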
When given its broadest reasonable interpretation in light of the disclosure, the training of a machine learning model by using sanitized data sets represents the creation of mathematical interrelationships between data (see Applicant's Spec paras 103, 119). As such, the training of the machine learning model represents a mathematical concept that is interpreted to be part of the identified abstract idea, supra. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes. Accordingly, the claim recites an abstract idea. (Step 2A – Prong 1: Yes, the claims are abstract.)

Step 2A analysis - Prong two: Claims 1 and 11 recite additional elements beyond the abstract idea. Claims 1 and 11 recite a processor, a panel machine-learning module comprising a plurality of panel machine-learning models and a large language model (LLM), a dedicated hardware unit comprising circuitry configured to perform signal processing operations, and using the plurality of trained panel machine-learning models. Claim 1 further recites a memory and instructions. The claims are applying generic computer components to the recited abstract limitations. The recited instructions appear to be software.

This judicial exception is not integrated into a practical application. In particular, the claims recite a processor, a panel machine-learning module comprising a plurality of panel machine-learning models and a large language model (LLM), a dedicated hardware unit comprising circuitry configured to perform signal processing operations, a memory, and instructions which are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply the exceptions using a generic computer component.
For example, Applicant’s specification indicates that the processor generates the panel machine-learning models, each of which is a “mathematical model, neural net, or program generated by a machine learning algorithm that generates a panel output correlated to inputted data” (see Spec paras 31, 119). Therefore, the processor simply reads instructions, receives input data, analyzes data, etc. (see Applicant’s Spec paras 31 and 36).

The claim further recites the additional element of using a plurality of trained machine learning models to analyze data and output results. This represents mere instructions to implement the abstract idea on a generic computer. Implementing an abstract idea using a generic computer or components thereof does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. See, e.g., Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 10 (Fed. Cir. April 18, 2025) (finding that claims that do no more than apply established methods of machine learning to a new data environment are ineligible). Alternatively, or in addition, the implementation of the trained machine learning model to generate a plurality of outputs merely confines the use of the abstract idea (i.e., the trained model) to a particular technological environment or field of use (i.e., panel machine learning models) and thus fails to add an inventive concept to the claims.

Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, Claims 1 and 11 are directed to an abstract idea without practical application. (Step 2A – Prong 2: No, the additional claimed elements are not integrated into a practical application.)
Step 2B analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor, a panel machine-learning module comprising a plurality of panel machine-learning models and a large language model (LLM), and a dedicated hardware unit comprising circuitry configured to perform signal processing operations in order to perform the noted steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”).

As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a plurality of trained panel machine-learning models to analyze data and generate a plurality of outputs was found to represent mere instructions to implement the abstract idea on a generic computer and/or confine the use of the abstract idea (i.e., the trained model) to a particular technological environment or field of use (panel machine-learning models and a large language model). This has been re-evaluated under the “significantly more” analysis and determined to be insufficient to provide significantly more.

MPEP 2106.05(I) indicates that mere instructions to implement the abstract idea on a generic computer and/or confining the use of the abstract idea to a particular technological environment or field of use cannot provide significantly more. See also Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 17 (Fed. Cir. April 18, 2025) (finding that applying machine learning to an abstract idea does not transform a claim into something significantly more).
Applicant’s specification discloses the following: Applicant describes embodiments of the disclosure at a very high level to include the use of a wide variety of machine learning models and known training methods, data structures, databases, networks, communication protocols, computing devices, memories, user interfaces, storage devices, bus structures, input/output devices, etc. (see Applicant’s spec. paras 17, 48-52, 57, 82, 96, 105, 130, 141-147).

Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. The collective functions appear to be implemented using conventional computer systemization. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2B: No, the claims do not provide significantly more.)

Dependent Claims 2-10 and 12-20 further define the abstract idea that is presented in independent Claims 1 and 11 respectively, and are further grouped as certain methods of organizing human activity and are abstract for the same reasons and basis as presented above.

Further, Claims 3-8 and 13-18 recite additional elements beyond the abstract idea. Claims 3 and 13 recite using a trained ECG feature machine-learning model. Claims 4 and 14 recite a first panel machine-learning model. Claims 5 and 15 recite a second panel machine-learning model. Claims 6 and 16 recite a third panel machine-learning model. Claims 7 and 17 recite a fourth panel machine-learning model. Claims 8 and 18 recite using a trained cohort classifier, which is being interpreted as purely software (see Applicant’s Spec para 80).
These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. For example, as noted above, the Applicant’s specification indicates the use of known machine-learning models and training methods. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not recite additional elements that integrate the judicial exception into a practical application when considered both individually and as an ordered combination. Therefore, the dependent claims are also directed to an abstract idea.

Thus, Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to abstract ideas without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7, 10-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shi (US 20210128076) in view of Masood et al. (US 20250062035), further in view of Mehta (US 10492730).

Regarding Claim 1, Shi discloses the following limitations:

An apparatus for generating a preoperative data structure using a pre-operative panel, the apparatus comprising: at least a processor; and a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to: (Shi discloses a system for image data acquisition. The system may include at least one storage device (a memory) storing executable instructions (instructions), and at least one processor in communication with the at least one storage device (communicatively connected to the at least a processor). - abstract; paras 4, 55)

receive subject data, wherein the subject data comprises electrocardiogram (ECG) data; (Shi discloses acquiring physiological data of a subject (receive subject data). Physiological data may include an ECG signal (electrocardiogram (ECG) data). – abstract; paras 3, 5, 11, 92; FIG. 5)

generate a plurality of panel outputs as a function of the subject data using a preoperative panel machine-learning module; (Shi discloses that the trained machine learning models may determine output results (generate a plurality of panel outputs).
The output result may include the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like. – abstract; paras 47, 73-74)

wherein the pre-operative panel machine-learning module comprises a plurality of panel machine-learning models…wherein each of the plurality of panel machine-learning models…are configured to generate one panel output for one panel focus as a function of the subject data; (Shi discloses using trained machine learning models (a plurality of panel machine-learning models) to generate output results such as the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (generate one panel output for one panel focus). The trained machine learning models may output the results in the form of trust values (e.g., 1 being the physiological data includes the feature data or 0 being the physiological data does not include the feature data). – paras 74, 107, 163-164)

wherein generating the plurality of panel outputs comprises: generating a plurality of sets of panel training data, wherein the plurality of sets of panel training data comprises correlations between exemplary subject data, exemplary panel focuses and exemplary panel outputs, (Shi discloses obtaining a plurality of training samples (a plurality of sets of panel training data – paras 122-123). Each of the plurality of training samples may include physiological data of a sample subject (exemplary subject data), reference output (exemplary panel outputs – paras 13, 83).
The output result may include the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like, (exemplary panel focuses – para 74). – paras 6, 20-22, 74, 83-84, 122-123; FIG. 6) wherein generating the plurality of sets of panel training data comprises: sanitizing the plurality of sets of panel training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, (Shi discloses that the processing device (using a dedicated hardware unit comprising circuitry) may include an obtaining module (i.e., logic/software – para 42) which may be configured to obtain data regarding model training, for example, a plurality of training samples. The obtaining module may perform a pretreatment operation on each of at least a portion of the plurality of training samples (sanitizing the plurality of sets of panel training data). The pretreatment operation may include at least one of a normalization operation, a denoising operation, a smoothing operation, or a downsampling operation (configured to perform signal processing operations). – paras 80-81, 85, 131) and removing the at least one training data entry from the plurality of sets of panel training data to create a sanitized plurality of sets of panel training data; (Shi discloses that the pretreatment operation may be the same as or different from the pretreatment operation on the physiological data. The pretreatment operation (e.g., the downsampling operation) may decrease data quantity (removing the at least one training data entry from the plurality of sets of panel training data) to be processed using the trained machine learning model (to create a sanitized plurality of sets of panel training data), thereby reducing the time of identifying the feature data. 
– paras 115, 131) training each of the plurality of panel machine-learning models using each of the sanitized plurality of sets of panel training data; (Shi discloses a training process in FIG. 7 for generating trained machine learning models (training each of the plurality of panel machine-learning models) by using a plurality of training samples. The processing device may perform a pretreatment operation on each of at least a portion of the plurality of training samples (using each of the sanitized plurality of sets of panel training data). The pretreatment operation may improve the efficiency of training the machine learning model. – paras 20-22, 85, 131, 140-143; FIG. 7) and generating the plurality of panel outputs using the plurality of trained panel machine-learning models; (Shi discloses using the trained machine learning models (using the plurality of trained panel machine-learning models) to determine output results (generating the plurality of panel outputs). – abstract; paras 47, 73-74) and generate a pre-operative data structure as a function of the plurality of panel outputs. (Shi discloses exemplary output devices such as a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Further, the processor may execute computer instructions such as data structures. 
Therefore, the output result (the plurality of panel outputs) may be displayed, e.g., a processor executing a data structure and displaying via a display device – paras 64, 67)

Shi does not disclose the following limitations met by Masood:

a plurality of panel machine-learning models and a large language model (LLM), wherein each of the plurality of panel machine-learning models and the large language model are configured to generate one panel output for one panel focus as a function of the subject data; (Masood teaches the use of a large language model (a large language model (LLM)) that may obtain a single input (e.g., a patient record) and output different potential diagnoses. – abstract; paras 30, 46, 74)

wherein the large language model comprises a transformer architecture that employs self-attention and positional encoding, to receive the subject data as input, analyze the subject data, and generate an output; (Masood teaches obtaining an input such as a patient record (receive the subject data as input), analyzing the input to identify contextual insights (analyze the subject data), and outputting potential diagnoses (generate an output). – abstract; paras 4, 30)

(A large language model is known to comprise a transformer architecture that employs self-attention and positional encoding. This position is further evidenced by TrueFoundry. See the section titled “Key Components of the Transformer Model” (TrueFoundry. (2024, March 22). Demystifying transformer architecture in large language models. https://www.truefoundry.com/blog/transformer-architecture#:~:text=The%20architecture%20of%20Transformers%20stands,generating%20accurate%20responses%20are%20key.)
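For context on the two transformer components named in the rejection, self-attention and positional encoding, here is a minimal numpy sketch. It is a toy illustration of the general architecture, not code from Masood or the application; the identity Q/K/V projections and the toy dimensions are simplifying assumptions.

```python
# Toy sketch of the two transformer ingredients the rejection names:
# sinusoidal positional encoding and scaled dot-product self-attention.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding as in 'Attention Is All You Need'."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model)[None, :]            # (1, d_model)
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])     # even dims: sine
    enc[:, 1::2] = np.cos(angles[:, 1::2])     # odd dims: cosine
    return enc

def self_attention(x):
    """Single-head self-attention with identity Q/K/V projections."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                          # (seq, seq)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ x                                     # mix of values

seq_len, d_model = 6, 8
tokens = np.random.default_rng(0).normal(size=(seq_len, d_model))
x = tokens + positional_encoding(seq_len, d_model)  # inject position info
out = self_attention(x)
print(out.shape)  # (6, 8)
```

A production LLM additionally uses learned Q/K/V projections, multiple heads, feed-forward layers, and residual connections; the sketch only shows why each output position is a position-aware weighted mixture of all input positions.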
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified using machine learning models to analyze physiological data of a subject as disclosed by Shi to incorporate a large language model as taught by Masood in order to provide more accurate and informed diagnoses (see Masood para 24).

Shi and Masood do not disclose the following limitations met by Mehta:

wherein sanitizing the plurality of sets of panel training data comprises: determining by the dedicated hardware unit that at least one training data entry of the plurality of sets of panel training data has a signal to noise ratio below a threshold value; (Mehta teaches determining reliability of electrocardiogram (ECG) data by determining a signal-to-noise ratio (SNR). If the SNR for a particular heartbeat falls below the minimum threshold (determining a signal to noise ratio below a threshold value), the monitoring application assigns little or no confidence to the ECG data for that heartbeat. Further, Mehta teaches that the SNR may fall below an average threshold or a high threshold. – abstract; col 15, lines 65-67 through col 16, lines 1-24)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using machine learning models to analyze ECG data of a subject as disclosed by Shi to incorporate determining reliability of ECG data as taught by Mehta in order to avoid difficulties in identifying and treating irregular and potentially dangerous heart conditions (see Mehta col 1, lines 14-20).
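Mehta's tiered-threshold scheme, as the rejection characterizes it (minimum, average, and high SNR thresholds per heartbeat), can be sketched as a simple mapping from a per-heartbeat SNR estimate to a confidence label. The tier names and dB values below are illustrative assumptions; Mehta's actual threshold values are not stated in this record.

```python
# Hedged sketch of a tiered SNR-confidence scheme like the one the
# rejection attributes to Mehta. Threshold values are assumptions.
def confidence_tier(snr_db, minimum=5.0, average=15.0, high=25.0):
    """Map a heartbeat's SNR (in dB) to a confidence label."""
    if snr_db < minimum:
        return "none"    # below the minimum threshold: little or no confidence
    if snr_db < average:
        return "low"
    if snr_db < high:
        return "medium"
    return "high"

beats = [3.2, 9.8, 18.4, 31.0]  # hypothetical per-heartbeat SNR estimates, dB
print([confidence_tier(s) for s in beats])  # ['none', 'low', 'medium', 'high']
```

The claimed sanitization step then corresponds to discarding (or down-weighting) the beats that land in the lowest tier before any model training.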
Regarding Claim 2, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein generating the plurality of panel outputs comprises: determining at least an ECG feature as a function of the ECG data; (Shi discloses that the trained machine learning model is configured to detect feature data (determining at least an ECG feature) in the physiological data such as ECG data (as a function of the ECG data). – abstract; paras 5, 11, 20, 23, 73-74) and determining the plurality of panel outputs as a function of the ECG feature. (Shi discloses that an output result (the plurality of panel outputs) of the trained machine learning model is generated (determining) based on the feature data (a function of the ECG feature). – paras 23, 73-74)

Regarding Claim 3, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 2, wherein determining the at least an ECG feature further comprises: generating ECG feature training data, wherein the ECG feature training data comprises correlations between exemplary ECG data and exemplary ECG features; (Shi discloses that each of the plurality of training samples may include annotated physiological data (ECG feature training data) corresponding to the physiological data. The physiological data corresponding to each of the training samples (exemplary ECG data) may be annotated by identifying the feature data (exemplary ECG features) (e.g., the R wave) from the physiological data. – paras 82, 85) training an ECG feature machine-learning model using the ECG feature training data; (Shi discloses training the machine learning model to detect feature data (training an ECG feature machine-learning model) represented in the physiological data.
The training samples may include sample feature data (using the ECG feature training data). – paras 23, 82, 85) and determining the at least an ECG feature using the trained ECG feature machine-learning model. (Shi discloses that the trained machine learning model (using the trained ECG feature machine-learning model) is configured to detect feature data represented in the physiological data (at least an ECG feature). – para 23)

Regarding Claim 4, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein the plurality of panel machine-learning models comprises a first panel machine-learning model comprising a first panel focus related to coronary heart disease, wherein the first panel machine-learning model is configured to generate a first panel output related to the coronary heart disease as a function of the subject data. (Shi discloses that the output result (a first panel output) from the trained machine learning models (a first panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to the coronary heart disease). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a first panel focus related to coronary heart disease).
– paras 11, 72-74)

Regarding Claim 5, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein the plurality of panel machine-learning models comprises a second panel machine-learning model comprising a second panel focus related to pulmonary hypertension, wherein the second panel machine-learning model is configured to generate a second panel output related to the pulmonary hypertension as a function of the subject data. (Shi discloses that the output result (a second panel output) from the trained machine learning models (a second panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to pulmonary hypertension). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a second panel focus related to pulmonary hypertension). – paras 11, 72-74)

Regarding Claim 6, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein the plurality of panel machine-learning models comprises a third panel machine-learning model comprising a third panel focus related to atrial fibrillation, wherein the third panel machine-learning model is configured to generate a third panel output related to the atrial fibrillation as a function of the subject data.
(Shi discloses that the output result (a third panel output) from the trained machine learning models (a third panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to atrial fibrillation). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a third panel focus related to atrial fibrillation). – paras 11, 72-74)

Regarding Claim 7, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein the plurality of panel machine-learning models comprises a fourth panel machine-learning model comprising a fourth panel focus related to ejection fraction, wherein the fourth panel machine-learning model is configured to generate a fourth panel output related to the ejection fraction as a function of the subject data. (Shi discloses that the output result (a fourth panel output) from the trained machine learning models (a fourth panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to ejection fraction).
The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a fourth panel focus related to ejection fraction). – paras 11, 72-74) Regarding Claim 10, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The apparatus of claim 1, wherein the plurality of panel outputs comprises a preoperative optimization output. (Shi discloses acquiring image data of the subject using an imaging device based on the output result. The system may generate a trigger pulse signal (comprises a preoperative optimization output) based on the output result (the plurality of panel outputs). The system may also cause the imaging device to scan the subject based at least in part on the trigger pulse signal. – paras 8-9, 52, 75-76) Regarding Claim 11, Shi discloses the following limitations: A method for generating a preoperative data structure using a pre-operative panel, the method comprising: receiving, using at least a processor, subject data, wherein the subject data comprises electrocardiogram (ECG) data; (Shi discloses acquiring physiological data of a subject (receiving subject data). Physiological data may include an ECG signal (electrocardiogram (ECG) data). – abstract; paras 3, 5, 11, 92; FIG. 5) generating, using the at least a processor, a plurality of panel outputs as a function of the subject data using a pre-operative panel machine-learning module; (Shi discloses that the trained machine learning models may determine output results (generate a plurality of panel outputs). 
The output result may include the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like. – abstract; paras 47, 73-74) wherein the pre-operative panel machine-learning module comprises a plurality of panel machine-learning models…wherein each of the plurality of panel machine learning models…are configured to generate one panel output for one panel focus as a function of the subject data; (Shi discloses using trained machine learning models (a plurality of panel machine-learning models) to generate output results such as the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (generate one panel output for one panel focus). The trained machine learning models may output the results in the form of trust values (e.g., 1 being the physiological data includes the feature data or 0 being the physiological data does not include the feature data). – paras 74, 107, 163-164) wherein generating the plurality of panel outputs comprises: generating a plurality of sets of panel training data, wherein the plurality of sets of panel training data comprises correlations between exemplary subject data, exemplary panel focuses and exemplary panel outputs, (Shi discloses obtaining a plurality of training samples (a plurality of sets of panel training data – paras 122-123). Each of the plurality of training samples may include physiological data of a sample subject (exemplary subject data) and a reference output (exemplary panel outputs – paras 13, 83).
The output result may include the feature data represented in the physiological data, a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like, (exemplary panel focuses – para 74). – paras 6, 20-22, 74, 83-84, 122-123; FIG. 6) wherein generating the plurality of sets of panel training data comprises: sanitizing the plurality of sets of panel training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, (Shi discloses that the processing device (using a dedicated hardware unit comprising circuitry) may include an obtaining module (i.e., logic/software – para 42) which may be configured to obtain data regarding model training, for example, a plurality of training samples. The obtaining module may perform a pretreatment operation on each of at least a portion of the plurality of training samples (sanitizing the plurality of sets of panel training data). The pretreatment operation may include at least one of a normalization operation, a denoising operation, a smoothing operation, or a downsampling operation (configured to perform signal processing operations). – paras 80-81, 85, 131) and removing the at least one training data entry from the plurality of sets of panel training data to create a sanitized plurality of sets of panel training data; (Shi discloses that the pretreatment operation may be the same as or different from the pretreatment operation on the physiological data. The pretreatment operation (e.g., the downsampling operation) may decrease data quantity (removing the at least one training data entry from the plurality of sets of panel training data) to be processed using the trained machine learning model (to create a sanitized plurality of sets of panel training data), thereby reducing the time of identifying the feature data. 
– paras 115, 131) training each of the plurality of panel machine-learning models using each of the sanitized plurality of sets of panel training data; (Shi discloses a training process in FIG. 7 for generating trained machine learning models (training each of the plurality of panel machine-learning models) by using a plurality of training samples. The processing device may perform a pretreatment operation on each of at least a portion of the plurality of training samples (using each of the sanitized plurality of sets of panel training data). The pretreatment operation may improve the efficiency of training the machine learning model. – paras 20-22, 85, 131, 140-143; FIG. 7) and generating the plurality of panel outputs using the plurality of trained panel machine-learning models; (Shi discloses using the trained machine learning models (using the plurality of trained panel machine-learning models) to determine output results (generating the plurality of panel outputs). – abstract; paras 47, 73-74) and generating, using the at least a processor, a pre-operative data structure as a function of the plurality of panel outputs. (Shi discloses exemplary output devices such as a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Further, the processor may execute computer instructions such as data structures. 
Therefore, the output result (the plurality of panel outputs) may be displayed, e.g., a processor executing a data structure and displaying via a display device. – paras 64, 67) Shi does not disclose the following limitations met by Masood: a plurality of panel machine-learning models and a large language model (LLM), wherein each of the plurality of panel machine-learning models and the large language model are configured to generate one panel output for one panel focus as a function of the subject data; (Masood teaches the use of a large language model (a large language model (LLM)) that may obtain a single input (e.g., a patient record) and output different potential diagnoses. – abstract; paras 30, 46, 74) wherein the large language model comprises a transformer architecture that employs self-attention and positional encoding, to receive the subject data as input, analyze the subject data, and generate an output; (Masood teaches obtaining an input such as a patient record (receive the subject data as input), analyzing the input to identify contextual insights (analyze the subject data), and outputting potential diagnoses (generate an output). – abstract; paras 4, 30) (A large language model is known to comprise a transformer architecture that employs self-attention and positional encoding. This position is further evidenced by TrueFoundry. See section titled “Key Components of the Transformer Model” (TrueFoundry. (2024, March 22). Demystifying transformer architecture in large language models. https://www.truefoundry.com/blog/transformer-architecture#:~:text=The%20architecture%20of%20Transformers%20stands,generating%20accurate%20responses%20are%20key.))
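For background, the transformer components recited in the claim (self-attention and positional encoding) operate generally as in the following sketch. This is illustrative only; the single-head form, dimensions, and random weights are assumptions for exposition and are not drawn from Masood, TrueFoundry, or the claims.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: sin on even dimensions, cos on odd."""
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1) token positions
    i = np.arange(d_model)[None, :]          # (1, d_model) embedding dims
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over embeddings x."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                       # context-weighted values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # hypothetical sizes
x = rng.standard_normal((seq_len, d_model)) + positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The positional encoding injects order information before attention, since the attention operation itself is permutation-invariant over tokens.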
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified using machine learning models to analyze physiological data of a subject as disclosed by Shi to incorporate a large language model as taught by Masood in order to provide more accurate and informed diagnoses (see Masood para 24). Shi and Masood do not disclose the following limitations met by Mehta: wherein sanitizing the plurality of sets of panel training data comprises: determining by the dedicated hardware unit that at least one training data entry of the plurality of sets of panel training data has a signal to noise ratio below a threshold value; (Mehta teaches determining reliability of electrocardiogram (ECG) data by determining a signal-to-noise ratio (SNR). If the SNR for a particular heartbeat falls below the minimum threshold (determining a signal to noise ratio below a threshold value), the monitoring application assigns little or no confidence to the ECG data for that heartbeat. Further, Mehta teaches that the SNR may fall below an average threshold or a high threshold. – abstract; col 15, lines 65-67 through col 16, lines 1-24) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using machine learning models to analyze ECG data of a subject as disclosed by Shi to incorporate determining reliability of ECG data as taught by Mehta in order to avoid difficulties in identifying and treating irregular and potentially dangerous heart conditions (see Mehta col 1, lines 14-20). 
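The claimed sanitization step (dropping training entries whose signal-to-noise ratio falls below a threshold) can be pictured, in general terms, by the following sketch. It is illustrative only; the moving-average SNR estimate and the 10 dB threshold are assumptions for exposition, not taken from the claims or from Mehta.

```python
import numpy as np

def estimate_snr_db(entry, kernel=5):
    """Crude SNR estimate: moving-average 'signal' power vs. residual 'noise' power."""
    smoothed = np.convolve(entry, np.ones(kernel) / kernel, mode="same")
    noise = entry - smoothed
    signal_power = np.mean(smoothed ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12   # guard against division by zero
    return 10 * np.log10(signal_power / noise_power)

def sanitize(training_data, threshold_db=10.0):
    """Remove any training entry whose estimated SNR is below threshold_db."""
    return [e for e in training_data if estimate_snr_db(e) >= threshold_db]

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)               # well-behaved periodic entry
noisy = clean + 3.0 * rng.standard_normal(t.size)  # heavily corrupted entry
sanitized = sanitize([clean, noisy])
print(len(sanitized))  # 1 — the corrupted entry is removed
```

Only entries meeting the threshold survive, yielding the "sanitized" training set on which the panel models would then be trained.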
Regarding Claim 12, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein generating the plurality of panel outputs comprises: determining, using the at least a processor, at least an ECG feature as a function of the ECG data; (Shi discloses that the trained machine learning model is configured to detect feature data (determining at least an ECG feature) in the physiological data such as ECG data (as a function of the ECG data). – abstract; paras 5, 11, 20, 23, 73-74) and determining, using the at least a processor, the plurality of panel outputs as a function of the ECG feature. (Shi discloses that an output result (the plurality of panel outputs) of the trained machine learning model is generated (determining) based on the feature data (a function of the ECG feature). – paras 23, 73-74) Regarding Claim 13, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 12, wherein determining the at least an ECG feature further comprises: generating, using the at least a processor, ECG feature training data, wherein the ECG feature training data comprises correlations between exemplary ECG data and exemplary ECG features; (Shi discloses that each of the plurality of training samples may include annotated physiological data (ECG feature training data) corresponding to the physiological data. The physiological data corresponding to each of the training samples (exemplary ECG data) may be annotated by identifying the feature data (exemplary ECG features) (e.g., the R wave) from the physiological data. – paras 82, 85) training, using the at least a processor, an ECG feature machine-learning model using the ECG feature training data; (Shi discloses training the machine learning model to detect feature data (training an ECG feature machine-learning model) represented in the physiological data. 
The training samples may include sample feature data (using the ECG feature training data). – paras 23, 82, 85) and determining, using the at least a processor, the at least an ECG feature using the trained ECG feature machine-learning model. (Shi discloses that the trained machine learning model (using the trained ECG feature machine-learning model) is configured to detect feature data represented in the physiological data (at least an ECG feature). – para 23) Regarding Claim 14, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein the plurality of panel machine-learning models comprises a first panel machine-learning model comprising a first panel focus related to coronary heart disease, wherein the first panel machine-learning model is configured to generate a first panel output related to the coronary heart disease as a function of the subject data. (Shi discloses that the output result (a first panel output) from the trained machine learning models (a first panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to the coronary heart disease). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a first panel focus related to coronary heart disease).
– paras 11, 72-74) Regarding Claim 15, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein the plurality of panel machine-learning models comprises a second panel machine-learning model comprising a second panel focus related to pulmonary hypertension, wherein the second panel machine-learning model is configured to generate a second panel output related to the pulmonary hypertension as a function of the subject data. (Shi discloses that the output result (a second panel output) from the trained machine learning models (a second panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to pulmonary hypertension). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a second panel focus related to pulmonary hypertension). – paras 11, 72-74) Regarding Claim 16, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein the plurality of panel machine-learning models comprises a third panel machine-learning model comprising a third panel focus related to atrial fibrillation, wherein the third panel machine-learning model is configured to generate a third panel output related to the atrial fibrillation as a function of the subject data. 
(Shi discloses that the output result (a third panel output) from the trained machine learning models (a third panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to atrial fibrillation). The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a third panel focus related to atrial fibrillation). – paras 11, 72-74) Regarding Claim 17, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein the plurality of panel machine-learning models comprises a fourth panel machine-learning model comprising a fourth panel focus related to ejection fraction, wherein the fourth panel machine-learning model is configured to generate a fourth panel output related to the ejection fraction as a function of the subject data. (Shi discloses that the output result (a fourth panel output) from the trained machine learning models (a fourth panel machine-learning model) may include the feature data represented in the physiological data (the subject data), a determination as to whether the trigger condition is satisfied, a determination as to whether the physiological data includes the feature data, the gating weighting function corresponding to the physiological data, or the like (related to ejection fraction). 
The physiological data (of the subject data) may be acquired by a monitoring device based on echo signal, an ECG signal, a photoelectric signal, an oscillation signal, a pressure signal, an imaging device such as an MRI device, or the like (a fourth panel focus related to ejection fraction). – paras 11, 72-74) Regarding Claim 20, Shi, Masood and Mehta disclose all the limitations above and further disclose the following limitations: The method of claim 11, wherein the plurality of panel outputs comprises a pre-operative optimization output. (Shi discloses acquiring image data of the subject using an imaging device based on the output result. The system may generate a trigger pulse signal (comprises a preoperative optimization output) based on the output result (the plurality of panel outputs). The system may also cause the imaging device to scan the subject based at least in part on the trigger pulse signal. – paras 8-9, 52, 75-76) Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shi (US20210128076), in view of Masood et al. (US 20250062035), in view of Mehta (US 10492730), further in view of Balakrishnan et al. (US 20240289586). Regarding Claim 8, Shi, Masood and Mehta disclose all the limitations above, however, do not disclose the following limitations met by Balakrishnan: The apparatus of claim 1, wherein the memory contains instructions further configuring the at least a processor to: generate cohort training data, wherein the cohort training data comprises correlations between exemplary subject data and exemplary subject cohorts; (Balakrishnan teaches a cohort selection and retraining module that selects classes of training samples for a classification model (generate cohort training data). The term “training sample” generally refers to samples for which a classification may be known (exemplary subject cohorts). 
The training samples can correspond to samples having measured properties of the sample (e.g., genomic data and other subject data, such as images or health records), as well as observed classifications/labels (e.g., phenotypes or treatments) for the subject (exemplary subject data). – paras 209, 211, 248, 271) train a cohort classifier using the cohort training data; (Balakrishnan teaches training machine learning models such as a classification model (train a cohort classifier) by using a set of training samples (the cohort training data). – abstract; paras 8, 209) and classify the subject data to one or more subject cohorts using the trained cohort classifier. (Balakrishnan teaches that machine learning models (using the trained cohort classifier) may be used to classify individuals in a population (classify the subject data to one or more subject cohorts) for disease screening, diagnosis, prognosis, or treatment decisions. – abstract; paras 4, 96, 152) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using trained machine learning models as disclosed by Shi to incorporate a trained machine learning classification model as taught by Balakrishnan in order to catalyze advances in the field of medical screening and diagnosis (see Balakrishnan para 5). Regarding Claim 9, Shi, Masood, Mehta and Balakrishnan teach all the limitations above and further teach the following limitations: The apparatus of claim 8, wherein the memory contains instructions further configuring the at least a processor to update the panel training data as a function of an output of the cohort classifier. (Balakrishnan teaches that the model retraining pipeline may re-fit parameters and update the existing model with new data (update… training data as a function of an output of the cohort classifier.). 
– para 559) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using trained machine learning models as disclosed by Shi to incorporate a trained machine learning classification model and a retraining pipeline to update classification models with new data as taught by Balakrishnan in order to catalyze advances in the field of medical screening and diagnosis (see Balakrishnan para 5). Regarding Claim 18, Shi, Masood, Mehta and Balakrishnan teach all the limitations above and further teach the following limitations: The method of claim 11, further comprising: generating, using the at least a processor, cohort training data, wherein the cohort training data comprises correlations between exemplary subject data and exemplary subject cohorts; (Balakrishnan teaches a cohort selection and retraining module that selects classes of training samples for a classification model (generating…cohort training data). The term “training sample” generally refers to samples for which a classification may be known (exemplary subject cohorts). The training samples can correspond to samples having measured properties of the sample (e.g., genomic data and other subject data, such as images or health records), as well as observed classifications/labels (e.g., phenotypes or treatments) for the subject (exemplary subject data). – paras 209, 211, 248, 271) training, using the at least a processor, a cohort classifier using the cohort training data; (Balakrishnan teaches training machine learning models such as a classification model (training…a cohort classifier) by using a set of training samples (the cohort training data). – abstract; paras 8, 209) and classifying, using the at least a processor, the subject data to one or more subject cohorts using the trained cohort classifier. 
(Balakrishnan teaches that machine learning models (using the trained cohort classifier) may be used to classify individuals in a population (classifying…the subject data to one or more subject cohorts) for disease screening, diagnosis, prognosis, or treatment decisions. – abstract; paras 4, 96, 152) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using trained machine learning models as disclosed by Shi to incorporate a trained machine learning classification model as taught by Balakrishnan in order to catalyze advances in the field of medical screening and diagnosis (see Balakrishnan para 5). Regarding Claim 19, Shi, Masood, Mehta and Balakrishnan teach all the limitations above and further teach the following limitations: The method of claim 18, further comprising: updating, using the at least a processor, the panel training data as a function of an output of the cohort classifier. (Balakrishnan teaches that the model retraining pipeline may re-fit parameters and update the existing model with new data (updating… training data as a function of an output of the cohort classifier.). – para 559) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified using trained machine learning models as disclosed by Shi to incorporate a trained machine learning classification model and a retraining pipeline to update classification models with new data as taught by Balakrishnan in order to catalyze advances in the field of medical screening and diagnosis (see Balakrishnan para 5). Response to Arguments Regarding rejections under 35 USC § 101 to Claims 1-20, Applicant’s arguments have been fully considered, and are not persuasive. The rejection has been updated in light of latest amendments. 
Applicant argues: (a) Applicant submits that representative claim 1, at least as amended, recites additional elements that integrate any alleged judicial exception into a practical application by providing an improvement in technology. The improvement in technology includes, inter alia, a multi-stage provision of training, and implementing, a plurality of machine-learning models and transforming training data from a first form to a second form to create sanitized data that can be more efficiently processed (e.g., efficient convergence of machine-learning model) thereby desirably reducing downstream computational steps, increasing accuracy and mitigating the associated carbon footprint. This, advantageously, can provide a computationally robust approach to anomaly detection, that is, by preprocessing training data, which can subsequently be used for automated and real-time generation of a preoperative data structure using a pre-operative panel while resolving current problems related to one or more issues of time consumption and efficiency. (p. 4). Regarding (a), Examiner respectfully disagrees. MPEP 2106.04(d)(1) states "the word 'improvements' in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B." Here there is no improvement to the computer nor is there an improvement to another technology. The Applicant has identified that there is a technical problem relating to efficient data processing; however, there is no indication that the claim actually solves this problem and thus improves the technology. For example, the claim does not define any manipulation, restructuring, or changing of the data in a way that solves the identified technical problem. Rather, the claims define reducing the amount of data being processed by providing a “sanitized” data set. 
Merely limiting the amount of data that the computer analyzes does not provide an improvement to the computer itself; it is not made to run faster or more efficiently. The computer merely takes less time to analyze less data than it would the full amount of data; the computer is still performing as expected. Therefore, an improvement to technology is not present and there is no practical application. (b) Additionally, Applicant submits that the limitations as recited in claim 1, at least as amended, do not recite any abstract idea, such as certain methods of organizing human activity, much like Example 47, claim 3, listed in the July 2024 Subject Matter Eligibility Examples (see also, 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence). For example, Example 47 presents allowable claims (Subject Matter Eligibility Examples: Abstract Ideas, pgs. 8–9) directed to anomaly detection similar to Applicant’s representative claim 1, at least as amended, wherein training data entries that interfere with convergence of a plurality of machine-learning models are removed, that is, anomalous or poor quality data having a signal to noise ratio below a threshold value is eliminated. Applicant submits that Applicant’s representative claim 1 and the allowable claims presented in Example 47 are analogous, at least, because both provide a technological solution to a problem by automatedly processing and transforming training data (e.g., by determining signal to noise ratios) for downstream analysis by utilizing a specific multi-stage training scheme for real-time machine-learning activities. (p. 4-5). Regarding (b), Examiner respectfully disagrees. MPEP 2106.04(d)(1) and MPEP 2106.05(a) indicate that a practical application may be present where the claimed invention provides a technical solution to a technical problem. See, e.g., DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259 (Fed. Cir.
2014) (finding that claiming a website that retained the "look and feel" of a host webpage provided a technological solution to the problem of retention of website visitors by utilizing a website descriptor that emulated the "look and feel" of the host webpage, where the problem arose out of the internet and was thus a technical problem). The claimed invention in Example 47, claim 3, was found eligible under Step 2A Prong Two because the additional elements recited in the claim integrated the abstract idea into a practical application because the claim improved the technical field of network intrusion detection as described in the specification. Here, while Applicant's argued problem of processing efficiency is a technical problem, there is no nexus between the argued problem and the argued solution because there is no indication that the claimed invention actually solves this problem. As noted in response to argument (a) above, the claim does not define any manipulation, restructuring, or changing of the data in a way that solves the identified technical problem. Rather, the claims define reducing the amount of data being processed by providing a “sanitized” data set and the performance of the computer or technology itself is not improved. Therefore, no practical application is present in the current claims. (c) Claim 1 as amended contains multiple additional elements that do not recite the allegedly abstract idea. These include, without limitation, the machine-learning activities involving training of a plurality of machine-learning models by sanitized training data which is rendered free of anomalous data to facilitate, at least, efficient convergence of the machine learning model, and generation of automated results in real-time. 
Neither the instant disclosure nor the record of prosecution in this matter includes any admission that any limitation of claim 1 as amended is “well-understood, routine, [and] conventional.” The references of record in this matter also do not contain any such characterization, and there is no court case or printed publication supporting the conclusion that the above-described limitations are “well-understood, routine, [and] conventional.” Applicant respectfully submits therefore that the recitation of the above limitations, both individually and as an ordered combination with other claim elements, amounts to “significantly more” than any allegedly abstract idea for at least this reason. Additionally, as discussed below, Applicant’s claims amount to an “inventive concept” because they are not taught by the relevant art. (p. 5-6). Regarding (c), Examiner respectfully disagrees. MPEP 2106.05(d) states: “Another consideration when determining whether a claim recites significantly more than a judicial exception is whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry (emphasis added).” Further, MPEP 2106.05(I) states: “As made clear by the courts, the novelty of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter (internal quotations omitted, emphasis original).” As such, it is only the additional elements identified by the Examiner to not be part of the abstract idea that are analyzed to determine whether they represent well-understood, routine, conventional activities in the field of the invention. Examiner notes that the limitations regarding the training of the machine learning models are interpreted to represent a mathematical concept that is part of the identified abstract idea and not an additional element.
The identified additional elements include the limitations regarding the use of the dedicated hardware unit and the use of the trained machine learning models. See updated rejection above. MPEP 2106.05(d)(I) indicates that in determining whether the additional elements represent well-understood, routine, conventional activities, the Examiner should consider whether the additional elements (1) provide an improvement to the technological environment to which the claim is confined, (2) are mere instructions to apply the judicial exception, or (3) represent insignificant extra-solution activity. The additional elements of the claims do not provide significantly more based on this inquiry.

Taking these in turn, whether the additional elements of the claim provide an improvement was analyzed and addressed in the Step 2A Prong 2 analysis, and no improvement was present (see also response to arguments (a)-(b) above). The technological environment to which the claims are confined (a general-purpose computer performing generic computer functions [see Spec. Paras 17, 48-52, 57, 82, 96, 105, 130, 141-147]) is recited at a high level of generality and has been found by the courts to be insufficient to provide a practical application (see MPEP 2106.05(d)(II); Alice Corp.). None of the additional elements of the claim were found to represent extra-solution activity, and thus no well-understood, routine, conventional analysis is required. As such, when viewed either individually or as an ordered combination, the additional elements do not provide significantly more to the abstract idea and the claims are not subject matter eligible.

(d) Each of claims 2-10 and 12-20 depends directly or indirectly from claim 1 or 11, and thus recites all the same elements as claim 1 or 11.
Applicant therefore submits claims 2-10 and 12-20 overcome these rejections for at least the same reasons as discussed above with reference to claims 1 and 11, and because of the additional patent-eligible limitations recited therein. (p. 6).

Regarding (d), Examiner respectfully disagrees. Based on the responses to arguments above, claim 1 is unpatentable, and therefore similar independent claim 11, as well as all claims depending therefrom, are unpatentable according to the same rationale.

Regarding rejections under 35 USC § 103 to Claims 1-20, Applicant's arguments have been fully considered and are not persuasive. The rejection has been updated in light of the latest amendments. Applicant argues:

(e) Applicant submits that at least such an arrangement of "sanitizing the plurality of sets of panel training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, wherein sanitizing the plurality of sets of panel training data comprises: determining by the dedicated hardware unit that at least one training data entry of the plurality of sets of panel training data has a signal to noise ratio below a threshold value; and removing the at least one training data entry from the plurality of sets of panel training data to create a sanitized plurality of sets of panel training data" is not taught, suggested, or motivated by Shi and Masood, individually or in combination. (p. 8).

Regarding (e), Examiner respectfully disagrees. Shi discloses the limitations "wherein generating the plurality of sets of panel training data comprises: sanitizing the plurality of sets of panel training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations" and "removing the at least one training data entry from the plurality of sets of panel training data to create a sanitized plurality of sets of panel training data;".
Shi discloses that the obtaining module within the processing device may perform a pretreatment operation on each of at least a portion of the plurality of training samples (sanitizing the plurality of sets of panel training data). The pretreatment operation may include various signal processing operations such as a normalization operation, a denoising operation, a smoothing operation, or a downsampling operation (configured to perform signal processing operations). Further, Shi discloses that the pretreatment operation(s) may decrease the data quantity to be processed (removing the at least one training data entry from the plurality of sets of panel training data).

However, Applicant's arguments are persuasive regarding the newly added limitation "wherein sanitizing the plurality of sets of panel training data comprises: determining by the dedicated hardware unit that at least one training data entry of the plurality of sets of panel training data has a signal to noise ratio below a threshold value;". Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection necessitated by Applicant's amendments is made in view of Mehta (US 10492730), as per the rejection above. Mehta teaches determining if a measured signal-to-noise ratio (SNR) falls below a minimum, average, or high threshold. See updated rejection above.

(f) Each of claims 2-7, 10, 12-17 and 20 depends, directly or indirectly, from claim 1 or claim 11. As discussed above, claims 1 and 11 are patentably distinguishable over Shi and Masood, individually or in combination, for at least the reasons discussed above. Accordingly, Applicant respectfully submits that claims 2-7, 10, 12-17 and 20 are patentably distinguishable over Shi and Masood, individually or in combination, for at least the reasons discussed above. Each of claims 8, 9, 18 and 19 depends, directly or indirectly, from claim 1 or 11.
As discussed above, claims 1 and 11 are patentably distinguishable over Shi and Masood, individually or in combination, for at least the reasons discussed above. Furthermore, Balakrishnan fails to cure the deficiencies of Shi and Masood. Accordingly, Applicant respectfully submits that claims 1 and 11, as amended, and claims 8, 9, 18 and 19 are patentably distinguishable over Shi, Masood and Balakrishnan, individually or in any combination thereof, for at least the reasons discussed above. (p. 9).

Regarding (f), Examiner respectfully disagrees. Based on the responses to arguments above, claim 1 is unpatentable, and therefore similar independent claim 11, as well as all claims depending therefrom, are unpatentable according to the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIMBERLY VANDER WOUDE, whose telephone number is (703) 756-4684. The examiner can normally be reached M-F 9 AM-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PETER H CHOI, can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.E.V./
Examiner, Art Unit 3681

/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681
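The sanitization limitation at the center of the §103 dispute, removing any training data entry whose signal-to-noise ratio falls below a threshold before training, can be sketched in a few lines of Python. This is a minimal illustration only: the function names, the simple mean/standard-deviation SNR estimate, and the 10 dB threshold are assumptions chosen for demonstration, not taken from the application or from the Shi, Masood, or Mehta references.

```python
import numpy as np

def snr_db(entry: np.ndarray) -> float:
    """Rough SNR estimate in dB: mean level vs. standard deviation (noise proxy)."""
    noise = np.std(entry)
    if noise == 0:
        return float("inf")  # a perfectly constant entry has no measurable noise
    return 20.0 * np.log10(abs(np.mean(entry)) / noise)

def sanitize(entries: list[np.ndarray], threshold_db: float) -> list[np.ndarray]:
    """Drop every training data entry whose estimated SNR is below the threshold."""
    return [e for e in entries if snr_db(e) >= threshold_db]

# A high-SNR entry (strong level, little noise) and a low-SNR entry (pure noise).
clean = np.full(100, 5.0) + np.random.default_rng(0).normal(0.0, 0.1, 100)
noisy = np.random.default_rng(1).normal(0.0, 1.0, 100)

sanitized = sanitize([clean, noisy], threshold_db=10.0)
print(len(sanitized))  # prints 1: only the clean entry survives
```

The design point mirrors the claim language: the decision is a per-entry threshold test, and the "sanitized" set is produced by removal rather than by transforming the surviving data.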

Prosecution Timeline

Jul 15, 2024
Application Filed
Sep 16, 2024
Non-Final Rejection — §101, §103
Feb 03, 2025
Response Filed
Feb 21, 2025
Final Rejection — §101, §103
May 28, 2025
Request for Continued Examination
Jun 02, 2025
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12437863
SYSTEMS AND METHODS FOR CENTRALIZED BUFFERING AND INTERACTIVE ROUTING OF ELECTRONIC DATA MESSAGES OVER A COMPUTER NETWORK
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
7%
Grant Probability
16%
With Interview (+9.5%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 30 resolved cases by this examiner. Grant probability derived from career allow rate.
