Prosecution Insights
Last updated: April 19, 2026
Application No. 18/869,473

METHODS AND SYSTEMS FOR ANALYSIS OF LUNG ULTRASOUND

Final Rejection: §§ 101, 102, 103
Filed
Nov 26, 2024
Examiner
SZUMNY, JONATHON A
Art Unit
3686
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Koninklijke Philips N.V.
OA Round
2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (143 granted / 247 resolved; +5.9% vs TC avg)
Interview Lift: +60.6% allowance lift for resolved cases with an interview vs. without
Typical Timeline: 3y 0m average prosecution; 58 applications currently pending
Career History: 305 total applications across all art units
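The headline figures above can be reproduced from the raw counts on this page. A minimal sketch follows; the allowance rate uses the reported 143/247 counts, while the with/without-interview rates passed to `interview_lift` are illustrative placeholders (the page reports only the aggregate +60.6% lift, and the lift is assumed here to be a percentage-point difference):

```python
# Reproduce the examiner's headline metrics from raw counts.
# Only `granted` and `resolved` come from the page; the per-interview
# rates used below are hypothetical, not reported data.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift as a percentage-point difference (one plausible definition)."""
    return rate_with - rate_without

career = allow_rate(143, 247)  # counts from the page
print(f"Career allow rate: {career:.1f}%")  # ~57.9%, shown on the page as 58%
```

The same arithmetic applies to the statute-specific rates below, where each delta is the examiner's rate minus the Tech Center average estimate.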

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 247 resolved cases

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-15 were previously pending and subject to a non-final Office Action having a notification date of November 21, 2025 (“non-final Office Action”). Following the non-final Office Action, Applicant filed an amendment on February 19, 2026 (the “Amendment”), amending claims 1-4 and 10, canceling claim 5, and adding new claims 16-20. The Examiner notes that new claims 19 and 20 are inadvertently labeled "Original" and "Previously presented," respectively. The present Final Office Action addresses pending claims 1-4 and 6-20 as set forth in the Amendment.

Response to Arguments

Response to Applicant’s Arguments Regarding Claim Rejections Under 35 USC §112

These rejections are withdrawn in view of the Amendment.

Response to Applicant’s Arguments Regarding Claim Rejections Under 35 USC §101

At the top of page 10 of the Amendment, Applicant asserts that the present claims are not directed to "mental processes" because the claims in their entirety cannot practically be performed in the human mind. Applicant then asserts that "analyzing, using a trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient" and "analyzing, using a trained clinical lung feature severity algorithm, the identified first clinical feature to characterize a severity of the identified first clinical feature," as recited in the independent claims, are "very specific, non-generic trained algorithms…and specific, non-generic applications of those trained algorithm[s]," which allegedly takes the claims out of the "mental processes" grouping of abstract ideas. The Examiner disagrees.
The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). MPEP 2106.05(III). Claims do not recite a mental process when they do not contain limitations that can practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitations. See SRI Int’l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295, 1304 (Fed. Cir. 2019). MPEP 2106.05(III)(A). However, claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that the concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer"). MPEP 2106.05(III)(C).
In the present case, the independent claims recite a mental process because a person (e.g., a radiologist) could practically in their mind analyze a received temporal sequence of ultrasound image data frames of one or more zones of a lung of a patient to identify a spatiotemporal location of a first clinical feature (e.g., B-line, A-line, etc.) in the frames (e.g., a particular location in consecutive frames, which is a "spatiotemporal" location because the frames are a temporal sequence) and analyze/determine a severity of the feature (e.g., based on the shape, intensity level, location, etc. of the feature). These recitations, under their broadest reasonable interpretation, are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). MPEP 2106.04(a)(2)(III).

On pages 10-14 of the Amendment, Applicant appears to take the position that the recited trained algorithms (the first being a "trained clinical lung feature identification algorithm" that analyzes the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient, and the second being a "trained clinical lung feature severity algorithm" that analyzes the identified first clinical feature to characterize a severity of the identified first clinical feature) provide a "practical application" of the abstract idea because ML algorithms by their very nature involve processes and implementations that cannot be performed in the human mind, provide "substantial technical improvements," "perform[] computations and data manipulations that are far beyond human mental capacity," "operate with a level of precision and consistency that human minds cannot match," etc.
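For technical readers, the disputed phrase "spatiotemporal location across multiple frames" can be illustrated with a toy sketch: locating a bright artifact in each frame of a sequence yields (frame index, position) pairs, i.e., a location in both time and space. The frames, the brightest-column heuristic, and the threshold below are entirely hypothetical and are not drawn from the application or the cited art:

```python
# Toy illustration of a "spatiotemporal location": find the brightest
# column (a crude stand-in for a B-line-like artifact) in each frame of
# a temporal sequence, yielding (frame_index, column) pairs.

def brightest_column(frame):
    """Index of the column with the highest summed intensity."""
    sums = [sum(row[c] for row in frame) for c in range(len(frame[0]))]
    return max(range(len(sums)), key=sums.__getitem__)

def spatiotemporal_track(frames, min_intensity=2.0):
    """(frame_index, column) for frames whose peak column exceeds a threshold."""
    track = []
    for t, frame in enumerate(frames):
        c = brightest_column(frame)
        if sum(row[c] for row in frame) >= min_intensity:
            track.append((t, c))
    return track

# Three synthetic 3x4 frames with a bright artifact drifting rightward.
frames = [
    [[0, 9, 0, 0]] * 3,
    [[0, 0, 9, 0]] * 3,
    [[0, 0, 9, 0]] * 3,
]
print(spatiotemporal_track(frames))  # [(0, 1), (1, 2), (2, 2)]
```

The returned pairs carry both a temporal coordinate (the frame index) and a spatial one (the column), which is the sense in which the same position traced through consecutive frames is "spatiotemporal."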
However, and as noted in the rejection below (as well as in the non-final Office Action, to which Applicant has not responded on this point), claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id. Requirements that the machine learning model be “iteratively trained” or dynamically adjusted do not represent a technological improvement, because iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. Recentive, p. 12. “[T]he way machine learning works is the inputs are defined, the model is trained, and then the algorithm is actually updated and improved over time based on the input.” Id. That is, even though the claims at issue in Recentive recited iteratively training an ML model for a particular purpose and executing the ML model for that purpose (where such ML models would by their very nature also involve processes and implementations that cannot be performed in the human mind, "[perform] computations and data manipulations that are far beyond human mental capacity," "operate with a level of precision and consistency that human minds cannot match," etc.), the Federal Circuit nevertheless found that such ML limitations amounted to applying established methods of machine learning to a new data environment, which is not patent eligible. Id., pp. 10, 14.
In the present case, and as set forth herein, a person can practically in their mind analyze the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient and analyze the identified first clinical feature to characterize a severity of the identified first clinical feature. In this regard, that such practically-mentally-performable functions are performed by a "trained clinical lung feature identification algorithm" and a "trained clinical lung feature severity algorithm" amounts to merely reciting the idea of a solution or outcome without reciting details of how the solution is accomplished, which is equivalent to the words “apply it” (see MPEP § 2106.05(f)), and to applying established methods of machine learning to a new data environment, which is not patent eligible. Recentive, pp. 10, 14. Furthermore, the generic recitation of the "trained clinical lung feature identification algorithm" and "trained clinical lung feature severity algorithm" does not take the claims out of the above abstract idea category, because such recitation merely amounts to, at such a high level of generality, generally linking use of the abstract idea to a particular technological environment or field of use without altering or affecting how the steps of the at least one abstract idea are performed (see MPEP § 2106.05(h)).

Response to Applicant’s Arguments Regarding Claim Rejections Under 35 USC §103

Applicant's remarks regarding Kim and Isla Garcia are moot in view of the new rejections set forth herein, as necessitated by the Amendment.
At page 17 of the Amendment, Applicant asserts: "Regarding wherein identifying the first clinical feature comprises analysis of multiple frames in the temporal sequence, wherein identifying the first clinical feature comprises identification of a location of the first clinical feature within the multiple frames, and wherein identifying a location of the first clinical feature within the multiple frames comprises identifying a spatiotemporal location across multiple frames, Mehanian does not disclose two separate algorithms for identifying and characterizing clinical features respectively, but rather only describes the use of one single neural network for detecting at least one feature and then for determining ‘a respective position and a respective class of each of the detected at least one feature’ (cf. Mehanian, paragraph [0015]). Mehanian can then be said to disclose either a trained clinical lung feature identification algorithm or a trained clinical lung feature severity algorithm, but cannot be said to disclose two algorithms as is required by the claims." The Examiner disagrees, because [0047] and [0059] of Mehanian disclose using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames of the US video (a temporal sequence per [0003]) of a lung (such that the CNN is a "trained clinical lung feature identification algorithm"), while [0079]-[0080] disclose analyzing a detected feature to yield a severity grade, such as via use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) per the end of [0093] and [0104]-[0105] (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm"). Therefore, the trained CNN is a first algorithm for identifying/classifying a feature and the trained GMM is a second algorithm for characterizing a severity of the feature.
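The two-stage structure discussed above (a trained detector followed by an unsupervised severity grader such as k-means or a GMM) can be sketched schematically. The sketch below implements only the second stage with a tiny one-dimensional k-means over scalar feature intensities; the values, the choice of k=3, and the grading rule are all illustrative assumptions, not Mehanian's or the application's actual implementation:

```python
# Schematic second stage: grade detected-feature intensities via 1-D k-means.
# Purely illustrative of "unsupervised severity grading"; hypothetical data.

def kmeans_1d(values, k=3, iters=50):
    """Cluster scalar values into k groups; return centroids sorted ascending."""
    lo, hi = min(values), max(values)
    cents = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # spread initial centroids
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - cents[i]))].append(v)
        cents = [sum(g) / len(g) if g else cents[i] for i, g in enumerate(groups)]
    return sorted(cents)

def severity_grade(value, centroids):
    """Grade = index of the nearest centroid (0 = mildest)."""
    return min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))

# Hypothetical per-feature intensity scores from a detector's output.
intensities = [0.1, 0.2, 0.15, 0.5, 0.55, 0.9, 0.95]
cents = kmeans_1d(intensities, k=3)
print([severity_grade(v, cents) for v in intensities])  # [0, 0, 0, 1, 1, 2, 2]
```

Because k-means fits its centroids from unlabeled data, this stage is "trained in an unsupervised manner" in the sense the Office Action attributes to Mehanian's k-means/GMM grading.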
On page 18 of the Amendment, Applicant then asserts: "Mehanian further [does not disclose] the analysis of multiple frames by its neural network, but rather merely describes executing the neural network to determine ‘a probability that the image indicates that the lung exhibits, or does not exhibit, lung sliding’ (cf. Mehanian, paragraph [0018]). This is even when the system receives ‘a sequences of images of a lung, such as a video stream’ (cf. Mehanian, paragraph [0018]). Thus, Mehanian implicitly discloses extracting single images out of a received video and analyzing a mere single frame at any one time. There is no disclosure whatsoever of analyzing multiple frames, let alone to identifying a spatiotemporal location of a clinical feature across multiple frames." The Examiner disagrees that Mehanian fails to disclose analyzing multiple frames, let alone identifying a spatiotemporal location of a clinical feature across multiple frames, because [0047] and [0059] of Mehanian disclose using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames (i.e., multiple frames) of the US video (a temporal sequence as noted above), [0057] discloses processing US images, and steps 210-225 in Figure 9 illustrate how multiple images/frames are analyzed. Furthermore, because the US video image frames are a time sequence/series per [0003], [0047], and [0056], detecting/classifying/identifying a location of the features in the multiple frames/images per [0074] (which are time-related as noted above) amounts to identifying a "spatiotemporal location across multiple frames" of the feature.

Applicant then asserts: "Regarding ‘analyzing, using a trained clinical lung feature severity algorithm, the identified first clinical feature to characterize a severity of the identified first clinical feature,’ there is no mention at all in Mehanian of characterizing a severity of an identified clinical feature.
Indeed, Mehanian merely discloses determining a probability of a condition (cf. paragraph [0018] of Mehanian) or a ‘respective class of each of the detected at least one feature’ (cf. paragraph [0015] of Mehanian). There is no mention of a respective class corresponding to a severity of the clinical feature." The Examiner disagrees, because [0079]-[0080] of Mehanian disclose analyzing a detected feature to yield a severity grade, and the end of [0093] and [0104]-[0105] disclose use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) to determine the severity level (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm").

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 and 6-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Subject Matter Eligibility Criteria - Step 1: Claims 1-4 and 6-9 are directed to a method (i.e., a process), claims 10-15 are directed to a system (i.e., a machine), and claims 16-20 are directed to a non-transitory computer-readable storage medium (i.e., a manufacture). Accordingly, claims 1-4 and 6-20 are all within at least one of the four statutory categories. 35 USC §101.
Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong One:

Regarding Prong One of Step 2A of the Alice/Mayo test (which collectively includes the guidance in the January 7, 2019 Federal Register notice and the October 2019 and July 2024 updates issued by the USPTO as incorporated into the MPEP, as supported by relevant case law), the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception, or in other words whether a judicial exception is “set forth” or “described” in the claims. MPEP 2106.04(II)(A)(1). An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP 2106.04(a). Representative independent claim 10 includes limitations that recite at least one abstract idea. Specifically, independent claim 10 recites:

An ultrasound analysis system configured to analyze ultrasound image data, comprising: a temporal sequence of ultrasound image data for one or more of a plurality of different zones of one or both lungs of a patient; a trained clinical lung feature identification algorithm configured to analyze the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient, wherein identifying the first clinical feature comprises analysis of multiple frames in the temporal sequence, wherein identifying the first clinical feature comprises identification of a location of the first clinical feature within the multiple frames, and wherein identifying a location of the first clinical feature within the multiple frames comprises identifying a spatiotemporal location across multiple frames; a trained clinical lung feature severity algorithm configured to analyze the identified first clinical feature to characterize a severity of the identified first clinical
feature; a processor configured to: (i) analyze, using the trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient; (ii) analyze, using the trained clinical lung feature severity algorithm, the identified first clinical feature to characterize a severity of the identified first clinical feature; and a user interface configured to provide the identified first clinical feature and the characterized severity of the first clinical feature.

The Examiner submits that the foregoing underlined limitations constitute "mental processes" because they are observations/evaluations/judgments/analyses that can, at the currently claimed high level of generality, be practically performed in the human mind (e.g., with pen and paper). As an example, a person (e.g., a radiologist) could practically in their mind analyze a received temporal sequence of ultrasound image data frames of one or more zones of a lung of a patient to identify a spatiotemporal location of a first clinical feature (e.g., B-line, A-line, etc.) in the frames (e.g., a particular location in consecutive frames, which is a "spatiotemporal" location because the frames are a temporal sequence) and analyze/determine a severity of the feature (e.g., based on the shape, intensity level, location, etc. of the feature). These recitations, under their broadest reasonable interpretation, are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). MPEP 2106.04(a)(2)(III). Claims “directed to collection of information, comprehending the meaning of that collected information, and indication of the results, all on a generic computer network operating in its normal, expected manner,” fail step one of the Alice framework.
In re Killian, 45 F.4th 1373, 1380 (Fed. Cir. 2022). Claims directed to “collecting, analyzing, manipulating, and displaying data” are abstract. Univ. of Fla. Research Found., Inc. v. General Elec. Co., 916 F.3d 1363, 1368 (Fed. Cir. 2019). Claims directed to organizing, storing, and transmitting information have been determined to be directed to an abstract idea. Cyberfone Sys., L.L.C. v. CNN Interactive Grp., Inc., 558 F. App’x 988, 992 (Fed. Cir. 2014). Accordingly, the claim recites at least one abstract idea.

Furthermore, dependent claims 2-4, 6-9, 11-13, and 17-20 further define the at least one abstract idea (and thus fail to make the abstract idea any less abstract) as set forth below:
-Claims 2, 3, and 17 call for analyzing the received temporal sequence of ultrasound image data to identify a different second clinical feature in a lung of the patient, analyzing the identified second clinical feature to characterize a severity of the identified second clinical feature, and providing the identified second clinical feature and the characterized severity of the second clinical feature. These steps just further define the "mental processes" discussed above.
-Claims 4, 11, and 18 call for analyzing the temporal sequence to identify a different second clinical feature, analyzing the second feature to determine a corresponding severity, and prioritizing the identified first or second clinical feature based on one or more of a type of the identified clinical features, the characterized severity of the features, a timing of the features in the temporal sequence of ultrasound image data, and/or a suspected or diagnosed clinical condition of the patient; and providing the prioritization. These steps just further define the "mental processes" discussed above.
-Claims 6, 7, 12, and 19 recite how providing the identified first clinical feature and its characterized severity includes providing a subset of the received temporal sequence of ultrasound image data (e.g., where the subset is a temporal sequence itself) that includes the identified location of the identified first clinical feature. These steps just further define the "mental processes" discussed above.
-Claims 8, 9, and 20 call for receiving feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature (e.g., an adjustment of the characterized severity of the first clinical feature, a selection of one or more frames in the temporal sequence of ultrasound image data, an acceptance or rejection of the feature, and/or a change of the type of feature), which is practically performable in the human mind, such as by listening or reviewing ("mental processes").
-Claim 13 calls for receiving feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature and generating, based on the received feedback, a report including the identified first clinical feature and/or the characterized severity of the first clinical feature. These steps are practically performable in the human mind ("mental processes").

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong Two:

Regarding Prong Two of Step 2A of the Alice/Mayo test, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. As noted at MPEP §2106.04(II)(A)(2), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” MPEP §2106.05(I)(A). In the present case, the additional limitations beyond the above-noted at least one abstract idea recited in the claim are as follows (where the bolded portions are the “additional limitations” while the underlined portions continue to represent the at least one “abstract idea”): An ultrasound analysis system configured to (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) analyze ultrasound image data, comprising: a temporal sequence of ultrasound image data for one or more of a plurality of different zones of one or both lungs of a patient; a trained clinical lung feature identification algorithm configured to (using computers or machinery as mere tools to perform the abstract idea as noted below and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, see MPEP § 2106.05(f)) analyze the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient, wherein identifying the first clinical feature comprises analysis of multiple frames in the temporal sequence, wherein identifying the first clinical feature comprises identification of a location of the first clinical feature within the multiple frames, and wherein identifying a location of the first clinical feature within the multiple frames comprises identifying a spatiotemporal location across multiple frames; a trained clinical lung feature severity algorithm configured to (using computers or machinery as mere tools to perform the abstract idea as noted below and/or merely 
reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, see MPEP § 2106.05(f)) analyze the identified first clinical feature to characterize a severity of the identified first clinical feature; a processor configured to (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)): (i) analyze, using the trained clinical lung feature identification algorithm (using computers or machinery as mere tools to perform the abstract idea as noted below and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, see MPEP § 2106.05(f)), the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient; (ii) analyze, using the trained clinical lung feature severity algorithm (using computers or machinery as mere tools to perform the abstract idea as noted below and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, see MPEP § 2106.05(f)), the identified first clinical feature to characterize a severity of the identified first clinical feature; and a user interface configured to (using computers or machinery as mere tools to perform the abstract idea as noted below and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, see MPEP § 2106.05(f)) provide the identified first clinical feature and the characterized severity of the first clinical feature. For the following reasons, the Examiner submits that the above-identified additional limitations, when considered as a whole with the limitations reciting the at least one abstract idea, do not integrate the above-noted at least one abstract idea into a practical application. 
Regarding the additional limitations of the analysis system including the processor, the various trained algorithms, and the user interface, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words “apply it” (see MPEP § 2106.05(f)). “Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101.” Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id. Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Furthermore, looking at the additional limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2). For these reasons, representative independent claim 10 and analogous independent claim 1 do not recite additional elements that integrate the judicial exception into a practical application. Accordingly, representative independent claim 10 and analogous independent claim 1 are directed to at least one abstract idea.
The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application as set forth below:
-Claim 14 recites how the user interface further includes a summary display of the temporal sequence of ultrasound image data and the identified first clinical feature such that a user can select a region of the temporal sequence and/or the identified first clinical feature for review, which just amounts to using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).
-Claim 15 recites how the summary display of the temporal sequence of ultrasound image data and/or the identified first clinical feature is updated by the processor to show a status of a review by the user, which again just amounts to using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).
When the above additional limitations are considered as a whole along with the limitations directed to the at least one abstract idea, the at least one abstract idea is not integrated into a practical application. Therefore, the claims are directed to at least one abstract idea.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2B:

Regarding Step 2B of the Alice/Mayo test, representative independent claim 10 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Regarding the additional limitations of the analysis system including the processor, the various trained algorithms, and the user interface, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea and/or merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words “apply it” (see MPEP § 2106.05(f)). “Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101.” Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id. The dependent claims also do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.
- Claim 14 recites how the user interface further includes a summary display of the temporal sequence of ultrasound image data and the identified first clinical feature such that a user can select a region of the temporal sequence and/or the identified first clinical feature for review, which amounts to using a computer or other machinery as a tool performing its typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).
- Claim 15 recites how the summary display of the temporal sequence of ultrasound image data and/or the identified first clinical feature is updated by the processor to show a status of a review by the user, which again amounts to using a computer or other machinery as a tool performing its typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).

Therefore, claims 1-4 and 6-20 are ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. 
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-3, 10, 16, and 17 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian"): Regarding claim 1, Mehanian discloses a method for analyzing ultrasound image data ([0047] discloses using CNNs to identify/classify features/objects in US images), comprising: receiving a temporal sequence of ultrasound image data for one or more of a plurality of different zones of one or both lungs of a patient ([0047], [0056] discloses generating a time sequence/series of US images (i.e., ultrasound video) of a lung of a patient (necessarily for at least one zone of the lung)); analyzing, using a trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a first clinical feature in a lung of the patient ([0047], [0059] discloses using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames of the US video (temporal sequence as noted above); for instance, [0074] discloses features such as A-lines, B-lines, pleural effusion, etc.), wherein identifying the first clinical feature comprises analysis of multiple frames in the temporal sequence (as [0059] detects/identifies features/objects in one or more frames of the US video, then multiple frames in the US video (which is a temporal sequence as noted above) are analyzed; also, [0057] discloses processing US images and steps 210-225 in Figure 9 illustrate how multiple images/frames are analyzed), wherein identifying the first clinical feature comprises identification of a location of the first clinical feature within the multiple 
frames ([0074] discloses detecting/classifying a location in the image frames 211 of the feature); and wherein identifying a location of the first clinical feature within the multiple frames comprises identifying a spatiotemporal location across multiple frames (because the US video image frames are a time sequence/series per [0003], [0047], [0056], detecting/classifying/identifying a location of the features in the multiple frames/images per [0074] (which are time-related as noted above) amounts to identifying a "spatiotemporal location across multiple frames" of the feature); analyzing, using a trained clinical lung feature severity algorithm, the identified first clinical feature to characterize a severity of the identified first clinical feature ([0079]-[0080] discloses analyzing a detected feature to yield a severity grade; furthermore, the end of [0093] and [0104]-[0105] discloses use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) to determine the severity level (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm")); and providing, via a user interface, the identified first clinical feature and the characterized severity of the first clinical feature ([0080]-[0086] discloses outputting the features and severities; furthermore, as [0093] discloses how outputs are reported to a user, the outputs would necessarily be provided via a user interface). 
Regarding claim 2, Mehanian discloses the method of claim 1, further including analyzing, using the trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a second clinical feature in a lung of the patient, wherein the second clinical feature is different from the first clinical feature ([0047], [0059] discloses using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames of the US video (temporal sequence as noted above); for instance, [0074] discloses features such as A-lines, B-lines, pleural effusion, etc. (different first and second clinical features); also, [0094], [0096] discloses how the same CNN can detect each of the features); and analyzing, using the trained clinical lung feature severity algorithm, the identified second clinical feature to characterize a severity of the identified second clinical feature ([0079]-[0080] discloses analyzing a detected feature (the second clinical feature) to yield a severity grade; furthermore, the end of [0093] and [0104]-[0105] discloses use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) to determine the severity level (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm")); wherein said providing step further comprises providing, via the user interface, the identified second clinical feature and the characterized severity of the second clinical feature ([0080]-[0086] discloses outputting the features and severities; furthermore, as [0093] discloses how outputs are reported to a user, the outputs would necessarily be provided via a user interface). 
Regarding claim 3, Mehanian discloses the method of claim 1, further including analyzing, using a second trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a second clinical feature in a lung of the patient, wherein the second clinical feature is different from the first clinical feature ([0047], [0059] discloses using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames of the US video (temporal sequence as noted above); for instance, [0074] discloses using different CNN detectors (trained per end of [0075]) to respectively identify a plurality of different features (e.g., A-lines, B-lines, etc.) (different first and second clinical features)); and analyzing, using the trained clinical lung feature severity algorithm, the identified second clinical feature to characterize a severity of the identified second clinical feature ([0079]-[0080] discloses analyzing a detected feature (the second clinical feature) to yield a severity grade; furthermore, the end of [0093] and [0104]-[0105] discloses use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) to determine the severity level (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm")); wherein said providing step further comprises providing, via the user interface, the identified second clinical feature and the characterized severity of the second clinical feature ([0080]-[0086] discloses outputting the features and severities; furthermore, as [0093] discloses how outputs are reported to a user, the outputs would necessarily be provided via a user interface).

Claim 10 is rejected in view of Mehanian similar to as discussed in relation to claim 1 above. Regarding the recited processor, Figure 24 illustrates a computing machine 2406 that includes a processor per [0195]. 
Claim 16 is rejected in view of Mehanian similar to as discussed in relation to claims 1 and 10 above. Regarding the recited non-transitory computer-readable storage medium comprising instructions executable by a processor, Figure 24 illustrates a computing machine 2406 that includes a processor and memory per [0195], and [0346] discloses a non-transitory computer-readable medium storing instructions executable by a computing circuit. Claim 17 is rejected in view of Mehanian similar to as discussed in relation to claims 2-3 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian"), and further in view of U.S. Patent App. Pub. No. 2022/0148736 to Hafez et al. ("Hafez").

Regarding claim 4, Mehanian discloses the method of claim 1, further including analyzing, using the trained clinical lung feature identification algorithm or a second trained clinical lung feature identification algorithm, the received temporal sequence of ultrasound image data to identify a second clinical feature in a lung of the patient, wherein the second clinical feature is different from the first clinical feature ([0047], [0059] discloses using a CNN (trained per [0048]) to detect/identify features/objects in one or more frames of the US video (temporal sequence as noted above); for instance, [0074] discloses features such as A-lines, B-lines, pleural effusion, etc. 
(different first and second clinical features); also, [0094], [0096] discloses how the same CNN can detect each of the features); and analyzing, using the trained clinical lung feature severity algorithm, the identified second clinical feature to characterize a severity of the identified second clinical feature ([0079]-[0080] discloses analyzing a detected feature (the second clinical feature) to yield a severity grade; furthermore, the end of [0093] and [0104]-[0105] discloses use of k-means, GMM, etc. (ML algorithms which are known to be trained in an unsupervised manner) to determine the severity level (such that at least one of the disclosed algorithms is a "trained clinical lung feature severity algorithm")); … wherein said providing step further comprises providing the identified second clinical feature, the characterized severity of the second clinical feature ([0080]-[0086] discloses outputting the features and severities), and …

However, Mehanian appears to be silent regarding prioritizing, using a trained clinical feature prioritization algorithm, the identified first clinical feature or the identified second clinical feature, wherein prioritization is based on one or more of a type of the identified clinical feature, the characterized severity of the first clinical feature and second clinical feature, a timing of the first clinical feature and/or second clinical feature in the temporal sequence of ultrasound image data, and/or a suspected or diagnosed clinical condition of the patient; wherein said providing step further comprises providing said prioritization. 
Nevertheless, Hafez teaches ([0131]-[0135]) that it was known in the healthcare informatics art for an adaptive algorithm ("trained clinical feature prioritization algorithm") to rank ("prioritize") features (radiology image features per [0044], [0057]) based on their importance as to a model prediction of metastasis (suspected/diagnosed clinical condition) and display the ranked features (Figure 11) ("providing the prioritization") according to, for instance, color, shade, etc. to advantageously allow medical professionals to review how different features affect clinical diagnoses and identify appropriate interventions ([0320]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have prioritized, using a trained clinical feature prioritization algorithm, the identified first clinical feature or the identified second clinical feature, wherein prioritization is based on one or more of a type of the identified clinical feature, the characterized severity of the first clinical feature and second clinical feature, a timing of the first clinical feature and/or second clinical feature in the temporal sequence of ultrasound image data, and/or a suspected or diagnosed clinical condition of the patient, wherein said providing step further comprises providing said prioritization, in the system of Mehanian similar to as taught by Hafez to advantageously allow medical professionals to review how different features affect clinical diagnoses and identify appropriate interventions. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). 
Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claims 11 and 18 are rejected in view of the Mehanian/Hafez combination as discussed above in relation to claim 4.

Claims 6, 7, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian") in view of U.S. Patent App. Pub. No. 2011/0172526 to Lachaine et al. ("Lachaine"):

Regarding claim 6, Mehanian discloses the method of claim 1, but appears to be silent regarding wherein providing the identified first clinical feature and the characterized severity of the first clinical feature comprises providing a subset of the received temporal sequence of ultrasound image data, the subset comprising the identified location of the identified first clinical feature. Nevertheless, Lachaine teaches (Abstract and [0064]) that it was known in the healthcare informatics art to provide a subset of a temporal sequence of US images of an anatomical feature of an anatomical region of a patient to advantageously facilitate tracking of the feature in relation to diagnosis and development of a treatment plan for the patient ([0009], [0030]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the providing of the identified first clinical feature and the characterized severity of the first clinical feature of Mehanian to include providing a subset of the received temporal sequence of ultrasound image data, the subset including the identified location of the identified first clinical feature, similar to as taught by Lachaine to advantageously facilitate tracking of the feature in relation to diagnosis and development of a treatment plan for the patient. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id. 
Regarding claim 7, the Mehanian/Lachaine combination discloses the method of claim 6, further including wherein the subset is a temporal sequence (Abstract and [0064] of Lachaine discloses that it was known in the healthcare informatics art to provide a subset of a temporal sequence of US images of an anatomical feature of an anatomical region of a patient to advantageously facilitate tracking of the feature in relation to diagnosis and development of a treatment plan for the patient ([0009], [0030]); again, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the providing of the identified first clinical feature and the characterized severity of the first clinical feature of Mehanian to include providing a subset of the received temporal sequence of ultrasound image data, the subset being a temporal sequence and including the identified location of the identified first clinical feature, similar to as taught by Lachaine to advantageously facilitate tracking of the feature in relation to diagnosis and development of a treatment plan for the patient. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.).

Claims 12 and 19 are rejected in view of the Mehanian/Lachaine combination as discussed above in relation to claim 6.

Claims 8, 9, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian") in view of U.S. Patent App. Pub. No. 
2024/0428561 to Katchinskiy et al. ("Katchinskiy"): Regarding claim 8, Mehanian discloses the method of claim 1, but appears to be silent regarding receiving, via the user interface, feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature. Nevertheless, Katchinskiy teaches ([0007], [0100]) that it was known in the healthcare informatics and machine learning art to receive feedback from a user via a GUI that corrects a feature map generated by a trained model to advantageously allow for retraining of the model thereby improving model accuracy. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have received, via the user interface, feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature in the system of Mehanian as taught by Katchinskiy to advantageously allow for retraining of the model thereby improving model accuracy. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id. 
Regarding claim 9, the Mehanian/Katchinskiy combination discloses the method of claim 8, further including wherein the feedback comprises an adjustment of the characterized severity of the first clinical feature, a selection of one or more frames in the temporal sequence of ultrasound image data, an acceptance or rejection of the feature, and/or a change of the type of feature ([0100] of Katchinskiy discloses how the user can correct a feature map generated by the model which amounts to a "rejection" and/or "change" of the feature output by the model; again, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have received, via the user interface, feedback from a user about the provided identified first clinical feature (e.g., rejection/change of the feature) in the system of Mehanian as taught by Katchinskiy to advantageously allow for retraining of the model thereby improving model accuracy. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.). Claim 20 is rejected in view of the Mehanian/Katchinskiy combination as discussed above in relation to claims 8-9. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian") in view of U.S. Patent App. Pub. No. 2006/0274928 to Collins et al. 
("Collins"): Regarding claim 13, Mehanian discloses the ultrasound analysis system of claim 10, but appears to be silent regarding wherein the processor is further configured to: receive via the user interface, feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature; and generate, based on the received feedback, a report comprising the identified first clinical feature and/or the characterized severity of the first clinical feature. Nevertheless, Collins teaches ([0036], [0069], [0077]) that it was known in the healthcare informatics art to automatically analyze medical images, detect/identify features/characteristics within the images, allow interactive feedback from a user to dynamically modify a list of detected features/characteristics, and generate a report with the user modifications/feedback including an automatically determined diagnosis based on the feedback thereby improving the accuracy of generated diagnoses by taking into account the user feedback ([0066]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have received via the user interface in Mehanian, feedback from a user about the provided identified first clinical feature and/or the characterized severity of the first clinical feature; and generated, based on the received feedback, a report comprising the identified first clinical feature and/or the characterized severity of the first clinical feature similar to as taught by Collins to advantageously improve the accuracy of generated diagnoses by taking into account the user feedback. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). 
Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id. Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2020/0054306 to Mehanian et al. ("Mehanian") and further in view of U.S. Patent No. 10,140,421 to Bernard et al. ("Bernard"): Regarding claim 14, Mehanian discloses the ultrasound analysis system of claim 10, but appears to be silent regarding wherein the user interface further comprises a summary display of the temporal sequence of ultrasound image data and the identified first clinical feature, wherein a user can select a region of the temporal sequence and/or the identified first clinical feature for review. Nevertheless, Bernard teaches (Figures 8A-8R, 22:35-23:5, 23:65-25:50) that it was known in the healthcare informatics art to automatically identify abnormalities/features/findings in medical scan data (ultrasound per 5:29-30 of the lung per 9:47), display a summary of a plurality/sequence of images and the identified abnormalities/features/findings, and allow a user to select one or more of the images and abnormalities/features/findings for review. Upon selecting a particular one of the images for review, the display would necessarily indicate that such image was selected (which indicates its "selected" status) and display corresponding abnormalities/features/findings. Furthermore, Figures 8A and 8R show how a user can select various buttons (e.g., edit, approve, deny) in relation to the abnormalities/features/findings which become highlighted upon selection (indicating a status) while Figures 8K-8R show checkmarks and x's indicating review status of various findings. 
This arrangement advantageously aids medical professionals in diagnosing, triaging, and classifying medical scans (22:19-29). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the user interface to further include a summary display of the temporal sequence of ultrasound image data and the identified first clinical feature, wherein a user can select a region of the temporal sequence and/or the identified first clinical feature for review in the system of Mehanian as taught by Bernard to advantageously aid medical professionals in diagnosing, triaging, and classifying medical scans. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id. Regarding claim 15, the Mehanian/Bernard combination discloses the ultrasound analysis system of claim 14, further including wherein, after review by the user, the summary display of the temporal sequence of ultrasound image data and/or the identified first clinical feature is updated by the processor to show a status of the review (as noted above, upon selecting a particular one of the images for review, the display of Bernard would necessarily indicate that such image was selected (which indicates its "selected" status) and display corresponding abnormalities/features/findings. 
Furthermore, Figures 8A and 8R of Bernard show how a user can select various buttons (e.g., edit, approve, deny) in relation to the abnormalities/features/findings which become highlighted upon selection (indicating a status) while Figures 8K-8R show checkmarks and x's indicating review status of various findings; again, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the user interface to further include a summary display of the temporal sequence of ultrasound image data and the identified first clinical feature, wherein a user can select a region of the temporal sequence and/or the identified first clinical feature for review and the summary display of the temporal sequence of ultrasound image data and/or the identified first clinical feature is updated by the processor to show a status of the review in the system of Mehanian as taught by Bernard to advantageously aid medical professionals in diagnosing, triaging, and classifying medical scans. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHON A. SZUMNY whose telephone number is (303) 297-4376. The examiner can normally be reached Monday-Friday 7-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Dunham, can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHON A. SZUMNY/Primary Examiner, Art Unit 3686

Prosecution Timeline

Nov 26, 2024
Application Filed
Nov 10, 2025
Non-Final Rejection — §101, §102, §103
Feb 19, 2026
Response Filed
Mar 05, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597508
COMPUTERIZED DECISION SUPPORT TOOL FOR POST-ACUTE CARE PATIENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12586667
PSEUDONYMIZED STORAGE AND RETRIEVAL OF MEDICAL DATA AND INFORMATION
2y 5m to grant Granted Mar 24, 2026
Patent 12562277
METHOD OF AND SYSTEM FOR DETERMINING A PRIORITIZED INSTRUCTION SET FOR A USER
2y 5m to grant Granted Feb 24, 2026
Patent 12537102
SYSTEM AND METHOD FOR DETERMINING TRIAGE CATEGORIES
2y 5m to grant Granted Jan 27, 2026
Patent 12505912
METHODS AND SYSTEMS FOR RESTING STATE FMRI BRAIN MAPPING WITH REDUCED IMAGING TIME
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+60.6%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 247 resolved cases by this examiner. Grant probability derived from career allow rate.
