Prosecution Insights
Last updated: April 19, 2026
Application No. 17/786,105

SYSTEMS AND METHODS FOR COPD MONITORING

Non-Final Office Action: §101, §103, §112

Filed: Jun 16, 2022
Examiner: CHOI, PETER H
Art Unit: 3681
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: ResMed
OA Round: 2 (Non-Final)

Grant Probability: 26% (At Risk)
Predicted OA Rounds: 2-3
Predicted Time to Grant: 5y 5m
Grant Probability with Interview: 45%

Examiner Intelligence

Career Allow Rate: 26% (56 granted / 215 resolved; -26.0% vs TC avg)
Interview Lift: +19.4% (allow rate in resolved cases with an interview vs. without)
Avg Prosecution: 5y 5m
Total Applications: 251 across all art units (36 currently pending)
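The card figures above are internally consistent and can be reproduced with a couple of lines of arithmetic. A quick sanity check (not part of the analytics tool; variable names are mine):

```python
# Reproduce the examiner cards above from the raw counts they display.
granted, resolved = 56, 215

career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 26.0%, matching the "26%" card

# Adding the +19.4% interview lift to the ~26% base rate roughly
# reproduces the "45% with interview" estimate (45.4% before rounding).
print(f"With interview: {career_allow_rate + 19.4:.1f}%")
```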

Statute-Specific Performance

§101: 32.7% (-7.3% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

TC averages are estimates; figures based on career data from 215 resolved cases.
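Each per-statute figure is reported as an offset from the Tech Center average, so the implied TC baseline can be recovered by subtracting the delta. An illustrative consistency check (the code and variable names are mine, not the tool's):

```python
# Per-statute rates and their reported deltas vs. the Tech Center average.
stats = {
    "101": (32.7, -7.3),
    "103": (37.1, -2.9),
    "102": (11.1, -28.9),
    "112": (14.4, -25.6),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # delta = rate - tc_avg, so tc_avg = rate - delta
    print(f"Section {statute}: examiner {rate}% vs. implied TC average {tc_avg:.0f}%")
```

All four rows imply the same ~40% Tech Center baseline, consistent with a single TC average estimate underlying the comparison.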

Office Action

Grounds of rejection: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is responsive to the response filed May 28, 2025. Claims 6, 18 and 21 have been canceled. Claims 26 and 28 were previously cancelled. Claims 22-25 and 27 have been withdrawn as a result of the election/restriction. Claims 1-5, 7-17 and 19-20 are currently pending and have been fully examined.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-5, 7-17 and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 and 10 recite the limitation "determine, using the trained machine learning algorithm, a COPD trajectory based on the first set of sputum features, the second set of sputum features, and the time period," or a limitation that is substantially similar. Although these limitations are part of the original disclosure by being part of the originally filed claims, it is possible for originally filed claims to lack sufficient written description support if the claim language is "generic or functional, or both." (MPEP 2161.01: "The written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, applies to all claims including original claims that are part of the disclosure as filed. Ariad, 598 F.3d at 1349, 94 USPQ2d at 1170. As stated by the Federal Circuit, '[a]lthough many original claims will satisfy the written description requirement, certain claims may not.'") These limitations reciting "determine a COPD trajectory" use functional language, but the original specification does not give any further explanation as to how the COPD trajectory is determined based on the processed photographs. Therefore, the claims must be rejected under 35 USC 112(a).

Claims 1 and 10 also recite "a trained machine learning algorithm" used to process the first and second images to identify pixels representing a sputum, identify sets of sputum features, and determine a COPD trajectory. However, the specification does not provide any support for a machine learning algorithm, or a trained machine learning algorithm, performing any of these steps. The specification describes, at para. [0054], that multiple algorithms may be executed; at para. [0072], that various image processing and computer vision algorithms known in the art may be utilized; at para. [0076], that computer vision algorithms identify sputum features; and at para. [0068], that scaled-down versions of machine learning algorithms may run locally on a mobile device. This does not support that the image processing and computer vision algorithms are machine learning (or trained machine learning) algorithms specifically, and does not specify how or where machine learning algorithms are used or for what purpose.

Claims 2-5, 7-9, 11-17 and 19-20 all ultimately depend from one of claims 1 or 10 and inherit the defects of the claim from which they depend. Therefore, claims 2-5, 7-9, 11-17 and 19-20 must also be rejected under 35 USC 112(a).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7-17, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1

The claims recite subject matter within statutory categories, as a process (claim 10) and a machine (claim 1), which are recited as a method and a system that perform the following steps and/or functions:

Claim 1: receive a first image output from the camera with a first time stamp and a second image output from the camera with a second time stamp; compare the first time stamp and the second time stamp to determine a time period between the first image and the second image; process, using a trained machine learning algorithm, the first image and the second image to identify a first portion of the pixels within the first image and a second portion of pixels within the second image that represent a sputum; process, using the trained machine learning algorithm, the first portion to identify a first set of sputum features; process, using the trained machine learning algorithm, the second portion to identify a second set of sputum features; determine, using the trained machine learning algorithm, a COPD trajectory based on the first set of sputum features, the second set of sputum features, and the time period; and display the COPD trajectory on the display.

Claim 10: receiving, at a control system comprising one or more processors, a first image output from a camera with a first time stamp and a second image output from the camera with a second time stamp; comparing the first and second time stamps to determine a time period between the first and second images; processing, using a trained machine learning algorithm, the first and second images to identify a first portion of the first image and a second portion of the second image that represent a sputum; processing, using the trained machine learning algorithm, the first portion to identify a first set of sputum features and processing the second portion to identify a second set of sputum features; processing, using the trained machine learning algorithm, the first and second sets of sputum features and the time period to determine a COPD trajectory; and displaying the COPD trajectory on the display.

Step 2A, Prong 1

When taken individually and as a whole, the steps correspond to concepts identified as abstract ideas by the courts, such as "mental processes," which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). The steps also correspond to "certain methods of organizing human activity," which include concepts relating to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). An example of managing personal behavior recited in a claim is a mental process that a neurologist should follow when testing a patient for nervous system malfunctions. In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982).

Claim 1: The claim is directed to a system to perform the process of determining a COPD trajectory, which is performed by the system performing the steps recited in claim 1 above. These steps amount to a mental process because they are all recited at such a high degree of generality as to encompass every possible method of making the determination, which includes mental processes. They also amount to certain methods of organizing human activity, as encompassing the steps or rules a medical professional would follow to make a determination based on received images. Comparing the time stamps to determine a time period is evaluating the time stamps and making a judgment regarding the difference in time between them. Processing the images can be as simple as observing the images. Identifying portions of the pixels within the first and second images that represent a sputum can be as simple as evaluating the images and making a judgment as to which portions of the images depict sputum. Processing the portions to identify first and second sets of sputum features is evaluating the portions of the images that are sputum and making a judgment as to what features are present in the sputum (e.g., determining what color the sputum is). Processing the time stamps and determining a COPD trajectory based on the sputum features is evaluating the identified features and making a judgment as to the COPD trajectory of a patient.

Claim 10: Claim 10 is substantially similar to claim 1, the only difference being the statutory category of invention (claim 1 being a system, claim 10 being a method). Thus, the same rationale and analysis for claim 1 applies to claim 10.

Step 2A, Prong 2

The claims do not include additional elements that are sufficient to integrate the abstract idea into a practical application because the additional elements amount to: insignificant extra-solution activity (MPEP 2106.05(g)), generally linking the application of the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)), or mere instructions to apply it with a computer (MPEP 2106.05(f)), as discussed below.
Insignificant Extra-Solution Activity

The steps of receiving the data are examples of mere data gathering, which is an insignificant extra-solution activity (MPEP 2106.05(g)). The steps specifying the data to be sputum or related to COPD are examples of selecting by type or source the data to be manipulated, which is an extra-solution activity (MPEP 2106.05(g)). The steps of displaying the COPD trajectory are examples of necessary data outputting. Necessary data outputting is an insignificant extra-solution activity (MPEP 2106.05(g)). Insignificant extra-solution activities are not sufficient to integrate the abstract idea into a practical application or cause the claim to amount to significantly more than the abstract idea (MPEP 2106.05(g)).

Generally Linking Implementation to a Particular Technological Environment or Field of Use

The steps reciting generically recited components of a computer system, such as the camera and the computer components, only serve to generally link the implementation of the abstract idea to a technological environment, which here would be a computer system with a camera connected to the system. Generally linking the application of the abstract idea to a particular field of use or technological environment is not sufficient to integrate the abstract idea into a practical application or cause the claim to amount to significantly more than the abstract idea (MPEP 2106.05(h)).

Mere Instructions to Apply the Abstract Idea Using a Computer

The steps reciting the use of computer components, such as having the steps executed by the control system coupled to the memory executing machine executable code, and the use of a trained machine learning algorithm, serve as mere instructions to apply the abstract idea using a computer. Mere instructions to apply the abstract idea using a computer are not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.05(f)).
Step 2B

The claims also do not include additional elements that are sufficient to be considered significantly more than the abstract idea because the additional elements amount to: insignificant extra-solution activity (MPEP 2106.05(g)), mere instructions to apply it with a computer (MPEP 2106.05(f)), generally linking the application of the abstract idea to a particular field of use or technological environment (MPEP 2106.05(h)), or well-understood, routine, and conventional limitations (MPEP 2106.05(d)), as discussed below.

The steps addressed above in Step 2A, Prong 2, when considered again under Step 2B, do not make the claims amount to significantly more than the abstract idea because those steps are still considered to be either insignificant extra-solution activity, mere instructions to apply an abstract idea with a computer, or generally linking the application of the abstract idea to a particular field of use or technological environment, which are types of limitations that are not sufficient to make the claims amount to significantly more than the abstract idea (MPEP 2106.05.I.A).

The steps recited as either being part of the abstract idea or insignificant extra-solution activity are all examples of at least one of: storing and retrieving data from a memory (receiving data if that data is stored locally), sending and receiving data over a network (receiving data if that data is sent from an external device), electronic recordkeeping, or performing repetitive calculations. All of those functions have been identified as well-understood, routine, and conventional functions of a generic computer that are not significantly more than the abstract idea when claimed broadly or as an extra-solution activity (MPEP 2106.05(d).II).

The recited computer components (e.g., a display, a camera, a memory, a control system comprising one or more processors) are all generically recited components (see specification, par. [0065]-[0068], which describe the system as including mobile devices and servers with a CPU). Further, the algorithm used to process the images is identified in the specification as being "known in the art" (specification, par. [0072]: "This may include various image processing and computer vision algorithms known in the art for identifying boundaries of sputum in the image including color and size based algorithms that identify typical sputum colors (e.g. white, yellow, clearish, brown, or red for blood)."). Commercially available components, generic computer components, and specially-programmed computer components performing the functions of a generic computer are not considered to amount to significantly more than the abstract idea (MPEP 2106.05(b)). Similarly, the use of a machine learning algorithm is not described with any particularity in the specification (par. [0068]) other than its ability to broadly be utilized, with no mention of "trained" machine learning algorithms or of any particular step in which they are used. The specification does broadly mention the use of algorithms (para. [0072], [0076]) in processing the images and identifying sputum features, but not that they are machine learning algorithms or trained machine learning algorithms. Even if they were trained machine learning algorithms, they are utilized in an "apply it" manner.

When considered as a whole, the components do not provide anything that is not present when the component parts are considered individually. Using the broadest reasonable interpretation, the system as a whole is a system to capture an image from a camera, analyze the image, and make a determination of a COPD trajectory. This is a system of general purpose computers performing the abstract idea and insignificant extra-solution activities through generically described devices performing well-understood, routine, and conventional functions of a generic computer (MPEP 2106.05(d).II).

Dependent Claim Analysis

Claims 2-5 and 7-9 ultimately depend from claim 1 and include all the limitations of claim 1. Therefore, claims 2-5 and 7-9 recite the same abstract ideas of certain methods of organizing human activity and mental processes as claim 1. Claims 2-5 and 7-8 all recite additional limitations that serve to select by type or source the data to be manipulated. Selecting by type or source the data to be manipulated is a type of insignificant extra-solution activity that is not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.05(g)). Claim 9 recites additional limitations that amount to necessary data outputting, e.g., displaying a stock image associated with the milestone. Necessary data outputting is a type of insignificant extra-solution activity that is not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.05(g)).

Claims 11-17 and 19-20 ultimately depend from claim 10 and include all the limitations of claim 10. Therefore, claims 11-17 and 19-20 recite the same abstract ideas of certain methods of organizing human activity and mental processes as claim 10. Claims 11-13, 16, and 20 all recite additional limitations that further describe the abstract idea by describing the analysis that is performed as part of the abstract idea. These steps are either additional mental processes or mathematical concepts, which are both abstract idea groupings.
The abstract idea and additional limitations that also recite an abstract idea are not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.04.II.A.2). Claims 14-15 and 19 all recite additional limitations that serve to select by type or source the data to be manipulated. Selecting by type or source the data to be manipulated is a type of insignificant extra-solution activity that is not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.05(g)). Claim 17 recites additional limitations that amount to necessary data outputting, e.g., sending an alert notification. Necessary data outputting is a type of insignificant extra-solution activity that is not sufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (MPEP 2106.05(g)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 8, and 10-17 are rejected under 35 U.S.C. 103 as being unpatentable over Rajasekhar (US PG Pub. 2020/0135334) in view of Turek (US PG Pub. 2005/0105788).

Claim 1

Regarding claim 1, Rajasekhar discloses:

A system for monitoring a COPD trajectory, the system comprising: Abstract, "A method for managing or remotely monitoring chronic medical conditions (e.g., chronic respiratory conditions) of a plurality of patients includes, at one or more processors, receiving a plurality of bodily metrics associated with each of the plurality of patients over one or more remote communication links (e.g., an automated phone system), characterizing the severity of the medical condition for each patient based on the plurality of bodily metrics associated with the patient, and transmitting a programmed alert to a medical care provider for at least one patient having a medical condition characterized as having a threshold level of severity." Par. [0088], "For example, the patient data may be analyzed via machine learning and/or other artificial intelligence techniques, and an output may be created as an assessment of patient risk (e.g., severity of patient condition, or predicted likelihood of a respiratory or other clinical event, such as a COPD exacerbation)."

a display; Par. [0150], “For example, as shown in FIG.
5, an exemplary variation of a user computing device 500 may include at least one network communication interface 510, at least one processor 520, at least one memory device 540, at least one audio device 540, and/or at least one display device 550.”

a camera configured to output image data; Par. [0169], “Photo/audio/video of patient (or sputum/mucus or other patient features). In some variations, the system may allow the patient to record and remotely transmit data of the patient or patient features. For example, a patient may use the measurement device (or user computing device, described below) to take a photo/video of symptoms (such as edema), or an audio/visual recording, or take a photo/video of sputum/mucus. Machine learning and/or other process may be used to analyze such data, for example, using the image to predict whether sputum is purulent, in order to guide potential therapy decisions.”

a memory containing machine readable medium comprising machine executable code having stored thereon instructions; and a control system coupled to the memory comprising one or more processors, the control system configured to execute the machine executable code to cause the control system to: Par. [0144], “Generally, the measurement device 200 may include a controller including a processor(s) 230 (e.g., CPU) and memory device(s) 240 (which can include one or more computer-readable storage mediums). The processor may incorporate data received from memory and user input. The memory may store instructions to cause the processor to execute modules, processes, and/or functions associated with the methods described herein. In some variations, the memory and processor may be implemented on a single chip, while in other variations they can be implanted on separate chips.”

receive a first image output from the camera and a second image output from the camera; Par. [0150], “For example, as shown in FIG. 5, an exemplary variation of a user computing device 500 may include at least one network communication interface 510, at least one processor 520, at least one memory device 540, at least one audio device 540, and/or at least one display device 550.” Par. [0169], “For example, a patient may use the measurement device (or user computing device, described below) to take a photo/video of symptoms (such as edema), or an audio/visual recording, or take a photo/video of sputum/mucus. Machine learning and/or other process may be used to analyze such data, for example, using the image to predict whether sputum is purulent, in order to guide potential therapy decisions.” Fig. 22D shows that the sputum measurements are taken at multiple intervals over multiple time points. Par. [0210], “Furthermore, the trending index score may additionally or alternatively incorporate other reporting data such as time stamp of an assessment.”

compare the first and second time stamps to determine a time period between the first and second images; Par. [0191], “Similarly, a medical care provider may additionally or alternatively select preferred factors for a trending index (as further described below), such as indicating whether to use absolute changes or relative changes in bodily metrics or other settings. For example, a medical care provider may select settings to enable monitoring of a patient based on the patient's changes relative to other patients (e.g., through ranking), and/or monitoring of a patient based on the patient's historical trends (e.g., relative to past bodily metrics).” Par. [0200], “Relative change of a bodily metric may be measured against the immediately prior value of the bodily metric, or across any suitable time interval, such as across the preceding 6 hours, preceding 12 hours, preceding 24 hours, and the like. Alternatively, relative change of a bodily metric may be measured against a running average (mean) for a preceding time interval, such as average across the preceding 12 hours, preceding 24 hours, preceding 2 days, preceding 3 days, etc.”

process, using a trained machine learning algorithm, the first image and the second image to identify a first portion of the pixels within the first image and a second portion of pixels within the second image that represent a sputum; Par. [0088], “the patient data may be analyzed via machine learning and/or other artificial intelligence techniques, and an output may be created as an assessment of patient risk,” e.g. see Rajasekhar [0169], “a patient may use the measurement device to take a photo/video of symptoms…or take a photo/video of sputum/mucus. Machine learning and/or other process may be used to analyze such data,”;

determine a COPD trajectory based on sputum; and Par. [0180], “Generally, in some variations, a method for managing a chronic respiratory condition of a patient includes receiving a plurality of bodily metrics for a patient over a remote communication link, predicting the likelihood of an upcoming respiratory event for the patient based on the plurality of bodily metrics” Par. [0214], “A neural network may be trained to process multimodal patient data using a recurrent neural network to predict and forecast COPD flaring as well as future patient biomarkers. Every day, new data is collected that improves and refines the predictive model.”

display the COPD trajectory on the display; Par. [0223], “As shown in FIG. 10, an alert (e.g., a programmed alert) may be transmitted or otherwise communicated, such as based on predicted likelihood of an upcoming respiratory event (1050) or other characterization of patient risk and/or severity of medical condition.” Par. [0238], “FIG. 21 illustrates trends in BCSS score for a remotely monitored patient having a relatively high or elevated baseline BCSS score. Over the course of remote monitoring as described above, the patient experienced two COPD flares (“COPD flare #1” and “COPD flare #2”) reflected by an elevated BCSS score of 8 within 42 days of being monitored. Following each COPD flare, the patient's clinic remotely prescribed medication changes (steroids) to reduce the patient's symptoms and thereby treat the patient's COPD exacerbations at home, instead of in the emergency room. The general decline in BCSS score following each COPD flare and medication change strongly suggests a successful response to treatment. Accordingly, the systems and methods described herein may be used by clinics to successfully remotely monitor conditions of patients, adjust treatment, and then follow-up to characterize the level of success with treatment (e.g., remotely monitor whether and how successfully the patients respond to treatment) and/or determine whether the patients require further intervention.”

Although not explicitly taught by Rajasekhar, Turek teaches the steps of:

receive a first image output from the camera with a first time stamp and a second image output from the camera with a second time stamp (“Also included in the method is imaging the person or the object with an imaging apparatus and downloading images of the person or object produced by the imaging apparatus to the personal computer. The imaging and downloading are repeated a plurality of times at intervals selected to provide the analysis software with sufficient images to track the changeable parameter”, e.g. see Turek [0016]; “In some configurations, personal COPD recorder 400 receives image data from a CT imaging device 110 or other suitable imaging device (e.g., MRI device) at intervals determined by a physician in accordance with the needs of patient 402”, e.g. see Turek [0061]);

compare the first time stamp and the second time stamp to determine a period between the first image and the second image (“In some configurations, personal COPD recorder 400 receives image data from a CT imaging device 110 or other suitable imaging device (e.g., MRI device) at intervals determined by a physician in accordance with the needs of patient 402”, e.g. see Turek [0061]);

process, using the trained machine learning algorithm, the first portion to identify a first set of sputum features (“Image data is acquired at 310 and segmented at 320 by a plurality of segmentation steps. The segmentation segments into regions having different properties, for example intensity, area, perimeter, aspect ratio, diameter, variance, derivatives and other properties that may be of interest for a disease…At 330, feature extraction is performed on the segmented image data to extract relevant features for a disease,” e.g. see Turek [0046]);

process, using the trained machine learning algorithm, the second portion to identify a second set of sputum features (“Image data is acquired at 310 and segmented at 320 by a plurality of segmentation steps. The segmentation segments into regions having different properties, for example intensity, area, perimeter, aspect ratio, diameter, variance, derivatives and other properties that may be of interest for a disease…At 330, feature extraction is performed on the segmented image data to extract relevant features for a disease,” e.g. see Turek [0046]); and

determine, using the trained machine learning algorithm, a COPD trajectory based on the first set of sputum features, the second set of sputum features, and the time period (“The apparatus comprises an imaging device configured to acquire image data and an image processing device responsive to the imaging device for processing images…These one or more measurements are used for at least one of disease diagnosis and/or tracking of disease progression, wherein the disease is chronic obstructive pulmonary disease or asthma”, e.g. see Turek [0044]).

Both Rajasekhar and Turek utilize image data to aid in diagnosing and treating patient conditions. Thus, they are considered analogous references, as they are directed towards solving similar problems. It would have been obvious before the effective filing date to modify the teachings of Rajasekhar to include timestamp information for images, determine the time elapsed between images, utilize machine learning to identify sputum features from the images, and determine a COPD trajectory based on the sputum features and elapsed time period, as taught by Turek, because these steps of detecting, quantifying, staging, reporting and tracking of a disease aid in early diagnosis and effective treatment, which can be used to improve a patient's quality of life, with early diagnosis being desirable to enable measures to be taken by the patient to prevent further progression and to be able to monitor a patient's response to various therapy and drug treatments, as specified by Turek [para. 0002, 0007].

Claim 2

Regarding claim 2, the combination of Rajasekhar and Turek teaches all the limitations of claim 1. Rajasekhar further teaches:

the first set of sputum features comprise at least one of color, quantity, volume, and blood; Par. [0019], “The plurality of bodily metrics may include, for example, at least one of oxygen saturation, heart rate, breathlessness severity, cough severity, and sputum/mucus severity.” Par.
[0067], “As another example, the patient may be prompted to describe the color of any sputum coughed up, whether there has been an increased volume of sputum, whether the patient is wheezing when exhaling, whether the patient is experiencing any sweats, fevers, chest tightness, shortness of breath, chest pain, etc., whether the patient has any increased ankle swelling, whether the patient's cough or breathing is interfering with sleep, whether the patient feels he or she can cope or manage with their condition that day, and any other suitable questions.”

Par. [0169], “Photo/audio/video of patient (or sputum/mucus or other patient features). In some variations, the system may allow the patient to record and remotely transmit data of the patient or patient features. For example, a patient may use the measurement device (or user computing device, described below) to take a photo/video of symptoms (such as edema), or an audio/visual recording, or take a photo/video of sputum/mucus. Machine learning and/or other process may be used to analyze such data, for example, using the image to predict whether sputum is purulent, in order to guide potential therapy decisions.”

Claim 4

Regarding claim 4, the combination of Rajasekhar and Turek teaches all the limitations of claim 1. Rajasekhar further teaches The healing trajectory comprising a COPD stage or a predicted exacerbation event Par. [0180], “Generally, in some variations, a method for managing a chronic respiratory condition of a patient includes receiving a plurality of bodily metrics for a patient over a remote communication link, predicting the likelihood of an upcoming respiratory event for the patient based on the plurality of bodily metrics” Par. [0214], “A neural network may be trained to process multimodal patient data using a recurrent neural network to predict and forecast COPD flaring as well as future patient biomarkers.
Every day, new data is collected that improves and refines the predictive model.”

Claim 8

Regarding claim 8, the combination of Rajasekhar and Turek teaches all the limitations of claim 1. Rajasekhar further teaches Determining the healing trajectory comprises determining whether the patient is close to a milestone of a COPD trajectory Par. [0135], “Accelerometer with step counting and/or positional data. In some variations, the measurement device may include a sensor, such as an accelerometer, which can track patient physical activity or positional data (e.g., pedometer, distance traveled, etc.). Level of exercise tolerance and daily physical activity may be correlated with risk of hospitalization and mortality in COPD and other respiratory conditions. Furthermore, in some variations the accelerometer may incorporated into a remote pulmonary rehabilitation program that is initiated through the disclosed system. For example, if a patient is inactive, the system may remind the patient, caregiver, or healthcare team that the patient has not had any activity for a certain time period, and in response, the patient is assessed for clinical status or the patient is encouraged to increase physical activity, such as walking around the house or using a stationary bicycle, and the activity can be tracked using sensors in the system, such as with an accelerometer.”

This shows that Rajasekhar is capable of using data for the patient to determine whether the patient is at risk of hospitalization, which is an episode of acute care. Episodes of acute care are identified in the specification as being one of the possible COPD milestones (see specification, par. [0061]). Par.
[0090], “Furthermore, a medical care provider may select suitable alert conditions, such as thresholds and/or other conditions for triggering an alert notification associated with a high-risk patient (e.g., a patient with a relatively high likelihood of experiencing an upcoming clinical event.)”

Claim 10

Claim 10 recites limitations that are substantially similar to those of claim 1. Thus, the same rejection applies.

Claim 11

Regarding claim 11, the combination of Rajasekhar and Turek teaches all the limitations of claim 10. Rajasekhar further teaches Processing the first and second set of sputum features comprising determining a trend of at least one of the first and second set of sputum features Fig. 22D; Par. [0239], “FIGS. 22A-22D illustrate trends in various bodily metrics for a remotely monitored patient, including heart rate (FIG. 22A), breathlessness severity (FIG. 22B), cough severity (FIG. 22C), and sputum severity (FIG. 22D).”

Claim 12

Regarding claim 12, the combination of Rajasekhar and Turek teaches all the limitations of claim 11. Rajasekhar further teaches The trend comprising a linear, a logarithmic, or a parametric trajectory Fig. 22 shows a linear trend. See also Fig. 21.

Claim 13

Regarding claim 13, the combination of Rajasekhar and Turek teaches all the limitations of claim 10. Rajasekhar further teaches Processing the first and second set of sputum features comprises determining change of at least one of the first and second set of sputum features Par. [0237], “FIGS. 20A and 20B illustrate trends in SpO.sub.2 and BCSS score, respectively, for a remotely monitored patient. Over the course of remote monitoring as described above, a drastic rise of BCSS score over 48 hours (days 8 and 9) was detected.
In response to the remote monitoring and detection of the significant rate of increase in BCSS score, the patient's clinic prescribed medication.”

Although not explicitly taught by Rajasekhar, Turek further teaches a rate of change of sputum features Par. [0043], “For example, the output may be used for staging a disease in a patient, measuring response to therapy, phenotyping for patient selection to participate in drug trials, measuring stability of an anatomical structure and prediction of rate of change of the disease,”

Both Rajasekhar and Turek utilize image data to aid in diagnosing and treating patient conditions. Thus, they are considered to be analogous references as they are directed towards solving similar problems. It would have been obvious before the effective filing date to modify the teachings of Rajasekhar to include the rate of change of sputum features, as taught by Turek, because these steps of detecting, quantifying, staging, reporting and tracking of a disease aid in early diagnosis and effective treatment which can be used to improve a patient’s quality of life, with early diagnosis being desirable to enable measures to be taken by the patient to prevent further progression and be able to monitor a patient’s response to various therapy and drug treatments, as specified by Turek [para. 0002, 0007].

Claim 14

Regarding claim 14, the combination of Rajasekhar and Turek teaches all the limitations of claim 10. Rajasekhar further teaches Determining the COPD trajectory further comprises receiving a first input from a user interface Par. [0067], “In some variations, when a patient is placed on an “alerts” list based on currently reported patient data, dynamic questions may be generated to gather further information regarding the patient's respiratory condition. For example, the patient may be prompted to verbally describe generally how he or she is feeling.
As another example, the patient may be prompted to describe the color of any sputum coughed up, whether there has been an increased volume of sputum, whether the patient is wheezing when exhaling, whether the patient is experiencing any sweats, fevers, chest tightness, shortness of breath, chest pain, etc., whether the patient has any increased ankle swelling, whether the patient's cough or breathing is interfering with sleep, whether the patient feels he or she can cope or manage with their condition that day, and any other suitable questions.”

Claim 15

Regarding claim 15, the combination of Rajasekhar and Turek teaches all the limitations of claim 14. Rajasekhar further teaches Wherein the input comprises at least one of: existing patient health conditions that impact COPD, activity level, smoking history, diet, medication, and pain level Par. [0055], “During patient onboarding, a patient may complete an intake questionnaire or other suitable intake form (e.g., dynamic or static questionnaire, such as a self-screening tool). The questionnaire may, for example, be tailored for a particular condition intended for remote monitoring. For example, a patient desiring to participate in remote monitoring for a chronic respiratory condition be asked to complete a questionnaire relating to personal respiratory medical history (e.g., diagnosis of any one or more of chronic lung diseases, use of an inhaler or nebulizer or other medications, use of supplemental oxygen, any recent emergency room or hospital visits, smoking history, etc.) or family medical history, and/or current symptoms of respiratory condition (e.g., coughing, breathing, sputum/mucus, etc.).”

Claim 16

Regarding claim 16, the combination of Rajasekhar and Turek teaches all the limitations of claim 11. Rajasekhar further teaches The rate of change being compared to an expected rate of change to determine whether the COPD is progressing significantly faster, or slower than normal Par. [0237], “FIGS.
20A and 20B illustrate trends in SpO.sub.2 and BCSS score, respectively, for a remotely monitored patient. Over the course of remote monitoring as described above, a drastic rise of BCSS score over 48 hours (days 8 and 9) was detected. In response to the remote monitoring and detection of the significant rate of increase in BCSS score, the patient's clinic prescribed medication. As shown in FIG. 20B, remote monitoring of the patient's bodily metrics continued during the medication treatment, and captured a general recovery of the patient's symptoms (days 11-13). Accordingly, the systems and methods described herein may be used by clinics to successfully remotely monitor conditions of patients, adjust treatment, and then follow-up to characterize the level of success with treatment (e.g., remotely monitor whether and how successfully the patients respond to treatment) and/or determine whether the patients require further intervention.”

The use of the word “drastic” means that the change is greater than expected.

Par. [0019], “Furthermore, in some variations, the method may include verifying at least one of the received plurality of bodily metrics in response to the at least one bodily metric being outside of a predetermined range (e.g., an expected range based on historical data, patient characteristics, etc.) and thus suspected of being erroneous. Verifying the at least one received bodily metric may include, for example, prompting a manual review of the at least one received bodily metric to correct or confirm the patient data (e.g., prior to transmitting the patient data to a healthcare team, etc.).”

Par. [0070], “As one example in the verification process, a transcribed and/or parsed value of a bodily metric may be flagged for additional review in response to the transcribed and/or parsed value being outside of a predicted or expected range of values for that bodily metric.
The predicted or expected range may, for example, be approximately centered around a historical average for that bodily metric for that patient (e.g., average of the values received over a certain number of preceding assessments, such as the most recent two, three, four, etc. assessments), or approximately centered around the value received for the most recent assessment. Such additional review may include, for example, a manual verification by a person to confirm or correct the transcribed and/or parsed value. For example, manual review may be prompted by sending a notification to an assigned operator who listens to the recording and addresses any issues with the transcription.”

Par. [0019] and [0070] show the ability of the system to compare received values against an expected or predicted value.

Claim 17

Regarding claim 17, the combination of Rajasekhar and Turek teaches all the limitations of claim 16. Rajasekhar further teaches Sending an alert notification if it is determined that the COPD trajectory is predicted to reach a milestone Par. [0135], “Accelerometer with step counting and/or positional data. In some variations, the measurement device may include a sensor, such as an accelerometer, which can track patient physical activity or positional data (e.g., pedometer, distance traveled, etc.). Level of exercise tolerance and daily physical activity may be correlated with risk of hospitalization and mortality in COPD and other respiratory conditions. Furthermore, in some variations the accelerometer may incorporated into a remote pulmonary rehabilitation program that is initiated through the disclosed system.
For example, if a patient is inactive, the system may remind the patient, caregiver, or healthcare team that the patient has not had any activity for a certain time period, and in response, the patient is assessed for clinical status or the patient is encouraged to increase physical activity, such as walking around the house or using a stationary bicycle, and the activity can be tracked using sensors in the system, such as with an accelerometer.”

Par. [0087], “As shown in FIG. 1B, the patient data may be used to assess the patient (180), and a medical care provider may be allowed to view patient data and/or patient assessments (190). For example, the predictive analysis system may perform analytics to assess the received patient data and may transmit or otherwise communicate an alert (e.g., a programmed alert) to at least one medical care provider (e.g., pulmonologist, other clinical staff or auxiliary staff member or personnel, physician, or other qualified healthcare professional, etc.).”

Par. [0089], “For example, an alert based on the assessed patient risk may be transmitted or otherwise communicated to a medical care provider. The alert may be communicated, for example, if the assessed level of risk for a patient (or ranked risk level, etc.) satisfies one or more predetermined thresholds.”

Par. [0090], “Furthermore, a medical care provider may select suitable alert conditions, such as thresholds and/or other conditions for triggering an alert notification associated with a high-risk patient (e.g., a patient with a relatively high likelihood of experiencing an upcoming clinical event.)”

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rajasekhar and Turek in further view of Rubin (US PG Pub. 2006/0193824).

Claim 3

Regarding claim 3, the combination of Rajasekhar and Turek teaches all the limitations of claim 2.
However, Rajasekhar does not teach Wherein quantity comprises surface area of sputum in an image. However, Rubin teaches surface area of sputum in an image Par. [0058], “Laser scanning confocal microscopy (LSCM) was employed to examine sputum components from three patients with CF and two with CB. Sputa were dual-labeled using 10 μg/ml fluorescent Texas Red-conjugated Ulex europaeus agglutinin (UEA) lectin (Sigma, St. Louis, Mo.) for mucin-like glycoproteins and 1 μM YOYO-1 (Molecular Probes, Eugene, Oreg.) for DNA. A Carl Zeiss LSM 510 (Carl Zeiss, Jena, Germany) and a Leica laser scanning confocal microscope (Leica CLSM; Leica, Lasertechnik GmbH, Heidelberg, Germany) were used to collect images of the stained sputum. Dual excitation wavelengths of 488λ and 568λ were employed to visualize the DNA in combination with mucin. Images were recorded in a planar matrix (X, Y) using the 40× oil objective. Optical sections in the Z-axis were recorded by adjusting the stage height by stepper motors. Quantitative measurements of fluorescence intensity and area were obtained directly from images using VoxelView software (Vital Images, Fairfield, Iowa). Representative fields of interest were visually selected and random coordinates within the field were imaged and analyzed to give mean fluorescent intensities. Serial images for each specimen were analyzed and the mean surface area covered by Texas Red-UEA and YOYO-1 was calculated using NIH Image imaging software (National Institutes of Health, Bethesda, Md.)”

Rajasekhar, Turek and Rubin each utilize image data to aid in diagnosing and treating patient conditions. Thus, they are considered to be analogous references as they are directed towards solving similar problems.
It would have been obvious before the effective filing date to modify the teachings of Rajasekhar to identify the surface area of sputum within an image, as taught by Rubin, because quantifying the size and magnitude of sputum can facilitate effective treatment which can be used to improve a patient’s quality of life, with early diagnosis being desirable to enable measures to be taken by the patient to prevent further progression and be able to monitor a patient’s response to various therapy and drug treatments, as specified by Turek [para. 0002, 0007].

Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rajasekhar and Turek in further view of Peng (US PG Pub. 2021/0357696).

Claim 5

Regarding claim 5, the combination of Rajasekhar and Turek teaches all the limitations of claim 4. The combination of Rajasekhar and Turek further teaches The ability to process images of sputum and make predictions regarding the sputum See rejection of claim 1. However, Rajasekhar does not explicitly teach The control system being further configured to process the first image to output a predicted image of the sputum after a first time interval Peng teaches The control system being further configured to process the first image to output a predicted image of a patient characteristic after a first time interval Par. [0138], “The predicted fundus image is an image of the fundus of the eye of the patient as it is predicted to look at a particular future time, e.g., in six months, in one year, or in five years.”

It would have been obvious to one having ordinary skill in the art before the effective filing date of this application to add to the system of Rajasekhar and Turek the ability to predict an image of a patient characteristic after a future time interval, as taught by Peng, because the predicted future image can help determine the state of the patient’s condition at a future time (Peng, par.
[0071]-[0072], [0147]-[0148]), enhancing the ability of Rajasekhar to characterize the severity of the medical condition by comparing bodily metrics (e.g., sputum characteristics such as average depth and change in average depth) to predetermined thresholds and determine whether further medical intervention is appropriate [para. 0011, 0089].

Claim 7

Regarding claim 7, the combination of Rajasekhar and Turek teaches all the limitations of claim 5. Rajasekhar further teaches Determining the healing trajectory further comprises receiving a first input from a user interface Par. [0067], “In some variations, when a patient is placed on an “alerts” list based on currently reported patient data, dynamic questions may be generated to gather further information regarding the patient's respiratory condition. For example, the patient may be prompted to verbally describe generally how he or she is feeling. As another example, the patient may be prompted to describe the color of any sputum coughed up, whether there has been an increased volume of sputum, whether the patient is wheezing when exhaling, whether the patient is experiencing any sweats, fevers, chest tightness, shortness of breath, chest pain, etc., whether the patient has any increased ankle swelling, whether the patient's cough or breathing is interfering with sleep, whether the patient feels he or she can cope or manage with their condition that day, and any other suitable questions.”

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rajasekhar and Turek in further view of Criner (US PG Pub. 2016/0004834).

Claim 9

Regarding claim 9, the combination of Rajasekhar and Turek teaches all the limitations of claim 8. However, Rajasekhar does not teach The control system being further configured to display a stock image associated with the milestone Criner teaches The control system being further configured to display a stock image associated with the milestone Fig. 8; Par.
[0056], “As shown in FIGS. 8 and 9, other screens may ask the user to rate their sputum color 800 and/or sputum consistency 900 over the past 24 hours. For example, the sputum color may be white, yellow, green, or brown. In some embodiments, the most severe color should be selected, whereas in other embodiments all colors brought up by the patient may be selected. Color swatch images 802 may be shown next to each option to help the user better identify the appropriate response by way of comparison.”

Different sputum colors are associated with different stages of COPD, so the color swatch images for the color are in some way associated with the milestone event. It would have been obvious to one having ordinary skill in the art before the effective filing date of this application to add to the system of Rajasekhar and Turek the ability to display a stock image associated with the milestone, as taught by Criner, because it allows the user to make a comparison against a standardized color palette to improve the consistency of the color analysis (Criner, par. [0056]; Fig. 8).

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rajasekhar and Turek in further view of Rowe (US PG Pub. 2015/0253240).

Claim 19

Regarding claim 19, the combination of Rajasekhar and Turek teaches all the limitations of claim 10. However, Rajasekhar does not teach Wherein the first and second set of sputum features comprises infrared depth measurements However, Rowe teaches infrared depth measurements Para [0015], “Techniques for reflectance microscopy in vivo include optical coherence tomography (OCT) which has been developed to provide unprecedented cellular detail and live motion capture… Vakoc BJ et al. “Three-dimensional microscopy of the tumor microenvironment in Vivo using optical frequency domain imaging. Nat Med 2009; 15:1219-23).
The technology uses the reflectance signature of near-infra red light to permit real-time imaging with cellular level detail, and has been employed successfully for microscopic analysis of coronary artery and esophageal mucosa by the endoscopic approach in living human Subjects. OCT uses coherence gating for optical sectioning to attain an axial resolution or section thickness ranging from 1-10 μm.”

Para [0047], “For example, the arrangement(s) can comprise at least one optical configuration which is configured to focus at least one electromagnetic radiation on the samples. A depth range of the focus of the electromagnetic radiation(s) caused by the optical configuration(s) can be greater than a confocal parameter associated with a spot size of the focus. The optical configuration(s) can include an axicon lens arrangement, a binary apodization element, a phase apodization element, a defractive optical element, an annulus, and/or a diffractive element. The arrangement(s) can also comprise a confocal arrangement, a florescence arrangement, Raman arrangement, an infrared arrangement..”

Para [0091], “FIGS. 2 a-2 d show exemplary imaging results from the exemplary μOCT procedure applied to respiratory epithelial cells. As shown in FIGS. 2 a-2 d, in a time-averaged image of normal human bronchial epithelial (HBE) cells, e.g., distinct layers of air (203), mucus (206), cilia (209), PCL (215) and epithelium (218) can be visualized, and the morphology matches the inset image 221, a H&E stained sample of the same type. From the exemplary μOCT image, the ASL depth
(200) and PCL depth (209) can be measured.”

Para [0110], “This disclosure can include automated procedures employed to determine airway surface liquid depth, mucociliary transport rate, and ciliary beat frequency from the exemplary μOCT image data,”

Para [0125], “An exemplary standard method for optical particle tracking rheology is fluorescence microscopy, which can be compared to the exemplary μOCT results as shown in FIG. 29 using samples from the same expectorated sputum. Traditional fluorescence exogenous particle tracking (line 2900) and the exemplary μOCT-based endogenous particle tracking (line 2910) can produce similar results, thus validating the potential of μOCT for measuring the mechanical properties of mucus,”

Para [0120], “For example, in block 2510, data is obtained from the exemplary μOCT imaging, including the use of airway surface functional microanatomy in block 2520 (which may include airway surface liquid depth, periciliary liquid depth, ciliary beat frequency, and mucociliary transport), in block 2530 properties of mucus can be determined by particle tracking microrheology, and in block 2540,”

Rajasekhar, Turek and Rowe each utilize image data to aid in diagnosing and treating patient conditions. Thus, they are considered to be analogous references as they are directed towards solving similar problems. It would have been obvious before the effective filing date to modify the teachings of Rajasekhar to collect infrared depth measurements of sputum features, as taught by Rowe, because these steps of detecting, quantifying, staging, reporting and tracking of a disease aid in early diagnosis and effective treatment which can be used to improve a patient’s quality of life, with early diagnosis being desirable to enable measures to be taken by the patient to prevent further progression and be able to monitor a patient’s response to various therapy and drug treatments, as specified by Turek [para. 0002, 0007].
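The depth measurements quoted above reduce to per-pixel depth values over a region flagged as sputum. Purely as an illustrative sketch (the function, mask, arrays, and values are hypothetical and not drawn from Rowe or the claims), an average depth and its change between two scans could be computed as:

```python
def average_depth(depth_map, sputum_mask):
    """Mean depth (e.g., in micrometers) over the pixels flagged as sputum."""
    values = [d for row_d, row_m in zip(depth_map, sputum_mask)
                for d, m in zip(row_d, row_m) if m]
    return sum(values) / len(values)

# Hypothetical 2x2 depth maps sharing one sputum mask
mask        = [[True, True], [False, True]]
first_scan  = [[10.0, 12.0], [4.0, 14.0]]   # earlier image
second_scan = [[ 8.0,  9.0], [4.0, 10.0]]   # later image

# A negative change would indicate thinning between the two time points
change = average_depth(second_scan, mask) - average_depth(first_scan, mask)
```

This is only a sketch of the arithmetic; a real system would derive the mask and depth maps from the imaging pipeline rather than hand-coded arrays.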
Claim 20

Regarding claim 20, the combination of Rajasekhar and Turek teaches all the limitations of claim 10. However, Rajasekhar does not teach The processing the first and second set of sputum features comprising determining a change in average depth of the sputum However, Rowe teaches determining a change in average depth of the sputum Para [0122], “Particle position can be tracked in one, two, or three dimensions over time; full three-dimensional tracking allows the measurement of viscosity along all spatial coordinates and captures any anisotropic diffusion behavior,”

Rajasekhar, Turek and Rowe each utilize image data to aid in diagnosing and treating patient conditions. Thus, they are considered to be analogous references as they are directed towards solving similar problems. It would have been obvious before the effective filing date to modify the teachings of Rajasekhar to determine a change in average depth of sputum, as taught by Rowe, because these steps of detecting, quantifying, staging, reporting and tracking of a disease aid in early diagnosis and effective treatment which can be used to improve a patient’s quality of life, with early diagnosis being desirable to enable measures to be taken by the patient to prevent further progression and be able to monitor a patient’s response to various therapy and drug treatments, as specified by Turek [para. 0002, 0007], and further enhancing the ability of Rajasekhar to characterize the severity of the medical condition by comparing bodily metrics (e.g., sputum characteristics such as average depth and change in average depth) to predetermined thresholds and determine whether further medical intervention is appropriate [para. 0011, 0089].

Response to Arguments

Applicant's arguments filed May 28, 2025 have been fully considered but they are not persuasive.

Applicant argues that the specification provides significant detail regarding the determination of COPD trajectory. This is not persuasive.
The specification only broadly describes that COPD trajectory is determined “based on” sputum features and the time elapsed between images but does not provide any details on how. There is no algorithm, equation, or methodology described or disclosed to explain the specifics of what the determination is, or how the determination is made. In other words, the specification describes the “inputs” that are taken into consideration, and the resulting “output”, with no further details other than the involvement of a trained machine learning model.

With respect to 35 USC 101, Applicant argues that amended claims 1 and 10 are deeply rooted in computer and artificial intelligence technology such that Applicant’s claim would not cover a mental process of determining a COPD trajectory. Applicant argues that a human, in their mind, cannot process images using a trained machine learning algorithm to determine a COPD trajectory. This is not persuasive. It is noted that “using a trained machine learning algorithm” is not recited as part of the abstract idea; however, said machine learning algorithm is considered to be utilized in an “apply it” manner to carry out or execute the abstract idea. As previously articulated, “processing images” is broadly recited without any details on what it entails; thus, the broadest reasonable interpretation of “processing images” is looking at or observing the images. Furthermore, per MPEP 2106.04(a)(2)(III)(C), claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures “can be carried out in existing computers long in use, no new machinery being necessary.” 409 U.S. at 67, 175 USPQ at 675.
See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of “anonymous loan shopping” recited in a computer system claim is an abstract idea because it could be “performed by humans without a computer”). This includes performing a mental process on a generic computer, performing a mental process in a computer environment, and using a computer as a tool to perform a mental process.

Applicant argues that the claims are in line with claim 2 of Example 37. This is not persuasive. Example 37 does not recite a judicial exception (e.g., abstract idea) as noted by Applicant. However, the instant claims do recite a judicial exception (e.g., abstract idea) as articulated in the previous and updated 35 USC 101 rejection above.

Applicant argues that Rajasekhar and AAPA do not teach the particulars of claims 1 and 10 as amended, specifically, receiving a first and second image with their own timestamps, comparing the timestamps to determine a period between images, identifying a portion of pixels within each image that represent a sputum, identifying two sets of sputum features, using both sets of sputum features and the time period between the first and second image, and using a trained machine learning algorithm. The prior art rejection has been updated above, with Turek teaching several of the amended limitations being argued.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER H CHOI whose telephone number is (469)295-9171. The examiner can normally be reached M-Th 9am-7pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PETER H CHOI/Supervisory Patent Examiner, Art Unit 3681

Prosecution Timeline

Jun 16, 2022
Application Filed
Feb 21, 2025
Non-Final Rejection — §101, §103, §112
May 28, 2025
Response Filed
Feb 14, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536578
CONTACTLESS CHECKOUT SYSTEM WITH THEFT DETECTION
2y 5m to grant Granted Jan 27, 2026
Patent 12530181
TRAINING AN AGENT-BASED HEALTHCARE ASSISTANT MODEL
2y 5m to grant Granted Jan 20, 2026
Patent 11901073
Online Social Health Network
2y 5m to grant Granted Feb 13, 2024
Patent 8386300
STRATEGIC WORKFORCE PLANNING MODEL
2y 5m to grant Granted Feb 26, 2013
Patent 8370269
SYSTEM AND METHODS FOR ELECTRONIC COMMERCE USING PERSONAL AND BUSINESS NETWORKS
2y 5m to grant Granted Feb 05, 2013
Based on the 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
26%
Grant Probability
45%
With Interview (+19.4%)
5y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 215 resolved cases by this examiner. Grant probability derived from career allow rate.
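The projection figures above appear to follow directly from the examiner statistics shown on this page. As a sketch of that arithmetic (assuming, as the footnote states, that grant probability is simply the career allow rate, and that the interview figure adds the stated lift):

```python
# Figures taken from this page: 56 granted of 215 resolved, +19.4% interview lift.
granted, resolved = 56, 215
grant_probability = granted / resolved        # ~0.2605, shown as 26%
with_interview = grant_probability + 0.194    # ~0.4545, shown as 45%
print(f"{grant_probability:.0%} base, {with_interview:.0%} with interview")
```

How the underlying model combines these inputs is not disclosed here; this only reproduces the displayed rounding.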
