Prosecution Insights
Last updated: April 19, 2026
Application No. 18/397,997

SCREENING, MONITORING, AND TREATMENT OF COGNITIVE DISORDERS

Final Rejection: §101, §103, §112
Filed: Dec 27, 2023
Examiner: BASET, NESHAT
Art Unit: 3798
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: West Virginia University
OA Round: 2 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 58%

Examiner Intelligence

Grants only 30% of cases; strong +27.6% interview lift.

Career Allow Rate: 30% (19 granted / 63 resolved; -39.8% vs TC avg)
Interview Lift: +27.6% (resolved cases with interview vs without)
Typical Timeline: 3y 11m avg prosecution; 47 applications currently pending
Career History: 110 total applications across all art units
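The headline numbers above are internally consistent, which a little arithmetic confirms (a sketch; exactly how the dashboard combines its inputs is an assumption):

```python
# Career allowance rate from the raw case counts shown above.
granted, resolved = 19, 63
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints "30.2%", displayed as 30%

# Reading the "+27.6% interview lift" as percentage points added to the
# baseline reproduces the 58% "With Interview" figure.
with_interview = allow_rate + 0.276
print(f"{with_interview:.0%}")  # prints "58%"
```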

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 63 resolved cases.
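The per-statute figures also cross-check: subtracting each delta from the examiner's rate implies the same Tech Center average in every row. A quick sketch (what these rates measure is not stated above, so treat this purely as an arithmetic consistency check):

```python
# (examiner rate, delta vs TC average), in percent, from the table above.
stats = {
    "101": (11.9, -28.1),
    "103": (48.1, +8.1),
    "102": (13.7, -26.3),
    "112": (20.3, -19.7),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{statute}: TC avg = {tc_avg:.1f}%")  # 40.0% in every row
```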

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This office action is in response to the remarks filed on 09/17/2025. The amendment filed 09/17/2025 has been entered. Claims 1-2 and 5-25 remain pending in the application; claims 15-25 have been previously withdrawn, and claims 3-4 have been canceled. The 112(a) enablement rejections, the claim objection, the 112(b) rejections, and the 35 USC § 101 rejections of claims 1-11 have been withdrawn in light of the claim amendments.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2 and 5-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites the limitations “generating a value representing a progression of Alzheimer's disease and related dementias”, “providing a representation of each of the first image and the second image to a machine learning model”, “generating the value with the machine learning model from the representation of the first image and the representation of the second image”, and “providing a first focused ultrasound treatment to the patient according to the assigned one of the plurality of intervention classes”. Proper written description cannot be identified in the specification for a) how the value is generated for Alzheimer's disease and related dementias using the machine learning model, and b) how the value is then used to provide an intervention for Alzheimer's disease and related dementias.

Claim 12 recites “a machine learning model that generates a first value representing progression of Alzheimer's disease or a related dementia from a representation of the first image and a representation of the second image” and “an assisted decision making module that assigns the patient to one of a plurality of intervention classes according to the generated value”. Proper written description cannot be identified in the specification for a) how the value is generated for all Alzheimer's disease and related dementias using the machine learning model, and b) how the value is then used to provide an intervention for Alzheimer's disease and related dementias.

The dependent claims of the above rejected claims are rejected due to their dependency.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 12-14 are rejected under 35 U.S.C. 101.

Regarding claim 12:

Step 1, Statutory category: Yes. A system for generating a value representing one of a risk and a progression of one or more cognitive disorders is disclosed; therefore a system is disclosed.

Step 2A, Prong 1, Judicial Exception: Yes. This claim recites the limitations “… generates a first value representing progression of Alzheimer's disease or a related dementia from a representation of the first image and a representation of the second image”, “assigns the patient to one of a plurality of intervention classes according to the generated value”, and “… generates a second value representing progression of Alzheimer's disease or a related dementia from the first clinical parameter, the second clinical parameter, and the third clinical parameter, and a second focused ultrasound treatment is provided to the patient according to the second value”. These limitations, as drafted, according to their broadest reasonable interpretation, recite a mental-process type abstract idea, which can practically be performed in the mind, with the aid of pen and paper, or with a generic computer, merely using the generic computer as a tool to perform the steps. One of ordinary skill in the art could observe values from two images and group or assign the values/features in the images to a plurality of intervention classes to determine the intervention. That is, nothing in the claim elements precludes the steps from practically being performed in the mind, with the aid of pen and paper, or on a generic computer. Accordingly, the claim recites a mental process-type abstract idea.
Step 2A, Prong 2, Integrated into Practical Application: No. The claim recites the following additional elements: “an imager interface that acquires a first image, representing a brain … and a second image, representing one of a retina…” and “a sensor interface that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient, including a first clinical parameter for the patient acquired either during or immediately after a first focused ultrasound treatment provided to the patient at the neuromodulation system, a second clinical parameter for the patient acquired between five hours and five days after the first focused ultrasound treatment, and a third clinical parameter for the patient acquired more than five days after the first focused ultrasound”. Acquiring and providing medical images, as well as generating a value using a machine learning model, is a form of data gathering that is an insignificant pre-solution activity. Displaying the assigned intervention class is an insignificant post-solution activity. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea.

Step 2B, Inventive Concept: No. Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim. The claim additionally recites “a processor and a non-transitory computer readable medium storing machine-readable instructions executable by the processor to provide” “an imager interface”, “a first imaging system”, a “second imaging system”, and “a sensor interface”; however, all of these are well-known generic components that are used for capturing medical images.
Accordingly, claim 12 is directed to non-eligible patent subject matter and is therefore rejected.

Regarding claims 13-14:

Step 1, Statutory category: Yes. A system for generating a value representing one of a risk and a progression of one or more cognitive disorders is disclosed; therefore a system is disclosed.

Step 2A, Prong 1, Judicial Exception: Yes. These claims contain a judicial exception as noted above for claim 1.

Step 2A, Prong 2, Integrated into Practical Application: No. The claims recite the following additional elements: “the machine learning model generating the value from the representation of the first image, the representation of the second image, and the clinical parameters”, “..the clinical parameters including at least two of a parameter representing sleep length, a parameter representing sleep depth, a length of a sleep stage, heart rate, heart rate variability, a parameter representing perspiration, a parameter representing salivation, blood pressure, pupil size, changes in pupil size, a parameter representing brain activity, a parameter representing electrodermal activity, body temperature, and blood oxygen saturation level”, and “generates the representation of one of the first image and the second image as a set of numerical features”. Generating values from images and receiving clinical parameters is a form of data gathering, which is an insignificant pre-solution activity. Generating a representation of the first image and the second image as a set of numerical features is an insignificant post-solution activity. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. These claims are therefore directed to an abstract idea.
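The Step 1 / Step 2A / Step 2B sequence the examiner walks for claims 12-14 follows the standard MPEP 2106 eligibility flow, which can be sketched as a decision function (the function and its boolean inputs are illustrative; the actual determinations are legal conclusions, not computations):

```python
def is_eligible(statutory_category: bool,
                recites_judicial_exception: bool,
                practical_application: bool,
                inventive_concept: bool) -> bool:
    """Subject-matter eligibility under the MPEP 2106 flow (sketch)."""
    if not statutory_category:          # Step 1: process/machine/manufacture/composition?
        return False
    if not recites_judicial_exception:  # Step 2A, Prong 1: abstract idea recited?
        return True
    if practical_application:           # Step 2A, Prong 2: integrated into practical application?
        return True
    return inventive_concept            # Step 2B: significantly more than the exception?

# Claim 12 as analyzed above: Yes, Yes, No, No -> ineligible.
print(is_eligible(True, True, False, False))  # prints "False"
```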
Step 2B, Inventive Concept: No. Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claims. Accordingly, claims 13-14 are directed to non-eligible patent subject matter and are therefore rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 5, 6, 8, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al (US 20240180415 A1) in view of Etkin (US 20180236255 A1).

Regarding claim 1, Chang teaches a method for generating a value representing one of a risk and a progression of a cognitive disorder (calculate an assessing grade representing a possibility of the subject having neurodegenerative disease [0010]), the method comprising:

acquiring a first image, representing a brain of a patient, from a first imaging system ([0078]: In Step 670, a clinical test data of the subject is provided; the clinical test data can include a computed tomography (CT) image and a magnetic resonance imaging (MRI) image [0079]; the image/clinical test data obtained contains information regarding the central nervous system, which includes the brain; [0004] and [0094] disclose this comparison); and

acquiring a second image, representing one of a retina, an optic nerve, and a vasculature associated with one of the optic nerve and the retina of the patient, from a second imaging system (In Step 610, a target optical coherence tomographic image of a subject is provided; in Step 620, the retinal layer quantification system 200 is provided [0073]);
providing a representation of each of the first image and the second image to a machine learning model ([0047] and [0082]-[0088] disclose a machine learning model used to classify the images to which the first and second images are supplied; supplying the images is disclosed in [0079]);

generating the value with the machine learning model from the representation of the first image and the representation of the second image (In Step 680, the clinical test data is compared with the thickness of each area of retinal layer, the horizontal retinal layer area and the vertical retinal layer area by a regression analysis model, so as to calculate an assessing grade representing a possibility of the subject having neurodegenerative disease [0079]; the assessing grade is the value as claimed); and

assigning the patient to one of a plurality of intervention classes according to the generated value (the electronic device 710 can further show an assessing result of the subject having ophthalmic diseases or neurodegenerative disease and follow-up recommended medical plans such as medication or referral in real time [0080]).

Chang, however, does not teach: providing a first focused ultrasound treatment to the patient according to the assigned one of the plurality of intervention classes; acquiring a first clinical parameter for the patient either during or immediately after the first focused ultrasound treatment; acquiring a second clinical parameter for the patient between five hours and five days after the first focused ultrasound treatment; acquiring a third clinical parameter for the patient more than five days after the first focused ultrasound treatment; generating a value representing progression of Alzheimer's disease or a related dementia at the machine learning model from the first clinical parameter, the second clinical parameter, and the third clinical parameter; and providing a second focused ultrasound treatment to the patient according to the generated value.
Etkin is considered analogous to the instant application, as “Neurostimulation treatment” is disclosed (title). Etkin teaches:

providing a first focused ultrasound treatment to the patient according to the assigned one of the plurality of intervention classes (first plurality of non-invasive stimulations is … focused ultrasound stimulation [0057]; the disclosures herewith provide a method to determine where in the brain of an individual with a disorder of the central nervous system abnormalities in brain function are elicited through non-invasive brain stimulation; the purpose of identifying these locations, as well as their particular signatures (i.e. the nature of the abnormality) is that this can then guide neurostimulation interventions that normalize the identified abnormality [0105]);

acquiring a first clinical parameter for the patient either during or immediately after the first focused ultrasound treatment (Remediation or optimization of a treatment protocol for an individual patient may be enacted in real time via monitoring of the non-invasive brain stimulation evoked response, for a closed-loop individualized treatment. … remediation or optimization to treatment protocols occur in real time, at a following treatment session (e.g., within hours, within a single day, within days, within weeks). Furthermore, monitoring a non-invasive brain stimulation evoked response may occur following an initial course of treatment as a disease monitoring, prophylactic or diagnostic method. In embodiments, monitoring of a non-invasive brain stimulation evoked response may occur about one to four weeks, one month, two months, three months, 6 months, one year, or more following a successfully completed course of treatment [0113]);

acquiring a second clinical parameter for the patient between five hours and five days after the first focused ultrasound treatment (same disclosure at [0113], quoted above);

acquiring a third clinical parameter for the patient more than five days after the first focused ultrasound treatment (same disclosure at [0113], quoted above);

generating a value representing progression of Alzheimer's disease or a related dementia at the machine learning model from the first clinical parameter, the second clinical parameter, and the third clinical parameter (the response throughout the treatment is measured as a value, using the clinical parameters/measurements taken during the treatment sessions, as disclosed in [0078]-[0080]); and

providing a second focused ultrasound treatment to the patient according to the generated value (multiple treatment sessions using live monitoring/responses are disclosed in [0123] and [0091]-[0092]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include providing a first focused ultrasound treatment to the patient according to the assigned one of the plurality of intervention classes, acquiring the first, second, and third clinical parameters in the claimed windows, generating a value representing progression of Alzheimer's disease or a related dementia at the machine learning model from those clinical parameters, and providing a second focused ultrasound treatment to the patient according to the generated value, as taught by Etkin. Doing so would ameliorate or reduce symptoms of the disorder, as suggested by Etkin ([0049]).
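The three clinical-parameter limitations that Chang lacks define staggered acquisition windows relative to the first treatment. A minimal sketch of that schedule (the function name and the exact boundary handling, e.g. treating "immediately after" as up to five hours, are assumptions):

```python
from datetime import timedelta

def acquisition_window(elapsed: timedelta) -> str:
    """Map time since the first focused ultrasound treatment to the
    claimed clinical-parameter window (boundary handling assumed)."""
    if elapsed <= timedelta(hours=5):
        return "first"   # during or immediately after the treatment
    if elapsed <= timedelta(days=5):
        return "second"  # between five hours and five days after
    return "third"       # more than five days after

print(acquisition_window(timedelta(minutes=30)))  # prints "first"
print(acquisition_window(timedelta(days=2)))      # prints "second"
print(acquisition_window(timedelta(weeks=2)))     # prints "third"
```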
Regarding claim 2, modified Chang teaches the method of claim 1, as discussed above. Chang further teaches wherein providing the intervention to the patient comprises at least one of advising the patient to make dietary changes, advising the patient to make changes to sleep habits, assigning brain exercises, prescribing a therapeutic agent (follow-up recommended medical plans such as medication or referral in real time [0080]), referring the patient to rehabilitation, referring the patient to a clinical trial, referring the patient to family planning, referring the patient to a support group, and providing neuromodulation.

Regarding claim 5, modified Chang teaches the method of claim 1, as discussed above. Chang further teaches providing a clinical parameter to the machine learning model (reference database [0082]), wherein generating the value at the machine learning model comprises generating the value from the clinical parameter, the representation of the first image, and the representation of the second image (In Step 680, the clinical test data is compared with the thickness of each area of retinal layer, the horizontal retinal layer area and the vertical retinal layer area by a regression analysis model, so as to calculate an assessing grade representing a possibility of the subject having neurodegenerative disease [0079]), the clinical parameter being extracted from an electronic health records (EHR) database and representing one of a medical history of the patient (In Step 670, a clinical test data of the subject is provided; the clinical test data can include a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, a blood test value, a personal medical history, a family medical history, a past surgery history [0078]), a treatment prescribed to the patient, and a measured biometric parameter of a patient.

Regarding claim 6, modified Chang teaches the method of claim 5, as discussed above.
Chang further teaches wherein the clinical parameter represents one of heart rate variability, sleep quality, and concentrations of biomarkers in one of the blood (In Step 670, a clinical test data of the subject is provided; the clinical test data can include a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, a blood test value [0078]) and the cerebrospinal fluid of the patient.

Regarding claim 8, modified Chang teaches the method of claim 1, as discussed above. Chang further teaches wherein the second image is one of an optical coherence tomography (OCT) image (optical coherence tomography [0043]), an OCT angiography image, and an image generated via fundus photography.

Regarding claim 10, modified Chang teaches the method of claim 1, as discussed above. Chang further teaches wherein the representation of the second image comprises a parameter representing one of a volume of the retina, a thickness of the retina (In Step 610, a target optical coherence tomographic image of a subject is provided; in Step 620, the retinal layer quantification system 200 is provided [0073]; a calculating step is performed, in which the retinal layer thicknesses in each region of the enhanced label target image are calculated using the layer thickness detection module 440 [0077]), a texture of the retina, a thickness of a retinal layer, a volume of a retinal layer, a texture of a retinal layer, a value representing a vascular pattern, a value representing vascular density, a size of the foveal avascular zone, a width of the optic chiasm, a height of the intraorbital optic nerve, a width of the intracranial optic nerve, or a total area of the vasculature in the image.

Claims 7, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al (US 20240180415 A1) in view of Etkin (US 20180236255 A1) and Newman (US 20090270717 A1).

Regarding claim 7, modified Chang teaches the method of claim 1, as discussed above.
Chang, however, does not teach wherein acquiring the first image comprises acquiring the first image via one of diffusion tensor imaging and a positron emission tomography (PET) scan using one of glucose tagged with radioactive fluorine, a tracer for beta-amyloid, and a tracer for tau protein.

Newman is considered analogous to the instant application, as “Apparatus and method for diagnosis of optically identifiable ophthalmic conditions” is disclosed (title). Newman teaches acquiring the first image via a positron emission tomography (PET) scan using glucose tagged with radioactive fluorine (Additional tests 1145 and additional information 1150 about a person comprise any one or more of: collecting an additional image; collection of an additional data set; additional information from a magnetic resonance imaging (MRI) test, which can be a structural MRI test or a functional MRI test; patient history data; vital signs data; additional information from a positron emission tomography (PET) test, which can be an FDG-PET glucose determination or a C-PK11195-PET test; additional information from a brain biopsy; and additional information from a cognitive impairment test. The order of performing the steps of obtaining an image, obtaining a data set indicative of a neurological disorder, and obtaining said additional information about said person is not critical, and may be performed in any order that is convenient [0126]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include acquiring the first image via a positron emission tomography (PET) scan using glucose tagged with radioactive fluorine, as taught by Newman. Doing so would measure early stages of neurological disease and abnormalities, and track changes over time, as suggested by Newman ([0149]).
Regarding claim 9, modified Chang teaches the method of claim 1, as discussed above. Chang, however, does not teach wherein providing the representation of the first image to the machine learning model comprises extracting a representation of one of a cortical profile of the brain, a vasculature of the brain, a B-amyloid profile of the brain, and a connectivity of the brain from the first image.

Newman is considered analogous to the instant application, as “Apparatus and method for diagnosis of optically identifiable ophthalmic conditions” is disclosed (title). Newman teaches extracting a connectivity of the brain from the first image ([0144]-[0146] and [0154] disclose that the imaging extracts connectivity of the brain/neurons). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include extracting a connectivity of the brain from the first image, as taught by Newman. Doing so would measure early stages of neurological disease and abnormalities, and track changes over time, as suggested by Newman ([0149]).

Regarding claim 11, modified Chang teaches the method of claim 1, as discussed above. Chang, however, does not teach further comprising imaging a pupil of the patient to provide a parameter representing at least one of eye tracking data, eye movement, pupil size, and a change in pupil size, wherein generating the value at the machine learning model comprises generating the value from the representation of the first image, the representation of the second image, and the parameter.

Newman, however, teaches imaging a pupil of the patient to provide a parameter representing at least one of eye tracking data (A white portion 262 of the surface of the eye 260 will in general reflect light more strongly than a darker portion of the surface of the eye 260, such as the iris 264, or the pupil of the eye situated within the iris 264.
As the eye moves, the change in intensity of light reflected from the white portion 262 as compared to the intensity of light reflected from the iris 264 is tracked. Position is measured as a pixel location counted from one end of a sensor 240 [0088]), wherein generating the value at the machine learning model comprises generating the value from the representation of the first image, the representation of the second image, and the parameter ([0016] discloses comparing data using a data collection apparatus and comparing with multiple imaging modalities to provide a diagnosis of a condition of health).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include imaging a pupil of the patient to provide a parameter representing at least one of eye tracking data, wherein generating the value at the machine learning model comprises generating the value from the representation of the first image, the representation of the second image, and the parameter, as taught by Newman. Doing so would measure early stages of neurological disease and abnormalities, and track changes over time, as suggested by Newman ([0149]).

Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al (US 20240180415 A1) in view of Etkin (US 20180236255 A1) and Rezai (WO 2021195616 A1).

Regarding claim 12, Chang teaches a system for monitoring progression of Alzheimer's disease and related dementias (calculate an assessing grade representing a possibility of the subject having neurodegenerative disease [0010]), the system comprising:

a processor (processor [0077]) and a non-transitory computer readable medium storing machine-readable instructions executable by the processor to provide (The electronic device 710 can be a portable electronic device such as a mobile phone or a tablet.
The processor 722 can further be integrated into the electronic device [0071]; it is known that phones/tablets can carry out instructions and have storage):

an imager interface that acquires a first image, representing a brain of a patient, from a first imaging system ([0078]: In Step 670, a clinical test data of the subject is provided; the clinical test data can include a computed tomography (CT) image and a magnetic resonance imaging (MRI) image [0079]; the image/clinical test data obtained contains information regarding the central nervous system, which includes the brain; [0004] and [0094] disclose this comparison) and a second image, representing one of a retina, an optic nerve, and a vasculature associated with one of the retina and the optic nerve of the patient, from a second imaging system (In Step 610, a target optical coherence tomographic image of a subject is provided; in Step 620, the retinal layer quantification system 200 is provided [0073]);

a machine learning model that generates a first value representing progression of Alzheimer's disease or a related dementia from a representation of the first image and a representation of the second image ([0047] and [0082]-[0088] disclose a machine learning model used to classify the images to which the first and second images are supplied; supplying the images is disclosed in [0079]);

an assisted decision making module that assigns the patient to one of a plurality of intervention classes according to the generated value (the electronic device 710 can further show an assessing result of the subject having ophthalmic diseases or neurodegenerative disease and follow-up recommended medical plans such as medication or referral in real time [0080]; medication or referral are the plurality of intervention classes);
Chang further teaches a display that displays the assigned intervention class to a user (the electronic device 710 can further show an assessing result of the subject having ophthalmic diseases or neurodegenerative disease and follow-up recommended medical plans such as medication or referral in real time [0080]; The electronic device 710 can be a portable electronic device such as a mobile phone or a tablet [0081]). Chang, however, does not teach: a neuromodulation system that provides focused ultrasound treatment to a patient; a sensor interface that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient, including a first clinical parameter for the patient acquired either during or immediately after a first focused ultrasound treatment provided to the patient at the neuromodulation system, a second clinical parameter for the patient acquired between five hours and five days after the first focused ultrasound treatment, and a third clinical parameter for the patient acquired more than five days after the first focused ultrasound; wherein the machine learning model generates a second value representing progression of Alzheimer's disease or a related dementia from the first clinical parameter, the second clinical parameter, and the third clinical parameter, and a second focused ultrasound treatment is provided to the patient according to the second value. Etkin is considered analogous to the instant application as “Neurostimulation treatment” is disclosed (title).
Etkin teaches: a neuromodulation system that provides focused ultrasound treatment to a patient (first plurality of non-invasive stimulations is … focused ultrasound stimulation [0057]; the disclosures herewith provide a method to determine where in the brain of an individual with a disorder of the central nervous system abnormalities in brain function are elicited through non-invasive brain stimulation. The purpose of identifying these locations, as well as their particular signatures (i.e. the nature of the abnormality) is that this can then guide neurostimulation interventions that normalize the identified abnormality [0105]); including a first clinical parameter for the patient acquired either during or immediately after a first focused ultrasound treatment provided to the patient at the neuromodulation system (Remediation or optimization of a treatment protocol for an individual patient may be enacted in real time via monitoring of the non-invasive brain stimulation evoked response, for a closed-loop individualized treatment. … remediation or optimization to treatment protocols occur in real time, at a following treatment session (e.g., within hours, within a single day, within days, within weeks). Furthermore, monitoring a non-invasive brain stimulation evoked response may occur following an initial course of treatment as a disease monitoring, prophylactic or diagnostic method.
In embodiments, monitoring of a non-invasive brain stimulation evoked response may occur about one to four weeks, one month, two months, three months, 6 months, one year, or more following a successfully completed course of treatment [0113]), a second clinical parameter for the patient acquired between five hours and five days after the first focused ultrasound treatment (Remediation or optimization of a treatment protocol for an individual patient may be enacted in real time via monitoring of the non-invasive brain stimulation evoked response, for a closed-loop individualized treatment. … remediation or optimization to treatment protocols occur in real time, at a following treatment session (e.g., within hours, within a single day, within days, within weeks). Furthermore, monitoring a non-invasive brain stimulation evoked response may occur following an initial course of treatment as a disease monitoring, prophylactic or diagnostic method. In embodiments, monitoring of a non-invasive brain stimulation evoked response may occur about one to four weeks, one month, two months, three months, 6 months, one year, or more following a successfully completed course of treatment [0113]), and a third clinical parameter for the patient acquired more than five days after the first focused ultrasound (Remediation or optimization of a treatment protocol for an individual patient may be enacted in real time via monitoring of the non-invasive brain stimulation evoked response, for a closed-loop individualized treatment. … remediation or optimization to treatment protocols occur in real time, at a following treatment session (e.g., within hours, within a single day, within days, within weeks). Furthermore, monitoring a non-invasive brain stimulation evoked response may occur following an initial course of treatment as a disease monitoring, prophylactic or diagnostic method.
In embodiments, monitoring of a non-invasive brain stimulation evoked response may occur about one to four weeks, one month, two months, three months, 6 months, one year, or more following a successfully completed course of treatment [0113]); wherein the machine learning model generates a second value representing progression of Alzheimer's disease or a related dementia from the first clinical parameter, the second clinical parameter, and the third clinical parameter (the response throughout the treatment is measured in a value/generates a value, which uses the clinical parameters/measurements during the treatment sessions, as disclosed in [0078]-[0080]), and a second focused ultrasound treatment is provided to the patient according to the second value (multiple treatment sessions using live monitoring/responses are disclosed in [0123] and [0091]-[0092]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include a neuromodulation system that provides focused ultrasound treatment to a patient, including a first clinical parameter for the patient acquired either during or immediately after a first focused ultrasound treatment provided to the patient at the neuromodulation system, a second clinical parameter for the patient acquired between five hours and five days after the first focused ultrasound treatment, and a third clinical parameter for the patient acquired more than five days after the first focused ultrasound, wherein the machine learning model generates a second value representing progression of Alzheimer's disease or a related dementia from the first clinical parameter, the second clinical parameter, and the third clinical parameter, and a second focused ultrasound treatment is provided to the patient according to the second value, as taught by Etkin.
Doing so would ameliorate or reduce symptoms of the disorder, as suggested by Etkin ([0049]). The combined invention still does not teach a sensor interface that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient. Rezai is considered analogous to the instant application as “Predicting wellness of a user with monitoring from portable monitoring devices” is disclosed (title). Rezai teaches: a sensor interface (160) that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient (using portable monitoring devices 102 and 110 [0057]; The system 150 can further include a mobile device 160 that communicates with the first and second portable monitoring devices 152 and 154 via a local transceiver 162. The mobile device 160 can also include a graphical user interface 164 that allows a user to interact with one or more data gathering applications 166 stored at the base unit [0058]; A “portable monitoring device,” as used herein, refers to a device that is worn by, carried by, or implanted within a user that incorporates either or both of an input device and user interface for receiving input from the user [0026]; two monitoring devices are disclosed, which can be either worn or carried by the patient and which measure clinical parameters). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include a sensor interface that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient, as taught by Rezai. Doing so would facilitate measuring these physiological parameters in a naturalistic, non-clinical setting, as suggested by Rezai ([0031]). Regarding claim 13, modified Chang teaches the system of claim 12, as discussed above.
Chang further teaches the machine learning model generating the first value from the representation of the first image, the representation of the second image, and the clinical parameters ([0047] and [0082]-[0088] disclose a machine learning model used to classify the images, to which the first and second images are supplied, as disclosed in [0079]). Chang, however, does not teach: the clinical parameters including at least two of a parameter representing sleep length, a parameter representing sleep depth, a length of a sleep stage, heart rate, heart rate variability, a parameter representing perspiration, a parameter representing salivation, blood pressure, pupil size, changes in pupil size, a parameter representing brain activity, a parameter representing electrodermal activity, body temperature, and blood oxygen saturation level. Rezai is considered analogous to the instant application as “Predicting wellness of a user with monitoring from portable monitoring devices” is disclosed (title). Rezai teaches: a sensor interface (160) that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient (using portable monitoring devices 102 and 110 [0057]; The system 150 can further include a mobile device 160 that communicates with the first and second portable monitoring devices 152 and 154 via a local transceiver 162.
The mobile device 160 can also include a graphical user interface 164 that allows a user to interact with one or more data gathering applications 166 stored at the base unit [0058]; A “portable monitoring device,” as used herein, refers to a device that is worn by, carried by, or implanted within a user that incorporates either or both of an input device and user interface for receiving input from the user [0026]; two monitoring devices are disclosed, which can be either worn or carried by the patient and which measure clinical parameters); and the clinical parameters including at least two of a parameter representing sleep length, a parameter representing sleep depth, a length of a sleep stage, heart rate, heart rate variability (Wellness-relevant parameters monitored by the first and second portable monitoring devices … can include, for example, heart rate, heart rate variability, metrics of sleep quality, biological rhythm variations, metrics of sleep quantity, physical activity of the user [0057]), a parameter representing perspiration, a parameter representing salivation, blood pressure, pupil size, changes in pupil size, a parameter representing brain activity, a parameter representing electrodermal activity, body temperature, and blood oxygen saturation level. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Chang to include a sensor interface that receives clinical parameters measured by one of a device worn by the patient and a device carried by the patient, and the clinical parameters including at least two of the recited parameters, as taught by Rezai. Doing so would facilitate measuring these physiological parameters in a naturalistic, non-clinical setting, as suggested by Rezai ([0031]). Regarding claim 14, modified Chang teaches the system of claim 13, as discussed above.
Chang further teaches the machine-readable instructions executable by the processor to further provide a feature extractor that generates the representation of one of the first image and the second image as a set of numerical features (a regression analysis model to compare the thickness of each area of retinal layer, the horizontal retinal layer area or the vertical retinal layer area calculated by the retinal layer quantitative system of the present disclosure with the clinical test data, such as the CT image, the MRI image and the blood test value, to calculate the assessing grade representing the possibility of the subject having neurodegenerative disease [0094]; The image capturing unit is configured to capture a target optical coherence tomographic image of a subject. The processor is electrically connected to the image capturing unit and stores a program, wherein the program detects a retinal layer thickness and a retinal layer area of the subject when the program is executed by the processor [0007]; retinal layer thickness is compared in the images, i.e. numerical features). Response to Arguments Applicant's arguments filed 09/17/2025 have been fully considered but they are not persuasive. Regarding the §112(a) written description rejection, applicant argues on pages 9-10 that paragraph [0051] of the specification describes use of the predictive model to detect the onset or progression of dementia. In response, the examiner respectfully disagrees. Paragraph [0051] does not adequately describe a) how the value is generated for Alzheimer's disease and related dementias using the machine learning model, and b) how the value is then used to provide an intervention for Alzheimer's disease and related dementias. Accordingly, the argument is not persuasive and the rejection is maintained. 
Regarding the 35 USC §101 rejections of claims 12-14, applicant argues on page 10 of the remarks that the added amendments overcome the rejection; however, the examiner respectfully disagrees. Acquiring clinical parameters recites extra-solution activity, and generating a value recites a mental process, as recited in the rejection above. Accordingly, the argument is not persuasive and the rejection is maintained. Regarding the 35 USC §103 rejections, applicant argues on page 11 that the prior art references Chang or Etkin do not teach the limitations regarding acquiring three different parameters at three different time points to guide a treatment. In response, the examiner respectfully disagrees. The examiner first notes that there is no proper written description of how “a value representing progression of Alzheimer's disease or a related dementia” is generated, as noted above. The examiner further notes that paragraph [0113] of Etkin discloses “Remediation or optimization of a treatment protocol for an individual patient may be enacted in real time via monitoring of the non-invasive brain stimulation evoked response, for a closed-loop individualized treatment”, i.e., guiding treatment based on measurements taken at different time points. Etkin further discloses that multiple treatments take place that are monitored ([0078]-[0080], [0091]-[0092], [0123]). Accordingly, this argument is not persuasive and the claims are rejected under 35 USC §103, as noted above. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NESHAT BASET whose telephone number is (571)272-5478. The examiner can normally be reached M-F 8:30-17:30 CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PASCAL M. BUI-PHO can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.B./ Examiner, Art Unit 3798 /PASCAL M BUI PHO/ Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Dec 27, 2023
Application Filed
Jun 13, 2025
Non-Final Rejection — §101, §103, §112
Sep 17, 2025
Response Filed
Jan 07, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582377
ULTRASOUND BASED THREE-DIMENSIONAL LESION VERIFICATION WITHIN A VASCULATURE
2y 5m to grant Granted Mar 24, 2026
Patent 12558065
ULTRASOUND TRANSDUCER
2y 5m to grant Granted Feb 24, 2026
Patent 12376758
BIOLOGICAL INFORMATION MONITORING APPARATUS AND MAGNETIC RESONANCE APPARATUS
2y 5m to grant Granted Aug 05, 2025
Patent 12350097
DEVICES, SYSTEMS, AND METHODS FOR TRANS-VAGINAL, ULTRASOUND-GUIDED HYSTEROSCOPIC SURGICAL PROCEDURES
2y 5m to grant Granted Jul 08, 2025
Patent 12285289
MODULAR ULTRASOUND APPARATUS AND METHODS
2y 5m to grant Granted Apr 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
30%
Grant Probability
58%
With Interview (+27.6%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 63 resolved cases by this examiner. Grant probability derived from career allow rate.
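The headline projections above follow from simple arithmetic on the examiner's career statistics (19 granted of 63 resolved cases, plus the observed +27.6% interview lift). A minimal sketch of that arithmetic, assuming a simple additive-lift model; the function names are illustrative and not the tool's actual methodology:

```python
# Illustrative only: derive the displayed projection figures from the
# examiner statistics shown above. The additive-lift model and function
# names are assumptions, not the analytics tool's disclosed method.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved cases that were granted."""
    return granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply an additive interview lift, capped at 100%."""
    return min(base_rate + lift, 1.0)

base = career_allow_rate(19, 63)       # 19 granted / 63 resolved
boosted = with_interview(base, 0.276)  # observed +27.6% interview lift

print(f"Grant probability: {base:.0%}")
print(f"With interview:    {boosted:.0%}")
```

With these inputs the allow rate rounds to 30% and the interview-adjusted figure to 58%, matching the displayed projections.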
