Prosecution Insights
Last updated: April 19, 2026
Application No. 18/163,278

Detection and Differentiation of Activity Using Behind-the-Ear Sensing

Final Rejection: §101, §102, §103, §112, §DP
Filed
Feb 01, 2023
Examiner
HODGE, LAURA NICOLE
Art Unit
3792
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
The Regents of the University of Colorado
OA Round
2 (Final)
Grant Probability: 42% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 86%

Examiner Intelligence

Grants 42% of resolved cases.

Career Allow Rate: 42% (40 granted / 95 resolved; -27.9% vs TC avg)
Interview Lift: +43.7% (strong), among resolved cases with an interview
Avg Prosecution: 3y 8m typical timeline (58 applications currently pending)
Total Applications: 153 career total, across all art units
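As a sanity check, here is a minimal Python sketch of how the headline figures above appear to be derived from the raw counts. The TC-average and with-interview allow rates are back-solved from the displayed deltas and are assumptions, not source data:

```python
# Hypothetical reconstruction of the examiner stats shown above.
granted = 40
resolved = 95
tc_avg_allow_rate = 0.70  # back-solved from the -27.9% delta; an assumption

allow_rate = granted / resolved                # 0.421 -> displayed as "42%"
delta_vs_tc = allow_rate - tc_avg_allow_rate   # -0.279 -> "-27.9% vs TC avg"

# Interview lift appears to be an additive gap in allow rate between
# resolved cases with and without an examiner interview.
allow_with_interview = 0.858  # assumed; consistent with 42% + 43.7% ~ 86%
interview_lift = allow_with_interview - allow_rate  # ~ +0.437

print(f"Career allow rate: {allow_rate:.1%}")        # 42.1%
print(f"Delta vs TC average: {delta_vs_tc:+.1%}")    # -27.9%
print(f"Interview lift: {interview_lift:+.1%}")      # +43.7%
```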

Statute-Specific Performance

§101: 24.0% (-16.0% vs TC avg)
§103: 32.3% (-7.7% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 27.1% (-12.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 95 resolved cases.
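A small sketch of how the per-statute figures relate to the Tech Center averages. Note that every displayed delta back-solves to a TC average of roughly 40%; that observation is an inference from the numbers shown, not source data:

```python
# Per-statute rates and deltas as displayed above.
statute_rate = {"101": 0.240, "103": 0.323, "102": 0.117, "112": 0.271}
delta_vs_tc = {"101": -0.160, "103": -0.077, "102": -0.283, "112": -0.129}

for s, rate in statute_rate.items():
    # e.g. for §101: 24.0% - (-16.0%) = 40.0% implied TC average
    tc_avg = rate - delta_vs_tc[s]
    print(f"§{s}: {rate:.1%} vs TC avg {tc_avg:.1%} ({delta_vs_tc[s]:+.1%})")
```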

Office Action

§101, §102, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The Examiner notes that the claim listing dated 10/16/2025 is missing claim 20. As such, the claims have been renumbered according to 37 CFR 1.126 as set forth below and are referenced accordingly throughout: claim 21 has been renumbered as claim 20, claim 22 as claim 21, claim 23 as claim 22, and claim 24 as claim 23. Claims 1-8 and 12-23 are rejected. Claims 9-11 are canceled.

Response to Arguments

Drawings: The drawing objection has been withdrawn in view of Applicant canceling Figs. 4B, 4C, 5A-5C, and 6A-6B and renumbering original Fig. 4A as Fig. 4 and original Fig. 7 as Fig. 5.

Claim Objections: The claim objection has been withdrawn in view of the annotated claim listing with claims renumbered according to 37 CFR 1.126.

Claim Rejections - 35 USC § 112(a): The previous 112(a) rejections have been withdrawn in view of the amendment of claims 7-8 and the cancellation of claims 9-11.

Claim Rejections - 35 USC § 112(b): Some of the previous 112(b) rejections have been withdrawn in view of the amendment.

Claim Rejections - 35 USC § 101: Applicant's arguments, see Remarks, filed 10/16/25, with respect to the rejection of claims 1-8 and 12-23 for not falling within one of the four statutory categories of invention have been fully considered and are persuasive; that rejection has been withdrawn. Regarding the abstract-idea rejection of claims 1-8 and 12-23 under § 101, Applicant's arguments filed 10/16/25 have been fully considered but are not persuasive. Applicant argues that the Office provides no evidence that it is humanly possible to mentally (1) separate a signal into component bio-signals, (2) extract features from such bio-signals, or (3) determine, based on such features, whether the patient is engaged in an activity. However, MPEP 2106.07(a) recites: "There is no requirement for the examiner to rely on evidence, such as publications or an affidavit or declaration under 37 CFR 1.104(d)(2), to find that a claim recites a judicial exception." Applicant further argues that the Office Action does not explain how a human could determine, without the use of computer technology and based on the features extracted from the bio-signals, whether the patient is engaged in a particular activity. The Examiner directs Applicant to MPEP 2106.05(f), which recites: "Similarly, 'claiming the improved speed or efficiency inherent with applying the abstract idea on a computer' does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015)."

Claim Rejections - 35 USC § 102/103: Applicant's arguments filed 10/16/25 have been fully considered but are not persuasive. Applicant argues that Vu does not teach the limitation of determining, based on the one or more features extracted from the one or more individual bio-signals, whether the patient is engaged in one or more activities. The Examiner disagrees: Vu teaches this limitation (¶32-determine, based on the one or more features extracted from the one or more individual biosignals, a wakefulness classification of the patient).
Because the "one or more activities" are not specified in the claim, determining a wakefulness classification based on features extracted from the one or more biosignals reads on determining whether the patient is engaged in one or more activities.

Double Patenting: The double patenting rejection has been withdrawn in view of the filed Terminal Disclaimer.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 17-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 17 recites the limitation "the first activity" in lines 5 and 18. There is insufficient antecedent basis for this limitation in the claim. Applicant is encouraged to amend "the first activity" in line 5 to recite --a first activity-- and "a first activity" in line 18 to recite --the first activity--.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 and 12-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, specifically an abstract idea, without significantly more.

Step 1: Claims 1-8 and 12-23 are directed to statutory subject matter, as they recite a system, a non-transitory computer readable medium, and a method for determining whether the patient is engaged in one or more activities.

Step 2A, Prong One: Regarding claims 1, 22, and 23, the recited steps are directed to a mental process, i.e., concepts performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2), subsection (III)). The limitations of "separate the first signal into one or more component bio-signals; extract one or more features from each of the one or more individual bio-signals; and determine, based on the one or more features extracted from the one or more individual bio-signals, whether the patient is engaged in one or more activities," as drafted, cover performance in the human mind (including an observation, evaluation, judgment, or opinion) under the broadest reasonable interpretation. For example, these limitations amount to nothing more than a medical professional receiving printouts of a signal, separating the signal into components/features, and analyzing the features to determine whether the patient is engaged in one or more activities.

Step 2A, Prong Two: For claims 1, 22, and 23, the judicial exception is not integrated into a practical application. In particular, claims 1, 22, and 23 recite "a processor and a sensor." The sensor amounts to nothing more than pre-solution activity of data gathering.
The processor is recited at a high level of generality and amounts to nothing more than a part of a generic computer. Merely including instructions to implement an abstract idea on a computer does not integrate a judicial exception into a practical application.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a sensor amounts to nothing more than mere pre-solution activity of data gathering, which does not amount to an inventive concept. Moreover, the sensor is well-understood, routine, and conventional activity, as evidenced by WO 2020227433 (¶32), US 20220199245 (¶10), and US 20160135738 (¶9). Further, simply appending well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception does not amount to significantly more, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)). In this case, elements of a general computer are being used to implement the abstract idea.

Regarding dependent claims 2-8, 12-19, and 21-22, the limitations further define the limitations of claims 1, 22, and 23 already indicated as being directed to the abstract idea. Claims 2 and 12-16 amount to further defining the data gathering. Claims 3-8 and 18 further define the abstract idea. Claim 4 applies a stimulus; however, it is not a particular treatment or prophylaxis, see MPEP 2106.04(d)(2). The specification does not provide any details of what would be considered a stimulus; it could be many different things, including some that may themselves be considered abstract (e.g., giving verbal instructions). Claim 17 amounts to an abstract idea, data gathering, and a machine learning model, which is nothing more than the computer implementation/automation of an abstract mental process of screening a patient, which is what a physician typically does with a patient in a diagnostic setting. Claims 19 and 21-22 further define the machine learning model, which is likewise nothing more than the computer implementation/automation of that abstract mental process.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 7-8, 12-16, 22, and 23 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Vu (WO 2020227433, filed on 5/6/20, as cited in the IDS).
Regarding claims 1, 22, and 23, Vu teaches a system, a non-transitory computer readable medium, and a method comprising: a processor (¶32-processor); and a computer readable medium in communication with the processor (¶32-a computer readable medium in communication with the processor), the computer readable medium having encoded thereon a set of instructions executable by the processor to (¶32-the computer readable medium having encoded thereon a set of instructions executable by the processor to): obtain, via a sensor, a first signal from a first position of a patient (¶32-obtain, via one or more behind-the-ear sensors, a first signal collected from behind the ear of a patient); separate the first signal into one or more component bio-signals (¶32-separate the first signal into one or more individual component biosignals); extract one or more features from each of the one or more individual bio-signals (¶32-extract the one or more features from each of the one or more individual biosignals); and determine, based on the one or more features extracted from the one or more individual bio-signals, whether the patient is engaged in one or more activities (¶32-determine, based on the one or more features extracted from the one or more individual biosignals, a wakefulness classification of the patient; ¶5-swallowing sound; ¶42-speech; ¶34).

Regarding claim 2, Vu teaches the system of claim 1, wherein the one or more individual bio-signals include an electroencephalogram (EEG) signal (claim 15-wherein the one or more individual biosignals includes at least one of an electroencephalogram (EEG) signal).

Regarding claim 7, Vu teaches the system of claim 1, wherein the one or more individual bio-signals include an electrooculography (EOG) signal (claim 15-wherein the one or more individual biosignals includes at least one...electrooculography (EOG) signal).

Regarding claim 8, Vu teaches the system of claim 1, wherein the one or more individual bio-signals include an electromyography (EMG) signal (claim 15-wherein the one or more individual biosignals includes at least one of an...electromyography (EMG) signal).

Regarding claim 12, Vu teaches the system of claim 1, further comprising a wearable device, the wearable device comprising the sensor, the sensor configured to be in contact with the skin of the patient (¶30-wearable device may include an ear piece configured to be worn behind an ear of a patient, one or more sensors coupled to the ear piece and configured to be in contact with the skin of the patient).

Regarding claim 13, Vu teaches the system of claim 12, wherein the wearable device is configured to position the sensor above the ear of the patient and below the crown of the patient, wherein the first position is a position located above the ear of the patient and below the crown of the patient (¶43-the one or more sensors 120 may further be configured to other parts of the patient, including, without limitation, the eyes, eyelids, and surrounding areas around the eyes, forehead, and temple of the patient 180).

Regarding claim 14, Vu teaches the system of claim 12, wherein the wearable device is configured to position the sensor on the skin over the mastoid bone of the patient, wherein the first position is a position over the mastoid bone of the patient (¶41-at least one sensor of the one or more sensors 120 being in contact with the skin over the respective mastoid bones of the patient 180; ¶98).
Regarding claim 15, Vu teaches the system of claim 12, wherein the wearable device is configured to be worn around an ear of the patient (¶39-the one or more ear pieces 110 may alternatively comprise a hoop-like structure, configured to be worn around the ear and/or earlobe of the patient 180).

Regarding claim 16, Vu teaches the system of claim 12, wherein the wearable device is a headband (¶43-the one or more sensors 120 may further be included in a device, such as a headband).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-6 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vu in view of Wipperman (US 20220199245, filed on 12/22/21).

Regarding claim 3, Vu teaches the system of claim 1, wherein the set of instructions is further executable by the processor to: determine whether the patient is engaged in a first activity based on the determination that the patient is engaged in the one or more activities (¶34-brain waves (EEG), eye movements (EOG), facial muscle activities (EMG), electrodermal activity (EDA), and head motion from the area behind human ears). However, Vu does not teach wherein determining whether the patient is engaged in the first activity further comprises determining a first score for a first set of features associated with the first activity, wherein the first score indicates how closely the one or more features extracted from the one or more individual bio-signals matches the first set of features.

Wipperman teaches wherein determining whether the patient is engaged in the first activity further comprises determining a first score for a first set of features associated with the first activity (¶235-activity detection F1 scores; ¶238-F1 scores for all biometric sensor device features (161 features) 5200B; ¶298), wherein the first score indicates how closely the one or more features extracted from the one or more individual bio-signals matches the first set of features (¶238-F1 scores range from 0 to 1, with 1 indicating perfect classification; ¶209-the identified clinically relevant features indicated that the biometric sensor device tested was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9; ¶91; ¶179-180). Wipperman relates to profiling features derived from signals (e.g., signals based on biometric cues in a subject using a biometric device, including, but not limited to, wearable devices) for use in clinical outcomes (¶2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include wherein determining whether the patient is engaged in the first activity further comprises determining a first score for a first set of features associated with the first activity, wherein the first score indicates how closely the one or more features extracted from the one or more individual bio-signals matches the first set of features, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Regarding claim 4, the combination of Vu and Wipperman teaches the system of claim 3, wherein the set of instructions is further executable by the processor to: apply a stimulus to the patient in response to the determination that the patient is engaged in the first activity (Vu, ¶64-control the stimulation output 125 based on the wakefulness classification (e.g., microsleep classification) determined above).

Regarding claim 5, the combination of Vu and Wipperman teaches the system of claim 3, wherein the first activity is one of speaking, chewing, or swallowing (Vu, ¶5-swallowing sound; ¶34-speech).

Regarding claim 6, Vu teaches the system of claim 1. However, Vu does not teach wherein the set of instructions is further executable by the processor to: diagnose whether the patient is afflicted with a first condition, wherein diagnosing whether the patient is afflicted with the first condition further comprises: determining a first score for a first set of features associated with a first activity while the patient is engaged in the first activity, wherein the first score indicates how closely the one or more features extracted from the one or more individual bio-signals matches the first set of features; determining whether the first score meets a threshold score for the first activity; and wherein if the threshold score is not met, determining that the patient is afflicted with the first condition.
Wipperman teaches wherein the set of instructions is further executable by the processor to: diagnose whether the patient is afflicted with a first condition (¶74-an early diagnosis can be obtained; ¶153-determine the condition of a subject), wherein diagnosing whether the patient is afflicted with the first condition further comprises: determining a first score for a first set of features associated with a first activity while the patient is engaged in the first activity (¶235-activity detection F1 scores; ¶238-F1 scores for all biometric sensor device features (161 features) 5200B; ¶298), wherein the first score indicates how closely the one or more features extracted from the one or more individual bio-signals matches the first set of features (¶238-F1 scores range from 0 to 1, with 1 indicating perfect classification; ¶209-the identified clinically relevant features indicated that the biometric sensor device tested was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9; ¶91; ¶179-180); determining whether the first score meets a threshold score for the first activity (¶158-a determination may be made whether the corresponding sensor based device is reliably able to output the clinical outcome (e.g., if a threshold number of extracted features meet reliability thresholds, if the clinically relevant thresholds exceed reliability thresholds, etc.); ¶191); and wherein if the threshold score is not met, determining that the patient is afflicted with the first condition (¶115-these clinically relevant features 50 may meet or exceed one or more reliability thresholds such that the clinically relevant features 50 can be relied upon to produce the clinical output with a degree of confidence; ¶297-applying the clinically relevant features to determine a clinical outcome result, wherein the clinical outcome result is one of a diagnosis or a treatment plan; ¶113).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include these diagnostic limitations, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Regarding claim 17, Vu teaches the system of claim 1.
However, Vu does not teach the one or more extracted features are passed to a machine learning model, wherein the machine learning model is configured to determine a respective similarity score of the one or more extracted features to each of the one or more sets of features including a first set of features associated with the first activity; and the set of instructions is further executable by the processor to: obtain a plurality of reference signals from a reference population, the plurality of reference signals corresponding to reference signals obtained from the reference population while engaged in speech, chewing, and swallowing; separate each reference signal of the plurality of reference signals into a respective set of one or more component bio-signals; extract a respective feature set from each set of one or more component bio-signals; train the machine learning model with the respective feature set, wherein training the machine learning model includes associating the respective feature set with a respective ground truth, wherein the respective ground truth corresponds to speech, chewing, or swallowing; and differentiate a first activity from other activities of the one or more activities based, at least in part, on the respective similarity scores of the one or more extracted features.

Wipperman teaches the one or more extracted features are passed to a machine learning model (¶153-at 472, a plurality of available extracted features may be received; ¶208-feature vectors were used as input to machine learning models), wherein the machine learning model is configured to determine a respective similarity score of the one or more extracted features to each of the one or more sets of features including a first set of features associated with the first activity (¶235-activity detection F1 scores; ¶238-F1 scores range from 0 to 1, with 1 indicating perfect classification, F1 scores for all biometric sensor device features (161 features) 5200B; ¶209-the identified clinically relevant features indicated that the biometric sensor device tested was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9; ¶91; ¶179-180; ¶298); and the set of instructions is further executable by the processor to: obtain a plurality of reference signals from a reference population, the plurality of reference signals corresponding to reference signals obtained from the reference population while engaged in speech, chewing, and swallowing (¶115-one or more individuals may participate in such a clinical trial such that the data corresponding to the clinically relevant features 50 for those one or more individuals may be compared to reference data (e.g., data from the one or a cohort of test users); ¶153-chewing, swallowing, talking; ¶152); separate each reference signal of the plurality of reference signals into a respective set of one or more component bio-signals (¶8-a signal separation module extracts the extracted features from the mixed signal; ¶153-the combined EEG and face information and the reference information); extract a respective feature set from each set of one or more component bio-signals (¶208-a total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data); train the machine learning model with the respective feature set (¶208-feature vectors were used as input to machine learning models; ¶291-the training data 5412 and a training algorithm 5420 may be provided to a training component 5430 that may apply the training data 5412 to the training algorithm 5420 to generate a machine learning model), wherein training the machine learning model includes associating the respective feature set with a respective ground truth, wherein the respective ground truth corresponds to speech, chewing, or swallowing (¶81-classification accuracy (F1 scores) of models; ¶91-the term "F1 score" refers to a measure of a model's accuracy on a dataset as a binary classification wherein a score of 0 is poor and a score of 1 is best; ¶179-the F1 score measures how well a model classifies a particular activity like swallowing, as shown by the results and computations in FIG. 14 and FIG. 15; Fig. 52); and differentiate a first activity from other activities of the one or more activities based, at least in part, on the respective similarity scores of the one or more extracted features (¶30-F1 score(s) to measure how well a model classifies a particular activity (e.g., swallowing); ¶180-application of the device used in a clinical setting was considered with a focus on measuring chewing, talking, and swallowing, as a result of the higher F1 score based reliability; ¶209).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include the machine-learning-model limitations of claim 17 set forth above, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Regarding claim 18, the combination of Vu and Wipperman teaches the system of claim 17, wherein the set of instructions is further executable by the processor to: determine a subset of component bio-signals comprising features indicative of the first activity, wherein the subset of component bio-signals includes the one or more component bio-signals (Wipperman, ¶279-general features for each waveform, apart from a subset of features specific to EMG, EOG, or EEG activity, were summarized).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include wherein the set of instructions is further executable by the processor to: determine a subset of component bio-signals comprising features indicative of the first activity, wherein the subset of component bio-signals includes the one or more component bio-signals, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Regarding claim 19, the combination of Vu and Wipperman teaches the system of claim 17, wherein the machine learning model is a random forest classifier (Wipperman, ¶99-random forest; ¶156; ¶191). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include wherein the machine learning model is a random forest classifier, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Regarding claim 20, the combination of Vu and Wipperman teaches the system of claim 17, wherein the machine learning model is a convolutional neural network (Wipperman, ¶86-convolutional neural network; ¶180; ¶208). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include wherein the machine learning model is a convolutional neural network, as taught by Wipperman, in order to provide for early detection and/or treatment of a potential disease or disorder experienced by a patient (Wipperman, ¶2).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Vu in view of Wipperman as applied to claim 17 above, and further in view of Kallonen (US 20230290511, filed on 10/11/21).

Regarding claim 21, the combination of Vu and Wipperman teaches the system of claim 17. However, the combination of Vu and Wipperman does not teach wherein the machine learning model is a transformer network. Kallonen teaches wherein the machine learning model is a transformer network (¶76-transformer network). Kallonen relates generally to detection of a life-threatening condition, and more particularly to computer-aided detection of a life-threatening condition based on measurements of biosignals (¶2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vu to include wherein the machine learning model is a transformer network, as taught by Kallonen, in order to provide for computer-aided detection of a life-threatening condition based on measurements of biosignals (Kallonen, ¶2).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAURA HODGE, whose telephone number is (571) 272-7101. The examiner can normally be reached M-F, 8:00 am-5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, UNSU JUNG, can be reached at (571) 272-8506. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.N.H./
Examiner, Art Unit 3792

/UNSU JUNG/
Supervisory Patent Examiner, Art Unit 3792
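The Wipperman reference cited in the §103 rejections scores activity classification with per-activity F1 (the harmonic mean of precision and recall, 2PR/(P+R), ranging from 0 to 1 with 1 indicating perfect classification) over models such as a random forest. For orientation, here is a minimal, self-contained sketch of that style of classifier; all data, feature counts, and labels are synthetic placeholders, not the reference's actual 161-feature set or reported results:

```python
# Hypothetical sketch of random-forest activity classification scored with
# per-activity F1, in the style the cited Wipperman reference describes.
# A real pipeline would first separate the behind-the-ear signal into
# EEG/EOG/EMG components and extract summary features from each.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))  # stand-in feature vectors (e.g., band power, RMS)
y = rng.choice(["talking", "chewing", "swallowing"], size=300)  # ground-truth labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Per-activity F1: 1.0 is perfect classification; purely random features
# like these will hover near chance (~0.33 for three balanced classes).
scores = f1_score(y_te, clf.predict(X_te), average=None, labels=clf.classes_)
print(dict(zip(clf.classes_, scores.round(2))))
```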

Prosecution Timeline

Feb 01, 2023
Application Filed
Apr 07, 2025
Non-Final Rejection — §101, §102, §103
Oct 16, 2025
Response Filed
Nov 12, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599336: Wearable Apparatus for Continuous Monitoring of Physiological Parameters. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12594422: Systems and Devices for Treating Equilibrium Disorders and Improving Gait and Balance. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12594414: Heart Support and Massage Machine. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12582822: Intra-Oral Appliances and Systems. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12576263: Device for Attaching a Heart Support System to an Insertion Device, and Method for Producing Same. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 42%
With Interview: 86% (+43.7%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 95 resolved cases by this examiner. Grant probability derived from career allow rate.
