Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,495

SYSTEM AND METHOD FOR ACTIVATION AND DEACTIVATION OF CUED HEALTH ASSESSMENT

Status: Non-Final OA (§DP)
Filed: Dec 19, 2023
Examiner: GUERRA-ERAZO, EDGAR X
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Sonde Health Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (671 granted / 796 resolved; +22.3% vs TC avg, above average)
Interview Lift: +15.1% (strong; allowance rate with vs. without an interview, among resolved cases)
Typical Timeline: 2y 10m average prosecution; 13 applications currently pending
Career History: 809 total applications across all art units

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 796 resolved cases.
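Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center baseline, so the baseline can be recovered from the pairs above. A minimal sketch of that back-calculation (variable names are ours, not the dashboard's):

```python
# Back-calculate the implied Tech Center baseline from each
# (examiner rate, delta vs TC avg) pair shown above.
rates = {"101": (22.1, -17.9), "103": (34.3, -5.7),
         "102": (17.9, -22.1), "112": (6.3, -33.7)}

for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta  # delta = examiner_rate - tc_avg
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")
```

Every pair implies the same 40.0% baseline, which suggests the dashboard applies one flat Tech Center estimate across statutes rather than a per-statute average.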

Office Action

Rejection basis: §DP (nonstatutory double patenting)
DETAILED ACTION

Introduction

1. This Office action is in response to Applicant's submission filed on 12/19/2023. Claims 1-20 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

3. The drawings filed on 12/19/2023 have been accepted and considered by the Examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

4. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“… a voice sample collector for receiving voice samples from a user; an audio processing module comprising a voice biomarker extractor, a health state classification unit, and a voice sample scheduler; the voice biomarker extractor for extracting acoustic features from the received voice samples; the health state classification unit for classifying the received voice samples based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is one of an emotional or affective state of the user, or a physiological change associated with one or more of a cardiovascular system, nervous system, respiratory system, and endocrine system, along with a probability of the classification; and the voice sample scheduler for activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold…” in claim 1;

“… wherein the cued health assessment module performs the cued health assessment by providing a ranked list of services to the user…” in claim 2;

“… wherein the voice sample collector comprises a cued voice sample collector and a passive voice sample collector…” in claim 4;

“… wherein the passive health assessment module performs a passive health assessment using the passive voice sample collector to collect the voice samples…” in claim 5;

“… wherein the step of activating a cued health assessment module further comprises: activating an elicitation module according to a predetermined schedule to alert the user to provide the voice samples, collect the voice samples using the cued voice sample collector, and perform a cued health assessment by collecting user response to a set of predetermined survey questions” in claim 6;

“… an utterance-of-interest detector for determining that the received voice samples contain a predetermined utterance of interest, from which the acoustic features can be extracted; a geofencing module for determining that the received voice samples are collected from a predetermined location; a voice activity detector for detecting a voice activity in a received audio sample by determining that the audio sample contains a predetermined amount of spectral content; and a speaker identification module for determining that the received voice samples are collected from one of predetermined speakers based on the acoustic features” in claim 7;

“… wherein the utterance-of-interest detector determines that the received voice samples contain the predetermined utterance of interest by matching the received voice samples to a predetermined time-domain template and comparing the acoustic features” in claim 8;

“… wherein the voice sample scheduler for activating a cued health assessment module schedules one of a digital voice collection exercise, a digitally administered health survey, and a telehealth session to be provided to the user…” in claim 9; and

“… wherein the health state classification unit of the audio processing module classifies the received voice samples to one of: depression, neurological, respiratory, and sleep disorders…” in claim 10.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Nonstatutory Double Patenting

5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,869,635. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the '635 patent anticipate the instant claims, as presented in the chart below. Similarly, method claims 11-20 of the instant application '495 are anticipated by, and map in the same mirror fashion to, the corresponding method claims 11-20 of the '635 patent.

Present App. 18/544,495

1. A system for activating a cued health assessment, the system comprising: a voice sample collector for receiving voice samples from a user; an audio processing module comprising a voice biomarker extractor, a health state classification unit, and a voice sample scheduler; the voice biomarker extractor for extracting acoustic features from the received voice samples; the health state classification unit for classifying the received voice samples based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is an emotional state or affective state or physiological state of the user, along with a probability of the classification; and the voice sample scheduler for activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold.

2. The system of claim 1, wherein the cued health assessment module performs the cued health assessment by providing a ranked list of services to the user.

3. The system of claim 1, further comprising a contextual data collector to collect contextual health data using one or more integrated sensors, wherein the integrated sensors comprise one or more of an accelerometer and a light sensor, and wherein the collected contextual health data improves said classification of the health state of the user.

4. The system of claim 1, wherein the voice sample collector comprises a cued voice sample collector and a passive voice sample collector.

5. The system of claim 4, wherein the passive health assessment module performs a passive health assessment using the passive voice sample collector to collect the voice samples.

6. The system of claim 4, wherein the step of activating a cued health assessment module further comprises: activating an elicitation module according to a predetermined schedule to alert the user to provide the voice samples, collect the voice samples using the cued voice sample collector, and perform a cued health assessment by collecting user response to a set of predetermined survey questions.

7. The system of claim 1, further comprising: an utterance-of-interest detector for determining that the received voice samples contain a predetermined utterance of interest, from which the acoustic features can be extracted; a geofencing module for determining that the received voice samples are collected from a predetermined location; a voice activity detector for detecting a voice activity in a received audio sample by determining that the audio sample contains a predetermined amount of spectral content; and a speaker identification module for determining that the received voice samples are collected from one of predetermined speakers based on the acoustic features.

8. The system of claim 7, wherein the utterance-of-interest detector determines that the received voice samples contain the predetermined utterance of interest by matching the received voice samples to a predetermined time-domain template and comparing the acoustic features.

9. The system of claim 1, wherein the voice sample scheduler for activating a cued health assessment module schedules one of a digital voice collection exercise, a digitally administered health survey, and a telehealth session to be provided to the user.

10. The system of claim 1, wherein the physiological state is associated with one or more of a cardiovascular system, nervous system, respiratory system, and endocrine system, and wherein the health state classification unit classifies the received voice samples to one of: depression, neurological, respiratory, and sleep disorders.

11. A method of activating a cued health assessment, the method comprising: receiving voice samples from a user, by a voice sample collector; extracting acoustic features from the received voice samples, by a voice biomarker extractor of an audio processing module; classifying the received voice samples, by a health state classification unit of the audio processing module, based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is an emotional state or affective state or a physiological state of the user; and activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold, by a voice sample scheduler of the audio processing module.

12. The method of claim 11, wherein the cued health assessment module performs the cued health assessment by providing a ranked list of services to the user.

13. The method of claim 11, wherein a contextual data collector is used to collect contextual health data using one or more integrated sensors, wherein the integrated sensors comprise one or more of an accelerometer and a light sensor, and wherein the collected contextual health data improves said classification of the health state of the user.

14. The method of claim 11, wherein the voice sample collector comprises a cued voice sample collector and a passive voice sample collector.

15. The method of claim 14, wherein the passive health assessment module performs a passive health assessment using the passive voice sample collector to collect the voice samples.

16. The method of claim 14, wherein in the step of activating a cued health assessment module further comprises: activating an elicitation module according to a predetermined schedule to alert the user to provide the voice samples, collect the voice samples using the cued voice sample collector, and perform a cued health assessment by collecting user response to a set of predetermined survey questions.

17. The method of claim 11, further comprising: determining that the received voice samples contain a predetermined utterance of interest, by an utterance-of-interest detector, and extracting the acoustic features from the predetermined utterance of interest; determining that the received voice samples are collected from a predetermined location, by a geofencing module; detecting a voice activity in a received audio sample, by a voice activity detector, wherein the voice activity detector determines that the audio sample contains a predetermined amount of spectral content; and determining that the received voice samples are collected from one of predetermined speakers based on the plurality of acoustic features, by a speaker identification module.

18. The method of claim 17, wherein the utterance-of-interest detector determines that the received voice samples contain the predetermined utterance of interest by matching the received voice samples to a predetermined time-domain template and comparing the acoustic features.

19. The method of claim 11, wherein the step of activating a cued health assessment module comprises the voice sample scheduler scheduling one of a digital voice collection exercise, a digitally administered health survey, and a telehealth session to be provided to the user.

20. The method of claim 11, wherein the physiological state is associated with one or more of a cardiovascular system, nervous system, respiratory system, and endocrine system, and wherein the step of classifying the received voice samples comprises the health state classification unit classifying the received voice samples to one of: depression, neurological, respiratory, and sleep disorders.

U.S. Patent No. 11,869,635

1. A system for activating a cued health assessment, the system comprising: a voice sample collector for receiving voice samples from a user; an audio processing module comprising a voice biomarker extractor, a health state classification unit, and a voice sample scheduler; the voice biomarker extractor for extracting acoustic features from the received voice samples; the health state classification unit for classifying the received voice samples based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is one of an emotional or affective state of the user, or a physiological change associated with one or more of a cardiovascular system, nervous system, respiratory system, and endocrine system, along with a probability of the classification; and the voice sample scheduler for activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold.

2. The system of claim 1, wherein the cued health assessment module performs the cued health assessment by providing a ranked list of services to the user.

3. The system of claim 1, further comprising a contextual data collector to collect contextual health data using one or more integrated sensors, wherein the integrated sensors comprise one or more of an accelerometer and a light sensor, and wherein the collected contextual health data improves said classification of the health state of the user.

4. The system of claim 1, wherein the voice sample collector comprises a cued voice sample collector and a passive voice sample collector.

5. The system of claim 4, wherein the passive health assessment module performs a passive health assessment using the passive voice sample collector to collect the voice samples.

6. The system of claim 4, wherein the step of activating a cued health assessment module further comprises: activating an elicitation module according to a predetermined schedule to alert the user to provide the voice samples, collect the voice samples using the cued voice sample collector, and perform a cued health assessment by collecting user response to a set of predetermined survey questions.

7. The system of claim 1, further comprising: an utterance-of-interest detector for determining that the received voice samples contain a predetermined utterance of interest, from which the acoustic features can be extracted; a geofencing module for determining that the received voice samples are collected from a predetermined location; a voice activity detector for detecting a voice activity in a received audio sample by determining that the audio sample contains a predetermined amount of spectral content; and a speaker identification module for determining that the received voice samples are collected from one of predetermined speakers based on the acoustic features.

8. The system of claim 7, wherein the utterance-of-interest detector determines that the received voice samples contain the predetermined utterance of interest by matching the received voice samples to a predetermined time-domain template and comparing the acoustic features.

9. The system of claim 1, wherein the voice sample scheduler for activating a cued health assessment module schedules one of a digital voice collection exercise, a digitally administered health survey, and a telehealth session to be provided to the user.

10. The system of claim 1, wherein the health state classification unit of the audio processing module classifies the received voice samples to one of: depression, neurological, respiratory, and sleep disorders.

11. A method of activating a cued health assessment, the method comprising: receiving voice samples from a user, by a voice sample collector; extracting acoustic features from the received voice samples, by a voice biomarker extractor of an audio processing module; classifying the received voice samples, by a health state classification unit of the audio processing module, based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is one of an emotional or affective state of the user, or a physiological change associated with one or more of a cardiovascular system, nervous system, respiratory system, and endocrine system, along with a probability of the classification; and activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold, by a voice sample scheduler of the audio processing module.

12. The method of claim 11, wherein the cued health assessment module performs the cued health assessment by providing a ranked list of services to the user.

13. The method of claim 11, wherein a contextual data collector is used to collect contextual health data using one or more integrated sensors, wherein the integrated sensors comprise one or more of an accelerometer and a light sensor, and wherein the collected contextual health data improves said classification of the health state of the user.

14. The method of claim 11, wherein the voice sample collector comprises a cued voice sample collector and a passive voice sample collector.

15. The method of claim 14, wherein the passive health assessment module performs a passive health assessment using the passive voice sample collector to collect the voice samples.

16. The method of claim 14, wherein in the step of activating a cued health assessment module further comprises: activating an elicitation module according to a predetermined schedule to alert the user to provide the voice samples, collect the voice samples using the cued voice sample collector, and perform a cued health assessment by collecting user response to a set of predetermined survey questions.

17. The method of claim 11, further comprising: determining that the received voice samples contain a predetermined utterance of interest, by an utterance-of-interest detector, and extracting the acoustic features from the predetermined utterance of interest; determining that the received voice samples are collected from a predetermined location, by a geofencing module; detecting a voice activity in a received audio sample, by a voice activity detector, wherein the voice activity detector determines that the audio sample contains a predetermined amount of spectral content; and determining that the received voice samples are collected from one of predetermined speakers based on the plurality of acoustic features, by a speaker identification module.

18. The method of claim 17, wherein the utterance-of-interest detector determines that the received voice samples contain the predetermined utterance of interest by matching the received voice samples to a predetermined time-domain template and comparing the acoustic features.

19. The method of claim 11, wherein the step of activating a cued health assessment module comprises the voice sample scheduler scheduling one of a digital voice collection exercise, a digitally administered health survey, and a telehealth session to be provided to the user.

20. The method of claim 11, wherein the step of classifying the received voice samples comprises the health state classification unit of the audio processing module classifying the received voice samples to one of: depression, neurological, respiratory, and sleep disorders.

Allowable Subject Matter

6. Claims 1-20 would be allowable over the prior art of record for at least the following rationale. Shrivastav et al. (U.S. Patent Application Publication 2012/0265024), already of record, hereinafter SHRIVASTAV, discloses, see e.g., "…identification device 200 can be used to determine a health state of a subject by receiving, as input to the interface 201, one or more speech samples from a subject (S210 of FIG. 2B)… interface 201 then communicates the one or more speech samples to the processor 202, which identifies the acoustic measures from the speech samples (S220 of FIG. 2B) and compares the acoustic measures of the speech samples with the baseline acoustic measures 225 stored in the memory 203 (S230 of FIG. 2B)…" (SHRIVASTAV paras. 46, 50, 52, Figs. 1, 2A-B, 5-7).

Further, SHRIVASTAV discloses techniques, see e.g., where "…the user can have a notification transmitted to themselves as a reminder to the user to provide the speech sample at a regularly scheduled interval… user may produce speech samples that correspond to a scheduled time, day, week, or month that repeats at a predetermined frequency… analysis of the speech samples can be provided based on potential changes in the speech samples taken at the specified intervals. If speech parameters of the consumer indicate a certain probability of disease, the consumer can be warned… warning can be in the form of a phone call, an email, a text, or other form of communication. Optionally, the consumer can be prompted to complete a more specific test on the phone. Based on the test results, the consumer is directed for further action…," "…biomarkers described above may be suitably weighted and combined using appropriate statistical, pattern-recognition and/or machine learning techniques prior to making a diagnostic decision. These include, but are not limited to, discriminant analyses, regression, hidden Markov-models, support-vector machines, and neural networks…" (SHRIVASTAV paras. 36-46, 50, 52, Figs. 1, 2A-B, 5-7).

Nevertheless, the above teachings of SHRIVASTAV fail to teach or fairly suggest, either individually or in a reasonable combination, the limitations "the health state classification unit for classifying the received voice samples based on the acoustic features extracted by the voice biomarker extractor, wherein the classification is an emotional state or affective state or physiological state of the user, along with a probability of the classification; and the voice sample scheduler for activating a cued health assessment module when the probability is more than a set threshold, and activating a passive health assessment module when the probability is less than the set threshold," as specifically recited in independent claims 1 and 11. Dependent claims 2-10 and 12-20 further limit allowable independent claims 1 and 11, respectively, and thus would also be allowable over the prior art of record by virtue of their dependency.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Seoane et al. (Seoane F, Mohino-Herranz I, Ferreira J, Alvarez L, Buendia R, Ayllón D, Llerena C, Gil-Pita R. Wearable biomedical measurement systems for assessment of mental stress of combatants in real time. Sensors (Basel). 2014 Apr 22;14(4):7120-41. doi: 10.3390/s140407120. PMID: 24759113; PMCID: PMC4029694), already of record, teaches, see e.g., "…the ATREC project funded by the 'Coincidente' program aims at analyzing diverse biometrics to assess by real time monitoring the stress levels of combatants… project combines multidisciplinary disciplines and fields, including wearable instrumentation, textile technology, signal processing, pattern recognition and psychological analysis of the obtained information… ATREC project is described, including the different execution phases, the wearable biomedical measurement systems, the experimental setup, the biomedical signal analysis and speech processing performed.
The preliminary results obtained from the data analysis collected during the first phase of the project are presented, indicating the good classification performance exhibited when using features obtained from electrocardiographic recordings and electrical bioimpedance measurements from the thorax. These results suggest that cardiac and respiration activity offer better biomarkers for assessment of stress than speech, galvanic skin response or skin temperature when recorded with wearable biomedical measurement systems…" (see e.g., Seoane et al., Abstract). Please see PTO-892 for more details.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Edgar Guerra-Erazo, whose telephone number is (571) 270-3708. The examiner can normally be reached M-F 7:30 a.m.-5:00 p.m. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR X GUERRA-ERAZO/
Primary Examiner, Art Unit 2656
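The limitation the examiner indicated as allowable is, at bottom, a probability-thresholded routing step: classify a voice sample into a health state with an associated probability, then activate a cued assessment above a set threshold and a passive assessment below it. The following minimal sketch makes that control flow concrete; every name and the placeholder classifier here are hypothetical illustrations, not the applicant's actual implementation, which the claims do not prescribe.

```python
from dataclasses import dataclass

# Hypothetical illustration of the routing recited in claims 1 and 11.
# Nothing below comes from the actual specification.

@dataclass
class Classification:
    label: str          # e.g. "depression", "respiratory", or an affective state
    probability: float  # confidence of the classification, in [0.0, 1.0]

def classify(acoustic_features: list[float]) -> Classification:
    # Placeholder for the health state classification unit; a real system
    # would run a trained model over extracted voice biomarkers.
    score = sum(acoustic_features) / max(len(acoustic_features), 1)
    return Classification(label="depression",
                          probability=min(max(score, 0.0), 1.0))

def schedule_assessment(acoustic_features: list[float],
                        threshold: float = 0.7) -> str:
    """Voice sample scheduler: cued assessment above threshold, passive below."""
    result = classify(acoustic_features)
    if result.probability > threshold:
        # Cued path: e.g. a digital voice collection exercise, a digitally
        # administered health survey, or a telehealth session (claim 9).
        return f"cued assessment for {result.label}"
    # Passive path: continue collecting ambient samples in the background.
    return "passive assessment"

print(schedule_assessment([0.9, 0.8, 0.85]))  # -> cued assessment for depression
print(schedule_assessment([0.1, 0.2]))        # -> passive assessment
```

This two-way activation (cued vs. passive, keyed to the classification probability) is what the examiner found missing from SHRIVASTAV, which warns the user or prompts a follow-up test but does not recite the threshold-gated choice between a cued and a passive assessment module.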

Prosecution Timeline

Dec 19, 2023
Application Filed
Jan 24, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602198
SEARCH AND KNOWLEDGE BASE QUESTION ANSWERING FOR A VOICE USER INTERFACE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12591746
LANGUAGE MODEL TUNING IN CONVERSATIONAL ARTIFICIAL INTELLIGENCE SYSTEMS AND APPLICATIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12572565
SEMANTIC CONTENT CLUSTERING BASED ON USER INTERACTIONS FOR CONTENT MODERATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12542134
TRAINING AND USING A TRANSCRIPT GENERATION MODEL ON A MULTI-SPEAKER AUDIO STREAM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12536373
TOKEN OPTIMIZATION IN GENERATIVE LARGE LANGUAGE MODEL LEARNING (LLM) INTERACTIONS
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+15.1%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 796 resolved cases by this examiner. Grant probability derived from career allow rate.
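The note says the grant probability is derived from the career allow rate, and the displayed figures are consistent with a simple additive model: 671/796 resolved cases gives the 84% base, and adding the +15.1% interview lift (capped at 100%) yields the 99% "with interview" figure. A sketch of that presumed derivation follows; the dashboard's actual model is not disclosed, this merely reproduces the displayed numbers.

```python
# Presumed derivation of the headline projections (assumption: simple
# additive model; the dashboard's actual methodology is not disclosed).
career_allow_rate = 671 / 796   # 0.8429... -> displayed as 84%
interview_lift = 0.151          # +15.1 points among resolved cases with interview

base = round(career_allow_rate * 100)                    # 84
with_interview = min(base + interview_lift * 100, 100)   # 84 + 15.1 = 99.1

print(f"Grant probability: {base}%")            # 84%
print(f"With interview: {with_interview:.0f}%")  # 99%
```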

Free tier: 3 strategy analyses per month