Prosecution Insights
Last updated: April 19, 2026
Application No. 18/529,903

CHRONIC PULMONARY DISEASE PREDICTION FROM AUDIO INPUT BASED ON INHALE-EXHALE PAUSE SAMPLES USING ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §103, §112
Filed
Dec 05, 2023
Examiner
JANG, ELINA SOHYUN
Art Unit
3791
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (58 granted / 85 resolved; -1.8% vs TC avg), above average
Interview Lift: strong, +42.0% among resolved cases with interview
Typical Timeline: 3y 3m avg prosecution; 23 applications currently pending
Career History: 108 total applications across all art units
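The headline figures in this panel reduce to simple arithmetic on the underlying counts. A minimal sketch, assuming only the counts stated above (58 granted of 85 resolved); the function names and the percentage-point definition of "lift" are illustrative assumptions, not the tool's documented method:

```python
# Sketch: deriving the panel's headline stats from raw counts.
# The 58/85 counts come from the panel above; everything else here
# (function names, the lift definition) is an illustrative assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate, as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Lift in percentage points for cases that had an examiner interview."""
    return rate_with_interview - rate_without

career = allow_rate(58, 85)
print(f"Career allow rate: {career:.0f}%")  # Career allow rate: 68%
```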

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 29.1% (-10.9% vs TC avg)
Deltas measured against an estimated Tech Center average • Based on career data from 85 resolved cases
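The "vs TC avg" deltas imply a Tech Center baseline rate for each statute. A minimal sketch recovering those implied baselines; the dict literals simply restate the figures shown above, and the subtraction is the only logic (this is an illustration, not the tool's actual computation):

```python
# Recover the implied Tech Center average rate per statute from the
# examiner's statute-specific rate and the "vs TC avg" delta shown above.
examiner_rate = {"101": 11.0, "103": 36.8, "102": 17.3, "112": 29.1}
delta_vs_tc = {"101": -29.0, "103": -3.2, "102": -22.7, "112": -10.9}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Each statute's delta backs out to the same 40.0% baseline, consistent with a single Tech Center average estimate.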

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are hereby under examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 of the subject matter eligibility test (see MPEP 2106.03): Claims 1-17 and 20 are directed to a “device”, which describes one of the four statutory categories of patentable subject matter, i.e., a machine. Claims 18-19 are directed to a “method”, which describes one of the four statutory categories of patentable subject matter, i.e., a process.

Step 2A of the subject matter eligibility test (see MPEP 2106.04)

Prong one: Claims 1-20 recite an abstract idea, as follows: receive an audio input associated with a user… determine a first set of inhale-exhale pause samples… wherein each inhale-exhale pause sample of the determined first set of inhale-exhale pause samples corresponds to a time interval between consecutive inhale and exhale breathlessness samples; select an inhale-exhale pause sample from the first set of inhale-exhale pause samples… generate a flow volume curve associated with the selected inhale-exhale pause sample…; determine one or more voice spirometer parameters based on the generated flow volume curve; and render the determined one or more voice spirometer parameters on a display device...
(claim 1; claims 18 and 20 recite similar claim limitations) Based on the broadest reasonable interpretation, receiving an audio input associated with a user, determining a first set of inhale-exhale pause samples, selecting an inhale-exhale pause sample from the first set of inhale-exhale pause samples, generating a flow volume curve associated with the selected inhale-exhale pause sample, and determining one or more voice spirometer parameters can be done mentally with the aid of a pen and paper. A person, when given the audio input in the form of a graph or data table, can read it and determine the inhale-exhale pause samples graphically or mathematically, select an inhale-exhale pause sample by observing the data, generate a flow volume curve associated with the selected inhale-exhale pause sample by drawing it on paper, and determine one or more voice spirometer parameters by writing them on paper.

Prong two: Claims 1-20 do not include additional elements that integrate the abstract idea into a practical application. The additional elements are as follows: circuitry (claim 1); a non-transitory computer-readable medium (claim 20). Reciting a computer or computer components (circuitry and a non-transitory computer-readable medium) simply amounts to reciting a generic processor to perform the above mental processes of receiving an audio input associated with a user, determining a first set of inhale-exhale pause samples, selecting an inhale-exhale pause sample from the first set of inhale-exhale pause samples, generating a flow volume curve associated with the selected inhale-exhale pause sample, and determining one or more voice spirometer parameters; such recitations are mere instructions to apply the judicial exception using generic technology.
Such elements do not integrate the exception into a practical application since they are merely instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.04(d) and MPEP 2106.05(f). Reciting circuitry, a non-transitory computer-readable medium, “apply a generative adversarial network (GAN) model on the selected inhale-exhale pause sample”, or “apply an Artificial Intelligence (AI) model” does not integrate the exception into a practical application since each is merely insignificant extra-solution activity appended to the judicial exception, e.g., simply outputting the results of the algorithm in a high-level implementation. Therefore, claims 1-20 are ineligible at step 2A, prong two.

Step 2B of the subject matter eligibility test (see MPEP 2106.05)

Reciting a computer or computer components (circuitry and a non-transitory computer-readable medium) simply amounts to reciting a generic processor to perform the above mental processes of receiving an audio input associated with a user, determining a first set of inhale-exhale pause samples, selecting an inhale-exhale pause sample from the first set of inhale-exhale pause samples, generating a flow volume curve associated with the selected inhale-exhale pause sample, and determining one or more voice spirometer parameters; such recitations are mere instructions to apply the judicial exception using generic technology.
Such elements do not qualify as significantly more because this limitation simply appends well-understood, routine and conventional activities previously known in the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014)) and/or a claim to an abstract idea requiring no more than being stored on a computer readable medium, which is a well-understood, routine and conventional activity previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014); SAP Am. v. InvestPic, 890 F.3d 1016 (Fed. Cir. 2018)). Reciting circuitry, a non-transitory computer-readable medium, “apply a generative adversarial network (GAN) model on the selected inhale-exhale pause sample”, or “apply an Artificial Intelligence (AI) model” does not amount to significantly more since each is merely insignificant extra-solution activity appended to the judicial exception, e.g., simply outputting the results of the algorithm in a high-level implementation. In view of the above, the additional elements individually do not integrate the exception into a practical application and do not amount to significantly more than the above judicial exception (the abstract idea). Looking at the limitations as an ordered combination (that is, as a whole) adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, for example, or improves any other technology.
There is no indication that the combination of elements permits automation of specific tasks that previously could not be automated. There is no indication that the combination of elements includes a particular solution to a computer-based problem or a particular way to achieve a desired computer-based outcome. Rather, the collective functions of the claimed invention merely provide conventional computer implementation, i.e., the computer is simply a tool to perform the process. Therefore, claims 1-20 are ineligible at step 2B.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The following terms are lacking in written description:

“Artificial Intelligence (AI) model” (claims 1-4 and 18-20)
“generative adversarial network (GAN) model” (claims 1, 18 and 20)
“attention-based recurrent neural network (RNN) model” (claim 6)
“multi-time frequency generative adversarial network (MTFGAN) model” (claim 6)
“gated recurrent unit (GRU) model” (claim 7)
“generator model” (claim 9)
“discriminator model” (claim 9)
“geometric graph autoencoder (GGAE) model” (claim 11)
“singular value decomposition (SVD) model” (claim 13)

Paragraphs [0042]-[0052] merely describe the above listed models as “Details related to the [models] are similar to the details of the neural network of the AI model 112A. the details related to the [models] are skipped for the sake of brevity of the disclosure”. However, the specification not only does not clearly disclose how the AI model 112A is similar to all of the different algorithms of the models, but also does not clearly disclose the AI model 112A itself. Paragraphs [0037]-[0039] broadly list generic algorithms and functions of the AI model, but the specific details are lacking. Consequently, the lack of clear disclosure of the algorithms of the models raises doubt as to possession of the claimed invention at the time of filing. Claims 5, 8, 10, 12, and 14-17 are rejected based on their dependencies on the rejected claims.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The term “breathlessness” in claims 1, 6, 10, 18 and 20 is used by the claims to mean “pause between breaths,” while the accepted meaning is “not getting enough air.” The term is indefinite because the specification does not clearly redefine the term. For the purpose of examination, “breathlessness” is interpreted as a pause between breaths. The meaning of every term used in a claim should be apparent from the prior art or from the specification and drawings at the time the application is filed. See MPEP 2173.05.

The following terms are indefinite because the specification does not clearly define these terms:

“Artificial Intelligence (AI) model” (claims 1-4 and 18-20)
“generative adversarial network (GAN) model” (claims 1, 18 and 20)
“attention-based recurrent neural network (RNN) model” (claim 6)
“multi-time frequency generative adversarial network (MTFGAN) model” (claim 6)
“gated recurrent unit (GRU) model” (claim 7)
“generator model” (claim 9)
“discriminator model” (claim 9)
“geometric graph autoencoder (GGAE) model” (claim 11)
“singular value decomposition (SVD) model” (claim 13)

Examiner understands terms such as “attention-based recurrent neural network” are terms of the art.
However, it is unclear what an attention-based recurrent neural network model is. The specification discloses in [0043], “the attention-based RNN model 112C may be a neural network. Details related to the ML model and the neural network associated with the attention-based RNN model 112C are similar to the details of the ML model and the neural network of the AI model 112A. Hence, the details related to the ML model and the neural network of the attention-based RNN model 112C are skipped for the sake of brevity of the disclosure”, and the disclosure of “AI model 112A” does not clearly define the term RNN model. All other terms are written similarly in the specification and are lacking in definiteness. For the purpose of examination, the above models are interpreted as computer algorithms.

The terms “high level” and “low level” in claims 7 and 15 are relative terms which render the claims indefinite. The terms “high level” and “low level” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. As such, the types of the features are rendered indefinite. For the purpose of examination, “high level features” and “low level features” are simply interpreted as features. Claims 5, 8, 12, 14 and 16-17 are rejected based on their dependencies on the rejected claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 4, 18, and 20 are rejected under 35 U.S.C. 
103 as being unpatentable over Trivedy, Sudipto, et al., "Microphone based Smartphone enabled Spirometry Data Augmentation using Information Maximizing Generative Adversarial Network", 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2020-05-25, pages 1-6, XP033785947, cited by Applicant and hereinafter referred to as Trivedy, and in view of US20210020191A1 (Venneti et al.), hereinafter referred to as Venneti. Claims 1-2, 4, 11, 18 and 20 are rejected as best understood under the 112(b) rejection above.

As to claims 1-2, 4-5, 8, 10-12, 15-18, and 20, Trivedy teaches an electronic device, comprising: circuitry configured to: receive an audio input associated with a user (Trivedy, pg. 1, “built-in mobile microphone”); select an inhale-exhale pause sample from the first set of inhale-exhale pause samples; apply a generative adversarial network (GAN) model on the selected inhale-exhale pause sample (Trivedy, pg. 1, “an information maximizing generative adversarial network (InfoGAN) model has been proposed that can learn the statistical characteristics of the spirometric data and generate similar sort of data to augment the spirometric dataset”); generate a flow volume curve associated with the selected inhale-exhale pause sample based on the application of the GAN model (Trivedy, Fig. 4, flow rate vs. time); determine one or more voice spirometer parameters based on the generated flow volume curve (Trivedy, pg. 2, “FEV1, FVC, FEV1/FVC% and PEF”); and render the determined one or more voice spirometer parameters on a display device associated with the electronic device (Trivedy, pg. 2, “display flow-rate versus time curve and spirometric measurements”). However, Trivedy does not teach the specific details of applying an AI model on the received input, or determining a first set of inhale-exhale pause samples, wherein the pause samples correspond to a time interval between consecutive inhale and exhale samples.
Venneti teaches a relevant art of diagnosing COPD using sound data (Venneti, [0014], “Voice Profiler may predict…chronic obstructive pulmonary disease). Venneti teaches: apply an Artificial Intelligence (AI) model on the received audio input (Venneti, [0011], "Various machine learning segmentation models may be applied to the audio portions to extract various types of segments isolated from background noise present in the input audio data.”); determine a first set of inhale-exhale pause samples based on the application of the Al model (Venneti, [0011], "The pause segments include inhale and exhale audio with background noise."), wherein each inhale-exhale pause sample of the determined first set of inhale-exhale pause samples corresponds to a time interval between consecutive inhale and exhale breathlessness samples (Venneti, [0011], “It is understood that extracted segments are homogeneous regions in the input audio data”). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Trivedy in view of Venneti to include the details of applying AI model on the received input, determining a first set of inhale-exhale pause samples, wherein pause samples corresponds to a time interval between consecutive inhale and exhale samples because Trivedy recognizes the need of converting audio samples into related spirometry data (Trivedy, The audio samples are nothing but the measure of pressure and these pressure values need to be converted to an approximation of flow”), and Venneti supplies the improved method of doing so using an AI model, which increases the accuracy of the audio data. As to claim 2, Trivedy does not teach the details of the AI model. However, Venneti teaches that the circuitry is further configured to: denoise the received audio input, wherein the application of Al model is further based on the denoised audio input (Venneti, [0050], "“de-noised” positive segment"). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Trivedy in view of Venneti to include Al model is further based on the denoised audio input as doing so would increase the accuracy of the audio data. As to claims 3 and 19, Trivedy-Venneti teaches the circuitry is further configured to: receive a set of audio samples associated with a set of users; extract a set of audio features associated with each audio sample of the set of audio samples (Trivedy, pg. 2, “Spirometry sound data, recorded from a built-in mobile microphone using 16-bits per sample with a sampling rate of 44.1 kHz, have been analyzed in MATLAB to know the frequency content.”); However, Trivedy does not teach determining a threshold frequency or training the AI model. Venneti teaches: determine a threshold frequency associated with each audio sample of the set of audio samples (Venneti, [0069], "The first inhale threshold differentiates between portions of the input pause segment 520 which correspond to inhalation and other audio in the pause segment 520"); and train the Al model on the extracted set of audio features and on the determined threshold frequency associated with each audio sample of the set of audio samples, wherein the trained Al model is applied on the received audio input (Venneti, [0069], "identified as representing audio of an inhalation to be extracted from the input pause segment 520 and isolated from the background noise present in the pause segment 520"). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Trivedy in view of Venneti to include the details of determining a threshold frequency or training the AI model because doing so are further details of modifying the received audio data to make it useable for analysis that improves the accuracy of the output audio features. As to claim 5, Trivedy does not teach the details of the AI model. However, Venneti teaches: the circuitry is further configured to: determine an energy associated with each inhale-exhale pause sample of the first set of inhale-exhale pause samples; and rank each inhale-exhale pause sample of the first set of inhale-exhale pause samples based on determined energy, wherein the inhale-exhale pause sample is selected from the first set of inhale- exhale pause samples based on the ranking (Venneti, [0053], "The Voice Profiler selects a phonation candidate region that represents a highest total energy (i.e. duration & energy) and determines an energy threshold based on an average of audio frame energies in the selected phonation candidate region. The Voice Profiler fine-tunes selected phonation candidate region starting point based on the energy threshold."). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Trivedy in view of Venneti to include determining energy associated with each inhale-exhale pause sample and rank each inhale-exhale pause sample of the first set of inhale-exhale pause samples based on determined energy as Trivedy already recognizes the need to include determining energy (Trivedy, pg. 2, “the power spectral density”), and Venneti supplies the details of further use of the energy using an AI model. 
As to claim 10, Trivedy-Venneti teaches the determined one or more voice spirometer parameters is at least one of a forced expiratory flow (FEF), a forced expiratory volume (FEV), a forced vital capacity (FVC), a pulmonary function value (PFV), a total lung capacity (TLC), a ratio of FEV to FVC, or breathlessness data (Trivedy, pg. 3, “FEV1 is calculated from the flow-rate values”).

As to claim 11, Trivedy-Venneti teaches the circuitry is further configured to: apply a geometric graph autoencoder (GGAE) model on the generated flow volume curve; and determine a breathing condition based on the application of the geometric graph autoencoder model (Trivedy, pg. 2, "flow-rate versus time curve and spirometric measurements, like FEV1, FVC, FEV1/FVC%").

As to claim 12, Trivedy-Venneti teaches the breathing condition is at least one of an obstructive breathing condition, a restrictive breathing condition, a pulmonary fibrosis breathing condition, or a normal breathing condition (Trivedy, pg. 6, "The developed spirometer, for home use, helps in close monitoring of lung functions").
As to claims 15-17, Trivedy-Venneti teaches the circuitry configured to: determine one or more frequency domain representations of the received audio input; determine a set of audio features based on the determined one or more frequency domain representations including Mel frequency cepstral coefficients (Venneti, [0065], "Spectral domain features are based on properties measurable in the frequency domain, such as, for example: Mel-frequency cepstral coefficient"); extract a set of low-level and a set of high-level features associated with the received audio input based on the determined set of audio features; determine a correlation of each feature of the set of low level and a set of high level features with other features of the set of low level and a set of high level features; select a set of correlated features based on the determined correlation; apply a transformer encoder on the selected set of correlated features; and determine a vocal disorder based on the application of the transformer encoder including mild, moderate or severe COPD (Venneti, [0014], "According to various embodiments, the Voice Profiler may predict the physical state of the speaker whereby the physical state may be related to a degree of lung conditioning for athletic performance, the presence of a mental health condition (such as stress, anxiety, depression), asthma, chronic obstructive pulmonary disease (COPD)."). Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Trivedy in view of Venneti as applied to claim 1 above, and further in view of US20220103922A1 (Jumbe et. al), hereto referred as Jumbe. Claim 1 is taught as above. Claims 6 and 8 are rejected as best understood under the 112b rejection above. As to claim 6, Trivedy teaches the circuitry is further configured to: pre-process the generated optimized signal based on normalization, wherein the flow volume curve is generated further based on the pre-processing (Trivedy, pg. 
4, "Adam optimizer and batch normalization are used for most of the layers.") However, Trivedy does not teach the details of the AI models. Venneti teaches that the circuitry is further configured to: apply an attention-based recurrent neural network (RNN) model on the selected inhale-exhale pause sample (Venneti, [0011], "The pause segments include inhale and exhale audio with background noise."); determine a breathlessness signal based on the application of the low pass filter (Venneti, [0011], "During a first level of machine learning segmentation, the Voice Profiler extracts de-noised voiced segments, de-noised forced exhale segments, pause segments and inhale-background segments. The pause segments include inhale and exhale audio with background noise."); It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Trivedy in view of Venneti to include applying RNN and determine the breath signal based on the low pass filter because doing so would increase the accuracy of the breath pause samples. Trivedy-Venneti does not teach that the circuitry is further configured to determine an adaptive cutoff-frequency and determine multiple time-frequency spectrums. Jumbe teaches relevant art of characterizing health conditions from audio data (Jumbe, abstract). 
Jumbe teaches a circuitry is further configured to: determine an adaptive cutoff-frequency based on the application of the attention-based RNN model; apply a low pass filter of the determined adaptive cutoff-frequency on the selected inhale-exhale pause sample based on the determined adaptive cutoff- frequency (Jumbe, [0296], "The first stage 2412 may also include a first order low pass filter with cutoff frequency at about 15 kHz-20 kHz"); determine multiple time-frequency spectrums based on the determined breathlessness signal; apply a multi-time frequency generative adversarial network (MTFGAN) model on the determined multiple time-frequency spectrums; generate an optimized signal based on the application of the MTFGAN model (Jumbe, [0512], "the time-frequency points"). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Trivedy-Venneti in view of Jumbe to include a further configured to determine an adaptive cutoff-frequency and determine multiple time-frequency spectrums, which are known technique of processing data to make it useable for analysis, and Trivedy already recognizes such need (Trivedy, pg. 3, “Fast Fourier Transform (FFT)”). As to claim 8, Trivedy-Venneti does not teach that the multiple time-frequency spectrums is determined based on at least one of adaptive time-frequency transform, TFD-Based Quantification, Short time Fourier transform (STFT), pseudo-Wigner distribution, and discrete or continuous wavelet transform. However, Jumbe teaches multiple time-frequency spectrums is determined based on at least Short time Fourier transform (Jumbe, [0511], "The spectrogram, computed for the input audio by short-time Fourier transform"). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Trivedy-Venneti to include STFT because doing so is a known substitute for the processing already performed by Trivedy (Trivedy, pg. 3, “Fast Fourier Transform (FFT)”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELINA S JANG whose telephone number is (571) 272-7019. The examiner can normally be reached M-F 9:00 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Robertson, can be reached at (571) 272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ELINA SOHYUN JANG/
Examiner, Art Unit 3791

/JENNIFER ROBERTSON/
Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Dec 05, 2023
Application Filed
Feb 24, 2026
Non-Final Rejection — §101, §103, §112 (current)
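Given the Dec 05, 2023 filing date and the examiner's 3y 3m median pendency, a rough grant-date projection is simple date arithmetic. A minimal sketch, assuming a naive add-months helper; this is not the tool's actual projection method:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month kept; naive)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

filed = date(2023, 12, 5)            # Application filed Dec 05, 2023
projected = add_months(filed, 39)    # 3y 3m = 39 months median pendency
print(projected)                     # 2027-03-05
```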

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593998
DEVICE, APPARATUS AND METHOD OF DETERMINING SKIN PERFUSION PRESSURE
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12575792
DENOISING SENSED SIGNALS FROM ARTIFACTS FROM CARDIAC SIGNALS
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12564339
AUTOREGULATION MONITORING USING DEEP LEARNING
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12525330
METHOD OF DETERMINING A BOLUS TO BE ADMINISTERED BY AN INSULIN DELIVERING DEVICE
Granted Jan 13, 2026 • 2y 5m to grant
Patent 12521040
WEARABLE ACTIVITY PARAMETER COLLECTING DEVICE AND MOUNTING UNIT THEREFOR
Granted Jan 13, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 99% (+42.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 85 resolved cases by this examiner. Grant probability derived from career allow rate.
