Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,007

METHOD OF ENCODING/DECODING SPEECH SIGNAL AND DEVICE FOR PERFORMING THE SAME

Status: Non-Final OA (§102)
Filed: Jun 18, 2024
Examiner: GODBOLD, DOUGLAS
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: UIF (University Industry Foundation), Yonsei University
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 83% (898 granted / 1079 resolved), +21.2% vs TC avg (above average)
Interview Lift: +10.5% on resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 10m average prosecution; 25 applications currently pending
Career History: 1104 total applications across all art units
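The headline rates above are simple arithmetic on the examiner's career counts. A minimal sketch (using only the counts shown on this page) of how the 83% and ~94% figures fall out:

```python
# Career allow rate from the counts shown above.
granted = 898
resolved = 1079
allow_rate = granted / resolved          # ~0.832, the 83% headline figure

# The reported interview lift is +10.5 percentage points on resolved cases
# with an interview; adding it reproduces the ~94% "with interview" figure.
with_interview = allow_rate + 0.105

print(f"allow rate:     {allow_rate:.1%}")      # 83.2%
print(f"with interview: {with_interview:.1%}")  # 93.7% (shown rounded as 94%)
```

Note the "with interview" figure is an additive adjustment to the career rate, not an independently measured rate.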

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 1079 resolved cases.
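Each per-statute delta is the examiner's rate minus the Tech Center average estimate. A quick check (using only the figures in the card above) shows all four rows sit against the same ~40% black-line estimate:

```python
# Examiner per-statute rates (%) and their deltas vs the TC average, as shown above.
examiner = {"101": 15.0, "103": 46.3, "102": 19.6, "112": 8.6}
delta    = {"101": -25.0, "103": 6.3, "102": -20.4, "112": -31.4}

# Back out the Tech Center average from each row: rate - delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute backs out to the same 40.0% estimate
```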

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to correspondence filed 18 June 2024 in reference to application 18/747,007. Claims 1-14 are pending and have been examined.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-14 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Skordilis et al. (US PAP 2021/0074308).

Consider claim 1, Skordilis teaches A method of encoding a speech signal (abstract), the method comprising: outputting, based on a first input speech signal of a previous timepoint and a second input speech signal of a current timepoint, a predicted signal that predicts the second input speech signal from the first input speech signal (0055, LTP engine generates a prediction for a current portion of the signal based on a previous portion of the signal); and obtaining, based on the second input speech signal and the predicted signal, a residual signal by removing a correlation between the first input speech signal and the second input speech signal from the second input speech signal (0057, residual signal encodes what is remaining in the signal after predicted components are removed, i.e. the correlation between the first and second signal).
Consider claim 2, Skordilis teaches The method of claim 1, wherein the first input speech signal has a same signal length as the second input speech signal, and a greatest correlation with the second input speech signal (0055, length is based on pitch period, which is the same as the prediction length in the equation in 0055, where the signal repeats itself, i.e. correlation; also see 0093-97, where correlation is calculated explicitly).

Consider claim 3, Skordilis teaches The method of claim 1, wherein the outputting of the predicted signal comprises: extracting feature information for predicting the second input speech signal, based on the first input speech signal and the second input speech signal (0053-55, 0063-64, various features which may represent the signal); predicting a kernel based on the feature information (equation in 0055, term “g”); and generating the predicted signal based on the kernel and the first input speech signal, wherein the kernel is a weight applied to the first input speech signal when predicting the second input speech signal (equation in 0055, term g is a gain or weight which is applied to the components from the previous portion to predict the current portion).

Consider claim 4, Skordilis teaches The method of claim 3, further comprising outputting a bitstream, wherein the bitstream comprises: a first bitstream encoding the feature information (0054, sending LP coefficients to decoder; 0068-69, feature coding); a second bitstream encoding a delay value (0069, parameters encoded including pitch lag); and a third bitstream encoding the residual signal (0057, sending residual codebook index to decoder; also see 0068-69), wherein the delay value indicates a degree to which the first input speech signal is delayed from the second input speech signal (0093-97, where delay is calculated based on correlation, denoted in 0055 as T, which also corresponds to pitch period, or pitch lag).
Consider claim 5, Skordilis teaches The method of claim 4, wherein the outputting of the bitstream comprises: quantizing the feature information and the residual signal (0069-70, quantizing feature vectors); outputting the first bitstream by encoding quantized feature information (0069-70, quantizing feature vectors and encoding); and generating the third bitstream by encoding a quantized residual signal (0057-58, 0069, codebooks are used to encode the residual signal).

Consider claim 6, Skordilis teaches a method of decoding a speech signal (abstract), the method comprising: receiving bitstreams from an encoder (0058, decoding; 0072, features sent to decoder); outputting, based on a first bitstream and a second bitstream, a predicted signal that predicts a second input speech signal of a current timepoint from a first input speech signal of a previous timepoint (0058, 0101, LTP decoding predicts segments based on LTP features); and outputting a restored speech signal obtained by restoring the second input speech signal, based on the predicted signal and a third bitstream (0058, 0106-0108, generating a predicted speech signal based on LTP decoding and decoding of other parameters such as LP encodings), wherein the first bitstream encodes feature information for predicting the second input speech signal (0057 and 0069, parameters encoded including pitch lag and index needed for LTP decoding), wherein the second bitstream encodes a delay value indicating a degree to which the first input speech signal is delayed from the second input speech signal (0093-97, where delay is calculated based on correlation, denoted in 0055 as T, which also corresponds to pitch period, or pitch lag), and wherein the third bitstream encodes a residual signal obtained by removing a correlation between the first input speech signal and the second input speech signal from the second input speech signal (0057, sending residual codebook index to decoder; also see 0068-69).
Consider claim 7, Skordilis teaches The method of claim 6, wherein the first input speech signal has a same signal length as the second input speech signal, and a greatest correlation with the second input speech signal (0055, length is based on pitch period, which is the same as the prediction length in the equation in 0055, where the signal repeats itself, i.e. correlation; also see 0093-97, where correlation is calculated explicitly).

Consider claim 8, Skordilis teaches The method of claim 6, wherein the outputting of the predicted signal comprises: obtaining the first input speech signal based on the second bitstream (0058, 0101, LTP decoding predicts frames based on LTP features including pitch lag); and generating the predicted signal based on the first bitstream and the first input speech signal (0058, 0101, LTP decoding predicts segments based on LTP features and previous segments).

Consider claim 9, Skordilis teaches The method of claim 1, wherein the outputting of the predicted signal comprises: predicting a kernel based on the first bitstream (equation in 0055, term “g”); and generating the predicted signal based on the kernel and the first input speech signal, wherein the kernel is a weight applied to the first input speech signal when predicting the second input speech signal (equation in 0055, term g is a gain or weight which is applied to the components from the previous portion to predict the current portion; also used in decoding, 0058).
Consider claim 10, Skordilis teaches A device for encoding a speech signal, the device comprising: a memory configured to store one or more instructions (0168, memories); and a processor configured to execute the one or more instructions (0169, processors), wherein, when the one or more instructions are executed, the processor is configured to perform a plurality of operations, wherein the plurality of operations comprises: outputting, based on a first input speech signal of a previous timepoint and a second input speech signal of a current timepoint, a predicted signal that predicts the second input speech signal from the first input speech signal (0055, LTP engine generates a prediction for a current portion of the signal based on a previous portion of the signal); and obtaining, based on the second input speech signal and the predicted signal, a residual signal by removing a correlation between the first input speech signal and the second input speech signal from the second input speech signal (0057, residual signal encodes what is remaining in the signal after predicted components are removed, i.e. the correlation between the first and second signal).

Claim 11 contains similar limitations as claim 2 and is therefore rejected for the same reasons. Claim 12 contains similar limitations as claim 3 and is therefore rejected for the same reasons. Claim 13 contains similar limitations as claim 4 and is therefore rejected for the same reasons. Claim 14 contains similar limitations as claim 5 and is therefore rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Onjanpera (US Patent 7,933,767) teaches a similar method of encoding audio signals.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS C GODBOLD whose telephone number is (571)270-1451. The examiner can normally be reached 6:30am-5pm Monday-Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders, can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DOUGLAS GODBOLD
Examiner, Art Unit 2655
/DOUGLAS GODBOLD/
Primary Examiner, Art Unit 2655
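The anticipation rejection turns on Skordilis's long-term prediction (LTP) scheme: per the cited ¶0055, the current segment is predicted as a gain g times the signal a pitch lag T samples back, and per ¶0057 the residual is what remains after that prediction is removed. The sketch below is purely illustrative of that encode/decode loop on a synthetic signal; the names g and T follow the terms the Office Action cites, and none of this is code from the reference:

```python
import numpy as np

# Synthetic "speech" with a repeating pitch period of T samples.
T = 40                            # pitch lag (the cited equation's T / pitch period)
g = 0.9                           # LTP gain (the cited equation's g)
n = np.arange(400)
x = np.sin(2 * np.pi * n / T)     # perfectly periodic toy signal

# Encoder side: predict each sample from the signal T samples back,
# then keep only the residual (what the prediction failed to capture).
pred = np.zeros_like(x)
pred[T:] = g * x[:-T]
residual = x - pred

# Decoder side: rebuild the signal from the residual plus the same prediction,
# fed back from already-decoded samples.
y = np.empty_like(x)
y[:T] = residual[:T]
for i in range(T, len(x)):
    y[i] = residual[i] + g * y[i - T]

assert np.allclose(y, x)          # lossless round trip when the residual is kept exactly
```

In a real codec the residual would then be quantized via a codebook (¶0057), so the round trip is lossy; the structure of prediction, residual, and reconstruction is what the rejection maps onto claims 1 and 6.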

Prosecution Timeline

Jun 18, 2024: Application Filed
Feb 23, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585879: ARTIFICIAL INTELLIGENCE ASSISTED NETWORK OPERATIONS REPORTING AND MANAGEMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579371: USING MACHINE LEARNING TO GENERATE SEGMENTS FROM UNSTRUCTURED TEXT AND IDENTIFY SENTIMENTS FOR EACH SEGMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579372: KEY PHRASE TOPIC ASSIGNMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579383: VERIFYING TRANSLATIONS OF SOURCE TEXT IN A SOURCE LANGUAGE TO TARGET TEXT IN A TARGET LANGUAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572749: Compressing Information Provided to a Machine-Trained Model Using Abstract Tokens (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 94% (+10.5%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1079 resolved cases by this examiner. Grant probability derived from career allow rate.
