Prosecution Insights
Last updated: April 19, 2026
Application No. 16/710,576

PREDICTION OF OBJECTIVE VARIABLE USING MODELS BASED ON RELEVANCE OF EACH MODEL

Final Rejection §103
Filed: Dec 11, 2019
Examiner: LEY, SALLY THI
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 8 (Final)
Grant Probability: 15% (At Risk)
OA Rounds: 9-10
To Grant: 3y 10m
With Interview: 44%

Examiner Intelligence

Grants only 15% of cases.
Career Allow Rate: 15% (5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% (strong; measured across resolved cases with interview)
Avg Prosecution: 3y 10m (typical timeline)
Career History: 68 total applications across all art units; 35 currently pending
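The headline figures above fit together with simple arithmetic. A minimal sketch, assuming (this is an assumption, not stated by the report) that the "44% With Interview" card is just the baseline career allow rate plus the interview lift, rounded:

```python
# Hypothetical reconstruction of the dashboard arithmetic above.
# The input figures come from the report; the combining formula is an assumption.

granted, resolved = 5, 33
career_allow_rate = granted / resolved   # ~0.1515 -> "15% Career Allow Rate"
interview_lift = 0.288                   # "+28.8% Interview Lift"

# Assumed: "With Interview" = baseline allow rate + interview lift, rounded.
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))    # 15
print(round(with_interview * 100))       # 44
```

Both rounded values match the cards shown, which suggests the dashboard derives the interview figure additively rather than from a separate cohort rate.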

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 33 resolved cases
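As a consistency check, every statute row above backs out the same Tech Center average. A small sketch, assuming the delta convention is simply (examiner allow rate) minus (TC average):

```python
# Back out the implied TC average from each statute row in the table above.
# Assumption: delta = examiner_rate - tc_average, with one common TC average.

examiner_rate = {"101": 29.2, "103": 50.2, "102": 10.8, "112": 9.8}
delta_vs_tc   = {"101": -10.8, "103": +10.2, "102": -29.2, "112": -30.2}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

# Every statute implies the same 40.0% TC average estimate.
print(implied_tc_avg)
```

That all four rows resolve to one value (40.0%) supports reading the deltas as offsets from a single Tech Center average estimate rather than per-statute baselines.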

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the communication filed on 15 Dec 2025. Claims 1-20 are being considered on the merits.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 4-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2024/0331882 A1; hereinafter, “Kim”) in view of Quatieri et al. (US 10127929 B2; hereinafter, “Quatieri”), and further in view of Lee, Chang-Young (“A Study on the Optimal Mahalanobis Distance for Speech Recognition.” (2006); hereinafter, “Lee”).

Regarding claims 1, 11, and 16, Kim teaches: A computer-implemented method for speech recognition, comprising: (Kim, para.
0137: “Computing device 1000 may have a speech recognition component 1023 that may generate speech recognition results for a speech data item as described above”) An apparatus for speech recognition, comprising: a programmable circuitry, and one or more computer readable mediums collectively including instructions that, in response to being executed by the programmable circuitry, cause the programmable circuitry to: (Kim, para. 0027 and 0029: “Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having program code embodied thereon.” “Modules may also be implemented in software for execution by various types of processors. An identified module of program code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.”) A computer program product including one or more computer readable storage mediums collectively storing program instructions that are executable by a processor or programmable circuitry to cause the processor or the programmable circuitry to perform operations for speech recognition comprising: (Kim, para. 0008 and 0034: “Computer program products comprising a computer readable storage medium are presented. 
In certain embodiments, a computer readable storage medium stores computer usable program code executable to perform operations for medical assessment based on voice. In some embodiments, one or more of the operations may be substantially similar to one or more steps described above with regard to the disclosed apparatuses, systems, and/or methods.” “Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.”) calculating, for each of a plurality of speech recognition models (Kim, para. 
0043: “ A voice module 104 may extract one or more verbal queues and/or features and pass the extracted verbal queues and/or features to one or more machine learning models trained for a certain disease and/or other medical condition.” Kim teaches a plurality of speech models for each disease or medical condition) , a relevance of an output of the model with respect to a value of an objective speech recognition variable based on the value of the objective speech recognition variable and the output of the model in the past, (Kim, para. 0148: “By storing a history of a user's responses (e.g., baseline response data, test case response data, assessments, scores, or the like), in certain embodiments, the response module 1104 may enable the detection module 1106 to dynamically assess a medical condition for the user in response to a medical event.” Kim teaches time series data in the form of at least two data points consisting of a baseline response data and a response data). each of the plurality of speech recognition models (Kim, para. 0043: “ A voice module 104 may extract one or more verbal queues and/or features and pass the extracted verbal queues and/or features to one or more machine learning models trained for a certain disease and/or other medical condition.”) used to identify words or phrases included in an analyzed portion of speech for determining an existence of an illness (Kim, para. 0069: “The performance of medical condition classifier 240 may depend on the features computed by acoustic feature computation component 210 and language feature computation component 230. Further, a set of features that performs well for one medical condition may not perform well for another medical condition. For example, word difficulty may be an important feature for diagnosing Alzheimer's disease but may not be useful for determining if a person has a concussion. 
For another example, features relating to the pronunciation of vowels, syllables, or words may be important for Parkinson's disease but may be less important for other medical conditions. Accordingly, techniques are needed for determining a first set of features that performs well for a first medical condition, and this process may need to be repeated for determining a second set of features that performs well for a second medical condition.”) as time series data spanning at least days which reduces system cost and complexity of the speech recognition models (Kim, para. 0148 and 0085: “By storing a history of a user's responses (e.g., baseline response data, test case response data, assessments, scores, or the like), in certain embodiments, the response module 1104 may enable the detection module 1106 to dynamically assess a medical condition for the user in response to a medical event.” “A single training corpus may contain speech data relating to multiple medical conditions, or a separate training corpus may be used for each medical condition (e.g., a first training corpus for concussions and a second training corpus for Alzheimer's disease). A separate training corpus may be used for storing speech data for people with no known or diagnosed medical condition, as this training corpus may be used for training models for multiple medical conditions.” Kim teaches time series data in the form of at least two data points consisting of a baseline response data and a response data as well as a training corpus of a plurality of speech data storing speech data). calculating, for each of the plurality of speech recognition models, similarities between a current timing and a plurality of past timings based on the output of the model at the current timing, the output of the model at the plurality of past timings, and the relevance, (Kim, para. 
0067: “ Language feature computation component 230 may receive speech recognition results from speech recognition component 220, and process the speech recognition results to determine language features, such as any of the language features described herein. The speech recognition results may be in any appropriate format and include any appropriate information. For example, the speech recognition results may include a word lattice that includes multiple possible sequences of words, information about pause fillers, and the timings of words, syllables, vowels, pause fillers, or any other unit of speech.”) by weighting similarities based on the differences between the current timing and the plurality of past timings using an attenuation coefficient and… (Kim, para. 0095-0099 teaches comparison of folds and statistics where the differences indicate stability removing less relevant data i.e. the unstable data). predicting, by a speech recognition variable prediction computing device, the value of the objective speech recognition variable at a target timing based on the similarities; (Kim, para. 0092 and 0093: “Feature selection score computation component 520 may compute a selection score for a feature using the pairs of feature values and diagnosis values. Feature selection score computation component 520 may compute any appropriate score that indicates a pattern or correlation between the feature values and the diagnosis values. For example, feature selection score computation component 520 may compute a Rand index, an adjusted Rand index, mutual information, adjusted mutual information, a Pearson correlation, an absolute Pearson correlation, a Spearman correlation, or an absolute Spearman correlation.” “The selection score may indicate the usefulness of the feature in detecting a medical condition. 
For example, a high selection score may indicate that a feature should be used in training the mathematical model, and a low selection score may indicate that the feature should not be used in training the mathematical model.” Examiner notes that Kim teaches a pattern or correlation which indicates a similarity in behavior or movement). outputting, by a speaker, an acoustic reply, determined based on the objective speech recognition variable, (Kim, para. 0043: “ A voice module 104 may interact with a user, asking questions verbally, recording the user's vocal responses, determining whether a response is accurate, or the like. For certain protocols, a voice module 104 may ask one or more questions multiple times (e.g., two times, three times, or the like) before moving on to a subsequent question, or the like. Based on the voice audio data, a voice module 104 may assess and/or diagnose one or more diseases or other medical conditions (e.g., concussion, depression, stress, stroke, cognitive well-being, mood, honesty, Alzheimer's disease, Parkinson's disease, cancer, or the like). For example, after audio is captured, a voice module 104 may score responses (e.g., by a device voice module 104a on a hardware device 102) and provide the initial one or more scores to a user, and the audio may be further analyzed (e.g., by a backend voice module 104b on a server device 108) and a secondary score may be provided to a user with respect to concussion and/or another specific disease or medical condition”) that determines a progression of an illness in response to different treatments (Kim, para. 0159: “ In certain embodiments, a plurality of voice modules 104 may be configured to perform one or more medical trials with users comprising medical trial participants (e.g., determining the efficacy of a medical treatment based on an analysis of voice data from participants). 
In such embodiments, the detection module 1106 may determine an assessment of an efficacy of a medical treatment for the medical condition associated with the medical trial. For example, users, as medical trial participants, may be divided into at least a placebo group that doesn't receive the medical treatment and a different group that receives the medical treatment, or into multiple groups that receive different medical treatments, or the like.”) Kim does not explicitly disclose, however Quatieri teaches: integrating speech recognition prediction results determined by the plurality of speech recognition models that each determine effects of different treatments to the illness as different feature value outputs (Quatieri, para. 0019: “Model-based feature extraction serves several purposes: (1) It can provide greater insight into the system under investigation. Superior insight can enable targeted treatments, or approaches to task performance (e.g., methods of effectively training an individual, treating different aspects of a disorder, monitoring the effectiveness of an intervention) (2) It can support simulation of interventions and performance/ risk reduction.”) into a single speech recognition prediction output based on the objective speech recognition variable, (Quatieri, para. 0030: “The full set of patterns (Θk, γk, z) for all k subjects and z disorders are processed in a final step by a machine learning algorithm or ensemble of algorithms, M. M may be a Gaussian mixture model, a deep neural network, a support vector machine [33], or a random forest to name a few examples”) the single speech recognition prediction output represented as speech recognition time series data (Quatieri, para. 0033: “ The mathematical framework of one embodiment is in FIG. 3. In our frame work, a neurophysiological model is parameterized by a set of control variables 301 represented as Θ. 
These parameters are input to the neurophysiological computational model 303 which generates a predicted time series 305. The predicted time series is compared against measured values 307 from the subject, and the difference 309 is used in an inverse model 311 to update the model's estimates of the internal parameters 313” Examiner notes that Quatieri teaches time series data output) spanning at least days (Quatieri, para. 0027: “In subsequent use of the system to analyze a specific individual, data is collected at 203 using the same protocol 201 to extract the target data at 205 to be applied to the neural computational model 207. The data derived from that model is similarly converted at 209 if the library at 211 is based on such converted data. The data of the test individual is then compared to the stored data for individuals of known disorders according to the appropriate model at 211 to obtain a prediction of a disorder for the individual at 213. The output from 213 may include a probability that the individual suffers the disorder and/or a prediction of the severity of the disorder.” Examiner notes that Quatieri teaches collection, process, and output from several individuals including stored data, which reasonably comprises data spanning at least days). …determined by the plurality of speech recognition models that each determine effects of different treatments to the illness as different feature value outputs…(Quatieri, para. 0019: “Model-based feature extraction serves several purposes: (1) It can provide greater insight into the system under investigation. 
Superior insight can enable targeted treatments, or approaches to task performance (e.g., methods of effectively training an individual, treating different aspects of a disorder, monitoring the effectiveness of an intervention) (2) It can support simulation of interventions and performance/ risk reduction.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Quatieri into Kim. Kim teaches medical assessment based on voice using speech recognition models. Quatieri teaches assessing the condition of a subject, in which control parameters are derived from a neurophysiological computational model that operates on features extracted from a speech signal. One of ordinary skill would have been motivated to combine the teachings of Quatieri into Kim in order to enable targeted treatments, or approaches to task performance (e.g., methods of effectively training an individual, treating different aspects of a disorder, monitoring the effectiveness of an intervention)—superior insight can support simulation of interventions and performance/risk reduction. (Quatieri, para. 0019).

Moreover, Lee teaches: …Mahalanobis’ distances of the differences that utilize a distance measurement matrix weighted with a hyperparameter for the relevance. (Lee, sec. IV and Fig 3: “In order to check whether the use of Mahalanobis distance with our prescription of the metric matrix is actually effective or not, we examined the discrimination in the Viterbi score of HMM of the two winners”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lee into Kim, as modified. Lee teaches use of the Mahalanobis distance in the calculation of the similarity measure between feature vectors for speaker-independent speech recognition.
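The limitation Lee is cited for can be illustrated in miniature. This is a hedged sketch, not the claimed implementation: the function names and the way a relevance hyperparameter scales the metric matrix are illustrative assumptions, shown only to make the "distance measurement matrix weighted with a hyperparameter for the relevance" language concrete.

```python
import math

def mahalanobis(x, y, inv_cov):
    """Sqrt of (x - y)^T * inv_cov * (x - y), for small dense matrices."""
    d = [a - b for a, b in zip(x, y)]
    # v = inv_cov @ d, computed row by row in pure Python
    v = [sum(row[j] * d[j] for j in range(len(d))) for row in inv_cov]
    return math.sqrt(sum(di * vi for di, vi in zip(d, v)))

def relevance_weighted_metric(inv_cov, relevance, alpha=1.0):
    """Scale the metric matrix by a relevance hyperparameter.

    Assumption for illustration: higher relevance yields larger distances
    for the same feature difference, via the factor (1 + alpha * relevance).
    """
    w = 1.0 + alpha * relevance
    return [[w * c for c in row] for row in inv_cov]

# With an identity metric the distance reduces to Euclidean: sqrt(3^2 + 4^2).
identity = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis([3.0, 4.0], [0.0, 0.0], identity))  # 5.0

# With relevance 1.0 the metric doubles, scaling the distance by sqrt(2).
weighted = relevance_weighted_metric(identity, relevance=1.0)
print(mahalanobis([3.0, 4.0], [0.0, 0.0], weighted))
```

In a full covariance setting the metric matrix would be the inverse covariance of the feature vectors rather than the identity used here for readability.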
One of ordinary skill would have been motivated to combine the teachings of Lee into Kim, as modified, in order to improve pattern classification and recognition using the Mahalanobis distance in speech recognition (Lee, sec. V).

Regarding claims 2, 12, and 17, Kim, as modified, teaches claims 1, 11, and 16, above. Kim further teaches: The computer-implemented method of claim 1, wherein the calculating the similarities includes, for each model (Kim, para. 0043 teaches a plurality of models), calculating the similarities based on distances between the output of the model at the current timing and the output of the model at the plurality of past timings (Kim, para. 0095-0099 teaches comparison of folds and statistics and measuring variability where variability is a score based on distances which indicate stability and removing less relevant data i.e. the unstable data where each point of comparison may be a different output at a different timing).

Regarding claims 4, 13, and 18, Kim, as modified, teaches claims 1, 11, and 16 above. Kim further teaches: the calculating the relevance includes, for each model: extracting a plurality of timings at which the output of the model is similar to the output of the model at a predetermined timing in the past; (Kim, para. 0045: “In some embodiments, a voice module 104 may compare results of a voice assessment and/or model for medical trial and/or study participants with results of a questionnaire or other test, may provide a score similar to and/or on the same scale as a questionnaire or other test, or the like” examiner notes that Kim teaches a comparison i.e. calculation of specific data i.e. extracted from general data of plurality of participants at assessment to another assessment).
calculating a likelihood of the model given a value of the objective speech recognition variable at the predetermined timing, based on a distribution of values of the objective speech recognition variable at the plurality of timings; and (Kim para. 0043 and 0090: “ A voice module 104 may extract one or more verbal queues and/or features and pass the extracted verbal queues and/or features to one or more machine learning models trained for a certain disease and/or other medical condition.” “Feature selection score computation component 520 may compute a selection score for each feature (which may be an acoustic feature, a language feature, or any other feature described herein). To compute a selection score for a feature, a pair of numbers may be created for each speech data item in the training corpus, where the first number of the pair is the value of the feature and the second number of the pair is an indicator of the medical condition diagnosis. The value for the indicator of the medical condition diagnosis may have two values (e.g., 0 if the person does not have the medical condition and 1 if the person has the medical condition) or may have a larger number of values (e.g., a real number between 0 and 1 or multiple integers indicating a likelihood or severity of the medical condition).” Examiner notes that for examination purposes only “calculating a likelihood of a model” is interpreted as calculating the likelihood of a medical condition such that the variable is more or less likely to be used in the model associated with that condition). calculating the relevance based on the likelihood. (Kim, para. 0093: “The selection score may indicate the usefulness of the feature in detecting a medical condition. 
For example, a high selection score may indicate that a feature should be used in training the mathematical model, and a low selection score may indicate that the feature should not be used in training the mathematical model.” Kim teaches a score indicating a less useful i.e. less relevant feature).

Regarding claim 5, Kim, as modified, teaches claim 4 above. Kim further teaches: wherein the calculating the relevance includes calculating the relevance based on an average of logarithm of the likelihoods of the model given the values of the objective speech recognition variable calculated respectively for a plurality of the predetermined timings. (Kim, para. 0101: “In some implementations, the selection scores and stability scores may be combined when selecting features. For example, for each feature a combined score may be computed (such as by adding or multiplying the selection score and the stability score for the feature) and features may be selected using the combined score.” Examiner notes Kim teaches combination of scores with examples of adding or multiplying where another example might be combination by average of logarithms).

Regarding claim 6, Kim, as modified, teaches claim 5 above. Kim further teaches: The computer-implemented method of claim 5, wherein the calculating the relevance includes weighting the logarithm of the likelihoods, according to differences between the current timing and the predetermined timings. (Kim, para.
0079 and 0101: “In some implementations, an acoustic feature may be computed using statistics of the short-time segment features (e.g., arithmetic mean, standard deviation, skewness, kurtosis, first quartile, second quartile, third quartile, the second quartile minus the first quartile, the third quartile minus the first quartile, the third quartile minus the second quartile, 0.01 percentile, 0.99 percentile, the 0.99 percentile minus the 0.01 percentile, the percentage of short-time segments whose values are above a threshold (e.g., where the threshold is 75% of the range plus the minimum), the percentage of segments whose values are above a threshold (e.g., where the threshold is 90% of the range plus the minimum), the slope of a linear approximation of the values, the offset of a linear approximation of the values, the linear error computed as the difference of the linear approximation and the actual values, or the quadratic error computed as the difference of the linear approximation and the actual values. In some implementations, an acoustic feature may be computed as an i-vector or identity vector of the short-time segment features. An identity vector may be computed using any appropriate techniques, such as performing a matrix-to-vector conversion using a factor analysis technique and a Gaussian mixture model.” “In some implementations, the selection scores and stability scores may be combined when selecting features. For example, for each feature a combined score may be computed (such as by adding or multiplying the selection score and the stability score for the feature) and features may be selected using the combined score.” Examiner notes Kim teaches calculation of acoustic features and combination of scores with examples of adding or multiplying where another example might be combination by weights of logarithms). Claims 7-10, 14-15, and 19-20 are rejected under 35 U.S.C. 
103 as being unpatentable over Kim, in view of Quatieri, in view of Lee, and further in view of Lloyd et al. (US 8694313 B2, hereinafter “Lloyd”).

Regarding claims 7, 14, and 19, Kim, as modified, teaches claims 1, 11, and 16 above. Lloyd teaches: The computer-implemented method of claim 1, wherein the predicting the value of the objective speech recognition variable includes generating a predicted distribution of the value of the objective speech recognition variable at the target timing. (Lloyd, col. 14:48-52: “The probabilities may also be determined using a Dirichlet distribution-based approach. Using this more approach, the event of the user dialing a contact can be considered to be drawn from the categorical distribution, for which the conjugate prior under Bayes' theorem is the Dirichlet distribution.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lloyd into Kim, as modified. Lloyd teaches disambiguation of utterances in the context of initiating communications via mobile device. One of ordinary skill would have been motivated to combine the teachings of Lloyd into Kim, as modified, in order to improve pattern classification and recognition using the Mahalanobis distance in speech recognition (Lee, sec. V).

Regarding claim 8, Kim, as modified, teaches claim 7 above. Lloyd further teaches: The computer-implemented method of claim 7, further comprising: calculating an indicator of the generated predicted distribution, using a distribution of the values of the objective speech recognition variable at the plurality of past timings as a reference. (Lloyd, col. 12:29-38 and Table 1: “The weight values assigned to the different past interactions in Table 1 are exemplary only, and in other implementations different weight values may be used.
For instance, no weight values may be assigned, or all past interactions for a particular item of contact information may use the same weight value. When assigning weight values to a past interaction, higher weight values may be assigned to past interactions that were initiated through voice commands, since the distribution of communications is likely to be closer to what the user may have intended with a new utterance”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lloyd into Kim, as modified. Lloyd teaches disambiguation of utterances in the context of initiating communications via mobile device. One of ordinary skill would have been motivated to combine the teachings of Lloyd into Kim, as modified, in order to improve pattern classification and recognition using the Mahalanobis distance in speech recognition (Lee, sec. V).

Regarding claim 9, Kim, as modified, teaches claim 8 above. Lloyd further teaches: The computer-implemented method of claim 8, wherein the calculating the indicator includes calculating the indicator based on differences between distributions of the value of the objective speech recognition variable at the plurality of past timings and the generated predicted distribution. (Lloyd, col. 12:29-38 and Table 1: “The weight values assigned to the different past interactions in Table 1 are exemplary only, and in other implementations different weight values may be used. For instance, no weight values may be assigned, or all past interactions for a particular item of contact information may use the same weight value.
When assigning weight values to a past interaction, higher weight values may be assigned to past interactions that were initiated through voice commands, since the distribution of communications is likely to be closer to what the user may have intended with a new utterance”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lloyd into Kim, as modified. Lloyd teaches disambiguation of utterances in the context of initiating communications via mobile device. One of ordinary skill would have been motivated to combine the teachings of Lloyd into Kim, as modified, in order to improve pattern classification and recognition using the Mahalanobis distance in speech recognition (Lee, sec. V).

Regarding claims 10, 15, and 20, Kim, as modified, teaches claims 1, 11, and 16, above. Kim further teaches: The computer-implemented method of claim 1, wherein the calculating the similarities includes weighting the similarities according to differences between the current timing and the plurality of past timings (Kim, para. 0095-0099 teaches comparison of folds and statistics and measuring variability where variability is a score based on distances which indicate stability and removing less relevant data i.e. the unstable data). Kim as modified does not explicitly disclose, but Lloyd teaches: that gives more consideration to similarities calculated for timings closer to the current timing.
(Lloyd, col. 13:34-40: “For instance, the number of past interactions of a certain type may be added together, different types of communications may be weighted differently before being added together, and/or frequency counts may be scaled by multipliers to give more effect to one type of communication over another or that give more effect to recent communication over older communications.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lloyd into Kim, as modified. Lloyd teaches disambiguation of utterances in the context of initiating communications via mobile device. One of ordinary skill would have been motivated to combine the teachings of Lloyd into Kim, as modified, in order to improve pattern classification and recognition using the Mahalanobis distance in speech recognition (Lee, sec. V).

Response to Applicant Arguments/Remarks

35 U.S.C. § 101

In light of applicant’s amendments and arguments/remarks, the previously asserted 35 U.S.C. § 101 rejection is withdrawn.

35 U.S.C. § 103

Starting on page 13, applicant argues that claims 1, 4-11, 13-16, and 18-20 are not taught by the previous combination of Lloyd, Chebotar, Brabenec, and Quatieri. However, independent claims 1, 11, and 16 are currently amended to recite the use of Mahalanobis distances. Therefore, in light of the amendments and applicant’s arguments and remarks, the claims now stand rejected over Kim, in view of Quatieri and Lee, where Lee teaches Mahalanobis distances.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley, whose telephone number is (571) 272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STL/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147
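The limitation at issue in the rejection above — weighting similarities so that timings closer to the current timing get more consideration — can be sketched as follows. This is a minimal illustration only: the exponential-decay weighting, function name, and parameters are assumptions for demonstration, not the applicant's claimed formulation or anything disclosed by Kim or Lloyd.

```python
import math

def weighted_similarity(similarities, past_timings, current_timing, decay=0.5):
    """Aggregate similarities, weighting each by how close its past
    timing is to the current timing (closer timings count more).
    Exponential decay is an illustrative choice; the claim language
    does not specify a particular weighting function."""
    weights = [math.exp(-decay * abs(current_timing - t)) for t in past_timings]
    # Weighted average: recent similarities dominate the result.
    return sum(w * s for w, s in zip(weights, similarities)) / sum(weights)
```

With `decay=1.0`, a similarity observed one time unit ago outweighs one observed ten units ago by a factor of roughly e^9, so the aggregate tracks the most recent observations.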
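The motivation-to-combine statements above rely on Lee's use of the Mahalanobis distance for speech-recognition pattern classification. As background, a minimal pure-Python sketch of the standard Mahalanobis distance (not Lee's optimized variant, and not the applicant's claimed computation):

```python
import math

def mahalanobis(x, mean, cov_inv):
    """Standard Mahalanobis distance:
        d(x) = sqrt((x - mean)^T * S_inv * (x - mean))
    where cov_inv is the inverse covariance matrix S_inv.  With the
    identity matrix as cov_inv, this reduces to Euclidean distance."""
    diff = [xi - mi for xi, mi in zip(x, mean)]
    n = len(diff)
    # Matrix-vector product: cov_inv @ diff.
    tmp = [sum(cov_inv[i][j] * diff[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(d * t for d, t in zip(diff, tmp)))
```

Scaling by the inverse covariance is what distinguishes it from Euclidean distance: directions with high variance contribute less to the distance, which is why it is favored for classifying correlated feature vectors such as speech features.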

Prosecution Timeline

Dec 11, 2019
Application Filed
Sep 28, 2022
Non-Final Rejection — §103
Dec 20, 2022
Applicant Interview (Telephonic)
Dec 20, 2022
Response Filed
Jan 09, 2023
Examiner Interview Summary
Mar 16, 2023
Final Rejection — §103
Apr 20, 2023
Response after Non-Final Action
May 10, 2023
Applicant Interview (Telephonic)
May 10, 2023
Response after Non-Final Action
Jun 21, 2023
Request for Continued Examination
Jun 26, 2023
Response after Non-Final Action
Jan 20, 2024
Non-Final Rejection — §103
Apr 25, 2024
Applicant Interview (Telephonic)
Apr 25, 2024
Examiner Interview Summary
Apr 30, 2024
Response Filed
Jun 18, 2024
Final Rejection — §103
Aug 16, 2024
Interview Requested
Aug 23, 2024
Response after Non-Final Action
Sep 03, 2024
Applicant Interview (Telephonic)
Sep 03, 2024
Response after Non-Final Action
Sep 11, 2024
Request for Continued Examination
Oct 03, 2024
Response after Non-Final Action
Dec 28, 2024
Non-Final Rejection — §103
Mar 21, 2025
Interview Requested
Apr 01, 2025
Examiner Interview Summary
Apr 01, 2025
Applicant Interview (Telephonic)
Apr 07, 2025
Response Filed
May 28, 2025
Final Rejection — §103
Jul 17, 2025
Interview Requested
Jul 23, 2025
Examiner Interview Summary
Jul 23, 2025
Applicant Interview (Telephonic)
Jul 28, 2025
Response after Non-Final Action
Aug 19, 2025
Request for Continued Examination
Aug 28, 2025
Response after Non-Final Action
Sep 11, 2025
Non-Final Rejection — §103
Nov 20, 2025
Interview Requested
Dec 01, 2025
Examiner Interview Summary
Dec 01, 2025
Applicant Interview (Telephonic)
Dec 15, 2025
Response Filed
Jan 30, 2026
Final Rejection — §103
Apr 03, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830
COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS
2y 5m to grant Granted Oct 14, 2025
Patent 12135927
EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY
2y 5m to grant Granted Nov 05, 2024
Patent 11880776
GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE
2y 5m to grant Granted Jan 23, 2024
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
15%
Grant Probability
44%
With Interview (+28.8%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
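The headline projection figures are consistent with a simple additive model over the stated inputs. A sketch (the additive "allow rate + interview lift" relationship is inferred from the displayed numbers, not a documented formula of this dashboard):

```python
# Reproduce the dashboard's headline figures from its stated inputs:
# 5 grants out of 33 resolved cases, and a +28.8% interview lift.
granted, resolved = 5, 33
allow_rate = granted / resolved                 # career allow rate, ~0.1515
interview_lift = 0.288                          # stated +28.8% lift
with_interview = allow_rate + interview_lift    # projected rate with interview

print(f"{allow_rate:.0%}")       # -> 15%
print(f"{with_interview:.0%}")   # -> 44%
```

So 15% + 28.8% ≈ 44%, matching the "Grant Probability" and "With Interview" figures shown above.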
