Prosecution Insights
Last updated: April 19, 2026
Application No. 18/162,744

IMPACT SCORE FOR ONTOLOGY CHANGES

Non-Final OA (§103)

Filed: Feb 01, 2023
Examiner: ORTIZ SANCHEZ, MICHAEL
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Predictions:
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 10m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 66%, above average (327 granted / 492 resolved; +4.5% vs TC avg)
Interview Lift: +27.7%, strong (resolved cases with vs. without interview)
Avg Prosecution: 3y 10m typical timeline (26 applications currently pending)
Career History: 518 total applications across all art units

Statute-Specific Performance

§101: 14.6% (-25.4% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 19.5% (-20.5% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 492 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/15/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-7 and 9-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Engelberg (EP 4258601 A1) in view of Lavallee (U.S. Patent Application Publication No. 2021/0064829 A1), further in view of Subramanian (U.S. Patent No. 10,445,170 B1).
Regarding claim 1, Engelberg teaches a computer-implemented method for a re-analysis of assignments of terms to assets (methods, systems, and apparatus for ontology-based risk propagation over digital twins; see Abstract), comprising:

detecting a change associated with a term in a term ontology comprising a plurality of terms assigned to the assets (identifying, for the first process node, a set of incoming nodes, each incoming node comprising an asset node or a process node and being connected to the first process node by a respective edge; see par. [0008]);

determining at least one selected from a group consisting of: a Domain Feature Change Vector (DFCV) for a domain of the term ontology affected by the change, and a Term Feature Change Vector (TFCV) for a term affected by the change (determining a direct risk for the first process based on relations between the first process node and asset nodes of the set of incoming nodes, see par. [0008]; the indirect risk for the first process is represented by an indirect risk vector including multiple risk values, each risk value being associated with the different aspect of risk; the aggregated risk for the first process is represented by an aggregated risk vector including multiple risk values, each risk value being associated with the different aspect of risk; determining the aggregated risk for the first process comprises generating the aggregated risk vector, including selecting, for each of the different aspects of risk, the maximum risk value between the direct risk vector and indirect risk vector, see par. [0010]);

identifying assets for the re-analysis of the assignments of terms, wherein each of the assets is associated with an impact score value based on the DFCV and/or the TFCV (the user interface 700 also provides a total risk score 710 for the selected process of "Marriage"; see par. [0117]); and

performing the re-analysis of the assignments of terms for the assets ordered by the impact score value (the user interface 700 provides a mitigation recommendation 720 for reducing the risk score of the selected process; the example mitigation recommendation 720 includes a recommended action that would reduce the total risk score of the Marriage process from 82 to zero; in some examples, the mitigation recommendation 720 includes multiple recommended actions; in some examples, the mitigation recommendation 720 specifies a priority of the multiple recommended actions; see par. [0118]).

However, Engelberg does not teach that the impact score is determined based on at least one selected from a group consisting of: a scalar product of the DFCV and an assignment feature vector (AFV) of each term assigned to the respective one of the assets, and a scalar product of the TFCV and the AFV of each term assigned to a respective one of the assets.

In a similar field of endeavor, Lavallee teaches that a set of numeric features may be represented as a feature vector, and that a simple example of a two-way classification from a feature vector entails calculating the scalar product of the feature vector and a vector of weights, comparing the product to a threshold, and assigning a classification based upon that comparison: one class for greater than the threshold, the other class for less than or equal to the threshold. Processes for classification that may employ a feature vector include nearest-neighbor classification, neural network classification, and statistical classification. In natural language understanding, features may include bigrams, number of tokens, positional features, unigrams, word stems, and grammar, for example; see par. [0068].
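For orientation only, the claimed scalar-product mechanism can be sketched in a few lines. This is a hypothetical illustration, not the applicant's or any reference's implementation: the feature space, the TFCV values, and the AFVs below are all invented for the example.

```python
def impact_score(change_vec, afv):
    """Impact score as the scalar product of a feature change vector
    (TFCV or DFCV) and an asset term's assignment feature vector (AFV)."""
    return sum(c * w for c, w in zip(change_vec, afv))

# Hypothetical 4-component feature space:
# [term name, term description, term relations, data class]
tfcv = [1.0, 0.5, 0.0, 0.0]    # the term's name changed, its description partly
afv_a = [0.8, 0.1, 0.1, 0.0]   # asset A's assignment leans on the term's name
afv_b = [0.0, 0.2, 0.7, 0.1]   # asset B's assignment leans on term relations

scores = {"A": impact_score(tfcv, afv_a), "B": impact_score(tfcv, afv_b)}
order = sorted(scores, key=scores.get, reverse=True)  # re-analyze A before B
```

With these made-up numbers, asset A scores 0.85 and asset B only 0.10, so A would be re-analyzed first, matching the claim's "ordered by the impact score value" step.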
It would have been obvious to one of ordinary skill in the art to combine the Engelberg invention with the teachings of Lavallee for the benefit of reclassifying assigned classifications based on the comparison of the product to a threshold; see par. [0068].

However, Engelberg in view of Lavallee does not teach updating one or more of the assignments of terms for the assets based on the impact score value.

In a similar field of endeavor, Subramanian teaches a system for data lineage identification and change impact prediction in a distributed computing environment; see col. 2, lines 31-37. The system comprises a plurality of distributed server computing devices that coordinate over a network to capture metadata associated with each of a plurality of data sources coupled to the plurality of distributed server computing devices, the metadata comprising technical attributes that define data objects stored in the plurality of data sources. The plurality of distributed server computing devices extract unstructured text from one or more stored database incident tickets, the unstructured text comprising error messages associated with one or more of the data objects stored in the plurality of data sources, and match the unstructured text to the metadata for the data objects. The plurality of distributed server computing devices generate a multidimensional vector for one or more of the data objects stored in the plurality of data sources based upon the data lineage and the unstructured text, the multidimensional vector comprising a change impact feature set for the data objects. The plurality of distributed server computing devices train a change classification model using the multidimensional vectors to predict a change impact score for each data object and rank the data objects based upon the change impact scores. The plurality of distributed server computing devices receive a request to change a data object stored in one of the data sources (updating one or more of the assignments of terms based on the impact score). The plurality of distributed server computing devices determine, by executing the change classification model, a change impact score for the data object identified in the request. When the change impact score is below a predetermined threshold, the plurality of distributed server computing devices execute the requested change by generating programmatic instructions that are transmitted to the data source that stores the data object identified in the request, wherein the data source executes the programmatic instructions to change one or more of a data structure or a data type of the data object.

It would have been obvious to one of ordinary skill in the art to combine the Engelberg in view of Lavallee invention with the teachings of Subramanian in order to keep track of how data is disseminated in the system; see col. 1, lines 39-52.

Regarding claim 2, Engelberg teaches the computer-implemented method of claim 1, wherein the identified assets are selected if an associated impact score value is equal to or greater than a predefined first threshold value (through the risk assessment step, an alert can be presented considering the deviation of the quantified risk from a pre-defined threshold (denoted as a cardinal risk); see par. [0099]).

Regarding claim 3, Engelberg teaches the computer-implemented method of claim 1, wherein the re-analysis is performed for the identified assets ordered by the impact score value until an associated impact score value is equal to or less than a predefined second threshold value, or the re-analysis yields no further change in the assignments of the terms (reduce the total risk score of the Marriage process from 82 to zero; see par. [0118]).
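The two-threshold behavior recited in claims 2 and 3 (select assets at or above a first threshold; stop processing at or below a second) can be illustrated with a minimal sketch. Both threshold values and all scores here are hypothetical, chosen only to make the control flow visible.

```python
FIRST_THRESHOLD = 0.5    # hypothetical selection threshold (claim 2)
SECOND_THRESHOLD = 0.1   # hypothetical stop threshold (claim 3)

def reanalysis_queue(scores, first=FIRST_THRESHOLD, second=SECOND_THRESHOLD):
    """Select assets whose impact score meets the first threshold, order them
    by decreasing score, and stop once a score falls to or below the second."""
    selected = sorted(
        (asset for asset, s in scores.items() if s >= first),
        key=scores.get, reverse=True)
    queue = []
    for asset in selected:
        if scores[asset] <= second:
            break  # remaining assets score too low to be worth re-analyzing
        queue.append(asset)
    return queue

queue = reanalysis_queue({"A": 0.85, "B": 0.6, "C": 0.3, "D": 0.05})
```

With these invented scores, only A and B pass the first threshold, and both exceed the second, so the re-analysis queue is ["A", "B"] in decreasing-score order.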
Regarding claim 4, Engelberg teaches the computer-implemented method of claim 1, wherein the re-analysis is performed in an order of decreasing impact score values, and/or wherein the impact score values associated with the assets are indicative of at least one selected from a group consisting of: a likelihood for a change of the assignments of terms to respective assets, an expected change of an assignment confidence value for the terms assigned to the respective assets, and an expected change of assignment quality value due to a change of the assignments of terms to the respective assets (prioritization of remedial actions can include determining a risk assessment based on a knowledge graph, and generating a prioritized list of remedial actions based on the risk assessment and a risk tolerance profile, the prioritized list of remedial actions being generated by a mitigation simulator; see par. [0021]).

Regarding claim 5, Engelberg teaches the computer-implemented method of claim 1, wherein the change in the term ontology comprises at least one selected from a group consisting of: a term being added to the term ontology, a term being removed from the term ontology, a change of a term name of the term in the term ontology, a change of a term description of the term in the term ontology, a change of a term relation between at least two of the terms in the term ontology, a term split of a term in the term ontology, and a term union of at least two of the terms in the term ontology (in comparing multiple knowledge graphs, the difference between the knowledge graphs is a target of interest, as differences can reveal vulnerabilities that were added, were removed or that persisted across all knowledge graphs; see par. [0020]).
Regarding claim 6, Engelberg teaches the computer-implemented method of claim 1, wherein each of the assignments of terms comprises at least one selected from a group consisting of: an assignment confidence value indicative of a confidence that an assigned term matches a respective asset, an indicator of a type of an analysis of the respective asset used to create the assignment, an indicator of a type of the re-analysis of the respective assignment, and an assignment feature vector (AFV), wherein components of the AFV are indicative of a weight value for each feature of the assigned term, wherein the analysis or the re-analysis determines whether or not to assign the term to the respective asset depending on the features weighted according to the respective weight values (risk can be used to quantify the possibility of reaching some given objectives, where such a quantity value is derived from the combination of the probability that a certain risk event occurs (as a perturbation of the plan for reaching the objectives) and a set of severity values; see par. [0052]).

Regarding claim 7, Engelberg teaches the computer-implemented method of claim 6, wherein the features of the terms comprise at least one selected from a group consisting of: a name of the respective term, a description of the respective term, a term relation of the respective term to one or more other terms in the term ontology, an asset relation of the respective term to one or more other assets, a data class of the respective term, a classification of the respective term, and a domain in the term ontology which comprises the respective term (determining a direct risk for the first process based on relations between the first process node and asset nodes of the set of incoming nodes; see par. [0008]).
Regarding claim 9, Engelberg teaches the computer-implemented method of claim 1, wherein the assets are identified, or the impact score values are determined, based on the TFCV when a change of a single term in the term ontology is detected, or wherein the assets are identified or the impact score values are determined based on the DFCV when the detected at least one change comprises at least one selected from a group consisting of: changes of multiple terms in the term ontology, a term being added to the term ontology, and a term being removed from the term ontology (in comparing multiple knowledge graphs, the difference between the knowledge graphs is a target of interest, as differences can reveal vulnerabilities that were added, were removed or that persisted across all knowledge graphs; see par. [0020]).

Regarding claim 10, Engelberg teaches the computer-implemented method of claim 1, wherein components of the TFCV comprise a predefined or maximum value for each feature of a removed term or for each feature of an added term, or wherein one or more components of the TFCV are zero when the corresponding one or more features of the term are not affected by the determined at least one change, or wherein the DFCV of a domain of the term ontology is a sum of the TFCVs determined for the terms in the domain (indirect risk, or followed risk, is an impact of a risk vector from an element to another that has a process dependency relation; if the set of incoming nodes is zero, then indirect risk is zero and the importance vector is zero; see par. [0092]).

Regarding claim 11, Engelberg teaches the computer-implemented method of claim 1, wherein the assets are physical assets, customer tables, documents, or metadata of source assets (assets include, without limitation, users 122, computing devices 124, electronic documents 126, and servers 128; see par. [0031]).
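The vector arithmetic recited in claim 10 is compact enough to sketch directly: an unaffected feature contributes a zero component, an added or removed term contributes a predefined maximum in every component, and the domain's DFCV is the component-wise sum of its terms' TFCVs. The maximum value and the example TFCVs below are hypothetical.

```python
MAX_CHANGE = 1.0  # hypothetical predefined/maximum value for added or removed terms

def tfcv_for_removed_term(n_features):
    # Every feature of a removed (or added) term takes the maximum value.
    return [MAX_CHANGE] * n_features

def dfcv(tfcvs):
    # The domain's DFCV is the component-wise sum of its terms' TFCVs.
    return [sum(components) for components in zip(*tfcvs)]

domain_dfcv = dfcv([
    [1.0, 0.5, 0.0],           # a renamed term: unaffected features stay zero
    [0.0, 0.0, 0.0],           # an unchanged term contributes an all-zero TFCV
    tfcv_for_removed_term(3),  # a removed term contributes the maximum everywhere
])
```

Summing these three hypothetical TFCVs gives a domain vector of [2.0, 1.5, 1.0], which would then feed the scalar-product impact scores of claim 1.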
Regarding claim 12, Engelberg teaches the computer-implemented method of claim 1, further comprising: using a machine learning system or a data classification system for the identification of the assets or for the determination of the impact score values (pattern detection and classification of events for approved automation processes; see par. [0042]).

Regarding claim 13, Engelberg teaches the computer-implemented method of claim 12, wherein the machine learning system is continuously re-trained based on new data of a first or second threshold value for changes of the assignments of the terms (in some examples, agile security automation bots continuously analyze attack probability, predict impact, and recommend prioritized actions for cyber risk reduction; see par. [0025]).

Regarding claim 14, Engelberg teaches a re-analysis system for assignments of terms to assets, comprising: one or more computer readable storage media storing program instructions and one or more processors which, in response to executing the program instructions (a system for implementing the methods provided herein; the system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided, see par. [0012]), are configured to:

detect a change associated with a term in a term ontology comprising a plurality of terms assigned to one or more assets (identifying, for the first process node, a set of incoming nodes, each incoming node comprising an asset node or a process node and being connected to the first process node by a respective edge; see par. [0008]);

determine at least one selected from a group consisting of: a Domain Feature Change Vector (DFCV) for a domain of the term ontology affected by the change, and a Term Feature Change Vector (TFCV) for a term affected by the change (determining a direct risk for the first process based on relations between the first process node and asset nodes of the set of incoming nodes, see par. [0008]; the indirect risk for the first process is represented by an indirect risk vector including multiple risk values, each risk value being associated with the different aspect of risk; the aggregated risk for the first process is represented by an aggregated risk vector including multiple risk values, each risk value being associated with the different aspect of risk; determining the aggregated risk for the first process comprises generating the aggregated risk vector, including selecting, for each of the different aspects of risk, the maximum risk value between the direct risk vector and indirect risk vector, see par. [0010]);

identify assets for the re-analysis of the assignments of terms, wherein each of the assets is associated with an impact score value based on the DFCV and/or the TFCV (the user interface 700 also provides a total risk score 710 for the selected process of "Marriage"; see par. [0117]); and

perform the re-analysis of the assignments of terms for the assets ordered by the impact score value (the user interface 700 provides a mitigation recommendation 720 for reducing the risk score of the selected process; the example mitigation recommendation 720 includes a recommended action that would reduce the total risk score of the Marriage process from 82 to zero; in some examples, the mitigation recommendation 720 includes multiple recommended actions; in some examples, the mitigation recommendation 720 specifies a priority of the multiple recommended actions; see par. [0118]).
However, Engelberg does not teach that the impact score is determined based on at least one selected from a group consisting of: a scalar product of the DFCV and an assignment feature vector (AFV) of each term assigned to the respective one of the assets, and a scalar product of the TFCV and the AFV of each term assigned to a respective one of the assets.

In a similar field of endeavor, Lavallee teaches that a set of numeric features may be represented as a feature vector, and that a simple example of a two-way classification from a feature vector entails calculating the scalar product of the feature vector and a vector of weights, comparing the product to a threshold, and assigning a classification based upon that comparison: one class for greater than the threshold, the other class for less than or equal to the threshold. Processes for classification that may employ a feature vector include nearest-neighbor classification, neural network classification, and statistical classification. In natural language understanding, features may include bigrams, number of tokens, positional features, unigrams, word stems, and grammar, for example; see par. [0068].

It would have been obvious to one of ordinary skill in the art to combine the Engelberg invention with the teachings of Lavallee for the benefit of reclassifying assigned classifications based on the comparison of the product to a threshold; see par. [0068].

However, Engelberg in view of Lavallee does not teach updating one or more of the assignments of terms for the assets based on the impact score value.

In a similar field of endeavor, Subramanian teaches a system for data lineage identification and change impact prediction in a distributed computing environment; see col. 2, lines 31-37. The system comprises a plurality of distributed server computing devices that coordinate over a network to capture metadata associated with each of a plurality of data sources coupled to the plurality of distributed server computing devices, the metadata comprising technical attributes that define data objects stored in the plurality of data sources. The plurality of distributed server computing devices extract unstructured text from one or more stored database incident tickets, the unstructured text comprising error messages associated with one or more of the data objects stored in the plurality of data sources, and match the unstructured text to the metadata for the data objects. The plurality of distributed server computing devices generate a multidimensional vector for one or more of the data objects stored in the plurality of data sources based upon the data lineage and the unstructured text, the multidimensional vector comprising a change impact feature set for the data objects. The plurality of distributed server computing devices train a change classification model using the multidimensional vectors to predict a change impact score for each data object and rank the data objects based upon the change impact scores. The plurality of distributed server computing devices receive a request to change a data object stored in one of the data sources (updating one or more of the assignments of terms based on the impact score). The plurality of distributed server computing devices determine, by executing the change classification model, a change impact score for the data object identified in the request. When the change impact score is below a predetermined threshold, the plurality of distributed server computing devices execute the requested change by generating programmatic instructions that are transmitted to the data source that stores the data object identified in the request, wherein the data source executes the programmatic instructions to change one or more of a data structure or a data type of the data object.

It would have been obvious to one of ordinary skill in the art to combine the Engelberg in view of Lavallee invention with the teachings of Subramanian in order to keep track of how data is disseminated in the system; see col. 1, lines 39-52.

Regarding claim 15, Engelberg teaches the re-analysis system of claim 14, wherein the identified assets are selected when the associated impact score value is equal to or greater than a predefined first threshold value (through the risk assessment step, an alert can be presented considering the deviation of the quantified risk from a pre-defined threshold (denoted as a cardinal risk); see par. [0099]).

Regarding claim 16, Engelberg teaches the re-analysis system of claim 14, wherein the program instructions configured to cause the one or more processors to perform the re-analysis are further configured to cause the one or more processors to: perform the re-analysis for the assets ordered by the impact score value until the associated impact score value is equal to or less than a predefined second threshold value, or the re-analysis yields no further change in the assignments of the terms (reduce the total risk score of the Marriage process from 82 to zero; see par. [0118]).
Regarding claim 17, Engelberg teaches the re-analysis system of claim 14, wherein the program instructions are further configured to cause the one or more processors to: perform the re-analysis in an order of decreasing impact score values, wherein the impact score values associated with the assets are indicative of at least one selected from a group consisting of: a likelihood for a change of the assignments of terms to the respective assets, an expected change of an assignment confidence for the terms assigned to the respective assets, and an expected change of assignment quality value due to a change of the assignments of terms to the respective assets (prioritization of remedial actions can include determining a risk assessment based on a knowledge graph, and generating a prioritized list of remedial actions based on the risk assessment and a risk tolerance profile, the prioritized list of remedial actions being generated by a mitigation simulator; see par. [0021]).

Regarding claim 18, Engelberg teaches the re-analysis system of claim 14, wherein the change in the term ontology comprises at least one selected from a group consisting of: a term being added to the term ontology, a term being removed from the term ontology, a change of a term name of the term in the term ontology, a change of a term description of the term in the term ontology, a change of a term relation between at least two of the terms in the term ontology, a term split of a term in the term ontology, and a term union of at least two of the terms in the term ontology (in comparing multiple knowledge graphs, the difference between the knowledge graphs is a target of interest, as differences can reveal vulnerabilities that were added, were removed or that persisted across all knowledge graphs; see par. [0020]).
Regarding claim 19, Engelberg teaches the re-analysis system of claim 14, wherein each of the assignments of terms comprises at least one selected from a group consisting of: an assignment confidence value indicative of a confidence that the assigned term matches the respective asset, an indicator of a type of an analysis of the respective asset used to create the assignment, an indicator of a type of the re-analysis of the respective assignment, and an assignment feature vector (AFV), wherein components of the AFV are indicative of a weight value for each feature of the assigned term, wherein the analysis or the re-analysis determines whether or not to assign the term to the respective asset depending on the features weighted according to the respective weight values (risk can be used to quantify the possibility of reaching some given objectives, where such a quantity value is derived from the combination of the probability that a certain risk event occurs (as a perturbation of the plan for reaching the objectives) and a set of severity values; see par. [0052]).

Regarding claim 20, Engelberg teaches a computer program product (computer programs; see par. [0009]) for a re-analysis of assignments of terms to assets, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being executable by one or more computing systems or controllers to cause the one or more computing systems to:

detect a change associated with a term in a term ontology comprising a plurality of terms assigned to one or more assets (identifying, for the first process node, a set of incoming nodes, each incoming node comprising an asset node or a process node and being connected to the first process node by a respective edge; see par. [0008]);

determine at least one selected from a group consisting of: a Domain Feature Change Vector (DFCV) for a domain of the term ontology affected by the change, and a Term Feature Change Vector (TFCV) for a term affected by the change (determining a direct risk for the first process based on relations between the first process node and asset nodes of the set of incoming nodes, see par. [0008]; the indirect risk for the first process is represented by an indirect risk vector including multiple risk values, each risk value being associated with the different aspect of risk; the aggregated risk for the first process is represented by an aggregated risk vector including multiple risk values, each risk value being associated with the different aspect of risk; determining the aggregated risk for the first process comprises generating the aggregated risk vector, including selecting, for each of the different aspects of risk, the maximum risk value between the direct risk vector and indirect risk vector, see par. [0010]);

identify assets for the re-analysis of the assignments of terms, wherein each of the assets is associated with an impact score value based on the DFCV and/or the TFCV (the user interface 700 also provides a total risk score 710 for the selected process of "Marriage"; see par. [0117]); and

perform the re-analysis of the assignments of terms for the assets ordered by the impact score value (the user interface 700 provides a mitigation recommendation 720 for reducing the risk score of the selected process; the example mitigation recommendation 720 includes a recommended action that would reduce the total risk score of the Marriage process from 82 to zero; in some examples, the mitigation recommendation 720 includes multiple recommended actions; in some examples, the mitigation recommendation 720 specifies a priority of the multiple recommended actions; see par. [0118]).
However, Engelberg does not teach that the impact score is determined based on at least one selected from a group consisting of: a scalar product of the DFCV and an assignment feature vector (AFV) of each term assigned to the respective one of the assets, and a scalar product of the TFCV and the AFV of each term assigned to a respective one of the assets.

In a similar field of endeavor, Lavallee teaches that a set of numeric features may be represented as a feature vector, and that a simple example of a two-way classification from a feature vector entails calculating the scalar product of the feature vector and a vector of weights, comparing the product to a threshold, and assigning a classification based upon that comparison: one class for greater than the threshold, the other class for less than or equal to the threshold. Processes for classification that may employ a feature vector include nearest-neighbor classification, neural network classification, and statistical classification. In natural language understanding, features may include bigrams, number of tokens, positional features, unigrams, word stems, and grammar, for example; see par. [0068].

It would have been obvious to one of ordinary skill in the art to combine the Engelberg invention with the teachings of Lavallee for the benefit of reclassifying assigned classifications based on the comparison of the product to a threshold; see par. [0068].

However, Engelberg in view of Lavallee does not teach updating one or more of the assignments of terms for the assets based on the impact score value.

In a similar field of endeavor, Subramanian teaches a system for data lineage identification and change impact prediction in a distributed computing environment; see col. 2, lines 31-37. The system comprises a plurality of distributed server computing devices that coordinate over a network to capture metadata associated with each of a plurality of data sources coupled to the plurality of distributed server computing devices, the metadata comprising technical attributes that define data objects stored in the plurality of data sources. The plurality of distributed server computing devices extract unstructured text from one or more stored database incident tickets, the unstructured text comprising error messages associated with one or more of the data objects stored in the plurality of data sources, and match the unstructured text to the metadata for the data objects. The plurality of distributed server computing devices generate a multidimensional vector for one or more of the data objects stored in the plurality of data sources based upon the data lineage and the unstructured text, the multidimensional vector comprising a change impact feature set for the data objects. The plurality of distributed server computing devices train a change classification model using the multidimensional vectors to predict a change impact score for each data object and rank the data objects based upon the change impact scores. The plurality of distributed server computing devices receive a request to change a data object stored in one of the data sources (updating one or more of the assignments of terms based on the impact score). The plurality of distributed server computing devices determine, by executing the change classification model, a change impact score for the data object identified in the request.
When the change impact score is below a predetermined threshold, the plurality of distributed server computing devices execute the requested change by generating programmatic instructions that are transmitted to the data source that stores the data object identified in the request, wherein the data source executes the programmatic instructions to change one or more of a data structure or a data type of the data object. It would have been obvious to one of ordinary skill in the art to combine the Engelberg in view of Lavallee invention with the teachings of Subramanian in order to keep track of how data is disseminated in the system, see col. 1 lines 39-52.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pertinent prior art is available on form 892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Ortiz-Sanchez, whose telephone number is (571) 270-3711. The examiner can normally be reached Monday-Friday, 9 AM-6 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ORTIZ-SANCHEZ/
Primary Examiner, Art Unit 2656
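The claim language and both secondary references rest on the same primitive: a score computed as the scalar product of two vectors, then compared to a threshold (Lavallee par. [0068]; Subramanian's threshold-gated change execution). A minimal sketch of that mechanic, with all vector values and function names illustrative rather than taken from any cited reference:

```python
def scalar_product(u, v):
    """Dot product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def classify(score, threshold):
    """Two-way classification per Lavallee par. [0068]: one class when the
    product exceeds the threshold, the other when it is less than or equal."""
    return "above" if score > threshold else "at_or_below"

# Illustrative vectors standing in for the claimed DFCV and the AFV of one
# term assigned to an asset (abbreviations as used in the claim).
dfcv = [0.8, 0.1, 0.4]
afv  = [1.0, 0.5, 0.2]

impact = scalar_product(dfcv, afv)   # 0.8 + 0.05 + 0.08 = 0.93
print(classify(impact, threshold=0.5))  # prints "above"
```

In Subramanian the analogous comparison gates whether a requested change executes: the programmatic instructions are generated only when the predicted change impact score is below a predetermined threshold.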

Prosecution Timeline

Feb 01, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §103
Jul 15, 2025
Interview Requested
Jul 28, 2025
Applicant Interview (Telephonic)
Jul 28, 2025
Examiner Interview Summary
Aug 11, 2025
Response Filed
Oct 22, 2025
Final Rejection — §103
Dec 05, 2025
Response after Non-Final Action
Jan 15, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103
Mar 30, 2026
Interview Requested
Apr 08, 2026
Examiner Interview Summary
Apr 08, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596887
SYSTEMS AND METHODS FOR TEXT SIMPLIFICATION WITH DOCUMENT-LEVEL CONTEXT
2y 5m to grant · Granted Apr 07, 2026
Patent 12566831
METHODS AND SYSTEMS FOR TRAINING A MACHINE LEARNING MODEL AND AUTHENTICATING A USER WITH THE MODEL
2y 5m to grant · Granted Mar 03, 2026
Patent 12567399
MANAGEMENT APPARATUS, MANAGEMENT SYSTEM, MANAGEMENT METHOD, AND RECORDING MEDIUM
2y 5m to grant · Granted Mar 03, 2026
Patent 12555577
Hotphrase Triggering Based On A Sequence Of Detections
2y 5m to grant · Granted Feb 17, 2026
Patent 12548574
APPARATUS FOR IMPLEMENTING SPEAKER DIARIZATION MODEL, METHOD OF SPEAKER DIARIZATION, AND PORTABLE TERMINAL INCLUDING THE APPARATUS
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
94%
With Interview (+27.7%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.
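The headline projections follow from the examiner's career counts reported above (327 granted of 492 resolved, with a +27.7 point interview lift). A minimal check, assuming the dashboard simply adds the lift in percentage points to the career allow rate:

```python
granted, resolved = 327, 492       # examiner's career counts, per dashboard
interview_lift = 27.7              # percentage points, per dashboard

base_rate = 100 * granted / resolved          # career allow rate, in percent
print(round(base_rate))                       # -> 66 (Grant Probability)
print(round(base_rate + interview_lift))      # -> 94 (With Interview)
```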
