Prosecution Insights
Last updated: April 19, 2026
Application No. 18/359,255

CLUSTER LEARNING AND LARGE LANGUAGE MODEL FRAMEWORK

Status: Final Rejection (§101, §102, §103)
Filed: Jul 26, 2023
Examiner: SCHMIEDER, NICOLE A K
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% — above average (113 granted / 167 resolved; +5.7% vs TC avg)
Interview Lift: +34.0% — strong, among resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 25 applications currently pending
Career History: 192 total applications across all art units
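The headline figures above follow directly from the career counts; a quick arithmetic check (the rounding convention is an assumption):

```python
# Sanity-check the dashboard figures from the raw career counts above.
granted, resolved, total = 113, 167, 192

allow_rate = granted / resolved  # career allow rate
pending = total - resolved       # applications still open

print(f"{allow_rate:.1%}")  # 67.7%, displayed as 68%
print(pending)              # 25 currently pending
```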

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 167 resolved cases
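Assuming the deltas are simple percentage-point differences, the Tech Center baseline implied by each statute can be recovered by subtracting the delta from the examiner's rate:

```python
# Examiner's per-statute allow rates and their deltas vs the TC average.
rates  = {"101": 21.9, "102": 13.0, "103": 46.7, "112": 13.9}
deltas = {"101": -18.1, "102": -27.0, "103": 6.7, "112": -26.1}

# rate = tc_avg + delta, so the implied baseline is rate - delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute implies the same 40.0% TC baseline
```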

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 12/09/2025. Claims 1, 3-14, 16-18, and 20-23 are pending and have been examined. All previous objections/rejections not mentioned in this Office Action have been withdrawn by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/09/2025 have been fully considered but they are not persuasive.

Regarding the 101 rejections, Applicant asserts on pages 7-11 that the claims do not recite mathematical concepts and incorporate AI-related recitations, therefore do not fall within mathematical concepts and mental processes, are not directed to an abstract idea, and further provide an improvement in computer technology. The Examiner respectfully disagrees with these assertions. First, the Examiner notes that the 101 rejection clearly states the claims are directed to the “Mental Processes” grouping of abstract ideas, and does not include mathematical concepts as part of the rejection. Therefore, any arguments related to the mathematical concepts grouping are moot. Additionally, the machine learning model/algorithm as recited in the claims can be interpreted as a human learning a set of rules for how to understand and evaluate natural language information to produce a desired outcome. Therefore, while the claims recite AI-related limitations, the claims do not recite language regarding features or processes a human is incapable of performing in the mind and/or with the assistance of pen and paper, as a human is capable of performing the same processes of the machine learning model/algorithm as specifically recited in the claims. Additionally, the claims do not recite a clear improvement to computer technology.
There is no indication of how the formation of clusters or identification of features to use as input provides an improvement to either the functionality of the machine learning model or to the process of generating a textual output to a query as a whole. Therefore, the claims are not integrated into a practical application, and remain not patent eligible.

Regarding the 102 rejections, Applicant argues that Hemington does not mention forming the plurality of clusters comprises executing one or more unsupervised machine learning algorithms on the input dataset, and that the LLM is incorrectly conflated with the recited one or more unsupervised machine learning algorithms. Applicant further asserts that a machine learning language model is recited later in the independent claim to generate textual output, but is not recited as being used to form the plurality of clusters. The Examiner respectfully disagrees with the assertions regarding Hemington. The BRI of an unsupervised machine learning algorithm includes an LLM, and Hemington teaches that an LLM may be trained in an unsupervised manner (see [0056]). Hemington further teaches that the query processing engine may implement various features of the generative AI model, where a model is trained to produce a specific output or result, and the API call may include an identification of the LLM to be accessed. The LLM is additionally taught as being used to filter the clusters at an earlier step, and an LLM is used again to generate text response messages (see Fig. 5, [0027],[0041],[0047],[0063],[0073],[0097-101]). These features taught by Hemington read on the recitation of the one or more unsupervised machine learning algorithms and the machine learning language model.
Hemington further teaches the LLM is used to perform an additional level of filtering to identify, for a cluster, queries of the cluster that are semantically dissimilar so that the initial clusters are grouped into new clusters, thus using the LLM to refine the clusters (see Figs 4 and 5, [0027],[0033],[0041],[0063],[0073],[0079-84],[0094-5]). These features taught by Hemington read on the BRI of “forming the plurality of clusters comprises executing one or more unsupervised machine learning algorithms on the input dataset to form the plurality of clusters”, as the LLM is implemented to refine the clusters before use in later parts of the process, and the refinement includes the use of information related to the queries. Hence, Applicant's arguments are not persuasive.

Claim Objections

Applicant is advised that should claim 20 be found allowable, claim 21 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-14, 16-18, and 20-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim(s) 1, 14, and 18, the limitation(s) of receiving a query, forming a plurality of clusters, selecting a plurality of data points, identifying one or more features, inputting the one or more features, and executing the machine learning language model, as drafted, are processes that, under broadest reasonable interpretation, cover performance of the limitations in the mind and/or with pen and paper but for the recitation of generic computer components. More specifically, the claims cover the mental process of a human reading a set of text information, grouping a separate set of data using the information and a specific set of learned rules for how to group the information, choosing specific pieces of data from a grouping, recognizing important pieces of information related to the pieces of data, determining how the pieces of information fit into a set of language rules understood by the human, and using the set of language rules to turn the pieces of information into a written text response that would be understood by another human. The machine learning language model and machine learning algorithm are interpreted as a set of rules for understanding and working with human language. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind and/or with pen and paper but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim(s) recite(s) an abstract idea.

This judicial exception is not integrated into a practical application because the recitation of a processing device and memory of claim 1, an apparatus, processing device, and memory of claim 14, and an article of manufacture, storage medium, and processing device of claim 18, reads on generalized computer components, based upon the claim interpretation wherein the structure is interpreted using pages 22-23 in the specification.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim(s) is/are directed to an abstract idea. The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generalized computer components to receive, form, select, identify, input, and execute, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim(s) is/are not patent eligible.

With respect to claim(s) 3-5, the claim(s) recite(s) specifics of how the clusters are formed, which reads on a human using a specific set of learned rules and information to group the data. No additional limitations are present.

With respect to claim(s) 6, the claim(s) recite(s) characteristics of the parameters, which reads on data having specific features. No additional limitations are present.

With respect to claim(s) 7 and 8, the claim(s) recite(s) ways of selecting the plurality of data points, which reads on a human using a specific method when choosing which data points to further use. No additional limitations are present.

With respect to claim(s) 9, 10, and 16, the claim(s) recite(s) (claims 9 and 16) executing a coefficient of variation analysis and (claim 10) ranking features based on a coefficient of variation, which reads on a human using a specific calculation to determine a value for pieces of data and using that information to further evaluate the pieces of data. No additional limitations are present.
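The coefficient-of-variation analysis recited in claims 9, 10, and 16 is, as the rejection characterizes it, a simple per-feature calculation (standard deviation divided by mean). A minimal NumPy sketch of such a ranking, with illustrative names and data that are not from the application:

```python
import numpy as np

def rank_features_by_cv(X: np.ndarray) -> np.ndarray:
    """Rank the feature columns of X by coefficient of variation (std / mean).

    A higher CV means the feature varies more relative to its own scale,
    which is the usual basis for keeping or discarding it.
    """
    means = X.mean(axis=0)
    stds = X.std(axis=0)
    cv = np.divide(stds, np.abs(means), out=np.zeros_like(stds),
                   where=means != 0)   # guard against zero-mean features
    return np.argsort(cv)[::-1]       # feature indices, highest CV first

# Toy data: feature 1 varies far more (relative to its mean) than feature 0.
X = np.array([[1.0, 10.0], [1.1, 50.0], [0.9, 90.0]])
ranked = rank_features_by_cv(X)
print(ranked)  # [1 0]
```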
With respect to claim(s) 11, 12, 17, 20, 21, and 22, the claim(s) recite(s) generating an input prompt that includes specific information, which reads on a human writing down what specific pieces of information they are going to use when implementing language understanding rules to write out a response text. No additional limitations are present.

With respect to claim(s) 13 and 23, the claim(s) recite(s) that the machine learning language model comprises a large language model, which reads on a specific set of learned rules for understanding human language and forming a response in a human language. No additional limitations are present.

These claims further do not remedy the failure to integrate the judicial exception into a practical application and further fail to include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 6, 7, 13, 14, 18, and 23 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hemington et al. (US PG Pub No. 2024/0320251), hereinafter Hemington.

Regarding claims 1, 14, and 18, Hemington teaches (claim 1) A method comprising (a method [0012]): (claim 14) An apparatus comprising (a computing system [0059-60]): (claim 14) a processing device operatively coupled to a memory and configured (the computing system that implements the process includes a memory and a processing unit that carries out the methods [0059-60]): (claim 18) An article of manufacture comprising a non-transitory processor-readable storage medium having stored therein program code of one or more software programs, wherein the program code when executed by at least one processing device causes said at least one processing device to perform the steps of (the computing system may include a memory that stores instructions for execution of the methods, systems, and modules, by the processor [0059-60],[0102]):

receiving a query comprising one or more parameters for cluster formation (an inbound message is received by the system, and a query is extracted from the message, i.e. receiving a query, where clustering is performed on the queries, i.e. cluster formation, where the clustering is performed on vector embeddings including a feature set related to the text of the query itself, domain-specific information that provides additional context, and parameters related to metadata, i.e. comprising one or more parameters for cluster formation [0033],[0070-2],[0090-4]);

forming a plurality of clusters from an input dataset, wherein the plurality of clusters comprise respective ones of a plurality of sub-datasets of the input dataset and are based at least in part on the one or more parameters (clustering is performed on queries received by the system, i.e. input dataset, to create clusters of similar queries, i.e. forming a plurality of clusters, where the queries are separated into different clusters, i.e. the plurality of clusters comprise respective ones of a plurality of sub-datasets of the input dataset, based on semantic similarity, as well as based on additional information such as domain specific information that identifies granular distinctions between queries, i.e. based at least in part on the one or more parameters [0033],[0078-84]), wherein forming the plurality of clusters comprises executing one or more unsupervised machine learning algorithms on the input dataset to form the plurality of clusters (the LLM may be a language model trained in an unsupervised manner, i.e. one or more unsupervised machine learning algorithms [0056], where the query processing engine may implement various features of the generative AI model, a model is trained to produce a specific output or result, and the API call may include an identification of the LLM to be accessed, and where the LLM may perform an additional level of filtering to identify whether queries are similar or dissimilar and should be formed into the same or different clusters, i.e. executing…on the input dataset to form the plurality of clusters Figs 4 and 5, [0027],[0033],[0041],[0047],[0063],[0073],[0079-84],[0094-5]);

selecting a plurality of data points from respective ones of the plurality of clusters (the system matches an incoming query to a particular one of the clusters, i.e. respective ones of the plurality of clusters, where data is associated with the matching cluster, such as matching queries and one or more associated solutions for each query, i.e. selecting a plurality of data points [0033],[0097-101]);

identifying one or more features from the plurality of data points (the selected solutions associated with the matching queries, i.e. from the plurality of data points, are summarized and embedded to match with previously embedded sections of a knowledge base, where the text associated with the closest matches are extracted, and the matching queries have embeddings comprising feature sets, i.e. identifying one or more features [0090-3],[0097-101]);

inputting the one or more features to a machine learning language model (the LLM, i.e. machine learning language model, is supplied with the query, the solution, and any additional resources, i.e. inputting the one or more features Fig. 5, [0027],[0041],[0047],[0063],[0073],[0097-101]); and

executing the machine learning language model to generate a textual output based at least in part on the one or more features (the LLM, i.e. machine learning language model, is supplied with the query, the solution, and any additional resources, i.e. based at least in part on the one or more features, and generates text response messages, i.e. executing…to generate a textual output Fig. 5, [0027],[0041],[0047],[0063],[0073],[0090-3],[0097-101]);

(claim 1) wherein the steps of the method are executed by at least one processing device operatively coupled to a memory (the computing system that implements the process includes a memory and a processing unit that carries out the methods [0059-60]).
Regarding claim 6, Hemington teaches claim 1, and further teaches the one or more parameters comprise one or more designated characteristics to be included in the input dataset (an inbound message is received by the system, and a query is extracted from the message, where clustering is performed on the queries, i.e. input dataset, where the clustering is performed on vector embeddings including a feature set related to the text of the query itself, domain-specific information that provides additional context such as domain-specific keywords, and parameters related to metadata, i.e. the one or more parameters comprise one or more designated characteristics to be included [0033],[0070-2],[0090-4]).

Regarding claim 7, Hemington teaches claim 1, and further teaches the selecting the plurality of data points from respective ones of the plurality of clusters is based at least in part on a threshold value for a number of representative data points from each of the respective ones of the plurality of clusters (the LLM may identify a matching query and associated solutions, and may be instructed to select, i.e. the selecting the plurality of data points, and summarize a defined number of the most common solutions, i.e. based at least in part on a threshold value for a number of representative data points, from the solutions associated with queries in the cluster, i.e. from each of the respective ones of the plurality of clusters).

Regarding claims 13 and 23, Hemington teaches claims 1 and 18, and further teaches the machine learning language model comprises a large language model (the LLM is supplied with the query, the solution, and any additional resources, i.e. inputting the one or more features [0047],[0097-9]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hemington, in view of Marutho et al. (“The Determination of Cluster Number at k-mean using Elbow Method and Purity Evaluation on Headline News”, iSemantic, 2018), as found in the IDS, hereinafter Marutho.

Regarding claim 3, Hemington teaches claim 1. While Hemington provides clustering algorithms, Hemington does not specifically teach a K-means algorithm for clustering, and thus does not teach the one or more unsupervised machine learning algorithms comprise a K-means algorithm. Marutho, however, teaches the one or more unsupervised machine learning algorithms comprise a K-means algorithm (k-means is a simple unsupervised learning algorithm (Sec. III.B.)). Hemington and Marutho are analogous art because they are from a similar field of endeavor in text clustering. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the use of clustering algorithms teachings of Hemington with the specific use of k-means clustering as taught by Marutho. It would have been obvious to combine the references to improve the purity of clusters by optimizing the number of clusters (Marutho (Abstract, Sec. I)).

Regarding claim 4, Hemington teaches claim 1.
While Hemington provides clustering algorithms and refining the clusters, Hemington does not specifically teach optimizing the number of clusters, and thus does not teach optimizing a number of the plurality of clusters by identifying an elbow point on one or more performance metrics for the plurality of clusters. Marutho, however, teaches optimizing a number of the plurality of clusters by identifying an elbow point on one or more performance metrics for the plurality of clusters (the change in the sum square error, i.e. one or more performance metrics, and the elbow method is used on the change in the SSE value, such as when the value drops drastically and forms a smaller angle, to find the optimal k value, i.e. optimizing a number of the plurality of clusters by identifying an elbow point (Sec. III.B-C.)). Hemington and Marutho are analogous art because they are from a similar field of endeavor in text clustering. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the use of clustering algorithms and refining the clusters teachings of Hemington with the specific use of the elbow method to optimize the number of clusters as taught by Marutho. It would have been obvious to combine the references to improve the purity of clusters by optimizing the number of clusters (Marutho (Abstract, Sec. I)).

Regarding claim 5, Hemington in view of Marutho teaches claim 4, and Marutho further teaches the one or more performance metrics comprise at least one of a silhouette score, a sum of squared distances, and a within-cluster sum of squares (the change in the sum square error, i.e. one or more performance metrics, and the elbow method is used on the change in the SSE value to find the optimal k value, where the SSE is the sum of the average Euclidean distance of each point against the centroid, i.e. a sum of squared distances (Sec. III.B-C.)).
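The elbow procedure attributed to Marutho above (track the drop in SSE as k grows, and pick the k where the drop flattens) can be sketched in plain NumPy. This is an illustrative simplification that takes the elbow as the k with the largest SSE drop, not Marutho's exact angle-based criterion, and all data below is synthetic:

```python
import numpy as np

def kmeans_sse(x, k, iters=25):
    """Plain 1-D k-means; returns the sum of squared errors for k clusters.
    Centers are seeded at quantiles so each run is deterministic."""
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return float(((x - centers[labels]) ** 2).sum())

# Two well-separated groups of values: SSE collapses going from k=1 to
# k=2 and barely moves after that, so the elbow lands at k=2.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])
sse = {k: kmeans_sse(x, k) for k in (1, 2, 3, 4)}
drops = {k: sse[k - 1] - sse[k] for k in (2, 3, 4)}
elbow = max(drops, key=drops.get)  # k with the largest SSE drop
print(elbow)  # 2
```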
Where the motivation to combine is the same as previously presented.

Claim(s) 8, 11, 12, 17, and 20-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hemington, in view of Bertsimas et al. (“Interpretable clustering: an optimization approach”, Machine Learning 110, 2021), as found in the IDS, hereinafter Bertsimas.

Regarding claim 8, Hemington teaches claim 1. While Hemington provides determining a similarity of vectors within a cluster, Hemington does not specifically teach computing a silhouette score, and thus does not teach computing a silhouette score for respective ones of the plurality of data points. Bertsimas, however, teaches computing a silhouette score for respective ones of the plurality of data points (the silhouette metric is calculated for the distance from a point to other points in its cluster versus the distance from a point to points in a different cluster to determine whether a point has been assigned to the correct cluster (Sec. 2.2)). Where Hemington teaches that queries are matched to a cluster that has been refined such that the queries within the cluster represent the same query [0081-5]. Hemington and Bertsimas are analogous art because they are from a similar field of endeavor in effective clustering. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the determining a similarity of vectors within a cluster to refine the cluster teachings of Hemington with the use of a silhouette metric to determine if a point has been assigned to the correct cluster as taught by Bertsimas. It would have been obvious to combine the references to provide a method of evaluating the quality of cluster assignments (Bertsimas Sec. 2.2).
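The silhouette metric summarized in the claim-8 rejection (distance to a point's own cluster versus distance to other clusters) has a standard concrete form. A small NumPy sketch, assuming every cluster has at least two points and using toy data:

```python
import numpy as np

def silhouette_scores(X, labels):
    """Per-point silhouette score: s = (b - a) / max(a, b), where a is the
    mean distance to the point's own cluster and b is the mean distance to
    the nearest other cluster. Values near 1 indicate a good assignment."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))  # pairwise distances
    n = len(X)
    scores = np.zeros(n)
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()                      # cohesion
        b = min(D[i, labels == c].mean()          # separation
                for c in set(labels) - {labels[i]})
        scores[i] = (b - a) / max(a, b)
    return scores

# Two tight, well-separated clusters: every point scores close to 1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
scores = silhouette_scores(X, labels)
print(scores.round(3))
```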
Regarding claims 11, 17, 20, and 21, Hemington teaches claims 1, 14, 18, and 18, and further teaches generating an input prompt for the machine learning language model, the input prompt comprising the one or more features and at least one of…value associated with the one or more features (the LLM, i.e. machine learning language model, is supplied with an input prompt that is generated by the system, i.e. generating an input prompt, where the input includes the query, the solution, and any additional resources, i.e. comprising…the one or more features, where the LLM may also be provided data on a matched cluster, and the clustering module outputs information regarding the clusters, such as cluster labels, clustering algorithms, distance metrics, linkage criterion, and cluster membership, i.e. value associated with the one or more features [0033],[0047],[0058],[0063],[0072],[0085],[0097-9]).

While Hemington provides data on a matched cluster to an LLM, Hemington does not specifically teach that one of the values includes a standard deviation value or a mean value, and thus does not teach the input prompt comprising the one or more features and at least one of a standard deviation value and a mean value associated with the one or more features. Bertsimas, however, teaches the input prompt comprising the one or more features and at least one of a standard deviation value and a mean value associated with the one or more features (the standard deviation and mean values, i.e. at least one of a standard deviation value and a mean value, for each of the variables in each of the clusters, i.e. one or more features, are determined (See Table 4)). Where Hemington teaches that information about the matched cluster is provided to the LLM [0072],[0085]. Hemington and Bertsimas are analogous art because they are from a similar field of endeavor in effective clustering.
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the providing data on a matched cluster to an LLM teachings of Hemington with the information for cluster variables including standard deviation and mean values as taught by Bertsimas. It would have been obvious to combine the references to provide a method of evaluating the quality of cluster assignments (Bertsimas Sec. 2.2).

Regarding claims 12 and 22, Hemington in view of Bertsimas teaches claims 11 and 21, and Bertsimas further teaches the input prompt further comprises at least one of a cluster density value and cluster compactness value (the clustering algorithm can take into account a specific intra-cluster density, i.e. cluster density value (Sec. 1.1)). Where Hemington teaches that information about the matched cluster is provided to the LLM, i.e. input prompt further comprises [0072],[0085]. And where the motivation to combine is the same as previously presented.

Claim(s) 9, 10, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hemington, in view of Fong et al. (“A Novel Feature Selection by Clustering Coefficients of Variations”, IEEE, 2014), hereinafter Fong.

Regarding claims 9 and 16, Hemington teaches claims 1 and 14. While Hemington provides identifying characteristics of the data to determine the characteristics to use, Hemington does not specifically teach the use of a coefficient of variation analysis, and thus does not teach executing a coefficient of variation analysis on a base set of features derived from the plurality of data points. Fong, however, teaches executing a coefficient of variation analysis on a base set of features derived from the plurality of data points (the coefficients of variation are calculated for each feature, where each vector is a set of feature values, i.e. executing a coefficient of variation analysis on a base set of features derived from the plurality of data points, where features are identified as good or bad, i.e. identifying of the one or more features (Sec. III. Intro para. and III.A.)). Where Hemington teaches that the query and solution information sent to the LLM is based on closest matches based on a comparison of the embeddings [0097-101]. Hemington and Fong are analogous art because they are from a similar field of endeavor in improving clustering of data. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the identifying characteristics of the data to determine the characteristics to use teachings of Hemington with using coefficients of variation to determine whether or not a feature is useful for classification as taught by Fong. It would have been obvious to combine the references to outperform other feature selection techniques in averaged performance and speed (Abstract).

Regarding claim 10, Hemington in view of Fong teaches claim 9, and Fong further teaches ranking respective ones of the features from the base set based at least in part on a coefficient of variation of the respective ones of the features (the coefficients of variation are calculated for each feature, where each vector is a set of feature values, i.e. coefficient of variation of the respective ones of the features, where features are ranked and identified as good or bad based on the variation values, i.e. ranking respective ones of the features from the base set based at least in part on (Sec. III. Intro para. and III.A.)). Where the motivation to combine is the same as previously presented.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICOLE A K SCHMIEDER whose telephone number is (571)270-1474. The examiner can normally be reached 8:00 - 5:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICOLE A K SCHMIEDER/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Jul 26, 2023
Application Filed
Sep 05, 2025
Non-Final Rejection — §101, §102, §103
Dec 09, 2025
Response Filed
Feb 10, 2026
Final Rejection — §101, §102, §103
Apr 13, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572751: ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567408: MULTI-MODAL SMART AUDIO DEVICE SYSTEM ATTENTIVENESS EXPRESSION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554930: TRANSFORMER-BASED TEXT ENCODER FOR PASSAGE RETRIEVAL (granted Feb 17, 2026; 2y 5m to grant)
Patent 12542131: SYSTEM AND METHOD FOR COMMUNICATING WITH A USER WITH SPEECH PROCESSING (granted Feb 03, 2026; 2y 5m to grant)
Patent 12531071: PACKET LOSS CONCEALMENT METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+34.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 167 resolved cases by this examiner. Grant probability derived from career allow rate.
