Prosecution Insights
Last updated: April 19, 2026
Application No. 18/400,678

SYSTEM AND METHOD FOR GENERATING REVIEW SUMMARIES BASED ON CLASSIFIED TOPICS

Status: Final Rejection (§101, §103)
Filed: Dec 29, 2023
Examiner: WEAVER, ADAM MICHAEL
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Shopify Inc.
OA Round: 2 (Final)
Grant Probability: 92% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% — above average (11 granted / 12 resolved; +29.7% vs TC avg)
Interview Lift: +20.0% in resolved cases with interview (strong)
Avg Prosecution: 2y 9m
Currently Pending: 27
Total Applications: 39 (across all art units)

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
vs TC avg = estimated Tech Center average • Based on career data from 12 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed on 11/25/2025 has been entered. Claims 1-20 remain pending in this application.

Response to Arguments

Applicant’s arguments filed 11/25/2025 have been fully considered but are not persuasive.

With respect to the 35 U.S.C. 101 rejection, on pages 10-14, the Applicant asserts that the claims, as amended, do not recite an abstract idea, and that even if they did recite a purported abstract idea, the claims as a whole integrate the purported abstract idea into a practical application. They argue that the independent claims, as amended, are not something which can be practically performed in the human mind. They state that it is not practical or possible to generate semantic vectors for reviews using the human mind, or to cluster or classify the reviews into the topics using the generated semantic vectors using the human mind. The Applicant states that the addition of “automatically” also prevents the claims from being performed by the human mind. The Applicant also asserts that this addition describes specific benefits and technical improvements resulting from using a computer to automatically associate reviews with topics without human input and using a computer to automatically determine which reviews should be included in an input prompt based on the associated topics. They state that the claims, as amended, include additional elements that improve existing methods and systems for prompting generative language models by generating input prompts which take into account the inherent technical limitations of such generative language models.

The Examiner respectfully disagrees. It appears that the Applicant is merely restating what is in the claim language without specifically identifying which elements are involved and how each limitation amounts to significantly more.
The amended independent claim, taken as a whole, is simply the organization of reviews into topics for input into a generative language model. This can easily be performed by a human with a pen and paper, as a human would be able to create associations of reviews with topics by writing out vectors for the reviews, grouping these vectors together, and then selecting specific reviews and instructions to input into a generative language model. The usage and application of a generative language model in this case is purely the recitation of generic computer components. Automatically creating groupings of reviews and providing instructions to a chosen generative language model in order to abide by the “input prompt limit” of that chosen generative language model does not display an improvement to the technical field of prompting language models. The Applicant has not provided any reasoning or evidence as to why the noted individual limitations are not mental activities. The Examiner has considered all of the limitations as noted by the Applicant as part of the abstract idea as mental activities. The Examiner also notes in the rejection below that the claims recite only a few additional limitations of “for a generative language model”, “into the generative language model”, and “as generated by the generative language model”. These elements, as stated below, are general purpose computing elements. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Hence, the Applicant’s arguments are not persuasive.

With respect to the 35 U.S.C. 103 rejection, on pages 14-17, of claims 1-9, 12-18, and 20 under Khan et al. (US Patent Application Publication No. 2022/0415203), hereinafter referred to as Khan, in view of Chatterjee et al. (US Patent Application Publication No. 2020/0285662), hereinafter referred to as Chatterjee, claim 10 under Khan, in view of Chatterjee, and further in view of Wang et al. 
(US Patent Application Publication No. 2020/0175052), and claims 11 and 19 under Khan, in view of Chatterjee, and further in view of Privault et al. (US Patent Application Publication No. 2010/0312725), hereinafter referred to as Privault, the Applicant asserts that Chatterjee fails to rectify Khan’s failure to disclose the association of reviews with topics or obtaining a summary review from a generative language model. They assert that Chatterjee “does not select reviews based on the determined sentiment nor generate a summary review based on such selected reviews”, and therefore does not disclose the amended independent claim’s limitations. They also assert that neither Khan nor Chatterjee discloses that selected reviews are associated with at least one topic of the topics or none of the topics.

In response to the argument that Chatterjee fails to rectify Khan’s failure to disclose the association of reviews with topics or obtaining a summary review from a generative language model, Chatterjee para [0007] states “Thereafter, the one or more words are classified into one or more categories based on respective sentiment”. This directly states that the reviews are being classified, i.e. associated, based upon sentiment, i.e. a topic. Chatterjee Fig. 3 reference character 306 states “Generate a summary in natural language for each of the one or more categories based on the classified one or more words”, which directly states that a summary review is being generated for the categories. The assertion that Chatterjee does not select reviews based on the determined sentiment nor generate a summary review based on such selected reviews is also incorrect. Khan Fig. 3 shows clustering (grouping) assessment items and then using those to generate an input to a natural language generator, i.e. the selected items for input into the generative language model are the clustered items. Chatterjee above shows teaching of the generation of a summary review (Chatterjee Fig. 
3 reference character 306). It would have been obvious to use a combination of the aforementioned references to teach selecting reviews based on the topics and generating a summary review based on the selected reviews.

In response to the argument that neither Khan nor Chatterjee discloses that selected reviews are associated with at least one topic of the topics or none of the topics, Khan Fig. 3 shows clustering (grouping) assessment items and then using those to generate an input to a natural language generator, which directly shows that the selected items are associated with at least one topic of the topics. Privault para [0068], as shown in the rejection below, states "A threshold level of compatibility with the model is established and documents which do not meet the threshold level of compatibility are classed as outliers." This teaches that the selected items can be outliers, i.e. associated with none of the topics. This inclusion would have been obvious in light of the use of clustering and classification, as it is quite common that there will be inputs that do not align with any specific grouping, class, or cluster. Classifying or clustering data points as outliers in a classification or clustering model helps to improve data quality and model robustness by identifying possible errors within the data itself and improving the model’s diversity and generalization. This, alongside Khan’s and Chatterjee’s teachings, would have been an obvious motivation for combination to teach the amended limitations. Hence, the Applicant’s arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-20 are rejected under 35 U.S.C. 
101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims 1, 13, and 20 recite “automatically associating reviews with topics”, “generating semantic vectors for the reviews”, “clustering or classifying the reviews into the topics”, “automatically generating an input prompt”, “inputting the input prompt”, and “obtaining, from the generative language model, the summary review”. These limitations, as drafted, are a process that, under a broadest reasonable interpretation, covers the abstract idea of “mental processes” because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2). That is, other than reciting “for a generative language model”, “into the generative language model”, and “as generated by the generative language model”, nothing in the claimed elements precludes the steps from being practically performed by a person grouping reviews into categories, transcribing an input prompt concerning these categorized reviews onto a piece of paper, and inputting it into a generative language model.

This judicial exception is not integrated into a practical application because the additional elements “for a generative language model”, “into the generative language model”, and “as generated by the generative language model” are generic computer components and are recited at a high level of generality. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two). 
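As context for the claimed steps quoted above (“generating semantic vectors for the reviews” and “clustering or classifying the reviews into the topics”), a minimal Python sketch of such a pipeline might look as follows. The bag-of-words vectors, cosine similarity, and seed-phrase topics are illustrative simplifications introduced here, not the applicant’s actual implementation:

```python
from collections import Counter
from math import sqrt

def semantic_vector(text, vocab):
    # Toy bag-of-words stand-in for a learned embedding (a simplification)
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_by_topic(reviews, topic_seeds):
    # Build a shared vocabulary, then assign each review to the seed
    # topic whose vector it is most similar to
    texts = list(topic_seeds.values()) + list(reviews)
    vocab = sorted({w for t in texts for w in t.lower().split()})
    seed_vecs = {t: semantic_vector(s, vocab) for t, s in topic_seeds.items()}
    assignments = {t: [] for t in topic_seeds}
    for review in reviews:
        vec = semantic_vector(review, vocab)
        best = max(seed_vecs, key=lambda t: cosine(vec, seed_vecs[t]))
        assignments[best].append(review)
    return assignments
```

A production system would substitute a trained embedding model and a real clustering process, which is the kind of detail the §101 dispute above turns on.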
Claims 1, 13, and 20 do not include any additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “for a generative language model”, “into the generative language model”, and “as generated by the generative language model” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (Step 2B).

Dependent claims 2-12 and 14-19 are directed to further limitations of associating reviews with their respective topics, as well as adding additional information into the prompts during generation. These limitations are also related to the abstract idea of “mental processes”. That is, nothing in the claimed elements precludes the steps from being practically performed by a person grouping reviews into categories, transcribing an input prompt concerning these categorized reviews onto a piece of paper, and inputting it into a generative language model. No additional elements are present. The added limitation of “a review classifier” is not recited with sufficient specificity to provide any details about how the review classifier classifies reviews. Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9 and 11-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khan et al. (US Patent Application Publication No. 2022/0415203), hereinafter referred to as Khan, in view of Chatterjee et al. (US Patent Application Publication No. 2020/0285662), hereinafter referred to as Chatterjee, and further in view of Privault et al. (US Patent Application Publication No. 2010/0312725), hereinafter referred to as Privault.

Regarding claim 1, Khan discloses generating semantic vectors for the reviews (Khan Fig. 3 reference character 128); and clustering or classifying the reviews into the topics based on the semantic vectors for the reviews (Khan Fig. 3 reference character 118 and "For example, instructions for clustering 118 may implement a classifier trained to place items into predetermined groups," Khan para [0049]); automatically generating an input prompt for a generative language model, the input prompt comprising: selected reviews, wherein the selected reviews are associated with at least one topic of the topics (Khan Fig. 
3 shows clustering (grouping) assessment items and then using those to generate an input to a natural language generator); instructions and context identifying the at least one topic ("The instructions for input generation 120 generally create conditioning input to transmit to a NLG 112 using text of the model assessment items and information generated by clustering 118 including, in some examples, groupings of model knowledge assessment items, centroids for the groupings or clusters identified through clustering 118, and numeric representations of the model knowledge assessment items," Khan para [0050]); and instructions instructing generation of a summary review of the selected reviews ("For example, NLG models may mimic conditioning input in both form and content to generate literary passages on given topics, computer code, summarize text, answer questions, etc.," Khan para [0020]); inputting the input prompt into the generative language model (Khan Fig. 3 shows inputting the generated input to a natural language generator). However, Khan does not disclose automatically associating reviews with topics by; or none of the topics; and obtaining, from the generative language model, the summary review as generated by the generative language model. Chatterjee teaches a method for generating summaries for reviews. Chatterjee teaches automatically associating reviews with topics by ("Thereafter, the one or more words are classified into one or more categories based on respective sentiment," Chatterjee para [0007] and Chatterjee Fig. 3 reference character 304); and obtaining, from the generative language model, the summary review as generated by the generative language model (Chatterjee Fig. 3 reference character 306 and "In an embodiment, Natural Language Processing (NLP) may be used for generating the summary in the natural language," Chatterjee para [0052]). 
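The claimed input prompt, as mapped above, combines selected reviews with instructions and context identifying the topic. A hedged sketch of assembling such a prompt under a model’s input prompt limit might look like this; the header wording, the character-based limit (a real system would count tokens), and the default limit value are all assumptions for illustration:

```python
def build_input_prompt(topic, selected_reviews, prompt_limit=500):
    # Instructions and context identifying the topic, then as many
    # selected reviews as fit under the generative model's input
    # prompt limit (character-based here for simplicity)
    header = (f"Topic: {topic}\n"
              f"Summarize the following customer reviews about this "
              f"topic into a single summary review.\n")
    body_lines = []
    used = len(header)
    for review in selected_reviews:
        line = f"- {review}\n"
        if used + len(line) > prompt_limit:
            break  # respect the model's input prompt limit
        body_lines.append(line)
        used += len(line)
    return header + "".join(body_lines)
```

The returned string would then be passed to whatever generative language model the system targets.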
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items by including Chatterjee’s method of generating summaries for reviews. Classifying reviews by topics, in this case sentiments, allows for better understanding of those reviews themselves, and using natural language processing to summarize reviews that are alike in topic or category allows for a more comprehensive, condensed understanding of the reviews themselves. Combining these would decrease the time and effort necessary to scrape all of the pertinent information from each individual review. Including both of these would have been obvious to one of ordinary skill in the art. Privault teaches a method for assisted document review. Privault teaches or none of the topics ("A threshold level of compatibility with the model is established and documents which do not meet the threshold level of compatibility are classed as outliers," Privault para [0068]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items and Chatterjee’s method of using a classifier to classify reviews by including Privault’s method of classifying outliers. Classifying data as outliers in a classification model helps to improve data quality and model robustness by identifying possible errors within the data itself and improving the model’s diversity and generalization. This inclusion would have been obvious to one of ordinary skill in the art. Regarding claim 2, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. 
Khan further discloses wherein clustering or classifying the reviews into topics comprises clustering, using one or more clustering processes, the reviews into topic clusters based on the semantic vectors for the reviews, wherein the topic clusters correspond to the topics (Khan Fig. 3 reference character 118).

Regarding claim 3, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses wherein clustering or classifying the reviews into the topics ("For example, instructions for clustering 118 may implement a classifier trained to place items into predetermined groups," Khan para [0049]).

Regarding claim 4, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 3. However, Khan fails to disclose further comprising training the review classifier by: creating a review training set of training pairs, the training pairs pairing training topic labels with training reviews, wherein the training topic labels correspond to the topics; and training the review classifier using the review training set. Chatterjee teaches further comprising training the review classifier by: creating a review training set of training pairs ("In an embodiment, the training dataset (205) may include a plurality of training text. The plurality of training text may comprise samples of user reviews," Chatterjee para [0027] and "In an embodiment, the word classification data (207) may comprise one or more categories of sentiments," Chatterjee para [0029]), the training pairs pairing training topic labels with training reviews, wherein the training topic labels correspond to the topics ("The training dataset (205) may further include training vectors. The training vectors are generated for the sample user reviews. 
The training vectors may indicate a context of words in the sample user reviews, semantic of the words in the sample user reviews, syntax similarity between words in the sample user reviews and a relationship between words in the sample user reviews," Chatterjee para [0027]); and training the review classifier using the review training set ("In an embodiment, the training dataset (205) may include a plurality of training text. The plurality of training text may comprise samples of user reviews," Chatterjee para [0027], classifier training data is inherently used for training the classifier). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items by including Chatterjee’s method of using a classifier to classify reviews. Classifying reviews by topics, in this case sentiments, allows for better understanding of those reviews themselves. Gathering larger amounts of review training data separated into multiple categories allows for a more efficient model down the line, as the model will be more adaptive from the experience from more training data. Separating the review data into these categories also allows to specialize classifier models to more niche topics. This inclusion would have been obvious to one of ordinary skill in the art. Regarding claim 5, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 4. Khan further discloses further comprising generating the training topic labels by: generating semantic vectors for the training reviews (Khan Fig. 
3 reference character 128); and clustering, using one or more clustering processes, the training reviews into training topic clusters based on the semantic vectors for the training reviews, wherein each of the training topic clusters corresponds to one of the training topic labels ("In any implementation of clustering 118, the items may be placed into subgroups based on semantic and lexical similarities identified based on the numeric representations of the model items," Khan para [0049]). Regarding claim 6, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses wherein the reviews are associated with the topics based on text content of the reviews ("In any implementation of clustering 118, the items may be placed into subgroups based on semantic and lexical similarities identified based on the numeric representations of the model items," Khan para [0049]). Regarding claim 7, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses wherein the at least one topic comprises one of the topics, wherein the selected reviews are the one of the topics ("At block 308, the NLG interface 110 generates conditioning input for a NLG model for each of the clusters of model knowledge assessment items," Khan para [0082]) and wherein the instructions and context identifying the at least one topic comprises instructions or context identifying the one of the topics ("The instructions for input generation 120 generally create conditioning input to transmit to a NLG 112 using text of the model assessment items and information generated by clustering 118 including, in some examples, groupings of model knowledge assessment items, centroids for the groupings or clusters identified through clustering 118, and numeric representations of the model knowledge assessment items," Khan para [0050]). 
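The claim-4/5 limitations discussed above amount to pairing training topic labels with training reviews and training a classifier on those pairs. A minimal sketch of that shape follows; the word-frequency “classifier” is a hypothetical stand-in for whatever learned model the claims actually cover:

```python
from collections import Counter, defaultdict

def make_training_pairs(topic_clusters):
    # Pair each training review with the topic label of its cluster,
    # producing the review training set of (label, review) pairs
    return [(label, review)
            for label, cluster in topic_clusters.items()
            for review in cluster]

def train_review_classifier(pairs):
    # Naive "training": each topic keeps counts of the words seen in
    # its training reviews (a simplification, not a real learner)
    model = defaultdict(Counter)
    for label, review in pairs:
        model[label].update(review.lower().split())
    return model

def predict_topic(model, review):
    # Score each topic by word overlap with the review
    words = review.lower().split()
    return max(model, key=lambda t: sum(model[t][w] for w in words))
```

In the claim-5 variant, the topic labels themselves would first be derived by clustering the training reviews rather than supplied by hand.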
Regarding claim 8, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 7. Khan further discloses wherein the selected reviews are selected from amongst the reviews further based on a classification score of the selected reviews relative to the one of the topics (Khan Fig. 7 reference character 412, assigning each item to a cluster based on a distance, i.e. a score). Regarding claim 9, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses and wherein the instructions and context identifying the at least one topic comprises instructions or context identifying the at least two of the topics ("The instructions for input generation 120 generally create conditioning input to transmit to a NLG 112 using text of the model assessment items and information generated by clustering 118 including, in some examples, groupings of model knowledge assessment items, centroids for the groupings or clusters identified through clustering 118, and numeric representations of the model knowledge assessment items," Khan para [0050]). However, Khan does not disclose wherein the at least one topic comprises at least two of the topics, wherein the selected reviews include reviews associated with the at least two of the topics. Chatterjee teaches wherein the at least one topic comprises at least two of the topics, wherein the selected reviews include reviews associated with the at least two of the topics ("At step (305), the classification module (213) may classify the one or more words into one or more categories," Chatterjee para [0045]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items by including Chatterjee’s method of using a classifier to classify reviews into more than one category. 
Classifying a text or a review into multiple classes will create a richer representation and allow the model to provide more efficient prediction sets when compared to having each item belong to only one topic or cluster. This would also allow them to be used as inputs to generate summaries on different items, which improves the model’s adaptability and diversity. This inclusion would have been obvious to one of ordinary skill in the art.

Regarding claim 11, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses and wherein the instructions and context identifying the at least one topic comprises instructions or context identifying the topics ("The instructions for input generation 120 generally create conditioning input to transmit to a NLG 112 using text of the model assessment items and information generated by clustering 118 including, in some examples, groupings of model knowledge assessment items, centroids for the groupings or clusters identified through clustering 118, and numeric representations of the model knowledge assessment items," Khan para [0050]). However, Khan does not disclose wherein the selected reviews include reviews associated with none of the topics. Privault teaches a method for assisted document review. Privault teaches wherein the selected reviews include reviews associated with none of the topics ("A threshold level of compatibility with the model is established and documents which do not meet the threshold level of compatibility are classed as outliers," Privault para [0068]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items and Chatterjee’s method of using a classifier to classify reviews by including Privault’s method of classifying outliers. 
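Privault’s outlier classing, as quoted above, thresholds each document’s compatibility with the model. A sketch of that idea applied to review vectors might look as follows; the cosine measure and the threshold value are illustrative assumptions, not Privault’s disclosed mechanics:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def assign_topic_or_outlier(review_vec, topic_centroids, threshold=0.5):
    # Return the best-matching topic, or None when the review's
    # similarity to every topic centroid falls below the threshold,
    # i.e. it is associated with none of the topics (an outlier)
    best_topic, best_sim = None, 0.0
    for topic, centroid in topic_centroids.items():
        sim = cosine(review_vec, centroid)
        if sim > best_sim:
            best_topic, best_sim = topic, sim
    return best_topic if best_sim >= threshold else None
```

A `None` result corresponds to a review “associated with none of the topics” in the amended claim language.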
Classifying data as outliers in a classification model helps to improve data quality and model robustness by identifying possible errors within the data itself and improving the model’s diversity and generalization. This inclusion would have been obvious to one of ordinary skill in the art. Regarding claim 12, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 1. Khan further discloses wherein generating the input prompt further comprises: determining sets of selected reviews of the reviews, wherein a set of selected reviews of the sets of selected reviews is associated with a corresponding topic of the topics (Khan Fig. 3 shows clustering (grouping) assessment items and then using those to generate an input to a natural language generator and "For example, NLG models may mimic conditioning input in both form and content to generate literary passages on given topics, computer code, summarize text, answer questions, etc.," Khan para [0020]); and generating input prompts (Khan Fig. 3 reference character 120), wherein one input prompt of the input prompts includes the set of selected reviews and instructions or context identifying the corresponding topic ("The instructions for input generation 120 generally create conditioning input to transmit to a NLG 112 using text of the model assessment items and information generated by clustering 118 including, in some examples, groupings of model knowledge assessment items, centroids for the groupings or clusters identified through clustering 118, and numeric representations of the model knowledge assessment items," Khan para [0050]). As to claim 13, system claim 13 and method claim 1 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to the method claim. 
As to claim 14, system claim 14 and method claim 2 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 15, system claim 15 and method claim 3 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 16, system claim 16 and method claim 4 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 17, system claim 17 and method claim 7 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 18, system claim 18 and method claim 9 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 19, system claim 19 and method claim 11 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the method claim. As to claim 20, computer-readable medium (CRM) claim 20 and method claim 1 are related as method and CRM of using same, with each claimed element’s function corresponding to the method step. 
Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the method claim. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khan, in view of Chatterjee, further in view of Privault, and further in view of Wang et al. (US Patent Application Publication No. 2020/0175052), hereinafter referred to as Wang. Regarding claim 10, Khan, in view of Chatterjee, and further in view of Privault, discloses all of the limitations of claim 9. However, Khan fails to disclose wherein the selected reviews are selected from amongst the reviews further based on at least one of an averaged classification score or a summed classification score of the reviews relative to the at least two of the topics. Wang teaches a method for the classification of electronic documents. Wang teaches wherein the selected reviews are selected from amongst the reviews further based on at least one of an averaged classification score or a summed classification score of the reviews relative to the at least two of the topics ("The topic-vector-comparison process 340 may be configured to compare two or more topic vectors 322 to generate a topic-vector similarity score 342…, score 342 may be scaled or averaged to include a single number," Wang para [0057]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Khan’s method of natural language generation of knowledge assessment items and Chatterjee’s method of using a classifier to classify reviews by including Wang’s method of using an averaged classification score for two or more categories. Using an averaged classification value of multiple classes will allow the model to provide more efficient prediction sets when compared to only using a single score. 
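The claim-10 limitation mapped to Wang above selects reviews by an averaged or summed classification score across two or more topics. A minimal sketch of that selection follows; the data layout, threshold, and `mode` switch are illustrative assumptions, not Wang’s disclosed interface:

```python
def select_by_combined_score(review_scores, topics, threshold=0.5, mode="avg"):
    # review_scores maps each review to its per-topic classification
    # scores; keep reviews whose averaged (or summed) score across the
    # given topics clears the threshold
    selected = []
    for review, scores in review_scores.items():
        vals = [scores.get(t, 0.0) for t in topics]
        combined = sum(vals) / len(vals) if mode == "avg" else sum(vals)
        if combined >= threshold:
            selected.append(review)
    return selected
```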
An averaged classification score would also allow a text or review to fit into multiple categories, allowing them to be used as input to generate summaries on different items, which improves the model’s adaptability and diversity. This inclusion would have been obvious to one of ordinary skill in the art.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER, whose telephone number is (571) 272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658
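The §103 rejection above turns on selecting reviews by an averaged or summed classification score across two or more topics. A minimal sketch of that selection criterion is below; the review records, topic names, score values, and the `top_k` cutoff are all hypothetical illustrations, not anything taken from the application or the cited Khan, Chatterjee, Privault, or Wang references.

```python
# Hypothetical sketch: select reviews by combining their per-topic
# classification scores, either averaged or summed, as discussed in
# the §103 rejection. All data below is invented for illustration.

def select_reviews(reviews, topics, mode="average", top_k=2):
    """Rank reviews by their combined classification score over the
    given topics and keep the top_k highest-ranked reviews."""
    def combined(review):
        scores = [review["scores"][t] for t in topics]
        total = sum(scores)
        return total / len(scores) if mode == "average" else total

    ranked = sorted(reviews, key=combined, reverse=True)
    return ranked[:top_k]

reviews = [
    {"id": "r1", "scores": {"shipping": 0.9, "quality": 0.2}},
    {"id": "r2", "scores": {"shipping": 0.6, "quality": 0.7}},
    {"id": "r3", "scores": {"shipping": 0.1, "quality": 0.3}},
]

picked = select_reviews(reviews, ["shipping", "quality"], mode="average")
```

Under this sketch, a review that scores moderately on both topics (r2) can outrank one that scores highly on only a single topic (r1), which is the multi-category fit the examiner's rationale describes.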

Prosecution Timeline

Dec 29, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §101, §103
Nov 25, 2025
Response Filed
Mar 06, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752
ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL
2y 5m to grant · Granted Mar 31, 2026
Patent 12585765
SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING
2y 5m to grant · Granted Mar 24, 2026
Patent 12579375
IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS
2y 5m to grant · Granted Mar 17, 2026
Patent 12562077
METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
92%
Grant Probability
99%
With Interview (+20.0%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
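The projections above pair the examiner's career allow rate (11 granted of 12 resolved, roughly 92%) with the observed +20.0% interview lift. One plausible way to reproduce the 99% "with interview" figure is to add the lift to the base rate and cap the result, since a grant is never certain; the capping rule here is an assumption, as the dashboard's actual model is not disclosed.

```python
# Hypothetical reconstruction of the "with interview" projection:
# base allow rate plus interview lift, capped below 100%. The cap
# value is an assumption; the dashboard's model is not disclosed.

def with_interview_probability(base_rate, interview_lift, cap=0.99):
    """Add the interview lift to the base grant probability,
    capping at `cap` because a grant is never guaranteed."""
    return min(base_rate + interview_lift, cap)

base = 11 / 12          # 11 granted of 12 resolved cases ≈ 92%
lift = 0.20             # +20.0% observed interview lift
projected = with_interview_probability(base, lift)  # capped at 0.99
```

With this examiner's high base rate, the lift saturates the cap; for a lower base rate (say 70%), the same rule would project 90% with an interview.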
