Prosecution Insights
Last updated: April 19, 2026
Application No. 18/501,444

KEYWORD EXTRACTION TO GENERATE SUBJECT LINES

Status: Final Rejection (§101, §103)
Filed: Nov 03, 2023
Examiner: SONIFRANK, RICHA MISHRA
Art Unit: 2654
Tech Center: 2600 (Communications)
Assignee: Shopify Inc.
OA Round: 2 (Final)

Grant probability: 66% (Favorable)
Expected OA rounds: 3-4
Expected time to grant: 3y 3m
Grant probability with interview: 91%

Examiner Intelligence

Career allowance rate: 66% (250 granted / 379 resolved), +4.0% vs Tech Center average (above average)
Interview lift: +24.9% among resolved cases with an interview (a strong lift)
Typical timeline: 3y 3m average prosecution; 29 applications currently pending
Career history: 408 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)

Based on career data from 379 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1, 8, 11, 15, 18, and 20 are amended. Claims 1-20 are presented for examination.

Response to Arguments

Applicant's arguments filed on 1/20/2026 have been reviewed. The remarks follow.

35 U.S.C. §101

Applicant argues: “Claim 1 recites for example, tokenizing a body of text into numerical tokens and embedding the tokens into a feature vector, and obtaining an original list of keywords based on the body of text using a trained machine learning model that processes the feature vector. These operations inherently require computing devices to execute instructions, perform the tokenizing and embedding, and subsequently generate the prompt. Such limitations cannot reasonably be performed by a human mind, nor are they abstract ideas or mental processes. These limitations require a specific, structured sequence of machine operations.”

However, refer to the §101 rejection below. Tokenization is an additional element; however, it can be performed by any generic machine learning model and is well understood, routine, and conventional.

Step 2A, Prong Two

Applicant further argues: “Even if the claimed subject matter was deemed to recite an abstract idea, the claimed subject matter is integrated into a practical application. The claims recite a set of machine-executed operations that address a technical challenge associated with generating controlled, reliable subject lines using large language models while reducing computational cost and output variability.
The solution is not a mere application of a generic computer performing mental steps, but rather a structured, system-level implementation involving tokenizing a body of text into numerical tokens and embedding the tokens into a feature vector, obtaining an original list of keywords based on the body of text using a trained machine learning model that processes the feature vector, and generating a prompt to a large language model for generating a subject line, the prompt including a chosen list of keywords based on the original list of keywords, wherein the prompt does not include the body of text. Under the approach endorsed in Ex parte Desjardins (Appeal 2024-000567), simply rejecting claims for reciting ML or AI in the abstract is improper. In the decision, the Director's panel criticized the PTAB for failing to recognize how additional elements may integrate with the algorithmic portion to yield a practical application and emphasized the need to assess claims at a level of specificity rather than collapsing them into an abstract idea.”

However, Ex parte Desjardins involved a specific method of training a computer component. Here, the claims merely use a computer to perform an abstract idea, namely generating subject lines.

Applicant further argues: “By excluding the full body of text from the prompt and using keywords, the system requires less processing power and time as fewer tokens are necessary. The specification further explains that this approach helps to prevent hallucinations and drift in generated outputs by limiting the influence of irrelevant properties of the original body of text (see for example, specification, paragraph [0110]). These improvements reflect a technical solution to challenges inherent in large language model operation. The claimed subject matter therefore improves the functioning of the computer system by reducing computational resource consumption and enabling more predictable and controllable output generation. These improvements are achieved through the recited sequence of machine-implemented operations. Accordingly, the claimed subject matter recites a practical application of machine learning that improves computer functionality by restructuring large language model prompting to reduce computational load and improve output reliability. Applicant respectfully points the Examiner to Ex parte Desjardins (Appeal 2024-000567) which recites that "Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable - even if they are adequately described and nonobvious - because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality." Accordingly, any alleged abstract idea is integrated into a concrete technological implementation that improves how the computing system processes natural language inputs, manages computational resources during large language model inference, and controls output variability. The claimed subject matter therefore constitutes a practical application under Step 2A, Prong Two.”

However, the invention in Desjardins addressed a specific technical challenge in AI, "catastrophic forgetting," in which models lose previously learned information when trained on new tasks; that was a technical challenge of the computer itself. The invention in the current claims can be performed in the human mind using a generic computer.

Step 2B

Applicant further argues: “Applicant's claimed subject matter recites an inventive concept and is not a generic application of known ideas using conventional computer functions.
The claimed operations implement a non-conventional processing pipeline that transforms natural language text into a reduced, machine-generated semantic representation used to influence downstream large language model behavior. As described above, excluding the full body of text from the prompt reduces the number of tokens required and reduces processing power while also helping to prevent hallucinations and drift in generated outputs. This approach introduces abstraction to improve efficiency and output reliability. The inventive concept lies in this particular coordination of numerical tokenization, feature vector embedding, keyword extraction, and keyword-based prompt generation that excludes the original body of text. The claims do not merely recite a generic computer performing known functions but instead specify a structured sequence of machine-implemented operations that require numerical computation and trained model inference and cannot be performed in the human mind. Accordingly, Applicant's claimed subject matter recites significantly more than an abstract idea and provides a meaningful improvement to computer functionality.”

However, using the model for tokenization is well understood, routine, and conventional. Refer to the rejection below.

35 U.S.C. §103

Applicant argues: “Wu does not show, teach, or suggest tokenizing a body of text into numerical tokens and embedding the tokens into a feature vector, and obtain an original list of keywords based on the body of text using a trained machine learning model that processes the feature vector, as recited.”

However, Wu discloses a BERT model, and US Pat. No. 11,322,133, which is incorporated by reference, clearly discloses that a received body of text (whether for training or communication) is tokenized and that a machine learning model processes it as a feature vector (Abstract). Additionally, Wu teaches that a training subject (a body of text) is input to model 304, which uses a BERT model (described in US Pat. No. 11,322,133, incorporated by reference) to tokenize the text and process the feature vector to obtain results (in this case, labels).

Applicant argues: “Wu states that "the subject line generation system 106 performs an act 314 of using the named entity recognition model 304 to extract training subject line keywords from a set of training subject lines" (see Wu, paragraph 0067). In the Office Action, the Examiner has likened the act 314 to obtaining an original list of keywords based on a body of text using a trained machine learning model. Applicant respectfully disagrees. As described in Wu, the act 314 relates to applying the named entity recognition model 304 to training subject lines themselves to extract keywords, not to any corresponding body of text.”

However, the claim requires obtaining the keywords from the tokenized text, which is shown in Wu's Fig. 3: tokenize the text, use the feature vector to obtain labels, train the NER model, and use the trained model to extract keywords. The claim does not preclude that additional training step for gathering the keywords, nor does it limit what the input can be. Here the input is the set of subject lines, which is a body of text, and the end result is a summarization of the input.

Applicant further argues: “Although Wu describes generally that subject lines may be associated with or summarize a communication (e.g., an email), Wu does not disclose providing the body of text of that communication as input to the named entity recognition model 304 or extracting keywords from such a body of text. Rather, the keyword extraction operation in act 314 is performed solely on the training subject line text.”

However, under the broadest reasonable interpretation (BRI), Wu does teach this concept, as items 308 (which are a body of text, Para 0063) are input to the model.
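As context for the token-reduction rationale argued above (a keywords-only prompt versus including the full body of text), the effect can be illustrated with a simple count. This is an editor's illustrative sketch only, not part of the record: the whitespace splitting below is a stand-in for real subword tokenization, and the example text and keyword list are hypothetical.

```python
# Illustrative only: approximate "token" counts for a full-body prompt
# versus a keywords-only prompt, using whitespace splitting as a
# stand-in for real subword tokenization.

body = ("Thanks for shopping with us this spring. Our handmade ceramic "
        "mugs are back in stock, and the seasonal sale ends Friday.")
keywords = ["handmade", "ceramic", "sale"]  # hypothetical chosen keywords

full_prompt = f"Generate an email subject line for this text: {body}"
keyword_prompt = "Generate an email subject line using keywords: " + ", ".join(keywords)

full_tokens = len(full_prompt.split())        # prompt that includes the body
keyword_tokens = len(keyword_prompt.split())  # prompt that excludes the body
assert keyword_tokens < full_tokens  # fewer tokens when the body is excluded
```

The count is only directional, but it shows the shape of the argument: the keywords-only prompt carries a fraction of the tokens of the full-body prompt.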
Applicant argues: “Wu generally discusses bi-directional encoder representations from transformers that may be incorporated into the named entity recognition model 304. However, as described above, Wu does not disclose processing a body of text to extract keywords. Rather, Wu applies the named entity recognition model 304 to subject line text itself. Accordingly, Wu does not disclose tokenizing a body of text into numerical tokens, embedding the body of text into a feature vector, or obtaining an original list of keywords based on the body of text using a trained machine learning model that processes such a feature vector.”

Nevertheless, the named entity recognition model 304 includes one of the BERT models described in U.S. Pat. No. 11,322,133, filed on Jul. 21, 2020, entitled EXPRESSIVE TEXT-TO-SPEECH UTILIZING CONTEXTUAL WORD-LEVEL STYLE TOKENS, which describes this limitation as discussed above.

Applicant argues: “Bai fails to remedy the deficiencies of Wu as Bai also does not show, teach, or suggest tokenizing a body of text into numerical tokens and embedding the tokens into a feature vector, and obtain an original list of keywords based on the body of text using a trained machine learning model that processes the feature vector, as recited.”

The Examiner is not relying on Bai to teach this concept.

Applicant further argues: “Furthermore, Bai is directed to the generation of images based on text segments, which is fundamentally different from generating subject lines corresponding to communication subject lines. Generating images from text requires a system to create new content in a different modality, whereas generating an email subject line is a reductive, derivative transformation of existing text.
Even if Bai were assumed to remedy the deficiencies of Wu, which Applicant does not concede, one of ordinary skill in the art would not have been motivated to combine the teachings of Wu and Bai as they address different technical problems and are directed toward fundamentally different objectives. Wu focuses on generating and evaluating subject lines, whereas Bai is concerned with constructing prompts for image generation, and neither reference suggests adapting its techniques for the purpose or architecture of the other.”

Under Graham v. John Deere Co. (1966), the determination of obviousness requires factual inquiries including resolving the level of ordinary skill in the pertinent art. With respect to scope, Wu discloses obtaining keywords from subject lines and using them to summarize or generate subject lines with a generative model; Bai discloses the broader concept that the model could be an LLM. Regarding analogous art, Bai is reasonably pertinent to the problem addressed by Wu, as both references relate to generative models. With respect to motivation to combine, Wu already extracts keywords and generates subject lines (as summaries or additional generated subject lines), and Bai teaches an LLM that uses keywords to produce content. It is known to use keywords to prompt LLMs, as described in Bai, and the motivation would be to improve the reliability of the content-generation process and to improve generation efficiency, since only keywords are used (Para 0089, Bai). The results would be predictable regardless of how the keywords are derived (see KSR Int'l Co. v. Teleflex Inc. (2007)).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 recites a processing unit configured to execute instructions to cause the system to:
(a) tokenize a body of text into numerical tokens and embed the tokens into a feature vector;
(b) obtain an original list of keywords based on a body of text using a trained machine learning model;
(c) generate a prompt to a large language model (LLM) for generating a subject line, the prompt including a chosen list of keywords based on the original list of keywords, wherein the prompt does not include the body of text; and
(d) obtain, from the LLM responsive to the prompt, at least one generated subject line corresponding to the body of text.

Step (a) is an additional element performed by a generic computer component; it is insignificant extra-solution activity. Step (b) is a mental step, since a human can look at the body of text and determine which keywords define it. Step (c) can be performed in the human mind, since a person can create a prompt for the computer based on those keywords. Step (d) can be performed in the human mind, since based on the keywords one can deduce the title or subject line for an email.

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. As discussed above, the broadest reasonable interpretation of steps (b)-(d) is that those steps fall within the mental process groupings of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
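For orientation only, the sequence recited in steps (a)-(d) can be sketched in code. This is an editor's illustrative aside, not the applicant's implementation or anything in the record: the toy tokenizer, the stand-in "keyword model," and the mock LLM call below are all hypothetical.

```python
# Hypothetical sketch of the claimed pipeline, steps (a)-(d).
# Every function here is an illustrative stand-in, not the claimed system.

def tokenize_and_embed(body: str) -> list[float]:
    # (a) tokenize into numerical tokens, then "embed" into a feature vector
    tokens = [hash(w) % 50000 for w in body.lower().split()]  # toy numerical tokens
    return [t / 50000 for t in tokens]                        # toy embedding

def extract_keywords(body: str, feature_vector: list[float], k: int = 3) -> list[str]:
    # (b) stand-in for a trained ML model (feature_vector unused in this toy):
    # pick the k longest distinct words as "keywords"
    words = {w.strip(".,") for w in body.lower().split()}
    return sorted(words, key=len, reverse=True)[:k]

def build_prompt(keywords: list[str]) -> str:
    # (c) the prompt contains only the chosen keywords, not the body of text
    return "Write one email subject line using these keywords: " + ", ".join(keywords)

def generate_subject_line(prompt: str) -> str:
    # (d) stand-in for an LLM call responsive to the prompt
    return "Subject: " + prompt.split(": ")[1].title()

body = "Our spring sale on handmade ceramic mugs ends this Friday."
vec = tokenize_and_embed(body)
keywords = extract_keywords(body, vec)
prompt = build_prompt(keywords)
assert body not in prompt  # the prompt excludes the body of text
subject = generate_subject_line(prompt)
```

The sketch is only meant to make the step structure concrete; whether such a sequence is an abstract idea under Step 2A is exactly the legal question disputed above.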
Step (b) is a mental step, since a human can look at the body of text and determine which keywords define it. Step (c) can be performed in the human mind, since a person can create a prompt for the computer based on those keywords. Step (d) can be performed in the human mind, since based on the keywords one can deduce the title or subject line for an email. Under its broadest reasonable interpretation when read in light of the specification, the “obtaining” and “generating” encompass mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).

The tokenization limitation is extra-solution activity: it is a step required so that the computer can process the text. The claim recites the additional elements of “using a trained machine learning model that processes the feature vector” and prompting and using an LLM; however, these limitations provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f).
MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.

The claim also recites a processing unit, which is a computer component recited at a high level of generality. In limitations (b)-(d), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. At Step 2A, Prong Two, the additional element of tokenizing and using an LLM was found to represent no more than mere instructions to apply the judicial exception on a computer using generic computer components. The analysis under Step 2A, Prong Two is carried through to Step 2B. Further, the first additional element in step (a) was found to be insignificant extra-solution activity.
However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the re-evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). Here, the step of tokenizing, as discussed in the disclosure, is well understood (e.g., see US 11790227: "The tokenization process may, for example, be carried out using conventional automated, computer-based tokenization software known to those of ordinary skill in the art such as, for example, the spaCy tokenizer."). Therefore, this limitation remains insignificant extra-solution activity even upon reconsideration and does not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept (Step 2B: NO). The claim is not eligible.

Regarding claims 2-4, a human can determine keywords based on various rules.

Regarding claim 5, a human can give feedback to another human and suggest other keywords to decide the subject line.

Regarding claims 6 and 7, the concept of saving the subject line in memory and presenting it is a generic computer performing a computer function; hence, analysis analogous to claim 1 applies under Step 2A, Prong Two and Step 2B.

Regarding claims 8-10, a human user is able to look at the historic data, deduce the organization for which the email is meant, and summarize in their mind to determine a keyword that represents the email.

Regarding claims 11-19, analysis analogous to claims 1-9 is applicable.

Regarding claim 20, analysis analogous to claim 1 is applicable.
In addition, claim 20 recites a non-transitory computer-readable medium, which is a generic computer component; hence, analysis analogous to claim 1 applies under Step 2A, Prong Two and Step 2B.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 10-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2024/0143941) in view of Bai (US 2025/0095252).

Regarding claim 1, Wu teaches a system comprising: a processing unit configured to execute instructions to cause the system (Figs. 2, 3, and 7) to: tokenize a body of text into numerical tokens and embed the tokens into a feature vector (the named entity recognition model extracts keywords using the BERT model described in US Pat. No. 11,322,133, which is incorporated by reference and clearly discloses that a received body of text, whether for training or communication, is tokenized and processed by a machine learning model as a feature vector, Abstract; additionally, Wu teaches that a training subject, i.e., a body of text, is input to model 304, which uses the BERT model described in US Pat. No. 11,322,133, incorporated by reference, to tokenize the text and process the feature vector to obtain results, in this case labels; it is also well known in the art that BERT takes tokens as input and processes a feature vector); obtain an original list of keywords based on a body of text using a trained machine learning model that processes the feature vector (extract keyword list 314, using BERT, which processes a feature vector, Fig. 3); generate a prompt to a language model for generating a subject line, the prompt including a chosen list of keywords based on the original list of keywords (using a generative model to generate a subject line using keywords, Para 0068), wherein the prompt does not include the body of text (just using keywords and NER, Para 0068); and obtain, from the LLM responsive to the prompt, at least one generated subject line corresponding to the body of text (generating the subject line, Figs. 3 and 7).

Wu does not explicitly teach obtaining an original list of keywords based on a body of text using a trained machine learning model. Further, although Wu teaches a generative model, and it is well known in the art that LLMs can be generative models, Wu does not include the concept of generating a prompt for the LLM, the prompt including a chosen list of keywords based on the original list of keywords, wherein the prompt does not include the body of text. However, Bai teaches obtaining an original list of keywords based on a body of text using a trained machine learning model (extracting words based on text content, Para 0126) and generating a prompt for the LLM, the prompt including a chosen list of keywords based on the original list of keywords, wherein the prompt does not include the body of text (generating a prompt based on the extracted prompt words, Paras 0014-0016). It would have been obvious before the effective filing date, having the teachings of Wu, to further include the concept of Bai: Wu has a system that generates the subject line based on keywords, and Bai teaches obtaining the prompt word from the description of the email to determine the keyword; hence, it would have been obvious to use Bai's LLM to improve the reliability of the content-generation process and to improve generation efficiency, since only keywords are used (Para 0089, Bai).

Regarding claim 2, Bai, as applied to claim 1 above, teaches wherein the processing unit is further configured to provide the original list of keywords to a user device, wherein the user device is configured to present the original list of keywords as suggestions (prompt words are displayed, Paras 0167-0168, 0186-0188).

Regarding claim 3, Wu, as applied to claim 2 above, teaches wherein the processing unit is further configured to receive the chosen list of keywords from the user device (keyword received from the user, Fig. 2).

Regarding claim 4, Wu modified by Bai, as applied to claim 1 above, teaches wherein the chosen list of keywords is the original list of keywords or includes at least one of the keywords from the original list of keywords (keywords, Fig. 2, Wu; Fig. 4, Bai).

Regarding claim 5, Wu modified by Bai, as applied to claim 1 above, teaches wherein the processing unit is further configured to: provide the at least one generated subject line to a user device for presentation (Fig. 2, Subject Line); receive a revised list of keywords, based on the chosen list of keywords, from the user device (Fig. 2, the user can change the keyword); generate another prompt to the LLM for generating another subject line, the prompt including the revised list of keywords, wherein the prompt does not include the body of text; and obtain, from the LLM, another at least one generated subject line (the user can change the keyword and the updated subject will be generated, Fig. 2, Wu; "The prompt word s6 is displayed in an editable control, to facilitate receiving modification of the user on the prompt word, thereby adjusting the currently selected preview image in a targeted manner," Paras 0186, 0203-0204, Bai).

Regarding claim 6, Wu, as applied to claim 5 above, teaches wherein the processing unit is further configured to: save the at least one generated subject line in memory; and provide the at least one generated subject line and the other at least one generated subject line to the user device for presentation (Figs. 10-12, Wu).

Regarding claim 7, Wu, as applied to claim 5 above, teaches wherein the processing unit is further configured to: provide the chosen list of keywords with the at least one generated subject line to the user device for presentation (Fig. 12, Wu).

Regarding claim 10, Wu modified by Bai, as applied to claim 1 above, teaches wherein the trained machine learning model is a text summarizer that uses natural language processing (NLP) techniques (Para 0126, Bai).

Regarding claim 11, arguments analogous to claim 1 are applicable. In addition, Wu teaches a computer-implemented method (Para 0002).

Regarding claims 12-17, arguments analogous to claims 2-7, respectively, are applicable.

Regarding claim 20, arguments analogous to claim 1 are applicable. In addition, Wu teaches a non-transitory computer-readable medium storing instructions that, when executed by a processor of a system, cause the system to perform the functions as in claim 1 (Para 0002).

Claims 8 and 18 are rejected under 35 U.S.C.
103 as being unpatentable over Wu (US 2024/0143941) in view of Bai (US 2025/0095252), and further in view of Smith (US 2024/0273291).

Regarding claim 8, Wu modified by Bai, as applied to claim 1 above, teaches wherein the processing unit is further configured to provide the at least one generated subject line to a user device for presentation (generating the subject line, Figs. 2, 8, and 10, Wu). Wu modified by Bai does not explicitly teach providing feedback options for the at least one generated subject line to the user device for presentation, and receiving feedback from the user device. However, Smith teaches providing feedback options for the at least one generated subject line to the user device for presentation, and receiving feedback from the user device ("a new set of title prompts is generated based on feedback input from one or more users of a network. For example, input on the first set of document titles is received from a network, wherein the input is generated by at least one user of the network, a second set of title prompts is generated based on the received input; and the first generative language model is applied to the second set of title prompts," Figs. 8 and 18, Para 0299). It would have been obvious, having the teachings of Wu of generating multiple titles from which a user can choose, to further include the concept of Smith of generating a new title based on user feedback, to make the generation of titles more user friendly.

Regarding claim 18, arguments analogous to claim 8 are applicable.

Claims 9 and 19 are rejected under 35 U.S.C.
103 as being unpatentable over Wu (US 2024/0143941) in view of Bai (US 2025/0095252), and further in view of Agrawal (US 2024/0312087).

Regarding claim 9, Bai, as applied to claim 1 above, mentions the prompt including an age demographic (Paras 0127, 0229) but does not include title/subject-line prompts wherein the prompt further includes one or more of historical email data, business information, and user demographic information associated with a user. In the same field of endeavor, Agrawal teaches title/subject-line prompts wherein the prompt further includes one or more of historical email data, business information, and user demographic information associated with a user ("For instance, to generate a subject line for an email on a certain occasion, the following sentence prompt is provided to a generative model: 'generate one short, exciting email subject line from {brand} on {occasion}.' Each generative model of the content generation apparatus 110 is fine-tuned with multiple (e.g., 100) examples," Para 0036). It would have been obvious, having the teachings of Wu and Bai, to further include the concept of Agrawal to make the subject line more consistent with brands/organizations (Abstract, Agrawal).

Regarding claim 19, arguments analogous to claim 9 are applicable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Mohammad (US 2025/0139382); and Mishra (US 2025/0094711), which teaches obtaining an original list of keywords based on a body of text using a trained machine learning model (obtaining a keyword/prompt word for text segments, Paras 0147, 0154).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richa Sonifrank, whose telephone number is (571) 272-5357. The examiner can normally be reached M-T 7AM - 5:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Phan Hai, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Richa Sonifrank/
Primary Examiner, Art Unit 2654
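The templated prompt quoted from Agrawal above ("generate one short, exciting email subject line from {brand} on {occasion}") can be sketched in a few lines. This is purely illustrative of the template-filling technique the Office Action cites; the function name and example values are hypothetical and not drawn from any cited application.

```python
# Sentence template of the kind quoted from Agrawal (Para. 0036).
TEMPLATE = "generate one short, exciting email subject line from {brand} on {occasion}."

def build_subject_line_prompt(brand: str, occasion: str) -> str:
    """Fill the sentence template that would be passed to a generative model.
    Hypothetical helper for illustration only."""
    return TEMPLATE.format(brand=brand, occasion=occasion)

# Example (hypothetical values):
prompt = build_subject_line_prompt("Acme Outfitters", "Black Friday")
print(prompt)
```

The filled string would then be sent as the prompt to whatever generative model is in use; the cited reference describes fine-tuning each such model with multiple (e.g., 100) examples.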

Prosecution Timeline

Nov 03, 2023
Application Filed
Oct 24, 2025
Non-Final Rejection — §101, §103
Jan 20, 2026
Response Filed
Feb 20, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602552
Machine-Learning-Based OKR Generation
2y 5m to grant Granted Apr 14, 2026
Patent 12603085
ENTITY LEVEL DATA AUGMENTATION IN CHATBOTS FOR ROBUST NAMED ENTITY RECOGNITION
2y 5m to grant Granted Apr 14, 2026
Patent 12585883
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12585877
GROUPING AND LINKING FACTS FROM TEXT TO REMOVE AMBIGUITY USING KNOWLEDGE GRAPHS
2y 5m to grant Granted Mar 24, 2026
Patent 12579988
METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
91%
With Interview (+24.9%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.
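The headline figures above are internally consistent, and the apparent derivation can be checked with simple arithmetic. This is an assumption about how the dashboard computes them (career allow rate from granted/resolved counts, plus an additive interview lift in percentage points), not the vendor's actual model.

```python
# Figures taken from the dashboard above.
granted, resolved = 250, 379
interview_lift = 24.9  # percentage points, per "+24.9% Interview Lift"

# Career allow rate: 250 / 379 = 0.6596... -> displayed as 66%.
base_pct = round(granted / resolved * 100)

# Assumed additive lift: 66 + 24.9 = 90.9 -> displayed as 91%.
with_interview_pct = round(base_pct + interview_lift)

print(base_pct, with_interview_pct)
```

Both rounded values match the displayed "66% Grant Probability" and "91% With Interview", which supports (but does not prove) the additive-lift reading.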
