Prosecution Insights
Last updated: April 19, 2026
Application No. 18/758,291

TEXT CLASSIFICATION WITH WEIGHTED EMBEDDINGS

Status: Non-Final OA (§103)
Filed: Jun 28, 2024
Examiner: MUELLER, PAUL JOSEPH
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Intuit Inc.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (97 granted / 128 resolved; +13.8% vs Tech Center average, above average)
Interview Lift: +34.6% (strong), measured on resolved cases with an interview
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 153 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 128 resolved cases.

Office Action

§103
DETAILED ACTION

Introduction

This office action is in response to Applicant’s submission filed on June 28, 2024. Claims 1-20 are pending in the application. As such, claims 1-20 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on June 28, 2024. These drawings have been accepted and considered by the Examiner.

Claim Objections

Claims 8-12 and 20 are objected to because of the following informalities: Claim 8, line 6, reads “the phrase”. Examiner believes this to be a clerical error and that it is intended to read “the given phrase”, “the second phrase”, “the particular phrase”, or “the each phrase”. Claim 9, line 7, reads “the phrase”. Examiner believes this to be a clerical error and that it is intended to read “the given phrase”, “the second phrase”, “the particular phrase”, or “the each phrase”. Claims 10-12 depend from claim 9 and therefore inherit this objection. Claim 20, line 6, reads “the phrase”. Examiner believes this to be a clerical error and that it is intended to read “the given phrase”, “the second phrase”, “the particular phrase”, or “the each phrase”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7-9, 11-13, 16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh et al. (US Patent Pub. No. 20240143907 A1), hereinafter Ghosh, in view of Kabra et al. (US Patent Pub. No. 20210357779 A1), hereinafter Kabra, in view of Jain et al. (US Patent Pub. No. 20210279420 A1), hereinafter Jain.

Regarding claims 1, 9 and 13, Ghosh teaches a method of training a text classification model, a method of classifying text, and a system for training a text classification model (Ghosh in [0037] teaches using supervised learning to optimize a spatial correction model, and in [0031] teaches the model is a text classification model), comprising: [claim 13 only] one or more processors; and a memory comprising instructions that, when executed by the one or more processors, cause the system to: (Ghosh in [0096] teaches using processors with memories that contain instructions to execute code) generating, via an embedding model, a first embedding representation of a training text (Ghosh in [0034] teaches generating embeddings of the text in the text strings, using an embedding layer within the model, and in [0047] teaches using training data which may include a plurality of text strings); [claim 9 only] classifying the text based on the updated embedding representation of the given text using a text classification model (Ghosh in [0018] teaches utilizing a
machine learning model to automatically provide classifications of text strings), assigning, via a text classification model, a class to the training text based on the first embedding representation of the training text (Ghosh in [0018] teaches utilizing a machine learning model to automatically provide classifications of text strings); and training the text classification model through a supervised learning process involving [the updated embedding representation of the training text] (Ghosh in [0050] teaches training the model using a supervised learning process).

Ghosh does not teach, however Kabra teaches confirming that the class assigned to the training text is an incorrect class for the training text (Kabra in [0062] teaches that “wrongly classified” instances are those affirmatively misclassified, or instances (correctly or incorrectly classified) whose classifications were made with a sufficiently low (below a threshold) confidence level). Kabra is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh further in view of Kabra to allow for identifying “wrongly classified” instances, i.e., instances that are affirmatively misclassified or instances (correctly or incorrectly classified) whose classifications were made with a sufficiently low (below a threshold) confidence level. Motivation to do so would allow for advantages which include identifying characteristics of additional samples to seek out and provide for training the model so that the model can better understand ‘hyperplane’ boundaries between model classes (Kabra [0012]).
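The supervised learning process cited from Ghosh [0050]-[0051] (parameters iteratively adjusted until the predicted label matches the known label) is standard. A minimal sketch of such a loop, here using logistic regression over embeddings purely as an illustration; the function and variable names are hypothetical and the cited references do not specify this particular model:

```python
import numpy as np

def train_classifier(embeddings, labels, lr=0.5, epochs=200):
    """Minimal supervised loop: weights are iteratively adjusted to
    shrink the gap between predicted and known labels."""
    X = np.asarray(embeddings, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid over logits
        grad = pred - y                             # gradient of log loss w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy 2-D embeddings with known labels
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [1, 1, 0, 0]
w, b = train_classifier(X, y)
preds = (1.0 / (1.0 + np.exp(-(np.asarray(X) @ w + b))) > 0.5).astype(int)
print(preds)  # should recover the known labels
```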
Ghosh, as modified above, does not teach, however Jain teaches generating, via the embedding model, an embedding representation of a given phrase within the training text based on [confirming that the class assigned to the training text is an incorrect class for the training text] (Jain in [0046] teaches using phrase embeddings, and multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations), wherein the given phrase is selected based on an association between the given phrase and a correct class for the training text (Jain in [0046] teaches a rules-based match can be used to detect matches between words and phrases in a segment of text and the words and phrases of a concept, and embeddings can be used to detect semantic relatedness of the segment of text to the concept); and generating an updated embedding representation of the training text based on the first embedding representation of the training text and the embedding representation of the given phrase (Jain in [0046] teaches multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations).

Jain is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for multiple embeddings to be combined in many ways ranging from simple concatenation to complex non-linear transformations. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).
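The claimed combination of a text embedding with phrase embeddings can be sketched as a weighted sum, one of the combination strategies Jain [0046] contemplates (alongside concatenation and non-linear transforms). This is a minimal illustration with hypothetical names; a positive weight emphasizes a phrase associated with the correct class, while a negative weight (as in claims 2-3, discussed below via Medalion) de-emphasizes an irrelevant phrase:

```python
import numpy as np

def combine_embeddings(text_emb, phrase_embs, weights):
    """Produce an updated text embedding as a weighted sum of the
    original text embedding and per-phrase embeddings."""
    updated = np.array(text_emb, dtype=float)
    for emb, w in zip(phrase_embs, weights):
        updated += w * np.array(emb, dtype=float)
    return updated

# Toy 3-D example
text_emb = [1.0, 0.0, 0.0]
given_phrase = [0.0, 1.0, 0.0]   # associated with the correct class
second_phrase = [0.0, 0.0, 1.0]  # not relevant for assigning classes
updated = combine_embeddings(text_emb, [given_phrase, second_phrase], [0.5, -0.25])
print(updated)  # [1.0, 0.5, -0.25]
```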
Regarding claims 4, 11 and 16, Ghosh, as modified above, teaches the methods and system of Claims 1, 9 and 13. Ghosh further teaches wherein the supervised learning process comprises updating parameters of the text classification model (Ghosh in [0051] teaches parameters of the various layers of the spatial correction model are iteratively adjusted until the output label matches the known label) based on comparing the correct class for the training text to one or more classes output by the text classification model (Ghosh in [0051] teaches parameters of the various layers of the spatial correction model are iteratively adjusted until the output label matches the known label).

Ghosh, as modified above, does not teach, however Jain teaches based on the updated embedding representation of the training text (Jain in [0046] teaches multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for multiple embeddings to be combined in many ways ranging from simple concatenation to complex non-linear transformations. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).

Regarding claims 7 and 19, Ghosh, as modified above, teaches the method and system of Claims 1 and 13.
Ghosh further teaches wherein the trained text classification model is used to assign a given class to a given text (Ghosh in [0051] teaches training and utilizing a machine learning model to perform automated classifications of text strings based on spatial analysis).

Regarding claims 8 and 20, Ghosh, as modified above, teaches the method and system of Claims 7 and 19. Ghosh further teaches wherein assigning the given class to the given text comprises: generating an embedding representation of the given text (Ghosh in [0034] teaches generating embeddings of the text in the text strings, using an embedding layer within the model, and in [0047] teaches using training data which may include a plurality of text strings); wherein the set of phrases were selected based on an association between each phrase of the set of phrases (Ghosh in [0050] teaches the predictions (e.g., label predicted for text string) are compared to the known labels associated with the training inputs (e.g., known label is the known label associated with text string) to determine the accuracy of the spatial correction model); and classifying the given text [based on the revised embedding representation] using the trained text classification model (Ghosh in [0051] teaches training and utilizing a machine learning model to perform automated classifications of text strings based on spatial analysis).

Ghosh does not teach, however Kabra teaches an incorrect classification of a respective text that contains the phrase (Kabra in [0062] teaches that “wrongly classified” instances are those affirmatively misclassified, or instances (correctly or incorrectly classified) whose classifications were made with a sufficiently low (below a threshold) confidence level). Kabra is considered to be analogous to the claimed invention because it is in the same field of text classification.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Kabra to allow for identifying “wrongly classified” instances, i.e., instances that are affirmatively misclassified or instances (correctly or incorrectly classified) whose classifications were made with a sufficiently low (below a threshold) confidence level. Motivation to do so would allow for advantages which include identifying characteristics of additional samples to seek out and provide for training the model so that the model can better understand ‘hyperplane’ boundaries between model classes (Kabra [0012]).

Ghosh, as modified above, does not teach, however Jain teaches generating a revised embedding representation of the given text based on detecting a particular phrase of a set of phrases in the given text (Jain in [0046] teaches using phrase embeddings, and multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for using phrase embeddings. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).

Regarding claim 12, Ghosh, as modified above, teaches the method of Claim 9. Ghosh, as modified above, teaches the given phrase, the training text, and the first embedding representation.
Ghosh, as modified above, does not teach, however Jain teaches wherein the given phrase is selected based on applying a semantic similarity algorithm to detect the given phrase within the training text based on the first embedding representation (Jain in [0148] teaches a method of detecting semantic similarity to detect sections of speech, screen content, chat content, or document text that are potentially semantically related). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for detecting semantic similarity. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).

Claims 2-3, 5-6, 10, 14-15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh, in view of Kabra, in view of Jain, in view of Medalion et al. (US Patent Pub. No. 20210287261 A1), hereinafter Medalion.

Regarding claims 2, 10 and 14, Ghosh, as modified above, teaches the methods and system of Claims 1, 9 and 13. Ghosh, as modified above, does not teach, however Jain teaches wherein generating the updated embedding representation comprises combining the first embedding representation and the embedding representation of the given phrase (Jain in [0046] teaches multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for multiple embeddings to be combined in many ways ranging from simple concatenation to complex non-linear transformations. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).

Ghosh, as modified above, does not teach, however Medalion teaches wherein a respective weight is assigned to each of the first embedding representation and the embedding representation of the given phrase (Medalion in [0032] teaches the convolutional neural network may be configured to add or subtract the plurality of vectors with various weights to create a single vector). Medalion is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Medalion to allow for using processors configured to add or subtract the plurality of vectors with various weights to create a single vector. Motivation to do so would allow for the method to be used to analyze the invoices of a user base, extract purchases (e.g., products and services), cluster the offerings, and generate a representative offering description for each cluster, allowing for a more standardized and comprehensive database of product and service offerings among a user base (Medalion [0033]).

Regarding claims 3 and 15, Ghosh, as modified above, teaches the method and system of Claims 2 and 14.
Ghosh, as modified above, does not teach, however Jain teaches wherein: an embedding representation of the second phrase is generated (Jain in [0046] teaches using phrase embeddings, and multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations); and creating the updated embedding representation of the training text is further based on the embedding representation of the second phrase (Jain in [0046] teaches multiple embeddings may be combined in many ways ranging from simple concatenation to complex non-linear transformations). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for multiple embeddings to be combined in many ways ranging from simple concatenation to complex non-linear transformations. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).
Ghosh, as modified above, does not teach, however Medalion teaches a second phrase within the training text is selected based on the second phrase not being relevant for assigning classes (Medalion in [0055] teaches using encoders which may learn, based on the provided positive and negative samples, how to embed similar merchants to the same regions and similar vendors to the same region, and vice versa); and a negative weight is assigned to the embedding representation of the second phrase (Medalion in [0055] teaches using encoders which may learn, based on the provided positive and negative samples, how to embed similar merchants to the same regions and similar vendors to the same region, and vice versa, and in [0032] teaches the convolutional neural network may be configured to add or subtract the plurality of vectors with various weights to create a single vector [here the negative samples would correspond to negative weights]). Medalion is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Medalion to allow for using processors configured to add or subtract the plurality of vectors with various weights to create a single vector. Motivation to do so would allow for the method to be used to analyze the invoices of a user base, extract purchases (e.g., products and services), cluster the offerings, and generate a representative offering description for each cluster, allowing for a more standardized and comprehensive database of product and service offerings among a user base (Medalion [0033]).

Regarding claims 5 and 17, Ghosh, as modified above, teaches the method and system of Claims 1 and 13.
Ghosh, as modified above, does not teach, however Medalion teaches wherein the given phrase is selected from a set of phrases identified as being associated with the correct class (Medalion in [0055] teaches using encoders which may learn, based on the provided positive and negative samples, how to embed similar merchants to the same regions and similar vendors to the same region, and vice versa). Medalion is considered to be analogous to the claimed invention because it is in the same field of text classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Medalion to allow for using positive and negative samples. Motivation to do so would allow for the method to be used to analyze the invoices of a user base, extract purchases (e.g., products and services), cluster the offerings, and generate a representative offering description for each cluster, allowing for a more standardized and comprehensive database of product and service offerings among a user base (Medalion [0033]).

Regarding claims 6 and 18, Ghosh, as modified above, teaches the method and system of Claims 5 and 17. Ghosh, as modified above, teaches the given phrase, the training text, and the first embedding representation. Ghosh, as modified above, does not teach, however Jain teaches wherein the given phrase is selected based on applying a semantic similarity algorithm to detect the given phrase within the training text based on the first embedding representation (Jain in [0148] teaches a method of detecting semantic similarity to detect sections of speech, screen content, chat content, or document text that are potentially semantically related). Jain is considered to be analogous to the claimed invention because it is in the same field of text classification.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ghosh, as modified above, further in view of Jain to allow for detecting semantic similarity. Motivation to do so would allow for using embeddings to allow a system to generalize beyond patterns defined by the rules while the rule-based matching ensures that the system does not miss detection of those specific patterns defined in the rules (Jain [0046]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL J. MUELLER whose telephone number is (571) 272-1875. The examiner can normally be reached M-F 9:00am-5:00pm (Eastern). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PAUL MUELLER
Examiner, Art Unit 2657

/PAUL J. MUELLER/
Examiner, Art Unit 2657
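Claims 6 and 12, addressed above via Jain [0148], recite selecting the given phrase by applying a semantic similarity algorithm over the first embedding representation. The application does not specify the algorithm; a minimal cosine-similarity sketch with hypothetical names and toy data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_phrases(text_emb, candidates, threshold=0.5):
    """Return (phrase, score) pairs whose embeddings are semantically
    close to the text embedding, ranked by cosine similarity."""
    scored = [(phrase, cosine_similarity(text_emb, emb)) for phrase, emb in candidates]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])

# Toy example: one semantically related candidate, one unrelated
candidates = [("tax refund", [0.9, 0.1]), ("weather", [0.0, 1.0])]
print(select_phrases([1.0, 0.0], candidates))  # only "tax refund" passes the threshold
```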

Prosecution Timeline

Jun 28, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597419
NATURAL LANGUAGE PROCESSING APPARATUS AND NATURAL LANGUAGE PROCESSING METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596867
Detecting Computer-Generated Hallucinations using Progressive Scope-of-Analysis Enlargement
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596886
PERSONALIZED RESPONSES TO CHATBOT PROMPT BASED ON EMBEDDING SPACES BETWEEN USER AND SOCIETY
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579378
USING LLM FUNCTIONS TO EVALUATE AND COMPARE LARGE TEXT OUTPUTS OF LLMS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12562174
NOISE SUPPRESSION LOGIC IN ERROR CONCEALMENT UNIT USING NOISE-TO-SIGNAL RATIO
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+34.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
