Prosecution Insights
Last updated: April 18, 2026
Application No. 18/384,322

ML USING N-GRAM INDUCED INPUT REPRESENTATION

Status: Non-Final OA (§103, §DP)
Filed: Oct 26, 2023
Examiner: NGUYEN, CHAU T
Art Unit: 2145
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (372 granted / 549 resolved; +12.8% vs TC avg, above average)
Interview Lift: +31.8% on resolved cases with interview (strong)
Typical Timeline: 4y 0m average prosecution; 31 applications currently pending
Career History: 580 total applications across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 549 resolved cases.
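The per-statute deltas above all point at a single baseline. A quick sanity check (a sketch with the table's numbers hard-coded, not data pulled from the dashboard) shows every row resolves to the same 40.0% Tech Center average:

```python
# Recover the implied Tech Center average per statute by subtracting
# each reported delta from the examiner's per-statute allow rate.
examiner_rate = {"101": 14.0, "103": 48.5, "102": 15.9, "112": 12.2}
delta_vs_tc   = {"101": -26.0, "103": 8.5, "102": -24.1, "112": -27.8}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute resolves to a 40.0% Tech Center average
```

This consistency suggests the dashboard compares all four statutes against one aggregate Tech Center estimate rather than statute-specific baselines.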

Office Action

Rejections: §103, §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The preliminary amendment filed on 10/31/2023 has been entered. Claims 2-21 are pending. Claim 1 is canceled without prejudice.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,836,438 B2, respectively. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of U.S. Patent No. 11,836,438 B2 teach every limitation of claims 2-21 of the instant application, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rangarajan Sridhar et al. (Rangarajan Sridhar), US Patent Application Publication No. US 2022/0093088 A1, and further in view of West et al. (West), US Patent Application Publication No. US 2021/0279577 A1.

As to independent claim 2, Rangarajan Sridhar discloses a system comprising: processing circuitry (paragraphs [0006], [0042]: a device includes one or more processors/processing circuitry); a memory coupled to the processing circuitry, the memory including a program stored thereon that, when executed by the processing circuitry, causes the processing circuitry to perform operations comprising (paragraph [0006]: the device includes a memory, and one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing operations/actions):

generating, based on a series of tokens that represent a string of words, a local string-dependent embedding of each token of the series of tokens (paragraph [0210]: process a text string to produce multiple candidate text representations, each candidate text representation being a sequence of words or tokens; see also the example of an input sentence tokenized/encoded as an n-gram; paragraph [0267]: the word contextualizer may implement a first natural language model, e.g., a word or token embedding model; paragraphs [0268]-[0269]: the word contextualizer receives the n-gram encoding of the sentence, and the first language model may be employed to embed each token included in the n-gram in one or more token or word embeddings (local string-dependent embedding of each token); paragraph [0270]: word embedder 816 generates a first token vector E1 (local string-dependent embedding));

generating, based on the series of tokens, a global string-dependent embedding of each token of the series of tokens (paragraph [0267]: outputs of the first language model (token vectors) are provided to a sentence embedder, which may implement a second natural language model (sentence embedder model) to generate a sentence vector (global string-dependent embedding));

combining the local string-dependent embedding and the global string-dependent embedding resulting in an induced embedding of each token of the series of tokens (paragraphs [0271]-[0272]: a linear combination of the three token vectors for each token may be taken to generate a fourth token vector for each of the tokens in the input sentence; the sentence embedder implements a second language model that receives as input a token vector for each token included in the sentence, and the sentence embedder may additionally receive the n-gram encoding of the sentence as input and generate the final sentence vector (local string-dependent embedding and global string-dependent embedding));

generating, based on the series of tokens, a relative position embedding that includes a vector representation of a position of each token in the series of tokens (paragraphs [0257], [0262]: as a consequence of the multitask training of the sentence embedding model, a sentence vector may be employed to determine meaningful features of the semantic context of the corresponding sentence; in particular, a sentence vector corresponding to a sentence may be employed to infer/determine the semantic context of the sentence via a locality/position of the sentence vector within the vector space);

combining the relative position embedding and the induced embedding resulting in a disentangled attention embedding for each token of the series of tokens (paragraph [0327]: a spatial relationship-to-semantic mapping may map a spatial relationship between the vector embeddings of two sentences (series of tokens) to a semantic relationship between the semantic contexts of the two sentences (series of tokens), and such a spatial relationship may include the absolute and/or relative positions of each of the sentence vectors in the vector space).

Rangarajan Sridhar, however, does not disclose executing a masked language model (MLM) based on the disentangled attention embedding resulting in a masked word prediction. In the same field of endeavor, West discloses that a transformer, encoder-decoder, attention-based neural net, LSTM-based recurrent neural network (RNN), convolutional neural net (CNN), or other machine learning model may provide prediction functionality; a common component of these models is that they create n-gram vectors that represent pieces or groups of the input/training data, and those vectors may be passed into the layers of the models, such as masked language modeling for word prediction (paragraph [0040]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of Rangarajan Sridhar to include executing a masked language model (MLM) based on the disentangled attention embedding resulting in a masked word prediction, as taught by West, for the purpose of predicting masked words.

As to dependent claim 3, Rangarajan Sridhar discloses wherein generating the local string-dependent embedding of each token includes generating a multi-word embedding of a window of tokens of the series of tokens (paragraphs [0268], [0269]).

As to dependent claim 4, Rangarajan Sridhar discloses wherein generating the local string-dependent embedding of each window of tokens includes using a deep neural network (DNN) (paragraph [0210]).
As to dependent claim 5, Rangarajan Sridhar discloses wherein generating the global string-dependent embedding of each token includes using a neural network (NN) transformer (paragraph [0210]).

As to dependent claim 6, Rangarajan Sridhar discloses wherein the operations further comprise generating a relative position embedding of each token, the relative position embedding including a vector of values indicating an influence of every token on every other token (paragraphs [0257], [0262]).

As to dependent claim 7, Rangarajan Sridhar discloses wherein the operations further comprise determining a mathematical combination of two or more of (i) a content-to-position dependent embedding, (ii) a position-to-content dependent embedding, or (iii) a content-to-content dependent embedding, and implementing the MLM based on the mathematical combination (paragraph [0210]).

As to dependent claim 8, Rangarajan Sridhar discloses wherein the operations further comprise determining a mathematical combination of (i) a content-to-position dependent embedding, (ii) a position-to-content dependent embedding, and (iii) a content-to-content dependent embedding, and implementing the MLM based on the mathematical combination (paragraph [0210]).

As to dependent claim 9, Rangarajan Sridhar discloses wherein (i) the content-to-position dependent embedding and the position-to-content dependent embedding are determined based on both the relative position embedding and the induced embedding of each token, and (ii) the content-to-content dependent embedding is determined based on the induced embedding of each token (paragraph [0210]).

As to dependent claim 11, Rangarajan Sridhar does not disclose, but West discloses, wherein the operations further comprise providing the masked word prediction to an application (West, paragraph [0040]).

Claims 12-19 are method claims that contain similar limitations to claims 2-9, respectively. Therefore, claims 12-19 are rejected under the same rationale.
Claim 20 is a machine-readable medium claim that contains similar limitations to claim 2. Therefore, claim 20 is rejected under the same rationale.

Allowable Subject Matter

Claims 10 and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication should be directed to CHAU T NGUYEN at telephone number (571) 272-4092. The examiner can normally be reached M-F from 8am to 5pm (PT). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at telephone number (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/CHAU T NGUYEN/
Primary Examiner, Art Unit 2145
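The pipeline recited in claim 2 (local per-token embeddings combined with a global string embedding into an "induced" embedding, plus a relative position embedding, feeding a disentangled attention score) can be sketched in code. This is an illustrative, hedged sketch only, not the application's or the references' implementation; all names, dimensions, and the DeBERTa-style three-term score are assumptions used to make the claim language concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 8                       # 6 tokens, 8-dim embeddings (arbitrary)

local = rng.normal(size=(seq_len, d))   # local string-dependent embedding, per token
global_ = rng.normal(size=(1, d))       # global string-dependent embedding, whole string
induced = local + global_               # "combining ... resulting in an induced embedding"

# Relative position embedding: one vector per relative offset in [-k, k].
k = seq_len - 1
rel_pos = rng.normal(size=(2 * k + 1, d))
rel_idx = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :] + k

Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Qc, Kc = induced @ Wq, induced @ Wk     # content (induced-embedding) projections
Qr, Kr = rel_pos @ Wq, rel_pos @ Wk     # relative-position projections

# The three terms named in dependent claims 7-9:
c2c = Qc @ Kc.T                                          # content-to-content
c2p = np.take_along_axis(Qc @ Kr.T, rel_idx, axis=1)     # content-to-position
p2c = np.take_along_axis(Kc @ Qr.T, rel_idx, axis=1).T   # position-to-content

scores = (c2c + c2p + p2c) / np.sqrt(3 * d)  # mathematical combination of the three
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
# An MLM head would then consume the attention output to predict masked tokens.
```

Note that c2c depends only on the induced embedding, while c2p and p2c depend on both the induced and relative-position embeddings, mirroring the structure dependent claim 9 recites.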

Prosecution Timeline

Oct 26, 2023: Application Filed
Apr 04, 2026: Non-Final Rejection, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596765: GENERATION AND USE OF CONTENT BRIEFS FOR NETWORK CONTENT AUTHORING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591795: METHOD FOR PROVIDING EXPLAINABLE ARTIFICIAL INTELLIGENCE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585722: IMAGE GENERATION SYSTEM, COMMUNICATION APPARATUS, METHODS OF OPERATING IMAGE GENERATION SYSTEM AND COMMUNICATION APPARATUS, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579356: MATHEMATICAL CALCULATIONS WITH NUMERICAL INDICATORS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12547825: WHITELISTING REDACTION SYSTEMS AND METHODS
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview (+31.8%): 99%
Median Time to Grant: 4y 0m
PTA Risk: Low

Based on 549 resolved cases by this examiner. Grant probability is derived from the career allow rate.
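The headline projections follow directly from the career figures reported above. A minimal sketch, assuming the interview lift is additive in percentage points (the page's own 99% figure is consistent with that reading):

```python
# Reproduce the headline projections from the reported career data.
granted, resolved = 372, 549
interview_lift_pts = 31.8  # assumed additive, in percentage points

grant_probability = round(100 * granted / resolved)      # 372/549 = 67.76% -> 68
with_interview = grant_probability + interview_lift_pts  # 68 + 31.8 = 99.8
```

The raw with-interview figure of 99.8 lands slightly above the displayed 99%, which suggests the dashboard truncates or caps the adjusted value rather than rounding it.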
