Prosecution Insights
Last updated: April 19, 2026
Application No. 18/650,290

METHOD, SYSTEM AND PROCESSOR FOR ENHANCING ROBUSTNESS OF SOURCE-CODE CLASSIFICATION MODEL

Non-Final OA (§103, §112)
Filed
Apr 30, 2024
Examiner
KANG, INSUN
Art Unit
2193
Tech Center
2100 — Computer Architecture & Software
Assignee
Huazhong University of Science and Technology
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (515 granted / 655 resolved; +23.6% vs TC avg)
Interview Lift: +40.2%, a strong lift, among resolved cases with an interview
Typical Timeline: 3y 5m average prosecution (23 applications currently pending)
Career History: 678 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 655 resolved cases

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to application papers dated 4/3/2024. Claims 1-20 are pending in the application. The information disclosure statement filed on 5/15/2024 has been considered.

Claim Objections

Claims 1-19 are objected to because of the following informalities: per claims 1-19, the double quotation marks used are not needed. Per claim 3, “a said” needs to be “said.” Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: a training set-expanding module for combining, a training set-expanding module for converting codes, a training set-expanding module for merging, a training set-expanding module for converting code texts, a model-training module for converting, a model-training module for randomly picking, a model-training module for pairing the samples, a model-training module for inputting, a model-training module for iteratively updating, and a model-training module for training, recited in claims 10-19.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 10-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The corresponding structure for the generic placeholder “module” is not described in the specification.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1 and 10 recite the limitation “the extracted invariant features.” There is insufficient antecedent basis for this limitation in the claim. Interpretation: iteratively updating the feature extractor to extract invariant features … training the extracted invariant features.

Claims 2 and 11 recite the limitation “the existing source-code classification model.” There is insufficient antecedent basis for this limitation in the claim. Interpretation: the source-code classification model. On line 5, it is not clear to which target characteristics the claim refers, because claim 1 also recites the target characteristics. Interpretation: the transformation target characteristics.

Claims 4 and 13 recite the limitation “the specially processed code images.” There is insufficient antecedent basis for this limitation in the claim. Interpretation: the code images.

Claims 5 and 14 recite the limitation “the pre-processed code images.” There is insufficient antecedent basis for this limitation in the claim. Interpretation: the code images.
The pre-processing is selected from an open list of alternatives; therefore, it is unclear what other alternatives are intended to be encompassed by the claim. MPEP 2173.05(h). It is recommended to use terms such as “the pre-processing is” or “the pre-processing consists of.”

Per claim 20, it is not clear if the computer program refers to the modules in claim 10, and how a processor can comprise software. For the examination, the claim is interpreted as: a processor executing the method of claim 1 via a computer program.

Per claims 10-19, the corresponding structures for the modules are not described in the specification; therefore, it is not clear to what structures the modules refer.

Per claims 3, 6-9, 12, and 15-20, these claims are rejected because they depend on claims 1 and 10, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (“Contrastive Code Representation Learning,” 2021, hereafter Jain) in view of Chang et al. (“Towards Robust Classification Model by Counterfactual and Invariant Data Generation,” 6/2021, hereafter Chang).

Per claim 1:
A method for enhancing robustness of a source-code classification model, based on invariant features, wherein the method at least includes steps of: (Jain, see at least page 4, right col., that frames contrastive learning as a classification task; page 18, right col., contrastive learning learns representations that are invariant to a wide class of automated compiler-based transformations … With a hybrid loss combining masked language modeling and contrastive learning, representations of variants of the same program once again cluster; Note that the model robustness is improved based on invariant representations);

combining non-robustness features to generate a plurality of different style templates, converting codes in an input code training set into new codes of different styles using a code conversion program, so as to obtain a converted-code training set composed of the new codes, merging the input code training set and the converted-code training set into an expanded training set, and converting code texts in the expanded training set into code images (Jain, see at least page 2, left col., ContraCode generates syntactically diverse but functionally equivalent programs using source-to-source compiler transformation techniques; page 4, left col., Intermediate programs are converted between AST and source form as needed for the compiler … the resulting diversity in programs … applying 20 random sequences of transformations … transformation to derive different tokenizations every batch, so pairs derived from the same original method will still differ; page 2, merge sort with variants … we apply semantics-preserving transformations to produce functionally equivalent yet textually distinct code sequences; abstract, We scalably generate these variants using an automated source-to-source compiler as a form of data augmentation; page 3, left col., We apply compiler transforms to unlabeled code to generate many variants with equivalent functionality; page 9, left col., Data augmentation artificially expands labeled training sets. For sequence-to-sequence summarization, we apply a variety of augmentations (LS, SW, VR, DCI); page 16, left col., The ContraCode summarization Transformer only needed to be pre-trained for 20K iterations, with substantially faster convergence than RoBERTa (240K iterations) … code summarization and type prediction trained their models on an inconsistent set of programming languages and datasets. In order to normalize the effect of datasets, we selected several diverse state-of-the-art baselines and reimplemented them on the JavaScript dataset -- Note that the diversity created by program transformations corresponds to different style baselines/templates (transformed variants using an automated source-to-source compiler as a form of data augmentation)).

Jain does not explicitly teach that the variant features are non-robust; however, such features are typically considered non-robust in adversarial machine learning because they only appear to correlate under specific conditions. Nonetheless, Chang teaches such non-robust features (Chang, see at least page 1, right col., address such spurious associations in the typical ML classification framework by incorporating human causal knowledge; page 2, various counterfactual and invariant data generations to augment training datasets which makes models more robust to spurious correlations
… combining our augmentations with saliency regularization can further improve performance … focus on causal features that provide better explanations, although we find strong salience on causal features only correlates weakly with good generalization … non-causal features (backgrounds) and labels; abstract, many approaches are known to be non-robust, often relying on spurious correlations to make predictions; Note that the spurious features are non-robustness features and those features are augmented (combined)).

It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang’s non-robust features with Jain’s contrastive code representation system, modifying Jain’s system to combine the non-robustness feature augmentation as taught by Chang, with a reasonable expectation of success, since they are analogous art from the same field of endeavor related to machine learning. Combining Chang’s functionality with that of Jain results in a system that incorporates non-robustness feature augmentation. The modification would be obvious because one having ordinary skill in the art would be motivated to make this combination to address spurious associations in the typical ML classification framework and to augment training datasets to improve performance (Chang, see at least page 1, right col., address such spurious associations in the typical ML classification framework by incorporating human causal knowledge; page 2, various counterfactual and invariant data generations to augment training datasets which makes models more robust to spurious correlations … combining our augmentations with saliency regularization can further improve performance … focus on causal features that provide better explanations, although we find strong salience on causal features only correlates weakly with good generalization … non-causal features (backgrounds) and labels; abstract, many approaches are known to be non-robust, often relying on spurious correlations to make predictions; Note that the spurious features are non-robustness features and those features are augmented (combined)).
Jain in view of Chang further teaches: converting the code images into vectors required by model training through data pre-processing (Jain, see at least page 15, left col., For the hybrid model pre-trained with both RoBERTa (MLM) and contrastive objectives; page 9, right col., Ablating pre-training augmentations; Note that converting raw data into a vector that a computer can process is required), randomly picking samples of an identical class from the expanded training set, pairing the samples into matched sample pairs, and inputting the matched sample pairs into a feature extractor, iteratively updating the feature extractor and the matched sample pairs by means of contrastive learning and extracting target characteristics, and training the extracted invariant features in a classifier, so as to produce the source-code classification model with enhanced robustness (Jain, see at least page 16, left col., The ContraCode summarization Transformer only needed to be pre-trained for 20K iterations, with substantially faster convergence than RoBERTa (240K iterations); page 17, left col., Figure 13: Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves for non-adversarial classifiers on the code clone detection task; page 18, right col., contrastive learning learns representations that are invariant to a wide class of automated compiler-based transformations … With a hybrid loss combining masked language modeling and contrastive learning, representations of variants of the same program once again cluster; page 16, left col., paths are extracted from each function’s AST as a precomputed dataset … train the models; page 1, right col., Contrastive pre-training with ContraCode learns a more robust representation of functionality; page 13, left col., Arguments can be renamed with random word sequences and identifiers can be replaced with short tokens to make the model robust to naming choices … We randomly sample (p = 0.9) lines from a method body; page 4, right col., for contrastive image representation learning. In our case, we learn a program encoder fq that maps a sequence of program tokens to a single, fixed-dimensional embedding. We organize programs into functionally similar positive pairs and dissimilar negative pairs. Generating two augmentations of the same GitHub program yields a positive pair … during contrastive pre-training … prevent the encoder fq from mapping all programs to the same; page 14, left col., Dissimilarity ranges from 0% for programs with the same sequence of tokens, to 100% for programs without any shared tokens. Note that whitespace transformations do not affect the metric because the tokenizer collapses repeated whitespace. For the positives, we estimate dissimilarity by sampling one pair per source program in the CodeSearchNet dataset -- Note that the use of contrastive learning and extracted invariant features enhances robustness in code classification via randomly paired samples).
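For orientation on the contrastive pairing step this mapping turns on, a minimal InfoNCE-style sketch in PyTorch follows. The encoder, batch shapes, and temperature are illustrative assumptions, not the application's or Jain's actual implementation.

# Minimal InfoNCE-style contrastive sketch (illustrative; not the
# application's or Jain's code). Two augmented "views" of the same
# program form a positive pair; other batch items serve as negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.07):
    """z_a, z_b: (batch, dim) embeddings of paired views of the same samples."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a hypothetical encoder maps 512-d token features to 128-d embeddings.
encoder = torch.nn.Sequential(torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128))
x_a, x_b = torch.randn(32, 512), torch.randn(32, 512)   # stand-ins for two code variants
loss = info_nce_loss(encoder(x_a), encoder(x_b))
loss.backward()   # gradients update the feature extractor, as in the claimed iteration

Minimizing this loss pulls embeddings of paired variants together, which is the sense in which the learned features are "invariant" across style transformations.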
Per claim 2: Jain in view of Chang further teaches: analyzing the existing source-code classification model and attack means that have been applied thereto, and summarizing transformation target characteristics and transformation modes generated by the attack means for attacking code samples, wherein the target characteristics receiving attacks are used as the non-robustness features for classification, and different combinations of the non-robustness features are picked to form the different style templates distinctive from each other (Jain, see at least page 1, right col., investigate adversarial attacks on code clone detection; page 2, right col., Adversarial attacks on code models; page 8, left col., 4.3 Extreme Code Summarization: The extreme code summarization task asks a model to predict the name of a method given its body … We create a JavaScript summarization dataset … A sequence-to-sequence model with an autoregressive decoder is trained to maximize log likelihood of the ground-truth name, a form of abstractive summarization; page 16, left col., The ContraCode summarization Transformer only needed to be pre-trained for 20K iterations, with substantially faster convergence than RoBERTa (240K iterations) … code summarization and type prediction trained their models on an inconsistent set of programming languages and datasets. In order to normalize the effect of datasets, we selected several diverse state-of-the-art baselines and reimplemented them on the JavaScript dataset; Note that diverse program transformation variants are picked for distinctive baselines).

Per claim 3: Jain in view of Chang further teaches: applying the code conversion program to the input code training set, according to code style templates performing directional transformation of the style templates on the codes in the input code training set, so as to generate the new codes semantically unchanged but changed in style, wherein each of the style templates is associated with a said converted-code training set, and the input code training set and the converted-code training set are merged into the expanded training set (Jain, see at least page 9, left col., Data augmentation artificially expands labeled training sets. For sequence-to-sequence summarization, we apply a variety of augmentations (LS, SW, VR, DCI). These all preserve the method name. For type inference, labels are aligned to input tokens, so they must be realigned after transformation. We only apply token-level transforms (LS, SW) as we can track labels; page 9, right col., Ablating pre-training augmentations … Semantics-preserving code compression passes that require code analysis are the most important -- Note that the semantics are preserved (e.g., identifier modifications preserve semantics)).

Per claim 4: Jain in view of Chang further teaches: using a text-image conversion tool to process the code texts of the expanded training set, and generating the specially processed code images from the input code texts (Jain, see at least page 4, we learn a program encoder … an augmentation of a different program; page 9, left col., Data augmentation artificially expands labeled training sets; Note that contrastive image representation learning is code/text to contrastive image transformation).
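As a concrete illustration of the semantics-preserving style transformation at issue in claim 3, here is a toy Python AST pass that renames identifiers while leaving behavior unchanged. It is hypothetical: not the claimed code conversion program, and not Jain's compiler pipeline.

# Toy semantics-preserving style transform (hypothetical; not the claimed
# code conversion program): rename identifiers in Python source while
# leaving behavior unchanged, loosely analogous to the compiler-based
# variant generation Jain describes.
import ast

SKIP = {"print", "len", "range"}  # leave common builtins untouched

class RenameIdentifiers(ast.NodeTransformer):
    def visit_Name(self, node):
        if node.id not in SKIP:
            node.id += "_v2"
        return node

    def visit_arg(self, node):
        node.arg += "_v2"  # keep parameter names consistent with their uses
        return node

src = "def add(a, b):\n    total = a + b\n    return total\n"
variant = ast.unparse(RenameIdentifiers().visit(ast.parse(src)))  # Python 3.9+
print(variant)  # same semantics, different surface style

Each distinct set of such rewrites would correspond to one "style template"; rendering the resulting variant texts to images would be a separate step, as claim 4 recites.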
Per claim 5: Jain in view of Chang further teaches: converting the pre-processed code images into the vectors usable in model training, wherein the pre-processing includes but is not limited to scaling, cutting and/or normalization (Jain, see at least page 6, right col., a scaled … edit distance between normalized and tokenized programs; page 16, right col., We normalize baseline parameter count by reducing the number of Transformer layers; page 4, left col., Stochastic augmentations in other modalities like random crops generate diverse outputs, but most of our compiler-based transformations are deterministic; Note that converting raw data into a vector that a computer can process is required).

Per claim 6: Jain in view of Chang further teaches: randomly picking the samples of the identical class and of different said training sets from the expanded training set composed of the input code training set and the converted-code training set, and pairing the samples (Jain, see at least page 16, left col., The ContraCode summarization Transformer only needed to be pre-trained for 20K iterations, with substantially faster convergence than RoBERTa (240K iterations); page 17, left col., Figure 13: Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves for non-adversarial classifiers on the code clone detection task; page 18, right col., contrastive learning learns representations that are invariant to a wide class of automated compiler-based transformations … With a hybrid loss combining masked language modeling and contrastive learning, representations of variants of the same program once again cluster; page 16, left col., paths are extracted from each function’s AST as a precomputed dataset … train the models; page 1, right col., Contrastive pre-training with ContraCode learns a more robust representation of functionality; page 13, left col., Arguments can be renamed with random word sequences and identifiers can be replaced with short tokens to make the model robust to naming choices … We randomly sample (p = 0.9) lines from a method body; page 4, right col., for contrastive image representation learning. In our case, we learn a program encoder fq that maps a sequence of program tokens to a single, fixed-dimensional embedding. We organize programs into functionally similar positive pairs and dissimilar negative pairs. Generating two augmentations of the same GitHub program yields a positive pair … during contrastive pre-training … prevent the encoder fq from mapping all programs to the same; page 14, left col., Dissimilarity ranges from 0% for programs with the same sequence of tokens, to 100% for programs without any shared tokens. Note that whitespace transformations do not affect the metric because the tokenizer collapses repeated whitespace. For the positives, we estimate dissimilarity by sampling one pair per source program in the CodeSearchNet dataset -- Note that the use of contrastive learning and extracted invariant features enhances robustness in code classification via randomly paired samples).
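On claim 5's pre-processing chain (scaling, cutting/cropping, and normalization before vectorization), a generic torchvision-style sketch follows; the sizes and normalization statistics are placeholders, not values from the specification.

# Generic image pre-processing sketch (placeholder sizes and statistics,
# not the specification's values): scale, crop, and normalize a code
# image, then flatten it into a vector usable in model training.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),                 # scaling
    transforms.CenterCrop(224),             # cutting
    transforms.ToTensor(),                  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

img = Image.new("RGB", (640, 480), "white")   # stand-in for a rendered code image
vec = preprocess(img).flatten()               # 3 * 224 * 224 = 150,528-dim vector
print(vec.shape)                              # torch.Size([150528])

Flattening is the simplest possible vectorization; in practice the 3x224x224 tensor would more likely feed a convolutional or transformer feature extractor directly.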
Per claim 7: Jain in view of Chang further teaches: dividing the model into two parts, namely the feature extractor and the classifier, inputting the randomly picked pairs of the samples of the identical class into the feature extractor, figuring out differences among the samples using a contrastive loss function, iteratively updating the feature extractor and the matched sample pairs, and replacing the randomly picked sample pairs with new sample pairs (Jain, see at least page 15, left col., extracting a global program representation. We aggregate a 1024-dimensional representation of the program by concatenating its four terminal hidden states (from two sequence processing directions and two stacked LSTM layers), then apply the same MLP architecture as before to extract a 128-dimensional representation; page 4, right col., Pre-training objective: Like He et al. (2019), contrastive learning as a classification task; pages 7-8, adversarially edit one program in each pair by applying the loss-maximizing code compression and identifier modification transformation among N samples from Algorithm 1; page 15, left col., We report metrics that treat code clone detection as a binary classification task given a pair of programs; page 6, right col., combining both the contrastive loss and MLM has the best performance; page 14, left col., Fig. 11 shows a histogram of token dissimilarity; page 13, left col., Note that arguments can be renamed with random word sequences and identifiers can be replaced with short tokens to make the model robust to naming choices), and performing training iteratively until training of the feature extractor reaches convergence (Jain, see at least page 4, 3.3 contrastive pre-training … during pre-training … The EMA update stabilizes the pre-computed key embeddings across training iterations; page 9, right col., pre-training converges faster with a smaller set of augmentations at the same batch size since the positives are syntactically more similar; page 17, right col., Using the global features for pre-training yields significantly improved performance … iterations of pre-training (not converged for the purposes of ablation); Note that the training is iterated until reaching convergence).

Per claim 8: Jain in view of Chang further teaches: inputting the latest sample pairs into the feature extractor, extracting the target characteristics, and inputting the target characteristics into the classifier for training, until training of the classifier reaches convergence (Jain, see at least page 4, 3.3 contrastive pre-training … during pre-training … The EMA update stabilizes the pre-computed key embeddings across training iterations; page 9, right col., pre-training converges faster with a smaller set of augmentations at the same batch size since the positives are syntactically more similar; page 17, right col., Using the global features for pre-training yields significantly improved performance … iterations of pre-training (not converged for the purposes of ablation); Note that the training is iterated until reaching convergence, with pairs of programs solving different problems used to construct an evaluation dataset).

Per claim 20, it is the processor version of claim 1, and is rejected for the same reasons set forth in connection with the rejection of claim 1 above.
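Claims 7 and 8 describe iterating each stage until convergence. A compact sketch of such a two-stage loop follows, with an assumed plateau-based stopping rule standing in for whatever convergence test the specification actually uses; shapes, optimizer, and data are illustrative only.

# Two-stage training loop sketch (illustrative assumptions throughout;
# not the application's procedure). Stage 1 updates the feature extractor
# contrastively on matched pairs; stage 2 trains the classifier on the
# extractor's features; each stage iterates until its loss plateaus.
import torch
import torch.nn.functional as F

extractor = torch.nn.Linear(512, 128)   # stand-in feature extractor
classifier = torch.nn.Linear(128, 10)   # stand-in classifier

def plateaued(history, tol=1e-4):
    return len(history) >= 2 and abs(history[-1] - history[-2]) < tol

# Stage 1: contrastive updates on paired same-class samples.
opt = torch.optim.Adam(extractor.parameters(), lr=1e-3)
history = []
for _ in range(1000):                   # iteration cap for the sketch
    x_a, x_b = torch.randn(32, 512), torch.randn(32, 512)  # stand-in pair batch
    z_a = F.normalize(extractor(x_a), dim=1)
    z_b = F.normalize(extractor(x_b), dim=1)
    loss = F.cross_entropy(z_a @ z_b.t() / 0.07, torch.arange(32))
    opt.zero_grad(); loss.backward(); opt.step()
    history.append(loss.item())
    if plateaued(history):
        break

# Stage 2: train the classifier on extracted features, extractor held fixed.
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
history = []
for _ in range(1000):
    x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))
    with torch.no_grad():
        feats = extractor(x)            # the "target characteristics"
    loss = F.cross_entropy(classifier(feats), y)
    opt.zero_grad(); loss.backward(); opt.step()
    history.append(loss.item())
    if plateaued(history):
        break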
Claims 9, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jain in view of Chang and Li et al. (“RoPGen: Towards Robust Code Authorship Attribution via Automatic Coding Style Transformation,” 2022, hereafter Li).

Per claims 9, 18 and 19: Jain in view of Chang teaches: wherein the loss function and the target characteristics are invariant high-robustness features (Jain, see at least page 4, right col., that frames contrastive learning as a classification task; page 18, right col., contrastive learning learns representations that are invariant to a wide class of automated compiler-based transformations … With a hybrid loss combining masked language modeling and contrastive learning, representations of variants of the same program once again cluster; page 16, right col., We pre-trained an encoder using RoBERTa’s masked language modeling loss on our augmented version of CodeSearchNet, the same data used to pre-train ContraCode; Note that the model robustness is improved based on invariant representations).

Jain and Chang do not explicitly teach a cross-entropy loss function. Li discloses a cross-entropy loss function (Li, see at least page 6, left col., We compute the full-fledged network’s loss using the standard L_std = l(N(θ, u), v) and loss function l (e.g., cross entropy)). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Li’s cross-entropy loss function with Chang’s non-robust features and Jain’s contrastive code representation system, modifying Jain’s system to incorporate the cross-entropy loss function as taught by Li, with a reasonable expectation of success, since they are analogous art from the same field of endeavor related to machine learning. Combining Li’s functionality with that of Jain and Chang results in a system that incorporates the cross-entropy loss function. The modification would be obvious because one having ordinary skill in the art would be motivated to make this combination to optimize classification models with faster and more stable convergence (Li, see at least page 6, left col., We compute the full-fledged network’s loss using the standard L_std = l(N(θ, u), v) and loss function l (e.g., cross entropy)).

Per claims 18 and 19, these are the system versions of claim 9 and are rejected for the same reasons set forth in connection with the rejection of claim 9.
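For orientation, the cross-entropy loss Li is cited for is the standard classification objective, l(p, y) = -log p_y averaged over a batch; a minimal illustration with toy values (not Li's implementation) follows.

# Standard cross-entropy classification loss (textbook formulation with
# toy values; not Li's code).
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)             # model outputs: 4 samples, 10 classes
labels = torch.tensor([3, 1, 0, 7])     # ground-truth class indices
loss = F.cross_entropy(logits, labels)  # mean of -log softmax(logits)[i, labels[i]]
print(loss.item())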
Examiner’s Note

The Examiner has pointed out particular references contained in the prior art of record within the body of this action for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. “Marvolo: Programmatic Data Augmentation for Practical ML-Driven Malware Detection” by Wong is related to data augmentation for malware detection; Peng et al., “Learning Invariant Representation via Contrastive Feature Alignment for Clutter Robust SAR Target Recognition,” is related to a mixed clutter variants generation strategy and a new inference branch; CN 115630358 is related to a malicious software classification; Guo et al. (“GRAPHCODEBERT: PRE-TRAINING CODE REPRESENTATIONS WITH DATA FLOW”) is related to GraphCodeBERT, based on Transformer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to INSUN KANG, whose telephone number is (571) 272-3724. The examiner can normally be reached M-TR 8-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/INSUN KANG/
Primary Examiner, Art Unit 2193

Prosecution Timeline

Apr 30, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596632
METHOD FOR TESTING A COMPUTER PROGRAM
2y 5m to grant • Granted Apr 07, 2026
Patent 12578981
GAME TRANSLATION METHOD, AND ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM THEREOF
2y 5m to grant • Granted Mar 17, 2026
Patent 12578945
INSTANT INSTALLATION OF APPS
2y 5m to grant • Granted Mar 17, 2026
Patent 12530211
SYSTEMS AND METHODS FOR DYNAMIC SERVER CONTROL BASED ON ESTIMATED SCRIPT COMPLEXITY
2y 5m to grant • Granted Jan 20, 2026
Patent 12498906
INLINE CONVERSATION WITH ARTIFICIAL INTELLIGENCE WITHIN CODE EDITOR USER INTERFACE
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+40.2%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month