Prosecution Insights
Last updated: April 19, 2026
Application No. 18/221,742

METHOD FOR EMBEDDING DATA AND SYSTEM THEREOF

Status: Non-Final OA (§103)
Filed: Jul 13, 2023
Examiner: CASANOVA, JORGE A
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (664 granted / 783 resolved; +29.8% vs TC avg, above average)
Interview Lift: +20.0% among resolved cases with interview (strong)
Typical Timeline: 2y 8m average prosecution; 14 applications currently pending
Career History: 797 total applications across all art units

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center average is an estimate • Based on career data from 783 resolved cases
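Assuming each "vs TC avg" figure is a simple difference between the examiner's rate and the Tech Center baseline for that statute, the baseline behind the deltas can be recovered directly from the numbers above (the dictionaries below just restate them):

```python
# Examiner's statute-specific rates and the reported deltas vs the TC average.
rates  = {"101": 19.1, "103": 41.4, "102": 17.6, "112": 9.3}
deltas = {"101": -20.9, "103": +1.4, "102": -22.4, "112": -30.7}

# Implied Tech Center baseline per statute = examiner rate minus delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

All four statutes imply the same 40.0% figure, consistent with a single Tech Center average estimate being used for every statute.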

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-20 are presented for examination. This Office action is Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 07/13/2023 has been considered by the Examiner and made of record in the application file.

Allowable Subject Matter

Claims 6 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 7-13 are also objected to as depending from claim 6.

The following is a statement of reasons for the indication of allowable subject matter. The prior art of record does not teach or suggest the additional architectural limitations recited in claims 6 and 14. Claim 6 further requires: generating a transformed data sample from an original data sample; producing a second embedding representation via an auxiliary embedding model using the transformed data sample; performing a transformation-determination or transformation-detection task based on that second embedding representation to compute a second task loss; and updating an associated prompt encoder based on the second task loss, rather than updating the embedding model itself.

The applied prior art fails to disclose or suggest an auxiliary embedding model operating on transformed data in conjunction with prompt tuning of a frozen embedding model, and loss-driven updating of a prompt encoder using a transformation-detection objective within such an auxiliary learning framework. Accordingly, claim 6 defines a distinct auxiliary transformation-learning architecture not taught or suggested by the prior art.
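The auxiliary architecture recited in claim 6 can be sketched in toy form. Everything below is an illustrative assumption, not the applicant's actual implementation: a linear stand-in for the frozen auxiliary embedding model, a sign-flip as the transformation, and a single learnable scalar as the entire prompt encoder.

```python
import math

# Frozen auxiliary embedding model (toy linear map); its weights are never updated.
FROZEN_W = [0.7, -0.3, 0.5]   # weights for [prompt, feature1, feature2]

def embed(prompt, sample):
    """Embedding model consumes the prompt together with the data sample."""
    x = [prompt] + sample
    return sum(w * xi for w, xi in zip(FROZEN_W, x))

def transform(sample):
    """Generate a transformed data sample (here: a simple sign flip)."""
    return [-v for v in sample]

prompt_param = 0.0  # the entire "prompt encoder": one learnable scalar

def second_task_loss(p):
    """Transformation-detection task: a logistic detector reads the second
    embedding (of the transformed sample) and should score it 'transformed'."""
    z = embed(p, transform([1.0, 2.0]))        # second embedding representation
    return -math.log(1.0 / (1.0 + math.exp(-z)))

# Update the prompt encoder only (finite-difference gradient); FROZEN_W stays fixed.
lr, eps = 0.5, 1e-5
for _ in range(200):
    g = (second_task_loss(prompt_param + eps)
         - second_task_loss(prompt_param - eps)) / (2 * eps)
    prompt_param -= lr * g
```

After the loop, `second_task_loss(prompt_param)` is lower than `second_task_loss(0.0)` while `FROZEN_W` is untouched, mirroring the claim's structure of updating the prompt encoder based on the second task loss rather than the embedding model itself.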
Claim 14 includes all substantive architectural features of claim 6 and further narrows the invention by requiring: an anchor data sample; paired positive and negative samples derived from transformation relationships; and computation of the transformation-related loss within this contrastive pairing structure, while still updating the prompt encoder based on the auxiliary loss. The prior art of record fails to teach or suggest integrating contrastive anchor/positive/negative pairing with auxiliary transformation-detection supervision and prompt-encoder-only updating in a frozen-model prompt-tuning framework. Thus, claim 14 defines a further-restricted embodiment of the allowable subject matter of claim 6 and is likewise considered patentable over the applied references.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lester et al. ("The Power of Scale for Parameter-Efficient Prompt Tuning", September 2, 2021), hereinafter "Lester", in view of Liang et al. ("Prefix-Tuning: Optimizing Continuous Prompts for Generation", January 1, 2021), hereinafter "Liang".
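The contrastive pairing recited in claim 14 (an anchor, a transformation-derived positive, and a negative) corresponds to a standard triplet-margin objective. A minimal sketch, with an assumed toy embedding in which the prompt rescales each feature (not the applicant's architecture):

```python
import math

def embed(prompt, sample):
    # Toy "frozen model + soft prompt": the prompt rescales each feature.
    return [p * s for p, s in zip(prompt, sample)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(prompt, anchor, positive, negative, margin=1.0):
    """Transformation-related loss in a contrastive pairing: pull the anchor
    toward its transformation-derived positive, push it from the negative."""
    ea, ep, en = (embed(prompt, s) for s in (anchor, positive, negative))
    return max(0.0, dist(ea, ep) - dist(ea, en) + margin)

prompt   = [1.0, 1.0]
anchor   = [1.0, 0.0]
positive = [0.9, 0.1]    # mild transformation of the anchor
negative = [-1.0, 0.0]   # unrelated sample

loss = triplet_loss(prompt, anchor, positive, negative)
```

Here the margin is already satisfied, so `loss` is 0.0; during training this loss would drive updates to the prompt encoder only, with the embedding model kept frozen, as claim 14 requires.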
With respect to claims 1, 16, and 20, Lester discloses a method, system, and non-transitory computer-readable recording medium comprising: acquiring a pretrained embedding model [see section 8, disclosing adapting frozen pretrained language models to downstream tasks]; generating a prompt associated with a data sample through a prompt encoder [see section 2.1, disclosing that the soft prompt modulates the frozen network's behavior in the same way as text preceding the input, so a word-like representation might serve as a good initialization]; generating an embedding representation of the data sample by inputting the prompt and the data sample to the embedding model [see section 2, disclosing prompt tokens being input together with the data sample into the frozen model to generate embeddings]; calculating a task loss by performing a predefined task using the embedding representation [see section 3, disclosing training the prompts using T5's standard cross-entropy loss with a constant learning rate and batch size]; and updating the prompt encoder based on the task loss [see section 4, contrasting P-tuning, which jointly updates both the prompt and the main model parameters, with Lester's approach of keeping the original language model frozen].

Lester does not explicitly disclose generating the prompt through a prompt encoder that is lighter than the embedding model. However, Liang discloses generating a prompt associated with a data sample through a prompt encoder, the prompt encoder being lighter than the embedding model [see section 8.3, disclosing that prefix-tuning requires vastly fewer parameters than adapter-tuning while maintaining comparable performance].
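The claim-1 pipeline the examiner maps to Lester (acquire a frozen pretrained model, generate a prompt via a lighter encoder, feed prompt plus sample to the model, compute a task loss, update only the prompt encoder) can be sketched with toy numbers. The linear model, regression task, and one-parameter encoder are illustrative assumptions:

```python
# Toy stand-ins: a frozen linear "pretrained embedding model" and a
# one-parameter "prompt encoder" that is strictly lighter than the model.
W = [0.5, 0.3, -0.2]   # frozen model weights for [prompt, feature1, feature2]
theta = 0.0            # entire prompt encoder: 1 parameter vs 3 in the model

def forward(sample):
    prompt = theta * sum(sample)               # prompt generated from the sample
    x = [prompt] + sample                      # prompt + sample enter the model together
    return sum(w * xi for w, xi in zip(W, x))  # embedding representation

sample, target = [1.0, 2.0], 1.0

for _ in range(100):
    emb = forward(sample)
    loss = (emb - target) ** 2                      # predefined task: toy regression
    grad = 2 * (emb - target) * W[0] * sum(sample)  # d(loss)/d(theta)
    theta -= 0.05 * grad                            # update the prompt encoder only
```

After 100 steps the task loss is essentially zero even though `W` was never touched: all adaptation lives in the prompt encoder, which is the learning-cost rationale the examiner draws from Liang for the combination.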
It would have been obvious before the effective filing date of the invention to a person having ordinary skill in the art to modify the parameter-efficient prompt tuning taught by Lester with the prefix-tuning taught by Liang. Doing so would have reduced learning cost by updating a lightweight prompt encoder instead of the large embedding model.

With respect to claims 2 and 17, the combination of Lester and Liang discloses the method and system of claims 1 and 16, as referenced above. The combination further teaches wherein updating the prompt encoder includes updating the prompt encoder while the embedding model is frozen [Lester, section 4, contrasting P-tuning's joint updating of prompt and model parameters with Lester's approach of keeping the original language model frozen].

With respect to claims 3 and 18, the combination of Lester and Liang discloses the method and system of claims 1 and 16, as referenced above. The combination further teaches wherein the generated prompt includes a first prompt and a second prompt, and the first prompt and the second prompt are input to different layers of the embedding model [Liang, section 4.1, disclosing optimizing the instruction as continuous word embeddings whose effects propagate upward to all Transformer activation layers and rightward to subsequent tokens].

With respect to claims 4 and 19, the combination of Lester and Liang discloses the method and system of claims 1 and 16, as referenced above.
The combination further teaches wherein the data sample is a text sample, the embedding model is a model for further receiving a special token in addition to tokens included in the text sample, and the generated prompt is reflected in an internal embedding representation of the embedding model associated with the special token [Lester and Liang: as understood by the Examiner, the soft prompts and prefix-tuning act as learnable token embeddings influencing internally pooled or special-token representations].

With respect to claim 5, the combination of Lester and Liang discloses the method of claim 4, as referenced above. The combination further teaches wherein generating the embedding representation includes replacing the internal embedding representation associated with the special token with the generated prompt [Lester, section 2, disclosing that prompt tuning can be thought of as using a fixed prompt of special tokens, where only the embeddings of these prompt tokens can be updated].

Prior Art Made of Record

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al. ('846 and '549) discloses contrastive affinity learning via auxiliary prompts for generalized novel category discovery. Hall et al. discloses active prompt tuning of vision-language models for human-confirmable diagnostics from images. Sripada et al. discloses text- and image-based prompt generation. Pham et al. discloses memory-optimized contrastive learning. Shakeri et al. discloses soft knowledge prompts for language models. Shen et al. discloses reliable gradient-free and likelihood-free prompt tuning. Altam et al. discloses change detection in remote sensing using the Segment Anything Model. Hu et al. discloses multilingual and code-switching ASR using large-language-model-generated text. Ye et al. discloses attribute recognition with image-conditioned prefix language modeling. Zhang et al.
('143) discloses prompt tuning for zero-shot compositional learning in machine learning systems. Irving et al. discloses guided dialogue using language generation neural networks and search.

Conclusion/Points of Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CASANOVA, whose telephone number is (571) 270-3563. The examiner can normally be reached M-F, 9 a.m. to 6 p.m. (EST).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JORGE A CASANOVA/
Primary Examiner, Art Unit 2165

Prosecution Timeline

Jul 13, 2023
Application Filed
Feb 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596748
GRAPH DATABASE STORAGE ENGINE
2y 5m to grant Granted Apr 07, 2026
Patent 12591620
TEMPORAL GRAPH ANALYTICS ON PERSISTENT MEMORY
2y 5m to grant Granted Mar 31, 2026
Patent 12566798
CAUSAL ANALYSIS WITH TIME SERIES DATA
2y 5m to grant Granted Mar 03, 2026
Patent 12554734
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Feb 17, 2026
Patent 12554739
CONFIGURATION-DRIVEN EFFICIENT TRANSFORMATION OF FORMATS AND OBJECT STRUCTURES FOR DATA SPECIFICATIONS IN COMPUTING SERVICES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview (+20.0%): 99%
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 783 resolved cases by this examiner. Grant probability derived from career allow rate.
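The footnote's derivation can be checked directly. Assuming the headline figure is simply the career allow rate rounded to the nearest percent:

```python
granted, resolved = 664, 783
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")   # 84.8%, which the dashboard rounds to 85%
```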
