Prosecution Insights
Last updated: April 19, 2026
Application No. 18/355,901

DIALOGUE SKELETON ASSISTED PROMPT TRANSFER FOR DIALOGUE SUMMARIZATION

Status: Non-Final Office Action (§103)
Filed: Jul 20, 2023
Examiner: TILLERY, RASHAWN N
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate); 76% with interview
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 10m

Examiner Intelligence

Career Allow Rate: 64% (394 granted / 611 resolved; +9.5% vs Tech Center average)
Interview Lift: +11.6% among resolved cases with interview (moderate, roughly +12%)
Typical Timeline: 3y 10m average prosecution; 32 applications currently pending
Career History: 643 total applications across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Based on career data from 611 resolved cases; comparisons are against estimated Tech Center averages.

Office Action

§103
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

1. This communication is responsive to the application filed 7/20/2023.

2. Claims 1-20 are pending in this application. Claims 1, 10, and 17 are independent claims. Applicant's election with traverse in the reply filed 11/17/2025 is acknowledged. The traversal is found persuasive. This action is made Non-Final.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

4. Claims 1-5, 9, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. ("Wu," US 2022/0108086) in view of Vu et al. ("Vu," US 2024/0020546).
Regarding claim 1, Wu discloses a method comprising: receiving, by a processing device, a language model (see Fig. 2a, label 130; e.g., trained model) configured to generate summaries of dialogues (see Fig. 2a, label 150; e.g., dialog summary), the language model trained using training dialogues (see the Abstract: "A dialogue summary is generated using the generative language model trained using the summary draft.") and dialogue skeletons generated based on the training dialogues (see the Abstract: "language model is trained to generate a segment summary for each dialogue segment using a portion of the summary draft that corresponds to at least one dialogue turn in the dialogue segment."); receiving, by the processing device, an input including an input dialogue (see Fig. 2a, label 202; also paragraphs [0023] and [0056]); and generating, by the processing device, a summary of the input dialogue using the language model (see Fig. 2a; label 150 is output from label 220).

Wu does not expressly disclose supervision in a prompt transfer approach between a source task and a target task. However, Vu discloses that supervision in a prompt transfer approach between a source task and a target task is well known in the art (see paragraphs [0011]-[0012], [0044], [0050], [0054], [0141], and [0155]; e.g., "prompt-based transfer learning approach"; "soft prompt"; "a frozen language model (e.g., the parameters of the language model may be fixed as the parameters of the source prompt and/or the target prompt are being learned)"). It would have been obvious to an artisan before the effective filing date of the present invention to combine Vu's teachings with Wu's method in an effort to significantly boost the performance of prompt tuning across many tasks.
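The frozen-language-model mechanics that the rejection repeatedly quotes from Vu ("the parameters of the language model may be fixed as the parameters of the source prompt and/or the target prompt are being learned") can be sketched as follows. This is an illustrative toy in PyTorch, not code from Wu, Vu, or the application; `TinyLM` and all dimensions are hypothetical stand-ins for a real pretrained transformer.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy stand-in for a pretrained language model: embeddings plus a head."""
    def __init__(self, vocab_size=100, d_model=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, input_embeds):
        # Accepts embeddings directly so a soft prompt can be prepended.
        return self.head(input_embeds)  # (batch, seq, vocab) logits

lm = TinyLM()
for p in lm.parameters():          # freeze every language-model parameter
    p.requires_grad = False

# The soft prompt is the only trainable tensor: k "virtual tokens" living in
# embedding space, prepended to the embedded input sequence.
prompt_len, d_model, vocab_size = 8, 16, 100
soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.01)

def forward_with_prompt(token_ids):
    tok_embeds = lm.embed(token_ids)                       # (batch, seq, d)
    batch = token_ids.shape[0]
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    return lm(torch.cat([prompt, tok_embeds], dim=1))      # prepend prompt

optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)       # only prompt updates
token_ids = torch.randint(0, vocab_size, (2, 5))
targets = torch.randint(0, vocab_size, (2, 5 + prompt_len))

logits = forward_with_prompt(token_ids)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                    # gradient flows only into soft_prompt
optimizer.step()
```

After `backward()`, only `soft_prompt` carries a gradient; the frozen LM parameters are untouched, which is the property Vu's quoted passage describes.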
Regarding claim 2, Vu discloses wherein the source task is a dialogue state tracking task for a particular domain and the target task is a dialogue summarization task for the particular domain (see Vu, paragraphs [0011]-[0012], [0044], [0050], [0054], [0141], and [0155], as cited for claim 1).

Regarding claim 3, Vu discloses wherein the generating the summary includes configuring an input sequence to the language model to include a soft prompt generated during training of the language model based in part on the dialogue skeletons (see the same Vu paragraphs).

Regarding claim 4, Vu discloses wherein the prompt transfer approach includes freezing parameters of the language model and learning parameters of the soft prompt (see the same Vu paragraphs).

Regarding claim 5, Vu discloses wherein the prompt transfer approach includes learning a soft prompt for the source task and using the soft prompt from the source task to initialize parameters of a soft prompt for the target task (see the same Vu paragraphs).

Regarding claim 9, Vu discloses wherein the dialogue skeletons represent an intermediate task-specific medium between the source task and the target task (see the same Vu paragraphs).

Claim 17 is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding claim 18, Vu discloses wherein the generating the summary includes prepending the soft prompt to an input sequence generated based on the input dialogue (see the same Vu paragraphs).

5. Claims 6-8, 10-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu and Vu in view of Aggarwal et al. ("Aggarwal," US 2021/0358478).

Regarding claim 6, Wu and Vu do not expressly disclose wherein the dialogue skeletons include a subset of dialogue turns extracted from training dialogues using one or more perturbation-based probes.
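The prompt-transfer initialization recited in claims 4-5 (learn a soft prompt on the source task, then use it to initialize the target-task prompt) can be illustrated in a few lines. This is a hedged sketch: `tune_prompt` and its placeholder objective are hypothetical stand-ins; a real pipeline would optimize a task loss through the frozen language model.

```python
import torch

prompt_len, d_model = 8, 16

def tune_prompt(prompt, steps=3):
    """Stand-in for prompt tuning on one task: only the prompt tensor is
    updated, against a placeholder objective (a real setup would backprop a
    task loss through the frozen language model)."""
    opt = torch.optim.SGD([prompt], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        loss = (prompt ** 2).mean()   # placeholder objective
        loss.backward()
        opt.step()

# Step 1: learn a soft prompt on the source task (e.g., dialogue state tracking).
source_prompt = torch.nn.Parameter(torch.randn(prompt_len, d_model))
tune_prompt(source_prompt)

# Step 2: initialize the target-task prompt (dialogue summarization) with a
# copy of the learned source prompt rather than a random init, then continue
# tuning on the target task with the language model still frozen.
target_prompt = torch.nn.Parameter(source_prompt.detach().clone())
init_value = target_prompt.detach().clone()
tune_prompt(target_prompt)
```

The design point claims 4-5 capture is exactly the copy in step 2: the target prompt starts from the source task's learned parameters instead of from scratch.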
However, Aggarwal discloses wherein the dialogue skeletons include a subset of dialogue turns extracted from training dialogues using one or more perturbation-based probes (see paragraphs [0026] and [0050]-[0055]; e.g., remove dialogue turns; "training a language model according to the perturbations"). It would have been obvious to an artisan before the effective filing date of the present invention to combine Aggarwal's teachings with the Wu-Vu system, since removing turns can drastically reduce the probability of escalation (see paragraph [0055]).

Regarding claim 7, Aggarwal discloses wherein the one or more perturbation-based probes are configured to generate the dialogue skeletons by determining a sensitivity of a dialogue state tracking model to dialogue turns of the training dialogues (see paragraphs [0026] and [0050]-[0055]; e.g., weighted dialogue turns).

Regarding claim 8, Aggarwal discloses wherein the subset of dialogue turns includes dialogue turns over a threshold level of sensitivity (see the same Aggarwal paragraphs).

Claim 10 is similar in scope to claims 1 and 6, above, and is therefore rejected under similar rationale.

Regarding claim 11, Wu discloses receiving an input including a dialogue and generating a summary of the dialogue using the trained machine learning model (see claim 1 above).

Regarding claim 12, Vu discloses wherein the source task is a dialogue state tracking task and the target task is a dialogue summarization task (see the Vu paragraphs cited for claim 1).
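The perturbation-based probe attributed to Aggarwal for claims 6-8 (remove one turn at a time, measure how much a dialogue state tracking model's output shifts, and keep only turns whose sensitivity clears a threshold) can be sketched in plain Python. `state_score` is a hypothetical stand-in for the tracking model; nothing here comes from Aggarwal's actual implementation.

```python
def state_score(turns):
    """Toy stand-in for a dialogue state tracking model's output: turns that
    mention a slot-like keyword contribute to the tracked "state". A real
    probe would query a trained model instead."""
    return sum(1.0 for t in turns if "book" in t or "time" in t)

def build_skeleton(turns, threshold=0.5):
    """Perturbation probe: drop each turn, measure the score shift, and keep
    turns whose sensitivity exceeds the threshold."""
    base = state_score(turns)
    skeleton = []
    for i, turn in enumerate(turns):
        perturbed = turns[:i] + turns[i + 1:]        # remove one turn
        sensitivity = abs(base - state_score(perturbed))
        if sensitivity > threshold:                  # keep high-impact turns
            skeleton.append(turn)
    return skeleton

dialogue = [
    "hi there",
    "i want to book a table",
    "sure, anything else?",
    "a time around 7pm works",
]
print(build_skeleton(dialogue))
# -> ["i want to book a table", "a time around 7pm works"]
```

The two greeting/acknowledgment turns do not move the score when removed, so only the state-bearing turns survive into the skeleton, which is the behavior claims 7-8 recite.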
Regarding claim 13, Aggarwal discloses wherein the one or more perturbation-based probes are configured to determine a sensitivity of the machine learning model to dialogue turns for a particular training dialogue, and wherein the subset of dialogue turns includes dialogue turns over a threshold level of sensitivity (see paragraphs [0026] and [0050]-[0055]; e.g., weighted dialogue turns; "training a language model according to the perturbations").

Regarding claim 14, Vu discloses wherein the machine learning model includes a pretrained language model, and the training includes freezing parameters of the pretrained language model and generating a soft prompt to adjust an input sequence to the pretrained language model (see the Vu paragraphs cited for claim 1).

Regarding claim 15, Vu discloses wherein the training includes using the dialogue skeletons as supervision to refine the soft prompt as part of the prompt transfer for both the source task and the target task (see the same Vu paragraphs).
Regarding claim 16, Vu discloses wherein the training includes using the dialogue skeletons as supervision to refine the soft prompt as part of the prompt transfer for either the source task or the target task (see the Vu paragraphs cited for claim 1).

Regarding claim 19, Wu and Vu do not expressly disclose wherein the dialogue skeletons include a subset of dialogue turns extracted from training dialogues using one or more perturbation-based probes. However, Aggarwal discloses this limitation (see paragraphs [0026] and [0050]-[0055]; e.g., remove dialogue turns; "training a language model according to the perturbations"). It would have been obvious to an artisan before the effective filing date of the present invention to combine Aggarwal's teachings with the Wu-Vu system, since removing turns can drastically reduce the probability of escalation (see paragraph [0055]).

Regarding claim 20, Aggarwal discloses wherein the one or more perturbation-based probes are configured to generate the dialogue skeletons by determining a sensitivity of a dialogue state tracking model to dialogue turns of the training dialogues (see the same Aggarwal paragraphs; e.g., weighted dialogue turns).

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Xin et al. (CN 114398906).

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY, whose telephone number is (571) 272-6480. The examiner can normally be reached M-F, 9:00a-5:30p.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RASHAWN N TILLERY/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Jul 20, 2023: Application Filed
Mar 20, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602701: INTERACTIVE MAP INTERFACE INCORPORATING CUSTOMIZABLE GEOSPATIAL DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12547302: PAGE PRESENTATION METHOD, DISPLAY SYSTEM AND STORAGE MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12542871: DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12536219: DIGITAL CONTAINER FILE FOR MULTIMEDIA PRESENTATION (granted Jan 27, 2026; 2y 5m to grant)
Patent 12524138: METHOD AND APPARATUS FOR ADJUSTING POSITION OF VIRTUAL BUTTON, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT (granted Jan 13, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%; 76% with interview (+11.6%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
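As a quick sanity check on the headline figures: the "with interview" number is consistent with adding the interview lift to the career allow rate as simple percentage points. Whether the tool actually combines them this way is an assumption; only the inputs below come from the report.

```python
# Figures taken from the report above (percent / percentage points).
base_grant_probability = 64.0   # career allow rate
interview_lift = 11.6           # lift among resolved cases with interview

# Hypothetical additive model: base rate plus lift in percentage points.
with_interview = base_grant_probability + interview_lift
print(round(with_interview))    # -> 76, matching the reported figure
```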
