Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,905

GENERATIVE COLLABORATIVE MESSAGE SUGGESTIONS

Non-Final OA: §102, §103
Filed: Jun 27, 2023
Examiner: STEINLE, ANDREW J
Art Unit: 2497
Tech Center: 2400 (Computer Networks)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (479 granted / 547 resolved; +29.6% vs TC avg), above average
Interview Lift: +19.5% among resolved cases with an interview
Average Prosecution: 2y 4m (17 applications currently pending)
Total Applications: 564, across all art units
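The headline figures above follow directly from the raw counts. A quick sketch using only numbers shown on this page (the Tech Center average is inferred from the stated +29.6% delta, assuming a simple difference):

```python
granted, resolved = 479, 547              # career totals shown above
allow_rate = granted / resolved           # ≈ 0.876
print(f"Career allow rate: {allow_rate:.0%}")   # → Career allow rate: 88%

# The stated +29.6% delta implies a Tech Center average near 58%.
tc_avg = allow_rate - 0.296
print(f"Implied TC average: {tc_avg:.0%}")      # → Implied TC average: 58%
```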

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 547 resolved cases.
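Since each delta above appears to be the examiner's rate minus the Tech Center average, the averages can be recovered. A sketch, assuming the deltas are simple differences: all four statutes imply the same ~40% TC baseline.

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above
rates = {"101": (10.4, -29.6), "103": (46.2, 6.2),
         "102": (20.7, -19.3), "112": (11.6, -28.4)}

for statute, (rate, delta) in rates.items():
    tc_avg = round(rate - delta, 1)    # recover the TC average estimate
    print(f"§{statute}: TC avg ≈ {tc_avg}%")   # each works out to 40.0%
```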

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-5, 8-9, 13-14, and 17-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gangadharaiah et al. (US 10860629 B1), hereinafter referred to as Gang.

Regarding Claims 1, 13, and 17, Gang discloses A method comprising: configuring a first machine learning model to generate and output suggested message content based on first correlations between message content and message acceptance data, [Abstract, A seq2seq ML model can be trained using a corpus of training data and a loss function that is based at least in part on a distance to a goal. The seq2seq ML model can be provided a user utterance as an input, and a vector of a plurality of values output by a plurality of hidden units of a decoder of the seq2seq ML model can be used to select one or more candidate responses to the user utterance via a nearest neighbor algorithm.
In some embodiments, the specially adapted seq2seq ML model can be trained using unsupervised learning, and can be adapted to select intelligent, coherent agent responses that move a task-oriented dialog toward its completion] [Figures 1, 2, and 6]

wherein the first machine learning model comprises a first encoder-decoder model architecture; [Column 6, lines 11-21, The skip-connection model for handling multi-turn dialog can be trained as described above with regard to FIG. 2. The output of the last hidden unit of the encoder and the decoder at the last turn T of a training dialog are used to represent the goal state, although in other embodiments other approaches known to those of skill in the art can also be used to obtain a representation for the final agent's response, provided that those approaches would be extended to consider the dialog context] [Claim 13, wherein the ML model comprises at least an encoder and at least a decoder, and wherein the goal value for each training sample used to train the ML model is based at least in part on an output of the encoder or an output of the decoder at a final turn of a training chat dialog that the training sample is from]

configuring a second machine learning model to generate and output message evaluation data based on second correlations between the message content and the message acceptance data, [Claim 13, select one or more candidate responses to the chat message to be provided to the agent system as one or more recommendations for an agent response to the chat message within the agent-user chat dialog, wherein to select the one or more candidate responses the chatbot system is to: obtain a vector of a plurality of values including the first embedding and also a second embedding generated by the ML model during a previous turn of the chat dialog, identify one or more other vectors based on a nearest neighbor search using the vector, and select, as the one or more candidate responses, one or more chat messages
corresponding to the identified one or more other vectors]

wherein the second machine learning model comprises a second encoder-decoder model architecture; coupling an output of the first machine learning model to an input of the second machine learning model; [Claim 13, wherein the ML model comprises at least an encoder and at least a decoder, and wherein the goal value for each training sample used to train the ML model is based at least in part on an output of the encoder or an output of the decoder at a final turn of a training chat dialog that the training sample is from; and select one or more candidate responses to the chat message to be provided to the agent system as one or more recommendations for an agent response to the chat message within the agent-user chat dialog, wherein to select the one or more candidate responses the chatbot system is to: obtain a vector of a plurality of values including the first embedding and also a second embedding generated by the ML model during a previous turn of the chat dialog, identify one or more other vectors based on a nearest neighbor search using the vector, and select, as the one or more candidate responses, one or more chat messages corresponding to the identified one or more other vectors]

and coupling an output of the second machine learning model to an input of the first machine learning model. [Column 19, lines 58-60, the output of one trained machine learning model is used as an input to another trained machine learning model]

Regarding Claim 2, Gang discloses further comprising: inputting the suggested message content output by the first machine learning model to the second machine learning model.
[Column 19, lines 54-63, the deployment request can identify multiple model data files corresponding to different trained machine learning models because the trained machine learning models are related (e.g., the output of one trained machine learning model is used as an input to another trained machine learning model). Thus, the user may desire to deploy multiple machine learning models to eventually receive a single output that relies on the outputs of multiple machine learning models]

Regarding Claim 3, Gang discloses further comprising: inputting the message evaluation data output by the second machine learning model to the first machine learning model. [Column 19, lines 58-60, the output of one trained machine learning model is used as an input to another trained machine learning model]

Regarding Claim 4, Gang discloses further comprising: training the first machine learning model based on first training data, wherein the first training data comprises positive examples of the message acceptance data. [Abstract, A seq2seq ML model can be trained using a corpus of training data and a loss function that is based at least in part on a distance to a goal. The seq2seq ML model can be provided a user utterance as an input, and a vector of a plurality of values output by a plurality of hidden units of a decoder of the seq2seq ML model can be used to select one or more candidate responses to the user utterance via a nearest neighbor algorithm. In some embodiments, the specially adapted seq2seq ML model can be trained using unsupervised learning, and can be adapted to select intelligent, coherent agent responses that move a task-oriented dialog toward its completion] [Figures 1, 2, and 6]

Regarding Claim 5, Gang discloses further comprising: training the second machine learning model based on the first training data and second training data, wherein the second training data comprises negative examples of the message acceptance data.
[Column 3, lines 6-21, Embodiments can use SL type techniques to learn embeddings (or real valued representations) of dialog history, at each turn of the dialog, offline without the need for additional human annotation. Embodiments can add a reward term to the negative cross entropy at each turn that measures the deviation of the predicted next state learned embedding from the final state embedding for that dialog. The final embedding may capture information about the goal API call that was issued by the agent (or other ending event/state), and information extracted from the customer in the course of the dialog. This additional reward term encourages agent responses that semantically move the conversation in the right direction in the latent space, and de-emphasizes the cross-entropy loss]

Regarding Claim 8, Gang discloses further comprising: receiving, via a message generation interface, pre-send feedback data relating to the suggested message content; and tuning at least one of the first machine learning model or the second machine learning model based on the received pre-send feedback data. [Column 3, lines 11-18, Embodiments can add a reward term to the negative cross entropy at each turn that measures the deviation of the predicted next state learned embedding from the final state embedding for that dialog.
The final embedding may capture information about the goal API call that was issued by the agent (or other ending event/state), and information extracted from the customer in the course of the dialog – the “predicted next state” is the “pre-send feedback data”]

Regarding Claim 9, Gang discloses wherein the pre-send feedback data is based on at least one interaction of a prospective message sender with the message generation interface in response to a presentation by the message generation interface of the suggested message content prior to a sending of a message comprising the suggested message content by the prospective message sender to at least one recipient. [Column 3, lines 8-18, Embodiments can use SL type techniques to learn embeddings (or real valued representations) of dialog history, at each turn of the dialog, offline without the need for additional human annotation. Embodiments can add a reward term to the negative cross entropy at each turn that measures the deviation of the predicted next state learned embedding from the final state embedding for that dialog. The final embedding may capture information about the goal API call that was issued by the agent (or other ending event/state), and information extracted from the customer in the course of the dialog – the “predicted next state” is the “pre-send feedback data”]

Regarding Claims 14 and 18, Gang discloses wherein the instructions, when executed by the at least one processor, cause the at least one processor to perform at least one operation further comprising: inputting the suggested message content output by the first machine learning model to the second machine learning model; and inputting the message evaluation data output by the second machine learning model to the first machine learning model.
[Column 19, lines 54-63, the deployment request can identify multiple model data files corresponding to different trained machine learning models because the trained machine learning models are related (e.g., the output of one trained machine learning model is used as an input to another trained machine learning model). Thus, the user may desire to deploy multiple machine learning models to eventually receive a single output that relies on the outputs of multiple machine learning models]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Gang, as applied to Claim 1, above, in view of Awadallah et al. (US 20190286451 A1), hereinafter referred to as Awadallah.
Regarding Claim 10, Gang does not explicitly teach further comprising: receiving, via a message receiving interface, post-send feedback data relating to the suggested message content; and tuning at least one of the first machine learning model or the second machine learning model based on the received post-send feedback data.

Awadallah teaches further comprising: receiving, via a message receiving interface, post-send feedback data relating to the suggested message content; and tuning at least one of the first machine learning model or the second machine learning model based on the received post-send feedback data. [paragraph 0122, the interactive user interface 904 comprises different fields, values and so forth that correspond directly to the output of the activated decoder modules in the factored decoder, such as decoder 520 and 618. Because of the fine-grained control and feedback that a user has and the direct correspondence to individual decoder modules, once the user has made any corrections and submitted the finalized information for the API frame 908, the submitted data can be used as an additional training data point. Effectively, the user becomes the annotator for the submitted NL utterance showing how the machine learning model should have generated the API frame. Additionally, because the submitted corrections correspond directly to the output of the various decoder modules, the submitted data 908 show the correct layout and the output that should have been produced by each of the activated decoder modules]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Awadallah with the disclosure of Gang.
The motivation or suggestion would have been “to using a trained machine learning model to convert natural language input into an application programming interface call.” (paragraph 0001)

Regarding Claim 11, Gang does not explicitly teach wherein the post-send feedback data is based on at least one interaction of a prospective message recipient with the message receiving interface in response to a presentation by the message receiving interface of a message comprising the suggested message content to the prospective message recipient.

Awadallah teaches wherein the post-send feedback data is based on at least one interaction of a prospective message recipient with the message receiving interface in response to a presentation by the message receiving interface of a message comprising the suggested message content to the prospective message recipient. [paragraph 0122, the interactive user interface 904 comprises different fields, values and so forth that correspond directly to the output of the activated decoder modules in the factored decoder, such as decoder 520 and 618. Because of the fine-grained control and feedback that a user has and the direct correspondence to individual decoder modules, once the user has made any corrections and submitted the finalized information for the API frame 908, the submitted data can be used as an additional training data point. Effectively, the user becomes the annotator for the submitted NL utterance showing how the machine learning model should have generated the API frame. Additionally, because the submitted corrections correspond directly to the output of the various decoder modules, the submitted data 908 show the correct layout and the output that should have been produced by each of the activated decoder modules]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Awadallah with the disclosure of Gang.
The motivation or suggestion would have been “to using a trained machine learning model to convert natural language input into an application programming interface call.” (paragraph 0001)

Claims 12, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gang, as applied to Claim 1, above, in view of Cella et al. (US 20230097438 A1), hereinafter referred to as Cella.

Regarding Claims 12, 16, and 20, Gang discloses further comprising: determining, for an instance of suggested message content, a model input to which the first machine learning model is applied to generate the instance of suggested message content; [Column 15, lines 64-67, The evaluation data is separate from the data used to train a machine learning model and includes both input data and expected outputs (e.g., known results)]

Gang does not explicitly teach determining a difference between the instance of suggested message content and the model input; and tuning the first machine learning model based on the difference between the instance of suggested message content and the model input.

Cella teaches determining a difference between the instance of suggested message content and the model input; and tuning the first machine learning model based on the difference between the instance of suggested message content and the model input. [paragraph 1557, all parameters and weights (including the weights in the filters and weights for the fully-connected layer) are initially assigned (e.g., randomly assigned). Then, during training, a training image or images, in which the objects have been detected and classified, are provided as the input to the CNN 8860, which performs the forward propagation steps. In other words, CNN 8860 applies convolution, non-linear activation, and pooling layers to each training image to determine the classification vectors (i.e., detect and classify each training image). These classification vectors are compared with the predetermined classification vectors.
The error (e.g., the squared sum of differences, log loss, softmax log loss) between the classification vectors of the CNN and the predetermined classification vectors is determined. This error is then employed to update the weights and parameters of the CNN in a backpropagation process which may use gradient descent and may include one or more iterations. The training process is repeated for each training image in the training set]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Cella with the disclosure of Gang. The motivation or suggestion would have been “for management of value chain network entities, including supply chain and demand management entities.” (paragraph 0002)

Allowable Subject Matter

Claims 6-7, 15, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is an examiner’s statement of reasons for allowance: Regarding Claims 6-7, 15, and 19, the closest prior art of record, Gangadharaiah et al. (US 10860629 B1), Awadallah et al. (US 20190286451 A1), and Cella et al. (US 20230097438 A1), does not explicitly teach nor suggest in detail the limitations of these claims in view of other limitations of the intervening claims. Thus the prior arts of record taking singly or in combination do not teach or suggest the above-stated limitations taking wholly in combination with all the elements of each independent claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW J STEINLE whose telephone number is (571)272-9923. The examiner can normally be reached M-F 10am-6pm CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW J STEINLE/
Primary Examiner, Art Unit 2497
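For orientation only: independent claim 1, as characterized in the rejection above, recites two models with cross-coupled outputs (a generator of suggested message content and an evaluator of that content, each feeding the other). The data flow can be sketched with trivial stand-in functions; nothing below comes from the application or the cited references, and the names `suggest` and `evaluate` are hypothetical placeholders for the two claimed encoder-decoder models.

```python
def suggest(context, evaluation=None):
    """Stand-in for the first model (suggested message content).
    A real system would run an encoder-decoder; here we just tag the text."""
    draft = f"Reply to: {context}"
    if evaluation is not None and evaluation < 0.5:
        return draft + " (revised)"    # evaluator feedback loops back in
    return draft

def evaluate(suggestion):
    """Stand-in for the second model (message evaluation data)."""
    return 1.0 if suggestion.endswith("(revised)") else 0.3

# Coupling: output of model 1 -> input of model 2, and back again.
draft = suggest("hello")
score = evaluate(draft)            # low score on the first pass
final = suggest("hello", score)    # second pass conditions on the evaluation
print(final)                       # → Reply to: hello (revised)
```

The point of the sketch is the claimed feedback topology, not the models themselves: each model's output is an input to the other, which is the limitation the examiner maps to Gang's chained-model deployment at Column 19.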

Prosecution Timeline

Jun 27, 2023: Application Filed
Feb 08, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598068: SYSTEMS AND METHODS FOR HANDLING ENCRYPTED DATA (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12596771: SECURE ENFORCEMENT OF DIGITAL RIGHTS IN ARTIFICIAL INTELLIGENCE MODELS (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12592817: Message Service with Distributed Key Caching for Server-Side Encryption (Granted Mar 31, 2026; 2y 5m to grant)
Patent 12591680: TRUST-CHAIN BASED ADAPTABLE TELEMETRY (Granted Mar 31, 2026; 2y 5m to grant)
Patent 12587365: SECRET MANAGEMENT IN DISTRIBUTED SYSTEMS (Granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+19.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
