Prosecution Insights
Last updated: April 19, 2026
Application No. 18/129,668

MACHINE LEARNING STRUCTURED RESULT GENERATION

Non-Final OA §103
Filed: Mar 31, 2023
Examiner: WU, DAXIN
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (529 granted / 620 resolved; +30.3% vs TC avg) — above average
Interview Lift: +18.6% among resolved cases with interview
Avg Prosecution: 2y 6m (26 currently pending)
Total Applications: 646 across all art units
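The headline figures above follow directly from the career data on this page; a minimal sketch of the arithmetic (all inputs taken from the stats shown here):

```python
# Examiner career data as reported on this page.
granted = 529    # applications granted
resolved = 620   # applications resolved
pending = 26     # applications currently pending

# Career allow rate = granted / resolved.
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # ~85.3%, displayed as 85%

# Total career applications = resolved + currently pending.
total = resolved + pending
print(f"Total applications: {total}")  # 646, matching the career history
```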

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 55.4% (+15.4% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 620 resolved cases
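The per-statute deltas above all appear to be measured against a single Tech Center baseline. A small sketch, assuming each delta is simply the examiner's rate minus the estimated TC average, recovers that baseline:

```python
# Examiner's per-statute rates and their stated deltas vs the TC average,
# as shown on this page.
stats = {
    "101": (14.8, -25.2),
    "103": (55.4, +15.4),
    "102": (4.9, -35.1),
    "112": (13.2, -26.8),
}

# If delta = rate - tc_avg, then the implied TC average is rate - delta.
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC average ≈ {tc_avg}%")  # 40.0% in every case
```

That the implied baseline comes out to 40.0% for every statute suggests the chart's Tech Center average line sat at a single value.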

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the Request for Continued Examination (RCE) filed on October 2, 2025. Claims 1-20 are pending and examined below.

Allowable Subject Matter

Upon extensive searches of various databases, claims 1-8 and 14-20 are allowable over the prior art of record. Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 9-10 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0088175 (hereinafter “Ross”), in view of US 2016/0077810 (hereinafter “Bertilsson”), and further in view of US 2023/0153641 (hereinafter “Manda”). In the following claim analysis, Applicant’s claim language is in bold text and Examiner’s explanations are enclosed in square brackets.

As to claim 9, Ross discloses A method (claim 1, A computer-implemented method), comprising: receiving, from a computing device, a generative machine learning (ML) processing request (Ross, Fig. 3, ¶ 35, document 220 may refer to a general input and/or a request to a generative system; ¶ 52, The generative system/model may be based on a machine learning model) that includes a description of a result interface (Ross, ¶ 35, document 220 may refer to a general input and/or a request to a generative system. The input to the generative system may include natural language descriptions, data, sensor readings, computer code, a chemical formula, a diagram, natural speech, an image, etc.; ¶ 37, Software application 204 may call and/or employ a standard translation/transcoding program or generative model to translate document 220 from the first programming language to the second programming language [as described in document 220 that includes a description of a result interface]; ¶ 48, FIGS. 10A and 10B are rendering of an example of graphical user interface 206 [a result interface] displaying a multiple version difference summary 230 in accordance with one or more embodiments. In FIG. 10A, graphical user interface 206 can display on display 119 multiple version difference summary 230 with highlighted divergent regions 1002 for the multiple translations of the source code in document 220); processing, using [[an]] a generative ML model, the generative ML processing request to generate output (Ross, Fig. 3, ¶ 37, Software application 204 may call and/or employ a standard translation/transcoding program or generative model to translate document 220 from the first programming language to the second programming language … Transcoding/translation model 210 can be a trained model that uses machine learning) according to the description of the result interface (Ross, Fig. 3, ¶ 50, At block 314, software application 204 is configured to integrate selected alternative translations/suggestions into a final version translated document 240 for execution of the final source code [e.g., in the second program language as the description of the result interface]); and providing the output in response to the generative ML processing request (Ross, Fig. 11, ¶ 53, At block 1104, software application 204 of computer system 202 is configured to use, call, and/or employ the machine learning model (e.g., transcoding/translation model 210, a generative system) to generate multiple hypotheses (e.g., multiple different translated documents 222) (e.g., as an output) regarding a translation of the document (e.g., source code document 220, request, etc.) from a first language to a second language).

Ross does not appear to explicitly disclose the intended use of a description of a result interface for a software data structure of a user application. However, in an analogous art to the claimed invention in the field of machine learning, Bertilsson teaches a description of a result interface for a software data structure of a user application (Bertilsson, ¶ 150, a graphical user interface 1400 is illustrated that is a form editor for creating input forms, output forms, and form collections according to aspects of a method for creating an application data structure. The illustrated graphical user interface 1400 is the output of an exemplary application data structure created [Thus, the GUI definition constitutes a structured description that governs how application data structures are created and rendered.] with the application builder wizard [e.g., the generative ML model taught by Ross]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ross’ system to incorporate structured interface descriptions as taught by Bertilsson. The modification would be obvious because one of ordinary skill in the art would be motivated to provide user-selectable options to guide the creation of application data structures for forming and solving problems (Bertilsson, ¶ 2), thereby improving usability, increasing structured consistency of generated outputs, and allowing user control over output structure.

Ross as modified does not appear to explicitly disclose generate structured model output; validating the structured model output; and based on determining the structured model output is validated, providing the structured model output. However, in an analogous art to the claimed invention in the field of machine learning, Manda teaches generate structured model output (Manda, Abstract, The machine learning model transmits the output in structured form to a target application; ¶ 25, [t]he machine learning models described herein can be structured and/or trained to perform various horizontal and/or domain-specific functions); validating the structured model output (Manda, ¶ 54, The validation GUI can include data input controls structured to allow validators to specify whether the output is correct); and based on determining the structured model output is validated (Manda, ¶ 54, The validation GUI can include data input controls structured to allow validators to specify whether the output is correct. Responsive to accepting validator feedback at 348, the extraction engine 150 can make a determination, at 350, whether the output is indicated to be correct), providing the structured model output (Manda, ¶ 55, the validated output can be provided to a downstream system, such as the target application). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ross’ system as modified by incorporating Manda’s structured validation mechanism. The modification would be obvious because one of ordinary skill in the art would be motivated to ingest unstructured items and output structured and/or summarized items, transmit the output to one or more target applications, and transmit the output to a computing device for validation. If a validator specifies the output is incorrect, the submitted feedback can be stored in structured form (e.g., as a mapping between an incorrect value and an expected value, a mapping between an expected value and an input value) and provided to the model as training data (Manda, ¶¶ 31 and 54), thereby improving reliability of ML-generated structured results, reducing propagation of incorrect data structures, and enabling feedback-based control over generative output.

As to claim 10, the rejection of claim 9 is incorporated. Ross as modified further discloses The method of claim 9, wherein validating the structured model output comprises at least one of: validating a syntax of the structured model output; or evaluating the structured model output compared to the description of the result interface (Manda, ¶ 54, The validation GUI can include data input controls structured to allow validators to specify whether the output is correct. … the submitted feedback can be stored, at 354, in structured form (e.g., as a mapping between an incorrect value [e.g., the structured model output] and an expected value [e.g., the description of the result interface], a mapping between an expected value and an input value) and provided to the model as training data). The motivation to combine the references is the same as set forth in the rejection of claim 9.

As to claim 12, the rejection of claim 9 is incorporated. Ross as modified further discloses The method of claim 9, wherein: the generative ML processing request is a first generative ML processing request (Ross, ¶¶ 35, 37, 48, and 52); and the structured model output is a first instance of structured model output (Manda, Abstract and ¶ 54); and the method further comprises: receiving a second generative ML processing request (Manda, Fig. 3B, ¶ 49, At 342, the extraction engine 150 can determine the type [as a second request] of output needed, such as classification, entity extraction, and/or text generation; ¶ 51, If the determined type of output is entity extraction, then, at 344b, the extraction engine 150 can use an entity recognition machine learning model to process the extracted items. The output of the entity recognition machine learning model can include data (e.g., extracted key-value pairs where a key can be an entity name and a value can correspond to the extracted data item), embedded item coordinates, confidence scores, and the like); generating, for the second generative ML processing request, a second instance of structured model output (Manda, ¶¶ 51-52, The output of the entity recognition machine learning model can include data (e.g., extracted key-value pairs where a key can be an entity name and a value can correspond to the extracted data item), embedded item coordinates, confidence scores, and the like; Abstract, The machine learning model transmits the output in structured form to a target application); validating the second instance of structured model output (Manda, Fig. 3B, ¶ 54, The extraction engine 150 can generate a validation GUI and display the output in a result set rendered on the validation GUI. … Responsive to accepting validator feedback at 348, the extraction engine 150 can make a determination, at 350, whether the output is indicated to be correct); and based on determining the second instance of structured model output is not validated (Manda, Fig. 3B, ¶ 54, If indicated incorrect, the extraction engine 150 can generate or update the GUI to allow the validator to submit feedback), performing a remedial action (Manda, Fig. 3B, ¶ 54, the submitted feedback can be stored, at 354, in structured form (e.g., as a mapping between an incorrect value and an expected value, a mapping between an expected value and an input value) and provided to the model as training data [a remedial action]). The motivation to combine the references is the same as set forth in the rejection of claim 9.

As to claim 13, the rejection of claim 12 is incorporated. Ross as modified further discloses The method of claim 12, wherein the remedial action is at least one of processing the second instance of structured output to correct malformed syntax of the second instance of structured output; evaluating a secondary output of the generative ML model that was generated based on the second generative ML processing request; or providing a request to the generative ML model to process the second instance of structured output and generate a third instance of structured output (Manda, Fig. 3B, ¶ 54; ¶ 48, The set of operations 340 can be performed by machine learning models, which can be trained, at 341, to improve accuracy, recall, and other performance parameters of the machine learning models. For example, at 341, the extraction engine 150 can generate performance metrics and/or other historical feedback for a particular machine learning model and store this data relationally to the inputs and/or outputs of the machine learning model. The stored data can be used to customize the respective model to improve its performance metrics; ¶¶ 49-51, The output [a third instance of structured output] of the entity recognition machine learning model can include data (e.g., extracted key-value pairs where a key can be an entity name and a value can correspond to the extracted data item), embedded item coordinates, confidence scores, and the like). The motivation to combine the references is the same as set forth in the rejection of claim 9.

Response to Arguments

Applicant’s arguments with respect to claims 9-13 have been considered, but are moot in view of the new ground(s) of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US 2021/0264234 teaches requesting one or more effects be performed, and having the generative neural network generate a new result with the requested effects.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAXIN WU, whose telephone number is (571) 270-7721. The examiner can normally be reached on M-F (7 am - 11:30 am; 1:30 - 5 pm). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/DAXIN WU/
Primary Examiner, Art Unit 2191
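The generate-validate-remediate flow recited in claims 9, 10, 12, and 13 can be sketched in a few lines. This is a hypothetical illustration only (the function names, the use of JSON, and the repair prompt are all assumptions, not anything disclosed in the application or the cited art); it shows syntax validation as one of the claim 10 options and model-based reprocessing as one of the claim 13 remedial actions:

```python
import json


def validate(output: str) -> bool:
    # Claim 10 option: validating a syntax of the structured model output,
    # sketched here as a JSON well-formedness check.
    try:
        json.loads(output)
        return True
    except (ValueError, TypeError):
        return False


def process_request(generate, request: str):
    """Hypothetical sketch of the claimed loop: generate structured output,
    validate it, and perform a remedial action if validation fails."""
    output = generate(request)
    if validate(output):
        # Claim 9: provide the output once it is determined to be validated.
        return output
    # Claim 13 option: ask the model to reprocess the malformed instance
    # and generate a further instance of structured output.
    repaired = generate(f"Fix this malformed JSON: {output}")
    return repaired if validate(repaired) else None
```

A toy generator such as `lambda req: '{"ok": true}'` passes straight through, while a generator that only ever returns malformed text exhausts the remedial step and yields `None`.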

Prosecution Timeline

Mar 31, 2023: Application Filed
Jan 11, 2025: Non-Final Rejection — §103
May 16, 2025: Response Filed
Jun 30, 2025: Final Rejection — §103
Aug 21, 2025: Examiner Interview Summary
Aug 21, 2025: Applicant Interview (Telephonic)
Oct 02, 2025: Request for Continued Examination
Oct 12, 2025: Response after Non-Final Action
Feb 17, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585451
SOFTWARE UPDATES BASED ON TRANSPORT-RELATED ACTIONS
2y 5m to grant • Granted Mar 24, 2026
Patent 12578949
DEVICE AND METHOD FOR EXCHANGING A PUBLIC KEY IN THE COURSE OF A FIRMWARE UPDATE FOR LEVEL SENSORS
2y 5m to grant • Granted Mar 17, 2026
Patent 12555079
VERSION MAINTENANCE SERVICE FOR ANALYTICS COMPUTING
2y 5m to grant • Granted Feb 17, 2026
Patent 12547391
Mobile Application Updates for Analyte Data Receiving Devices
2y 5m to grant • Granted Feb 10, 2026
Patent 12547395
MOBILE TERMINAL AND SOFTWARE DISTRIBUTION SYSTEM
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 99% (+18.6%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 620 resolved cases by this examiner. Grant probability is derived from the career allow rate.
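One plausible reading of the with-interview figure, assuming the +18.6% lift is applied in percentage points on top of the 85% base rate and the displayed probability is capped at 99% (the additive-and-capped model is an assumption; this page does not state the formula):

```python
# Hypothetical reconstruction of the with-interview probability.
base = 85.0            # career allow rate, percent
interview_lift = 18.6  # percentage-point lift for cases with an interview
cap = 99.0             # displayed probabilities appear to top out at 99%

with_interview = min(base + interview_lift, cap)
print(f"With interview: {with_interview:.0f}%")  # 99%
```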
