Prosecution Insights
Last updated: April 19, 2026
Application No. 18/622,675

GENERATIVE STYLE TOOL FOR CONTENT SHAPING

Non-Final OA (§102, §103, §112)
Filed: Mar 29, 2024
Examiner: ULRICH, NICHOLAS S
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 69% (425 granted / 614 resolved); +14.2% vs TC avg; above average
Interview Lift: +7.6% (moderate) among resolved cases with interview
Typical Timeline: 3y 4m avg prosecution; 31 currently pending
Career History: 645 total applications across all art units
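The headline figures above are simple arithmetic on the examiner's career counts. As an illustrative sketch (not the analytics tool's actual code, and assuming the "with interview" figure is just the career allow rate plus the interview lift), they can be recomputed as follows:

```python
# Illustrative sketch only; assumes "with interview" = allow rate + lift,
# which matches the figures shown but is an assumption, not the tool's method.

granted = 425          # from "425 granted / 614 resolved"
resolved = 614
interview_lift = 7.6   # percentage points

allow_rate = 100 * granted / resolved          # career allow rate, %
with_interview = allow_rate + interview_lift   # interview-adjusted, %

print(f"Career allow rate: {allow_rate:.0f}%")      # 69%
print(f"With interview:    {with_interview:.0f}%")  # 77%
```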

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 614 resolved cases.
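A quick consistency check on the per-statute figures above: assuming each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average (an assumption about how the tool computes it), every statute implies the same underlying TC average estimate.

```python
# Assumption: delta = examiner allow rate - TC average (both in %).
# Back out the implied TC average for each statute listed above.

stats = {
    "§101": (7.5, -32.5),
    "§103": (48.1, +8.1),
    "§102": (17.4, -22.6),
    "§112": (19.3, -20.7),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
# All four statutes imply the same 40.0% TC average estimate.
```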

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

1. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The IDS filed 3/24/2025 is considered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

In regard to claim 1, claim 1 recites the limitation "the content" (in multiple lines) and "the context associated with the content" (in multiple lines). There is insufficient antecedent basis for these limitations in the claim. It is unclear whether the recited "the content" is referring back to the recited "a portion of content" or some other content. "Context" has not been defined in the claims, and it is unclear what the recited "the context associated with the content" is referring to. Accordingly, the claim is indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In regard to claims 2-11, these claims are rejected at least based on their dependency to claim 1.

Further, in regard to claim 3, claim 3 recites "a second user interface elements" in line 4. It is unclear whether this limitation is singular or plural, as the recited "a" would indicate singular but the recited "elements" indicates plural. Accordingly, the claim is indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

In regard to claim 12, claim 12 recites similar subject matter as claim 1 and is rejected for similar reasons.

In regard to claims 13-17, these claims are rejected at least based on their dependency to claim 12. Further, in regard to claim 14, claim 14 recites similar subject matter as claim 3 and is rejected for similar reasons.

In regard to claim 18, claim 18 recites similar subject matter as claim 1 and is rejected for similar reasons.

In regard to claims 19-20, these claims are rejected at least based on their dependency to claim 18.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

5. Claims 2 and 13 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. The subject matter of claims 2 and 13 is already recited in claims 1 and 12, respectively, and therefore fails to further limit the subject matter of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claim(s) 1, 2, 5, 6, 9-13, and 17 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sivaji et al. (US 2018/0150446 A1).

In regard to claim 1, Sivaji discloses a method for content shaping using one or more generative styles, the method comprising (Paragraphs 0047-0056): determining the one or more generative styles applicable to at least a portion of content based on the content and the context associated with the content, the one or more generative styles representing one or more style features adapted to shape the at least a portion of the content (Fig. 11 elements 1102 and 1104, Paragraph 0033, Paragraph 0034, and Paragraph 0035: document templates having the highest score based on analyzing objects placed in a document, including size, shape, text, and the subject matter of the objects are determined); generating first user interface elements representing the one or more generative styles based on the content and the context associated with the content (Fig. 11 element 1106 and Paragraph 0035: the determined document templates are output via the user interface); receiving a first generative style selected from the one or more generative styles via the first user interface elements (Fig. 11 element 1108 and Paragraph 0036: selection of a document template is received); applying the selected generative style to a selected portion of the content; and causing a display of the content transformed based on the selected generative style (Fig. 11 element 1110 and Paragraph 0037: the selected document template is applied to the document, which comprises modifying the size, shape, and orientation of the object).

In regard to claim 2, Sivaji discloses wherein generating the first user interface elements representing the one or more generative styles comprises generating the first user interface elements representing the one or more generative styles based on the content and the context associated with the content (Fig. 11 element 1106 and Paragraph 0035: the determined document templates are output via the user interface).

In regard to claim 5, Sivaji discloses further comprising receiving a modification to the first generative style via a user input (Paragraph 0024 lines 22-24: user may modify size, position, or orientation of any object).

In regard to claim 6, Sivaji discloses wherein the user input includes modifying one or more style features associated with the first generative style (Paragraph 0024 lines 22-24: user may modify size, position, or orientation of any object).
In regard to claim 9, Sivaji discloses wherein the one or more style features includes generating, formatting, and/or styling text, an image, a table, a graph, an audio, a video, a 3D object, interactive content, and/or other data based on the at least a portion of the content (Paragraph 0037: modifying size, shape, and orientation of objects).

In regard to claim 10, Sivaji discloses wherein the first user interface elements are communicatively coupled to an application that has the content (Fig. 3, Paragraph 0020 lines 6-10, and Paragraph 0024: the document templates are provided within the document editing application user interface).

In regard to claim 11, Sivaji discloses wherein applying the selected generative style to the selected portion of the content comprises applying the selected generative style to the selected portion of the content directly in the application (Fig. 4, Paragraph 0020 lines 6-10, and Paragraph 0025 lines 1-7: the document template is applied to content (e.g. object) in the document editing application).

In regard to claims 12, 13, and 17, device claims 12, 13, and 17 correspond generally to method claims 1, 2, and 9, respectively, and recite similar features in device form, and therefore are rejected under the same rationale.

7. Claim(s) 1, 3, 4, 12, 14, 15, 18, and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hwang (US 2019/0042551 A1).

In regard to claim 1, Hwang discloses a method for content shaping using one or more generative styles, the method comprising: determining the one or more generative styles applicable to at least a portion of content based on the content and the context associated with the content, the one or more generative styles representing one or more style features adapted to shape the at least a portion of the content (Paragraph 0153: the summary generative style is identified after an article (e.g. content) is displayed (e.g. context associated with the content) and after a predetermined button or user command of a predetermined pattern is input (e.g. context associated with the content)); generating first user interface elements representing the one or more generative styles based on the content and the context associated with the content (Fig. 8 and Paragraph 0153: summary icon 810 is displayed); receiving a first generative style selected from the one or more generative styles via the first user interface elements (Paragraph 0154: the icon 810 is selected); applying the selected generative style to a selected portion of the content; and causing a display of the content transformed based on the selected generative style (Fig. 8, Paragraph 0156, and Paragraph 0157: a summary of the content is generated and displayed).

In regard to claim 3, Hwang discloses further comprising: determining whether one or more variations of the first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style (Fig. 8 and Paragraphs 0154-0155: document summary settings are determined and displayed in UI 820).

In regard to claim 4, Hwang discloses wherein applying the selected generative style to the selected portion of the content comprises: receiving a second generative style from the one or more variations of the first generative style via the second user interface elements; and applying the second generative style to the selected portion of the content (Paragraphs 0156-0157: the summary is generated based on the set summary settings).

In regard to claims 12, 14, and 15, device claims 12, 14, and 15 correspond generally to method claims 1, 3, and 4, respectively, and recite similar features in device form, and therefore are rejected under the same rationale.
In regard to claim 18, Hwang discloses a method for content shaping using one or more generative styles, the method comprising: determining the one or more generative styles applicable to at least a portion of content created in an application based on the content and the context associated with the content, the one or more generative styles representing one or more style features adapted to shape the at least a portion of the content (Paragraph 0153: the summary generative style is identified after an article (e.g. content) is displayed (e.g. context associated with the content) and after a predetermined button or user command of a predetermined pattern is input (e.g. context associated with the content)); generating first user interface elements representing the one or more generative styles based on the content and the context associated with the content (Fig. 8 and Paragraph 0153: summary icon 810 is displayed); causing a display of the first user interface elements (Fig. 8 and Paragraph 0153: summary icon 810 is displayed); receiving a first generative style selected from the one or more generative styles via the first user interface elements (Paragraph 0154: the icon 810 is selected); determining whether one or more variations of the first generative style exist; in response to determining that the one or more variations of the first generative style exist, generating a second user interface element representing the one or more variations of the first generative style; and causing a display of the second user interface elements (Fig. 8 and Paragraphs 0154-0155: the document summary settings are determined and displayed in UI 820).
In regard to claim 20, Hwang discloses wherein: causing the display of the first user interface elements comprises causing the display of the first user interface elements as a popup window on top of the application; and causing the display of the second user interface elements comprises causing the display of the second user interface elements as a popup window on top of the application (Fig. 8, Paragraphs 0153 and 0154: summary icon 810 and UI 820 are displayed as separate layers overlaying the content).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claim(s) 3, 14, 18, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sivaji et al. (US 2018/0150446 A1) and further in view of Thibault (US 2010/0070930 A1).

In regard to claim 3, while Sivaji teaches the first generative style, they fail to show the determining whether one or more variations of the first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style, as recited in the claims. Thibault teaches a first generative style similar to that of Sivaji.
In addition, Thibault further teaches determining whether one or more variations of a first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style (Fig. 3F and Paragraph 0164: when multiple variations of a selected template exist, the variations are displayed for selection).

It would have been obvious to one of ordinary skill in the art, having the teachings of Sivaji and Thibault before him before the effective filing date of the claimed invention, to modify Sivaji to include the determining whether one or more variations of a first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style of Thibault, in order to obtain determining whether one or more variations of the first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style. It would have been advantageous for one to utilize such a combination as providing templates in different languages, as suggested by Thibault (Paragraph 0164 lines 5-8).

In regard to claim 14, device claim 14 corresponds generally to method claim 3 and recites similar features in device form, and therefore is rejected under the same rationale.
In regard to claim 18, Sivaji discloses a method for content shaping using one or more generative styles, the method comprising (Paragraphs 0047-0056): determining the one or more generative styles applicable to at least a portion of content created in an application based on the content and the context associated with the content, the one or more generative styles representing one or more style features adapted to shape the at least a portion of the content (Fig. 11 elements 1102 and 1104, Paragraph 0033, Paragraph 0034, and Paragraph 0035: document templates having the highest score based on analyzing objects placed in a document, including size, shape, text, and the subject matter of the objects are determined); generating first user interface elements representing the one or more generative styles based on the content and the context associated with the content; causing a display of the first user interface elements (Fig. 11 element 1106 and Paragraph 0035: the determined document templates are output via the user interface); receiving a first generative style selected from the one or more generative styles via the first user interface elements (Fig. 11 element 1108 and Paragraph 0036: selection of a document template is received).

While Sivaji teaches the first generative style, they fail to show the determining whether one or more variations of the first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style, as recited in the claims. Thibault teaches a first generative style similar to that of Sivaji.
In addition, Thibault further teaches determining whether one or more variations of a first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style; and causing display of the second user interface elements (Fig. 3F and Paragraph 0164: when multiple variations of a selected template exist, the variations are displayed for selection).

It would have been obvious to one of ordinary skill in the art, having the teachings of Sivaji and Thibault before him before the effective filing date of the claimed invention, to modify Sivaji to include the determining whether one or more variations of a first generative style exist; and in response to determining that the one or more variations of the first generative style exist, generating a second user interface elements representing the one or more variations of the first generative style; and causing display of the second user interface elements of Thibault, in order to obtain determining whether one or more variations of the first generative style exist; in response to determining that the one or more variations of the first generative style exist, generating a second user interface element representing the one or more variations of the first generative style; and causing a display of the second user interface elements. It would have been advantageous for one to utilize such a combination as providing templates in different languages, as suggested by Thibault (Paragraph 0164 lines 5-8).

In regard to claim 19, Sivaji discloses displaying user interface elements corresponding to generative styles as part of user interface elements of the application (Fig. 3, Paragraph 0020 lines 6-10, and Paragraph 0024: the document templates are provided within the document editing application user interface).
Accordingly, the combination would further reasonably disclose wherein: causing the display of the first user interface elements comprises causing the display of the first user interface elements as part of user interface elements of the application; and causing the display of the second user interface elements comprises causing the display of the second user interface elements as part of user interface elements of the application (The rejection of claim 18 is incorporated herein in its entirety. As Sivaji teaches that the document templates are displayed as part of the application user interface, Sivaji suggests causing the display of the first user interface elements as part of user interface elements of the application, and the combination would reasonably suggest causing the display of the second user interface elements as part of user interface elements of the application).

9. Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sivaji et al. (US 2018/0150446 A1) and further in view of Sarrafzadeh et al. (US 11763075 B1), hereinafter referred to as Sarraf.

In regard to claim 8, while Sivaji teaches determining the one or more generative styles applicable to at least a portion of content, they fail to show the using a generative model, the generative model is trained to output one or more style features suggested for shaping the content, as recited in the claims. Sarraf teaches generative styles similar to that of Sivaji. In addition, Sarraf further teaches determining one or more generative styles applicable to at least a portion of content using a generative model, the generative model is trained to output one or more style features suggested for shaping the content (Column 3 line 66-Column 14 line 13: machine learning model identifies and presents recommended document templates).
It would have been obvious to one of ordinary skill in the art, having the teachings of Sivaji and Sarraf before him before the effective filing date of the claimed invention, to modify the determining the one or more generative styles applicable to at least a portion of content taught by Sivaji to include the determining one or more generative styles applicable to at least a portion of content using a generative model, the generative model is trained to output one or more style features suggested for shaping the content of Sarraf, in order to obtain wherein determining the one or more generative styles applicable to at least a portion of content comprises determining the one or more generative styles applicable to at least a portion of content using a generative model, the generative model is trained to output one or more style features suggested for shaping the content. It would have been advantageous for one to utilize such a combination as enabling users to quickly and efficiently identify templates that are likely to correspond with the input document, as suggested by Sarraf (Column 4 lines 29-32).

In regard to claim 16, device claim 16 corresponds generally to method claim 8 and recites similar features in device form, and therefore is rejected under the same rationale.

Claims NOT Rejected Over the Prior Art

10. Claim 7 is objected to as being dependent upon a rejected base claim, as claim 7 recites subject matter in combination with the other elements recited that is not disclosed in the prior art of record. Claim 7 would NOT be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as claim 7 stands rejected under 35 U.S.C. 112.
The prior art of record fails to disclose or suggest the recited "wherein the user input includes dragging a third generative style of the one or more generative styles to the first generative style of the one or more generative styles to combine the first and third generative styles". Although dragging-and-dropping inputs are well-known in the state of the art, there is no motivation found to modify the prior art of record to teach the specificity of dragging a third generative style of the one or more generative styles to the first generative style of the one or more generative styles to combine the first and third generative styles in order to provide user input for modifying one or more style features associated with the first generative style, as required by the claims.

Conclusion

11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Chen et al. (US 2025/0117973 A1), see at least Fig. 4.
Shirani et al. (US 2022/0358280 A1), see at least the abstract and Fig. 9.
Lundin et al. (US 2022/0245322 A1), see at least the abstract.
Meng et al. (US 2022/0222432 A1), see at least Fig. 11.
Goyal et al. (US 2022/0121879 A1), see at least the abstract, Fig. 4, and Fig. 5.
Mishra et al. (US 2020/0311195 A1), see at least the abstract, Fig. 2, and Fig. 4.
Dubey et al. (US 2020/0265113 A1), see at least the abstract, Figs. 8-13, and Paragraphs 0072-0080.

12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S ULRICH whose telephone number is (571)270-1397. The examiner can normally be reached M-F 8-4. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571)272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

13. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Nicholas Ulrich/
Primary Examiner, Art Unit 2179

Prosecution Timeline

Mar 29, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602239
SYSTEMS AND METHODS FOR RESOLVING AN ERROR CONDITION OF A MEDICAL DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12578844
DATA PROCESSING METHOD FOR INTERACTION MODE SWITCHING, ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12572264
GRAPHICAL USER INTERFACE FOR ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) COGNITIVE SIGNALS ANALYSIS
2y 5m to grant Granted Mar 10, 2026
Patent 12572255
NOTIFICATION MESSAGE DISPLAY METHOD AND APPARATUS, DEVICE, READABLE STORAGE MEDIUM, AND CHIP
2y 5m to grant Granted Mar 10, 2026
Patent 12561509
Page Layout Method and Apparatus
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview (+7.6%): 77%
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
