Prosecution Insights
Last updated: April 19, 2026
Application No. 17/609,686

RECOMMENDING THEME PATTERNS OF A DOCUMENT

Final Rejection — §103, §112
Filed
Nov 08, 2021
Examiner
SHIBEROU, MAHELET
Art Unit
2171
Tech Center
2100 — Computer Architecture & Software
Assignee
Microsoft Technology Licensing, LLC
OA Round
6 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 409 granted / 561 resolved; +17.9% vs TC avg)
Interview Lift: +27.8% for resolved cases with interview (a strong lift)
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 592 total applications across all art units
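The headline figures above are simple ratios over the examiner's resolved docket. A minimal sketch of the arithmetic, using only the 409/561 totals shown above (the one-decimal rounding convention is an assumption):

```python
# Career allow rate from the examiner's resolved-case totals shown above.
granted = 409
resolved = 561

career_allow_rate = granted / resolved  # fraction of resolved cases granted
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~72.9%, displayed as 73%
```

The dashboard's 73% is this ratio rounded to the nearest whole percent.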

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 561 resolved cases.

Office Action

§103, §112
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This office action is in response to the Amendment received 7/21/2025.

3. Claims 1-2, 4-12, and 14-16 are pending in the application. Claims 1, 12, and 15 are independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

4. Claims 1-2, 4-12, and 14-16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Specifically, independent claims 1, 12, and 15 recite "obtain at least one image at least according to the current content of the text document, wherein the at least one image is retrieved from an image database by determining that at least one candidate image in the image database satisfies a similarity measure with the identified current content and, in response to determining that at least one candidate image in the image database satisfies the similarity measure with the identified current content, retrieving the at least one candidate image from the image database." However, the specification does not support the cited step of determining that at least one candidate image in the image database satisfies the similarity measure with the identified current content and retrieving the at least one candidate image from the image database.

Applicant cited the following as specification support (see page 8 of remarks): [0028] "candidate images are scored using a similarity measure between image features and features extracted from the current document content, only images whose similarity score exceeds a threshold are retrieved"; [0030] "the similarity measure may be cosine similarity in an embedding space output by a neural network"; and Fig. 2 (block 208, "Retrieve image whose similarity score > threshold"). However, these paragraphs, and the specification as a whole, do not disclose the newly added features, specifically the "similarity measure" being satisfied or not satisfied as claimed in independent claims 1, 12, and 15 and new claim 16. Rather, the specification (e.g., paragraph 0030) appears to describe matching the content of a document with images in the database and automatically scoring or ranking candidate images generated from text of the document.

The remaining claims 2, 4-11, 14, and 16 depend on the independent claims cited above and do not remedy the discussed issue.
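The retrieval step at the center of this §112 dispute (per the cited ¶[0028]-[0030]: score candidates by cosine similarity, retrieve those above a threshold) can be sketched as follows. This is an illustrative reading only; the embedding representation, database layout, and threshold value are hypothetical, not taken from the specification:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve_images(content_features, image_db, threshold):
    """Return only images whose similarity score exceeds the threshold:
    the 'satisfies the similarity measure' gate recited in claim 1."""
    return [name for name, features in image_db
            if cosine_similarity(content_features, features) > threshold]
```

On this reading, "satisfies the similarity measure" is simply the `> threshold` comparison; the dispute is whether the specification describes that determination as a distinct claimed step.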
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-2, 4-12, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sivaji et al. (USPN 10,713,43, hereinafter Sivaji) in view of Ghoshal et al. (US 20200125574, hereinafter Ghoshal).

In reference to independent claim 1, Sivaji teaches:

detecting, at a document application operating using one or more processors of a computing device, a trigger for providing theme patterns (see Sivaji, Col. 4, lines 47-67; Col. 5, lines 55-57; and Figures 2, 5): a user enters content into a document application, which acts as a trigger to perform content analysis on the content entered by the user. The user may also modify current content, which also serves as a trigger for the process to begin analyzing and providing theme patterns. Figures 2 and 5 illustrate two distinct examples of content input into the document application which caused (i.e., triggered) the analysis by the document application of the input content for providing theme patterns.

identifying current content of the text document (see Sivaji, Col. 5, lines 5-10; Col. 6, lines 1-5; and Figures 3, 5): Figures 3 and 5 provide two distinct examples of a user entering content, namely text and image content in Figure 3 and solely textual content in Figure 5. The textual and image content of Figure 3 and the textual content of Figure 5 are analyzed to identify a particular theme of the content.

providing, in a graphical user interface of the document application, the at least one theme pattern (see Sivaji, Col. 5, lines 1-18; Col. 6, lines 4-7; Figures 3 and 5): suggested theme templates are displayed to the user within the graphical user interface for selection by a user. Figures 3 and 5 illustrate two examples of the application displaying suggested theme templates based on the content input, wherein Figure 3 displays the layouts as item 304 and Figure 5 displays them as item 504.

applying the at least one theme pattern to the text document by modifying visual properties of the current content of the text document and keeping a layout of the current content of the text document (see Sivaji, Col. 5, lines 29-49; Col. 6, lines 13-15 and lines 53-62; Figures 4, 6, and 8): theme templates and objects considered to be recommended objects are displayed, selectable by a user, and applied to the current content of the document. In addition, the layout may be modified in different ways based on applying the selected templates and/or objects. Figures 4, 6, and 8 illustrate different examples where a template/object is applied to the current document to modify visual properties, which the examiner is interpreting as the visual rearrangement of current content in Figures 4, 6, and 8, since there is a change in the visual properties of the current content, both image and textual, based on the applied theme pattern.
Figure 8 illustrates a specific example where the visual properties of the current content have been modified based on the addition of an image; however, the layout of the current content in the document, displayed in the upper left corner of the document, remains the same, thus keeping the layout of the current content of the text document.

Sivaji does not appear to expressly teach: obtaining at least one image at least according to the current content of the text document, wherein the at least one image is retrieved from an image database by determining that at least one candidate image in the image database satisfies a similarity measure with the identified current content and, in response to determining that at least one candidate image in the image database satisfies the similarity measure with the identified current content, retrieving the at least one candidate image from the image database; generating at least one theme pattern according to the at least one image; using a pre-trained machine learning model to identify labels of images of the at least one theme pattern matching the current content of the text document.
Ghoshal teaches obtaining at least one image at least according to the current content of the text document, wherein the at least one image is retrieved from an image database or generated according to text of the current content of the text document; wherein the at least one image is retrieved from an image database by determining that at least one candidate image in the image database satisfies a similarity measure with the identified current content and, in response to determining that at least one candidate image in the image database satisfies the similarity measure with the identified current content, retrieving the at least one candidate image from the image database ("To perform the vector space comparison in step 1208, the content recommendation engine 425 may calculate the Euclidean distance between the feature vector generated in step 1206, and each of the other feature vectors stored in the vector space/spaces 430…. Such techniques may allow the content recommendation engine 425 to determine a set of highest ranking feature vectors within the vector space(s) 430, which are most similar in features/characteristics/etc. to the feature vector generated in step 1206 based on the input received in step 1202. In some cases, a predetermined number (N) of the highest-ranking feature vectors may be selected in step 1208 (e.g., the 5 most similar articles, 10 most similar images, etc.), while in other cases all of the feature vectors satisfying a particular closeness threshold (e.g., distance between vectors < threshold (T)) may be selected." Paragraph 0074);

generating at least one theme pattern according to the at least one image ("after one or more content resources (e.g., images, articles, etc.) have been identified as potentially related to the content currently being created by the user via the user interface 415, the related content resources are transmitted back to the content recommendation engine 425, where they may be retrieved, modified, and embedded into the user interface 415, for example, by the content retrieval/embedding component 445." Paragraph 0025);

using a pre-trained machine learning model to identify labels of images of the at least one image matching the current content of the text document ("As shown in FIG. 21, the keyword-to-tag vector space analysis performed in FIG. 20 may determine that the image tag 'Mountaineer' [is] sufficiently close within the word vector space to the extracted keywords, and thus should be considered as an image tag match for the filtered feature space comparison." Paragraph 0075, last sentence).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sivaji to comprise obtaining at least one image at least according to the current content of the text document, wherein the at least one image is retrieved from an image database by determining that at least one candidate image in the image database satisfies a similarity measure with the identified current content and, in response to determining that at least one candidate image in the image database satisfies the similarity measure with the identified current content, retrieving the at least one candidate image from the image database; generating at least one theme pattern according to the at least one image; and using a pre-trained machine learning model to identify labels of images of the at least one theme pattern matching the current content of the text document.
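The vector-space comparison quoted from Ghoshal's ¶0074 (rank stored feature vectors by Euclidean distance, then keep either a predetermined number N of the highest-ranking vectors or all within a closeness threshold) can be sketched as below. The names and data layout are illustrative only, not Ghoshal's actual implementation:

```python
import math

def select_candidates(query_vec, vector_space, n=None, threshold=None):
    """Rank stored feature vectors by Euclidean distance to the query,
    then keep the N closest, or all within the distance threshold."""
    ranked = sorted(vector_space.items(),
                    key=lambda item: math.dist(query_vec, item[1]))
    if n is not None:
        return [name for name, _ in ranked[:n]]  # top-N selection
    # threshold selection: distance between vectors < threshold (T)
    return [name for name, vec in ranked if math.dist(query_vec, vec) < threshold]
```

Either branch yields the "satisfies the similarity measure" set that the rejection maps onto the claimed retrieval step.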
One would have been motivated to make such a combination to provide an efficient process for the user/author to locate and incorporate any relevant content within their original authored content (Ghoshal [0049]) and to generate content more appealing or attractive to users.

In reference to dependent claim 2, Sivaji teaches: wherein the theme patterns comprise a combination of: a background of a window of the document; a background of a canvas of the document; a format of text presented on the canvas; and an identification presented on the canvas, wherein the identification is associated with at least one of the current content of the document, a creator of the document, and a receiver of the document (see Sivaji, Col. 5, lines 29-50; Col. 6, lines 1-32; Figures 4 and 6): Figure 4 illustrates, in a first display area, a template under item 402 which includes an "about us" format of text presented on the canvas, background information as it relates to the image content, and identification information as it relates to the team members, which the examiner is interpreting as the receiver and/or creator of the document.

In reference to dependent claim 4, Sivaji teaches: identifying a change of the current document (see Sivaji, Col. 5, lines 19-22): a user may want to modify the size, position, or orientation of any of the objects already in the document; and wherein the at least one image is obtained further according to the change (see Sivaji, Col. 5, lines 21-23): the interaction may lead to a modification of the suggested template images.
In reference to dependent claim 5, Sivaji teaches: identifying other information related to the document, wherein the other information related to the document includes one or more of a profile of a creator of the document, a history usage record of the creator with respect to theme patterns, a profile of a receiver of the document, and information for target entities of the document determined from other applications, wherein the at least one image is obtained further according to the other information (see Sivaji, Col. 5, lines 20-29 and Figure 3): multiple collaborators may insert objects into the document and lead to a modification of the suggested templates. Further, the examiner is interpreting "information for target entities of the document" using Figure 3, which illustrates user content input that relates to information (i.e., team member information) for target entities (i.e., individuals currently on the team). Further, image content is obtained describing at least one of the current team members.

In reference to dependent claim 6, Sivaji teaches: identifying other information related to the document further comprises: determining that there is a plurality of different receivers of the document (see Sivaji, Col. 5, lines 20-29 and Figure 3): the document may be shared with multiple collaborators. Further, the examiner is interpreting "determining there are a plurality of different receivers of the document" as the analysis of team member information input by the user and displayed in Figure 3, which is analyzed and thus determined to include information for different receivers of the document. Providing the at least one theme pattern further comprises: providing a plurality of theme patterns related to the current content of the document, wherein each of the plurality of theme patterns is associated with a receiver in the plurality of different receivers (see Sivaji, Col. 5, lines 20-29 and Figure 4): multiple collaborators may insert objects into the document and lead to a modification of the suggested templates. Figure 4 illustrates a plurality of templates that each relate to at least the team members currently input by the user.

In reference to dependent claim 7, Sivaji teaches: wherein the at least one image is retrieved from an image database or generated according to text of the current content of the document (see Sivaji, Col. 8, lines 5-10 and lines 15-18): a means of matching detected metadata of the object to the metadata of objects in each document template in order to generate a score for each template and a ranking of each template. The specific text "CAMP SCHEDULE" is matched to scored/ranked templates.

In reference to dependent claim 8, Sivaji teaches: wherein generating the at least one theme pattern further comprises: generating, according to the at least one image, one or more of a background of a window of the document, a background of a canvas of the document, and a format of text of the document (see Sivaji, Figure 6): a means of selecting an image from the "RoadMap" section (item 602) and displaying a similar background of the browser window, a background of a specific object within the text content of the document, and a specific format of textual content within the document.

In reference to dependent claim 9, Sivaji teaches: wherein the trigger comprises one or more of an activation operation on the document, an editing operation on text of the document, and reception of a request for providing theme patterns (see Sivaji, Col. 4, lines 56-58; Col. 5, lines 4-6; and Col. 5, lines 55-57): based on the addition of the words "Camp Schedule" in the text box, the layout engine filters the layouts in the predefined layout templates. Further, Sivaji teaches that an indication of the analysis of the content added to the document is shown when an assistant button, which may be part of a menu, is highlighted in the interface. In response to the button being selected, the suggested templates are displayed.

In reference to dependent claim 10, Sivaji teaches: wherein the current content of the document includes one or more of: format of the text of the document, keywords in the document, and language used in the document (see Sivaji, Col. 6, lines 3-6): based on the presence of the word "SCHEDULE" in the text box, the suggested templates contain templates depicting a timeline or a roadmap.

In reference to dependent claim 11, Sivaji teaches: detecting a selection of one of the at least one theme pattern in the graphical user interface and, in response to detecting the selection, applying the one of the at least one theme pattern to the document (see Sivaji, Col. 6, lines 4-7): suggested theme templates are displayed to the user within the graphical user interface for selection by a user; and applying the theme template (see Sivaji, Col. 6, lines 13-15; Figure 6) to the presentation document includes the application of a selected layout template from one or more suggested layout templates.

In reference to claims 12 and 14, the claims recite a computer-readable storage medium including computer-executable instructions for carrying out limitations similar to those found in method claims 1 and 4, respectively. Therefore, the claims are rejected under similar rationale.

In reference to independent claim 15, the claim recites an apparatus for carrying out limitations for recommending theme patterns similar to those recited in independent claim 1. Therefore, the claim is rejected under similar rationale.

6. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Sivaji et al. in view of Ghoshal et al., and further in view of Malur Srinivasan et al. (US 20190266442 A1, hereinafter Malur Srinivasan).
In reference to dependent claim 16, Sivaji does not appear to expressly teach the method of claim 1, wherein obtaining the at least one image further comprises: determining that no candidate image in the image database satisfies the similarity measure with the identified current content; and, in response to determining that no candidate image in the image database satisfies the similarity measure with the identified current content, generating the at least one image from text of the current content using a text-to-image generation model that employs a generative adversarial network (GAN).

Ghoshal teaches determining that no candidate image in the image database satisfies the similarity measure with the identified current content ("a predetermined number (N) of the highest-ranking feature vectors may be selected in step 1208 (e.g., the 5 most similar articles, 10 most similar images, etc.), while in other cases all of the feature vectors satisfying a particular closeness threshold (e.g., distance between vectors < threshold (T)) may be selected." Paragraph 0074; this implies that images that do not satisfy the similarity measure are not selected).

Sivaji and Ghoshal do not appear to expressly teach, in response to determining that no candidate image in the image database satisfies the similarity measure with the identified current content, generating the at least one image from text of the current content using a text-to-image generation model that employs a generative adversarial network (GAN). Malur Srinivasan teaches generating the at least one image from text of the current content using a text-to-image generation model that employs a generative adversarial network (GAN) ("Further, in response to user input 313, attribute generator 310 is configured to generate image 314," paragraph 0025).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sivaji to comprise wherein obtaining the at least one image further comprises: determining that no candidate image in the image database satisfies the similarity measure with the identified current content; and, in response to determining that no candidate image in the image database satisfies the similarity measure with the identified current content, generating the at least one image from text of the current content using a text-to-image generation model that employs a generative adversarial network (GAN). One would have been motivated to make such a combination to improve the accuracy of images generated.

Response to Arguments

7. Applicant's prior art arguments filed 7/21/2025 have been fully considered, but they are moot in view of the rejection presented above.

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Mei et al. (US 20160275067 A1) teaches domain-based generation of communication media content layout.
Jin et al. (US 2018/0082156) teaches font replacement based on visual similarity.
He et al. (US 10558701) teaches a method and system to recommend an image in a social application.
Heyward et al. (US 2016/0070804) teaches a system and method for automatically selecting images to accompany text.
Penta et al. (US 20200175061 A1) teaches an image retrieval system to obtain a text descriptor associated with the image descriptor in the descriptor repository. The image retrieval system may generate a document query comprising a search parameter, the search parameter including the text descriptor. The image retrieval system may identify, in a document database, text documents based on the document query.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
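The claim-16 limitation at issue is a retrieve-or-generate fallback: retrieve when some candidate satisfies the similarity measure, otherwise synthesize an image from the current content's text. A minimal sketch of that control flow, with a stand-in `generate_fn` in place of a real text-to-image (e.g., GAN-based) model and a simple cosine score standing in for whatever similarity measure the claim covers:

```python
import math

def _cosine(a, b):
    """Illustrative stand-in similarity measure over feature vectors."""
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

def obtain_images(content_features, content_text, image_db, threshold, generate_fn):
    """Retrieve candidates that satisfy the similarity measure; if none do,
    fall back to generating an image from the current content's text."""
    matches = [name for name, features in image_db
               if _cosine(content_features, features) > threshold]
    if matches:
        return matches              # at least one candidate satisfies the measure
    return [generate_fn(content_text)]  # no candidate satisfies: generate instead
```

The combination rationale maps the `matches` branch to Ghoshal and the fallback branch to Malur Srinivasan.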
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHELET SHIBEROU, whose telephone number is (571) 270-7493. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM Eastern Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHELET SHIBEROU/
Primary Examiner, Art Unit 2171

Prosecution Timeline

Nov 08, 2021
Application Filed
Feb 09, 2023
Non-Final Rejection — §103, §112
May 15, 2023
Response Filed
Aug 08, 2023
Final Rejection — §103, §112
Oct 12, 2023
Request for Continued Examination
Oct 17, 2023
Applicant Interview (Telephonic)
Oct 17, 2023
Examiner Interview Summary
Oct 19, 2023
Response after Non-Final Action
Jul 02, 2024
Non-Final Rejection — §103, §112
Oct 02, 2024
Response Filed
Jan 17, 2025
Final Rejection — §103, §112
Mar 12, 2025
Response after Non-Final Action
Apr 24, 2025
Request for Continued Examination
May 01, 2025
Response after Non-Final Action
May 17, 2025
Non-Final Rejection — §103, §112
Jul 21, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596535: Editing User Interfaces using Free Text (2y 5m to grant; granted Apr 07, 2026)
Patent 12591348: ELECTRONIC DEVICE FOR CONTROLLING DISPLAY OF MULTIPLE WINDOW, OPERATION METHOD THEREOF, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12591419: Prompt Based Hyper-Personalization of User Interfaces (2y 5m to grant; granted Mar 31, 2026)
Patent 12578845: CUSTOMIZED GRAPHICAL USER INTERFACE GENERATION GRAPHICALLY DEPICTING ICONS VIA A COMPUTER SCREEN (2y 5m to grant; granted Mar 17, 2026)
Patent 12572270: USER INTERFACE FOR DISPLAYING AND MANAGING WIDGETS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 73%
With Interview: 99% (+27.8%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 561 resolved cases by this examiner. Grant probability derived from career allow rate.
