Prosecution Insights
Last updated: April 19, 2026
Application No. 18/336,631

COMPREHENSIVE SEARCHES BASED ON TEXT SUMMARIES

Non-Final OA §103

Filed: Jun 16, 2023
Examiner: TRACY JR., EDWARD
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)

Grant Probability: 77% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 81 granted / 105 resolved; +15.1% vs TC avg)
Interview Lift: +35.7% on resolved cases with an interview (strong)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 26
Total Applications: 131 (career history, across all art units)

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 71.9% (+31.9% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 105 resolved cases

Office Action

§103
Introduction

1. This Office action is in response to Applicant's submission filed on 7/30/2025. Claims 1-20 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 10/31/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Continued Examination Under 37 CFR 1.114

4. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/2/2026 has been entered.

Response to Arguments

5. The Amendment filed 1/2/2026 has been entered and fully considered. With regard to the rejections under 35 USC 103, the arguments with respect to Claims 1 and 15 are rendered moot by the new grounds of rejection below based on U.S. Pat. App. Pub. No. 20250005303 (Gray et al., hereinafter "Gray"). With respect to Claim 10, the rejection is maintained. Claim 10 now recites that the two search results are concurrently displayed. As Berg describes displaying the search results to a user, including the first search result, and Liao describes displaying the second search result, concurrent display of both results is rendered obvious by the proposed combination. There would be only two possibilities, concurrent and alternative display, and one of ordinary skill would be able to select concurrent display based on the teachings of the references and convenience to the user.

Claim Rejections - 35 USC § 103

6.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1 and 3-9 are rejected under 35 U.S.C. 103 as unpatentable over U.S. Pat. App. Pub. No. 20240403341 (Berglund et al., hereinafter "Berg") in view of U.S. Pat. App. Pub. No. 20240273105 (Martigny et al., hereinafter "Mart") and U.S. Pat. App. Pub. No. 20250005303 (Gray et al., hereinafter "Gray").

With regard to Claim 1, Berg describes: "A computer-implemented method including: obtaining a content item having text and an image; (Paragraph 14 describes that a content repository stores a plurality of content items that can be searched with a query.) generating, via a text embedding model, a text embedding representing the text summary; and (Paragraph 14 describes that the text chunks are turned into text embeddings.)
storing the text embedding representing the text summary of the content item, the text embedding stored for subsequently performing a [[semantic]] search to determine that the content item is relevant to a search query." (Paragraph 14 describes that the text embeddings are compared to query embeddings to determine a similarity between the text embedding and the query embedding.)

Berg does not explicitly describe "generating a text summary that summarizes the content item by providing to a large language model (LLM) a model prompt including the text of the content item and text associated with the image, the text summary representing an entirety of the content item by removing at least a portion of the text of the content item while retaining key points associated with the content item" or comparing the summary to the query using a semantic search. However, paragraph 40 of Mart describes that a semantic search can be used to find content similar to a query. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the semantic search as described by Mart into the system of Berg to determine the most relevant content to a query, as described in paragraph 40 of Mart.

Berg in view of Mart does not explicitly describe "generating a text summary that summarizes the content item by providing to a large language model (LLM) a model prompt including the text of the content item and text associated with the image, the text summary representing an entirety of the content item by removing at least a portion of the text of the content item while retaining key points associated with the content item." However, paragraph 65 of Gray describes generating text summaries by inputting content into a model. The content may include text, images, and captions of the images.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the summarization as described by Gray into the system of Berg in view of Mart to efficiently generate text summaries for search results, as described in paragraph 65 of Gray.

With regard to Claim 3, Berg describes "the content item summarized by the text summary comprises a set of one or more pages, a set of one or more sections, or a set of one or more paragraphs." Paragraph 27 of Berg describes that the content is broken into sentence-length sections, and paragraph 29 describes that these sections are used to determine the corresponding text chunks.

With regard to Claim 4, Berg describes "the text of the content item includes an image caption generated for an image of the content item." Paragraph 27 describes that video content can be summarized by text transcripts, which are cited as an "image caption."

With regard to Claim 5, Berg describes "generating an image caption for an image of the content item and incorporating the image caption in the content item such that the content item having the image caption is summarized in the text summary." Paragraph 27 describes that video content can be summarized by text transcripts, which are cited as an "image caption." The transcripts are divided into sentence-length sections, which are used to create the text chunk summaries.

With regard to Claim 6, Berg describes: "obtaining the search query; (Paragraph 14 describes that a search query is received.) generating a query text embedding, via the text embedding model, that represents the search query; (Paragraph 14 describes that the query is converted to a query embedding.)
performing the [[semantic]] search to determine that the content item is relevant to the search query by comparing the query text embedding to the text embedding representing the text summary to analyze similarity between the query text embedding and the text embedding representing the text summary; and (Paragraph 14 describes that a similarity analysis is done between the query embedding and the text chunk embedding to determine relevant content.) providing a search result corresponding with the content item for presentation in response to the search query." (Paragraph 14 describes that an answer is returned to the user, which may include the relevant content.)

Berg does not explicitly describe comparing the summary to the query using a semantic search. However, paragraph 40 of Mart describes that a semantic search can be used to find content similar to a query. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the semantic search as described by Mart into the system of Berg to determine the most relevant content to a query, as described in paragraph 40 of Mart.

With regard to Claim 7, Berg describes "the search result includes a result context that indicates at least a portion of the text summary that corresponds with the search query." Paragraph 14 describes that an answer is returned to the user, which may include the content which contained the relevant text chunks.

With regard to Claim 8, Berg describes: "obtaining a user feedback modifying the at least the portion of the text summary that corresponds with the search query; (Paragraph 53 describes that the user can provide feedback. Paragraph 18 describes that the user can modify the content included in the content store.) updating the text summary to incorporate the user feedback; and (Paragraphs 27-29 describe how the device will create the text chunk summary for modified content.)
generating a new text embedding for the updated text summary." (Paragraph 14 describes that the text chunks are used to create the text embeddings.)

With regard to Claim 9, Berg does not explicitly describe this subject matter. However, Mart describes: "generating, via a lexical data model, lexical search data based on the content item or the text summary; and (Paragraph 87 describes that a lexical search is done of stored content.) storing the lexical search data for subsequently performing a lexical search to determine that the content item is relevant to a particular search query." (Paragraph 87 describes that a ranking of lexical similarity is created.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the lexical search as described by Mart into the system of Berg to determine the most relevant lexical content to a query, as described in paragraph 87 of Mart.

8. Claims 10-13 are rejected under 35 U.S.C. 103 as unpatentable over Berg in view of Mart and U.S. Pat. App. Pub. No. 20100312764 (Liao et al., hereinafter "Liao").

With regard to Claim 10, Berg describes: "A computer-implemented method comprising: obtaining a search query; (Paragraph 14 describes that a search query is received.) generating a query text embedding to represent the search query; (Paragraph 14 describes generating a query embedding.) comparing the query text embedding to a set of text embeddings representing text summaries generated for corresponding content items having text; (Paragraph 14 describes comparing the query embedding to text chunk embeddings.) based on the comparing, identifying a first content item, of the content items, as semantically similar to the search query; and (Paragraph 14 describes determining similar content items based on the comparison.)
providing, for [[concurrent]] display, a search result set comprising [[both]] an indication of the first content item identified as [[semantically]] similar to the search query." (Paragraph 14 describes that an answer is returned to the user, which may include the relevant content.)

Berg does not explicitly describe "using the search query to perform a prefix search to identify a second content item, of the content items, as lexically similar to the search query," concurrently displaying the second content item, or comparing the summary to the query using a semantic search. However, paragraph 40 of Mart describes that a semantic search can be used to find content similar to a query. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the semantic search as described by Mart into the system of Berg to determine the most relevant content to a query, as described in paragraph 40 of Mart.

Berg in view of Mart does not explicitly describe "using the search query to perform a prefix search to identify a second content item, of the content items, as lexically similar to the search query," and displaying the second content item. However, paragraph 14 of Liao describes performing a search on a prompt to find lexically similar content. Further, the search results are displayed in 1382 as described in paragraph 47. Further, it would have been obvious to concurrently display this search result with the first search result. The combination of the references describes displaying the search results to the user, which would be all the search results at once. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the lexical search as described by Liao into the system of Berg in view of Mart to determine similar content to a query, as described in paragraph 83 of Liao.
With regard to Claim 11, Berg describes "the content item includes an image, and wherein a text summary generated for the content item is based on an image caption generated for the image of the content item." Paragraph 27 describes that video content can be summarized by text transcripts, which are cited as an "image caption."

With regard to Claim 12, Berg describes "the content item is identified as [[semantically]] similar to the search query based on a similarity distance between the query text embedding and a text embedding representing a text summary generated for the content item." Paragraph 51 describes that a similarity distance is computed between the text embedding and the query embedding. Berg does not explicitly describe comparing the summary to the query using a semantic search. However, paragraph 40 of Mart describes that a semantic search can be used to find content similar to a query. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the semantic search as described by Mart into the system of Berg to determine the most relevant content to a query, as described in paragraph 40 of Mart.

With regard to Claim 13, Berg describes "the search result set includes a result context that indicates at least a portion of a text summary, generated for the content item, that corresponds with the search query." Paragraph 14 describes that an answer is returned to the user, which may include the content which contained the relevant text chunks.

9. Claims 15 and 18-20 are rejected under 35 U.S.C. 103 as unpatentable over Berg in view of Gray.
With regard to Claim 15, Berg describes "One or more computer storage media having computer-executable instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform a method, (Paragraph 88) the method comprising: obtaining a content item including an image; (Paragraph 14 describes that a content repository stores a plurality of content items. Paragraph 17 describes that some of the content may be images.) for a search query, performing a search in association with the text summary that summarizes the content item to determine that the content item is relevant to the search query; and (Paragraph 14 describes that the text embeddings are compared to query embeddings to determine a similarity between the text embedding and the query embedding.) providing, for display, a search result indicating the content item determined to be relevant to the search query. (Paragraph 53 describes that search results are provided to a user on a display.)

Berg does not explicitly describe: "generating, via a machine learning model, a text summary that summarizes the content item, wherein generating the text summary comprises inputting the image caption into the machine learning model and obtaining, in response, the text summary that summarizes at least the image caption; generating, via an image-to-text model, an image caption providing a text description of the image." However, Gray describes "generating, via a machine learning model, a text summary that summarizes the content item, wherein generating the text summary comprises inputting the image caption into the machine learning model and obtaining, in response, the text summary that summarizes at least the image caption; (Paragraph 65 of Gray describes generating text summaries by inputting content into a model. The content may include text, images, and captions of the images.)
generating, via an image-to-text model, an image caption providing a text description of the image." (Paragraph 65 describes that the model can automatically generate image captions.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the text generation as described by Gray into the system of Berg to efficiently generate text summaries for search results, as described in paragraph 65 of Gray.

With regard to Claim 18, Berg describes "the search result includes an indication of the content item and a result context that indicates at least a portion of the text summary, generated for the content item, that corresponds with the search query." Paragraph 14 describes that an answer is returned to the user, which may include the content which contained the relevant text chunks.

With regard to Claim 19, Berg describes "the search result includes an indication of the content item and a result context that indicates at least a portion of the image caption that corresponds with the search query." Paragraph 14 describes that an answer is returned to the user, which may include the answer description which is based on the relevant text chunks.

With regard to Claim 20, Berg describes: "obtaining a user feedback modifying the at least the portion of the image caption that corresponds with the search query; (Paragraph 53 describes that the user can provide feedback. Paragraph 18 describes that the user can modify the content included in the content store. Modifying video would change the corresponding transcript for the video.) updating the image caption to incorporate the user feedback; and (Paragraphs 27-29 describe how the device will create the text chunk summary for modified content based on the new transcript.) generating a new text embedding for the updated image caption." (Paragraph 14 describes that the text chunks are used to create the text embeddings.)

10.
Claim 16 is rejected under 35 U.S.C. 103 as unpatentable over Berg in view of Gray and further in view of Mart.

With regard to Claim 16, Berg describes "the search comprises a [[semantic]] search performed by: generating a query text embedding to represent the search query; (Paragraph 14 describes generating a query embedding.) generating a content text embedding to represent the text summary of the content item; and (Paragraph 14 describes generating a text chunk embedding.) performing similarity analysis of the query text embedding and the content text embedding to determine [[semantic]] similarity between the search query and the content item. (Paragraph 14 describes comparing the query embedding to text chunk embeddings to determine similarity.) Berg in view of Gray does not explicitly describe comparing the summary to the query using a semantic search. However, paragraph 40 of Mart describes that a semantic search can be used to find content similar to a query. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the semantic search as described by Mart into the system of Berg in view of Gray to determine the most relevant content to a query, as described in paragraph 40 of Mart.

11. Claim 17 is rejected under 35 U.S.C. 103 as unpatentable over Berg in view of Gray and further in view of U.S. Pat. App. Pub. No. 20150058720 (Smadja et al., hereinafter "Sma").

With regard to Claim 17, Berg in view of Gray does not explicitly describe this subject matter. However, Sma describes "for the search query, performing a prefix search to determine a second content item relevant to the search query; and (Paragraph 69 describes performing a lexical prefix search for content.) providing, for display, a second search result indicating the second content item determined to be relevant to the search query. (Paragraph 70 describes that the search results are displayed to the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the lexical prefix search as described by Sma into the system of Berg in view of Gray to more accurately search content with lexically similar words, as described in paragraphs 68 and 69 of Sma.

Conclusion

12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pat. No. 12,050,658 (Mishra et al.) also describes summarizing text and images.

13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD TRACY whose telephone number is (571) 272-8332. The examiner can normally be reached Monday-Friday, 9 AM-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD TRACY JR./
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656
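For context on the technique at the center of these rejections: the Office Action repeatedly maps the claimed "semantic search" onto comparing a query embedding against stored summary embeddings by similarity. A minimal, purely illustrative sketch of that pattern follows; the bag-of-words `embed` is a toy stand-in for a learned text embedding model, and the function names and sample summaries are invented for this sketch, not taken from the cited references.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a production system would use a learned
    # text embedding model rather than raw token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, summaries: dict) -> str:
    # Compare the query embedding against each stored summary embedding
    # and return the id of the most similar content item.
    q = embed(query)
    return max(summaries, key=lambda k: cosine(q, embed(summaries[k])))

summaries = {
    "doc-1": "summary of quarterly sales figures and revenue charts",
    "doc-2": "summary of machine learning model training procedures",
}
print(semantic_search("how to train a model", summaries))  # prints doc-2
```

The key property the rejections lean on is that the comparison happens between embeddings of the query and of the stored text summaries, not between raw strings.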
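The prefix search attributed to Liao and Sma is, by contrast, a lexical match on word form rather than meaning: it finds "training" for the query "train" but would miss a synonym, which is where embedding-based search differs. A hedged illustration, with the function name and sample items invented for this sketch:

```python
def prefix_search(query: str, items: dict) -> list:
    # Lexical prefix match: an item matches when any of its tokens
    # starts with any token of the query.
    q_tokens = query.lower().split()
    hits = []
    for item_id, text in items.items():
        tokens = text.lower().split()
        if any(tok.startswith(q) for q in q_tokens for tok in tokens):
            hits.append(item_id)
    return hits

items = {
    "doc-1": "quarterly sales figures and revenue charts",
    "doc-2": "machine learning model training procedures",
}
print(prefix_search("train", items))  # prints ['doc-2']
```

Claims 10 and 17 pair this lexical result with a semantic result in one result set, which is the "concurrent display" point the applicant and examiner dispute.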

Prosecution Timeline

Jun 16, 2023: Application Filed
May 02, 2025: Non-Final Rejection — §103
Jun 11, 2025: Interview Requested
Jun 18, 2025: Applicant Interview (Telephonic)
Jun 18, 2025: Examiner Interview Summary
Jul 29, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103
Jan 02, 2026: Request for Continued Examination
Jan 21, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103
Apr 09, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566969: METHOD AND APPARATUS FOR TRAINING MACHINE READING COMPREHENSION MODEL, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561524: TRAINING MACHINE LEARNING MODELS TO AUTOMATICALLY DETECT AND CORRECT CONTEXTUAL AND LOGICAL ERRORS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548552: DYNAMIC LANGUAGE SELECTION OF AN AI VOICE ASSISTANCE SYSTEM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548554: SYSTEM AND METHOD FOR ACTIVE LEARNING BASED MULTILINGUAL SEMANTIC PARSER (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536374: METHOD FOR CONSTRUCTING SENTIMENT CLASSIFICATION MODEL BASED ON METAPHOR IDENTIFICATION (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77% (99% with interview, +35.7%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 105 resolved cases by this examiner. Grant probability derived from career allow rate.
