Prosecution Insights
Last updated: April 19, 2026
Application No. 18/429,131

SEARCHING PROGRAMMING CODE REPOSITORIES USING LATENT SEMANTIC ANALYSIS

Final Rejection §103
Filed: Jan 31, 2024
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Intuit Inc.
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved), above average, +21.3% vs TC avg
Interview Lift: +33.3% in resolved cases with interview
Avg Prosecution (typical timeline): 2y 8m; 37 currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 6 resolved cases

Office Action

§103
DETAILED ACTION

This Office action is responsive to amendments and arguments filed on January 20, 2026. Claims 1-2, 4-8, 10-16 and 19-20 are amended, claim 18 is cancelled, and claim 21 is added. Claims 1-17 and 19-21 are pending and have been examined; hence, this action is made FINAL. Any objections/rejections not mentioned in this Office action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Arguments

With respect to rejections made under 35 U.S.C. 101, Applicant argues, “As discussed during the Examiner interview on January 9, 2026, and noted in the Examiner Interview Summary, the Office agreed that amended independent Claims 1, 7, and 13 are directed to patent-eligible subject matter. For at least this reason, Applicant respectfully submits that independent Claims 1, 7, and 13, as well as claims dependent thereon, are in condition for allowance and requests withdrawal of this rejection of the claims under Section 101,” (page 10 of Remarks). Claims as amended recite a technical improvement to the operation of a computer, and therefore are patent-eligible. Accordingly, the rejections under 35 U.S.C. 101 are withdrawn.

With respect to rejections made under 35 U.S.C. 102, Applicant argues, "In particular, an embedding (arguendo 'a vector representation') of the code entity in Smith is not, and cannot be reasonably interpreted to represent, an embedding of a machine learning model-generated natural language description of the code entity," (page 11 of Remarks). Examiner respectfully disagrees. Smith teaches, at column 1, line 45, "Some embodiments relate to using a neural network encoder to generate tensor embeddings of source code and related text in a joint tensor space. 
Relatedness between embeddings in this joint tensor space for text and associated source code is used in some embodiments to facilitate code search," and column 22, line 5, "Code snippets may be tagged with an embedding vector to identify the higher level task that it is a part of. In some embodiments, an embedding vector may be created based on words in documentation or other textual sources associated with the multiple code snippets." The Specification discloses, at paragraph [0024], "The resulting natural language descriptions, or code summaries, received from the machine learning model can then be converted into multi-dimensional vector representations using embedding techniques that preserve semantic relationships in a multi-dimensional latent space. For example, in machine learning and natural language processing, text can be converted into vectors (numerical arrays) using various embedding techniques. Embeddings are a way of translating the semantic meaning of text into a multi-dimensional latent space. Each vector typically consists of several dimensions (alternatively, elements or features), where each dimension represents some aspect of the text’s meaning or features. The vector representations are not random, but are structured in a way such that similar meanings or contexts are represented by vectors that are close to each other in the multi-dimensional latent space. That is, by employing one or more embedding techniques, semantic relationships between words can be preserved." A person having ordinary skill in the art would recognize that a “tensor” represents relationships between objects in a vector space, including vectors, scalars and other tensors. Therefore, the teachings of Smith read on the claim limitations. Applicant further argues, “that Smith fails to describe at least generating ... a plurality of first natural language descriptions for a first programming code segment ... converting ... 
the plurality of first natural language descriptions into a plurality of first vector representations ... [and] determining a proximity score between the vector representation of the natural language search query and at least one first vector representation for the first programming code segment, of the plurality of first vector representations recited in independent Claim 1,” (emphasis original, page 11 of Remarks). In this regard, Applicant’s argument is persuasive, and accordingly, the rejections under 35 U.S.C. 102 are withdrawn; however, new grounds of rejection are raised under 35 U.S.C. 103 in view of reference Brenner and further in view of U.S. Patent Application Publication 2004/0243645 to Broder et al. Further details are provided below. With respect to rejections made under 35 U.S.C. 103, Applicant argues, “Some cited portions of Brenner, namely paragraphs [0047] and [0048], describe generating a first natural language summary corresponding to an actual data object that is stored in a data store based on inputting a set of metadata corresponding to the actual data object. Office Action, pg. 14. Further, in cited paragraph [0070], Brenner discloses generating a second natural language summary corresponding to a hypothetical data object based on inputting a natural language query. Thus, Brenner describes generating a first natural language summary for an actual data object that is stored in a data store, and generating a second natural language summary for a hypothetical data object. Brenner does not disclose generating two different natural language summaries for a same data object (arguendo the respective programming code segment). 
As such, Applicant respectfully submits that Brenner and Smith, either singly or in any combination thereof, do not explicitly disclose, implicitly teach, or otherwise suggest for each respective programming code segment ...generating, by a machine learning model, at least two first natural language descriptions of the respective programming code segment using at least two first prompts input to a machine learning model recited in independent Claim 13.” (emphasis original, page 12 of Remarks). Applicant’s argument is moot, as new grounds of rejection are raised in view of "Generating Summaries with Controllable Readability Levels" by Ribeiro et al. and further in view of U.S. Patent Application Publication 2004/0243645 to Broder et al. Further details are provided below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent 11,822,918 to Smith et al. (hereinafter, "Smith") in view of U.S. Patent Application Publication 2025/0245236 to Brenner et al. (hereinafter, “Brenner”), and further in view of U.S. Patent Application Publication 2004/0243645 to Broder et al. (hereinafter, “Broder”). 
Regarding claims 1 and 7, Smith teaches a method and system comprising: receiving a natural language search query via a graphical user interface (column 2, lines 30-32, "The computer-implemented method also includes receiving a natural language search query," and column 24, lines 9-15, "At step 1501, a selection of one or more lines of code and a search query is received that may include keywords, natural language queries, or both. For example, in one embodiment of a user interface, a selection of one or more lines of code is received in an editor, and the editor displays a pop up text entry field for receiving one or more keywords or natural language queries from the user."); converting, by an embedding generator, the natural language search query into a vector representation that encodes a semantic meaning of the natural language search query (column 2, lines 32-35, "The computer-implemented method also includes determining an embedding in the joint embedding space for the natural language search query with the trained natural language neural network encoder."); determining a proximity score between the vector representation of the natural language search query and at least one first vector representation for the first programming code segment, of the plurality of first vector representations, based on a proximity of the vector representation of the natural language search query and the at least one first vector representation for the first programming code segment in a multi-dimensional latent space (column 2, lines 35-39, "The computer-implemented method also includes determining a similarity between the embedding in the joint embedding space for the natural language search query and the code entity."); and providing a search result corresponding to the first programming code segment based on the proximity score (column 2, lines 42-47, "The computer-implemented method also includes transmitting the code entity in response to determining that the similarity 
between the embedding in the joint embedding space for the natural language search query and the code entity satisfies a search condition."). Smith teaches comparing representations of a query and a code segment in a vector space. In the claim, “determining a proximity score” is read as analogous to Smith’s “determining a similarity” as each operation is based on spatial comparisons, wherein a proximity score expresses a similarity. Smith does not explicitly teach “generating, by a machine learning model, a plurality of first natural language descriptions for a first programming code segment, each respective first natural language description describing a semantic aspect of the first programming code segment,” or “converting, by the embedding generator, the plurality of first natural language descriptions into a plurality of first vector representations for the first programming code segment,” and thus, Brenner is introduced. Brenner teaches generating, by a machine learning model, a […] first natural language [description] for a first programming code segment […] (paragraph [0016], "According to one or more aspects of the present disclosure, a system may utilize generative artificial intelligence (AI) and a large language model (LLM) to process structured documents into unstructured summaries of the documents, which may efficiently enable semantic searching of the structured data… Based on inputting the set of metadata in the second serialized format into the LLM, the system may generate a first natural language summary associated with the data object."); and converting, by the embedding generator, the plurality of first natural language descriptions into a plurality of first vector representations for the first programming code segment (paragraph [0017], "The system may vectorize and compare the first and second natural language summaries (in a vector-space) to identify a document or other data object closely related to the natural language query."). 
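As an illustration only (not the actual implementation of any cited reference), the flow the rejection maps onto Smith and Brenner, namely model-generated descriptions converted into vector representations and then scored against a query embedding, might be sketched with a toy bag-of-words embedder standing in for a trained encoder; the vocabulary and example texts are invented for the sketch:

```python
import math
from collections import Counter

# Toy fixed vocabulary; a real system would use a learned embedding model.
VOCAB = ["sort", "list", "search", "query", "code", "function", "ascending"]

def embed(text):
    """Stand-in embedding generator: term counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def proximity_score(a, b):
    """Proximity expressed as a similarity: higher means closer in the latent space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Two natural language descriptions of one code segment (as if produced by a
# machine learning model), each converted into a vector representation.
descriptions = ["sort a list in ascending order", "function to sort a code list"]
segment_vectors = [embed(d) for d in descriptions]

# The search query is embedded the same way and scored against each vector.
query_vector = embed("how to sort a list")
best_score = max(proximity_score(query_vector, v) for v in segment_vectors)
```

Under this sketch, storing several description vectors per segment gives the query several chances to land near one of them, which is the mechanism the multiple-descriptions limitations are directed to.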
Smith and Brenner are considered analogous because they are each concerned with semantic searching. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Smith’s search method with the natural language descriptions of Brenner for the purpose of improving search result quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. The combination of Smith and Brenner does not explicitly teach “generating… a plurality of first natural language descriptions for a first programming code segment, each respective first natural language description describing a semantic aspect of the first programming code segment,” and thus, Broder is introduced. Brenner teaches generating natural language summaries of data objects, but does not teach a many-to-one relationship between the summaries and the data objects. Broder teaches, at paragraph [0015], "In accordance with an aspect of this invention there is disclosed a data processing system for processing document data. The system includes data storage for storing a collection of document data that comprises unstructured document data, further includes at least one text analysis engine that comprises a plurality of coupled annotators. At least some of the coupled annotators are operable for tokenizing document data for identifying and annotating a particular type of semantic content. The at least one text analysis engine operates to generate a plurality of views of a document, each of the plurality of views being derived from a different tokenization of the document." Smith, Brenner and Broder are considered analogous because they are each concerned with information search and retrieval. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Smith and Brenner with the multiple views of Broder for the purpose of improving search result quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 2 and 8, Smith further teaches a method and system wherein determining the proximity score between the vector representation of the natural language search query and the at least one first vector representation for the first programming code segment comprises applying a similarity metric between the vector representation of the natural language search query and the at least one first vector representation for the first programming code segment (column 2, lines 39-43, "The computer-implemented method also includes determining that the similarity between the embedding in the joint embedding space for the natural language search query and the code entity satisfies a search condition.").

Regarding claims 3 and 9, Smith further teaches a method and system wherein the similarity metric comprises a cosine similarity measure (column 15, lines 53-58, "Similarity measure 605 determines a similarity or distance between embeddings in the joint embedding space. In an embodiment, the distance between embeddings in the joint embedding space may be a cosine similarity determined by similarity measure 605."). 
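The cosine similarity measure that Smith is cited for is a standard metric; a minimal, self-contained sketch (the example vectors are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 for identical
    directions (maximally similar), 0.0 for orthogonal (unrelated) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

same_direction = cosine_similarity([1.0, 2.0], [2.0, 4.0])  # parallel vectors
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])      # unrelated vectors
```

Because it depends only on direction, not magnitude, cosine similarity is insensitive to the overall length of an embedding vector, which is one reason it is a common choice for comparing text embeddings.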
Regarding claims 4 and 10, Smith further teaches a method and system further comprising: identifying one or more relevant programming code segments based on respective proximity scores between the vector representation of the natural language search query and at least one second vector representation of the plurality of second vector representations (column 19, lines 25-28, "Next, at step 904, the database of code entities and their embeddings is evaluated to identify a set of embeddings of code entities that are close to the embedding of the search query in the tensor space." Mostafa teaches at page 2, "Conversely, examples disclosed herein utilize an increasing number of usage contexts to improve code embedding… If the code snippet is used multiple times in the input code, then examples disclosed herein select multiple usage contexts for the code snippet."); and ranking the one or more relevant programming code segments based on the respective proximity scores, wherein providing the search result comprises displaying the ranked one or more relevant programming code segments (column 19, lines 31-35, "The distance between embeddings in the joint embedding space may be determined by a similarity measure such as a cosine similarity. At step 905, the search results are ranked according to their distance from the search query embedding and returned for display and usage."). 
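The ranking step quoted above (order results by their distance or score relative to the query embedding and return them for display) reduces to a simple sort; a sketch with invented segment names and scores:

```python
def rank_results(scored):
    """Order candidate code segments so the highest proximity score comes first."""
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# Hypothetical proximity scores for three candidate segments.
scores = {"seg_a": 0.42, "seg_b": 0.91, "seg_c": 0.77}
ranked = rank_results(scores)  # seg_b first, then seg_c, then seg_a
```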
Smith does not explicitly teach “generating, by the machine learning model, a plurality of second natural language descriptions for a plurality of second programming code segments,” or “converting, by the embedding generator, the plurality of second natural language descriptions into a plurality of second vector representations for the plurality of second programming code segments,” however, Brenner teaches generating, by the machine learning model, a plurality of second natural language descriptions for a plurality of second programming code segments (paragraph [0016], "According to one or more aspects of the present disclosure, a system may utilize generative artificial intelligence (AI) and a large language model (LLM) to process structured documents into unstructured summaries of the documents, which may efficiently enable semantic searching of the structured data… Based on inputting the set of metadata in the second serialized format into the LLM, the system may generate a first natural language summary associated with the data object."); and converting, by the embedding generator, the plurality of second natural language descriptions into a plurality of second vector representations for the plurality of second programming code segments (paragraph [0017], "The system may vectorize and compare the first and second natural language summaries (in a vector-space) to identify a document or other data object closely related to the natural language query."). 
Brenner contemplates additional steps to the method taught, and in consideration of the claims' disclosure of second natural language descriptions for second programming code segments, under the broadest reasonable interpretation, these limitations may be read as a repetition of “generating… first natural language descriptions for a first programming code segment,” and “converting… the plurality of first natural language descriptions into a plurality of first vector representations,” with no additional steps or outcome to distinguish from the previous iteration. Accordingly, the method taught by the combination of Smith and Brenner reads on the limitations. Smith and Brenner are considered analogous because they are each concerned with semantic searching. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Smith’s search method with the natural language descriptions of Brenner for the purpose of improving search result quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Claims 13-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Brenner in view of "Generating Summaries with Controllable Readability Levels" by Ribeiro et al. (hereinafter, "Ribeiro"), further in view of Broder.

Regarding claim 13, Brenner teaches a method comprising: receiving a plurality of programming code segments (paragraph [0028], "In some cases, a device (e.g., any component of subsystem 125, such as a cloud client 105, a server or server cluster associated with the cloud platform 115 or data center 120, etc.) may perform procedures relating to the discovery of document. 
For example, a data center 120 (e.g., a data store) may store a set of documents or other data objects (e.g., tables, databases, spreadsheets), including reports and other assets," and Mostafa teaches, at page 1, "As Al has advanced, developers have applied Al to many different fields. One field of application for Al is code intelligence tasks," and page 2, "As described above, code intelligence tasks include clone detection, code summarization, and code repair."); and for each respective programming code segment of the plurality of programming code segments, generating, by a machine learning model, […] natural language descriptions of the respective programming code segment […] (paragraph [0047], "At 320, based on the generative prompt, the set of metadata may be input to the LLM in the second serialized format. The LLM may be trained on unstructured data, and as such, may be able to use the set of metadata (which is also in an unstructured format)," and paragraph [0048], "At 325, the LLM may generate a first natural language summary based on inputting the set of metadata in the second serialized format into the LLM. 
In some examples, the first natural language summary may be a summary of the details and information included in the set of metadata based on the set of metadata itself (and thus, the actual data object)."); generating, by an embedding generator, […] vector representations for the respective programming code segment that encode the […] natural language descriptions (paragraph [0017], "The system may vectorize and compare the first and second natural language summaries (in a vector-space) to identify a document or other data object closely related to the natural language query."), and storing the […] vector representations for the respective programming code segment to enable comparison with vector representations of natural language search queries (paragraph [0049], "At 330, 335, and 340, the application server may use an embedding model to generate a vectorized version of the first natural language summary (e.g., an embedding vector). The embedding model may embed the intent or meaning of the data object into an embedding vector, which may be stored in a vector database. In this way, each natural language summary generated by the LLM may be vectorized and embedded for comparison to future generated natural language summaries."). 
Brenner does not explicitly teach “generating… at least two natural language descriptions… using at least two prompts,” “a first prompt comprising first instructions associated with generation of a first semantic aspect of the respective programming code segment,” or “a second prompt comprising second instructions associated with generation of a second semantic aspect of the respective programming code segment, the second semantic aspect being different than the first semantic aspect,” however, Ribeiro teaches a first prompt comprising first instructions associated with generation of a first semantic aspect of the respective programming code segment (section 3.1 Instruction-Aligning Readability Methods, "Inspired by previous works (He et al., 2022; Zhang and Song, 2022) that explore prompt guidance to generate text with desired attributes, we develop instructions that encode the summary readability level."); and a second prompt comprising second instructions associated with generation of a second semantic aspect of the respective programming code segment, the second semantic aspect being different than the first semantic aspect (section 3.1 Category-based Instructions, "Drawing on established guidelines for text complexity levels (Fountas and Pinnell, 1999; DuBay, 2004), we define four instructions based on distinct reading level categories (see Table 1) aligned with particular FRE scores (Vajjala, 2022)."). Brenner and Ribeiro are considered analogous because they are each concerned with information summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summary generation method of Brenner with the multiple prompts of Ribeiro for the purpose of improving summarization efficacy. 
Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Further, Broder teaches at paragraph [0015], "In accordance with an aspect of this invention there is disclosed a data processing system for processing document data. The system includes data storage for storing a collection of document data that comprises unstructured document data, further includes at least one text analysis engine that comprises a plurality of coupled annotators. At least some of the coupled annotators are operable for tokenizing document data for identifying and annotating a particular type of semantic content. The at least one text analysis engine operates to generate a plurality of views of a document, each of the plurality of views being derived from a different tokenization of the document." Brenner, Ribeiro and Broder are considered analogous because they are each concerned with information summarization. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined method of Brenner and Ribeiro with the multiple perspectives of Broder for the purpose of improving summarization efficacy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
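The "at least two prompts" limitation of claim 13 might be illustrated as follows; the prompt templates, aspect names, and the `call_model` helper are hypothetical stand-ins invented for this sketch, not part of any cited reference:

```python
# Hypothetical prompt templates, one per semantic aspect of a code segment.
PROMPTS = {
    "functionality": "Describe what the following code does:\n{code}",
    "usage": "Describe how and where the following code is used:\n{code}",
}

def call_model(prompt):
    # Placeholder: a real system would send the prompt to a language model.
    return "description for: " + prompt.splitlines()[0]

def describe_segment(code):
    """Generate one natural language description per semantic aspect,
    each produced from a distinct prompt over the same code segment."""
    return {aspect: call_model(t.format(code=code)) for aspect, t in PROMPTS.items()}

descriptions = describe_segment("def add(a, b): return a + b")
```

The point of the sketch is the shape of the data: one code segment in, two (or more) descriptions out, each keyed to a different semantic aspect.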
Regarding claim 14, Brenner further teaches a method comprising receiving a natural language search query (paragraph [0017], "Based on receiving a natural language query (e.g., from a user) and inputting it into the LLM, the system may generate a second natural language query summary corresponding to the data object."); generating a vector representation of the natural language search query (paragraph [0017], "Based on receiving a natural language query (e.g., from a user) and inputting it into the LLM, the system may generate a second natural language query summary corresponding to the data object… the second natural language summary may represent a hypothetical document that is likely to correspond to the document that the query is searching for. The system may vectorize and compare the first and second natural language summaries (in a vector-space) to identify a document or other data object closely related to the natural language query."); and for each respective programming code segment, comparing the vector representation of the natural language search query to the at least two vector representations for the respective programming code segment to identify relevant programming code segments (paragraph [0041], "In some examples, application server 205 may perform a vector-space comparison of the vectorized versions of the natural language summaries 230 to identify a data object from the data store 210 that corresponds to the natural language query 235."). 
Regarding claim 15, Brenner further teaches a method wherein comparing the vector representation of the natural language search query to identify the relevant programming code segments comprises: calculating proximity scores between the vector representation of the natural language search query and the at least two vector representations (paragraph [0042], "In some implementations, the application server 205 may perform a ranking procedure to rank a set of vector distances (e.g., between the vectorized natural language summary 230-a and one or more natural language summaries 230 generated based on a natural language query 235)."); and ranking the relevant programming code segments based on the proximity scores (paragraph [0042], "The ranking may indicate an accuracy of the natural language summaries 230 based on semantic scores provided by the vector store or database. For example, a higher ranking may indicate that the natural language summary 230-b is more similar to the natural language summary 230-a, and thus, may result in highly-accurate search results for a corresponding document.").

Regarding claim 16, Brenner further teaches a method comprising providing search results comprising the relevant programming code segments that have been ranked (paragraph [0017], "The system may display an indication of the document (or a list of the top most relevant documents based on the vector search space) accordingly, for example, to a user."). 
Regarding claim 20, Brenner further teaches a method wherein for each respective programming code segment, storing the at least two vector representations for the respective programming code segment comprises storing the at least two vector representations in a structure to enable the comparison with the vector representations of the natural language search queries (paragraph [0049], "At 330, 335, and 340, the application server may use an embedding model to generate a vectorized version of the first natural language summary (e.g., an embedding vector). The embedding model may embed the intent or meaning of the data object into an embedding vector, which may be stored in a vector database. In this way, each natural language summary generated by the LLM may be vectorized and embedded for comparison to future generated natural language summaries.").

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Brenner, Ribeiro and Broder as applied to claim 15 above, further in view of Smith.

Regarding claim 17, the combination of Brenner, Ribeiro and Broder does not explicitly teach a method “wherein the proximity scores are based on a cosine similarity measure,” however, Smith teaches the proximity scores are based on a cosine similarity measure (column 15, lines 53-58, "Similarity measure 605 determines a similarity or distance between embeddings in the joint embedding space. In an embodiment, the distance between embeddings in the joint embedding space may be a cosine similarity determined by similarity measure 605."). Brenner, Ribeiro, Broder and Smith are considered analogous because they are each concerned with information embedding. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined method of Brenner, Ribeiro and Broder with the cosine similarity of Smith for the purpose of improving information retrieval quality. 
Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Claims 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Brenner, Ribeiro and Broder as applied to claim 13 above, further in view of German patent publication DE 102022133799 to Mostafa (hereinafter, "Mostafa").

Regarding claim 19, the combination of Brenner, Ribeiro and Broder does not explicitly teach a method wherein “each respective programming code segment is a function defined within a code base,” and thus, Mostafa is introduced. Mostafa teaches each respective programming code segment is a function defined within a code base (page 2, "The code fragment can be a function (e.g. the body of a function) or coda written in the body of a program."). Brenner, Ribeiro, Broder and Mostafa are considered analogous because they are each concerned with information embedding. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined method of Brenner, Ribeiro and Broder with the teachings of Mostafa for the purpose of improving information retrieval in a particular field of endeavor. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
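Mostafa's usage-context idea, as quoted earlier (collect the lines of code around each site where a function is called, with multiple call sites yielding multiple usage contexts), might be sketched as follows; the helper, its parameters, and the sample code are illustrative assumptions, not Mostafa's actual implementation:

```python
def usage_contexts(source, func_name, window=1):
    """Collect the lines of code surrounding each call site of `func_name`.

    Each call site contributes one usage context: the call line plus `window`
    lines on either side. Multiple call sites yield multiple contexts.
    """
    lines = source.splitlines()
    contexts = []
    for i, line in enumerate(lines):
        # Treat a line as a call site if it invokes the function (skip the definition).
        if func_name + "(" in line and not line.lstrip().startswith("def "):
            lo, hi = max(0, i - window), min(len(lines), i + window + 1)
            contexts.append(lines[lo:hi])
    return contexts

# Sample input: `add` is defined once and called twice, so two contexts result.
code = """def add(a, b):
    return a + b

x = add(1, 2)
print(x)
y = add(3, 4)
"""
contexts = usage_contexts(code, "add")
```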
Regarding claim 21, the combination of Brenner, Ribeiro and Broder does not explicitly teach a method wherein “the first semantic aspect is associated with a functionality, a usage, one or more limitations, or a design pattern of the respective programming code segment,” or wherein “the second semantic aspect is associated with the functionality, the usage, the one or more limitations, or the design pattern of the respective programming code segment that is different than the first semantic aspect.”

Mostafa, however, teaches the first semantic aspect is associated with a functionality, a usage, one or more limitations, or a design pattern of the respective programming code segment (page 2, "Advantageously, example usage contexts disclosed herein provide additional information about the code fragment to be processed, including information about function arguments, information about how the output of a function is used, and/or general information about the programming context in which a function is used and/or invoked."), and the second semantic aspect is associated with the functionality, the usage, the one or more limitations, or the design pattern of the respective programming code segment that is different than the first semantic aspect (page 2, "Conversely, examples disclosed herein utilize an increasing number of usage contexts to improve code embedding… If the code fragment is a function, the context of use includes the LOCs around the LOC at which the function is called and the LOC at which the function is called. If the code snippet is used multiple times in the input code, then examples disclosed herein select multiple usage contexts for the code snippet.").

Brenner, Ribeiro, Broder and Mostafa are considered analogous because they are each concerned with information embedding.
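As an illustration only, the usage-context selection Mostafa describes (collecting the lines of code around each line at which a function is called, with multiple contexts when the function is called multiple times) might be sketched as follows. The function and variable names here are hypothetical, not drawn from Mostafa's disclosure.

```python
def usage_contexts(source_lines, function_name, window=2):
    # For each line of code (LOC) that calls the function, collect
    # the calling LOC plus the surrounding LOCs as one usage context.
    contexts = []
    for i, line in enumerate(source_lines):
        # A line containing "name(" is treated as a call site (a
        # simplification; a real tool would parse the code).
        if function_name + "(" in line:
            start = max(0, i - window)
            end = min(len(source_lines), i + window + 1)
            contexts.append(source_lines[start:end])
    return contexts

# Hypothetical input code: "parse" is called twice, so two usage
# contexts are selected for the same code fragment.
code = [
    "data = load(path)",
    "tree = parse(data)",
    "print(tree)",
    "other = parse(raw)",
]
ctxs = usage_contexts(code, "parse", window=1)
```

Each selected context could then be embedded separately, consistent with the idea of using multiple usage contexts to improve a code fragment's embedding.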
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined method of Brenner, Ribeiro and Broder with the usage contexts of Mostafa for the purpose of improving information embedding quality. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Allowable Subject Matter

Claims 5-6 and 11-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. Patent 9,135,241 to Bangalore.
U.S. Patent 9,268,558 to Elshishiny et al.
U.S. Patent 11,604,626 to Sawant et al.
U.S. Patent 11,720,346 to Wu et al.
U.S. Patent 11,966,446 to Socher et al.
U.S. Patent 12,073,195 to Duan et al.
U.S. Patent 12,112,133 to Ogura et al.
U.S. Patent Application Publication 2014/0189640 to DeLuca et al.
U.S. Patent Application Publication 2018/0373507 to Mizrahi et al.
U.S. Patent Application Publication 2021/0073459 to McCann et al.
U.S. Patent Application Publication 2021/0141863 to Wu et al.
U.S. Patent Application Publication 2021/0303989 to Bird et al.
U.S. Patent Application Publication 2022/0067095 to Kota et al.
U.S. Patent Application Publication 2022/0236964 to Bahrami et al.
U.S. Patent Application Publication 2022/0374595 to Gotmare et al.
U.S. Patent Application Publication 2024/0256840 to Song et al.
U.S. Patent Application Publication 2025/0124229 to Andreas et al.
China Invention Application 111177312 to Ding et al.
China Invention Application 114625361 to Ibarra Von Borstel et al.
China Invention Application 114625844 to Gu et al.
China Invention Application 116909574 to Xu et al.
China Invention Application 117033546 to Li et al.
European patent specification EP-4235455 to Santus et al.
“A Neural Framework for Retrieval and Summarization of Source Code” by Chen and Zhou.
“A Multi-Perspective Architecture for Semantic Code Search” by Haldar et al.
“Neural Code Search Revisited: Enhancing Code Snippet Retrieval Through Natural Language Intent” by Heyman and Cutsem.
“NS3: Neuro-Symbolic Semantic Code Search” by Arakelyan et al.
“Multi-Modal Code Summarization with Retrieved Summary” by Lin et al.
“Multi-Perspective Alignment Mechanism for Code Search” by Yang and Cai.
“A Prompt Learning Framework for Source Code Summarization” by Sun et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Jan 31, 2024: Application Filed
Oct 08, 2025: Non-Final Rejection — §103
Dec 23, 2025: Interview Requested
Jan 09, 2026: Examiner Interview Summary
Jan 09, 2026: Applicant Interview (Telephonic)
Jan 20, 2026: Response Filed
Feb 17, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602540: LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530534: SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT (granted Jan 20, 2026; 2y 5m to grant)
Based on the examiner's 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
Grant Probability With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
