Prosecution Insights
Last updated: April 19, 2026
Application No. 18/199,312

GENERATION OF TECHNICAL DATA

Non-Final OA (§101, §103, §112)
Filed: May 18, 2023
Examiner: LUU, CUONG V
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (692 granted / 963 resolved; +16.9% vs TC avg, above average)
Interview Lift: +36.7% across resolved cases with interview (strong)
Typical Timeline: 3y 6m avg prosecution; 36 currently pending
Career History: 999 total applications across all art units
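(Check on the headline figure: 692 / 963 ≈ 71.9%, displayed rounded as 72%; with the +16.9% delta, the implied Tech Center average allow rate is roughly 55%.)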

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center average is an estimate • Based on career data from 963 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 10/20/2025 and 11/17/2025 have been entered; claims 1, 2, 11, 16, and 20 have been amended.

DETAILED ACTION

Claims 1-20 remain pending and have been examined.

Response to Amendment

The objections to claims 1-20 are withdrawn in view of Applicant's amendments and clarifications (Remarks dated 10/20/2025, p. 7).

Response to Arguments

Applicant's arguments, filed on 10/20/2025, with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection necessitated by the amendments. See ISHIKAWA (JP 2010128527 A).

Regarding the §101 rejections, Applicant argues that "The claim recites a way to more efficiently utilize computing resources to obtain useful release notes using an LLM rather than the inefficient use of multiple computing resources by multiple users. For example, claim 1 recites accessing source data comprising source code or data about the source code, and finding an item in the source data. The item is a ticket thread in a bug ticketing system, or an error message in the source code, or alt tag in the source code. The ticket thread comprises a sequence of comments making up a conversation, the comments comprising pieces of text input by one or more users or automated assistants, and the finding comprises extracting keywords from one or more of the sequence of comments. The alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed place of the image, and the finding comprises finding alt tags in the source data. In response to criteria being met for generating technical data related to the item, a computing device generates a large language model (LLM) prompt comprising the item. The LLM prompt instructs an LLM to generate a release note pertaining to the item. The LLM prompt is input, by the computing device, to a LLM, and a release note is received from the LLM in response to the LLM prompt. The release note is stored in association with the source data, or the source data is updated using the release note. Additionally, the release note received from the LLM is programmatically inserted into a source code repository in a database configured to store a plurality of release notes. Thus the system and network are updated with the generated release note…" (Remarks; p. 8: last full paragraph – p. 9: third full paragraph.)

Applicant appears to suggest that the sequence of steps in the claims utilizes computing resources efficiently, and that this efficient utilization is an improvement that integrates the abstract idea into a practical application. However, Applicant does not explain in detail how those steps utilize computing resources efficiently. Absent such clarification, the steps are not considered an improvement. As a result, the claims remain rejected under 35 U.S.C. 101.
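For orientation only, the following is a minimal sketch of the claim 1 flow as Applicant's argument describes it. Every identifier is invented for illustration; the application discloses no code, and this is not the claimed implementation.

    def extract_keywords(comments):
        # Toy "finding" step: keywords signalling that the ticket thread is complete.
        signals = {"fixed", "resolved", "closed", "merged"}
        return [w for c in comments for w in c.lower().split() if w.strip(",.") in signals]

    def generate_release_note(comments, llm):
        # Criteria check: the thread counts as complete only if a signal keyword appears.
        if not extract_keywords(comments):
            return None
        thread = "\n".join(comments)
        prompt = ("You are a technical writer. Generate a release note for this "
                  "completed ticket thread:\n" + thread)
        return llm(prompt)  # the release note received from the LLM

    # Stand-in LLM so the sketch runs end to end:
    fake_llm = lambda p: "Release note: crash on empty input has been fixed."
    print(generate_release_note(["User: app crashes on empty input",
                                 "Assistant: patch merged, issue resolved"], fake_llm))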
Claim Objections

Claims 1-20 are objected to because of the following informalities:

Claim 1: Line 9; insert --in-- before "place".
Claims 2-15: These claims depend from claim 1, directly or indirectly; therefore, they inherit the issue of claim 1.
Claim 16: Line 12; insert --in-- before "place".
Claims 17-19: These claims depend from claim 16; therefore, they inherit the issue of claim 16.
Claim 20: Line 9; insert --in-- before "place".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1: "the release note" in line 18 and in line 19 after "using" are unclear as to whether they refer to "a release note" in line 15 or line 17 of the claim. For examination purposes, "the release note" in line 18 will be treated as --the release note received from the LLM--, and "using the release note, wherein the release note received from the LLM" in line 19 will be treated as --using the release note received from the LLM--. Claims 16 and 20 have the same issue as claim 1.

Claim 9: "the release note" in line 3 is unclear as to whether it refers to "a release note" in line 15 or line 17 of claim 1. For examination purposes, "the release note" in line 3 will be treated as --the release note received from the LLM--.

Claim 10: "the release note" in lines 3-4 is unclear as to whether it refers to "a release note" in line 15 or line 17 of claim 1. For examination purposes, "the release note" in lines 3-4 will be treated as --the release note received from the LLM--.

Claim 14: "the release note" in lines 1-2 is unclear as to whether it refers to "a release note" in line 15 or line 17 of claim 1. For examination purposes, "the release note" in lines 1-2 will be treated as --the release note received from the LLM--.

Claims 2-8, 10-13, 15, and 17-19 depend on rejected claim 1, directly or indirectly, and inherit the same issue.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 1

Step 1: The claim is statutory because it is directed to a method.
Step 2A, Prong 1: The claim recites steps of "finding an item in the source data, where the item is a ticket thread in a bug ticketing system, or an error message in the source code, or alt tag in the source code; checking criteria by determining quality of the item, or determining completeness of the item, or by comparing the item with a specification; in response to the criteria being met …, generating … with a prompt comprising the item …" These steps fall into the mental-process category of abstract ideas because they rely on a human observing the item and the criteria, evaluating the criteria, and offering an opinion by generating a prompt. Thus, these steps recite a mental process.

Step 2A, Prong 2: The claim further recites additional steps of "accessing source data comprising source code or data about the source code; the ticket thread comprises a sequence of comments …, wherein the finding comprises extracting keywords from one or more of the sequence of comments; and the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed place of the image, wherein the finding comprises finding alt tags in the source data; inputting … the LLM prompt into the LLM; receiving … a release note in response to the prompt; storing the release note … or updating the source data", and the additional elements "a computing device and a large language model (LLM)." These additional steps are insignificant extra-solution activity, and the additional elements are recited at a high level of generality as tools for performing the abstract idea. Therefore, they do not integrate the exception into a practical application.

Step 2B: The claim as a whole does not amount to significantly more than the judicial exception. In other words, claim 1 is directed to an abstract idea. Therefore, claim 1 and its dependent claims are not patent eligible.

The analysis of claims 2-15 follows.

Claim 2: The claim recites the limitations "the source data comprises data about the source code comprising ticket threads in the bug ticketing system, the criteria comprise completion of the ticket thread, and the technical data is a release note for the source code." These limitations, as drafted, define the source and technical data. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 3: The claim recites the limitations "checking that the criteria are met by searching the ticket threads for key words indicating the completion of the ticket thread." These limitations, as drafted, check criteria. Thus, these limitations describe a process that, under its broadest reasonable interpretation, covers performance in the mind; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 4: The claim recites the limitations "prompting the LLM with the ticket thread comprising a title, description and comments of the ticket thread." These limitations, as drafted, define the ticket thread. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 5: The claim recites the limitations "prior to prompting the LLM with the item, prompting the LLM with a technical writer role and writing style." These limitations, as drafted, define the writer role and writing style. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 6: The claim recites the limitations "prior to prompting the LLM with the item, removing personal data from the item." These limitations, as drafted, remove data. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 7: The claim recites the limitations "prior to prompting the LLM, adapting the LLM by training the LLM with content from an enterprise which produces the source data." These limitations, as drafted, merely train the LLM. Thus, these limitations describe a process that, under its broadest reasonable interpretation, covers performance in the mind; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 8: The claim recites the limitations "the item is the alt tag, … determining whether the criteria are met by prompting the LLM to analyze the alt tag and, in response to results from the LLM indicating the alt tag is inadequate, determining the criteria are met." These limitations, as drafted, analyze the alt tag and determine the criteria. Thus, these limitations describe a process that, under its broadest reasonable interpretation, covers performance in the mind; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 9: The claim recites the limitations "in response to determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises the alt tag, source code around the alt tag and a request to improve the alt tag; and updating the source code with the release note." These limitations, as drafted, update the source data with technical data. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.
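For illustration of the alt-tag limitations in claims 8 and 9 (finding alt tags, then prompting with the alt tag, the surrounding source code, and an improvement request), the following is a minimal sketch. The regex, context window, adequacy threshold, and names are assumptions for illustration, not from the application or the cited art.

    import re

    ALT_RE = re.compile(r'<img\b[^>]*\balt="([^"]*)"[^>]*>')

    def find_alt_tags(html):
        # Yield each <img> alt attribute together with nearby source for context.
        for m in ALT_RE.finditer(html):
            start, end = max(m.start() - 40, 0), min(m.end() + 40, len(html))
            yield m.group(1), html[start:end]

    def improvement_prompt(alt_text, context):
        return ('The alt tag alt="' + alt_text + '" appears in this source:\n'
                + context + '\nRewrite the alt text so it adequately describes the image.')

    # Usage: build an improvement prompt for each (toy-criterion) inadequate alt tag.
    page = '<p>Logo:</p><img src="logo.png" alt="img"><p>Welcome.</p>'
    for alt, ctx in find_alt_tags(page):
        if len(alt) < 10:  # toy adequacy criterion
            print(improvement_prompt(alt, ctx))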
Claim 10: The claim recites the limitations "in response to the determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises an image associated with the alt tag; and modifying the source code by adding the release note to the alt tag." These limitations, as drafted, update the source data by adding technical data to the alt tag. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 11: The claim recites the limitations "in response to the results from the LLM indicating the alt tag is adequate, determining the criteria are not met and finding a next item in the source data." These limitations, as drafted, find the next item after determining that the criteria are not met. Thus, these limitations describe a process that, under its broadest reasonable interpretation, covers performance in the mind; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 12: The claim recites the limitations "the item is the error message, … determining whether the criteria are met by prompting the LLM to analyze the error message and, in response to results from the LLM indicating the error message is inadequate, determining the criteria are met." These limitations, as drafted, analyze the error message and determine the criteria status. Thus, these limitations describe a process that, under its broadest reasonable interpretation, covers performance in the mind; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 13: The claim recites the limitations "in response to the determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises the error message, source code around the error message and a request to improve the error message." These limitations, as drafted, prompt the LLM to improve the error message. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 14: The claim recites the limitations "only storing the release note or only updating the source data with the release note, in response to input from an operator." These limitations, as drafted, store data. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.
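Tying together the adequacy gate of claims 8, 11, and 12 discussed above (ask the LLM whether an item is adequate; an inadequate item meets the criteria, an adequate one is skipped in favor of the next item), a minimal sketch follows. The stand-in analyzer and its length heuristic are invented for illustration.

    def triage(items, analyze_with_llm):
        # Yield only the items the LLM judges inadequate; skip to the next item otherwise.
        for item in items:
            verdict = analyze_with_llm("Is this error message adequate? " + item)
            if "inadequate" in verdict.lower():
                yield item  # criteria met: this item needs improvement

    # Stand-in analyzer so the sketch runs: short prompts are judged inadequate.
    judge = lambda prompt: "inadequate" if len(prompt) < 60 else "adequate"
    print(list(triage(["err 42",
                       "NullPointerException at Foo.bar, line 10, with a detailed explanation"],
                      judge)))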
Claim 15: The claim recites the limitations "adding a command token in the LLM prompt where the command token triggers the LLM to call a search engine using a query comprising key words from the LLM prompt." These limitations, as drafted, search data. Thus, these limitations are insignificant extra-solution activity; they are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they do not include any additional element sufficient to amount to significantly more than the judicial exception.

Claim 16: The claim is statutory because it is directed to a machine. The claim recites limitations in the same manner as claim 1; therefore, it is rejected for the same reasons. The claim recites the additional elements of a "processor", "memory", and "a large language model (LLM)." These additional elements are recited at a high level of generality as tools for performing the abstract idea.

Claims 17-19 recite limitations in the same manner as claims 3-5, respectively; therefore, claims 17-19 are also rejected for the same reasons.

Claim 20

Step 1: The claim is statutory because it is directed to a method.

Step 2A, Prong 1: The claim recites steps of "finding an item in the source data …; checking criteria by determining quality of the item, or determining completeness of the item, or by comparing the item with a specification; in response to the criteria being met …, forming a prompt …" These steps fall into the mental-process category because a human could mentally observe the item and the criteria, evaluate the criteria, and offer an opinion by forming a prompt. Thus, these steps recite a mental process.

Step 2A, Prong 2: The claim further recites additional steps of "accessing source data comprising source code or data about source code; the ticket thread comprises a sequence of comments …, wherein the finding comprises extracting keywords …; the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed place of the image, wherein the finding comprises finding alt tags in the source data; prompting the LLM with the prompt; receiving technical data … in response to the prompt; storing the release note data …, or updating the source data …", and the additional element "a large language model (LLM)." These additional steps are insignificant extra-solution activity, and the additional element is recited at a high level of generality as a tool for performing the abstract idea. Therefore, they do not integrate the exception into a practical application.

Step 2B: The claim as a whole does not amount to significantly more than the judicial exception. In other words, claim 20 is directed to an abstract idea. Therefore, claim 20 is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-9, 11-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over JIN et al. (NPL "InferFix: End-to-End Program Repair with LLMs"; hereinafter Jin; IDS filed on 08/05/2024) in view of LEINONEN et al. (NPL "Using large language models to enhance programming error messages"; hereinafter Leinonen; IDS dated 08/05/2024) and ISHIKAWA (JP 2010128527 A; hereinafter Ishikawa).

Claim 1

Jin teaches a computer-implemented method comprising:

accessing source data comprising source code or data about the source code (Jin; p. 2: right column: third full paragraph – p. 3: left column: first half paragraph; Figure 1 illustrates a typical software development workflow at Microsoft Developer Division in presence of InferFix. As a pull request proposing code changes (source data, source code) is created, continuous integration pipeline (CI) triggers unit testing, build, and Infer static analysis steps… Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM … p. 5: left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data).);

finding an item in the source data, where the item is a ticket thread in a bug ticketing system, or an error message in the source code, or alt tag in the source code (Jin; p. 2: right column: third full paragraph – p. 3: left column: first half paragraph; Figure 1 illustrates a typical software development workflow at Microsoft Developer Division in presence of InferFix. As a pull request proposing code changes (source data, data, alt tag) is created, continuous integration pipeline (CI) triggers unit testing, build, and Infer static analysis steps… Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM … p. 5: left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data, data, alt tag).), wherein:

the ticket thread comprises a sequence of comments making up a conversation, the comments comprising pieces of text input by automated assistants, wherein the finding comprises extracting keywords from one or more of the sequence of comments (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer's Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …); and

checking criteria by determining quality of the item, or determining completeness of the item, or by comparing the item with a specification (Jin; p. 5: left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data, data, alt tag); p. 3: left column: last half paragraph; Infer is an open-source static analysis tool originating from program analysis research on separation logic … It computes program specifications to detect errors related to memory safety, concurrency, security, and more (compare with specification) … p. 3: right column: third full paragraph – last half paragraph; … we provide details on how we executed Infer over the change histories of software projects in order to detect introduced and fixed bugs. Given as input the current commit curr and the previous commit prev, we begin by computing a git diff to identify the files involved in the change performed by the developer in the commit curr. Next, we analyze the status of the files at commit prev… we build the system using the project-specific build tool. During the build process, the infer capture command intercepts calls to the compiler to read source files and translates them into an intermediate representation which will allow Infer to analyze these files. Next, we invoke the infer analyze command specifying the files to be analyzed (i.e., the files diff involved in the commit). This analysis produces a report reportPrev detailing (completeness, quality) the bugs (error, item) identified within the specified files… Finally, with the infer reportdiff command, we compute the differences between the two infer reports reportPrev and reportCurr. The output bugs (error, item) contain three categories (criteria) of issues: • introduced: issues found in curr but not in prev; • fixed: issues (ticket thread) found in prev but not in curr; • preexisting: issues (ticket thread) found in both prev and curr …);

in response to the criteria being met for generating technical data related to the item, generating, by a computing device, a large language model (LLM) prompt comprising the item (Jin; p. 2: right column: last full paragraph – p. 3: left column: first half paragraph; Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM (finetuned 12 billion parameter Codex model) to generate fixes. … The context preprocessing module utilizes the information provided by the analyzer to extract the buggy method, and retains surrounding context most relevant to fixing the bug … The retrieval augmentation engine then searches for semantically similar buggy code snippets in the historic database, prepending similar bug-fixes to the prompt. Finally, the augmented prompt is sent to the finetuned Codex model for inference …; p. 5: left column: third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs … (iii) generator module – a large language model finetuned on a dataset of prompts enriched with the information provided by the static analyzer and the retriever to generate fixes (technical data); p. 4: right column: last full paragraph; Demonstration learning is a prompt augmentation technique in which a few answered prompts are prepended to the context with the purpose of demonstrating how a language model should approach a downstream task. For program repair, we introduce a prefix constructed of two answered prompts, followed by the actual buggy code snippet [X], as shown in Figure 3…; see Figs. 3 & 4; p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template: We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…; see Figs. 3 & 4), a release note pertaining to the item (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer's Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …);

inputting, by the computing device, the LLM prompt to the LLM (Jin; p. 5: left column: second full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool …, (ii) retrieval module …, and (iii) generator module – a large language model finetuned on a dataset of prompts enriched with the information provided by the static analyzer and the retriever to generate fixes.); the prompt is thus input to the LLM;

receiving, from the LLM, a release note in response to the LLM prompt (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer's Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …); and

storing the release note in association with the source data or updating the source data using the release note (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer's Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix). The developer has the option to accept (update) or decline the recommended fix.),

wherein the release note received from the LLM is programmatically inserted into a source code repository in a database configured to store a plurality of release notes (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; the same GitHub-comment (release note) passage quoted above; p. 2: left column: last full paragraph; … (iii) we introduce a dedicated prompt augmentation technique for program repair task, which leverages dense retrieval from an external database of historic bugs and fixes, bug type annotations, and syntactic hierarchies across the entire source code file affected by a bug …; the bugs, fixes, bug annotations, and hierarchies are parts of the release note, and they are stored in a database.)

But Jin does not explicitly teach the LLM prompt instructing an LLM to generate a release note pertaining to the item.

However, Leinonen teaches the LLM prompt instructing an LLM to generate a release note pertaining to the item (Leinonen; p. 564: left column: last half paragraph; Large Language Models (LLMs), particularly pre-trained transformer models … One such model is GPT-3 … GPT-3 also powers several other tools such as Codex …; p. 564: right column: last half paragraph – p. 565: left column: second full paragraph; Programming error messages were generated using the most recent and performant Codex model … We evaluated a number of prompts to identify a version that seemed to provide useful explanations (release note) …; the prompt instructs the LLM to generate explanations; p. 566: left column: section "5.1 Are Error Message Explanations Useful?"; Our results suggest that using large language models to explain programming error messages (PEMs) is feasible and shows promise… Also see code examples 2 & 3 on p. 567.) [Image from Leinonen omitted.]

Jin and Leinonen are in the same analogous art, as they are in the same field of endeavor: utilizing an LLM for processing data. Therefore, it would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Leinonen's teachings into Jin's invention to allow Jin to use an LLM to enhance programming error messages with explanations of the errors and suggestions on how to fix them, as suggested by Leinonen (abstract).

But Jin and Leinonen do not explicitly teach the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed in place of the image, wherein the finding comprises finding alt tags in the source data.

However, Ishikawa teaches the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed in place of the image, wherein the finding comprises finding alt tags in the source data (Ishikawa; [0040] … FIG. 4 shows an example of description data (source data) written in HTML. The alternative text determining unit 41 detects (finds) from this description data an <img> tag, which is an image display element for displaying an image, and determines whether or not alt attribute text (alt tag) is included in the <img> tag. The alt attribute text is the character portion of "ALT attribute text" described by "alt="ALT attribute text"" (alt tag) shown in FIG. 4, and is text that takes the place of an image. [0045] … In FIG. 5A, text 51, "Text Near Image," is displayed adjacent to the left side of an image 50 shown as a rectangle, and explains the contents of the image 50. The alt attribute text 52 is displayed in place of the image 50 when loading of the image 50 fails, when the image 50 cannot be displayed, when the image 50 is not displayed, etc., and the display at that time will be as shown in Figure 5(b).)

Jin, Leinonen, and Ishikawa are in the same analogous art, as they are in the same field of endeavor: processing source data. Therefore, it would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Ishikawa's teachings into the Jin/Leinonen invention to allow Jin to process source data comprising an image and an alt tag, where the alt tag is displayed when the image in the source data cannot be displayed, as suggested by Ishikawa ([0040], [0045]), to arrive at the claimed invention.

Claim 2

Jin also teaches the source data is data about the source code comprising ticket threads in the bug ticketing system, the criteria comprise completion of the ticket thread (Jin; p. 3: right column: last full paragraph; Finally, with the infer reportdiff command, we compute the differences between the two infer reports reportPrev and reportCurr. The output bugs (error, item) contain three categories (criteria) of issues: • introduced: issues found in curr but not in prev; • fixed: issues (ticket thread) found in prev but not in curr; • preexisting: issues (ticket thread) found in both prev and curr.), and the technical data is a release note for the source code (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; the same GitHub-comment (release note) passage quoted for claim 1 above; The developer has the option to accept or decline the recommended fix.)

Claim 3

Jin also teaches checking that the criteria are met by searching the ticket threads for key words indicating the completion of the ticket thread (Jin; p. 5: left column: third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs, (ii) retrieval module – a large index of historic bugs and fixes, equipped with a facility to efficiently search and retrieve "hints" – semantically-similar source code segments – given a query…)

Claim 4

Jin also teaches prompting the LLM with the ticket thread comprising a title, description and comments of the ticket thread (Jin; p. 4: right column: last full paragraph; Demonstration learning is a prompt augmentation technique in which a few answered prompts are prepended to the context with the purpose of demonstrating how a language model should approach a downstream task. For program repair, we introduce a prefix constructed of two answered prompts, followed by the actual buggy code snippet [X], as shown in Figure 3…; p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template: We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…) [Prompt-template images from Jin omitted.]
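For illustration of the demonstration-learning and instruction-learning pattern quoted above from Jin (an instruction, a few answered bug/fix examples, retrieved similar fixes, then the actual buggy snippet), a minimal sketch follows. The function and field names are invented; this is not Jin's implementation.

    def build_repair_prompt(buggy_snippet, instruction, demonstrations, retrieved_hints):
        # Assemble the prompt: instruction, answered (bug, fix) demonstrations,
        # retrieval-augmented hints, and finally the code to be repaired.
        parts = [instruction]
        for bug, fix in demonstrations:  # demonstration learning
            parts.append("Bug:\n" + bug + "\nFix:\n" + fix)
        parts.extend("Similar fix:\n" + h for h in retrieved_hints)  # retrieval augmentation
        parts.append("Bug:\n" + buggy_snippet + "\nFix:")
        return "\n\n".join(parts)

    prompt = build_repair_prompt(
        "if (x = null) { ... }",
        "Fix the following NULL_DEREFERENCE bug.",  # instruction learning
        [("obj.run()", "if (obj != null) obj.run()")],
        ["guarded call added before dereference"],
    )
    print(prompt)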
Claim 7

Jin also teaches prior to prompting the LLM, adapting the LLM by training the LLM with content from an enterprise which produces the source data (Jin; p. 1: left column: last full paragraph; To train and evaluate our approach, we curated InferredBugs, a novel, metadata-rich dataset of bugs extracted by executing the Infer static analyzer on the change histories of thousands of Java and C# repositories. Our evaluation demonstrates that InferFix outperforms strong LLM baselines …)

Claim 8

Jin also teaches the item is an alt tag (Jin; p. 2: right column: third full paragraph – p. 3: left column: first half paragraph; Figure 1 illustrates a typical software development workflow at Microsoft Developer Division in presence of InferFix. As a pull request proposing code changes (source data, data, alt tag) is created, continuous integration pipeline (CI) triggers unit testing, build, and Infer static analysis steps… Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM …), determining whether the criteria are met by prompting the LLM to analyze the alt tag (Jin; p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template: We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…), and, in response to results from the LLM indicating the alt tag is inadequate, determining the criteria are met (Jin; p. 5: left column: third full paragraph; p. 3: left column: last half paragraph; p. 3: right column: third full paragraph – last half paragraph; the same passages quoted for the "checking criteria" limitation of claim 1 above: Infer detects, localizes, and classifies bugs; computes program specifications to detect errors; and the infer reportdiff output sorts bugs (error, item) into introduced, fixed, and preexisting categories (criteria).)

Claim 9

Jin also teaches in response to determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises the alt tag, source code around the alt tag and a request to improve the alt tag (Jin; p. 4: right column: last full paragraph; Demonstration learning is a prompt augmentation technique in which a few answered prompts are prepended to the context with the purpose of demonstrating how a language model should approach a downstream task. For program repair, we introduce a prefix constructed of two answered prompts, followed by the actual buggy code snippet [X], as shown in Figure 3…; p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template: We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…) [Prompt-template images from Jin omitted.]; and updating the source code with the release note (Jin; p. 8: right column: second full paragraph – p. 9: left column: first half paragraph; … If bugs are detected, the InferFix patch generation module is invoked to propose a fix… The InferFix module proposes a (configurable) set of candidate patches… The validated fix is then provided to the developer within the feature branch of the developer's Pull Request … The developer has the option to accept (update) or decline the recommended fix.)

Claim 11

Jin also teaches in response to the results from the LLM indicating the alt tag is adequate, determining the criteria are not met and finding a next item in the source data (Jin; p. 5: left column: third full paragraph; p. 3: left column: last half paragraph; p. 3: right column: third full paragraph – last half paragraph; the same passages quoted for the "checking criteria" limitation of claim 1 above.)

Claim 12

Jin also teaches the item is the error message (Jin; p. 5: left column: third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code), and wherein the method comprises determining whether the criteria are met by prompting the LLM to analyze the error message (Jin; p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template: We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…), and, in response to results from the LLM indicating the error message is inadequate, determining the criteria are met (Jin; p. 5: left column: third full paragraph; p. 3: left column: last half paragraph; p. 3: right column: third full paragraph – last half paragraph; the same passages quoted for the "checking criteria" limitation of claim 1 above.)

Claim 13

Jin also teaches in response to the determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises the error message, source code around the error message and a request to improve the error message (Jin; p. 4: right column: last full paragraph; and p. 5: left column: first full paragraph; the same demonstration-learning and instruction-learning passages quoted for claim 9 above.) [Prompt-template images from Jin omitted.]

Claim 14

Jin also teaches only storing the release note or only updating the source data with the release note, in response to input from an operator (Jin; p. 8: right column: second full paragraph – p. 9: left column: first half paragraph; … If bugs are detected, the InferFix patch generation module is invoked to propose a fix… The InferFix module proposes a (configurable) set of candidate patches… The validated fix is then provided to the developer within the feature branch of the developer's Pull Request … The developer has the option to accept (update) or decline the recommended fix (technical data).) The source data is updated only when the developer accepts the recommended fix.

Claim 15

Jin also teaches adding a command token in the LLM prompt where the command token triggers the LLM to call a search engine using a query comprising key words from the LLM prompt (Jin; p. 5: left column: third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs, (ii) retrieval module – a large index of historic bugs and fixes, equipped with a facility to efficiently search and retrieve "hints" – semantically-similar source code segments – given a query…)
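For illustration of claim 15's command-token limitation (a token in the prompt that triggers a search-engine call with keywords from the prompt), a minimal sketch follows, assuming an invented [SEARCH: …] token convention; this is not Jin's retrieval module API.

    import re

    SEARCH_TOKEN = re.compile(r"\[SEARCH:([^\]]+)\]")  # invented command-token syntax

    def run_with_tools(prompt, llm, search_engine):
        # If the prompt carries a command token, call the search engine with the
        # token's keywords and append the results before invoking the LLM.
        m = SEARCH_TOKEN.search(prompt)
        if m:
            results = search_engine(m.group(1).strip())
            prompt = SEARCH_TOKEN.sub("", prompt) + "\nSearch results: " + repr(results)
        return llm(prompt)

    # Stand-in components so the sketch runs end to end:
    answer = run_with_tools(
        "Summarize known fixes. [SEARCH: null dereference patch]",
        llm=lambda p: "LLM saw: " + repr(p),
        search_engine=lambda q: ["hit for " + q],
    )
    print(answer)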
5: left column: third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs, (ii) retrieval module – a large index of historic bugs and fixes, equipped with a facility to efficiently search and retrieve “hints” – semantically-similar source code segments – given a query…) Claim 16 This is an apparatus version of the method version in claim 1; therefore, it is rejected for the same reasons. Furthermore, Jin implicitly teaches an apparatus comprising a processor and a memory storing instructions (Jin’s method is to be performed by a computer. Therefore, the computer is implicitly disclosed, wherein the computer comprises a processor and memory storing instructions.) Claim 17 This limitation is already discussed in claim 3; therefore, it is rejected for the same reasons. Claim 18 This limitation is already discussed in claim 4; therefore, it is rejected for the same reasons. Claim 20 Jin teaches a computer-implemented method comprising: accessing source data comprising source code or data about the source code (Jin; p. 2: right column: third full paragraph – p. 3: left column: first half paragraph; Figure 1 illustrates a typical software development workflow at Microsoft Developer Division in presence of InferFix. As a pull request proposing code changes (source data, data, source code) is created, continuous integration pipeline (CI) triggers unit testing, build, and Infer static analysis steps… Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM … p. 5: left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data).); finding an item in the source data, where the item is a ticket thread in a bug ticketing system, or an error message in the source code, or alt tag in the source code (Jin; p. 2: right column: third full paragraph – p. 3: left column: first half paragraph; Figure 1 illustrates a typical software development workflow at Microsoft Developer Division in presence of InferFix. As a pull request proposing code changes (source data, data, alt tag) is created, continuous integration pipeline (CI) triggers unit testing, build, and Infer static analysis steps… Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM … p. 5: left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data, data, alt tag).), wherein: the ticket thread comprises a sequence of comments making up a conversation, the comments comprising pieces of text input by automated assistants, wherein the finding comprises extracting keywords from one or more of the sequence of comments (Jin; p. 8: right column: last half paragraph – p. 
9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …); and ; and checking criteria by determining quality of the item, or determining completeness of the item, or by comparing the item with a specification (Jin; left column: second full paragraph – third full paragraph; InferFix program repair framework is composed of three following key modules: (i) a static analysis tool that detects, localizes, and classifies bugs… Infer performs automated program analysis over this graph and produces compositional method summaries in order to determine whether there are defects (data, error) present in the source code (source data, data, alt tag); p. 3: left column: last half paragraph; Infer is an open-source static analysis tool originating from program analysis research on separation logic … It computes program specifications to detect errors related to memory safety, concurrency, security, and more (compare with specification) … p. 3: right column: right column: third full paragraph – last half paragraph; … we provide details on how we executed Infer over the change histories of software projects in order to detect introduced and fixed bugs. Given as input the current commit curr and the previous commit prev, we begin by computing a git diff to identify the files involved in the change performed by the developer in the commit curr. Next, we analyze the status of the files at commit prev… we build the system using the project-specific build tool. During the build process, the infer capture command intercepts calls to the compiler to read source files and translates them into an intermediate representation which will allow Infer to analyze these files. Next, we invoke the infer analyze command specifying the files to be analyzed (i.e., the files diff involved in the commit). This analysis produces a report reportPrev detailing (completeness, quality) the bugs (error, item) identified within the specified files. Subsequently, we move to the current commit curr and perform the same steps described for the commit prev, that is: checking out the commit, building system while capturing the source files, and analyzing the diff files in order to detect bugs. Finally, with the infer reportdiff command, we compute the differences between the two infer reports reportPrev and reportCurr. The output bugs (error, item) contain three categories (criteria) of issues: • introduced: issues found in curr but not in prev; • fixed: issues (ticket thread) found in prev but not in curr; • preexisting: issues (ticket thread) found in both prev and curr …); in response to the criteria being met for generating technical data related to the item, forming a prompt from the item and a command token (Jin; p. 
In response to the criteria being met for generating technical data related to the item, forming a prompt from the item and a command token (Jin; p. 5: left column: third full paragraph; InferFix program repair framework is composed of the following three key modules: (i) a static analysis tool that detects, localizes, and classifies bugs … (iii) generator module – a large language model finetuned on a dataset of prompts enriched with the information provided by the static analyzer and the retriever to generate fixes …) a release note pertaining to the item (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …); prompting the LLM with a prompt (Jin; p. 2: right column: last full paragraph – p. 3: left column: first half paragraph; Our approach combines a static analyzer to detect, localize, and classify bugs with a powerful LLM (finetuned 12 billion parameter Codex model) to generate fixes. … The context preprocessing module utilizes the information provided by the analyzer to extract the buggy method, and retains surrounding context most relevant to fixing the bug … The retrieval augmentation engine then searches for semantically similar buggy code snippets in the historic database, prepending similar bug-fixes to the prompt. Finally, the augmented prompt is sent to the finetuned Codex model for inference …; p. 5: left column: third full paragraph; InferFix program repair framework is composed of the following three key modules: (i) a static analysis tool that detects, localizes, and classifies bugs … (iii) generator module – a large language model finetuned on a dataset of prompts enriched with the information provided by the static analyzer and the retriever to generate fixes (technical data). p. 4: right column: last full paragraph; Demonstration learning is a prompt augmentation technique in which a few answered prompts are prepended to the context with the purpose of demonstrating how a language model should approach a downstream task. For program repair, we introduce a prefix constructed of two answered prompts, followed by the actual buggy code snippet [X], as shown in Figure 3…; (see picture below); p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template. We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…; (see picture below));

[Images: Jin’s demonstration-learning and instruction-learning prompt templates (media_image2.png, media_image3.png)]
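The two augmentations Jin describes — demonstration learning (prepending answered example pairs) and instruction learning (prepending a natural-language task description) — can be illustrated with a short sketch; the tag markers and wording below are placeholders, not the actual templates from Jin's Figure 3:

```python
# Sketch of demonstration + instruction prompt augmentation as described
# above. The <bug>/<fix> markers and wording are illustrative placeholders.
def build_repair_prompt(bug_type, buggy_code, demonstrations):
    parts = [f"Fix the following {bug_type} bug."]   # instruction learning
    for broken, fixed in demonstrations:             # demonstration learning
        parts += [f"<bug> {broken}", f"<fix> {fixed}"]
    parts += [f"<bug> {buggy_code}", "<fix>"]        # the actual snippet [X]
    return "\n".join(parts)

demos = [("if (x.size() > 0)", "if (x != null && x.size() > 0)")]
print(build_repair_prompt("NULL_DEREFERENCE", "return user.getName();", demos))
```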
Receiving a release note from the LLM in response to the prompt (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …); and storing the release note in association with the source data or updating the source data using the release note (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …), wherein the release note received from the LLM is programmatically inserted into a source code repository in a database configured to store a plurality of release notes (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches … The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix) …; p. 2: left column: last full paragraph; … (iii) we introduce a dedicated prompt augmentation technique for program repair task, which leverages dense retrieval from an external database of historic bugs and fixes, bug type annotations, and syntactic hierarchies across the entire source code file affected by a bug …; the bugs, fixes, bug-type annotations, and hierarchies == parts of the release note, and they are stored in a database.)

But Jin does not explicitly teach the prompt instructing a large language model (LLM) to generate a release note pertaining to the item. However, Leinonen teaches the prompt instructing a large language model (LLM) to generate a release note pertaining to the item (Leinonen; p. 564: left column: last half paragraph; Large Language Models (LLMs), particularly pre-trained transformer models … One such model is GPT-3 … GPT-3 also powers several other tools such as Codex …; p. 564: right column: last half paragraph – p. 565: left column: second full paragraph; Programming error messages were generated using the most recent and performant Codex model … We evaluated a number of prompts to identify a version that seemed to provide useful explanations (release note) …; the prompt instructs the LLM to generate explanations; p. 566: left column: section “5.1 Are Error Message Explanations Useful?”; Our results suggest that using large language models to explain programming error messages (PEMs) is feasible and shows promise… See picture below. Also see code examples 2 & 3 on p. 567.)

[Image: Leinonen prompt example (media_image1.png)]

Jin and Leinonen are in the same analogous art as they are in the same field of endeavor, utilizing an LLM for processing data.
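Leinonen's setup, as quoted, amounts to a single instruction-style prompt pairing the source code with its error message and asking the model for an explanation. A minimal sketch (the wording is an assumption; Leinonen's evaluated prompt variants appear on pp. 564-565):

```python
# Sketch of an error-message explanation prompt in the spirit of the
# quoted Leinonen passages. The exact wording here is illustrative.
def explanation_prompt(code, error_message):
    return (
        "Explain the following programming error message in plain "
        "language and suggest how to fix it.\n\n"
        f"Code:\n{code}\n\n"
        f"Error message:\n{error_message}\n\n"
        "Explanation:"
    )

print(explanation_prompt("print(1/0)", "ZeroDivisionError: division by zero"))
```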
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Leinonen’s teachings into Jin’s invention to allow Jin to use an LLM to enhance programming error messages with explanations of the errors and suggestions on how to fix them, as suggested by Leinonen (Abstract).

But Jin and Leinonen do not explicitly teach the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed in place of the image, wherein the finding comprises finding alt tags in the source data. However, Ishikawa teaches the alt tag is associated with an image such that when the image cannot be displayed the alt tag is displayed in place of the image, wherein the finding comprises finding alt tags in the source data (Ishikawa; [0040] … FIG. 4 shows an example of description data (source data) written in HTML. The alternative text determining unit 41 detects from this description data an <img> tag, which is an image display element for displaying an image, and determines whether or not alt attribute text is included in the <img> tag. The alt attribute text is the character portion of “ALT attribute text” described by “alt=“ALT attribute text”” (alt tag) shown in FIG. 4, and is text that takes the place of an image. [0045] … In FIG. 5A, text 51, “Text Near Image,” is displayed adjacent to the left side of an image 50 shown as a rectangle, and explains the contents of the image 50. The alt attribute text 52 is displayed in place of the image 50 when loading of the image 50 fails, when the image 50 cannot be displayed, when the image 50 is not displayed, etc., and the display at that time will be as shown in FIG. 5B.)

Jin, Leinonen, and Ishikawa are in the same analogous art as they are in the same field of endeavor, processing source data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Ishikawa’s teachings into the Jin/Leinonen invention to allow Jin to process source data comprising an image and an alt tag, where the alt tag is displayed when the image in the source data cannot be displayed, as suggested by Ishikawa ([0040] and [0045]), to arrive at the claimed invention.
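Ishikawa's alternative-text check, as quoted, reduces to scanning the HTML description data for <img> elements and testing whether each carries an alt attribute. A minimal sketch using Python's standard html.parser:

```python
# Sketch of finding alt tags in HTML source data, analogous to the
# quoted alternative text determining unit: collect each <img> tag's
# src and alt attribute (the alt text is shown in place of an image
# that fails to display).
from html.parser import HTMLParser

class AltTagFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.results = []  # (src, alt) pairs; alt is None when missing

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            self.results.append((a.get("src", ""), a.get("alt")))

finder = AltTagFinder()
finder.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(finder.results)  # [('logo.png', 'Company logo'), ('chart.png', None)]
```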
Claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jin, Leinonen, and Ishikawa as applied to claims 1 and 16 above, and further in view of Smith et al. (Pub. No. US 2024/0273291 A1; hereinafter Smith).

Claim 5
Jin, Leinonen, and Ishikawa do not explicitly teach prior to prompting the LLM with the item, prompting the LLM with a technical writer role and writing style. However, Smith teaches prior to prompting the LLM with the item, prompting the LLM with a technical writer role and writing style (Smith; [0294] FIG. 18 is a flow diagram of an example method 1800 for prompt and content generation… [0296] At operation 1802, the processing device creates a first set of title prompts by applying a first set of title prompt templates to a seed…; [0301] At operation 1804, the processing device, in response to input of the first set of title prompts to a first generative language model, outputs, by the first generative language model, based on the first set of title prompts, a first set of document titles… [0112] To produce generated prompt 304, prompt generation subsystem 302 applies a prompt template to the seed. A prompt template includes a format (style) and/or specification (style) for arranging data and/or instructions, including the seed, for input to a generative language model so that the generative language model can read and process the inputs and generate corresponding output… [0163] The following is an example of how a prompt can be refined based on pre-publication feedback and/or post-publication feedback. Suppose a prompt template includes the following: “write an article about [title] in the style of the Harvard Business Review,” …); “Harvard Business Review” == role.

Jin, Leinonen, Ishikawa, and Smith are in the same analogous art as they are in the same field of endeavor, creating prompts for a language model. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Smith’s teachings into the Jin/Leinonen/Ishikawa invention to generate and refine prompts by formatting the prompt to improve the likelihood that the desired output is produced, as suggested by Smith ([0039]).

Claim 19
This limitation is already discussed in claim 5; therefore, it is rejected for the same reasons.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Jin, Leinonen, and Ishikawa as applied to claim 1 above, and further in view of Kumar et al. (Patent No. US 11,855,860 B1; hereinafter Kumar).

Claim 6
Jin, Leinonen, and Ishikawa do not explicitly teach prior to prompting the LLM with the item, removing personal data from the item. However, Kumar teaches prior to prompting the LLM with the item, removing personal data from the item (Kumar; Fig. 1; col. 7:47-56; The ticket processor 121 may also be configured to remove any personal information of the user 105, such as name, email address, or phone number… Thus, the ticket processor 121 may perform processing as simple as word and character counting, or as complex as having a separately trained natural language processing (NLP) and/or natural language understanding (NLU) model used to recognize and characterize included ticket content. Col. 9:11-21; Thus, through the operations of the training manager 102 as described above, and provided in more detail, below, raw incident ticket data in the ticket data repository 109 may be transformed into high-quality training data in the processed training data 124. Consequently, the training engine 126 may be enabled to quickly and efficiently generate one or more domain-specific LLMs…)

Jin, Leinonen, Ishikawa, and Kumar are in the same analogous art as they are in the same field of endeavor, utilizing an LLM for processing data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Kumar’s teachings into the Jin/Leinonen/Ishikawa invention to remove a user’s personal data before inputting that data into the LLM, as suggested by Kumar (col. 7:47-56 and col. 9:11-21).
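Kumar's pre-processing, as quoted, strips personal information from ticket content before it is used with an LLM. A minimal regex sketch (the patterns are illustrative; Kumar's ticket processor 121 may instead use trained NLP/NLU models):

```python
# Sketch of removing obvious personal data (emails, phone numbers) from
# an item before prompting an LLM. Patterns are illustrative only and
# would miss many real-world identifiers.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def scrub_personal_data(text):
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

ticket = "User jane.doe@example.com (555-123-4567) reports a crash on save."
print(scrub_personal_data(ticket))
# -> User [EMAIL] ([PHONE]) reports a crash on save.
```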
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Jin, Leinonen, and Ishikawa as applied to claim 1 above, and further in view of Kharbanda et al. (Pub. No. US 2024/0378236 A1; hereinafter Kharbanda).

Claim 10
Jin teaches in response to determining the criteria are met, prompting the LLM with the prompt, where the prompt comprises the alt tag (Jin; p. 4: right column: last full paragraph; Demonstration learning is a prompt augmentation technique in which a few answered prompts are prepended to the context with the purpose of demonstrating how a language model should approach a downstream task. For program repair, we introduce a prefix constructed of two answered prompts, followed by the actual buggy code snippet [X], as shown in Figure 3…; (see picture below); p. 5: left column: first full paragraph; Instruction learning is a prompt augmentation technique that introduces a natural language description of the task. To approach program repair, we prepare prompts following a template. We utilize OpenAI GPT-3 Davinci model, a 175 billion parameter language model and a close sibling of ChatGPT, to complete the prompts…; (see picture below));

[Images: Jin’s demonstration-learning and instruction-learning prompt templates (media_image2.png, media_image3.png)]

and modifying the source code by adding the technical data to the alt tag (Jin; p. 8: right column: last half paragraph – p. 9: left column: first half paragraph; The InferFix module proposes a (configurable) set of candidate patches. Each candidate patch is packaged as a separate Pull Request… The validated fix is then provided to the developer within the feature branch of the developer’s Pull Request… We implemented a GitHub action which receives a validated patch from InferFix and surfaces it to the developer in form of a GitHub comment (release note, technical data) in the PR. The comment provides detailed information about the bug (i.e., extracted by Infer), and the resolution (i.e., served by InferFix). The developer has the option to accept or decline the recommended fix.)

Jin, Leinonen, and Ishikawa do not explicitly teach the prompt comprises an image associated with the alt tag. However, Kharbanda teaches the prompt comprises an image associated with the alt tag (Kharbanda; [0104] FIG. 8 depicts a flow chart diagram of an example method 800 to provide visual search information derived from documents that include images retrieved based on a visual similarity with a query image… [0107] Prior to processing the query image, the operations comprise obtaining the query image from the user computing device. For example, a user can utilize the user computing device to capture an image that depicts an unfamiliar object. To learn more about the object, the user can use a visual search service by providing the image and an associated prompt (e.g., “what is this object”, etc.) to the computing system. Alternatively, in some implementations, the computing system can receive the image and an associated prompt from an automated service or software program …; See Fig. 4 for a visual search example. [0108] At 804, the computing system can identify a plurality of source documents. Each of the plurality of source documents can include a result image of the plurality of result images and textual content associated with the result image…)

Jin, Leinonen, Ishikawa, and Kharbanda are in the same analogous art as they are in the same field of endeavor, utilizing an LLM for processing data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Kharbanda’s teachings into the Jin/Leinonen/Ishikawa invention to include in the prompt an image associated with the alt tag, as suggested by Kharbanda ([0104] and [0107]-[0108]).
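Kharbanda's visual-search flow, as quoted, supplies an image together with an associated text prompt. A minimal sketch of one way to package an alt tag with its image in a single multimodal prompt payload (the dict layout is a generic assumption, not any specific vendor's LLM API):

```python
# Sketch of pairing an image with its alt tag in one multimodal prompt
# payload. The payload shape is a generic assumption, not a real API.
import base64

def image_prompt(image_bytes, alt_tag):
    return {
        "role": "user",
        "parts": [
            {"type": "text",
             "text": f"Describe this image; its alt tag is {alt_tag!r}."},
            {"type": "image",
             "data": base64.b64encode(image_bytes).decode("ascii")},
        ],
    }

payload = image_prompt(b"\x89PNG\r\n...", "Company logo")
print(payload["parts"][0]["text"])
```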
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CUONG V LUU whose telephone number is (571) 270-1733. The examiner can normally be reached 6:30 AM - 3:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough, can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CUONG V LUU/
Examiner, Art Unit 2192

/S. Sough/
SPE, Art Unit 2192

Prosecution Timeline

May 18, 2023
Application Filed
Apr 24, 2025
Non-Final Rejection — §101, §103, §112
Jun 11, 2025
Examiner Interview Summary
Jun 11, 2025
Applicant Interview (Telephonic)
Jul 29, 2025
Response Filed
Aug 11, 2025
Final Rejection — §101, §103, §112
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Oct 20, 2025
Response after Non-Final Action
Nov 17, 2025
Request for Continued Examination
Nov 24, 2025
Response after Non-Final Action
Dec 30, 2025
Non-Final Rejection — §101, §103, §112
Feb 20, 2026
Examiner Interview Summary
Feb 20, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602208
SYSTEM AND METHOD FOR SOURCE CODE GENERATION
2y 5m to grant Granted Apr 14, 2026
Patent 12585435
REAL-TIME VISUALIZATION OF COMPLEX SOFTWARE ARCHITECTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12572714
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Patent 12572447
SELECTIVE TRACING OF ENTITIES DURING CODE EXECUTION USING DYNAMIC TRACING CONFIGURATION
2y 5m to grant Granted Mar 10, 2026
Patent 12561396
PERSONALIZED PARTICULATE MATTER EXPOSURE MANAGEMENT USING FINE-GRAINED WEATHER MODELING AND OPTIMAL CONTROL THEORY
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+36.7%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
